Category Archives: microprocessors

HP’s Project Voyager Alights on Server Value

Hewlett-Packard earlier this week announced the HP ProLiant Generation 8 (Gen8) line of servers, based on the HP ProActive Insight architecture. The technology behind the architecture and the servers results from Project Voyager, a two-year initiative to redefine data-center economics by automating every aspect of the server lifecycle.

You can read the HP press release on the announcement, which covers all the basics, and you can also peruse coverage at a number of different media outlets online.

Voyager Follows Moonshot and Odyssey

The Project Voyager-related announcement follows Project Moonshot and Project Odyssey announcements last fall. Moonshot, you might recall, related to low-energy computing infrastructure for web-scale deployments, whereas Odyssey was all about unifying mission-critical computing — encompassing Unix and x86-based Windows and Linux servers — in one system.

A $300-million, two-year program that yielded more than 900 patent filings, Project Voyager bore fruit in the ProActive Insight architecture, which will span the entire HP Converged Infrastructure.

Intelligence and automation are the buzzwords behind HP’s latest server push. By enabling servers to “virtually take care of themselves,” HP is looking to reduce data-center complexity and cost, while increasing system uptime and boosting compute-related innovation. In support of the announcement, HP culled assorted facts and figures to assert that savings from the new servers can be significant across various enterprise deployment scenarios.

Taking Care of Business

In taking care of its customers, of course, HP is taking care of itself. HP says it tested the ProLiant servers in more than 100 real-world data centers, and that they include more than 150 client-inspired design innovations. That process was smart, and so were the results, which not only speak to real needs of customers, but also address areas that are beyond the purview of Intel (or AMD).

The HP launch eschewed emphasis on system boards, processors, and “feeds and speeds.” While some observers wondered whether that decision was taken because Intel had yet to launch its latest Xeon chips, the truth is that HP is wise to redirect the value focus away from chip performance and toward overall system and data-center capabilities.

Quest for Sustainable Value, Advantage 

Processor performance, including speeds and feeds, is the value-added purview of Intel, not of HP. All system vendors ultimately get the same chips from Intel (or AMD). They really can’t differentiate on the processor, because the processor isn’t theirs. Any gains they get from being first to market with a new Intel processor architecture will be evanescent.

They can, however, differentiate more sustainably around and above the processor, which is what HP has done here. Certainly, a lot of value-laden differentiation has been created, as the 900 patent filings attest. In areas such as management, conservation, and automation, HP has found opportunity not only to innovate, but also to make a compelling argument that its servers bring unique benefits into customer data centers.

With margin pressure unlikely to abate in server hardware, HP needed to make the sort of commitment and substantial investment that Project Voyager represented.

Questions About Competition, Patents

From a competitive standpoint, however, two questions arise. First, how easy (or hard) will it be for HP’s system rivals to counter what HP has done, thereby mitigating HP’s edge? Second, what sort of strategy, if any, does HP have in store for its Voyager-related patent portfolio? Come to think of it, those questions — and the answers to them — might be related.

As a final aside, the gentle folks at The Register inform us that HP’s new series of servers is called the ProLiant Gen8 rather than ProLiant G8 — the immediate predecessor is called ProLiant G7 (for Generation 7) — because the sound “gee-ate” is uncomfortably similar to a slang term for “penis” in Mandarin.

Presuming that to be true, one can understand why HP made the change.

HP Launches Its Moonshot Amid Changing Industry Dynamics

As I read about HP’s new Project Moonshot, which was covered extensively by the trade press, I wondered about the vendor’s strategic end game. Where was it going with this technology initiative, and did it have a realistic likelihood of meeting its objectives?

Those questions led me to consider how drastically the complexion of the IT industry has changed as cloud computing takes hold. Everything is in flux, advancing toward an ultimate galactic configuration that, in many respects, will be far different from what we’ve known previously.

What’s the Destination?

It seems to me that Project Moonshot, with its emphasis on a power-sipping and space-saving server architecture for web-scale processing, represents an effort by HP to re-establish a reputation for innovation and thought leadership in a burgeoning new market. But what, exactly, is the market HP has in mind?

Contrary to some of what I’ve seen written on the subject, HP doesn’t really have a serious chance of using this technology to wrest meaningful patronage from the behemoths of the cloud service-provider world. Google won’t be queuing up for these ARM-based, Calxeda-designed, HP-branded “micro servers.” Nor will Facebook or Microsoft. Amazon and Yahoo probably won’t be in the market for them, either.

The biggest of the big cloud providers are heading in a different direction, as evidenced by their aggressive patronage of open-source hardware initiatives that, when one really thinks about it, are designed to reduce their dependence on traditional vendors of server, storage, and networking hardware. They’re breaking that dependence — in some ways, they see it as taking back their data centers — for a variety of reasons, but their behavior is invariably motivated by their desire to significantly reduce operating expenditures on data-center infrastructure while freeing themselves to innovate on the fly.

When Customers Become Competitors

We’ve reached an inflection point where the largest cloud players — the Googles, the Facebooks, the Amazons, some of the major carriers who have given thought to such matters — have figured out that they can build their own hardware infrastructure, or order it off the shelf from ODMs, and get it to do everything they need it to do (they have relatively few revenue-generating applications to consider) at a lower operating cost than if they kept buying relatively feature-laden, more-expensive gear from hardware vendors.

As one might imagine, this represents a major business concern for the likes of HP, as well as for Cisco and others who’ve built a considerable business selling hardware at sustainable margins to customers in those markets. An added concern is that enterprise customers, starting with many SMBs, have begun transitioning their application workloads to cloud-service providers. The vendor problem, then, is not only that the cloud market is growing, but also that segments of the enterprise market are at risk.

Attempt to Reset Technology Agenda

The vendors recognize the problem, and they’re doing what they can to adapt to changing circumstances. If the biggest web-scale cloud providers are moving away from reliance on them, then hardware vendors must find buyers elsewhere. Scores of cloud service providers are not as big, or as specialized, or as resourceful as Google, Facebook, or Microsoft. Those companies might be considering the paths their bigger brethren have forged, with initiatives such as the Open Compute Project and OpenFlow (for computing and networking infrastructure, respectively), but perhaps they’re not entirely sold on those models or don’t think they’re quite right for their requirements just yet.

This represents an opportunity for vendors such as HP to reset the technology agenda, at least for these sorts of customers. Hence, Project Moonshot, which, while clearly ambitious, remains a work in progress consisting of the Redstone Server Development Platform, an HP Discovery Lab (the first one is in Houston), and HP Pathfinder, a program designed to create open standards and third-party technology support for the overall effort.

I’m not sure I understand who will buy the initial batch of HP’s “extreme low-power servers” based on Calxeda’s EnergyCore ARM server-on-a-chip processors. As I said before, and as an article at Ars Technica explains, those buyers are unlikely to be the masters of the cloud universe, for both technological and business reasons. For now, buyers might not even come from the constituency of smaller cloud providers.

Friends Become Foes, Foes Become Friends (Sort Of)

But HP is positioning itself for that market, and to be involved in the buying decisions relating to energy-efficient system architectures. Project Moonshot also will embrace energy-efficient microprocessors from Intel and AMD.

Incidentally, what’s most interesting here is not that HP adopted an ARM-based chip architecture before opting for an Intel server chipset — though that does warrant notice — but that Project Moonshot has been devised not so much to compete against other server vendors as to provide a rejoinder to the open-computing model advanced by Facebook and others.

Just a short time ago, industry dynamics were relatively easy to discern. Hardware and system vendors competed against one another for the patronage of service providers and enterprises. Now, as cloud computing grows and its business model gains ascendance, hardware vendors also find themselves competing against a new threat represented by mammoth cloud service providers and their cost-saving DIY ethos.

Intel-Microsoft Mobile Split All Business

In an announcement today, Google and Intel said they would work together to optimize future versions of the Android operating system for smartphones and other mobile devices powered by Intel chips.

It makes good business sense.

Pursuit of Mobile Growth

Much has been made of alleged strains in the relationship between the progenitors of Wintel — Microsoft’s Windows operating system and Intel’s microprocessors — but business partnerships are not affairs of the heart; they’re always pragmatic and results-oriented. In this case, each company is seeking growth and pursuing its respective interests.

I don’t believe there’s any malice between Intel and Microsoft. The two companies will combine on the desktop again in early 2012, when Microsoft’s Windows 8 reaches market on PCs powered by Intel’s chips as well as on systems running the ARM architecture.

Put simply, Intel must pursue growth in mobile markets and data centers. Microsoft must similarly find partners that advance its interests.  Where their interests converge, they’ll work together; where their interests diverge, they’ll go in other directions.

Just Business

In PCs, the Wintel tandem was and remains a powerful industry standard. In mobile devices, Intel is well behind ARM in processors, while Microsoft is well behind Google and Apple in mobile operating systems. It makes sense that Intel would want to align with a mobile industry leader in Google, and that Microsoft would want to do likewise with ARM. A combination of Microsoft and Intel in mobile computing would amount to two also-rans combining to form . . . well, two also-rans in mobile computing.

So, with Intel and Microsoft, as with all alliances in the technology industry, it’s always helpful to remember the words of Don Lucchesi in The Godfather: Part III: “It’s not personal, it’s just business.”

Attention Shifts to Cavium After Broadcom’s Announced Buy of NetLogic

As most of you will know by now, Broadcom announced the acquisition of NetLogic Microsystems earlier this morning. The deal, expected to close in the first half of 2012, involves Broadcom paying out $3.7 billion in cash, or about $50 per NetLogic (NETL) share. For NetLogic shareholders, that’s a 57-percent premium on the company’s closing share price on Friday, September 9.
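As a quick sanity check (the arithmetic below is mine, derived from the announced figures, not numbers Broadcom disclosed), the stated terms imply the following:

$50 ÷ 1.57 ≈ $31.85, the implied NETL closing price on Friday, September 9
$3.7 billion ÷ $50 per share ≈ 74 million, the implied number of NETL shares covered by the deal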

Sharp Premium

The sharp premium suggests a couple of possibilities. One is that Broadcom had competition for NetLogic. Given that Frank Quattrone’s investment bank, Qatalyst Partners, served as an adviser to NetLogic, it’s certainly possible that a lively market existed for the seller. Another possibility is that Broadcom wanted to make a preemptive strike, issuing a bid that it knew would pass muster with NetLogic’s board and shareholders, while also precluding the emergence of a competitive bid.

Either way, both companies’ boards have approved the deal, which now awaits regulatory clearance and an approbatory nod from NetLogic’s shareholders.

In a press release announcing the acquisition, Broadcom provided an official rationale for the move:

Deal Rationale

“The acquisition meaningfully extends Broadcom’s infrastructure portfolio with a number of critical new product lines and technologies, including knowledge-based processors, multi-core embedded processors, and digital front-end processors, each of which offers industry-leading performance and capabilities. The combination enables Broadcom to deliver best-in-class, seamlessly-integrated network infrastructure platforms to its customers, reducing both their time-to-market and their development costs.”

Said Scott McGregor, Broadcom’s president and CEO:

“This transaction delivers on all fronts for Broadcom’s shareholders — strategic fit, leading-edge technology and significant financial upside. With NetLogic Microsystems, Broadcom is acquiring a leading multi-core embedded processor solution, market leading knowledge-based processors, and unique digital front-end technology for wireless base stations that are key enablers for the next generation infrastructure build-out. Broadcom is now better positioned to meet growing customer demand for integrated, end-to-end communications and processing platforms for network infrastructure.”

“Today’s transaction is consistent with Broadcom’s strategic portfolio review process and with our focus on value creation through disciplined capital allocation while delivering best-in-class platforms for customers in the fastest growing segments of the communications industry.”

Sensible Move for Broadcom

Indeed, the transaction makes a lot of sense for Broadcom. Even though obtaining NetLogic’s technology for wireless base stations undoubtedly was a key business driver behind the deal, NetLogic addresses other markets that will be of value to Broadcom. Some of NetLogic’s latest commercial offerings are applicable to data-plane processing in large routers, security appliances, network-attached storage and storage-area networking, next-generation cellular networks, and other communications equipment. The deal should help Broadcom bolster its presence with existing customers and perhaps drive into some new accounts.

NetLogic’s primary competitors are Cavium Networks (CAVM) and Freescale Semiconductor (FSL). Considering Broadcom’s strategic requirements and the capabilities of the prospective acquisition candidates, NetLogic seems to offer the greatest upside, the lowest risk profile, and the fewest product overlaps.

Now the market’s attention will turn to Cavium, which was valued at $1.51 billion as of last Friday, before today’s transaction was announced, but whose shares are up more than seven percent in early trade this morning.

Why RIM Takeover Palaver Is Premature

Whether it is experiencing good times or bad times, Research in Motion (RIM) always seems to be perceived as an acquisition target.

When its fortunes were bright, RIM was rumored to be on the acquisitive radar of a number of vendors, including Nokia, Cisco, Microsoft, and Dell. Notwithstanding that some of those vendors also have seen their stars dim, RIM faces a particularly daunting set of challenges.

Difficult Circumstances

Its difficult circumstances are reflected in its current market capitalization. Prior to trading today, RIM had a market capitalization of $11.87 billion; at the end of August last year, it was valued at $23.27 billion. While some analysts argue that RIM’s stock has been oversold and that the company now is undervalued, others contend that RIM’s valuation might have further to fall. In the long run, unless it can arrest its relative decline in smartphones and mobile computing, RIM appears destined for continued hardship.

Certainly, at least through the end of this year — and until we see whether its QNX-based smartphones represent compelling alternatives to Apple’s next crop of iPhones and the succeeding wave of Android-based devices from Google licensees — RIM does not seem to have the wherewithal to reverse its market slide.

All of which brings us to the current rumors about RIM and potential suitors.

Dell’s Priorities Elsewhere

Dell has been mentioned, yet again, but Dell is preoccupied with other business. In an era of IT consumerization, in which consumers increasingly are determining which devices they’ll use professionally and personally, Dell sees neither itself nor RIM as having the requisite consumer cachet to win hearts and minds, especially when arrayed against some well-entrenched industry incumbents. Besides, as noted above, Dell has other priorities, most of which are in the data center, which Dell sees not only as an enterprise play but also — as cloud computing gains traction — as a destination for the applications and services of many of its current SMB customers.

In my view, Dell doesn’t feel that it needs to own a mobile operating system. On the mobile front, it will follow the zeitgeist of IT consumerization and support the operating systems and device types that its customers want. It will sell Android or Windows Phone devices to the extent that its customers want them (and want to buy them from Dell), but I also expect the company to provide heterogeneous mobile-management solutions.

Google Theory

Google also has been rumored to be a potential acquirer of RIM. Notable on this front has been former Needham & Company and ThinkEquity analyst Anton Wahlman, who wrote extensively on why he sees Google as a RIM suitor. His argument essentially comes down to three drivers: platform convergence, with Google’s Android 4.0 and RIM’s QNX both running on the same Texas Instruments OMAP 4400 series platform; Google’s need for better security to facilitate its success in mobile-retail applications featuring Near-Field Communications (NFC); and Google’s increasing need to stock up on mobile patents and intellectual property as it comes under mounting litigious attack.

They are interesting data points, but they don’t add up to a Google acquisition of RIM.

Convergence of hardware platforms doesn’t lead inexorably to Google wanting to buy RIM. It’s a big leap of logic — and a significant leap of faith for stock speculators — to suppose that Google would see value in taking out RIM just because they’re both running the same mobile chipset. On security, meanwhile, Google could address any real or perceived NFC issues without having to complete a relatively costly and complex acquisition of a mobile-OS competitor. Finally, again, Google could address its mobile-IP deficit organically, inorganically, and legally in ways that would be neither as complicated nor as costly as having to buy RIM, a deal that would almost certainly draw antitrust scrutiny from the Department of Justice (DoJ), the Federal Trade Commission (FTC), and probably the European Union (EU).

Google doesn’t need those sorts of distractions, not when it’s trying to keep a stable of handset licensees happy while also attempting to portray itself as the well-intentioned victim in the mobile-IP wars.

Microsoft’s Wait

Finally, back again as a rumored acquirer of RIM, we find Microsoft. At one time, a deal between the companies might have made sense, and it might make sense again. Now, though, the timing is inauspicious.

Microsoft has invested significant resources in a relationship with Nokia, and it will wait to see whether that bet pays off before it resorts to a Plan B. Microsoft has done the math, and it figures as long as Nokia’s Symbian installed base doesn’t hemorrhage extravagantly, it should be well placed to finally have a competitive entry in the mobile-OS derby with Windows Phone. Now, though, as Nokia comes under attack from above (Apple and high-end Android smartphones) and from below (inexpensive feature phones and lower-end Android smartphones), there’s some question as to whether Nokia can deliver the market pull that Microsoft anticipated. Nonetheless, Microsoft isn’t ready to hit the panic button.

Not Going Anywhere . . . This Year

Besides, as we’ve already deduced, RIM isn’t going anywhere. That’s not just because the other rumored players aren’t sufficiently interested in making the buy, but also because RIM’s executive team and its board of directors aren’t ready to sell.  Despite the pessimism of outside observers, RIM remains relatively sanguine about its prospects. The feeling on campus is that the QNX platform will get RIM back on track in 2012. Until that supposition is validated or refuted, RIM will not seek strategic alternatives.

This narrative will play out in due course. Much will depend on the market share and revenue Microsoft and Windows Phone derive from Nokia. If that relationship runs aground, Microsoft — which really feels it must succeed in mobile and cloud to ensure a bright future — will look for alternatives. At the same time, RIM will be determining whether QNX is the software tonic for its corporate regeneration. If the cure takes, RIM won’t be in need of external assistance. If QNX is no panacea, and RIM loses further ground to Apple and the Google Android camp, then it will be more receptive to outside interests.

Those answers will come not this year, but in 2012.

Pondering Intel’s Grand Design for McAfee

Befuddlement and buzz jointly greeted Intel’s announcement today regarding its pending acquisition of security-software vendor McAfee for $7.68 billion in cash.

Intel was not among the vendors I expected to take an acquisitive run at McAfee. It appears I was not alone in that line of thinking, because the widespread reaction to the news today involved equal measures of incredulity and confusion. That was partly because Intel was McAfee’s buyer, of course, but also because Intel had agreed to pay such a rich premium, $48 per McAfee share, 60 percent above McAfee’s closing price of $29.93 on Wednesday.
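For what it’s worth, the stated numbers hang together (the arithmetic below is mine, not Intel’s):

$48.00 ÷ $29.93 ≈ 1.60, a premium of roughly 60 percent over Wednesday’s close
$7.68 billion ÷ $48 per share = 160 million, the implied McAfee share count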

What Was Intel Thinking?

That Intel paid such a price tells us a couple of things. First, that Intel really felt it had to make this acquisition; and, second, that Intel probably had competition for the deal. Who that competition might have been is anybody’s guess, but check my earlier posts on potential McAfee acquirers for a list of suspects.

One question that came to many observers’ minds today was a simple one: What the hell was Intel thinking? Put another way, just what does Intel hope to derive from ownership of McAfee that it couldn’t have gotten from a less-expensive partnership with the company?

Many attempting to answer this question have pointed to smartphones and other mobile devices, such as slates and tablets, as the true motivations for Intel’s purchase of McAfee. There’s a certain logic to that line of thinking, to the idea that Intel would want to embed as much of McAfee’s security software as possible into chips that it heretofore has had a difficult time selling to mobile-device vendors, who instead have gravitated to designs from ARM.

Embedded M2M Applications

In the big picture, that’s part of Intel’s plan, no doubt. But I also think other motivations were at play.  An important market for Intel, for instance, is the machine-to-machine (M2M) space.

That M2M space is where nearly everything that can be assigned an IP address and managed or monitored remotely — from devices attached to the smart grid (smart meters, hardened switches in substations, power-distribution gear) to medical equipment, to building-control systems, to televisions and set-top boxes — is being connected to a communications network. As Intel’s customers sell systems into those markets, downstream buyers have expressed concerns about potential security vulnerabilities. Intel could help its embedded-systems customers ship more units and generate more revenue for Intel by assuaging the security fears of downstream buyers.

Still, that roadmap, if it exists, will take years to reach fruition. In the meantime, Intel will be left with slideware and a necessarily loose coupling of its microprocessors with McAfee’s security software. As Nathan Brookwood, principal analyst at Insight 64, suggested, Intel could start off by designing its hardware to work better with McAfee software, but it’s likely to take a few years, and new processor product cycles, for McAfee technology to get fully baked into Intel’s chips.

Will Take Time

So, for a while, Intel won’t be able to fully realize the value of McAfee as an asset. What’s more, there are parts of McAfee that probably don’t fit into Intel’s chip-centric view of the world. I’m not sure, for example, what this transaction portends for McAfee’s line of Internet-security products obtained through its acquisition of Secure Computing. Given that McAfee will find its new home inside Intel’s Software and Services division, as Richard Stiennon notes, the prospects for the Secure Computing product line aren’t bright.

I know Intel wouldn’t do this deal just because it flipped a coin or lost a bet, but Intel has a spotty track record, at best, when it comes to M&A activity. Media observers sometimes assume that technology executives are like masters of the universe, omniscient beings with superior intellects and brilliant strategic designs. That’s rarely true, though. Usually, they’re just better-paid, reasonably intelligent human beings, doing their best, with limited information and through hazy visibility, to make the right business decisions. They make mistakes, sometimes big ones.

M&A Road Full of Potholes

Don’t take it from me; consult the business-school professors. A Wharton course on mergers and acquisitions spotlights this quote from Robert W. Holthausen, Nomura Securities Company Professor, Professor of Accounting and Finance and Management:

“Various studies have shown that mergers have failure rates of more than 50 percent. One recent study found that 83 percent of all mergers fail to create value and half actually destroy value. This is an abysmal record. What is particularly amazing is that in polling the boards of the companies involved in those same mergers, over 80 percent of the board members thought their acquisitions had created value.”

I suppose what I’m trying to say is that just because Intel thinks it has a plan for McAfee, that doesn’t mean the plan is a good one or, even presuming it is a good plan, that it will be executed successfully. There are many potholes and unwanted detours along M&A road.

Component Shortages Affecting Vendors Worldwide

At the moment, component shortages seem to be pervasive in the technology industry. Vendors large and small, throughout most of the world, have been affected by them to greater or lesser degrees.

The problem appears likely to be with us for a while. To the best of my knowledge — and I will concede at the outset that my research hasn’t been definitive — vendors everywhere in the world are having difficulty sourcing adequate numbers of many types of components. The only exception is China, where vendors in telecommunications, cleantech, and other fields have not reported the same component-sourcing difficulties that have hobbled their counterparts in Europe, North America, and other parts of Asia.

That doesn’t necessarily mean that Chinese companies aren’t affected by component shortages. All it means is that they haven’t reported them, at least in the English-speaking media I’ve perused. Still, it’s a development that bears watching. In that China does not subscribe to the tenets of unfettered capitalism, it sometimes operates according to a unique set of rules.

Today’s component shortages span various semiconductor types, including but not limited to DSPs, FETs, diodes, and amplifiers. Vendors of solar inverters, particularly those based in Europe, also have been affected.

Meanwhile, Reuters reports that a shortage of basic electrical components could last into the second half of 2011, limiting the ability of telecommunications-equipment manufacturers to respond to improving market demand.

Reuters reports that memory chips and other fundamental components such as resistors and capacitors are in short supply after their makers slashed output, fired staff, put equipment purchases on hold, or went out of business during the recession.

The shortages already have been blamed for weaker-than-expected results last quarter at telecommunications-equipment vendors Alcatel-Lucent and Ericsson, which really don’t need the added grief.

Alcatel-Lucent blamed component shortages for a large loss that it posted in its first fiscal quarter. Alcatel-Lucent’s CEO Ben Verwaayen said the shortages involved “everyday” low-cost components. He explained that most components come from China, where the manufacturing industry hasn’t been revamped since major cuts that followed the severe global downturn.

We already know that the supply-chain issues that afflicted Cisco’s channel partners and customers were blamed partly on component shortages.

What’s more, Dell partly blamed shortages and higher costs of components, including memory, for its inability to maintain gross margins during its just-reported quarter.

And AU Optronics, Taiwan’s second-ranked LCD manufacturer and a supplier to Dell and Sony, reported that an LCD panel shortage is likely to last into the second half of this year.

By no means are those the only vendors affected. You need only read the recent 10-Qs or conference-call transcripts of companies involved in computer networking, telecommunications gear, personal computers, smartphones, displays, or cleantech hardware to understand that component shortages are nearly everywhere.

I just wonder — and I make no accusation in doing so — whether Chinese manufacturers are as affected by the shortages as are their competitors in other parts of the world.