HP’s Project Voyager Alights on Server Value

Hewlett-Packard earlier this week announced the HP ProLiant Generation 8 (Gen8) line of servers, based on the HP ProActive Insight architecture. The technology behind the architecture and the servers results from Project Voyager, a two-year initiative to redefine data-center economics by automating every aspect of the server lifecycle.

You can read the HP press release on the announcement, which covers all the basics, and you also can peruse coverage at a number of different media outposts online.

Voyager Follows Moonshot and Odyssey

The Project Voyager-related announcement follows Project Moonshot and Project Odyssey announcements last fall. Moonshot, you might recall, related to low-energy computing infrastructure for web-scale deployments, whereas Odyssey was all about unifying mission-critical computing — encompassing Unix and x86-based Windows and Linux servers — in one system.

A $300-million, two-year program that yielded more than 900 patents, Project Voyager’s fruits, as represented by the ProActive Insight architecture, will span the entire HP Converged Infrastructure.

Intelligence and automation are the buzzwords behind HP’s latest server push. By enabling servers to “virtually take care of themselves,” HP is looking to reduce data-center complexity and cost, while increasing system uptime and boosting compute-related innovation. In support of the announcement, HP culled assorted facts and figures to assert that savings from the new servers can be significant across various enterprise deployment scenarios.

Taking Care of Business

In taking care of its customers, of course, HP is taking care of itself. HP says it tested the ProLiant servers in more than 100 real-world data centers, and that they include more than 150 client-inspired design innovations. That process was smart, and so were the results, which not only speak to real needs of customers, but also address areas that are beyond the purview of Intel (or AMD).

The HP launch eschewed emphasis on system boards, processors, and “feeds and speeds.” While some observers wondered whether that decision was taken because Intel had yet to launch its latest Xeon chips, the truth is that HP is wise to redirect the value focus away from chip performance and toward overall system and data-center capabilities.

Quest for Sustainable Value, Advantage 

Processor performance, including speeds and feeds, is the value-added purview of Intel, not of HP. All system vendors ultimately get the same chips from Intel (or AMD). They really can’t differentiate on the processor, because the processor isn’t theirs. Any gains they get from being first to market with a new Intel processor architecture will be evanescent.

They can, however, differentiate more sustainably around and above the processor, which is what HP has done here. Certainly, a lot of value-laden differentiation has been created, as the 900 patent filings attest. In areas such as management, conservation, and automation, HP has found opportunity not only to innovate, but also to make a compelling argument that its servers bring unique benefits into customer data centers.

With margin pressure unlikely to abate in server hardware, HP needed to make the sort of commitment and substantial investment that Project Voyager represented.

Questions About Competition, Patents

From a competitive standpoint, however, two questions arise. First, how easy (or hard) will it be for HP’s system rivals to counter what HP has done, thereby mitigating HP’s edge? Second, what sort of strategy, if any, does HP have in store for its Voyager-related patent portfolio? Come to think of it, those questions — and the answers to them — might be related.

As a final aside, the gentle folks at The Register inform us that HP’s new series of servers is called the ProLiant Gen8 rather than ProLiant G8 — the immediate predecessors are called ProLiant G7 (for Generation 7) — because the sound “gee-ate” is uncomfortably similar to a slang term for “penis” in Mandarin.

Presuming that to be true, one can understand why HP made the change.

Rackspace’s Bridge Between Clouds

OpenStack has generated plenty of sound and fury during the past several months, and, with sincere apologies to William Shakespeare, there’s evidence to suggest the frenzied activity actually signifies something.

Precisely what it signifies, and how important OpenStack might become, is open to debate, of which there has been no shortage. OpenStack is generally depicted as an open-source cloud operating system, but that might be a generous interpretation. On the OpenStack website, the following definition is offered:

“OpenStack is a global collaboration of developers and cloud computing technologists producing the ubiquitous open source cloud computing platform for public and private clouds. The project aims to deliver solutions for all types of clouds by being simple to implement, massively scalable, and feature rich. The technology consists of a series of interrelated projects delivering various components for a cloud infrastructure solution.”

Just for fun and giggles (yes, that phrase has been modified so as not to offend reader sensibilities), let’s parse that passage, shall we? In the words of the OpenStackers themselves, their project is an open-source cloud-computing platform for public and private clouds, and it is reputedly simple to implement, massively scalable, and feature rich. Notably, it consists of a “series of interrelated projects delivering various components for a cloud infrastructure solution.”
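To make that component structure more concrete, here is a minimal sketch using the openstacksdk Python client; it assumes the library is installed and that a cloud entry named “mycloud” exists in a local clouds.yaml, and it simply walks the major services (compute, networking, images, and object storage) through a single connection.

```python
# Minimal sketch, assuming openstacksdk is installed and a cloud named
# "mycloud" is defined in clouds.yaml. Each loop touches a different
# OpenStack component to show that the "platform" is really a set of
# interrelated services reachable through one connection object.
import openstack

conn = openstack.connect(cloud="mycloud")

# Compute: list servers
for server in conn.compute.servers():
    print("server:", server.name, server.status)

# Networking: list networks
for network in conn.network.networks():
    print("network:", network.name)

# Image service: list available images
for image in conn.image.images():
    print("image:", image.name)

# Object storage: list containers
for container in conn.object_store.containers():
    print("container:", container.name)
```

Each of those proxies maps onto one of the interrelated projects the definition alludes to, which is a big part of why “simple to implement” depends on who is doing the implementing.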

Simple for Some

Given that description, especially the latter reference to interrelated projects spawning various components, one might wonder exactly how “simple” OpenStack is to implement and by whom. That’s a question others have raised, including David Linthicum in a recent piece at InfoWorld. In that article, Linthicum notes that undeniable vendor enthusiasm and burgeoning market momentum accrue to OpenStack — the community now has 138 member companies (and counting), including big-name players such as HP, Dell, Intel, Rackspace, Cisco, Citrix, Brocade, and others — but he also offers the following caveat:

“So should you consider OpenStack as your cloud computing solution? Not on its own. Like many open source projects, it takes a savvy software vendor, or perhaps cloud provider, to create a workable product based on OpenStack. The good news is that many providers are indeed using OpenStack as a foundation for their products, and most of those are working, or will work, just fine.”

Creating Value-Added Services

Meanwhile, taking issue with a recent InfoWorld commentary by Savio Rodrigues — who argued that OpenStack will falter while its open-source counterpart Eucalyptus will thrive — James Urquhart, formerly of Cisco and now VP of product strategy at enStratus, made this observation:

“OpenStack is not a cloud service, per se, but infrastructure automation tuned to cloud-model services, like IaaS, PaaS and SaaS. Install OpenStack, and you don’t get a system that can instantly bill customers, provide a service catalog, etc. That takes additional software.

What OpenStack represents is the commodity element of cloud services: the VM, object, server image and networking management components. Yeah, there is a dashboard to interact with those commodity elements, but it is not a value-add capability in-and-of itself.

What HP, Dell, Cisco, Citrix, Piston, Nebula and others are doing with OpenStack is creating value-add services on top (or within) the commodity automation. Some focus more on “being commodity”, others focus more on value-add, but they are all building on top of the core OpenStack projects.”
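Urquhart’s layering is easy to picture in code. The sketch below is purely illustrative: the UsageMeter class and its hourly rate are hypothetical, and the OpenStack calls again assume the openstacksdk client and a configured “mycloud” entry. The compute API plays the role of the commodity automation, and the metering wrapper is the kind of value-add service a vendor would build on top.

```python
# Illustrative only: a toy value-add metering layer wrapped around the
# commodity OpenStack compute API. UsageMeter and the hourly rate are
# hypothetical; the conn.compute calls are the commodity layer.
import time
import openstack


class UsageMeter:
    """Hypothetical value-add service: records server launches so a
    provider could later bill for the hours consumed."""

    def __init__(self, rate_per_hour=0.05):   # made-up price
        self.rate_per_hour = rate_per_hour
        self.records = []                      # (server_id, launch_time)

    def record_launch(self, server):
        self.records.append((server.id, time.time()))

    def estimated_charge(self, hours):
        return len(self.records) * hours * self.rate_per_hour


conn = openstack.connect(cloud="mycloud")      # commodity automation
meter = UsageMeter()                           # value-add layer

# Launch a server through the commodity API, then meter it.
server = conn.compute.create_server(
    name="demo-vm",
    image_id="IMAGE_UUID",                     # placeholder IDs
    flavor_id="FLAVOR_UUID",
    networks=[{"uuid": "NETWORK_UUID"}],
)
server = conn.compute.wait_for_server(server)
meter.record_launch(server)

print("Estimated charge for 24 hours:", meter.estimated_charge(24))
```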

New Revenue Stream for Rackspace

All of which brings us, in an admittedly roundabout fashion, to Rackspace’s recent announcement of its Rackspace Cloud Private Edition, a packaged version of OpenStack components that can be used by enterprises for private-cloud deployments. This move makes sense for Rackspace on a couple of levels.

First off, it opens up a new revenue stream for the company. While Rackspace won’t try to make money on the OpenStack software or the reference designs — featuring a strong initial emphasis on Dell servers and Cisco networking gear for now, though bare-bones OpenCompute servers are likely to be embraced before long —  it will provide value-add, revenue-generating managed services to customers of Rackspace Cloud Private Edition. These managed services will comprise installation of OpenStack updates, analysis of system issues, and assistance with specific questions relating to systems engineering. Some security-related services also will be offered. While the reference architecture and the software are available now, Rackspace’s managed services won’t be available until January.

Building a Bridge

The launch of Rackspace Cloud Private Edition is a diversification initiative for Rackspace, which hitherto has made its money by hosting and managing applications and computing services for others in its own data centers. The OpenStack bundle takes it into the realm of providing managed services in its customers’ data centers.

As mentioned above, this represents a new revenue stream for Rackspace, but it also provides a technological bridge that will allow customers who aren’t ready for multi-tenant public cloud services today to make an easy transition to Rackspace’s data centers at some future date. It’s a smart move, preventing prospective customers from moving to another platform for private cloud deployment, ensuring in the process that said customers don’t enter the orbit of another vendor’s long-term gravitational pull.

The business logic coheres. For each customer engagement, Rackspace gets a payoff today, and potentially a larger one at a later date.

Like OpenFlow, Open Compute Signals Shift in Industry Power

I’ve written quite a bit recently about OpenFlow and the Open Networking Foundation (ONF). For a change of pace, I will focus today on the Open Compute Project.

In many ways, even though OpenFlow deals with networking infrastructure and Open Compute deals with computing infrastructure, they are analogous movements, springing from the same fundamental set of industry dynamics.

Open Compute was introduced formally to the world in April. Its ostensible goal was “to develop servers and data centers following the model traditionally associated with open-source software projects.” That’s true as far as it goes, but it’s only part of the story. The stated goal actually is a means to an end, which is to devise an operational template that allows cloud behemoths such as Facebook to save lots of money on computing infrastructure. It’s all about commoditizing and optimizing the operational efficiency of the hardware encompassed within many of the largest cloud data centers that don’t belong to Google.

Speaking of Google, it is not involved with Open Compute. That’s primarily because Google had been taking a DIY approach to its data centers since long before Facebook began working on the blueprint for the Open Compute Project.

Google as DIY Trailblazer

For Google, its ability to develop and deliver its own data-center technologies — spanning computing, networking and storage infrastructure — became a source of competitive advantage. By using off-the-shelf hardware components, Google was able to provide itself with cost- and energy-efficient data-center infrastructure that did exactly what it needed to do — and no more. Moreover, Google no longer had to pay a premium to technology vendors that offered products that weren’t ideally suited to its requirements and that offered extraneous “higher-value” (pricier) features and functionality.

Observing how Google had used its scale and its ample resources to fashion its cost-saving infrastructure, Facebook  considered how it might follow suit. The goal at Facebook was to save money, of course, but also to mitigate or perhaps eliminate the infrastructure-based competitive advantage Google had developed. Facebook realized that it could never compete with Google at scale in the infrastructure cost-saving game, so it sought to enlist others in the cause.

And so the Open Compute Project was born. The aim is to have a community of shared interest deliver cost-saving open-hardware innovations that can help Facebook scale its infrastructure at an operational efficiency approximating Google’s. If others besides Facebook benefit, so be it. That’s not a concern.

Collateral Damage

As Facebook seeks to boost its advertising revenue, it is effectively competing with Google. The search giant still derives nearly 97 percent of its revenue from advertising, and its Google+ is intended to distract, if not derail, Facebook’s core business, just as Google Apps is meant to keep Microsoft focused on protecting one of its crown jewels rather than on allocating more corporate resources to search and search advertising.

There’s nothing particularly striking about that. Cloud service providers are expected to compete against each other by developing new revenue-generating services and by achieving new cost-saving operational efficiencies. In that context, the Open Compute Project can be seen, at least in one respect, as Facebook’s open-source bid to level the infrastructure playing field and undercut, as previously noted, what has been a Google competitive advantage.

But there’s another dynamic at play. As the leading cloud providers with their vast data centers increasingly seek to develop their own hardware infrastructure — or to create an open-source model that facilitates its delivery — we will witness some significant collateral damage. Those taking the hit, as is becoming apparent, will be the hardware systems vendors, including HP, IBM, Oracle (Sun), Dell, and even Cisco. That’s only on the computing side of the house, of course. In networking, as software-defined networking (SDN) and OpenFlow find ready embrace among the large cloud shops, Cisco and others will be subject to the loss of revenue and profit margin, though how much and how soon remain to be seen.

Who’s Steering the OCP Ship?

So, who, aside from Facebook, will set the strategic agenda of Open Compute? To answer that question, we need only consult the identities of those named to the Open Compute Project Foundation’s board of directors:

  • Chairman/President – Frank Frankovsky, Director, Technical Operations at Facebook
  • Jason Waxman, General Manager, High Density Computing, Data Center Group, Intel
  • Mark Roenigk, Chief Operating Officer, Rackspace Hosting
  • Andy Bechtolsheim, Industry Guru
  • Don Duet, Managing Director, Goldman Sachs

It’s no shocker that Facebook retains the chairman’s role. Facebook didn’t launch this initiative to have somebody else steer the ship.

Similarly, it’s not a surprise that Intel is involved. Intel benefits regardless of whether cloud shops build their own systems, buy them from HP or Dell, or even get them from a Taiwanese or Chinese ODM.

As for the Rackspace representation, that makes sense, too. Rackspace already has OpenStack, open-source software for private and public clouds, and the Open Compute approach provides a logical hardware complement to that effort.

After that, though, the board membership of the Open Compute Project Foundation gets rather interesting.

Examining Bechtolsheim’s Involvement

First, there’s the intriguing presence of Andy Bechtolsheim. Those who follow the networking industry will know that Andy Bechtolsheim is more than an “industry guru,” whatever that means. Among his many roles, Bechtolsheim serves as the chief development officer and co-founder of Arista Networks, a growing rival to Cisco in low-latency data-center switching, especially at cloud-scale web shops and financial-services companies. It bears repeating that Open Compute’s mandate does not extend to network infrastructure, which is the preserve of the analogous OpenFlow.

Bechtolsheim’s history is replete with successes, as a technologist and as an investor. He was one of the earliest investors in Google, which makes his involvement in Open Compute deliciously ironic.

More recently, he disclosed a seed-stage investment in Nebula, which, as Derrick Harris at GigaOM wrote this summer, has “developed a hardware appliance pre-loaded with customized OpenStack software and Arista networking tools, designed to manage racks of commodity servers as a private cloud.” The reference architectures for the commodity servers comprise Dell’s PowerEdge C Micro Servers and servers that adhere to Open Compute specifications.

We know, then, why Bechtolsheim is on the board. He’s a high-profile presence that I’m sure Open Compute was only too happy to welcome with open arms (pardon the pun), and he also has business interests that would benefit from a furtherance of Open Compute’s agenda. Not to put too fine a point on it, but there’s an Arista and a Nebula dimension to Bechtolsheim’s board role at the Open Compute Project Foundation.

OpenStack Angle for Rackspace, Dell

Interestingly, the board presence of both Bechtolsheim and Rackspace’s Mark Roenigk emphasizes OpenStack considerations, as does Dell’s involvement with Open Compute. Dell doesn’t have a board seat — at least not according to the Open Compute website — but it seems to think it can build a business for solutions based on Open Compute and OpenStack among second-tier purveyors of public-cloud services and among those pursuing large private or hybrid clouds. Both will become key strategic markets for Dell as its SMB installed base migrates applications and spending to the cloud.

Dell notably lost a chunk of server business when Facebook chose to go the DIY route, in conjunction with Taiwanese ODM Quanta Computer, for servers in its data center in Prineville, Oregon. Through its involvement in Open Compute, Dell might be trying to regain lost ground at Facebook, but I suspect that ship has sailed. Instead, Dell probably is attempting to ensure that it prevents or mitigates potential market erosion among smaller service providers and enterprise customers.

What Goldman Sachs Wants

The other intriguing presence on the Open Compute Project Foundation board is Don Duet from Goldman Sachs. Here’s what Duet had to say about his firm’s involvement with Open Compute:

“We build a lot of our own technology, but we are not at the hyperscale of Google or Facebook. We are a mid-scale company with a large global footprint. The work done by the OCP has the potential to lower the TCO [total cost of ownership] and we are extremely interested in that.”

Indeed, that perspective probably worries major server vendors more than anything else about Open Compute. Once Goldman Sachs goes this route, other financial-services firms will be inclined to follow, and nobody knows where the market attrition will end, presuming it ends at all.

Like Facebook, Goldman Sachs saw what Google was doing with its home-brewed, scale-out data-center infrastructure, and wondered how it might achieve similar business benefits. That has to be disconcerting news for major server vendors.

Welcome to the Future

The big takeaway for me, as I absorb these developments, is how the power axis of the industry is shifting. The big systems vendors used to set the agenda, promoting and pushing their products and influencing the influencers so that enterprise buyers kept vendor growth rates on the uptick. Now, though, a combination of factors — widespread data-center virtualization, the rise of cloud computing, a persistent and protracted global economic downturn (which has placed unprecedented emphasis on IT cost containment) — is reshaping the IT universe.

Welcome to the future. Some might like it more than others, but there’s no going back.

HP Launches Its Moonshot Amid Changing Industry Dynamics

As I read about HP’s new Project Moonshot, which was covered extensively by the trade press, I wondered about the vendor’s strategic end game. Where was it going with this technology initiative, and does it have a realistic likelihood of meeting its objectives?

Those questions led me to consider how drastically the complexion of the IT industry has changed as cloud computing takes hold. Everything is in flux, advancing toward an ultimate galactic configuration that, in many respects, will be far different from what we’ve known previously.

What’s the Destination?

It seems to me that Project Moonshot, with its emphasis on a power-sipping and space-saving server architecture for web-scale processing, represents an effort by HP to re-establish a reputation for innovation and thought leadership in a burgeoning new market. But what, exactly, is the market HP has in mind?

Contrary to some of what I’ve seen written on the subject, HP doesn’t really have a serious chance of using this technology to wrest meaningful patronage from the behemoths of the cloud service-provider world. Google won’t be queuing up for these ARM-based, Calxeda-designed, HP-branded “micro servers.” Nor will Facebook or Microsoft. Amazon or Yahoo probably won’t be in the market for them, either.

The biggest of the big cloud providers are heading in a different direction, as evidenced by their aggressive patronage of open-source hardware initiatives that, when one really thinks about it, are designed to reduce their dependence on traditional vendors of server, storage, and networking hardware. They’re breaking that dependence — in some ways, they see it as taking back their data centers — for a variety of reasons, but their behavior is invariably motivated by their desire to significantly reduce operating expenditures on data-center infrastructure while freeing themselves to innovate on the fly.

When Customers Become Competitors

We’ve reached an inflection point. The largest cloud players — the Googles, the Facebooks, the Amazons, some of the major carriers that have given thought to such matters — have figured out that they can build their own hardware infrastructure, or order it off the shelf from ODMs, and get it to do everything they need it to do (they have relatively few revenue-generating applications to consider) at a lower operating cost than if they kept buying relatively feature-laden, more-expensive gear from hardware vendors.

As one might imagine, this represents a major business concern for the likes of HP, as well as for Cisco and others who’ve built a considerable business selling hardware at sustainable margins to customers in those markets. An added concern is that enterprise customers, starting with many SMBs, have begun transitioning their application workloads to cloud-service providers. The vendor problem, then, is not only that the cloud market is growing, but also that segments of the enterprise market are at risk.

Attempt to Reset Technology Agenda

The vendors recognize the problem, and they’re doing what they can to adapt to changing circumstances. If the biggest web-scale cloud providers are moving away from reliance on them, then hardware vendors must find buyers elsewhere. Scores of cloud service providers are not as big, or as specialized, or as resourceful as Google, Facebook, or Microsoft. Those companies might be considering the paths their bigger brethren have forged, with initiatives such as the Open Compute Project and OpenFlow (for computing and networking infrastructure, respectively), but perhaps they’re not entirely sold on those models or don’t think they’re quite right  for their requirements just yet.

This represents an opportunity for vendors such as HP to reset the technology agenda, at least for these sorts of customers. Hence, Project Moonshot, which, while clearly ambitious, remains a work in progress consisting of the Redstone Server Development Platform, an HP Discovery Lab (the first one is in Houston), and HP Pathfinder, a program designed to create open standards and third-party technology support for the overall effort.

I’m not sure I understand who will buy the initial batch of HP’s “extreme low-power servers” based on Calxeda’s EnergyCore ARM server-on-a-chip processors. As I said before, and as an article at Ars Technica explains, those buyers are unlikely to be the masters of the cloud universe, for both technological and business reasons. For now, buyers might not even come from the constituency of smaller cloud providers.

Friends Become Foes, Foes Become Friends (Sort Of)

But HP is positioning itself for that market and for the buying decisions relating to energy-efficient system architectures. Its Project Moonshot also will embrace energy-efficient microprocessors from Intel and AMD.

Incidentally, what’s most interesting here is not that HP adopted an ARM-based chip architecture before opting for an Intel server chipset — though that does warrant notice — but that Project Moonshot has been devised not so much to compete against other server vendors as to provide a rejoinder to the open-computing model advanced by Facebook and others.

Just a short time ago, industry dynamics were relatively easy to discern. Hardware and system vendors competed against one another for the patronage of service providers and enterprises. Now, as cloud computing grows and its business model gains ascendance, hardware vendors also find themselves competing against a new threat represented by mammoth cloud service providers and their cost-saving DIY ethos.

OVA Members Hope to Close Ground

I discussed the fast-growing Open Virtualization Alliance (OVA) in a recent post about its primary objective, which is to commoditize VMware’s daunting market advantage. In catching up on my reading, I came across an excellent piece by InformationWeek’s Charles Babcock that puts the emergence of OVA into historical perspective.

As Babcock writes, the KVM-centric OVA might not have come into existence at all if an earlier alliance supporting another open-source hypervisor hadn’t foundered first. Quoting Babcock regarding OVA’s vanguard members:

Hewlett-Packard, IBM, Intel, AMD, Red Hat, SUSE, BMC, and CA Technologies are examples of the muscle supporting the alliance. As a matter of fact, the first five used to be big backers of the open source Xen hypervisor and Xen development project. Throw in the fact Novell was an early backer of Xen as the owner of SUSE, and you have six of the same suspects. What happened to support for Xen? For one, the company behind the project, XenSource, got acquired by Citrix. That took Xen out of the strictly open source camp and moved it several steps closer to the Microsoft camp, since Citrix and Microsoft have been close partners for over 20 years.

Xen is still open source code, but its backers found reasons (faster than you can say vMotion) to move on. The Open Virtualization Alliance still shares one thing in common with the Xen open source project. Both groups wish to slow VMware’s rapid advance.

Wary Eyes

Indeed, that is the goal. Most of the industry, with the notable exception of VMware’s parent EMC, is casting a wary eye at the virtualization juggernaut, wondering how far and wide its ambitions will extend and how they will impact the market.

As Babcock points out, however, by moving in mid race from one hypervisor horse (Xen) to another (KVM), the big backers of open-source virtualization might have surrendered insurmountable ground to VMware, and perhaps even to Microsoft. Much will depend on whether VMware abuses its market dominance, and whether Microsoft is successful with its mid-market virtualization push into its still-considerable Windows installed base.

Long Way to Go

Last but perhaps not least, KVM and the Open Virtualization Alliance (OVA) will have a say in the outcome. If OVA members wish to succeed, they’ll not only have to work exceptionally hard, but they’ll also have to work closely together.

Coming from behind is never easy, and, as Babcock contends, just trying to ride Linux’s coattails will not be enough. KVM will have to continue to define its own value proposition, and it will need all the marketing and technological support its marquee backers can deliver. One area of particular importance is operations management in the data center.

KVM’s market share, as reported by Gartner earlier this year, was less than one percent in server virtualization. It has a long way to go before it causes VMware’s executives any sleepless nights. That it wasn’t the first choice of its proponents, and that it has lost so much time and ground, doesn’t help the cause.

Cisco Hedges Virtualization Bets

Pursuant to my post last week on the impressive growth of the Open Virtualization Alliance (OVA), which aims to commoditize VMware’s virtualization advantage by offering a viable open-virtualization alternative to the market leader, I note that Red Hat and five other major players have founded the oVirt Project, established to transform Red Hat Enterprise Virtualization Manager (RHEV-M) into a feature-rich virtualization management platform with well-defined APIs.

Cisco to Host Workshop

According to coverage at The Register, Red Hat has been joined on the oVirt Project by Cisco, IBM, Intel, NetApp and SuSE, all of which have committed to building a KVM-based pluggable hypervisor management framework along with an ecosystem of plug-in partners.

Although Cisco will be hosting an oVirt workshop on November 1-3 at its main campus in San Jose, the article at The Register suggests that the networking giant is the only one of the six founding companies not on the oVirt Project’s governance board.  Indeed, the sole reference to Cisco on the oVirt Project website relates to the workshop.

Nonetheless, Cisco’s participation in oVirt warrants attention.

Insurance Policies and Contingency Plans

Realizing that VMware could increasingly eat into the value, and hence the margins, associated with its network infrastructure as cloud computing proliferates, Cisco seems to be devising insurance policies and contingency plans in the event that its relationship with the virtualization market leader becomes, well, more complicated.

To be sure, the oVirt Project isn’t Cisco’s only backup plan. Cisco also is involved with OpenStack, the open-source cloud-computing project that effectively competes with oVirt — and which Red Hat assails as a community “owned”  by its co-founder and driving force, Rackspace — and it has announced that its Cisco Nexus 1000V distributed virtual switch and the Cisco Unified Computing System with Virtual Machine Fabric Extender (VM-FEX) capabilities will support the Windows Server Hyper-V hypervisor to be released with Microsoft Windows Server 8.

Increasingly, Cisco is spreading its virtualization bets across the board, though it still has (and makes) most of its money on VMware.

Intel-Microsoft Mobile Split All Business

In an announcement today, Google and Intel said they would work together to optimize future versions of the  Android operating system for smartphones and other mobile devices powered by Intel chips.

It makes good business sense.

Pursuit of Mobile Growth

Much has been made of alleged strains in the relationship between the progenitors of Wintel — Microsoft’s Windows operating system and Intel’s microprocessors — but business partnerships are not affairs of the heart; they’re always pragmatic and results oriented. In this case, each company is seeking growth and pursuing its respective interests.

I don’t believe there’s any malice between Intel and Microsoft. The two companies will combine on the desktop again in early 2012, when Microsoft’s Windows 8 reaches market on PCs powered by Intel’s chips as well as on systems running the ARM architecture.

Put simply, Intel must pursue growth in mobile markets and data centers. Microsoft must similarly find partners that advance its interests.  Where their interests converge, they’ll work together; where their interests diverge, they’ll go in other directions.

Just Business

In PCs, the Wintel tandem was and remains a powerful industry standard. In mobile devices, Intel is well behind ARM in processors, while Microsoft is well behind Google and Apple in mobile operating systems. It makes sense that Intel would want to align with a mobile industry leader in Google, and that Microsoft would want to do likewise with ARM. A combination of Microsoft and Intel in mobile computing would amount to two also-rans combining to form . . . well, two also-rans in mobile computing.

So, with Intel and Microsoft, as with all alliances in the technology industry, it’s always helpful to remember the words of Don Lucchesi in The Godfather: Part III: “It’s not personal, it’s just business.”

PC Market: Tired, Commoditized — But Not Dead

As Hewlett-Packard prepares to spin off or sell its PC business within the next 12 to 18 months, many have spoken about the “death of the PC.”

Talk of “Death” and “Killing”

Talk of metaphorical “death” and “killing” has been rampant in technology’s new media for the past couple of years. When observers aren’t noting that a product or technology is “dead,” they’re saying that an emergent product of one sort or another will “kill” a current market leader. It’s all exaggeration and melodrama, of course, and it’s not helpful. It lowers the discourse, and it makes the technology industry appear akin to professional wrestling with nerds. Nobody wants to see that.

Truth be told, the PC is not dead. It’s enervated and its best days are behind it, but it’s still here. It has, however, become a commodity with paper-thin margins, and that’s why HP — more than six years after IBM set the precedent — is bailing on the PC market.

Commoditized markets are no place for thrill seekers or for CEOs of companies that desperately seek bigger profit margins. HP CEO Leo Apotheker, as a longtime software executive, must have viewed HP’s PC business, which still accounts for about 30 percent of the company’s revenues, with utter disdain when he first joined the company.

No Room for Margin

As  I wrote in this forum a while back, PC vendors these days have little room to add value (and hence margin) to the boxes they sell. It was bad enough when they were trying to make a living atop the microprocessors and operating systems of Intel and Microsoft, respectively. Now they also have to factor original design manufacturers (ODMs)  into the shrinking-margin equation.

It’s almost a dirty little secret, but the ODMs do a lot more than just manufacture PCs for the big brands, including HP and Dell. Many ODMs effectively have taken over hardware design and R&D from cost-cutting PC brands. Beyond a name on a bezel, and whatever brand equity that name carries, PC vendors aren’t adding much value to the box that ships.

For further background on how it came to this — and why HP’s exit from the PC market was inevitable — I direct you to my previous post on the subject, written more than a year ago. In that post, I quoted and referenced Stan Shih, Acer’s founder, who said that “U.S. computer brands may disappear over the next 20 years, just like what happened to U.S. television brands.”

Given the news this week, and mounting questions about Dell’s commitment to the low-margin PC business, Shih might want to give that forecast a sharp forward revision.

Is Li-Fi the Next Wi-Fi?

The New Scientist published a networking-related article last week that took me back to my early days in the industry.

The piece in question dealt with Visible Light Communication (VLC), a form of light-based networking in which data is encoded and transmitted by varying the rate at which LEDs flicker on and off, all at intervals imperceptible to the human eye.
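For readers who want a feel for the mechanics, here is a deliberately simplified sketch of on-off keying, the most basic way to turn LED flicker into bits: each bit holds the LED on or off for one short symbol period. The set_led() function is a hypothetical stand-in for real driver hardware, and production VLC systems use far faster symbol rates, plus more sophisticated modulation and error coding, than this toy example.

```python
# Simplified on-off-keying illustration of light-based transmission.
# set_led() is a hypothetical hardware hook; real VLC gear modulates far
# faster than time.sleep() can manage and adds coding to avoid visible
# flicker, so treat this strictly as a conceptual sketch.
import time

SYMBOL_PERIOD = 1e-6  # one microsecond per bit, imperceptible to the eye


def set_led(on: bool) -> None:
    """Hypothetical driver call that switches the LED on or off."""
    pass  # replace with actual GPIO or LED-driver control


def transmit(payload: bytes) -> None:
    """Send each byte most-significant bit first, one symbol per bit."""
    for byte in payload:
        for bit_index in range(7, -1, -1):
            bit = (byte >> bit_index) & 1
            set_led(bool(bit))           # LED on = 1, LED off = 0
            time.sleep(SYMBOL_PERIOD)    # hold the symbol
    set_led(True)  # return to steady illumination between frames


transmit(b"hello")
```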

Also called Li-Fi — yes, indeed, the marketers are involved already — VLC is being positioned for various applications, including those in hospitals, on aircraft, on trading floors, in automotive car-to-car and traffic-control scenarios, on trade-show floors, in military settings,  and perhaps even in movie theaters where VLC-based projection might improve the visual acuity of 3D films. (That last wacky one was just something that spun off the top of my shiny head.)

From FSO to VLC

Where I don’t see VLC playing a big role, certainly not as a replacement for Wi-Fi or its future RF-based successors, is in home networking. VLC’s requirement for line of sight will make it a non-starter for Wi-Fi scenarios where wireless networking must traverse floors, walls, and ceilings. There are other room-based applications for VLC in the home, though, and those might work if device (PC, tablet, mobile phone), display,  and lighting vendors get sufficiently behind the technology.

I feel relatively comfortable pronouncing an opinion on this technology. The idea of using light-based networking has been with us for some time, and I worked extensively with infrared and laser data-transmission technologies back in the early to mid 90s. Those were known as free-space optical (FSO) communications systems, and they fulfilled a range of niche applications, primarily in outdoor point-to-point settings. The vendor for which I worked provided systems for campus deployments at universities, hospitals, museums, military bases, and other environments where relatively high-speed connectivity was required but couldn’t be delivered by trenched fiber.

The technology mostly worked . . . except when it didn’t. Connectivity disruptions typically were caused by what I would term “transient environmental factors,” such as fog, heavy rain or snow, as well as dust and sand particulate. (We had some strange experiences with one or two desert deployments). From what I can gather, the same parameters generally apply to VLC systems.

Will that be White, Red, or Resonant Cavity?

Then again, the performance of VLC systems goes well beyond what we were able to achieve with FSO in the 90s. Back then, laser-based free-space optics could deliver maximum bandwidth of OC3 speeds (roughly 155Mbps), whereas the current high-end performance of VLC systems reaches transmission rates of 500Mbps. An article published earlier this year at theEngineer.com provides an overview of VLC performance capabilities:

 “The most basic form of white LEDs are made up of a bluish to ultraviolet LED surrounded by a yellow phosphor, which emits white light when stimulated. On average, these LEDs can achieve data rates of up to 40Mb/sec. Newer forms of LEDs, known as RGBs (red, green and blue), have three separate LEDs that, when lit at the same time, emit a light that is perceived to be white. As these involve no delay in stimulating a phosphor, data rates in RGBs can reach up to 100Mb/sec.

But it doesn’t stop there. Resonant-cavity LEDs (RCLEDs), which are similar to RGB LEDs and are fitted with reflectors for spectral clarity, can now work at even higher frequencies. Last year, Siemens and Berlin’s Heinrich Hertz Institute achieved a data-transfer rate of 500Mb/sec with a white LED, beating their earlier record of 200Mb/sec. As LED technology improves with each year, VLC is coming closer to reality and engineers are now turning their attention to its potential applications.”
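To put those figures in rough perspective, here is a back-of-the-envelope comparison (link rates only, ignoring protocol overhead; the 1GB payload is arbitrary) of how long a transfer would take at the rates quoted above and at the old FSO ceiling:

```python
# Rough transfer-time comparison at the data rates discussed above.
# These are link rates; real-world throughput would be lower.
FILE_SIZE_BITS = 8e9  # a 1 GB file expressed in bits

rates_mbps = {
    "Phosphor white LED (~40 Mb/s)": 40,
    "RGB LED (~100 Mb/s)": 100,
    "RCLED record (~500 Mb/s)": 500,
    "1990s laser FSO, OC-3 (~155 Mb/s)": 155,
}

for label, mbps in rates_mbps.items():
    seconds = FILE_SIZE_BITS / (mbps * 1e6)
    print(f"{label}: about {seconds:.0f} seconds to move 1 GB")
```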

I’ve addressed potential applications earlier in this post, but a sage observation is offered in theEngineer.com piece by Oxford University’s Dr. Dominic O’Brien, who sees applications falling into two broad buckets: those that “augment existing infrastructure,” and those in which  visible networking offers a performance or security advantage over conventional alternatives.

Will There Be Light?

Despite the merit and potential of VLC technology, its market is likely to be limited, analogous to the demand that developed for FSO offerings. One factor that has changed, and that could work in VLC’s favor, is RF spectrum scarcity. VLC could potentially help to conserve RF spectrum by providing much-needed bandwidth; but such a scenario would require more alignment and cooperation between government and industry than we’ve seen heretofore. Curb your enthusiasm accordingly.

The lighting and display industries have a vested interest in seeing VLC prosper. Examining the membership roster of the Visible Light Communications Consortium (VLCC), one finds it includes many of Japan’s big names in consumer electronics. Furthermore, in its continuous pursuit of new wireless technologies, Intel has taken at least a passing interest in VLC/Li-Fi.

If the vendor community positions it properly, standards cohere, and the market demands it, perhaps there will be at least some light.

Intel’s Fulcrum Buy Validates Merchant Silicon, Rise of Cloud

When I wrote my post earlier today on Cisco’s merchant-silicon dilemma, I had yet to read about Intel’s acquisition of Fulcrum Microsystems, purveyor of silicon for 10GbE and 40GbE switches.

While the timing of my post was fortuitous, today’s news suggests that Intel has been thinking about data-center merchant silicon for some time. Acquisitions typically don’t come together overnight, and Intel doubtless has been taking careful note of the same trends many of us have witnessed.

Data Center on a Chip

In announcing the deal today, Intel has been straightforward about its motivations and objectives. As Intel officials explained to eWeek, Fulcrum’s chip technology will not only allow network-equipment vendors to satisfy demand for high-performance, low-latency 10GbE and 40GbE gear, but it also will put Intel in position to fulfill silicon requirements for all aspects of converged data centers. With that in mind, Intel has stated that it is working to integrate a portfolio of comprehensive data-center components — covering servers, storage, and networking — based on its Xeon processors.

With converged data centers all the rage at Cisco, HP, Dell, IBM, (and many other vendors besides), Intel wants to put itself in position to meet the burgeoning need.

Intel did not disclose financial details of the acquisition, which is expected to close in the third quarter, but analysts generally believe the deal will have only modest impact on Intel’s bottom line.

Strategically, though, the consensus is that it offers considerable upside. Intel apparently has told Deutsche Bank analysts that it now captures only about two percent of overall expenditures dedicated to data-center technology. Fulcrum is seen as a key ingredient in helping Intel substantially boost its data-center take.

Unlikely to Repeat Past Mistakes

The deal puts Intel into direct competition with other merchant-silicon vendors in the networking market, including Broadcom and Marvell. Perhaps a bigger concern, as pointed out by Insight64 analyst Nathan Brookwood, is that Intel failed in its previous acquisitions of network-chip suppliers. Those acquisitions, executed during the late 90s, included the $2.2-billion purchase of Level One.

Much has changed since then, of course — in the market in general as well as in Intel’s product portfolio — and Brookwood concedes that the Fulcrum buy seems a better fit strategically and technologically than Intel’s earlier forays into the networking space. Obviously, data-center convergence was not on the cards back then.

Aligned with March to Merchant Silicon, Rise of Cloud

To be sure, the acquisition is perfectly aligned with the networking community’s shift to merchant silicon and with the evolution of highly virtualized converged data centers, including cloud computing.

One vendor that’s enthusiastic about the deal is Arista Networks. In email correspondence after the deal was announced, Arista CEO Jayshree Ullal explained why she and her team are so excited at today’s news.

Arista Thrilled 

First off, Ullal noted that Arista is one of Fulcrum’s top customers. Intel’s acquisition of Fulcrum, Ullal said, “validates the enterprise-to-cloud networking migration.” What’s also validated, Ullal said, is merchant silicon, as opposed to “outdated clunky ASICs.” Now there are three major merchant chip vendors serving the networking industry: Intel, Broadcom, and Marvell.

Ullal also echoed others in saying that the deal is great for Intel because it moves the chip kingpin into networking/switch silicon and cloud computing. Finally, she said Fulcrum benefits because, with the full backing of Intel, it can leverage the parent company’s “processes and keep innovating now and beyond for big data, cloud, and virtualization.”

Even though, monetarily, there have been bigger acquisitions, today’s deal seems to have a strategic resonance that will be felt for a long time. Intel could play a significant role in expediting the already-quickening commoditization of networking hardware — in switches and in the converged data center — thereby putting even more pressure on networking and data-center vendors to compensate with the development and delivery of value-add software.

Cloud Buyers Put Vendors on Notice

No matter where you look in the vendor community, cloud-computing strategies proliferate. It doesn’t matter whether the vendors sell servers, storage, networking gear, management software, or professional services, they are united in their fervor to spin compelling private, public, and hybrid cloud narratives.

Secret Sauce or Sticky Glue?

At the same time, of course, many of these vendors seek competitive differentiation that features a proprietary secret sauce that ultimately serves more as glue than comestible, binding paying customers to them indefinitely.

Customers, many of which are familiar with the history of information technology, are cognizant of the vendor maneuvering. They’ve seen similar shows in the past, and they know how those productions usually end — with customers typically bound to technology investments they may not want to perpetuate while enmeshed in unhealthy relationships with vendors that delivered dependency disguised as liberation.

Ideally, vendors and customers should enjoy mutually beneficial relationships, with each side deriving value from the engagements. Unfortunately, vendors seek not only to deliver value to customers, but also to differentiate themselves from their competitors, often by finding a way of locking the latter out of their customer base. Proprietary technologies — not so interoperable with those offered by other vendors — often serve the purpose.

Won’t Get Fooled Again

In the realm of cloud computing, customers are trying not to get fooled again. They’re banding together on multiple fronts to ensure that their requirements are fully acknowledged in the development and realization of cloud-computing industry standards covering data portability, cloud interoperability, and cloud security. What they obviously fear is that big vendors, without customer oversight and constant vigilance, will find ways to gerrymander the standards process in their favor, perhaps to the long-term disadvantage of cloud-computing clientele.

With that in mind, organizations such as the Cloud Standards Customer Council (CSCC), announced by OMG in April, and the Open Data Center Alliance, launched last fall, have formed.

The Open Data Center Alliance bills itself as an independent IT consortium led by global IT organizations — including BMW, China Life, Deutsche Bank, JPMorgan Chase, Lockheed Martin, Marriott International, Inc. and other well-known corporate entities — that is committed to providing a unified vision for long-term data center and cloud infrastructure requirements. It pursues that objective through the development of a vendor-agnostic usage-model roadmap. Intel Corporation serves as a technical advisor to the alliance, which suggests that it is not without vendor representation.

For its part, the Cloud Standards Customer Council also is infused with vendor blood. Among its founding enterprise members are IBM, Kaavo, CA Technologies, Rackspace, and Software AG.  Organizations (and major IT buyers) that have joined the council include Lockheed Martin, Citigroup, State Street, and North Carolina State University.

It’s interesting that Lockheed Martin is involved with both the Open Data Center Alliance and the Cloud Standards Customer Council. That indicates that, while overlap between the two bodies might exist, Lockheed Martin believes each satisfies — at least for its needs and from its perspective — a distinct purpose.

Activist Language

The Cloud Standards Customer Council says it is an “end user advocacy group dedicated to accelerating cloud’s successful adoption, and drilling down into the standards, security, and interoperability issues surrounding the transition to the cloud.” It says it will do the following:

  • Drive customer requirements into the development process to gain acceptance by the Global 2000.
  • Deliver customer-focused content in the form of best practices, patterns, case studies, use cases, and standards roadmaps.
  • Influence the standards development process for new cloud standards.
  • Facilitate the exchange of real-world stories, practices, lessons and insights.

Its tone, despite the presence of vendors among its founding members, is relatively activist regarding the urgent need for customer requirements and real-world insights as essential ingredients in the standards-making process.

It remains to be seen how the Cloud Standards Customer Council and the Open Data Center Alliance will evolve, separately and together, and it’s also too early to say whether customers will be entirely successful in their efforts to get what they want and need from cloud-computing standards bodies.

Nonetheless, there’s already a tension, if not a distrust, between buyers and sellers of cloud-computing technology and services. The vendors are on notice.

Pondering Intel’s Grand Design for McAfee

Befuddlement and buzz jointly greeted Intel’s announcement today regarding its pending acquisition of security-software vendor McAfee for $7.68 billion in cash.

Intel was not among the vendors I expected to take an acquisitive run at McAfee. It appears I was not alone in that line of thinking, because the widespread reaction to the news today involved equal measures of incredulity and confusion. That was partly because Intel was McAfee’s buyer, of course, but also because Intel had agreed to pay such a rich premium, $48 per McAfee share, 60 percent above McAfee’s closing price of $29.93 on Wednesday.

What was Intel Thinking?

That Intel paid such a price tells us a couple things. First, that Intel really felt it had to make this acquisition; and, second, that Intel probably had competition for the deal. Who that competition might have been is anybody’s guess, but check my earlier posts on potential McAfee acquirers for a list of suspects.

One question that came to many observers’ minds today was a simple one: What the hell was Intel thinking? Put another way, just what does Intel hope to derive from ownership of McAfee that it couldn’t have gotten from a less-expensive partnership with the company?

Many attempting to answer this question have pointed to smartphones and other mobile devices, such as slates and tablets, as the true motivations for Intel’s purchase of McAfee. There’s a certain logic to that line of thinking, to the idea that Intel would want to embed as much of McAfee’s security software as possible into chips that it heretofore has had a difficult time selling to mobile-device vendors, who instead have gravitated to  designs from ARM.

Embedded M2M Applications

In the big picture, that’s part of Intel’s plan, no doubt. But I also think other motivations were at play.  An important market for Intel, for instance, is the machine-to-machine (M2M) space.

That M2M space is where nearly everything that can be assigned an IP address and managed or monitored remotely — from devices attached to the smart grid (smart meters, hardened switches in substations, power-distribution gear) to medical equipment, to building-control systems, to televisions and set-top boxes  — is being connected to a communications network. As Intel’s customers sell systems into those markets, downstream buyers have expressed concerns about potential security vulnerabilities. Intel could help its embedded-systems customers ship more units and generate more revenue for Intel by assuaging the security fears of downstream buyers.

Still, that roadmap, if it exists, will take years to reach fruition. In the meantime, Intel will be left with slideware and a necessarily loose coupling of its microprocessors with McAfee’s security software. As Nathan Brookwood, principal analyst at Insight 64 suggested, Intel could start off by designing its hardware to work better with McAfee software, but it’s likely to take a few years, and new processor product cycles, for McAfee technology to get fully baked into Intel’s chips.

Will Take Time

So, for a while, Intel won’t be able to fully realize the value of McAfee as an asset. What’s more, there are parts of McAfee that probably don’t fit into Intel’s chip-centric view of the world. I’m not sure, for example, what this transaction portends for McAfee’s line of Internet-security products obtained through its acquisition of Secure Computing. Given that McAfee will find its new home inside Intel’s Software and Service division, as Richard Stiennon notes, the prospects for the Secure Computing product line aren’t bright.

I know Intel wouldn’t do this deal just because it flipped a coin or lost a bet, but Intel has a spotty track record, at best, when it comes to M&A activity. Media observers sometimes assume that technology executives are like masters of the universe, omniscient beings with superior intellects and brilliant strategic designs. That’s rarely true, though. Usually, they’re just better-paid, reasonably intelligent human beings, doing their best, with limited information and through hazy visibility, to make the right business decisions. They make mistakes, sometimes big ones.

M&A Road Full of Potholes

Don’t take it from me; consult the business-school professors. A Wharton course on mergers and acquisitions spotlights this quote from Robert W. Holthausen, Nomura Securities Company Professor, Professor of Accounting and Finance and Management:

“Various studies have shown that mergers have failure rates of more than 50 percent. One recent study found that 83 percent of all mergers fail to create value and half actually destroy value. This is an abysmal record. What is particularly amazing is that in polling the boards of the companies involved in those same mergers, over 80 percent of the board members thought their acquisitions had created value.”

I suppose what I’m trying to say is that just because Intel thinks it has a plan for McAfee, that doesn’t mean the plan is a good one or, even presuming it is a good plan, that it will be executed successfully. There are many potholes and unwanted detours along M&A road.