I’ve written quite a bit recently about OpenFlow and the Open Networking Foundation (ONF). For a change of pace, I will focus today on the Open Compute Project.
In many ways, even though OpenFlow deals with networking infrastructure and Open Compute deals with computing infrastructure, they are analogous movements, springing from the same fundamental set of industry dynamics.
Open Compute was introduced formally to the world in April. Its ostensible goal was “to develop servers and data centers following the model traditionally associated with open-source software projects.” That’s true as far as it goes, but it’s only part of the story. The stated goal actually is a means to an end, which is to devise an operational template that allows cloud behemoths such as Facebook to save lots of money on computing infrastructure. It’s all about commoditizing and optimizing the operational efficiency of the hardware within many of the largest cloud data centers that don’t belong to Google.
Speaking of Google, it is not involved with Open Compute. That’s primarily because Google had been taking a DIY approach to its data centers since long before Facebook began working on the blueprint for the Open Compute Project.
Google as DIY Trailblazer
For Google, its ability to develop and deliver its own data-center technologies — spanning computing, networking and storage infrastructure — became a source of competitive advantage. By using off-the-shelf hardware components, Google was able to provide itself with cost- and energy-efficient data-center infrastructure that did exactly what it needed to do — and no more. Moreover, Google no longer had to pay a premium to technology vendors that offered products that weren’t ideally suited to its requirements and that offered extraneous “higher-value” (pricier) features and functionality.
Observing how Google had used its scale and its ample resources to fashion its cost-saving infrastructure, Facebook considered how it might follow suit. The goal at Facebook was to save money, of course, but also to mitigate or perhaps eliminate the infrastructure-based competitive advantage Google had developed. Facebook realized that it could never compete with Google at scale in the infrastructure cost-saving game, so it sought to enlist others in the cause.
And so the Open Compute Project was born. The aim is to have a community of shared interest deliver cost-saving open-hardware innovations that can help Facebook scale its infrastructure at an operational efficiency approximating Google’s. If others besides Facebook benefit, so be it. That’s not a concern.
As Facebook seeks to boost its advertising revenue, it is effectively competing with Google. The search giant still derives nearly 97 percent of its revenue from advertising, and its Google+ is intended to distract, if not derail, Facebook’s core business, just as Google Apps is meant to keep Microsoft focused on protecting one of its crown jewels rather than on allocating more corporate resources to search and search advertising.
There’s nothing particularly striking about that. Cloud service providers are expected to compete against each other by developing new revenue-generating services and by achieving new cost-saving operational efficiencies. In that context, the Open Compute Project can be seen, at least in one respect, as Facebook’s open-source bid to level the infrastructure playing field and undercut, as previously noted, what has been a Google competitive advantage.
But there’s another dynamic at play. As the leading cloud providers with their vast data centers increasingly seek to develop their own hardware infrastructure — or to create an open-source model that facilitates its delivery — we will witness some significant collateral damage. Those taking the hit, as is becoming apparent, will be the hardware systems vendors, including HP, IBM, Oracle (Sun), Dell, and even Cisco. That’s only on the computing side of the house, of course. In networking, as software-defined networking (SDN) and OpenFlow find ready embrace among the large cloud shops, Cisco and others will face erosion of revenue and profit margin, though how much and how soon remain to be seen.
Who’s Steering the OCP Ship?
So, who, aside from Facebook, will set the strategic agenda of Open Compute? To answer that question, we need only consult the identities of those named to the Open Compute Project Foundation’s board of directors:
- Chairman/President – Frank Frankovsky, Director, Technical Operations at Facebook
- Jason Waxman, General Manager, High Density Computing, Data Center Group, Intel
- Mark Roenigk, Chief Operating Officer, Rackspace Hosting
- Andy Bechtolsheim, Industry Guru
- Don Duet, Managing Director, Goldman Sachs
It’s no shocker that Facebook retains the chairman’s role. Facebook didn’t launch this initiative to have somebody else steer the ship.
Similarly, it’s not a surprise that Intel is involved. Intel benefits regardless of whether cloud shops build their own systems, buy them from HP or Dell, or even get them from a Taiwanese or Chinese ODM.
As for the Rackspace representation, that makes sense, too. Rackspace already has OpenStack, open-source software for private and public clouds, and the Open Compute approach provides a logical hardware complement to that effort.
After that, though, the board membership of the Open Compute Project Foundation gets rather interesting.
Examining Bechtolsheim’s Involvement
First, there’s the intriguing presence of Andy Bechtolsheim. Those who follow the networking industry will know that Andy Bechtolsheim is more than an “industry guru,” whatever that means. Among his many roles, Bechtolsheim serves as the chief development officer and co-founder of Arista Networks, a growing rival to Cisco in low-latency data-center switching, especially at cloud-scale web shops and financial-services companies. It bears repeating that Open Compute’s mandate does not extend to network infrastructure, which is the preserve of the analogous OpenFlow.
Bechtolsheim’s history is replete with successes, as a technologist and as an investor. He was one of the earliest investors in Google, which makes his involvement in Open Compute deliciously ironic.
More recently, he disclosed a seed-stage investment in Nebula, which, as Derrick Harris at GigaOM wrote this summer, has “developed a hardware appliance pre-loaded with customized OpenStack software and Arista networking tools, designed to manage racks of commodity servers as a private cloud.” The reference architectures for the commodity servers include Dell’s PowerEdge C Micro Servers and servers that adhere to Open Compute specifications.
We know, then, why Bechtolsheim is on the board. He’s a high-profile presence that I’m sure Open Compute was only too happy to welcome with open arms (pardon the pun), and he also has business interests that would benefit from a furtherance of Open Compute’s agenda. Not to put too fine a point on it, but there’s an Arista and a Nebula dimension to Bechtolsheim’s board role at the Open Compute Project Foundation.
OpenStack Angle for Rackspace, Dell
Interestingly, the board presences of Bechtolsheim and Rackspace’s Mark Roenigk both emphasize OpenStack considerations, as does Dell’s involvement with Open Compute. Dell doesn’t have a board seat — at least not according to the Open Compute website — but it seems to think it can build a business for solutions based on Open Compute and OpenStack among second-tier purveyors of public-cloud services and among those pursuing large private or hybrid clouds. Both will become key strategic markets for Dell as its SMB installed base migrates applications and spending to the cloud.
Dell notably lost a chunk of server business when Facebook chose to go the DIY route, in conjunction with Taiwanese ODM Quanta Computer, for servers in its data center in Prineville, Oregon. Through its involvement in Open Compute, Dell might be trying to regain lost ground at Facebook, but I suspect that ship has sailed. Instead, Dell probably is attempting to ensure that it prevents or mitigates potential market erosion among smaller service providers and enterprise customers.
What Goldman Sachs Wants
The other intriguing presence on the Open Compute Project Foundation board is Don Duet from Goldman Sachs. Here’s what Duet had to say about his firm’s involvement with Open Compute:
“We build a lot of our own technology, but we are not at the hyperscale of Google or Facebook. We are a mid-scale company with a large global footprint. The work done by the OCP has the potential to lower the TCO [total cost of ownership] and we are extremely interested in that.”
Indeed, that perspective probably worries major server vendors more than anything else about Open Compute. Once Goldman Sachs goes this route, other financial-services firms will be inclined to follow, and nobody knows where the market attrition will end, presuming it ends at all.
Like Facebook, Goldman Sachs saw what Google was doing with its home-brewed, scale-out data-center infrastructure, and wondered how it might achieve similar business benefits. That has to be disconcerting news for major server vendors.
Welcome to the Future
The big takeaway for me, as I absorb these developments, is how the power axis of the industry is shifting. The big systems vendors used to set the agenda, promoting and pushing their products and influencing the influencers so that enterprise buyers kept their growth rates on the uptick. Now, though, a combination of factors — widespread data-center virtualization, the rise of cloud computing, a persistent and protracted global economic downturn (which has placed unprecedented emphasis on IT cost containment) — is reshaping the IT universe.
Welcome to the future. Some might like it more than others, but there’s no going back.