
Amazon-RIM: Summer Reunion?

Think back to last December, just before the holidays. You might recall a Reuters report, quoting “people with knowledge of the situation,” claiming that Research in Motion (RIM) rejected takeover propositions from Amazon.com and others.

The report wasn’t clear on whether the informal discussions resulted in any talk of price between Amazon and RIM, but apparently no formal offer was made. RIM, then still under the stewardship of former co-CEOs Jim Balsillie and Mike Lazaridis, reportedly preferred to remain independent and to address its challenges alone.

I Know What You Discussed Last Summer

Since then, a lot has happened. When the Reuters report was published — on December 20, 2011 — RIM’s market value had plunged 77 percent during the previous year, sitting then at about $6.8 billion. Today, RIM’s market capitalization is $3.7 billion. What’s more, the company now has Thorsten Heins as its CEO, not Balsillie and Lazaridis, who were adamantly opposed to selling the company. We also have seen recent reports that IBM approached RIM regarding a potential acquisition of the Waterloo, Ontario-based company’s enterprise business, and rumors have surfaced that RIM might sell its handset business to Amazon or Facebook.

Meanwhile, RIM’s prospects for long-term success aren’t any brighter than they were last winter, and activist shareholders, not interested in a protracted turnaround effort, continue to lobby for a sale of the company.

As for Amazon, it is said to be on the cusp of entering the smartphone market, presumably using a forked version of Android, which is what it runs on the Kindle tablet.  From the vantage point of the boardroom at Amazon, that might not be a sustainable long-term plan. Google is looking more like an Amazon competitor, and the future trajectory of Android is clouded by Google’s strategic considerations and by legal imbroglios relating to patents. Those presumably were among the reasons Amazon approached RIM last December.

Uneasy Bedfellows

It’s no secret that Amazon and Google are uneasy Android bedfellows. As Eric Jackson wrote just after the Reuters story hit the wires:

Amazon has never been a big supporter of Google’s Android OS for its Kindle. And Google’s never been keen on promoting Amazon as part of the Android ecosystem. It seems that both companies know this is just a matter of time before each leaves the other.

Yes, there’s some question as to how much value inheres in RIM’s patents. Estimates on their worth are all over the map. Nevertheless, RIM’s QNX mobile-operating system could look compelling to Amazon. With QNX and with RIM’s patents, Amazon would have something more than a contingency plan against any strategic machinations by Google or any potential litigiousness by Apple (or others).  The foregoing case, of course, rests on the assumption that QNX, rechristened BlackBerry 10, is as far along as RIM claims. It also rests on the assumption that Amazon wants a mobile platform all its own.

It was last summer when Amazon reportedly made its informal approach to RIM. It would not be surprising to learn that a reprise of discussions occurred this summer. RIM might be more disposed to consider a formal offer this time around.


Inevitability of Virtualized Infrastructure

As a previous post, Infrastructure Virtualization Versus Converged Infrastructure, attests, I strongly believe that virtualization is leading us to a future in which underlying hardware becomes largely undifferentiated and interchangeable. Applications and orchestration will reside in software riding atop the virtualization layer, which effectively will function as an abstraction buffer above hardware infrastructure. The latter will eventually include hardware for compute, networking, and storage.

Vendors that rely on hardware-based business models will have trouble adapting to this new reality. Many of these companies have hordes of software developers and software engineers, but they inextricably intertwine their software and hardware as a matter of business practice, selling the latter as proprietary boxes that often cannot interoperate with, or be swapped out for, competing hardware. It’s classic hardware-based vendor lock-in, and it’s been with us for many years. This applies to vendors that sell all three main types of hardware infrastructure, and to those that sell them tied together as converged infrastructure.

Loosening a Tenacious Grip

Proprietary data-center hardware would appear to be running on borrowed time, though it will not disappear overnight. Its grip will be especially tenacious in the enterprise, though the pull of the cloud eventually will weaken its hold. Proprietary compute infrastructure will be the first to succumb, but networking and storage will fall, too. The economic and operational logic powering the transition is inexorable, so it’s a question of when, not whether, it will happen.

While CapEx cost savings are an obvious benefit, operational flexibility (shifting workloads with agility and less effort) and OpEx savings also are factors. Infrastructure hardware will be cheaper, as well as easier and less costly to run. Pools of industry-standard hardware will be reallocated on demand to serve the needs of application workloads. Data-center customers no longer will be constrained by the hardware-release schedules of their previous vendors of choice. Customers also will be able to take advantage of the latest industry-standard chipsets, which will power hardware with improved energy efficiency and better cooling characteristics.

In servers, and now in storage, Facebook’s Open Compute Project (OCP) has sought to expedite the move to off-the-shelf hardware. Last week at OSCON, Frank Frankovsky, a vice president at Facebook and the chairman and president of the OCP, rallied the open-source troops by arguing that proprietary x86 systems are “gratuitously differentiated.” He called for all hardware-design specifications to be open.

OCP as Competitive Cudgel

That would benefit Facebook, which launched OCP as a vehicle to help it lower data-center CapEx and OpEx, boost operational flexibility, and — last but not least — mitigate a competitive advantage held by Google, which had a massive head start in rationalizing and fine-tuning its data centers and IT infrastructure. In fact, Google cloaks its IT operations in extreme secrecy, believing that its practices and technologies deliver substantial competitive advantage over its main rivals, including Facebook. The latter must agree, because the animating idea behind Open Compute is to create a market, both demand and supply, for commodity server hardware that will reduce or eliminate Google’s edge.

Some have wondered why Google hasn’t joined OCP, but the answer should be obvious. Google believes it has cracked the infrastructure code, and it is therefore disinclined to share its insights and best practices with its competitors. Google isn’t a fan of proprietary vanity hardware — it’s been designing its own gear, then going to server and network ODMs, for some time now — but Google feels it has nothing to gain, and much to lose, from opening its kimono to the OCP crowd.

With networking, though, Google felt it needed a little help from its friends — as well as from its enemies. That explains why it allied with Facebook and other cloud-service providers in the Open Networking Foundation (ONF), which I have written about here on many occasions. The goal of the ONF, as with OCP, is to slip the proprietary shackles of hardware vendors, whose gear functions as an impediment to operational agility as well as a cost that could be reduced through SDN-style network virtualization. Google’s communitarian approach to addressing the network-virtualization riddle suggests that it believes it cannot achieve the desired outcome on its own.

Cracking the Nut

Whereas compute hardware was well on its way to standardization, networking hardware, until the ONF, was akin to a vertically integrated mainframe system, replete with a proliferating number of both proprietary and industry-standard protocols. Networking is a bigger, and tougher, nut to crack.

But crack it will, first at the big cloud-service providers, then, as the cloud gains momentum, at enterprises.

PS: I will post something tomorrow about VMware’s just-announced acquisition of Nicira, which is big news no matter how you slice it.  I wrote the above post before I learned of the acquisition.

Direct from ODMs: The Hardware Complement to SDN

Subsequent to my return from Network Field Day 3, I read an interesting article published by Wired that dealt with the Internet giants’ shift toward buying networking gear from original design manufacturers (ODMs) rather than from brand-name OEMs such as Cisco, HP Networking, Juniper, and Dell’s Force10 Networks.

The development isn’t new — Andrew Schmitt, now an analyst at Infonetics, wrote about Google designing its own 10-GbE switches a few years ago — but the story confirmed that the trend is gaining momentum and drawing a crowd, which includes brokers and custom suppliers as well as increasing numbers of buyers.

In the Wired article, Google, Microsoft, Amazon, and Facebook were explicitly cited as web giants buying their switches directly from ODMs based in Taiwan and China. These same buyers previously procured their servers directly from ODMs, circumventing brand-name server vendors such as HP and Dell.  What they’re now doing with networking hardware, then, is a variation on an established theme.

The ONF Connection

Just as with servers, the web titans have their reasons for going directly to ODMs for their networking hardware. Sometimes they want a simpler switch than the brand-name networking vendors offer, and sometimes they want certain functionality that networking vendors do not provide in their commercial products. Most often, though, they’re looking for cheap commodity switches based on merchant silicon, which has become more than capable of handling the requirements the big service providers have in mind.

Software is part of the picture, too, but the Wired story didn’t touch on it. Look at the names of the Internet companies that have gone shopping for ODM switches: Google, Microsoft, Facebook, and Amazon.

What do those companies have in common besides their status as Internet giants and their purchases of copious amounts of networking gear? Yes, it’s true that they’re also cloud service providers. But there’s something else, too.

With the exception of Amazon, the other three are board members in good standing of the Open Networking Foundation (ONF). What’s more,  even though Amazon is not an ONF board member (or even a member), it shares the ONF’s philosophical outlook in relation to making networking infrastructure more flexible and responsive, less complex and costly, and generally getting it out of the way of critical data-center processes.

Pica8 and Cumulus

So, yes, software-defined networking (SDN) is the software complement to cloud-service providers’ direct procurement of networking hardware from ODMs.  In the ONF’s conception of SDN, the server-based controller maps application-driven traffic flows to switches running OpenFlow or some other mechanism that provides interaction between the controller and the switch. Therefore, switches for SDN environments don’t need to be as smart as conventional “vertically integrated” switches that combine packet forwarding and the control plane in the same box.
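
To make that division of labor concrete, here is a minimal, hypothetical sketch in Python of the kind of match-plus-action entry such a controller computes and pushes down to a switch. The field names loosely follow OpenFlow 1.0 conventions, but the structure is illustrative only and not tied to any real controller API; the point is that the switch merely matches packets against entries like this and applies the listed actions, while all of the decision-making lives on the server.

# Hypothetical sketch: the controller decides, the switch merely matches and forwards.
# Field names loosely follow OpenFlow 1.0 conventions; this is not a real controller API.

def build_flow_entry(client_ip, out_port):
    """Return a flow entry steering one client's web traffic out a chosen switch port."""
    return {
        "match": {                  # fields the switch compares against packet headers
            "in_port": 1,
            "dl_type": 0x0800,      # IPv4
            "nw_src": client_ip,
            "tp_dst": 80,           # HTTP
        },
        "actions": [                # what the switch does on a match
            {"type": "OUTPUT", "port": out_port},
        ],
        "priority": 100,
        "idle_timeout": 30,         # seconds of inactivity before the entry expires
    }

if __name__ == "__main__":
    # The controller would compute entries like this per application flow and install
    # them on every switch in the path; the switch itself holds no routing logic.
    print(build_flow_entry("10.0.0.5", out_port=4))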

This isn’t just guesswork on my part. Two companies are cited in the Wired article as “brokers” and “arms dealers” between switch buyers and ODM suppliers. Pica8 is one, and Cumulus Networks is the other.

If you visit the Pica8 website, you’ll see that the company’s goal is “to commoditize the network industry and to make the network platforms easy to program, robust to operate, and low-cost to procure.” The company says it is “committed to providing high-quality open software with commoditized switches to break the current performance/price barrier of the network industry.” The company’s latest switch, the Pronto 3920, uses Broadcom’s Trident+ chipset, which Pica8 says can be found in other ToR switches, including the Cisco Nexus 3064, Force10 S4810, IBM G8264, Arista 7050S, and Juniper QFX3500.

That “high-quality open software” to which Pica8 refers? It features XORP open-source routing code, support for Open vSwitch and OpenFlow, and Linux. Pica8 also is a relatively longstanding member of ONF.

Hardware and Software Pedigrees

Cumulus Networks is the other switch arms dealer mentioned in the Wired article. There hasn’t been much public disclosure about Cumulus, and there isn’t much to see on the company’s website. From background information on the professional pasts of the company’s six principals, though, a picture emerges of a company that would be capable of putting together bespoke switch offerings, sourced directly from ODMs, much like those Pica8 delivers.

The co-founders of Cumulus are J.R. Rivers, quoted extensively in the Wired article, and Nolan Leake. A perusal of their LinkedIn profiles reveals that both describe Cumulus as “satisfying the networking needs of large Internet service clusters with high-performance, cost-effective networking equipment.”

Both men also worked at Cisco spin-in venture Nuova Systems, where Rivers served as vice president of systems architecture and Leake served in the “Office of the CTO.” Rivers has a hardware heritage, whereas Leake has a software background, beginning his career building a Java IDE and working in senior positions at VMware and 3Leaf Networks before joining Nuova.

Some of you might recall that 3Leaf’s assets were nearly acquired by Huawei, before the Chinese networking company withdrew its offer after meeting with strenuous objections from the Committee on Foreign Investment in the United States (CFIUS). It was just the latest setback for Huawei in its recurring and unsuccessful attempts to acquire American assets. 3Com, anyone?

For the record, Leake’s LinkedIn profile shows that his work at 3Leaf entailed leading “the development of a distributed virtual machine monitor that leveraged a ccNUMA ASIC to run multiple large (many-core) single system image OSes on a Infiniband-connected cluster of commodity x86 nodes.”

For Companies Not Named Google

Also at Cumulus is Shrijeet Mukherjee, who serves as the startup company’s vice president of software engineering. He was at Nuova, too, and worked at Cisco right up until early this year. At Cisco, Mukherjee focused on “virtualization-acceleration technologies, low-latency Ethernet solutions, Fibre Channel over Ethernet (FCoE), virtual switching, and data center networking technologies.” He boasts of having led the team that delivered the Cisco Virtualized Interface Card (vNIC) for the UCS server platform.

Another Nuova alumnus at Cumulus is Scott Feldman, who was employed at Cisco until May of last year. Among other projects, he served in a leading role on development of “Linux/ESX drivers for Cisco’s UCS vNIC.” (Do all these former Nuova guys at Cumulus realize that Cisco reportedly is offering big-bucks inducements to those who join its latest spin-in venture, Insieme?)

Before moving to Nuova and then to Cisco, J.R. Rivers was involved with Google’s in-house switch design. In the Wired article, Rivers explains the rationale behind Google’s switch design and the company’s evolving relationship with ODMs. Google originally bought switches designed by the ODMs, but now it designs its own switches and has the ODMs manufacture them to its specifications, similar to how Apple designs its iPads and iPhones, then contracts with Foxconn for assembly.

Rivers notes, not without reason, that Google is an unusual company. It can easily design its own switches, but other service providers possess neither the engineering expertise nor the desire to pursue that option. Nonetheless, they still might want the cost savings that accrue from buying bare-bones switches directly from an ODM. This is the market Cumulus wishes to serve.

Enterprise/Cloud-Service Provider Split

Quoting Rivers from the Wired story:

“We’ve been working for the last year on opening up a supply chain for traditional ODMs who want to sell the hardware on the open market for whoever wants to buy. For the buyers, there can be some very meaningful cost savings. Companies like Cisco and Force10 are just buying from these same ODMs and marking things up. Now, you can go directly to the people who manufacture it.”

It has appeal, but only for large service providers, and perhaps also for very large companies that run prodigious server farms, such as some financial-services concerns. There’s no imminent danger of irrelevance for Cisco, Juniper, HP, or Dell, who still have the vast enterprise market and even many service providers to serve.

But this is a trend worth watching, illustrating the growing chasm between the DIY hardware and software mentality of the biggest cloud shops and the more conventional approach to networking taken by enterprises.

Networking Vendors Tilt at ONF Windmill

Closely following the latest developments and continuing progress of software-defined networking (SDN), I am reminded of what somebody who shall remain nameless said not long ago about why he chose to leave Cisco to pursue his career elsewhere.

He basically said that Cisco, as a huge networking company, is having trouble reconciling itself to the reality that the growing force known as cloud computing is not “network centric.” His words stuck with me, and I’ve been giving them a lot of thought since then.

All Computing Now

His opinion was validated earlier this week at a NetEvents symposium in Garmisch, Germany, where Dan Pitt, executive director of the Open Networking Foundation (ONF), made some statements about software-defined networking (SDN) that, while entirely consistent with what we’ve heard before from that community’s most fervent proponents, also seemed surprisingly provocative. Quoting Pitt, from a blog post published at ZDNet UK:

“In future, networking will become just an integral part of computing, using same tools as the rest of computing. Enterprises will get out of managing plumbing, operators will become software companies, IT will add more business value, and there will be more network startups from Generation Y.”

Pitt was asked what impact this architectural shift would have on network performance. He said that a 30,000-user campus could be supported by a four-year-old Dell PC.

Redefining Architecture, Redefining Value

Naturally, networking vendors can’t be elated at that prospect. Under the SDN master plan, the intelligence (and hence the value) of switching and routing gets moved to a server, or to a cluster of servers, on the edge of the network. Whether this is done with OpenFlow, Open vSwitch, or some other mechanism between the control plane and the switch doesn’t really matter in the big picture. What matters is that networking architectures will be redefined, and networking value will migrate into (and be subsumed within) a computing paradigm. Not to put too fine a point on it, but networking value will be inherent in applications and control-plane software, not in the dumb, physical hardware that will be relegated to shunting packets on the network.

At that same NetEvents symposium in Germany, a Computerworld UK story quoted Pitt saying something very similar to, though perhaps less eloquent than, what Berkeley professor and Nicira co-founder Scott Shenker said about network-protocol complexity.

Said Pitt:

“There are lots of networking protocols which make it very labour intensive to manage a network. There are too many “band aids” being used to keep a network working, and these band aids can actually cause many of the problems elsewhere in the network.”

Politics of ONF

I’ve written previously about the political dynamics of the Open Networking Foundation (ONF).

Just to recap, if you look at the composition of the board of directors at the ONF, you’ll know all you need to know about who wields power in that organization. The ONF board members are Google, Yahoo, Verizon, Deutsche Telekom, NTT, and Microsoft. Make no mistake about Microsoft’s presence. It is there as a cloud service provider, not as a vendor of technology products.

The ONF is run by large cloud service providers, and it’s run for large cloud service providers, though it’s conceivable that much of what gets done in the ONF will have applicability and value to cloud shops of smaller size and stature. I suppose it’s also conceivable that some of the ONF’s works will prove valuable at some point to large enterprises, though it should be noted that the enterprise isn’t a constituency that is top of mind for the ONF.

Vendors Not Driving

One thing is certain: Networking vendors are not steering the ONF ship. I’ve written that before, and I’ll no doubt write it again. In fact, I’ll quote Dan Pitt to that effect right now:

“No vendors are allowed on the (ONF) board. Only the board can found a working group, approve standards, and appoint chairs of working groups. Vendors can be on the groups but not chair them. So users are in the driving seat.”

And those users — really the largest of the cloud service providers — aren’t about to move over. In fact, the power elite that governs the ONF has a definite vision in mind for the future of networking, a future that — as we’ve already seen — will make networking subservient to applications, programmability, and computing.

Transition on the Horizon

As the SDN vision moves downstream from the largest service providers, such as those who run the show at the ONF, to smaller service providers and then to large enterprises, networking companies will have to transform themselves into software vendors — with software business models.

Can they do that? Some of them probably can, but others — including probably the largest of all — will have a difficult time making the transition, prisoners of their own past success and circumscribed by the classic “innovator’s dilemma.” Cisco, a networking colossus, has built a thriving franchise and dominant market position, replete with a full-fledged business model and an enormous sales machine. It will be hard to move away from a formula that’s filled the coffers all these years.

Still, move they must, though timing, as it often does, will count for a lot. The SDN wave won’t inundate the marketplace overnight, but, regardless of the underlying protocols and mechanisms that might run alongside or supersede OpenFlow, SDN seems set to eventually win adherents in CFO and CIO offices beyond the realm of the companies represented on the ONF’s board of directors. It will take some time, probably many years, but it’s a movement that will gain followers and momentum as it delivers quantifiable business benefits to those that adopt it.

Enterprise As Last Redoubt

The enterprise will be the last redoubt of conventional networking infrastructure, and it’s not difficult to envision Cisco doing everything in its power to keep it that way for as long as possible. Expect networking’s old guard to strongly resist the siren song of SDN. That’s only natural, even if — in the very long haul — it seems a vain pursuit and, ultimately, a losing battle.

At this point, I just want to emphasize that SDN need not lead to the commoditization of networking. Granted, it might lead to the commoditization of certain types of networking hardware, but there’s still value, much of it proprietary, that software-centric networking vendors can bring to the SDN table. But, as I said earlier, for many vendors that will mean a shift in business model, product focus, and go-to-market strategy.

In that Computerworld piece, some wonder whether networking vendors could prevent the rise of software-defined networking by refusing to play along.

Not Going Away

Again, I can easily imagine the vendors slowing and impeding the ascent of SDN within enterprises, but there’s absolutely no way for them to forestall its adoption at the major service providers represented by the ONF board members. Those players have the capital and the operational resources, to say nothing of the business motivation, to roll their own switches, perhaps with the help of ODMs, and to program their own applications and networks. That train has left the station and it can’t be recalled by even the largest of networking vendors, who really have no leverage or say in the matter. They can play along and try to find a niche where they can continue to add value, or they can dig in their heels and get circumvented entirely. It’s their choice.

Either way, the tension between the ONF and the traditional networking vendors is palpable. In the IETF, the vendors are casting glances and sometimes aspersions at the ONF, trying to figure out how they can mount a counterattack. The battle will be joined, but the ONF rules its own roost — and it isn’t going away.

Big Switch Hopes Floodlight Draws Crowd

As the curtain came down on 2011, software-defined networking (SDN) and its open-source enabling protocol, OpenFlow, continued to draw plenty of attention. So far, 2012 has been no different, with SDN serving as a locus of intense activity, heady discourse, and steady technological advance.

Just last week, for instance, Big Switch Networks announced the release of Floodlight, a Java-based, Apache-licensed OpenFlow controller. In making Floodlight available under the Apache license, which allows the code to be reused for both research and commercial purposes, Big Switch hopes to establish the controller as a platform for OpenFlow application development.

Big Switch acknowledges that other OpenFlow controllers are available — the company even asks rhetorically, in a blog post accompanying the announcement, whether the world really needs another OpenFlow controller — but it believes that Floodlight is differentiated through its ease of use, extensibility, and robustness.

Controller as Platform 

I think we all realize by now that OpenFlow is just an SDN protocol. It allows data-path flow tables on switches to be programmed by a software-based controller, represented by the likes of Floodlight.  While OpenFlow might be essential as a mechanism for the realization of software-defined networks, it is not where SDN business value will be delivered or where vendors will find their pots of gold.
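
As a concrete illustration of that controller-programs-the-flow-tables relationship, below is a hedged sketch of pushing a static flow entry to a Floodlight-style controller over its REST interface, using only the Python standard library. The endpoint path, port, and JSON field names are recalled from the Floodlight static flow pusher of this era and may differ between versions, so treat them as assumptions rather than a definitive reference.

# Hedged sketch: install a static flow entry on a Floodlight-style OpenFlow controller
# over HTTP. The endpoint path and JSON field names are assumptions based on the
# Floodlight static flow pusher of this era; consult your controller's documentation.
import json
import urllib.request

CONTROLLER = "http://127.0.0.1:8080"            # assumed default REST port
ENDPOINT = "/wm/staticflowentrypusher/json"     # assumed static-flow-pusher path

flow = {
    "switch": "00:00:00:00:00:00:00:01",        # datapath ID of the target switch
    "name": "demo-flow-1",                      # operator-chosen name for the entry
    "priority": "100",
    "ingress-port": "1",                        # match packets arriving on port 1
    "active": "true",
    "actions": "output=2",                      # forward matching packets out port 2
}

req = urllib.request.Request(
    CONTROLLER + ENDPOINT,
    data=json.dumps(flow).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:       # the controller replies with a status message
    print(resp.read().decode("utf-8"))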

Next up in the hierarchy of SDN value are the controllers. As Big Switch recognizes, they can serve as platforms for SDN application development. Many vendors, including HP, believe that applications will define the value (and hence the money-making potential) in the SDN universe. That’s a fair assumption.

Big Switch Networks has indicated that it wants to be the “VMware of networking,” delivering network virtualization and providing enterprise-oriented OpenFlow applications. If it can establish its controller as a popular platform for OpenFlow application development, it will set a foundation both for its own commercial success and for enterprise OpenFlow in general.

Seeking Enterprise Value

The key to success, of course, will be the degree to which the applications, and the business value that accrues from them, are compelling. We’ll also see management and orchestration, perhaps integrated with the controller(s), but the commercial acceptance of the applications will determine the need and scope for automated management of the overall SDN environment. This is particularly true in the enterprise market that Big Switch has targeted.

What will those enterprise applications be? Well, if I knew the answer to that question, I might be on a personal trajectory to obscene wealth, membership in an exclusive secret society, and perhaps ownership of a professional sports team (or, at minimum, a racehorse).

Service Providers Have Different Agenda

Meanwhile, in the rarefied heights of the largest cloud providers, such as the companies that populate the board at the Open Networking Foundation (ONF), I suspect that nearly everything of meaningful business value connected with OpenFlow and SDN will be done internally. Google and Facebook, for instance, will design and build (perhaps through ODMs) their own bare-bones servers and switches, and they will develop their own SDN controllers and applications. Their network infrastructure is a business asset, even a competitive advantage, and they will prefer to build and customize their own SDN environments rather than procure products and solutions from networking vendors, whether established players or startups.

Most enterprises, though, will be inclined to look toward the vendor community to equip them with SDN-related products, technologies, and expertise. This is presuming, of course, that an enterprise market for OpenFlow-based SDNs actually finds its legs.

Plenty of Work Ahead

So, again, it all comes back to the power and value of the applications, and this is why Big Switch is so keen to open-source its controller.  The enterprise market for OpenFlow-based SDNs won’t grow unless IT departments are comfortable adopting it. Vendors such as Big Switch will have to demonstrate that they are safe bets, capable of providing unprecedented value at minimal risk.

It’s a daunting challenge. OpenFlow definitely possesses long-term enterprise potential, but today it remains a long way from being able to check all the enterprise boxes. Big Switch, not to mention the enterprise OpenFlow community, needs a meaningful ecosystem to materialize sooner rather than later.

Like OpenFlow, Open Compute Signals Shift in Industry Power

I’ve written quite a bit recently about OpenFlow and the Open Networking Foundation (ONF). For a change of pace, I will focus today on the Open Compute Project.

In many ways, even though OpenFlow deals with networking infrastructure and Open Compute deals with computing infrastructure, they are analogous movements, springing from the same fundamental set of industry dynamics.

Open Compute was introduced formally to the world in April. Its ostensible goal was “to develop servers and data centers following the model traditionally associated with open-source software projects.”  That’s true insofar as it goes, but it’s only part of the story. The stated goal actually is a means to an end, which is to devise an operational template that allows cloud behemoths such as Facebook to save lots of money on computing infrastructure. It’s all about commoditizing and optimizing the operational efficiency of the hardware encompassed within many of the largest cloud data centers that don’t belong to Google.

Speaking of Google, it is not involved with Open Compute. That’s primarily because Google had been taking a DIY approach to its data centers long before Facebook began working on the blueprint for the Open Compute Project.

Google as DIY Trailblazer

For Google, its ability to develop and deliver its own data-center technologies — spanning computing, networking and storage infrastructure — became a source of competitive advantage. By using off-the-shelf hardware components, Google was able to provide itself with cost- and energy-efficient data-center infrastructure that did exactly what it needed to do — and no more. Moreover, Google no longer had to pay a premium to technology vendors that offered products that weren’t ideally suited to its requirements and that offered extraneous “higher-value” (pricier) features and functionality.

Observing how Google had used its scale and its ample resources to fashion its cost-saving infrastructure, Facebook  considered how it might follow suit. The goal at Facebook was to save money, of course, but also to mitigate or perhaps eliminate the infrastructure-based competitive advantage Google had developed. Facebook realized that it could never compete with Google at scale in the infrastructure cost-saving game, so it sought to enlist others in the cause.

And so the Open Compute Project was born. The aim is to have a community of shared interest deliver cost-saving open-hardware innovations that can help Facebook scale its infrastructure at an operational efficiency approximating Google’s. If others besides Facebook benefit, so be it. That’s not a concern.

Collateral Damage

As Facebook seeks to boost its advertising revenue, it is effectively competing with Google. The search giant still derives nearly 97 percent of its revenue from advertising, and its Google+ is intended to distract, if not derail, Facebook’s core business, just as Google Apps is meant to keep Microsoft focused on protecting one of its crown jewels rather than on allocating more corporate resources to search and search advertising.

There’s nothing particularly striking about that. Cloud service providers are expected to compete against each other by developing new revenue-generating services and by achieving new cost-saving operational efficiencies. In that context, the Open Compute Project can be seen, at least in one respect, as Facebook’s open-source bid to level the infrastructure playing field and undercut, as previously noted, what has been a Google competitive advantage.

But there’s another dynamic at play. As the leading cloud providers with their vast data centers increasingly seek to develop their own hardware infrastructure — or to create an open-source model that facilitates its delivery — we will witness some significant collateral damage. Those taking the hit, as is becoming apparent, will be the hardware systems vendors, including HP, IBM, Oracle (Sun), Dell, and even Cisco. That’s only on the computing side of the house, of course. In networking, as software-defined networking (SDN) and OpenFlow find ready embrace among the large cloud shops, Cisco and others will be subject to the loss of revenue and profit margin, though how much and how soon remain to be seen.

Who’s Steering the OCP Ship?

So, who, aside from Facebook, will set the strategic agenda of Open Compute? To answer that question, we need only consult the identities of those named to the Open Compute Project Foundation’s board of directors:

  • Chairman/President – Frank Frankovsky, Director, Technical Operations at Facebook
  • Jason Waxman, General Manager, High Density Computing, Data Center Group, Intel
  • Mark Roenigk, Chief Operating Officer, Rackspace Hosting
  • Andy Bechtolsheim, Industry Guru
  • Don Duet, Managing Director, Goldman Sachs

It’s no shocker that Facebook retains the chairman’s role. Facebook didn’t launch this initiative to have somebody else steer the ship.

Similarly, it’s not a surprise that Intel is involved. Intel benefits regardless of whether cloud shops build their own systems, buy them from HP or Dell, or even get them from a Taiwanese or Chinese ODM.

As for the Rackspace representation, that makes sense, too. Rackspace already has OpenStack, open-source software for private and public clouds, and the Open Compute approach provides a logical hardware complement to that effort.

After that, though, the board membership of the Open Compute Project Foundation gets rather interesting.

Examining Bechtolsheim’s Involvement

First, there’s the intriguing presence of Andy Bechtolsheim. Those who follow the networking industry will know that Andy Bechtolsheim is more than an “industry guru,” whatever that means. Among his many roles, Bechtolsheim serves as the chief development officer and co-founder of Arista Networks, a growing rival to Cisco in low-latency data-center switching, especially at cloud-scale web shops and financial-services companies. It bears repeating that Open Compute’s mandate does not extend to network infrastructure, which is the preserve of the analogous OpenFlow.

Bechtolsheim’s history is replete with successes, as a technologist and as an investor. He was one of the earliest investors in Google, which makes his involvement in Open Compute deliciously ironic.

More recently, he disclosed a seed-stage investment in Nebula, which, as Derrick Harris at GigaOM wrote this summer, has “developed a hardware appliance pre-loaded with customized OpenStack software and Arista networking tools, designed to manage racks of commodity servers as a private cloud.” The reference architectures for the commodity servers comprise Dell’s PowerEdge C Micro Servers and servers that adhere to Open Compute specifications.

We know, then, why Bechtolsheim is on the board. He’s a high-profile presence that I’m sure Open Compute was only too happy to welcome with open arms (pardon the pun), and he also has business interests that would benefit from a furtherance of Open Compute’s agenda. Not to put too fine a point on it, but there’s an Arista and a Nebula dimension to Bechtolsheim’s board role at the Open Compute Project Foundation.

OpenStack Angle for Rackspace, Dell

Interestingly, the board presence of both Bechtolsheim and Rackspace’s Mark Roenigk emphasizes OpenStack considerations, as does Dell’s involvement with Open Compute. Dell doesn’t have a board seat — at least not according to the Open Compute website — but it seems to think it can build a business for solutions based on Open Compute and OpenStack among second-tier purveyors of public-cloud services and among those pursuing large private or hybrid clouds. Both will become key strategic markets for Dell as its SMB installed base migrates applications and spending to the cloud.

Dell notably lost a chunk of server business when Facebook chose to go the DIY route, in conjunction with Taiwanese ODM Quanta Computer, for servers in its data center in Prineville, Oregon. Through its involvement in Open Compute, Dell might be trying to regain lost ground at Facebook, but I suspect that ship has sailed. Instead, Dell probably is attempting to ensure that it prevents or mitigates potential market erosion among smaller service providers and enterprise customers.

What Goldman Sachs Wants

The other intriguing presence on the Open Compute Project Foundation board is Don Duet from Goldman Sachs. Here’s what Duet had to say about his firm’s involvement with Open Compute:

“We build a lot of our own technology, but we are not at the hyperscale of Google or Facebook. We are a mid-scale company with a large global footprint. The work done by the OCP has the potential to lower the TCO [total cost of ownership] and we are extremely interested in that.”

Indeed, that perspective probably worries major server vendors more than anything else about Open Compute. Once Goldman Sachs goes this route, other financial-services firms will be inclined to follow, and nobody knows where the market attrition will end, presuming it ends at all.

Like Facebook, Goldman Sachs saw what Google was doing with its home-brewed, scale-out data-center infrastructure, and wondered how it might achieve similar business benefits. That has to be disconcerting news for major server vendors.

Welcome to the Future

The big takeaway for me, as I absorb these developments, is how the power axis of the industry is shifting. The big systems vendors used to set the agenda, promoting and pushing their products and influencing the influencers, with enterprise buyers keeping vendor growth rates on the uptick. Now, though, a combination of factors — widespread data-center virtualization, the rise of cloud computing, a persistent and protracted global economic downturn (which has placed unprecedented emphasis on IT cost containment) — is reshaping the IT universe.

Welcome to the future. Some might like it more than others, but there’s no going back.

HP Launches Its Moonshot Amid Changing Industry Dynamics

As I read about HP’s new Project Moonshot, which was covered extensively by the trade press, I wondered about the vendor’s strategic end game. Where was it going with this technology initiative, and does it have a realistic likelihood of meeting its objectives?

Those questions led me to consider how drastically the complexion of the IT industry has changed as cloud computing takes hold. Everything is in flux, advancing toward an ultimate galactic configuration that, in many respects, will be far different from what we’ve known previously.

What’s the Destination?

It seems to me that Project Moonshot, with its emphasis on a power-sipping and space-saving server architecture for web-scale processing, represents an effort by HP to re-establish a reputation for innovation and thought leadership in a burgeoning new market. But what, exactly, is the market HP has in mind?

Contrary to some of what I’ve seen written on the subject, HP doesn’t really have a serious chance of using this technology to wrest meaningful patronage from the behemoths of the cloud service-provider world. Google won’t be queuing up for these ARM-based, Calxeda-designed, HP-branded “micro servers.” Nor will Facebook or Microsoft. Amazon or Yahoo probably won’t be in the market for them, either.

The biggest of the big cloud providers are heading in a different direction, as evidenced by their aggressive patronage of open-source hardware initiatives that, when one really thinks about it, are designed to reduce their dependence on traditional vendors of server, storage, and networking hardware. They’re breaking that dependence — in some ways, they see it as taking back their data centers — for a variety of reasons, but their behavior is invariably motivated by their desire to significantly reduce operating expenditures on data-center infrastructure while freeing themselves to innovate on the fly.

When Customers Become Competitors

We’ve reached an inflection point where the largest cloud players — the Googles, the Facebooks, the Amazons, some of the major carriers who have given thought to such matters — have figured out that they can build their own hardware infrastructure, or order it off the shelf from ODMs, and get it to do everything they need it to do (they have relatively few revenue-generating applications to consider) at a lower operating cost than if they kept buying relatively feature-laden, more-expensive gear from hardware vendors.

As one might imagine, this represents a major business concern for the likes of HP, as well as for Cisco and others who’ve built a considerable business selling hardware at sustainable margins to customers in those markets. An added concern is that enterprise customers, starting with many SMBs, have begun transitioning their application workloads to cloud-service providers. The vendor problem, then, is not only that the cloud market is growing, but also that segments of the enterprise market are at risk.

Attempt to Reset Technology Agenda

The vendors recognize the problem, and they’re doing what they can to adapt to changing circumstances. If the biggest web-scale cloud providers are moving away from reliance on them, then hardware vendors must find buyers elsewhere. Scores of cloud service providers are not as big, or as specialized, or as resourceful as Google, Facebook, or Microsoft. Those companies might be considering the paths their bigger brethren have forged, with initiatives such as the Open Compute Project and OpenFlow (for computing and networking infrastructure, respectively), but perhaps they’re not entirely sold on those models or don’t think they’re quite right  for their requirements just yet.

This represents an opportunity for vendors such as HP to reset the technology agenda, at least for these sorts of customers. Hence, Project Moonshot, which, while clearly ambitious, remains a work in progress consisting of the Redstone Server Development Platform, an HP Discovery Lab (the first one is in Houston), and HP Pathfinder, a program designed to create open standards and third-party technology support for the overall effort.

I’m not sure I understand who will buy the initial batch of HP’s “extreme low-power servers” based on Calxeda’s EnergyCore ARM server-on-a-chip processors. As I said before, and as an article at Ars Technica explains, those buyers are unlikely to be the masters of the cloud universe, for both technological and business reasons. For now, buyers might not even come from the constituency of smaller cloud providers.

Friends Become Foes, Foes Become Friends (Sort Of)

But HP is positioning itself for that market and for involvement in buying decisions relating to energy-efficient system architectures. Its Project Moonshot also will embrace energy-efficient microprocessors from Intel and AMD.

Incidentally, what’s most interesting here is not that HP adopted an ARM-based chip architecture before opting for an Intel server chipset — though that does warrant notice — but that Project Moonshot has been devised not so much to compete against other server vendors as to provide a rejoinder to the open-computing model advanced by Facebook and others.

Just a short time ago, industry dynamics were relatively easy to discern. Hardware and system vendors competed against one another for the patronage of service providers and enterprises. Now, as cloud computing grows and its business model gains ascendance, hardware vendors also find themselves competing against a new threat represented by mammoth cloud service providers and their cost-saving DIY ethos.

ONF Deadly Serious About OpenFlow-Based SDNs

Yes, I’m back for further cogitation on software-defined networking (SDN) and OpenFlow.

As I wrote in my last post, relating to Cisco’s recent support for OpenFlow, I wasn’t able to attend the Open Networking Summit held last week at Stanford University.  I have, however, been reading coverage of the conference, and I am now convinced of a few fundamental SDN market realities.

Let’s start with who’s steering this particular SDN ship. The Open Networking Foundation (ONF) has been the driving force behind OpenFlow-based SDN. As I’ve written before, perhaps to the point of mind-numbing redundancy, the ONF is controlled not by networking vendors, but by the behemoths of the cloud service-provider community.

Control and the Power 

Networking vendors can be (and are) ONF members, but one needs to appreciate their place in the foundation’s hierarchy.  They are second-class citizens, and they are not setting the agenda. One more time, I will list the “founding and board members” of the ONF: Deutsche Telekom, Verizon, Google, Facebook, Microsoft, and Yahoo. Microsoft is there by dint of its status as a cloud service provider, not because it is a technology vendor.

Any doubts about where control and power reside within ONF were put to definitive rest in a recap of a third day of the Open Networking Summit provided by Dell’s Art Fewell on the NetworkWorld website:

“ . . . . Open Networking Foundation (ONF) Director Dan Pitt gave an excellent presentation that demonstrated that the ONF put a lot of thought into how they designed and structured the organization to incorporate lessons learned from older standards bodies, software communities and from the devops and open source movements. He noted that the ONF’s charter would not allow technology vendors to serve on the board of directors, but rather it should be governed by the network operators who have to live with the results. Working group chairs are assigned by the board, and a system of checks and balances has been put into place to try to prevent the problems that some standards organizations have become notorious for.”

It’s All About the Money

The message is clear. The network operators know what they want from SDN and OpenFlow, and they believe they know how to get it. What’s more, they don’t want the networking vendors compromising, subverting, or undermining the result.* (*Not that they’d do that sort of thing, of course.)

What, then, is the overriding objective these big network operators have in mind? Well, it’s to save money, as I explained in my previous post. SDN, and especially SDN enabled by an industry-standard protocol such as OpenFlow, is perceived by the major service providers as a means of substantially reducing network-related capital and, more to the point, operating expenditures. Service-provider executives, especially the mahogany-row bean counters, get excited about that sort of thing.

As Stacey Higginbotham notes, recounting an Open Networking Summit address given by a representative of Verizon:

“Stuart Elby, VP and network architecture & technology chief technologist for Verizon Digital Media Services, laid out how the promise of software-defined networking could make the company’s cost curve match its revenue by cutting down on the need for expensive gear that is costly to buy and even more costly to operate. In a conversation before his presentation, Elby explained how Verizon’s network can view every single packet on the network, but how keeping track of those packets is both a big data problem and expensive from a network management perspective.”

Verizon’s Compelling Chart

Verizon is not alone. Every one of the founding players in ONF sees the same business value in OpenFlow-enabled SDN. In the eyes of the ONF’s most powerful players, conventional network infrastructure is holding back substantial business benefits. It’s not personal, but it is business. And it is how and why major tectonic shifts in this industry come about.

Along those lines, Elby presented a visually powerful illustration that makes clear just how big an issue network-related costs are for Verizon. The chart is reproduced in Higginbotham’s article at GigaOM and in Fewell’s piece at NetworkWorld. If you haven’t seen it, I suggest you take a look. It really is worth a thousand words, but I’ll summarize as follows: Verizon’s network operating costs soon will surpass its revenues, resulting in what Verizon quaintly calls a “non-sustainable business case.” Therefore, there is an urgent need for a solution that lowers network-equipment expenditures, through utilization of off-the-shelf hardware, and enables a business case that better aligns operating costs with revenues. Verizon sees SDN and OpenFlow as the ticket to “inexpensive feature insertion for new services and revenue uplift.”

Nor is Verizon’s predicament unique: it’s safe to say the others on the ONF board are dealing with variations of the same problem and seeking similar solutions.

Google Goes Further

Google, for one, isn’t stopping at switches. As Higginbotham explored in an earlier post at GigaOM last week, Google is a fervent proponent of Quagga and the Open Source Routing Project. The search giant’s goals are practical, namely “cheaper, highly programmable routers it can use in its (core) network.” Called the Open LSR, Google’s router, as Higginbotham writes, is “an open-source router that consists of a switch made with merchant silicon and running Open vSwitch that talks to a server that has an OpenFlow-based controller and uses Quagga to generate the routing tables and forwarding information.”
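
The switch half of that design is straightforward to picture. The minimal sketch below assumes a Linux host with Open vSwitch installed, a spare interface named eth1, and a controller at an example address: it creates a bridge, attaches the port, and hands the bridge’s control plane to the external OpenFlow controller. The Quagga piece, which computes routes on the server and feeds them to the controller, is not shown.

# Minimal sketch of the "merchant-silicon switch running Open vSwitch that talks to an
# external OpenFlow controller" half of the design described above. Assumes a Linux host
# with Open vSwitch installed; the interface name and controller address are placeholders.
import subprocess

BRIDGE = "br0"
UPLINK = "eth1"                           # physical port to attach (placeholder)
CONTROLLER = "tcp:203.0.113.10:6633"      # example controller address, classic OpenFlow port

def ovs(*args):
    """Run an ovs-vsctl command, raising an error if it fails."""
    subprocess.run(("ovs-vsctl",) + args, check=True)

ovs("--may-exist", "add-br", BRIDGE)          # create the bridge if it doesn't already exist
ovs("--may-exist", "add-port", BRIDGE, UPLINK)
ovs("set-fail-mode", BRIDGE, "secure")        # don't fall back to standalone L2 learning
ovs("set-controller", BRIDGE, CONTROLLER)     # hand the control plane to the external server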

As if the theme needs further belaboring, it’s all about taking cost out of network infrastructure. Google is working with others in the service-provider community to make its low-cost routing dream a reality.

It is clear, then, that the largest service providers, and perhaps many smaller ones besides, want to gain more control over their networks and over the costs associated with them. They have constructed the Open Networking Foundation with a clear purpose in mind, they see SDN and OpenFlow as solutions to a clearly articulated business problem, and they seem determined to see it through to fruition.

What About the Enterprise?

What remains to be seen is how willing enterprises will be to go along for the SDN ride. This is a point that was hammered home by Peter Christy of the Internet Research Group, who, as reported by Fewell, told the audience at the Open Networking Summit that SDN and OpenFlow are likely to face significant challenges in cracking the enterprise market. Christy’s points were valid. His most salient observations were that there have been few OpenFlow “killer apps,” and that enterprises do not favor “reproducing the same thing with new technology,” especially if that technology is new and complicated.

He’s right. But we have to remember that the ONF is captained by service providers, and they are not leading their particular SDN charge out of altruistic concern for enterprise networks and their stewards. No, for now at least, the ONF’s conception of SDNs will be applicable to the demographic represented by the composition of the ONF board. Enterprises will have to wait, it seems, and that’s probably good news for the established order of networking vendors, especially for Cisco Systems.

Assessing Market Implications

Still, I have to wonder. Christy is correct to note that the enterprise accounts for the “biggest part of the networking market.” Nonetheless, times are changing. As more applications move to the cloud, and to cloud service providers, SDN and presumably OpenFlow are likely to increasingly affect the top and bottom lines of networking vendors.

Those companies — Cisco, Juniper, and all the rest — have to keep a wary eye on SDN developments. Even if networking vendors eventually lose a chunk of business at network service providers, they’ll still have the enterprise, presuming they can position themselves correctly and anticipate change rather than react belatedly to it.

There’s a lot at stake as this story plays out in the months and years ahead.

Thoughts on Cisco’s OpenFlow Conversion

It has not been easy finding time to write this past week. In addition to work and other demands on my time, I had been suffering from a blockage in my ear that impaired my hearing, upset my balance, and generally annoyed the hell out of me.  That problem has been resolved, and I’m back to being as normal as I get.

Ensconced at the keyboard once again, however, I found myself suffering from writer’s block after having rid myself of ear block. So, I consulted the Idea Generator on my iPhone, and it offered this troika for inspiration: “narcotic neon coat.” I gave that trio of words due consideration, then I decided to write about OpenFlow. Trust me, it’s really for the best.

More than OpenFlow

Some of you might contend that OpenFlow has received too much attention. That’s fair, I suppose, but value judgements about whether a topic has gotten too little, too much, or just enough attention are subjective, and also subject to changing circumstances.

Others might argue that software-defined networking encompasses more than OpenFlow. If that’s your claim, you’d be right. OpenFlow is just one mechanism or means of realizing a software-defined network. There are other ways to get it done, standards-based and proprietary. That said, OpenFlow has major industry backers and momentum, it’s becoming inextricably linked with SDN, and it’s been reluctant to surrender the spotlight.

No matter how you slice it, this was a big week for SDN and for OpenFlow. At Stanford University, the Open Networking Summit was in full swing, dedicated to discourse on SDNs and how they could be realized with OpenFlow.

Crowded at the Summit

I wasn’t there, but many were. More than 600 people applied to attend the summit, but only 350 could be accommodated by organizers, who now have decided to hold the next instance of the event in April rather than waiting a full year until the following October.

Notwithstanding the hype, then, OpenFlow has emerged as a networking topic for all seasons. Certainly the great and the good of the networking industry would seem to agree. Cisco Systems was well represented at the summit, and Cisco got out the message that it is a believer in SDN and plans to support OpenFlow on its Nexus switches, starting with the low-latency Nexus 3000 line. A specific timetable hasn’t been provided. (Or, if it has, I haven’t seen it.)

Cisco: SDN Next Evolution of Networking

In a Cisco blog post penned by Omar Sultan, David Meyer, a Cisco distinguished engineer (as opposed to the undistinguished ones), had the following to say about why Cisco, in supporting OpenFlow, has made what many might interpret as a counterintuitive move:

“. . . . Cisco had always embraced disruption–we don’t always get it right on the first shot, but we usually get it in the end.  Take server virtualization as an example–while we may not have been first off the line, we now have the broadest and strongest portfolio of virtualization networking technologies in the market.  Critics only saw the short-term impact to our switching revenue (less ports sold) but we saw the transformational value of virtualization. We see SDN in a similar light–as the next evolution of networking and we see OF as an excellent mechanism to drive maturation of both the technology and the underlying thinking.”

That last sentence is commendable for its clarity and transparency, and it bears further inspection. Cisco sees SDN as the next evolution in networking, and it perceives OpenFlow as “an excellent mechanism to drive maturation of both the technology and the underlying thinking.”

OpenFlow if Necessary, But Not Necessarily OpenFlow

Now I will foul the waters with my interpretation of what it signifies, beyond the obvious. By necessity, I will veer into the murky shallows of speculation and ambiguity, because, until Cisco provides further elaboration, we won’t know how Cisco ultimately will play its SDN cards. (Yes, I mixed metaphors in that last sentence. So shoot me — but only figuratively.)

My take, which might be worth the proverbial two cents, is that Cisco is all in on SDN. As for OpenFlow, I think Cisco is less enamored. I read Meyer’s and Cisco’s comments, and I get the feeling Cisco is saying that it will support OpenFlow as an SDN mechanism if necessary, but not necessarily OpenFlow as its preferred SDN mechanism. Meyer says OpenFlow can drive maturation of SDN technology and thinking, but he hasn’t said that it ultimately will be the only means, or even Cisco’s preferred means, of achieving SDN.

I know that others, including Craig Matsumoto of Light Reading, see a close conjoining of SDN and OpenFlow in Cisco’s positioning. I respectfully disagree, though I could, as always (does it even bear saying?), be wrong.

Diverged Business Interests of ONF’s Board and Networking Vendors

Matsumoto has posited that OpenFlow is looking less like a threat to Cisco and its business model. At this point, it’s still hard to say, but I think Cisco would suffer materially in the long run if OpenFlow matures as the Open Networking Foundation’s six founding board members — carriers and large cloud service providers Deutsche Telekom, Verizon, Facebook, Google, Microsoft, and Yahoo — would like it to do, and if the public cloud fulfills the bulk of its commercial promise.

Further, I think the goal of the ONF Founding Six is completely virtualized infrastructure (compute, storage, networking) run on wall-to-wall, bare-bones hardware, overseen by a management layer of software and driven by applications and services. This would bring lower capital expenditures for gear and reduced operational expenditures for network management.

I realize there’s been a search for OpenFlow’s killer app — and that search should continue, obviously — but the founders of ONF seem to be focused primarily on cost savings. For them, it’s not about doing something strikingly new or revolutionary, but about getting more from less, and for less. In that context, OpenFlow makes sense — at least for them — as it delivers quantifiable business benefits that they have not been able to derive from current network infrastructure.

In the Enterprise, A Different Story

Of course, what the ONF founders want might not be what enterprise IT buyers need. There’s an opening here for Cisco, for HP Networking, for Juniper, for Arista Networks, and for all the other networking vendors to define SDN in ways that are more amenable to those enterprise buyers across a wide range of horizontal and vertical markets.

If all else fails, though, and OpenFlow becomes an SDN juggernaut, there’s always recourse to “embrace and extend,” particularly at the management layer. It’s not as though vendors haven’t cracked open that chestnut before.

OVA Aims to Commoditize VMware’s Advantage

Although it’s no threat to VMware yet, the growth of the Open Virtualization Alliance (OVA) has been impressive. Formally announced in May, the OVA has grown from its original seven founding members — its four Governing Members (Red Hat, Intel, HP, and IBM), plus BMC, Eucalyptus Systems, and Novell (SUSE) — adding 65 new members in June and encompassing more than 200 members as of yesterday.

The overriding objective of the OVA is to popularize the open-source Kernel-based Virtual Machine (KVM) so that it can become a viable alternative to proprietary server-virtualization offerings, namely market leader VMware.  To achieve that goal, OVA is counting on broad-based industry support from large and small players alike as it works to accelerate the development of an ecosystem of KVM-based third-party solutions. In conjunction with that effort, OVA also is encouraging interoperability, promoting best practices, spotlighting customer successes, and generally raising awareness of KVM through marketing events and initiatives.
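For anyone who hasn’t poked at KVM, here is a minimal sketch of how it is commonly driven programmatically, using libvirt’s Python bindings to list the guests on a local KVM host. To be clear, libvirt is just one widely used management layer for KVM, not something the OVA mandates, and the snippet assumes the libvirt-python package and a local qemu/KVM hypervisor.

```python
# Minimal sketch: listing KVM guests through libvirt's Python bindings.
# Assumes the libvirt-python package and a local qemu/KVM host; libvirt is
# one common way to manage KVM programmatically, not something OVA mandates.
import libvirt

conn = libvirt.openReadOnly("qemu:///system")  # read-only connection to the local hypervisor
if conn is None:
    raise SystemExit("Failed to connect to the local hypervisor")

for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "shut off"
    print(f"{dom.name()}: {state}")

conn.close()
```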

Give the People What They Want 

While VMware isn’t breaking out in a cold sweat or losing sleep over OVA, it’s clear that many members of OVA are anxious about the potential stranglehold VMware could gain in cloud infrastructure if its virtualization hegemony goes unchecked. In that regard, it’s notable that certain VMware partners — IBM and HP among them — are at the forefront of OVA.

If customers are demanding VMware, as they clearly have been doing, then that’s what IBM and HP will give them. It’s good business practice for service-based solution providers to give customers what they want. But circumstances can change — customers might be persuaded to accept alternatives to VMware — and IBM and HP probably wouldn’t mind if they did.

Certainly VMware recognizes that its partners also can be its competitors. There’s even a well-worn industry phrase for it: coopetition. At the same time, though, IBM and HP would welcome customer demand for an open-source alternative to VMware, which explains their avidity for and evangelization of KVM.

Client-Server Reprise?

An early lead in a strategic market can result in long-term industry dominance. That’s what VMware wants to achieve, and it’s what nearly everybody else — excluding VMware’s majority shareholder, EMC — would like to prevent. Industry giants IBM and HP have seen this script play out in the client-server era with Microsoft’s Windows, and they’re not keen to relive the experience in cloud computing.

VMware’s customer appeal and market differentiation derive from its dominance in server virtualization, a foundation that allows it to extend up and out into areas that could give it a stranglehold on cloud computing’s most valuable technologies. Nearly every vendor with a stake in the data center is keeping a wary eye on VMware. Some, such as Microsoft and Oracle, are outright competitors seeking to cut into VMware’s market lead, while others — such as HP, IBM, and Cisco — are partnering pragmatically with VMware while pursuing strategic alternatives and contingency plans.

Commoditizing Competitor’s Edge

In promoting an open-source alternative as a means of undercutting a competitor’s competitive advantage, IBM and its OVA cohorts are taking a page from a well-worn strategic handbook. This is what Google unleashed against Apple in mobile operating systems with Android, and what Facebook is trying to achieve against Google in cloud data centers with its Open Compute Project. For OVA’s charter members, it’s all about attempting to commoditize a market leader’s competitive differentiation to level the playing field — and perhaps to eventually tilt it to your advantage.

IBM and HP have integration prowess and professional-services capabilities that VMware lacks. If they can nullify virtualization as a strategic asset by commoditizing it, they relegate VMware to a lesser role. However, if they fail and VMware’s differentiation is maintained and extended further, they risk losing a great deal of long-term account control in a burgeoning market.

KVM Rather than XenServer

Some might wonder why the open-source server virtualization alternative became KVM and not, say, XenServer, whose custodian, XenSource, is owned by Citrix. One of the reasons could be Citrix’s relatively warm embrace by Microsoft. When Gartner released its Magic Quadrant for x86 Server Virtualization Infrastructure this summer, it questioned whether Citrix’s ties to Microsoft could result in XenServer being compromised. Microsoft, of course, has its own server-virtualization entry in Hyper-V.

In the end, the OVA gang put down its money on KVM rather than XenServer, seeing the former as a less-complicated proposition than the latter. That appears to have been the right move.

Clearly OVA has experienced striking growth in just a few months, but it has a long way to go before it meets the strategic mandate envisioned by its founders.

OpenFlow Crystal Ball Still Foggy

OpenFlow originated in academia, from research work conducted at Stanford University and the University of California, Berkeley. Academics remain intensively involved in the development of OpenFlow, but the protocol, a manifestation of software-defined networking (SDN), appears destined for potentially widespread commercial deployment, first at major data centers and cloud service providers, and perhaps later at enterprises of various shapes and sizes.

Encompassing a set of APIs, OpenFlow enables programmability and control of flow tables in routers and switches. Today’s switches combine network-control functions (the control plane) with packet-processing and forwarding functions (the data plane). OpenFlow aims to separate the two, abstracting flow manipulation and control from the underlying switch hardware, thus making it possible to define flows and determine what paths they take through a network.
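To make that control-plane/data-plane split a bit more concrete, here is a minimal Python sketch of the flow-table idea: a controller installs match/action rules, and the data plane forwards packets according to whichever rule matches at the highest priority. The class and field names are mine, invented purely for illustration; this is not any vendor’s or controller’s actual API.

```python
# Illustrative only: the flow-table abstraction that OpenFlow exposes.
from dataclasses import dataclass

@dataclass
class FlowEntry:
    match: dict       # e.g. {"dst_ip": "10.0.0.5", "tcp_dst": 80}
    actions: list     # e.g. ["output:2"] to forward out switch port 2
    priority: int = 0

class FlowTable:
    def __init__(self):
        self.entries = []

    def install(self, entry):
        """Conceptually, what happens when a controller pushes a flow rule."""
        self.entries.append(entry)
        self.entries.sort(key=lambda e: e.priority, reverse=True)

    def lookup(self, packet):
        """Data-plane lookup: return the actions of the highest-priority match."""
        for entry in self.entries:
            if all(packet.get(k) == v for k, v in entry.match.items()):
                return entry.actions
        return ["drop"]  # table-miss handling is itself configurable in OpenFlow

# Steer web traffic bound for 10.0.0.5 out port 2; everything else is dropped.
table = FlowTable()
table.install(FlowEntry(match={"dst_ip": "10.0.0.5", "tcp_dst": 80},
                        actions=["output:2"], priority=10))
print(table.lookup({"src_ip": "10.0.0.9", "dst_ip": "10.0.0.5", "tcp_dst": 80}))
```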

From Academic Origins to Commercial Data Centers

Getting back to the academics, they wanted to use OpenFlow as a means of making networks more amenable to experimentation and innovation. The law of unintended consequences intervened, however, and OpenFlow is spreading in many different directions, spawning a growing number of applications.

To see where (or, at least, by whom) OpenFlow will be applied first commercially, consider the composition of the board of directors of the Open Networking Foundation (ONF), which bills itself as “a nonprofit organization dedicated to promoting a new approach to networking called Software-Defined Networking (SDN). SDN allows owners and operators of networks to control and manage their networks to best serve their users’ needs. ONF’s first priority is to develop and use the OpenFlow protocol. Through simplified hardware and network management, OpenFlow seeks to increase network functionality while lowering the cost associated with operating networks.”

The six board members at ONF are Deutsche Telekom, Facebook, Google, Microsoft, Verizon, and Yahoo. As I’ve noted previously, what they have in common are large, heavily virtualized data centers. They’re all presumably looking for ways to run them more efficiently, with the network having become one of their biggest inhibitors to data-center scaling. While servers and storage have been virtualized and have become more dynamic and programmable, networks lag behind, not keeping pace with new requirements but still accounting for a large share of capital and operational expenditures.

Problem Shrieking for a Solution

That, my friends, is a problem shrieking for a solution. While academia hatched OpenFlow, there’s nothing academic about the data-center pain that the six board members of the ONF are feeling. They need their network infrastructure to become more dynamic, flexible, and functional, and they also want to lower their network operating costs.

The economic and operational impetus for change is considerable. The networking industry, at least the portion of it that wants to serve the demographic profile represented by the board members of ONF, must sit up and take notice. And if you look at the growing vendor membership of the ONF, the networking industry is paying attention.

One of many questions I have relates to how badly Cisco and, to a lesser extent, Juniper Networks — proponents of proprietary approaches to some of the problems SDN and OpenFlow are intended to address — might be affected by an OpenFlow wave.

Two Schools of Thought

There are at least two schools of thought on the topic. One school, inhabited by more than a few market analysts, says that OpenFlow will hasten and intensify the commoditization of networking gear, as a growing percentage of switches will be made to serve as simple packet-forwarding boxes. Another learned quarter contends that, just as the ONF charter says, the focus and the impact will be primarily on network-related operating costs, and not so much on capital costs. In other words, OpenFlow — even if it is wildly popular — leaves plenty of room for continued switch differentiation, and thus for margin erosion to be at least somewhat mitigated.

The long-term implications of OpenFlow are difficult to predict. Prophecy is made more daunting by OpenFlow hype and disinformation, disseminated by the protocol’s proponents and detractors, respectively.  It does have the feeling of something big, though, and I’ve been spending increasing amounts of time trying to get my limited gray matter around it.

Look for further zigzagging peregrinations on my journey toward OpenFlow understanding.

ONF Board Members Call OpenFlow Tune

The concept of software-defined networking (SDN) has generated considerable interest during the last several months.  Although SDNs can be realized in more than one way, the OpenFlow protocol seems to have drawn a critical mass of prospective customers (mainly cloud-service providers with vast data centers) and solicitous vendors.

If you aren’t up to speed with the basics of software-defined networking and OpenFlow, I suggest you visit the Open Networking Foundation (ONF) and OpenFlow websites to familiarize yourself with the underlying ideas. Others have written some excellent articles on the technology, its perceived value, and its potential implications.

In a recent piece he wrote originally for GigaOm, Kyle Forster of Big Switch Networks offers this concise definition:

Concisely Defined

“At its most basic level, OpenFlow is a protocol for server software (a “controller”) to send instructions to OpenFlow-enabled switches, where these instructions give direct control over how those switches forward traffic through the network.

I think of OpenFlow like an x86 instruction set for the network – it’s low-level, but it’s very powerful. Continuing that analogy, if you read the x86 instruction set for the first time, you might walk away thinking it could be useful if you need to build a fancy calculator, but using it to build Linux, Apache, Microsoft Word or World of Warcraft wouldn’t exactly be obvious. Ditto for OpenFlow. It isn’t the protocol that is interesting by itself, but rather all of the layers of software that are starting to emerge on top of it, similar to the emergence of operating systems, development environments, middleware and applications on top of x86.”
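Forster’s point about the layers emerging on top is, to me, the most interesting part. In that spirit, here is a toy Python sketch of one such layer: a few lines that compile a high-level intent (send web traffic for these servers out a given port) into the kind of low-level match/action rules a controller could push down to switches. The rule format and function name are hypothetical, invented for this example rather than drawn from any real controller platform.

```python
# Illustrative only: a toy "layer on top" of a low-level flow-rule interface.
def compile_web_policy(web_servers, out_port):
    """Translate a high-level intent into low-level match/action rules."""
    rules = [{"match": {"dst_ip": ip, "tcp_dst": 80},
              "actions": [f"output:{out_port}"],
              "priority": 100}
             for ip in web_servers]
    # Explicit table-miss rule: anything unmatched gets punted to the controller.
    rules.append({"match": {}, "actions": ["controller"], "priority": 0})
    return rules

for rule in compile_web_policy(["10.0.0.5", "10.0.0.6"], out_port=2):
    print(rule)
```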

Increased Network Functionality, Lower Network Operating Costs

The Open Networking Foundation’s charter summarizes its objectives and the value proposition that advocates of SDN and OpenFlow believe they can deliver:

 “The Open Networking Foundation is a nonprofit organization dedicated to promoting a new approach to networking called Software-Defined Networking (SDN). SDN allows owners and operators of networks to control and manage their networks to best serve their users’ needs. ONF’s first priority is to develop and use the OpenFlow protocol. Through simplified hardware and network management, OpenFlow seeks to increase network functionality while lowering the cost associated with operating networks.”

That last part is the key to understanding the composition of ONF’s board of directors, which includes Deutsche Telekom, Facebook, Google, Microsoft, Verizon, and Yahoo. All of these companies are major cloud-service providers with multiple, sizable data centers. (Yes, Microsoft also is a cloud-technology purveyor, but what it has in common with the other board members is its status as a cloud-service provider that owns and runs data centers.)

Underneath the board of directors are member companies. Most of these are vendors seeking to serve the needs of the ONF board members and similar cloud-service providers that share their business objective: boosting network functionality while reducing the costs associated with network operations.

Who’s Who of Networking

Among the vendor members are a veritable who’s who of the networking industry: Cisco, HP, Juniper, Brocade, Dell/Force10, IBM, Huawei, Nokia Siemens Networks, Riverbed, Extreme, and others. Also members, not surprisingly, are virtualization vendors such as VMware and Citrix, as well as the aforementioned Microsoft. There’s a smattering of SDN/OpenFlow startups, too, such as Big Switch Networks and Nicira Networks.

Of course, membership does not necessarily entail avid participation. Some vendors, including Cisco, likely would not be thrilled at any near-term prospect of OpenFlow’s widespread market adoption. Cisco would be pleased to see the networking status quo persist for as long as possible, and its involvement in ONF probably is more that of vigilant observer than of fervent proponent. In fact, many vendors are taking a wait-and-see approach to OpenFlow. Some members, including Force10, are bearish and have suggested that the protocol is a long way from delivering the maturity and scalability that would satisfy enterprise customers.

Vendors Not In Charge

Still, the board members are steering the ONF ship, not the vendors. Regardless of when OpenFlow or something like it comes of age, the rise of software-defined networking seems inevitable. Servers and storage gear have been virtualized and have become more application-driven, but networks haven’t changed much in the last several years. They’re faster, yes, but they’re still provisioned in the traditional manner, configured rather than programmed. That takes time, consumes resources, and costs money.
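To illustrate the difference between configured and programmed, here is a small, hypothetical Python sketch. Rather than logging into every switch and typing box-by-box commands, an operator expresses intent once and software fans it out across the devices. The device names, tenants, and the push_rule() helper are all invented for the example.

```python
# Illustrative only: "programmed rather than configured" network provisioning.
switches = ["tor-1", "tor-2", "tor-3"]
tenants = {"tenant-a": 100, "tenant-b": 200}   # tenant -> network segment id

def push_rule(switch, tenant, segment):
    # In a real SDN deployment this would be an API call to a controller;
    # here we only show the shape of the automation.
    print(f"{switch}: map {tenant} traffic to segment {segment}")

for switch in switches:
    for tenant, segment in tenants.items():
        push_rule(switch, tenant, segment)
```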

Major cloud-service providers, such as those on the ONF board, want network infrastructure to become more elastic, flexible, and dynamic. Vendors will have to respond accordingly, whether with OpenFlow or with some other approach that delivers similar operational outcomes and business benefits.

I’ll be following these developments closely, watching to see how the business concerns of the cloud providers and the business interests of the networking-vendor community ultimately reconcile.