Category Archives: Yahoo

Networking Vendors Tilt at ONF Windmill

Closely following the latest developments in software-defined networking (SDN), I am reminded of what somebody who shall remain nameless said not long ago about why he chose to leave Cisco to pursue his career elsewhere.

He basically said that Cisco, as a huge networking company, is having trouble reconciling itself to the reality that the growing force known as cloud computing is not “network centric.” His words stuck with me, and I’ve been giving them a lot of thought since then.

All Computing Now

His opinion was validated earlier this week at a NetEvents symposium in Garmisch, Germany, where Dan Pitt, executive director of the Open Networking Foundation (ONF), made some statements about software-defined networking (SDN) that, while entirely consistent with what we’ve heard before from that community’s most fervent proponents, also seemed surprisingly provocative. Quoting Pitt, from a blog post published at ZDNet UK:

“In future, networking will become just an integral part of computing, using same tools as the rest of computing. Enterprises will get out of managing plumbing, operators will become software companies, IT will add more business value, and there will be more network startups from Generation Y.”

Pitt was asked what impact this architectural shift would have on network performance. He said that a 30,000-user campus could be supported by a four-year-old Dell PC.

Redefining Architecture, Redefining Value

Naturally, networking vendors can’t be elated at that prospect. Under the SDN master plan, the intelligence (and hence the value) of switching and routing gets moved to a server, or to a cluster of servers, on the edge of the network. Whether this is done with OpenFlow, Open vSwitch, or some other mechanism between the control plane and the switch doesn’t really matter in the big picture. What matters is that networking architectures will be redefined, and networking value will migrate into (and be subsumed within) a computing paradigm. Not to put too fine a point on it, but networking value will be inherent in applications and control-plane software, not in the dumb, physical hardware that will be relegated to shunting packets on the network.

At that same NetEvents symposium in Germany, a Computerworld UK story quoted Pitt saying something very similar to, though perhaps less eloquent than, what Berkeley professor and Nicira co-founder Scott Shenker said about network-protocol complexity.

Said Pitt:

“There are lots of networking protocols which make it very labour intensive to manage a network. There are too many “band aids” being used to keep a network working, and these band aids can actually cause many of the problems elsewhere in the network.”

Politics of ONF

I’ve written previously about the political dynamics of the Open Networking Foundation (ONF).

Just to recap, if you look at the composition of the board of directors at the ONF, you’ll know all you need to know about who wields power in that organization. The ONF’s founding board members are Google, Yahoo, Facebook, Verizon, Deutsche Telekom, and Microsoft. Make no mistake about Microsoft’s presence. It is there as a cloud service provider, not as a vendor of technology products.

The ONF is run by large cloud service providers, and it’s run for large cloud service providers, though it’s conceivable that much of what gets done in the ONF will have applicability and value to cloud shops of smaller size and stature. I suppose it’s also conceivable that some of the ONF’s works will prove valuable at some point to large enterprises, though it should be noted that the enterprise isn’t a constituency that is top of mind for the ONF.

Vendors Not Driving

One thing is certain: Networking vendors are not steering the ONF ship. I’ve written that before, and I’ll no doubt write it again. In fact, I’ll quote Dan Pitt to that effect right now:

“No vendors are allowed on the (ONF) board. Only the board can found a working group, approve standards, and appoint chairs of working groups. Vendors can be on the groups but not chair them. So users are in the driving seat.”

And those users — really the largest of the cloud service providers — aren’t about to move over. In fact, the power elite that governs the ONF has a definite vision in mind for the future of networking, a future that — as we’ve already seen — will make networking subservient to applications, programmability, and computing.

Transition on the Horizon

As the SDN vision moves downstream from the largest service providers, such as those who run the show at the ONF, to smaller service providers and then to large enterprises, networking companies will have to transform themselves into software vendors — with software business models.

Can they do that? Some of them probably can, but others — including, probably, the largest of all — will have a difficult time making the transition, prisoners of their own past success and circumscribed by the classic “innovator’s dilemma.” Cisco, a networking colossus, has built a thriving franchise and dominant market position, replete with a full-fledged business model and an enormous sales machine. It will be hard to move away from a formula that’s filled the coffers all these years.

Still, move they must, though timing, as it often does, will count for a lot. The SDN wave won’t inundate the marketplace overnight, but, regardless of the underlying protocols and mechanisms that might run alongside or supersede OpenFlow, SDN seems set to eventually win adherents in CFO and CIO offices beyond the realm of the companies represented on the ONF’s board of directors. It will take some time, probably many years, but it’s a movement that will gain followers and momentum as it delivers quantifiable business benefits to those that adopt it.

Enterprise As Last Redoubt

The enterprise will be the last redoubt of conventional networking infrastructure, and it’s not difficult to envision Cisco doing everything in its power to keep it that way for as long as possible. Expect networking’s old guard to strongly resist the siren song of SDN. That’s only natural, even if — in the very long haul — it seems a vain pursuit and, ultimately, a losing battle.

At this point, I just want to emphasize that SDN need not lead to the commoditization of networking. Granted, it might lead to the commoditization of certain types of networking hardware, but there’s still value, much of it proprietary, that software-centric networking vendors can bring to the SDN table. But, as I said earlier, for many vendors that will mean a shift in business model, product focus, and go-to-market strategy.

In that Computerworld piece, some wonder whether networking vendors could prevent the rise of software-defined networking by refusing to play along.

Not Going Away

Again, I can easily imagine the vendors slowing and impeding the ascent of SDN within enterprises, but there’s absolutely no way for them to forestall its adoption at the major service providers represented by the ONF board members. Those players have the capital and the operational resources, to say nothing of the business motivation, to roll their own switches, perhaps with the help of ODMs, and to program their own applications and networks. That train has left the station, and it can’t be recalled by even the largest of networking vendors, who really have no leverage or say in the matter. They can play along and try to find a niche where they can continue to add value, or they can dig in their heels and get circumvented entirely. It’s their choice.

Either way, the tension between the ONF and the traditional networking vendors is palpable. In the IETF, the vendors are casting glances and sometimes aspersions at the ONF, trying to figure out how they can mount a counterattack. The battle will be joined, but the ONF rules its own roost — and it isn’t going away.

HP Launches Its Moonshot Amid Changing Industry Dynamics

As I read about HP’s new Project Moonshot, which was covered extensively by the trade press, I wondered about the vendor’s strategic end game. Where was it going with this technology initiative, and does it have a realistic likelihood of meeting its objectives?

Those questions led me to consider how drastically the complexion of the IT industry has changed as cloud computing takes hold. Everything is in flux, advancing toward an ultimate galactic configuration that, in many respects, will be far different from what we’ve known previously.

What’s the Destination?

It seems to me that Project Moonshot, with its emphasis on a power-sipping and space-saving server architecture for web-scale processing, represents an effort by HP to re-establish a reputation for innovation and thought leadership in a burgeoning new market. But what, exactly, is the market HP has in mind?

Contrary to some of what I’ve seen written on the subject, HP doesn’t really have a serious chance of using this technology to wrest meaningful patronage from the behemoths of the cloud service-provider world. Google won’t be queuing up for these ARM-based, Calxeda-designed, HP-branded “micro servers.” Nor will Facebook or Microsoft. Amazon or Yahoo probably won’t be in the market for them, either.

The biggest of the big cloud providers are heading in a different direction, as evidenced by their aggressive patronage of open-source hardware initiatives that, when one really thinks about it, are designed to reduce their dependence on traditional vendors of server, storage, and networking hardware. They’re breaking that dependence — in some ways, they see it as taking back their data centers — for a variety of reasons, but their behavior is invariably motivated by their desire to significantly reduce operating expenditures on data-center infrastructure while freeing themselves to innovate on the fly.

When Customers Become Competitors

We’ve reached an inflection point where the largest cloud players — the Googles, the Facebooks, the Amazons, some of the major carriers who have given thought to such matters — have figured out that they can build their own hardware infrastructure, or order it off the shelf from ODMs, and get it to do everything they need it to do (they have relatively few revenue-generating applications to consider) at a lower operating cost than if they kept buying relatively feature-laden, more-expensive gear from hardware vendors.

As one might imagine, this represents a major business concern for the likes of HP, as well as for Cisco and others who’ve built a considerable business selling hardware at sustainable margins to customers in those markets. An added concern is that enterprise customers, starting with many SMBs, have begun transitioning their application workloads to cloud-service providers. The vendor problem, then, is not only that the cloud market is growing, but also that segments of the enterprise market are at risk.

Attempt to Reset Technology Agenda

The vendors recognize the problem, and they’re doing what they can to adapt to changing circumstances. If the biggest web-scale cloud providers are moving away from reliance on them, then hardware vendors must find buyers elsewhere. Scores of cloud service providers are not as big, or as specialized, or as resourceful as Google, Facebook, or Microsoft. Those companies might be considering the paths their bigger brethren have forged, with initiatives such as the Open Compute Project and OpenFlow (for computing and networking infrastructure, respectively), but perhaps they’re not entirely sold on those models or don’t think they’re quite right  for their requirements just yet.

This represents an opportunity for vendors such as HP to reset the technology agenda, at least for these sorts of customers. Hence, Project Moonshot, which, while clearly ambitious, remains a work in progress consisting of the Redstone Server Development Platform, an HP Discovery Lab (the first one is in Houston), and HP Pathfinder, a program designed to create open standards and third-party technology support for the overall effort.

I’m not sure I understand who will buy the initial batch of HP’s “extreme low-power servers” based on Calxeda’s EnergyCore ARM server-on-a-chip processors. As I said before, and as an article at Ars Technica explains, those buyers are unlikely to be the masters of the cloud universe, for both technological and business reasons. For now, buyers might not even come from the constituency of smaller cloud providers.

Friends Become Foes, Foes Become Friends (Sort Of)

But HP is positioning itself for that market, and to be involved in buying decisions relating to energy-efficient system architectures. Its Project Moonshot also will embrace energy-efficient microprocessors from Intel and AMD.

Incidentally, what’s most interesting here is not that HP adopted an ARM-based chip architecture before opting for an Intel server chipset — though that does warrant notice — but that Project Moonshot has been devised not so much to compete against other server vendors as to provide a rejoinder to the open-computing model advanced by Facebook and others.

Just a short time ago, industry dynamics were relatively easy to discern. Hardware and system vendors competed against one another for the patronage of service providers and enterprises. Now, as cloud computing grows and its business model gains ascendance, hardware vendors also find themselves competing against a new threat represented by mammoth cloud service providers and their cost-saving DIY ethos.

ONF Deadly Serious About OpenFlow-Based SDNs

Yes, I’m back for further cogitation on software-defined networking (SDN) and OpenFlow.

As I wrote in my last post, relating to Cisco’s recent support for OpenFlow, I wasn’t able to attend the Open Networking Summit held last week at Stanford University.  I have, however, been reading coverage of the conference, and I am now convinced of a few fundamental SDN market realities.

Let’s start with who’s steering this particular SDN ship. The Open Networking Foundation (ONF) has been the driving force behind OpenFlow-based SDN. As I’ve written before, perhaps to the point of mind-numbing redundancy, the ONF is controlled not by networking vendors, but by the behemoths of the cloud service-provider community.

Control and the Power 

Networking vendors can be (and are) ONF members, but one needs to appreciate their place in the foundation’s hierarchy.  They are second-class citizens, and they are not setting the agenda. One more time, I will list the “founding and board members” of the ONF: Deutsche Telekom, Verizon, Google, Facebook, Microsoft, and Yahoo. Microsoft is there by dint of its status as a cloud service provider, not because it is a technology vendor.

Any doubts about where control and power reside within the ONF were put to definitive rest in a recap of the third day of the Open Networking Summit provided by Dell’s Art Fewell on the NetworkWorld website:

“ . . . . Open Networking Foundation (ONF) Director Dan Pitt gave an excellent presentation that demonstrated that the ONF put a lot of thought into how they designed and structured the organization to incorporate lessons learned from older standards bodies, software communities and from the devops and open source movements. He noted that the ONF’s charter would not allow technology vendors to serve on the board of directors, but rather it should be governed by the network operators who have to live with the results. Working group chairs are assigned by the board, and a system of checks and balances has been put into place to try to prevent the problems that some standards organizations have become notorious for.”

It’s All About the Money

The message is clear. The network operators know what they want from SDN and OpenFlow, and they believe they know how to get it. What’s more, they don’t want the networking vendors compromising, subverting, or undermining the result.* (*Not that they’d do that sort of thing, of course.)

What, then, is the overriding objective these big network operators have in mind? Well, it’s to save money, as I explained in my previous post. SDN, and especially SDN enabled by an industry-standard protocol such as OpenFlow, is perceived by the major service providers as a means of substantially reducing network-related capital and, more to the point, operating expenditures. Service-provider executives, especially the mahogany-row bean counters, get excited about that sort of thing.

As Stacey Higginbotham notes, recounting an Open Networking Summit address given by a representative of Verizon:

“Stuart Elby, VP and network architecture & technology chief technologist for Verizon Digital Media Services, laid out how the promise of software-defined networking could make the company’s cost curve match its revenue by cutting down on the need for expensive gear that is costly to buy and even more costly to operate. In a conversation before his presentation, Elby explained how Verizon’s network can view every single packet on the network, but how keeping track of those packets is both a big data problem and expensive from a network management perspective.”

Verizon’s Compelling Chart

Verizon is not alone. Every one of the founding players in ONF sees the same business value in OpenFlow-enabled SDN. In the eyes of the ONF’s most powerful players, conventional network infrastructure is holding back substantial business benefits. It’s not personal, but it is business. And it is how and why major tectonic shifts in this industry come about.

Along those lines, Elby presented a visually powerful illustration that makes clear just how big an issue network-related costs are for Verizon. The chart is reproduced in Higginbotham’s article at GigaOM and in Fewell’s piece at NetworkWorld. If you haven’t seen it, I suggest you take a look. It really is worth a thousand words, but I’ll summarize as follows: Verizon’s network operating costs soon will surpass its revenues, resulting in what Verizon quaintly calls a “non-sustainable business case.” Therefore, there is an urgent need for a solution that lowers network-equipment expenditures, through utilization of off-the-shelf hardware, and enables a business case that better aligns operating costs with revenues. Verizon sees SDN and OpenFlow as the ticket to “inexpensive feature insertion for new services and revenue uplift.”

It’s safe to say the others on the ONF board are dealing with variations of the same problem and are seeking similar solutions.

Google Goes Further

Google, for one, isn’t stopping at switches. As Higginbotham explored in an earlier post at GigaOM last week, Google is a fervent proponent of Quagga and the Open Sourcing Routing Project. The search giant’s goals are practical, namely  “cheaper, highly programmable routers it can use in its (core) network.” Called the Open LSR, Google’s router, as Higginbotham writes, is “an open-source router that consists of a switch made with merchant silicon and running Open vSwitch that talks to a server that has an OpenFlow-based controller and uses Quagga to generate the routing tables and forwarding information.”
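To make that division of labor a little more concrete, here is a minimal, purely illustrative Python sketch of the translation step such a design implies: routes computed by a routing daemon (Quagga, in this description) are rendered as match/action flow entries that an OpenFlow-based controller could install on the merchant-silicon switch. The data shapes and function names below are my own assumptions for illustration; they are not actual Quagga, Open vSwitch, or Google interfaces.

```python
# Illustrative only: how routes from a routing daemon (e.g., Quagga) might be
# rendered as OpenFlow-style flow entries for a controller to install on a switch.
# The data shapes and function names are assumptions, not real Quagga/OVS APIs.

from dataclasses import dataclass
from typing import List


@dataclass
class Route:
    prefix: str         # e.g., "10.1.0.0/16", as produced by the routing daemon
    next_hop_port: int  # switch port that reaches the next hop


@dataclass
class FlowEntry:
    match: dict           # header fields the switch matches on
    actions: List[dict]   # what to do with matching packets
    priority: int


def routes_to_flow_entries(routes: List[Route]) -> List[FlowEntry]:
    """Translate a routing table into flow entries a controller could push."""
    entries = []
    for route in routes:
        entries.append(FlowEntry(
            match={"eth_type": 0x0800, "ipv4_dst": route.prefix},  # IPv4 traffic to prefix
            actions=[{"output": route.next_hop_port}],             # forward toward next hop
            priority=int(route.prefix.split("/")[1]),              # longer prefixes win
        ))
    return entries


if __name__ == "__main__":
    rib = [Route("10.1.0.0/16", 3), Route("0.0.0.0/0", 1)]
    for entry in routes_to_flow_entries(rib):
        print(entry)
```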

As if the theme needs further belaboring, it’s all about taking cost out of network infrastructure. Google is working with others in the service-provider community to make its low-cost routing dream a reality.

It is clear, then, that the largest service providers, and perhaps many smaller ones besides, want to gain more control over their networks and over the costs associated with them. They have constructed the Open Networking Foundation with a clear purpose in mind, they see SDN and OpenFlow as solutions to a clearly articulated business problem, and they seem determined to see it through to fruition.

What About the Enterprise?

What remains to be seen is how willing enterprises will be to go along for the SDN ride. This is a point that was hammered home by Peter Cristy of the Internet Research Group, who, as reported by Fewell, told the audience at the Open Networking Summit that SDN and OpenFlow are likely to face significant challenges in cracking the enterprise market. Cristy’s points were valid. His most salient observations were that there have been few OpenFlow “killer apps,” and that enterprises do not favor “reproducing the same thing with new technology,” especially if that technology is new and complicated.

He’s right. But we have to remember that the ONF is captained by service providers, and they are not leading their particular SDN charge because they are motivated by altruistic concern for enterprise networks and their stewards. No, for now at least, the ONF’s conception of SDNs will be applicable to the demographic represented by the composition of the ONF board. Enterprises will have to wait, it seems, and that’s probably good news for the established order of networking vendors, especially for Cisco Systems.

Assessing Market Implications

Still, I have to wonder. Cristy is correct to note that the enterprise accounts for the “biggest part of the networking market.” Nonetheless, times are changing. As more applications move to the cloud, and to cloud service providers, SDN and presumably OpenFlow are likely to increasingly affect the top and bottom lines of networking vendors.

Those companies — Cisco, Juniper, and all the rest — have to keep a wary eye on SDN developments. Even if networking vendors eventually lose a chunk of business at network service providers, they’ll still have the enterprise, presuming they can position themselves correctly and anticipate change rather than react belatedly to it.

There’s a lot at stake as this story plays out in the months and years ahead.

Nicira Downplays OpenFlow on Road to Network Virtualization

While recent discussions of software-defined networking (SDN) and network virtualization have focused nearly exclusively on the OpenFlow protocol, various parties are making the point that OpenFlow is just one facet of a bigger story.

One of those parties is Nicira Networks, which was treated to favorable coverage in the New York Times earlier today. In the article, the words “software-defined networking” and “OpenFlow” are conspicuous by their absence. Sure, the big-picture concept of software-defined networking hovers over proceedings, but Nicira takes pains to position itself as a purveyor of “network virtualization,” which is a neater, simpler concept for the broader technology market to grasp.

VMware of Networking

Indeed, leveraging the idea of network virtualization, Nicira positions itself as the VMware of networking, contending that it will resolve the problem of inflexible, inefficient, complex, and costly data-center networks with a network hypervisor that decouples network services from the underlying hardware. Nicira’s goal, then, is to be the first vendor to bring network virtualization up to speed with server and storage virtualization.  

GigaOM’s Stacey Higginbotham takes issue with the New York Times article and with Nicira’s claims relating to its putatively peerless place in the networking firmament. Writes Higginbotham: 

“The article . . . .  does a disservice to the companies pursing network virtualization by conflating the idea of flexible and programmable networks with Nicira becoming “to networking something like what VMWare was to computer servers.” This is a nice trick for the lay audience, but unlike server virtualization, which VMware did pioneer and then control, network virtualization currently has a variety of vendors pushing solutions that range from being tied to the hardware layer (hello, Juniper and Xsigo) to the software (Embrane and Nicira). In addition to there being multiple companies pushing their own standards, there’s an open source effort to set the building blocks and standards in place to create virtualized networks.”

The ONF Factor

The open-source effort in question is the Open Networking Foundation (ONF), which is promulgating OpenFlow as the protocol by which software-defined networking will be attained. I have written about OpenFlow and the ONF previously, and will have more to say on both shortly. Recently, I also recounted HP’s position on OpenFlow.

Nicira says nothing about OpenFlow, which suggests the company is playing down the protocol or might be going in a different direction to realize its vision of network virtualization. As has been noted, there’s more than one road to software-defined networking, even though OpenFlow is a path that has been well traveled thus far by industry notables, including the six major service providers that are the ONF’s founding board members (Google, Deutsche Telekom, Verizon, Microsoft, Facebook, and Yahoo).

Then again, you will find Nicira Networks among the ONF’s membership, along with a number of other established and nascent networking vendors. Nicira sees a role for OpenFlow, then, though it clearly wants to put the emphasis on its own software and the applications and services that it enables. There’s nothing wrong with that. In fact, it’s a perfectly sensible strategy for a vendor to pursue.

Tension Between Vendors and Service Providers

Alan S. Cohen, a recent addition to the Nicira team, put it into pithy perspective on his personal blog, where he wrote about why he joined Nicira and why the network will be virtualized. Wrote Cohen:

“Virtualization and the cloud is the most profound change in information technology since client-server and the web overtook mainframes and mini computers.  We believe the full promise of virtualization and the cloud can only be fully realized when the network enables rather than hinders this movement.  That is why it needs to be virtualized.

Oh, by the way, OpenFlow is a really small part of the story.  If people think the big shift in networking is simply about OpenFlow, well, they don’t get it.”

So, the big service providers might see OpenFlow as a nifty mechanism that will allow them to reduce their capital expenditures on high-margin networking gear while also lowering their operational expenditures on network management,  but the networking vendors — neophytes and veterans alike — still seek and need to provide value (and derive commensurate margins) above and beyond OpenFlow’s parameters. 

Brief Note on Bartz’ Yahoo Ouster

I haven’t had much to say on Yahoo for a while, and I won’t be prolix in discussing the ouster of Carol Bartz as the company’s CEO yesterday. She apparently was relieved of her executive duties on a telephone call from the company’s chairman, Roy Bostock, and she promptly shared that fact with Yahoo staff in a brief, presumably valedictory email message.

As I noted nearly two years ago, Bartz seemed lost at Yahoo. She provided lots of sound and fury, not to mention abundant theatrics, but her reign was more sideshow than focused leadership. Yahoo didn’t need a sideshow. There’s not much money in that.

To be fair, though, Bartz was miscast in her role. Before she came to Yahoo, she made her name and reputation as the chief executive at Autodesk, a company that specializes in the development of 3D-design, engineering, and entertainment software.

As you might imagine, Autodesk’s software was (and still is) sold to and used by design professionals and engineers,  not consumers. On the other hand, Yahoo is a content, media, and communications company that serves a broad-based consumer market. They’re very different companies, and it’s not clear why the Yahoo board thought Bartz’ previous experience made her the ideal candidate to reverse the dimming fortunes of one of the Internet’s brightest lights during the wild 90s.

Anyway, the whole Yahoo saga of the last decade has been an unremittingly sad story.  Yahoo retains some valuable assets, but nobody there seems to know how to get the most from them.

OpenFlow Crystal Ball Still Foggy

OpenFlow originated in academia, from research work conducted at Stanford University and the University of California, Berkeley. Academics remain intensively involved in the development of OpenFlow, but the protocol, a manifestation of software-defined networking (SDN), appears destined for potentially widespread commercial deployment, first at major data centers and cloud service providers, and perhaps later at enterprises of various shapes and sizes.

Encompassing a set of APIs, OpenFlow enables programmability and control of flow tables in routers and switches. Today’s switches combine network-control functions (control plane) and packet processing and forwarding functions (data plane). OpenFlow aims to separate the two, abstracting flow manipulation and control from the underlying switch hardware, thus making it possible to define flows and determine what paths they take through a network.
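For readers who like something concrete, the simplified Python sketch below illustrates the kind of match/action flow table that sits in the data plane, with the control plane reduced to whatever installs the entries. The field names are loosely modeled on OpenFlow 1.0-style matches, but this is a conceptual illustration, not a real controller or switch API.

```python
# Conceptual sketch of an OpenFlow-style flow table: the controller (control plane)
# decides policy and installs entries; the switch (data plane) just matches and forwards.
# Field names are loosely modeled on OpenFlow 1.0 and are illustrative, not a real API.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class FlowEntry:
    priority: int                    # higher-priority entries are matched first
    in_port: Optional[int] = None    # None means "wildcard"
    eth_dst: Optional[str] = None
    ipv4_dst: Optional[str] = None
    actions: List[str] = field(default_factory=list)  # e.g., ["output:2"]; empty list drops


class FlowTable:
    """The data-plane side: match a packet against installed entries and act."""

    def __init__(self):
        self.entries: List[FlowEntry] = []

    def install(self, entry: FlowEntry):
        # In a real deployment, this is what arrives from the controller over OpenFlow.
        self.entries.append(entry)
        self.entries.sort(key=lambda e: e.priority, reverse=True)

    def lookup(self, packet: dict) -> List[str]:
        for e in self.entries:
            if ((e.in_port is None or e.in_port == packet.get("in_port")) and
                    (e.eth_dst is None or e.eth_dst == packet.get("eth_dst")) and
                    (e.ipv4_dst is None or e.ipv4_dst == packet.get("ipv4_dst"))):
                return e.actions
        return ["send_to_controller"]  # table miss: punt the packet to the controller


table = FlowTable()
table.install(FlowEntry(priority=100, ipv4_dst="10.0.0.5", actions=["output:2"]))
table.install(FlowEntry(priority=10, actions=[]))  # low-priority catch-all: drop

print(table.lookup({"in_port": 1, "ipv4_dst": "10.0.0.5"}))  # ['output:2']
```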

From Academic Origins to Commercial Data Centers

Getting back to the academics, they wanted to use OpenFlow as a means of making networks more amenable to experimentation and innovation. The law of unintended consequences intervened, however, and OpenFlow is spreading in many different directions, spawning a growing number of applications.

To see where (or, at least, by whom) OpenFlow will be applied first commercially, consider the composition of the board of directors of the Open Networking Foundation (ONF), which bills itself as “a nonprofit organization dedicated to promoting a new approach to networking called Software-Defined Networking (SDN). SDN allows owners and operators of networks to control and manage their networks to best serve their users’ needs. ONF’s first priority is to develop and use the OpenFlow protocol. Through simplified hardware and network management, OpenFlow seeks to increase network functionality while lowering the cost associated with operating networks.”

The six board members at the ONF are Deutsche Telekom, Facebook, Google, Microsoft, Verizon, and Yahoo. As I’ve noted previously, what they have in common are large, heavily virtualized data centers. They’re all presumably looking for ways to run them more efficiently, with the network having become one of the biggest inhibitors of data-center scaling. While servers and storage have been virtualized and have become more dynamic and programmable, networks lag behind, failing to keep pace with new requirements while still accounting for a large share of capital and operational expenditures.

Problem Shrieking for a Solution

That, my friends, is a problem shrieking for a solution. While academia hatched OpenFlow, there’s nothing academic about the data-center pain that the six board members of the ONF are feeling. They need their network infrastructure to become more dynamic, flexible, and functional, and they also want to lower their network operating costs.

The economic and operational impetus for change is considerable. The networking industry, at least the portion of it that wants to serve the demographic profile represented by the board members of the ONF, must sit up and take notice. And judging by the growing vendor membership of the ONF, the networking industry is paying attention.

One of many questions I have relates to how badly Cisco and, to a lesser extent, Juniper Networks — proponents of proprietary alternatives to some of the problems SDN and OpenFlow are intended to address — might be affected by an OpenFlow wave.

Two Schools of Thought

There are at least two schools of thought on the topic. One school, inhabited by more than a few market analysts, says that OpenFlow will hasten and intensify the commoditization of networking gear, as a growing percentage of switches will be made to serve as simple packet-forwarding boxes. Another learned quarter contends that, just as the ONF charter says, the focus and the impact will be primarily on network-related operating costs, and not so much on capital costs. In other words, OpenFlow — even if it is wildly popular — leaves plenty of room for continued switch differentiation, and thus for margin erosion to be at least somewhat mitigated.

The long-term implications of OpenFlow are difficult to predict. Prophecy is made more daunting by OpenFlow hype and disinformation, disseminated by the protocol’s proponents and detractors, respectively.  It does have the feeling of something big, though, and I’ve been spending increasing amounts of time trying to get my limited gray matter around it.

Look for further zigzagging peregrinations on my journey toward OpenFlow understanding.

ONF Board Members Call OpenFlow Tune

The concept of software-defined networking (SDN) has generated considerable interest during the last several months.  Although SDNs can be realized in more than one way, the OpenFlow protocol seems to have drawn a critical mass of prospective customers (mainly cloud-service providers with vast data centers) and solicitous vendors.

If you aren’t up to speed with the basics of software-defined networking and OpenFlow, I suggest you visit the Open Networking Foundation (ONF) and OpenFlow websites to familiarize yourself with the underlying ideas. Others have written some excellent articles on the technology, its perceived value, and its potential implications.

In a recent piece he wrote originally for GigaOM, Kyle Forster of Big Switch Networks offers this concise definition:

Concisely Defined

“At its most basic level, OpenFlow is a protocol for server software (a “controller”) to send instructions to OpenFlow-enabled switches, where these instructions give direct control over how those switches forward traffic through the network.

I think of OpenFlow like an x86 instruction set for the network – it’s low-level, but it’s very powerful. Continuing that analogy, if you read the x86 instruction set for the first time, you might walk away thinking it could be useful if you need to build a fancy calculator, but using it to build Linux, Apache, Microsoft Word or World of Warcraft wouldn’t exactly be obvious. Ditto for OpenFlow. It isn’t the protocol that is interesting by itself, but rather all of the layers of software that are starting to emerge on top of it, similar to the emergence of operating systems, development environments, middleware and applications on top of x86.”
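To ground Forster’s “server software sends instructions to switches” framing, here is a toy Python sketch of the sort of logic that lives in that controller software — a naive MAC-learning switch that reacts to packets the hardware doesn’t yet know how to handle and pushes down forwarding instructions. The interfaces are hypothetical, invented purely for illustration; they don’t correspond to any particular OpenFlow controller.

```python
# Toy sketch of controller-side logic: the "server software" in Forster's definition.
# When a switch punts a packet it has no rule for, the controller learns the source
# MAC's location and, once it knows the destination, instructs the switch to forward
# future traffic directly. Hypothetical interfaces, not a real OpenFlow controller API.

class LearningSwitchController:
    def __init__(self, switch):
        self.switch = switch     # object that can install rules / flood packets
        self.mac_to_port = {}    # what the controller has learned so far

    def on_packet_in(self, src_mac, dst_mac, in_port):
        """Called when the switch punts an unmatched packet to the controller."""
        self.mac_to_port[src_mac] = in_port  # learn where src_mac lives

        if dst_mac in self.mac_to_port:
            out_port = self.mac_to_port[dst_mac]
            # Instruct the switch: handle this flow in hardware from now on.
            self.switch.install_rule(match={"eth_dst": dst_mac},
                                     actions=[f"output:{out_port}"])
            self.switch.send_packet(out_port)
        else:
            # Destination unknown: flood this one packet and keep listening.
            self.switch.flood_packet()


class FakeSwitch:
    """Stand-in for an OpenFlow switch; just prints the instructions it receives."""
    def install_rule(self, match, actions):
        print(f"install rule: match={match} actions={actions}")

    def send_packet(self, out_port):
        print(f"send buffered packet out port {out_port}")

    def flood_packet(self):
        print("flood buffered packet")


ctrl = LearningSwitchController(FakeSwitch())
ctrl.on_packet_in("aa:aa", "bb:bb", in_port=1)  # bb:bb unknown -> flood
ctrl.on_packet_in("bb:bb", "aa:aa", in_port=2)  # aa:aa known -> install rule
```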

Increased Network Functionality, Lower Network Operating Costs

The Open Networking Foundation’s charter summarizes its objectives and the value proposition that advocates of SDN and OpenFlow believe they can deliver:

 “The Open Networking Foundation is a nonprofit organization dedicated to promoting a new approach to networking called Software-Defined Networking (SDN). SDN allows owners and operators of networks to control and manage their networks to best serve their users’ needs. ONF’s first priority is to develop and use the OpenFlow protocol. Through simplified hardware and network management, OpenFlow seeks to increase network functionality while lowering the cost associated with operating networks.”

That last part is the key to understanding the composition of ONF’s board of directors, which includes Deutsche Telekom, Facebook, Google, Microsoft, Verizon, and Yahoo. All of these companies are major cloud-service providers with multiple, sizable data centers. (Yes, Microsoft also is a cloud-technology purveyor, but what it has in common with the other board members is its status as a cloud-service provider that owns and runs data centers.)

Underneath the board of directors are member companies. Most of these are vendors seeking to serve the needs of the ONF board members and similar cloud-service providers that share their business objective: boosting network functionality while reducing the costs associated with network operations.

Who’s Who of Networking

Among the vendor members are a veritable who’s who of the networking industry: Cisco, HP, Juniper, Brocade, Dell/Force10, IBM, Huawei, Nokia Siemens Networks, Riverbed, Extreme, and others. Also members, not surprisingly, are virtualization vendors such as VMware and Citrix, as well as the aforementioned Microsoft. There’s a smattering of SDN/OpenFlow startups, too, such as Big Switch Networks and Nicira Networks.

Of course, membership does not necessarily entail avid participation. Some vendors, including Cisco, likely would not be thrilled at any near-term prospect of OpenFlow’s widespread market adoption. Cisco would be pleased to see the networking status quo persist for as long as possible, and its involvement in the ONF probably is more that of vigilant observer than of fervent proponent. In fact, many vendors are taking a wait-and-see approach to OpenFlow. Some members, including Force10, are bearish and have suggested that the protocol is a long way from delivering the maturity and scalability that would satisfy enterprise customers.

Vendors Not In Charge

Still, the board members are steering the ONF ship, not the vendors. Regardless of when OpenFlow or something like it comes of age, the rise of software-defined networking seems inevitable. Servers and storage gear have been virtualized and have become more application-driven, but networks haven’t changed much in the last several years. They’re faster, yes, but they’re still provisioned in the traditional manner, configured rather than programmed. That takes time, consumes resources, and costs money.

Major cloud-service providers, such as those on the ONF board, want network infrastructure to become more elastic, flexible, and dynamic. Vendors will have to respond accordingly, whether with OpenFlow or with some other approach that delivers similar operational outcomes and business benefits.

I’ll be following these developments closely, watching to see how the business concerns of the cloud providers and the business interests of the networking-vendor community ultimately reconcile.