OpenFlow originated in academia, from research work conducted at Stanford University and the University of California, Berkeley. Academics remain intensively involved in the development of OpenFlow, but the protocol, a manifestation of software-defined networking (SDN), appears destined for potentially widespread commercial deployment, first at major data centers and cloud service providers, and perhaps later at enterprises of various shapes and sizes.
Encompassing a set of APIs, OpenFlow enables programmability and control of flow tables in routers and switches. Today’s switches combine network-control functions (control plane) with packet processing and forwarding functions (data plane). OpenFlow aims to separate the two, abstracting flow manipulation and control from the underlying switch hardware, thus making it possible to define flows and determine what paths they take through a network.
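To make the separation concrete, here is a minimal, hypothetical sketch of the match-action abstraction OpenFlow exposes: a controller (the control plane) installs prioritized flow entries, and the switch (the data plane) simply matches packets against them to pick a forwarding action. All names here are illustrative; this is not the actual OpenFlow API.

```python
# Illustrative sketch of a flow table, not the real OpenFlow protocol.
# The control plane installs entries; the data plane only matches and forwards.

WILDCARD = None  # a field set to None matches any value


class FlowEntry:
    def __init__(self, priority, match, action):
        self.priority = priority  # higher priority wins when entries overlap
        self.match = match        # dict of header fields, e.g. {"dst_mac": ...}
        self.action = action      # e.g. ("forward", port) or ("drop",)

    def matches(self, packet):
        return all(v == WILDCARD or packet.get(k) == v
                   for k, v in self.match.items())


class FlowTable:
    """The data plane: a prioritized list of match-action rules."""

    def __init__(self):
        self.entries = []

    def install(self, entry):
        # Called by the (possibly remote) controller, not by the switch itself.
        self.entries.append(entry)
        self.entries.sort(key=lambda e: -e.priority)

    def lookup(self, packet):
        for entry in self.entries:
            if entry.matches(packet):
                return entry.action
        # Table miss: in OpenFlow, the packet can be punted to the controller.
        return ("send_to_controller",)


# The controller defines the flows; the switch just executes them.
table = FlowTable()
table.install(FlowEntry(10, {"dst_mac": "aa:bb:cc:dd:ee:ff"}, ("forward", 3)))
table.install(FlowEntry(5, {"dst_mac": WILDCARD}, ("drop",)))

print(table.lookup({"dst_mac": "aa:bb:cc:dd:ee:ff"}))  # ('forward', 3)
print(table.lookup({"dst_mac": "11:22:33:44:55:66"}))  # ('drop',)
```

The point of the sketch is where the logic lives: path selection is expressed entirely in the entries the controller installs, so the forwarding hardware can stay simple and uniform.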
From Academic Origins to Commercial Data Centers
Getting back to the academics, they wanted to use OpenFlow as a means of making networks more amenable to experimentation and innovation. The law of unintended consequences intervened, however, and OpenFlow is spreading in many different directions, spawning a growing number of applications.
To see where (or, at least, by whom) OpenFlow will be applied first commercially, consider the composition of the board of directors of the Open Networking Foundation (ONF), which bills itself as “a nonprofit organization dedicated to promoting a new approach to networking called Software-Defined Networking (SDN). SDN allows owners and operators of networks to control and manage their networks to best serve their users’ needs. ONF’s first priority is to develop and use the OpenFlow protocol. Through simplified hardware and network management, OpenFlow seeks to increase network functionality while lowering the cost associated with operating networks.”
The six board members at ONF are Deutsche Telekom, Facebook, Google, Microsoft, Verizon, and Yahoo. As I’ve noted previously, what they have in common are large, heavily virtualized data centers. They’re all presumably looking for ways to run them more efficiently, with the network having become one of the biggest inhibitors of data-center scaling. While servers and storage have been virtualized and have become more dynamic and programmable, networks lag behind, not keeping pace with new requirements while still accounting for a large share of capital and operational expenditures.
Problem Shrieking for a Solution
That, my friends, is a problem shrieking for a solution. While academia hatched OpenFlow, there’s nothing academic about the data-center pain that the six board members of the ONF are feeling. They need their network infrastructure to become more dynamic, flexible, and functional, and they also want to lower their network operating costs.
The economic and operational impetus for change is considerable. The networking industry, at least the portion of it that wants to serve the demographic profile represented by the board members of ONF, must sit up and take notice. And if you look at the growing vendor membership of the ONF, the networking industry is paying attention.
One of many questions I have relates to how badly Cisco and, to a lesser extent, Juniper Networks — proponents of proprietary approaches to some of the problems SDN and OpenFlow are intended to address — might be affected by an OpenFlow wave.
Two Schools of Thought
There are at least two schools of thought on the topic. One school, inhabited by more than a few market analysts, says that OpenFlow will hasten and intensify the commoditization of networking gear, as a growing percentage of switches will be made to serve as simple packet-forwarding boxes. Another learned quarter contends that, just as the ONF charter says, the focus and the impact will be primarily on network-related operating costs, and not so much on capital costs. In other words, OpenFlow — even if it is wildly popular — leaves plenty of room for continued switch differentiation, and thus for margin erosion to be at least somewhat mitigated.
The long-term implications of OpenFlow are difficult to predict. Prophecy is made more daunting by OpenFlow hype and disinformation, disseminated by the protocol’s proponents and detractors, respectively. It does have the feeling of something big, though, and I’ve been spending increasing amounts of time trying to get my limited gray matter around it.
Look for further zigzagging peregrinations on my journey toward OpenFlow understanding.
I spoke to Kyle Forster, who wears both OpenFlow and Big Switch hats, and I’d agree with most of what you wrote. It’s still very early days for this technology, and the six board members are the same ones that will also be pushing for Terabit Ethernet; i.e., there’s a question as to its applicability for most enterprises, and service providers tend to take a bit longer to roll something like this out. There’s definitely some substance here and a problem that needs to be solved (making networking a resource, not a restraint), but it’s still a nascent technology that needs some time to mature.
Thanks for the comment, Stu. I agree that the problem is well understood and that some high-profile customers want it solved.
That said, OpenFlow is making a potentially bumpy transition from academia to the commercial sphere. Nobody wants growing pains, but they’re likely.
I agree there is a lot of bumping around in the dark. However, the larger issue is how to enable the programmability of networks so that a mix of applications, management systems, and infrastructure can all work in sync with one another. There has been a lot of hand-wringing and many declarations concerning the commoditization of network gear, but someone has always been squawking about that (and they have generally been wrong, but that’s a post by itself).
When it comes to software defined networks, openflow is the tip of an iceberg that mirrors the IT layer cake model that disrupted the likes of Digital and IBM in the 90s. The way you build and interact with networks is only 20 years behind. Merchant silicon (Intel), modular operating systems (Microsoft), independent software vendors (Oracle/SAP), and SDKs (Microsoft) were all hallmarks of that era. Fast forward 15 years, sprinkle in some open source, highly distributed apps, and the wild world of APIs and here we are. I agree, it feels big and exciting.
Like most things, the squeaky wheels of those with the most severe network challenges will get oiled first. Personally, I sit in both of your camps, but the operating-cost-and-complexity camp is the one most likely to get attention first: first because opex outweighs capex in most environments, and second because that’s where the independent developers and start-ups seem to be focusing their attention.
The issue of commoditization does warrant a separate discussion.
What we’re seeing now is that many customers — service providers and large enterprises alike — seem more concerned with network operating costs than with network capital costs. That doesn’t mean they want to be gouged on the prices they pay for the gear, but it does suggest that the cost and complexity of network operations are their primary concerns. As you note, those are also the problems drawing attention from startups and established vendors.
Is anybody exploring deploying it in a wireless broadband network?
Radhakant, you might want to look into developments involving OpenRoads and OpenMesh. In addition to those, there might be other OpenFlow-based wireless projects of which I am not aware. Perhaps others can provide further assistance.