SDN’s Continuing Evolution

At the risk of understatement, I’ll begin this post by acknowledging that we are witnessing an intensifying discussion about the applicability and potential of software-defined networking (SDN). Frequently, such discourse is conjoined and conflated with discussion of OpenFlow.

But the two, as we know, are neither the same nor necessarily inextricable. Software-defined networking is a big-picture concept involving controller-driven programmable networks, whereas OpenFlow is a protocol that enables interaction between a control plane and the data plane of a switch.
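To make that control-plane/data-plane split concrete, here is a deliberately simplified sketch in Python. It is not real OpenFlow (which speaks in wire-format messages such as FLOW_MOD over a secure channel); the class names, policy, and port numbers are all illustrative assumptions. The point is only the division of labor: the switch holds a dumb match/action flow table, and the controller holds the policy and programs it from outside.

```python
# Toy illustration of the OpenFlow-style split: a controller programs
# match/action rules into a switch's flow table. All names and the
# sample policy are hypothetical; this is not a real OpenFlow stack.

class Switch:
    """Data plane: forwards packets using externally installed rules."""
    def __init__(self):
        self.flow_table = []  # ordered list of (match_fn, action) pairs

    def install_flow(self, match, action):
        # In real OpenFlow this would arrive as a FLOW_MOD message.
        self.flow_table.append((match, action))

    def forward(self, packet):
        for match, action in self.flow_table:
            if match(packet):
                return action
        # Table miss: punt the packet up to the controller.
        return "send-to-controller"

class Controller:
    """Control plane: holds the policy and programs the switches."""
    def __init__(self, switches):
        self.switches = switches

    def push_policy(self):
        # Hypothetical policy: forward HTTP out port 2, drop telnet.
        for sw in self.switches:
            sw.install_flow(lambda p: p.get("tcp_dst") == 80, "output:2")
            sw.install_flow(lambda p: p.get("tcp_dst") == 23, "drop")

sw = Switch()
Controller([sw]).push_policy()
print(sw.forward({"tcp_dst": 80}))   # -> output:2
print(sw.forward({"tcp_dst": 23}))   # -> drop
print(sw.forward({"tcp_dst": 443}))  # -> send-to-controller
```

Note that the switch knows nothing about policy; everything interesting lives in the controller, which is exactly why the “real magic” of SDN sits above the protocol itself.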

Not Necessarily Inextricable

A salient point to remember — there are others, I’m sure, but I’m leaning toward minimalism today — is that, while SDN and OpenFlow often are presented as joined at the hip, they need not be. You can have SDN without OpenFlow. Furthermore, it’s worth bearing in mind that the real magic of SDN resides beyond OpenFlow’s reach, at a higher layer of abstraction in the SDN value hierarchy.

So, with that in mind, let’s take a brief detour into SDN history, to see whether the past can inform the present and illuminate the future. I was fortunate enough to have some help on this journey from Amin Tootoonchian, a PhD student in the Systems and Networking Group, Department of Computer Science, University of Toronto.

Tootoonchian is actively involved in research projects related to software-defined networking and OpenFlow. He wrote a paper in conjunction with Yashar Ganjali, his advisor and an assistant professor at the University of Toronto, on HyperFlow, an application that runs on the open-source NOX controller to create a logically centralized but physically distributed control plane for OpenFlow. Tootoonchian developed and implemented HyperFlow, and he also is working on the next release of NOX. Recently, he spent six months pursuing SDN research at the University of California Berkeley.
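The core idea behind HyperFlow — a control plane that is logically centralized but physically distributed — can be sketched in a few lines. This is my own toy illustration of the concept, not HyperFlow’s actual implementation (which runs atop NOX and propagates events through a publish/subscribe medium); the class and event names are assumptions for the example.

```python
# Toy sketch of a logically centralized, physically distributed
# control plane: each controller instance publishes network events
# to its peers, so every instance converges on the same global view.
# Illustrative only; not how HyperFlow is actually implemented.

class ControllerInstance:
    def __init__(self, name, peers):
        self.name = name
        self.peers = peers          # shared registry of all instances
        self.network_view = set()   # this instance's copy of global state

    def local_event(self, event):
        """Handle an event from a locally attached switch, then publish it."""
        self.network_view.add(event)
        for peer in self.peers:
            if peer is not self:
                peer.remote_event(event)

    def remote_event(self, event):
        """Replay an event published by a peer controller."""
        self.network_view.add(event)

peers = []
c1 = ControllerInstance("c1", peers)
c2 = ControllerInstance("c2", peers)
peers.extend([c1, c2])

c1.local_event("link-up:s1-s2")
c2.local_event("link-down:s3-s4")
# Both controllers now hold an identical network view, so each can
# serve its local switches as if it were the sole controller.
```

Applications running on any one instance see what looks like a single centralized controller, while the physical distribution buys resilience and locality.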

His ongoing research has afforded insights into the origins and evolution of SDN. During a discussion over coffee, he kindly recommended some reference material for my edification and enlightenment. I’m all for generosity here, so I’m going to share those recommendations with you in what might become a series of posts. (I’d like to be more definitive, I really would, but I never know where I’m going to steer this thing I call a blog. It all comes down to time, opportunity, circumstances, and whether I get hit by a bus.)

Anyway, let’s start, strangely enough, at the beginning, with SDN concepts that ultimately led to the development of the OpenFlow protocol.

4D and Ethane: SDN Milestones 

Tootoonchian pointed me to papers and previous research involving academic projects such as 4D and Ethane, which served as recent antecedents to OpenFlow. There are other papers and initiatives he mentioned, a few of which I will reference, if all goes according to my current plan, in forthcoming posts.

Before 4D and Ethane, however, there were other SDN predecessors, most of which were captured in a presentation by Edward Crabbe, network architect at Google. Helpfully titled “The (Long) Road to SDN,” Crabbe’s presentation was given at a Tech Field Day last autumn.

Crabbe draws an SDN evolutionary line from Ipsilon’s General Switch Management Protocol (GSMP) in 1996 through a number of subsequent initiatives — including the IETF’s Forwarding and Control Element Separation (ForCES) and Path Computation Element (PCE) working groups — gradually progressing toward the advent of OpenFlow in 2008. He points to common threads in SDN that include the partitioning of resources and control within network elements, and the minimization of the network-element local control plane, involving offline control of forwarding state and offline control of network-element resource allocation.

As for why SDN has drawn growing interest, development and support, Crabbe cites two main reasons: cost and “innovation velocity.” I (and others) have touched on the cost savings previously, but Crabbe’s particular view from the parapets of Google warrants attention.

Capex and Opex Savings 

In his presentation, Crabbe cites cost savings relating to both capital and operating expenditures.

On the capex side, he notes that SDN can deliver efficient use of IT infrastructure resources, which, I note, results in the need to purchase fewer new resources. He makes particular mention of how efficient resource utilization applies to network element CPU and memory as well as to underlying network capacity. He also notes SDN’s facility at moving the “heaviest workloads off expensive, relatively slow embedded systems to cheap, fast, commodity hardware.” Unstated, but seemingly implicit, is that the former are often proprietary whereas the latter are not.

Crabbe also mentions that capex savings can accrue from SDN’s ability to “provide visibility into, and synchronized control of, network state, such that underlying capacity may be used more efficiently.” Again, efficient utilization of the resources one owns means one derives full value from them before having to allocate spending to the purchase of new ones.

As for lower operating expenditures, Crabbe broadly states that SDN enables reduced network complexity, which results in less operational overhead and fewer outages. He offers a number of supporting examples, and the case he makes is straightforward and valid. If you can reduce network complexity, you will mitigate operational risk, save time, boost network-related productivity, and perhaps get the opportunity to allocate valuable resources to other, potentially more productive uses.

Enterprise Narrative Just Beginning 

Speaking of which, that brings us to Crabbe’s assertion that SDN confers “innovation velocity.” He cites several examples of how and where such innovation can be expedited, including faster feature implementation and deployment; partitioning of resources and control for relatively safe experimentation; and implementations on “relatively simple, well-known systems with well-defined interfaces.” Finally, he also emphasizes that the decoupling of the control plane from the network element facilitates “novel decision algorithms and hardware uses.”

It makes sense, all of it, at least insofar as Google is concerned. Crabbe’s points, of course, are similarly valid for other web-scale cloud service providers. But what about enterprises, large and small? Well, that’s a question still to be explored and answered, though the early adopters IBM and NEC brought forward earlier this week indicate that SDN also has a future in at least a few enterprise application environments.

One response to “SDN’s Continuing Evolution”

  1. It’s my recollection that Ed Crabbe said that Google’s position is that software control over the network is the most important feature. If the network architecture could be managed in a software application, then they could drive higher utilisation of current resources. A nice side benefit is that they can reduce capital spending with cheaper network devices, running smaller and simpler device firmware (customised to Google’s requirements), and avoid paying large profit margins to incumbent vendors.

    Buying simple, dumb switches works for their exact application, but my view is that this does not translate for most networks. And the cost of the network has been shifted from hardware purchase to software development costs – which works for superscale companies, but for enterprise and most data centre networks is largely irrelevant because the DevOps skills don’t exist. Nor do most IT Managers understand the concept.

    I do wonder if this will change.
