Between What Is and What Will Be

I have refrained from writing about recent developments in software-defined networking (SDN) and in the larger realm of what VMware, now hosting VMworld in San Francisco, calls the “software-defined data center” (SDDC).

My reticence hasn’t resulted from indifference or from hype fatigue — in fact, these technologies don’t deserve the jaundiced connotations of “hype” — but from a realization that we’ve entered a period of confusion, deception, misdirection, and murk. Amidst the tumult, my single, independent voice — though resplendent in its dulcet tones — would be overwhelmed or forgotten.

Choppy Transition

We’re in the midst of a choppy transitional period. Where we’ve been is behind us, where we’re going is ahead of us, and where we find ourselves today is between the two. So-called legacy vendors, in both networking and compute hardware, are trying to slow progress toward the future, which will involve the primacy of software and services and related business models. There will be virtualized infrastructure, but not necessarily converged infrastructure, which is predicated on the development and sale of proprietary hardware by a single vendor or by an exclusive club of vendors.

Obviously, there still will be hardware. You can’t run software without server hardware, and you can’t run a network without physical infrastructure. But the purpose and role of that hardware will change. The closed box will be replaced by an open one, not because of any idealism or panglossian optimism, but because of economic, operational, and technological imperatives that first are remaking the largest of public-cloud data centers and soon will stretch into private clouds at large enterprises.

No Wishful Thinking

After all, the driving purpose of the Open Networking Foundation (ONF) involved shifting the balance of power into the hands of customers, who had their own business and operational priorities to address. Where legacy networking failed them, SDN provided a way forward, saving money on capital expenditures and operational costs while also providing flexibility and responsiveness to changing business and technology requirements.

The same is true for the software-defined data center, where SDN will play a role in creating a fluid pool of virtualized infrastructure that can be utilized to optimal business benefit. What’s important to note is that this development will not be restricted to the public cloud-service providers, including all the big names at the top of the ONF power structure. VMware, which coined the term “software-defined data center,” is aiming directly for the private cloud, as Greg Ferro noted in his analysis of VMware’s acquisition of Nicira Networks.

Fighting Inevitability

Still, it hasn’t happened yet, even though it will happen. Senior staff and executives at the incumbent vendors know what’s happening; they know they’re fighting against an inevitability, but fight it they must. Their organizations aren’t built to go with this flow, so they will resist it.

That’s where we find ourselves. The signal-to-noise ratio isn’t great. It’s a time marked by disruption and turmoil. The dust and smoke will clear, though. We can see which way the wind is blowing.

8 responses to “Between What Is and What Will Be”

  1. Hi Brad,
    I agree that customers have a vested interest in commoditizing network hardware by using SDN, since SDN essentially takes the intelligence out of the network.
    1. But with Nicira-like proprietary network-overlay solutions, or any other proprietary SDN solution, wouldn’t it just mean moving from network-hardware-vendor lock-in to SDN-software-vendor lock-in?
    [I don’t consider OpenFlow a “standard” here, because it is an SDN enabler, not SDN in itself.]
    2. Also, can you shed some light on which business-critical applications are hindered by legacy networking hardware, where SDN could be a remedy?
    3. I am not up to date on OpenFlow, but does OpenFlow or anyone else have a solution to decouple the “SDN control messages required to program the virtualized network” from the data-plane traffic, or are they considering simple out-of-band SDN control connectivity?
    Thanks,
    Rahul

    • #1 – Of course it’s just a higher-layer lock-in until the northbound APIs and/or the control plane are standardized;

      #2 – I’m still waiting to see one 😉

      #3 – Smart people do out-of-band control plane.

    • Thanks for the reply, Rahul. I will try not to be prolix in my response.

      First off, though, I want to note that your message comes from a Cisco domain, but you have failed to disclose that you are a Cisco employee. I prefer that vendor representatives disclose their loyalties.

      Nonetheless, in reply to your first question, I’d say there’s some truth to the assertion that SDN shifts value and competitive differentiation to software. The question is whether customers will accept that shift.

      Given that customers have driven the creation and development of the Open Networking Foundation (ONF) — yes, they are primarily service providers, not enterprises — I would argue the answer will be affirmative.

      As for OpenFlow, no, it is not SDN, just an enabling protocol between SDN controllers and switches; but it is open. As Ivan writes above (Hello, Ivan!), we will see APIs evolve over time that will bring a degree of openness to the control and management planes, too. But, no, not everything will be open source. There’s nothing wrong with proprietary value, though. That said, customers need to be wary of lock-in, and I believe they’ll do just that. It’s my view that SDN will provide a greater degree of latitude, and greater freedom from vendor lock-in, than legacy networking provides.
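To make the division of labor concrete, here is a minimal, purely illustrative sketch (in no way the actual OpenFlow wire protocol; the `Match`, `FlowEntry`, and `Switch` names are invented for this example) of the controller/switch split that OpenFlow-style SDN enables: the controller programs flow entries into a switch’s flow table, and the switch’s data plane merely matches packets against those entries.

```python
# Toy sketch of the SDN split: a controller installs flow entries
# (control plane); the switch matches packets against them (data plane).
# Names and structure are illustrative, not the OpenFlow protocol itself.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Match:
    in_port: Optional[int] = None    # None acts as a wildcard
    dst_mac: Optional[str] = None

@dataclass
class FlowEntry:
    match: Match
    actions: list            # e.g. ["output:2"]
    priority: int = 100

class Switch:
    """Toy switch whose flow table is programmed by an external controller."""
    def __init__(self):
        self.flow_table = []

    def install(self, entry: FlowEntry) -> None:
        # Control-plane operation: the controller pushes a rule to the switch.
        self.flow_table.append(entry)
        self.flow_table.sort(key=lambda e: -e.priority)  # highest priority first

    def forward(self, in_port: int, dst_mac: str) -> list:
        # Data-plane operation: match the packet against installed flows.
        for e in self.flow_table:
            m = e.match
            if m.in_port in (None, in_port) and m.dst_mac in (None, dst_mac):
                return e.actions
        return ["controller"]  # table miss: punt the packet to the controller

sw = Switch()
sw.install(FlowEntry(Match(dst_mac="aa:bb"), ["output:2"]))
print(sw.forward(1, "aa:bb"))   # matches the installed rule
print(sw.forward(1, "cc:dd"))   # table miss, punted to controller
```

The point of the sketch is that all forwarding intelligence lives in the controller; the switch only executes whatever table it has been given, which is why the hardware beneath it can be commodity merchant silicon.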

      As for your second question, your premise seems off to me. The whole point of SDN, as driven by the ONF, is to make the network more flexible and responsive to virtualized workloads and to cloud computing. It turns out that SDN, which effects network virtualization and enhanced programmability, also can save capital expenditures and operating costs because it automates configuration and management of the underlying network infrastructure, which increasingly will be based on merchant silicon. I know Cisco has alternative approaches to network virtualization and programmability for its installed base of customers. In the near term, what Cisco proposes for those customers might prove effective. That’s where we are today, but I don’t think it changes the long-term dynamic.

      Ivan has provided an answer to question 3, but smart people are working to develop other options. 🙂 We’re early in this game, and further advances are coming.

  2. Ahh, a couple of the best out there laying it out, what a treat! Brad, I’m glad you reiterated (again) how young we are in this process. This all got really serious less than 1-2 years ago when Google/Amazon and a couple of others (hyper-x) got on board. Prior to that it was those “wacky” computer scientists trying to apply such crazy ideas as x86 frameworks to networks, because the network is so far behind compute and storage. We are talking about chips with fab times measured in years via a foundry, so nothing in HW will move quickly; thus early adoption in SW. It’s only hype if we expect something tomorrow. Long-term vision and leadership driven by consumers, while vendors try to figure out how to deal with that, is more accurate. It is happening in a couple of other verticals today, especially orchestration.

    We are taking 2 decades of proprietary

    1. Lock-in: Agreed, with a big old but. This allows for turnkey vendor-lock solutions abstracted up into the application layer; we call that value-add or an alternative. That should be a matter of business policy and SLAs rather than technical decisions. VMware does that with a hypervisor very well today: take something that is a commodity (the hypervisor) and add value via software differentiation. Hardware vendors did that for years, and in some cases still do with silicon. The reason we are seeing success from the likes of Nicira is that it is software, and time to market is much easier with software innovation (thus a major problem today). No standardized set of primitives leaves us with proprietary software and proprietary APIs all running on proprietary hardware.

    2. Use Cases: One can run down that rabbit hole for a while, but orchestration and commoditization brought this about. Vendors asking “what’s the use case?” is a pet peeve; come out of the silo and talk to some customers about their experiences managing distributed systems, and it might clue them in. I recommend looking at what policy administration and operations looked like in the wireless market five years ago and what they look like today. One can go even further back and look at the OpEx and service delivery of hundreds of distributed x86 servers as opposed to a single VM farm. If someone doesn’t at least announce integration of campus networks into a wireless-controller-like experience by some point next year, I will be shocked. That’s still not enough; it solves some issues, but the ability to be agnostic to the edge device is still missing (though thankfully starting to crop up in wireless today). Vendors still take profit from controller licensing.

    3. ONF: Brad nailed the ONF, and I totally agree with Ivan on an OOB CP in the data center (QFabric). An in-band CP doesn’t scare me as much in the campus, as we rely on a (physically) in-band CP today with wireless deployments, and utilization is significantly lower. I see no difference for typical computing between wired and wireless, personally. Implementing policy in distributed CPs has proven to be more of a challenge than the industry has been able to tackle at scale. That doesn’t mean it can’t be done, but I’m still waiting on a vendor-agnostic product that can orchestrate a mix of thousands of different vendor devices loosely strung together with link-state protocols; when it arrives, I am all in with that company. The bump in the wire (pockets of centralized CPs) allows for policy application. Dan Pitt and the ONF (whose board is made up of consumers, not vendors) seem to recognize the value in decoupling software from hardware, just as the x86 industry did over a decade ago. Integrating SDN and cloud computing is making pretty good strides in the OpenStack Quantum plugin, which incorporates the network as just another resource to decompose and consume, as compute and storage have been for a while now. Great post and comments. Thanks!

  3. Thanks Ivan, Brad and Bret for your replies.
    Brad, I don’t mind disclosing that I am associated with Cisco.
    But my comments are in no way representing Cisco or its position on SDN. They are purely personal opinion resulting from an interest in following SDN technology and a deep urge to learn. Blogs like yours are definitely good food for thought. Thanks again.
    -Rahul

  4. Michael Bushong (@mbushong)

    Full disclosure: I am a Juniper employee (my team has been leading the SDN efforts for much of the past 2 years)

    I actually agree with your premise here. I posted up something very similar earlier this week (http://forums.juniper.net/t5/var-blog-Michael-Bushong/Motivation-Matters/ba-p/157138)

    Where I think people get a little bit confused (or at least treat the topic with a bit of imprecision) is in the term “incumbent”. I believe that “incumbent” is more related to market share than to the age of a company. Where existing companies have relatively little share, disruption is in their best interest. It offers a chance to level the playing field and take on the dominant incumbent. Obviously, I am biased in my thinking here, as I have been personally involved in pressing our own SDN plans forward, but the thinking is really just sound strategy.

    What I think will be interesting is how two dominant incumbents come together in a single space: Cisco and VMware. They are coming at the same problem from different angles, but the long-term business interests of each will very possibly trump their technology affections. I have no doubt that individuals at each company will pursue SDN technologies (perhaps even with reckless abandon), but the companies more broadly are organisms that will act in self-defense (in the aggregate, anyway).

    As many readers will know, this is a scene taken straight from Christensen’s The Innovator’s Dilemma.
