Category Archives: Software-Defined Networks

Network-Virtualization Startup PLUMgrid Announces Funding, Reveals Little

Admit it, you thought I’d lost interest in software-defined networking (SDN), didn’t you?

But you know that couldn’t be true. I’m still interested in SDN and how it facilitates network virtualization, network programmability, and what the empire-building folks at EMC/VMware are billing as the software-defined data center, which obviously encompasses more than just networking.

Game On

Apparently I’m not the only one who retains an abiding interest in SDN. In the immediate wake of VMware’s headline-grabbing acquisition of network-virtualization startup Nicira Networks, entrepreneurs and venture capitalists want us to know that the game has just begun.

Last week, for example, we learned that PLUMgrid, a network-virtualization startup in the irritatingly opaque state of development known as stealth mode, has raised $10.7 million in first-round funding led by moneybags VCs U.S. Venture Partners (USVP) and Hummer Winblad Venture Partners. USVP’s Chris Rust and Hummer Winblad’s Lars Leckie have joined PLUMgrid’s board of directors. You can learn more about the individual board members and the company’s executive team, which includes former Cisco employees who were involved in the networking giant’s early dalliance with OpenFlow a few years ago, by perusing the biographies on the PLUMgrid website.

Looking for Clues 

But don’t expect the website to provide a helpful description of the products and technologies that PLUMgrid is developing, apparently in consultation with prospective early customers. We’ll have to wait until the end of this year, or early next year, for PLUMgrid to disclose and discuss its products.

For now, what we get is a game of technology charades, in which PLUMgrid executives, including CEO Awais Nemat, drop hints about what the company might be doing and their media interlocutors then guess at what it all means. It’s amusing at times, but it’s not illuminating.

At SDNCentral, Matt Palmer surmises that PLUMgrid might be playing in “the service orchestration arena for both physical and virtual networks.” In an article written by Jim Duffy at Network World, we learn that PLUMgrid sees its technology as having applicability beyond the parameters of network virtualization. In the same article, PLUMgrid’s Nemat expresses reservations about OpenFlow. To wit:

 “It is a great concept (of decoupling the control plane for the data plane) but it is a demonstration of a concept. Is OpenFlow the right architecture for that separation? That remains to be seen.”

More to Come

That observation is somewhat reminiscent of what Scott Shenker, Nicira co-founder and chief scientist and a professor in the Electrical Engineering and Computer Science Department at the University of California at Berkeley, had to say about OpenFlow last year. (Shenker also is a co-founder and officer of the Open Networking Foundation, a champion and leading proponent of OpenFlow.)

What we know for certain about PLUMgrid is that it is based in Sunnyvale, Calif., and plans to sell its network-virtualization software to businesses that manage physical, virtual, and cloud data centers. In a few months, perhaps before the end of the year, we’ll know more.


Xsigo: Hardware Play for Oracle, Not SDN

When I wrote about Xsigo earlier this year, I noted that many saw Oracle as a potential acquirer of the I/O virtualization vendor. Yesterday morning, Oracle made those observers look prescient, pulling the trigger on a transaction of undisclosed value.

Chris Mellor at The Register calculates that Oracle might have paid about $800 million for Xsigo, but we don’t know. What we do know is that Xsigo’s financial backers were looking for an exit. We also know that Oracle was willing to provide one.

For the Love of InfiniBand, It’s Not SDN

Some think Oracle bought a software-defined networking (SDN) company. I was shocked at how many journalists and pundits repeated the mantra that Oracle had moved into SDN with its Xsigo acquisition. That is not right, folks, and knowledgeable observers have tried to rectify that misconception.

I’ve gotten over a killer flu, and I have a residual sinus headache that sours my usually sunny disposition, so I’m in no mood to deliver a remedial primer on the fundamentals of SDN. Suffice it to say, readers of this forum and those familiar with the pronouncements of the ONF will understand that what Xsigo does, namely I/O virtualization, is not SDN. That is not to say that what Xsigo does is not valuable, perhaps especially to Oracle. Nonetheless, it is not SDN.

Incidentally, I have seen a few commentators throwing stones at the Oracle marketing department for depicting Xsigo as an SDN player, comparing it to Nicira Networks, which VMware is in the process of acquiring for a princely sum of $1.26 billion. It’s probably true that Oracle’s marketing mavens are trying to gild their new lily by covering it with splashes of SDN gold, but, truth be told, the marketing team at Xsigo began dressing their company in SDN garb earlier this year, when it became increasingly clear that SDN was a lot more than an ephemeral science project involving OpenFlow and boffins in lab coats.

Why Confuse? It’ll be Obvious Soon Enough

At Network Computing, Howard Marks tries to get everybody onside. I encourage you to read his piece in its entirety, because it provides some helpful background and context, but his superbly understated money quote is this one: “I’ve long been intrigued by the concept of I/O virtualization, but I think calling it software-defined networking is a stretch.”

In this industry, words are stretched and twisted like origami until we can no longer recognize their meaning. The result, more often than not, is befuddlement and confusion, as we witnessed yesterday, an outcome that really doesn’t help anybody. In fact, I would argue that Oracle and Xsigo have done themselves a disservice by playing the SDN card.

As Marks points out, “Xsigo’s use of InfiniBand is a good fit with Oracle’s Exadata and other clustered solutions.” What’s more, Matt Palmer, who notes that Xsigo is “not really an SDN acquisition,” also writes that “Oracle is the perfect home for Xsigo.” Palmer makes the salient point that Xsigo is essentially a hardware play for Oracle, one that aligns with Oracle’s hardware-centric approaches to compute and storage.

Oracle: More Like Cisco Than Like VMware

Oracle could have explained its strategy and detailed the synergies between Xsigo and its family of hardware-engineered “Exasystems” (Exadata and Exalogic) —  and, to be fair, it provided some elucidation (see slide 11 for a concise summary) — but it muddied the waters with SDN misdirection, confusing some and antagonizing others.

Perhaps my analysis is too crude, but I see a sharp divergence between the strategic direction VMware is heading with its acquisition of Nicira and the path Oracle is taking with its Exasystems and Xsigo. Remember, Oracle, after the Sun acquisition, became a proprietary hardware vendor. Its focus is on embedding proprietary hooks and competitive differentiation into its hardware, much like Cisco Systems and the other converged-infrastructure players.

VMware’s conception of a software-defined data center is a completely different proposition. Both offer virtualization, both offer programmability, but VMware treats the underlying abstracted hardware as an undifferentiated resource pool. Conversely, Oracle and Cisco want their engineered hardware to play integral roles in data-center virtualization. Engineered hardware is what they do and who they are.

Taking the Malocchio in New Directions

In that vein, I expect Oracle to look increasingly like Cisco, at least on the infrastructure side of the house. Does that mean Oracle soon will acquire a storage player, such as NetApp, or perhaps another networking company to fill out its data-center portfolio? Maybe the latter first, because Xsigo, whatever its merits, is an I/O virtualization vendor, not a switching or routing vendor. Oracle still has a networking gap.

For reasons already belabored, Oracle is an improbable SDN player. I don’t see it as the likeliest buyer of, say, Big Switch Networks. IBM is more likely to take that path, and I might even get around to explaining why in a subsequent post. Instead, I could foresee Oracle taking out somebody like Brocade, presuming the price is right, or perhaps Extreme Networks. Both vendors have been on and off the auction block, and though Oracle’s Larry Ellison once disavowed acquisitive interest in Brocade, circumstances and Oracle’s disposition have changed markedly since then.

Oracle, which has entertained so many bitter adversaries over the years — IBM, SAP, Microsoft, Salesforce, and HP among them — now appears ready to cast its “evil eye” toward Cisco.

Some Thoughts on VMware’s Strategic Acquisition of Nicira

If you were a regular or occasional reader of Nicira Networks CTO Martin Casado’s blog, Network Heresy, you’ll know that his penultimate post dealt with network virtualization, a topic of obvious interest to him and his company. He had written about network virtualization many times, and though Casado would not describe the posts as such, they must have looked like compelling sales pitches to the strategic thinkers at VMware.

Yesterday, as probably everyone reading this post knows, VMware announced its acquisition of Nicira for $1.26 billion. VMware will pay $1.05 billion in cash and $210 million in unvested equity awards. The ubiquitous Frank Quattrone and his Qatalyst Partners, which reportedly had been hired previously to shop Brocade Communications, served as Nicira’s adviser.

Strategic Buy

VMware should have surprised no one when it emphasized that its acquisition of Nicira was a strategic move, likely to pay off in years to come, rather than one that will produce appreciable near-term revenue. As Reuters and the New York Times noted, VMware’s buy price for Nicira was 25 times the amount ($50 million) invested in the company by its financial backers, which include venture-capital firms Andreessen Horowitz, Lightspeed, and NEA. Diane Greene, co-founder and former CEO of VMware — replaced four years ago by Paul Maritz — had an “angel” stake in Nicira, as did Andy Rachleff, a former general partner at Benchmark Capital.

Despite its acquisition of Nicira, VMware says it’s not “at war” with Cisco. Technically, that’s correct. VMware and its parent company, EMC, will continue to do business with Cisco as they add meat to the bones of their data-center virtualization strategy. But the die was cast, and  Cisco should have known it. There were intimations previously that the relationship between Cisco and EMC had been infected by mutual suspicion, and VMware’s acquisition of Nicira adds to the fear and loathing. Will Cisco, as rumored, move into storage? How will Insieme, helmed by Cisco’s aging switching gods, deliver a rebuttal to VMware’s networking aspirations? It won’t be too long before the answers trickle out.

Still, for now, Cisco, EMC, and VMware will protest that it’s business as usual. In some ways, that will be true, but it will also be a type of strategic misdirection. The relationship between EMC and Cisco will not be the same as it was before yesterday’s news hit the wires. When these partners get together for meetings, candor could be conspicuous by its absence.

Acquisitive Roads Not Traveled

Some have posited that Cisco might have acquired Nicira if VMware had not beaten it to the punch. I don’t know about that. Perhaps Cisco might have bought Nicira if the asking price were low, enabling Cisco to effectively kill the startup and be done with it. But Cisco would not have paid $1.26 billion for a company whose approach to networking directly contradicts Cisco’s hardware-based business model and market dominance. One typically doesn’t pay that much to spike a company, though I suppose if the prospective buyer were concerned enough about a strategic technology shift and a major market inflection, it might do so. In this case, though, I suspect Cisco was blindsided by VMware. It just didn’t see this coming — at least not now, not at such an early stage of Nicira’s development.

Similarly, I didn’t see Microsoft or Citrix as buyers of Nicira. Microsoft is distracted by its cloud-service provider aspirations, and the $1.26 billion would have been too rich for Citrix.

IBM’s Moves and Cisco’s Overseas Cash Hoard

One company I had envisioned as a potential (though less likely) acquirer of Nicira was IBM, which already has a vSwitch. IBM might now settle for the SDN-controller technology available from Big Switch Networks. The two have been working together on IBM’s Open Data Center Interoperable Network (ODIN), and Big Switch’s technology fits well with IBM’s PureSystems and its top-down model of having application workloads command and control  virtualized infrastructure. As the second network-virtualization domino to fall, Big Switch likely will go for a lower price than did Nicira.

On Twitter, Dell’s Brad Hedlund asked whether Cisco would use its vast cash hoard to strike back with a bold acquisition of its own. Cisco has two problems here. First, I don’t see an acquisition that would effectively blunt VMware’s move. Second, about 90 percent of Cisco’s cash (more than $42 billion) is offshore, and CEO John Chambers doesn’t want to take a tax hit on its repatriation. He had been hoping for a “tax holiday” from the U.S. government, but that’s not going to happen in the middle of an election campaign, during a macroeconomic slump in which plenty of working Americans are struggling to make ends meet. That means a significant U.S.-based acquisition likely is off the table, unless the target company is very small or is willing to take Cisco stock instead of cash.

Cisco’s Innovator’s Dilemma

Oh, and there’s a third problem for Cisco, mentioned earlier in this prolix post. Cisco doesn’t want to embrace this SDN stuff. Cisco would rather resist it. The Cisco ONE announcement really was about Cisco’s take on network programmability, not about SDN-type virtualization in which overlay networks run atop an underlying physical network.

Cisco is caught in a classic innovator’s dilemma, held captive by the success it has enjoyed selling prodigious amounts of networking gear to its customers, and I don’t think it can extricate itself. It’s built a huge and massively successful business selling a hardware-based value proposition predicated on switches and routers. It has software, but it’s not really a software company.

For Cisco, the customer value and the proprietary hooks are in its boxes. Its whole business model — which, again, has been tremendously successful — is built around that premise, and the entire company is organized around it. Cisco eventually will have to reinvent itself, as IBM did after it failed to adapt to client-server computing, but the day of reckoning hasn’t arrived.

On the Defensive

Expect Cisco to continue to talk about the northbound interface (which can provide intelligence from the switch) and about network programmability, but don’t expect networking’s big leopard to change its spots. Cisco will try to portray the situation differently, but it’s defending rather than attacking, trying to hold off the software-based marauders of infrastructure virtualization as long as possible. The doomsday clock on when they’ll arrive in Cisco data centers just moved up a few ticks with VMware’s acquisition of Nicira.

What about the other networking players? Sadly, HP hasn’t figured out what to do about SDN, even though OpenFlow is available on its former ProCurve switches. HP has a toe dipped in the SDN pool, but it doesn’t seem willing to take the initiative. Juniper, which previously displayed ingenuity in bringing forward QFabric, is scrambling for an answer. Brocade is pragmatically embracing hybrid control planes to maintain account presence and margins in the near- to intermediate-term.

Arista Networks, for its part, might be better positioned to compete on networking’s new playing field. Arista Networks’ CEO Jayshree Ullal had the following to say about yesterday’s news:

“It’s exciting to see the return of innovative networking companies and the appreciation for great talent/technology. Software Defined Networking (SDN) is indeed disrupting legacy vendors. As a key partner of VMware and co-innovator in VXLANs, we welcome the interoperability of Nicira and VMWare controllers with Arista EOS.”

Arista’s Options

What’s interesting here is that Arista, which invariably presents its Extensible OS (EOS) as “controller friendly,” earlier this year demonstrated interoperability with controllers from VMware, Big Switch Networks, and Nebula, which has built a cloud controller for OpenStack.

One of Nebula’s investors is Andy Bechtolsheim, whom knowledgeable observers will recognize as the chief development officer (CDO) of, and major investor in, Arista Networks. It is possible that Bechtolsheim sees a potential fit between the two companies — one building a cloud controller and one delivering cloud networking. To add fuel to this particular fire, which may or may not emit smoke, note that the Nebula cloud controller already features Arista technology, and that Nebula is hiring a senior network engineer, who ideally would have “experience with cloud infrastructure (OpenStack, AWS, etc.) . . . and familiarity with OpenFlow and Open vSwitch.”

Open or Closed?

Speaking of Open vSwitch, Matt Palmer at SDN Central will feel some vindication now that VMware has purchased a company whose engineering team has made significant contributions to the OVS code. Palmer doubtless will cast a wary eye on VMware’s intentions toward OVS, but both Steve Herrod, VMware’s CTO, and Martin Casado, Nicira’s CTO, have provided written assurances that their companies, now combining, will not retreat from commitments to OVS and to OpenFlow and Quantum, the OpenStack networking project.

Meanwhile, GigaOm’s Derrick Harris thinks it would be bad business for VMware to jilt the open-source community, particularly in relation to hypervisors, which “have to be treated as the workers that merely carry out the management layer’s commands. If all they’re there to do is create virtual machines that are part of a resource pool, the hypervisor shouldn’t really matter.”

This seems about right. In this brave new world of virtualized infrastructure, the ultimate value will reside in an intelligent management layer.

PS: I wrote this post under a slight fever and a throbbing headache, so I would not be surprised to discover belatedly that it contains at least a couple typographical errors. Please accept my apologies in advance.

Inevitability of Virtualized Infrastructure

As a previous post, Infrastructure Virtualization Versus Converged Infrastructure, attests, I strongly believe that virtualization is leading us to a future in which underlying hardware becomes largely undifferentiated and interchangeable. Applications and orchestration will reside in software riding atop the virtualization layer, which effectively will function as an abstraction buffer above hardware infrastructure. The latter will eventually include hardware for compute, networking, and storage.

Vendors that ride hardware-based business models will have trouble adapting to this new reality. Many of these companies have hordes of software developers and software engineers, but they inextricably intertwine their software and hardware as a matter of business practice, selling the latter as proprietary boxes that often cannot interoperate with, or be swapped out for, competing hardware. It’s classic hardware-based vendor lock-in, and it’s been with us for many years. This applies to vendors that sell all three main types of hardware infrastructure, and to those that sell them tied together as converged infrastructure.

Loosening a Tenacious Grip

Proprietary data-center hardware would appear to be running on borrowed time, though it will not disappear overnight. Its grip will be especially tenacious in the enterprise, though the pull of the cloud eventually will weaken its hold. Proprietary compute infrastructure will be the first to succumb, but networking and storage will fall, too. The economic and operational logic powering the transition is inexorable, so it’s a question of when, not whether, it will happen.

While CapEx cost savings are an obvious benefit, operational flexibility (shifting workloads with agility and less effort) and OpEx savings also are factors. Infrastructure hardware will be cheaper, as well as easier and less costly to run. Pools of industry-standard hardware will be reallocated on demand to serve the needs of application workloads. Data-center customers no longer will be constrained by the hardware-release schedules of their previous vendors of choice. Customers also will be able to take advantage of the latest industry-standard chipsets, which will power hardware with improved energy efficiency and better cooling characteristics.

In servers, and now in storage, Facebook’s Open Compute Project (OCP) has sought to expedite the move to off-the-shelf hardware. Last week at OSCON, Frank Frankovsky, a vice president at Facebook and the chairman and president of the OCP, rallied the open-source troops by arguing that proprietary x86 systems are “gratuitously differentiated.” He called for all hardware-design specifications to be open.

OCP as Competitive Cudgel

That would benefit Facebook, which launched OCP as a vehicle to help it lower data-center CapEx and OpEx, boost operational flexibility, and — last but not least — mitigate a competitive advantage held by Google, which had a massive head start in rationalizing and fine-tuning its data centers and IT infrastructure. In fact, Google cloaks its IT operations in extreme secrecy, believing that its practices and technologies deliver substantial competitive advantage over its main rivals, including Facebook. The latter must agree, because the animating idea behind Open Compute is to create a market, demand and supply, for commodity server hardware that will reduce or eliminate Google’s edge.

Some have wondered why Google hasn’t joined OCP, but the answer should be obvious. Google believes it has cracked the infrastructure code, and it is therefore disinclined to share its insights and best practices with its competitors. Google isn’t a fan of proprietary vanity hardware — it’s been designing its own gear, then going to server and network ODMs, for some time now — but Google feels it has nothing to gain, and much to lose, from opening its kimono to the OCP crowd.

With networking, though, Google felt it needed a little help from its friends — as well as from its enemies. That explains why it allied with Facebook and other cloud-service providers in the Open Networking Foundation (ONF), which I have written about here on many occasions. The goal of the ONF, as with OCP, is to slip the proprietary shackles of hardware vendors, whose gear functions as an impediment to operational agility as well as a cost that could be reduced through SDN-style network virtualization. Google’s communitarian approach to addressing the network-virtualization riddle suggests that it believes it cannot achieve the desired outcome on its own.

Cracking the Nut

Whereas compute hardware was well on its way to standardization, networking hardware, until the ONF, was akin to a vertically integrated mainframe system, replete with a proliferating number of both proprietary and industry-standard protocols. Networking is a bigger, and tougher, nut to crack.

But crack it will, first at the big cloud-service providers, then, as the cloud gains momentum, at enterprises.

PS: I will post something tomorrow about VMware’s just-announced acquisition of Nicira, which is big news no matter how you slice it.  I wrote the above post before I learned of the acquisition.

Debate Over Openness of Open vSwitch

Late last week, the illustrious Ivan Pepelnjak pointed me to a post by Matthew Palmer at SDN Central. Pepelnjak thought the post would interest me, and he was right.

While I encourage you to read Palmer’s post firsthand, I will summarize it briefly. Basically, Palmer makes a two-part argument and then leaves us with unsettled questions. The first part of his argument is that the virtual switch (vSwitch) has become the “prime real-estate for network virtualization within the datacenter.” As such, the vSwitch has become a strategic battleground for vendors and service providers alike.

This brings us to the second part of Palmer’s argument, which is more controversial. Palmer implies that the first part of his argument, about the valuable real-estate inhabited by the vSwitch, wouldn’t be a major point of contention if a genuine and viable open vSwitch — and not just an open-source vSwitch — were available. Alas, he says, that is not the case.

Open . . . or Just Open Source? 

Palmer suggests that Open vSwitch (OVS), which wears the mantle of open-source vSwitch, is a proprietary wolf in sheep’s clothing.  He says Open vSwitch might be open source, but that it is far from open. Instead, he says, it is under the direction of one company, Nicira Networks, which “controls the features, capabilities, and protocols supported within OVS and when they are released.”

Writes Palmer:

“Since OVS is ‘Open’ Nicira will gladly take your free labor to develop on OVS and give you an Apache license to ‘fork’ your own distribution; but they essentially decide which features and protocols, from what contributors will be included in the mainline distribution at what time.  This basically masquerades OVS as the ‘free’ switch in a freemium business model where the vendor locks you in with their better, proprietary, paid for version.  This is why many others in the networking community are looking for alternatives to invest their time and development resources. “

From Naive Newcomer to Proprietary Villain

My first reaction was that Nicira must be making some headway commercially. I don’t think I’ve ever seen a vendor go from virtual-networking upstart to proprietary villain in a shorter period of time. Palmer is an accomplished business-development executive, and he corresponds regularly with a large circle of industry notables. Clearly, Nicira has gotten their attention.

Not long ago, many denizens of that same community dismissed Nicira as a bunch of technically brilliant but commercially ingenuous SDN neophytes who weren’t a serious threat to the networking industry’s status quo. If Palmer’s post is an accurate barometer of industry sentiment, that view has undergone significant revision.

In some ways, Palmer’s post was foreshadowed by a commentary from Dell’s Brad Hedlund earlier this year. Whereas Palmer bemoaned the proprietary stranglehold that Nicira might gain over the Open Networking Foundation (ONF) and large swathes of the SDN community, Hedlund took a different tack. While he, like Palmer, noted that Nicira’s engineers played a defining role in developing Open vSwitch, Hedlund was more interested in how Nicira’s approach to SDN prefigured a “significant shift . . . when it comes to the relevance and role of ‘protocols’ in building next generation virtual data center networks.”

Diverse Project

In light of Palmer’s charges, I thought I’d reach out to Nicira to solicit a reply. Fortunately, Martin Casado, Nicira’s CTO, was kind enough to get back to me with what he termed “off-the-cuff comments” on Palmer’s post.

His first point was that “Nicira doesn’t have a proprietary vSwitch (never has).” In his post, Palmer wrote that Nicira “has their own proprietary version of Open vSwitch . . . . “

Casado also noted that “Nicira’s kernel module is in mainline Linux, which is clearly not controlled by Nicira,” and that “OVS is one of the largest and most diverse open source projects in the world,” with a “profile better and broader than most projects.”

The Nicira CTO also wrote that Open vSwitch is used by “potentially competitive companies,” including Cisco, Big Switch Networks, NEC, and Midokura. Casado wrote that these vendors are “welcome to fork it, or do whatever they want with it.” On that point, he and Palmer appear to be in agreement, though Palmer contends that Nicira essentially controls the direction of OVS.

SDN’s Long, Hot Summer

Finally, though Palmer’s post suggested that Nicira could undermine OpenFlow by swapping it out for a “proprietary (i.e. non-OpenFlow) protocol that only works with Nicira’s vSwitch and controller,” Casado responded as follows: “Development of OpenFlow 1.1 – 1.3 is moving ahead at an extremely aggressive pace. Multiple organizations are working on it (NTT, Google, T-Systems, and Nicira), and much of the implementation is done and has been committed.”

That response, in and of itself, does not close the door on Nicira leveraging another protocol — and we know that Nicira has proposed two variants of OpenFlow, one at the edge and one in the core, to support an MPLS-like SDN fabric — but it also suggests that OpenFlow isn’t in any imminent danger of being sidelined or relegated to oblivion.

Still, Palmer’s post raises compelling questions and demonstrates that, in the summer of 2012, SDN is generating heat as well as light.

Infrastructure Virtualization Versus Converged Infrastructure

While writing about software-defined networking (SDN) and what it makes possible, I have been thinking about how its essential premise, and the premise behind infrastructure virtualization, conflicts with visions of converged infrastructure promulgated by the leading systems vendors in the information-technology (IT) industry.

According to the Wikipedia definition, converged infrastructure encompasses servers, storage, networking gear, and software for IT infrastructure management, automation, and orchestration. Accordingly, converged infrastructure leverages pooled IT resources to facilitate automated resource provisioning in support of dynamic application workloads.

Hardware Pedigrees in Software World

Leading vendors, most with more hardware than software pedigrees, have sought to offer proprietary converged-infrastructure offerings that closely integrate the hardware elements with software-based management attributes. In this regard, we can cite vendors such as Cisco (with a storage assist from EMC or NetApp), Hewlett-Packard, Dell, Hitachi Data Systems, Oracle (though networking remains an open question there), and, perhaps to a lesser extent, IBM.

Now, let’s think about SDN and where it ultimately leads. Cisco would like us to believe that SDN, if it leads anywhere, will eventually take us to network programmability, with a heavy emphasis on the significance of a northbound API (or APIs). Cisco says that the means — in this case, SDN — are not as important as the desired end, network programmability, and many of Cisco’s enterprise customers will doubtless agree.

SDN End Games

Another SDN outcome is network virtualization, which admittedly can also be achieved through other means. But an interesting aspect of SDN’s approach to network virtualization, with its decoupling of the network’s control and data planes, is that it results in the abstracting of software-based network intelligence from the underlying hardware-based network brawn. It’s a software paradigm taken to a logical extreme, with server-based software running at the network edge controlling an abstracted pool of no-frills networking hardware.
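
To make that decoupling concrete, here is a minimal, purely illustrative Python sketch (not any vendor's implementation; every class and method name is hypothetical). A centralized controller computes forwarding decisions in software and pushes rules into switch objects that do nothing but match and forward, which is the essence of the control-plane/data-plane split described above.

```python
# Illustrative toy model of SDN's control/data-plane split.
# All names are hypothetical; this is a sketch, not a real controller.

class Switch:
    """Data plane: a dumb forwarding element holding only a flow table."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # destination address -> output port

    def install_rule(self, dst, port):
        self.flow_table[dst] = port

    def forward(self, packet):
        port = self.flow_table.get(packet["dst"])
        return f"{self.name}: packet to {packet['dst']} -> port {port}"


class Controller:
    """Control plane: centralized software that decides where traffic goes."""
    def __init__(self, switches):
        self.switches = switches

    def provision_path(self, dst, ports_by_switch):
        # Push one rule per switch; the switches never run routing logic themselves.
        for sw in self.switches:
            sw.install_rule(dst, ports_by_switch[sw.name])


if __name__ == "__main__":
    s1, s2 = Switch("s1"), Switch("s2")
    ctrl = Controller([s1, s2])
    ctrl.provision_path("10.0.0.5", {"s1": 2, "s2": 7})
    print(s1.forward({"dst": "10.0.0.5"}))
    print(s2.forward({"dst": "10.0.0.5"}))
```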

Indeed, this is one end game for SDN, first playing out in the data centers of the major cloud service providers that guide the affairs of the Open Networking Foundation (ONF), and then — at some indeterminate future point too difficult to forecast without a Ouija board and a bottle of scotch  — also at large enterprises worldwide.

Let’s elaborate further. SDN facilitates network virtualization, which in turn is harnessed and orchestrated by cloud-management software, which also manages virtualized compute and storage infrastructure. As we’ve seen already in the compute world of servers, it’s getting increasingly difficult for a vanity hardware vendor to earn a buck in a virtualized world. Many service providers have found that they can get boxes that satisfy their needs, at lower prices, directly from ODMs that often build servers for name-brand OEMs.  Storage is being virtualized, too.

Network’s Turn

And now it is the network’s turn.

In such a world, how much longer will it make sense for customers to achieve converged infrastructure from single-source vendors that equip their hardware with proprietary fripperies and hooks to facilitate lock-in? Again, we can see this trend playing out at large service providers. Some have begun buying their networking hardware off the rack from ODMs, saving not only on capital expenditures (certainly the case for servers), but also on operating expenses relating to the ongoing management of network infrastructure. It’s true that they’re trading one sort of complexity for another, pushing it up the stack and into software rather than leaving it in operational hardware, but it’s a trade-off they’re clearly willing to make, probably because they have the resources and skill sets to make it work (and pay).

Obviously that is not a recipe for everybody, certainly not for most enterprises today. But times are changing, and it isn’t inconceivable to foresee a day when the enterprise will be able to avail itself of third-party private-cloud software and management tools that will allow it to exploit a similar model of virtualized infrastructure.

Prescience Pays Off

In the big picture, as far as the established networking vendors are concerned, the ONF’s conception of SDN is about more than just OpenFlow, and even about more than network programmability. It’s about how SDN supports a model of network virtualization, in service to infrastructure virtualization, that significantly enfeebles hardware-based business models. Some of these hardware-oriented vendors will not successfully pivot to a model of virtualized infrastructure and software primacy.

On the other hand, some vendors have had the prescience to see this trend approaching on the horizon; they understand its inevitability, and they have positioned themselves better than others to survive, and perhaps even thrive, after the eventual market transition.

We’ll look at one of those vendors in a subsequent post.

SDN Focus Turns to Infrastructure

At this year’s SIGCOMM conference in Helsinki, Finland, a workshop called Hot Topics in Software-Defined Networking (HotSDN) will be held on August 13.  A number of papers will be presented as part of HotSDN’s technical program, but one has been written as a “call to arms for the SDN community.”

The paper is called: “Fabric: A Retrospective on Evolving SDN.” Its authors are Martin Casado, CTO of Nicira Networks; Teemu Koponen of the International Computer Science Institute (ICSI); Scott Shenker, a co-founder of Nicira Networks (along with Casado) and also a professor of computer science at University of California, Berkeley; and Amin Tootoonchian, a PhD candidate at the University of Toronto and a visiting researcher at ICSI.

SDN Fabrics

We’ll get to their definition of fabric soon enough, but let’s set the stage properly by explaining at the outset that the paper discusses SDN’s shortcomings and proposes “how they can be overcome by adopting the insight underlying MPLS,” which is seen as helping to facilitate an era of simple network hardware and flexible network control.

In the paper’s introductory section, the authors contend that “current networks are too expensive, too complicated to manage, too prone to vendor lock-in, and too hard to change. “ They write that the SDN community has done considerable research on network architecture, but not as much on network infrastructure, an omission that they then attempt to rectify.

Network infrastructure, the paper’s authors contend, has two components: the underlying hardware, and the software that controls the overall behavior of the network. Ideally, they write, hardware should be simple, vendor-neutral, and future-proof, while the control plane should be flexible.

Infrastructure Inadequacies

As far as the authors are concerned, today’s network infrastructure doesn’t satisfy any of those criteria, with “the inadequacies in these infrastructural aspects . . .  probably more problematic than the Internet’s architectural deficiencies.” The deficiencies cannot be overcome through today’s SDN alone, but a better SDN can be built by, as mentioned above, “leveraging the insights underlying MPLS.”

And that’s where network fabrics enter the picture. The authors define a network fabric as “a contiguous and coherently controlled portion of the network infrastructure, and do not limit its meaning to current commercial fabric offerings.” Later, they refer to a network fabric as “a collection of forwarding elements whose primary purpose is packet transport. Under this definition, a network fabric does not provide more complex network services such as filtering or isolation.”

Filtering, isolation, policy, and other network services will be handled in software at the network edge, while the fabric will serve primarily as a “fast and cheap” interconnect. The authors contend that we can’t reach that objective with today’s SDN and OpenFlow.

OpenFlow’s Failings

They write that OpenFlow’s inability to “distinguish between the Host-Network interface (how hosts inform the network of requirements) and the Packet-Switch interface (how a packet identifies itself to a switch)” has resulted in three problems, the first of which is that OpenFlow, in its current form, does not “fulfill the promise of simplified hardware” because the protocol requires switch hardware to support lookups of hundreds of bits.
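
To give a rough sense of what “lookups of hundreds of bits” means in practice, here is a small, illustrative Python calculation that tallies approximate field widths for the roughly 12-tuple match used by early versions of OpenFlow. The field names and bit widths below are approximations for illustration only, not an exact protocol definition.

```python
# Back-of-the-envelope illustration of OpenFlow-style match width.
# Widths approximate the roughly 12-tuple match of early OpenFlow versions;
# exact fields and widths vary by protocol version.

MATCH_FIELDS_BITS = {
    "ingress_port": 16,
    "eth_src": 48,
    "eth_dst": 48,
    "eth_type": 16,
    "vlan_id": 12,
    "vlan_priority": 3,
    "ip_src": 32,
    "ip_dst": 32,
    "ip_proto": 8,
    "ip_tos": 6,
    "l4_src_port": 16,
    "l4_dst_port": 16,
}

total_bits = sum(MATCH_FIELDS_BITS.values())
print(f"approximate match width: {total_bits} bits")  # on the order of 250 bits

# A lookup table (e.g., TCAM) wide enough for matches like this is costly in
# silicon area and power, which is the "simplified hardware" promise the
# authors argue current OpenFlow does not fulfill.
```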

The second problem relates to flexibility. As host requirements evolve, the paper’s authors anticipate “increased generality in the Host-Network interface,” which will mean increasing the generality in . . . “matching allowed and the actions supported” on every switch supported by OpenFlow. The authors are concerned that “needing functionality to be present on every switch will bias the decision towards a more limited feature set, reducing OpenFlow’s generality.”

The third problem, similar to the second, is that the current implementation of OpenFlow “couples host requirements to the network core behavior.” Consequently, if there is a change in external network protocols (such as a transition from IPv4 to IPv6), the change in packet matching would necessarily extend into the network core.

Toward a New Infrastructure

Accordingly, the authors propose a network fabric that borrows heavily from MPLS, with its labels and encapsulation, and that also benefits from proposed modifications to the SDN model and to OpenFlow itself. What we get is a model that includes a network fabric as an “architectural building block” within SDN. A diagram illustrating this SDN model shows a source host connecting to an ingress edge switch, which then applies MPLS-like label-based forwarding within the “fabric elements.” On the other side of the fabric, an egress edge switch ensures that packets are delivered to the destination host. The ingress and egress edge switches answer to an “edge controller,” while a “fabric controller” controls the fabric elements.
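
As a purely hypothetical sketch (my own illustration, not code from the paper), the Python below models that division of labor: the ingress edge switch pushes an MPLS-like label, fabric elements forward on the label alone without knowing anything about hosts or services, and the egress edge strips the label before delivery.

```python
# Hypothetical sketch of the edge/fabric split described above.
# Class names, label values, and port names are invented for illustration.

class IngressEdge:
    def __init__(self, label_map):
        self.label_map = label_map  # destination host -> fabric label

    def ingress(self, packet):
        packet["label"] = self.label_map[packet["dst"]]  # push MPLS-like label
        return packet


class FabricElement:
    def __init__(self, label_table):
        self.label_table = label_table  # label -> output port; no host state

    def forward(self, packet):
        return self.label_table[packet["label"]]  # match on the label only


class EgressEdge:
    @staticmethod
    def egress(packet):
        packet.pop("label")  # strip the label before delivery to the host
        return packet


if __name__ == "__main__":
    edge_in = IngressEdge({"hostB": 42})     # programmed by the "edge controller"
    core = FabricElement({42: "port3"})      # programmed by the "fabric controller"
    edge_out = EgressEdge()

    pkt = {"src": "hostA", "dst": "hostB", "payload": "..."}
    pkt = edge_in.ingress(pkt)
    out_port = core.forward(pkt)
    delivered = edge_out.egress(pkt)
    print(out_port, delivered)
```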

The key properties associated with the SDN fabric are separation of forwarding and separation of control. Separation of forwarding is intended to simplify the fabric forwarding elements, but also to “allow for independent evolution of fabric and edge.” As for separation of control, I quote from the paper:

While there are multiple reasons to keep the fabric and the edge’s control planes separate, the one we would like to focus on is that they are solving two different problems. The fabric is responsible for packet transport across the network, while the edge is responsible for providing more semantically rich services such as network security, isolation, and mobility. Separating the control planes allows them each to evolve separately, focusing on the specifics of the problem. Indeed, a good fabric should be able to support any number of intelligent edges (even concurrently) and vice versa.

Two OpenFlows?

As the authors then write, “if the fabric interfaces are clearly defined and standardized, then fabrics offer vendor independence and . . . limiting the function of the fabric to forwarding enables simpler switch implementations.”

The paper goes on to address the fabric-service model, fabric path setup, addressing and forwarding in the fabric, and how the edge context is mapped to the fabric (the options are address translation and encapsulation, which is the authors’ favored mechanism).

To conclude, the authors look at fabric implications, one of which involves proposed changes to OpenFlow. The authors prescribe an “edge” version of OpenFlow, more general than the current manifestation of the protocol, and a “core” version of OpenFlow that is similar to MPLS forwarding. The authors say the current OpenFlow is an “unhappy medium,” insufficiently general for the edge and not simple enough for the core. The authors say the generic “edge version of OpenFlow should aggressively adopt the assumption that it will be processed in software, and be designed with that freedom in mind.”

Refinements to the Model

In the final analysis, the authors believe their proposal to address infrastructure as well as architecture will result in an SDN model “where the edge processing is done in software and the core in simple (network) hardware,” the latter of which would deliver the joint benefits of reduced costs and “vendor neutrality.”

The paper essentially proposes a refinement to both OpenFlow and to the SDN architectural model. We might call it SDN 2.0, though that might seem a little glib and presumptuous (at least on my part). Regardless of what we call it, it is evident that certain elements in the vanguard of the SDN community continue to work hard to deliver a new type of cloud-era networking that delivers software-based services running over a brawny but relatively simple network infrastructure.

How will the broader SDN community and established vendors in network infrastructure respond? We won’t have to wait long to find out.