
On Network Engineers and Industry Eccentrics

On Network Engineers

Alan Cohen, former marketing VP at Nicira Networks (until just after it was acquired by VMware), wrote an engrossing piece on the rise and fall of “human IT middleware.” His article deals broadly with how system and network administrators are being displaced by software developers in an IT hierarchy reordered by datacenter virtualization, automation, and cloud computing.

The future of the networking professional has been discussed and debated previously in a number of forums. In early 2011, back in the veritable dark ages before the ascent of software-defined networking (SDN), Ziyad Basheer, writing at Greg Ferro’s EtherealMind, wondered how automation tools would affect network administrators. In June of this year, Derick Winkworth (aka CloudToad), in his last column at Packet Pushers before he joined Juniper Networks, opined on the rise of network-systems engineers.

Also at Packet Pushers, Ethan Banks subsequently argued that network engineers could survive the onslaught of SDN if they could adapt and master new skills, such as virtualization and network programmability.  Ivan Pepelnjak, though he sounded a more skeptical note on SDN, made a similar point with the aid of his “magic graphs.” 

Regardless of when SDN conquers the enterprise, the consensus is that now is  not the time for complacency. The message: Never stop learning, never stop evolving, and stay apprised of relevant developments. 

On Industry Eccentrics 

Another story this week led me to take a different stroll down memory lane. As I read about the truly bizarre case of John McAfee, recounted in news articles and in recollections of those who knew him, I was reminded of notable eccentrics in the networking industry.

Some of you seasoned industry veterans might recall Cabletron Systems, from which Enterasys was derived, run in its idiosyncratic heyday by founders Bob Levine and Craig Benson. There’s an old Inc. article from 1991, still available online, that captures some of the madness that was Cabletron. Here’s a snippet on Levine:

He is, after all, prone to excess. Want to know how Levine has spent his newfound wealth? He bought a tank. A real one, with a howitzer on top and turrets that spin around. Last summer, for kicks, he chased a pizza-delivery boy, and the following day while “four-wheeling in the woods,” he ran smack into a tree. He emerged with one less tooth and a concussion. The buddy with him got 17 stitches. Levine also owns 15 guns, which he has, on occasion, used to shoot up his own sprinkler system. His 67-foot Hatteras is named Soldier of Fortune. Some people swear they’ve seen the magazine of the same name lying on his desk. “I’m not a mercenary or anything,” he says with a smile. “But if business ever goes bad. . . . “

Here’s an excerpt from the same article on Benson, who later served as Governor of New Hampshire:

Last summer Benson joined 40 employees for a Sunday boat trip. Afterward he ordered two of them fired immediately. One had not even started yet. “I hated him,” says Benson, who was eventually persuaded to give the new hire a chance. At sales meetings, reports Kenneth Levine, it’s standard to conduct private polls on who will go next.

You cannot make this stuff up. Well, I couldn’t.

Lest you think networking’s only colorful characters were Cabletron’s dynamic duo, I’d like to reference Henry T. Nicholas III, Broadcom’s founder and former CEO. He even had a Vanity Fair article written about him, though ultimately the lurid charges against Nicholas were dropped.

Big Switch Emphasizes Ecosystem, Channel

Big Switch Networks made the news very early today — one article was posted precisely at midnight ET — with an announcement of general availability of its SDN controller, two applications that run on it, and an ecosystem of partners.

Customers also are in the picture, though it wasn’t made explicit in the Big Switch press release whether Fidelity Investments and Goldman Sachs are running Big Switch’s products in production networks.  In a Network World article, however, Jim Duffy writes that Fidelity and Goldman Sachs are “production customers for the Big Switch Open SDN product suite.” 

Controller, Applications, Ecosystem

The company’s announced products, encompassed within its Open Software Defined Networking architecture, feature the Big Network Controller, a proprietary version of the open-source Floodlight controller, and the two aforementioned applications. An SDN controller without applications is like, well, an operating system without applications. Accordingly, Big Switch has introduced Big Virtual Switch, an application for network virtualization, and Big Tap, a unified network monitoring application. 

Big Virtual Switch is the company’s answer to Nicira’s Network Virtualization Platform (NVP).  Big Switch says the product supports up to 32,000 virtual-network segments and can be integrated with cloud-management platforms such as OpenStack (Quantum), CloudStack, Microsoft System Center, and VMware vCenter.  As Big Switch illustrates on its website, Big Virtual Switch can be deployed on Big Network Controller in pure overlay networks, in pure OpenFlow networks, and in hybrid network-virtualization environments.  

According to the company, Big Virtual Switch can deliver significant CAPEX and OPEX benefits. A figure tagged “Economics of Big Virtual Switch,” included in a product data sheet, claims the company’s L2/L3 network virtualization facilitates “up to 50% more VMs per rack” and delivers CAPEX savings of $500,000 per rack annually and OPEX savings of $30,000 per rack annually. For those estimates, Big Switch assumes a rack size of 40 servers and suggests savings can be accrued across servers, operating-system instances, storage, networking, and operations.
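To put those vendor-supplied figures in perspective, here is a minimal arithmetic sketch in Python. The per-rack savings and the 40-server rack size are Big Switch’s claims as quoted above; the per-server breakdown and the multi-rack rollup are my own illustrative calculations, not numbers from the data sheet.

```python
# Illustrative arithmetic using Big Switch's claimed figures (as quoted above).
# The per-server breakdown and multi-rack totals are derived here for illustration only.

SERVERS_PER_RACK = 40              # rack size assumed by Big Switch
CAPEX_SAVINGS_PER_RACK = 500_000   # claimed annual CAPEX savings per rack (USD)
OPEX_SAVINGS_PER_RACK = 30_000     # claimed annual OPEX savings per rack (USD)

def annual_savings(racks: int) -> dict:
    """Roll up the claimed per-rack savings across a hypothetical deployment."""
    return {
        "capex": racks * CAPEX_SAVINGS_PER_RACK,
        "opex": racks * OPEX_SAVINGS_PER_RACK,
        "per_server_total": (CAPEX_SAVINGS_PER_RACK + OPEX_SAVINGS_PER_RACK) / SERVERS_PER_RACK,
    }

print(annual_savings(racks=10))
# {'capex': 5000000, 'opex': 300000, 'per_server_total': 13250.0}
```

Taken at face value, the claims work out to roughly $13,000 in combined annual savings per server, the sort of number prospective buyers will want to validate against their own environments.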

Strategies in Flux

Big Virtual Switch and Big Tap are essential SDN applications, but the company’s ultimate success in the marketplace will turn on the support its Big Network Controller receives from third-party vendors. Big Switch is aware of its external dependencies, which is why it has placed so much emphasis on its ecosystem, which it says includes A10 Networks, Arista Networks, Broadcom, Brocade, Canonical, Cariden Technologies, Citrix, Cloudscaling, Coraid, Dell, Endace, Extreme Networks, F5 Networks, Fortinet, Gigamon, Infoblox, Juniper Networks, Mellanox Technologies, Microsoft, Mirantis, Nebula, Palo Alto Networks, Piston Cloud Computing, Radware, StackOps, ThreatSTOP, and vArmour. The Big Switch press release includes an appendix of “supporting quotes” from those companies, but the company will require more than lip service from its ecosystem. 

Some companies will find that their interests are well aligned with those of Big Switch, but others are likely to be less motivated to put energy and resources into Big Switch’s SDN platform.  If you consider the vendor names listed above, you might deduce that the SDN strategies of more than a few are in flux. Some are considering whether to offer SDN controllers of their own. Even those who have no controller aspirations might be disinclined to bet too heavily or too early on a controller platform. They’ll follow the customers and the money. 

A growing number of commercial controllers are on the market (VMware/Nicira, NEC, and Big Switch) or have been announced as coming to market (IBM, HP, Cisco). Others will follow. Loyalties will shift as controller fortunes wax and wane. 

Courting the Channel 

With that in mind, Big Switch is seeking to enlist channel partners as well as technology partners. In a CRN article, we learn that Big Switch “has begun to recruit systems integrator and data center infrastructure-focused solution providers that can consult and design network architecture using Big Switch software and products from a galaxy of ecosystem partners.” In fact, Big Switch wants all its commercial sales to go through channel partners. 

In the CRN piece, Dave Butler, VP of sales at Big Switch, is candid about the symbiotic relationship the company desires from partners:

“None of our products work well alone in a data center — this is a very rigorous and rich ecosystem of partners. We’ll pay a finder’s fee to anyone who brings the right opportunity to us, but we’re not really a product sale. We need the integrators that can create a bundled solution, because that’s what makes the difference.”

. . . . “We bring them (partners) in as the specialist, and they have probably a greater touch than we might. We are not taking deals direct. Then, you have to do all the work by yourself. This is a perfect solution for their services and expertise. And, they can make money with us.”

Needs a Little Help from Its Friends

The plan is clear. Big Switch’s vendor ecosystem is meant to attract channel partners that already are selling those vendors’ products and are interested in expanding into SDN solutions. The channel partners, including SIs and datacenter-solution providers, will then bring Big Switch’s SDN platform to customers, with whom they have existing relationships. 

In theory, it all coheres. Big Switch knows it can’t go it alone against industry giants. It knows it needs more than a little help from its friends in the vendor community and the channel. 

For Big Switch, the vendor ecosystem expedites channel recruitment, and an effective channel accelerates exposure to customers. Big Switch has to move fast and demonstrate staying power. The controller race is far from over. 

Between What Is and What Will Be

I have refrained from writing about recent developments in software-defined networking (SDN) and in the larger realm of what VMware, now hosting VMworld in San Francisco, calls the  “software-defined data center” (SDDC).

My reticence hasn’t resulted from indifference or from hype fatigue — in fact, these technologies do not possess the jaundiced connotations of “hype” — but from a realization that we’ve entered a period of confusion, deception, misdirection, and murk.  Amidst the tumult, my single, independent voice — though resplendent in its dulcet tones — would be overwhelmed or forgotten.

Choppy Transition

We’re in the midst of a choppy transitional period. Where we’ve been is behind us, where we’re going is ahead of us, and where we find ourselves today is between the two. So-called legacy vendors, in both networking and compute hardware, are trying to slow progress toward the future, which will involve the primacy of software and services and related business models. There will be virtualized infrastructure, but not necessarily converged infrastructure, which is predicated on the development and sale of proprietary hardware by a single vendor or by an exclusive club of vendors.

Obviously, there still will be hardware. You can’t run software without server hardware, and you can’t run a network without physical infrastructure. But the purpose and role of that hardware will change. The closed box will be replaced by an open one, not because of any idealism or Panglossian optimism, but because of economic, operational, and technological imperatives that first are remaking the largest of public-cloud data centers and soon will stretch into private clouds at large enterprises.

No Wishful Thinking

After all, the driving purpose of the Open Networking Foundation (ONF) involved shifting the balance of power into the hands of customers, who had their own business and operational priorities to address. Where legacy networking failed them, SDN provided a way forward, saving money on capital expenditures and operational costs while also providing flexibility and responsiveness to changing business and technology requirements.

The same is true for the software-defined data center, where SDN will play a role in creating a fluid pool of virtualized infrastructure that can be utilized to optimal business benefit. What’s important to note is that this development will not be restricted to the public cloud-service providers, including all the big names at the top of the ONF power structure. VMware, which coined the term “software-defined data center,” is aiming directly for the private cloud, as Greg Ferro mentioned in his analysis of VMware’s acquisition of Nicira Networks.

Fighting Inevitability

Still, it hasn’t happened yet, even though it will happen. Senior staff and executives at the incumbent vendors know what’s happening, they know that they’re fighting against an inevitability, but fight it they must. Their organizations aren’t built to go with this flow, so they will resist it.

That’s where we find ourselves. The signal-to-noise ratio isn’t great. It’s a time marked by disruption and turmoil. The dust and smoke will clear, though. We can see which way the wind is blowing.

Network-Virtualization Startup PLUMgrid Announces Funding, Reveals Little

Admit it, you thought I’d lost interest in software-defined networking (SDN), didn’t you?

But you know that couldn’t be true. I’m still interested in SDN and how it facilitates network virtualization, network programmability, and what the empire-building folks at EMC/VMware are billing as the software-defined data center, which obviously encompasses more than just networking.

Game On

Apparently I’m not the only one who retains an abiding interest in SDN. In the immediate wake of VMware’s headline-grabbing acquisition of network-virtualization startup Nicira Networks, entrepreneurs and venture capitalists want us to know that the game has just begun.

Last week, for example, we learned that PLUMgrid, a network-virtualization startup in the irritatingly opaque state of development known as stealth mode, has raised $10.7 million in first-round funding led by moneybags VCs U.S. Venture Partners (USVP) and Hummer Winblad Venture Partners. USVP’s Chris Rust and Hummer Winblad’s Lars Leckie have joined PLUMgrid’s board of directors. You can learn more about the individual board members and the company’s executive team, which includes former Cisco employees who were involved in the networking giant’s early dalliance with OpenFlow a few years ago, by perusing the biographies on the PLUMgrid website.

Looking for Clues 

But don’t expect the website to provide a helpful description of the products and technologies that PLUMgrid is developing, apparently in consultation with prospective early customers. We’ll have to wait until the end of this year, or early next year, for PLUMgrid to disclose and discuss its products.

For now, what we get is a game of technology charades, in which PLUMgrid executives, including CEO Awais Nemat, drop hints about what the company might be doing and their media interlocutors then guess at what it all means. It’s amusing at times, but it’s not illuminating.

At SDNCentral, Matt Palmer surmises that PLUMgrid might be playing in “the service orchestration arena for both physical and virtual networks.” In an article written by Jim Duffy at Network World, we learn that PLUMgrid sees its technology as having applicability beyond the parameters of network virtualization. In the same article, PLUMgrid’s Nemat expresses reservations about OpenFlow. To wit:

 “It is a great concept (of decoupling the control plane for the data plane) but it is a demonstration of a concept. Is OpenFlow the right architecture for that separation? That remains to be seen.”

More to Come

That observation is somewhat reminiscent of what Scott Shenker, Nicira co-founder and chief scientist and a professor in the Electrical Engineering and Computer Science Department at the University of California, Berkeley, had to say about OpenFlow last year. (Shenker also is a co-founder and officer of the Open Networking Foundation, a champion and leading proponent of OpenFlow.)

What we know for certain about PLUMgrid is that it is based in Sunnyvale, Calif., and plans to sell its network-virtualization software to businesses that manage physical, virtual, and cloud data centers. In a few months, perhaps before the end of the year, we’ll know more.

Xsigo: Hardware Play for Oracle, Not SDN

When I wrote about Xsigo earlier this year, I noted that many saw Oracle as a potential acquirer of the I/O virtualization vendor. Yesterday morning, Oracle made those observers look prescient, pulling the trigger on a transaction of undisclosed value.

Chris Mellor at The Register calculates that Oracle might have paid about $800 million for Xsigo, but we don’t know. What we do know is that Xsigo’s financial backers were looking for an exit. We also know that Oracle was willing to accommodate them.

For the Love of InfiniBand, It’s Not SDN

Some think Oracle bought a software-defined networking (SDN) company. I was shocked at how many journalists and pundits repeated the mantra that Oracle had moved into SDN with its Xsigo acquisition. That is not right, folks, and knowledgeable observers have tried to rectify that misconception.

I’ve gotten over a killer flu, and I have a residual sinus headache that sours my usually sunny disposition, so I’m in no mood to deliver a remedial primer on the fundamentals of SDN. Suffice it to say, readers of this forum and those familiar with the pronouncements of the ONF will understand that what Xsigo does, namely I/O virtualization, is not SDN. That is not to say that what Xsigo does is not valuable, perhaps especially to Oracle. Nonetheless, it is not SDN.

Incidentally, I have seen a few commentators throwing stones at the Oracle marketing department for depicting Xsigo as an SDN player, comparing it to Nicira Networks, which VMware is in the process of acquiring for a princely sum of $1.26 billion. It’s probably true that Oracle’s marketing mavens are trying to gild their new lily by covering it with splashes of SDN gold, but, truth be told, the marketing team at Xsigo began dressing their company in SDN garb earlier this year, when it became increasingly clear that SDN was a lot more than an ephemeral science project involving OpenFlow and boffins in lab coats.

Why Confuse? It’ll be Obvious Soon Enough

At Network Computing, Howard Marks tries to get everybody onside. I encourage you to read his piece in its entirety, because it provides some helpful background and context, but his superbly understated money quote is this one: “I’ve long been intrigued by the concept of I/O virtualization, but I think calling it software-defined networking is a stretch.”

In this industry, words are stretched and twisted like origami until we can no longer recognize their meaning. The result, more often than not, is befuddlement and confusion, as we witnessed yesterday, an outcome that really doesn’t help anybody. In fact, I would argue that Oracle and Xsigo have done themselves a disservice by playing the SDN card.

As Marks points out, “Xsigo’s use of InfiniBand is a good fit with Oracle’s Exadata and other clustered solutions.” What’s more, Matt Palmer, who notes that Xsigo is “not really an SDN acquisition,” also writes that “Oracle is the perfect home for Xsigo.” Palmer makes the salient point that Xsigo is essentially a hardware play for Oracle, one that aligns with Oracle’s hardware-centric approaches to compute and storage.

Oracle: More Like Cisco Than Like VMware

Oracle could have explained its strategy and detailed the synergies between Xsigo and its family of hardware-engineered “Exasystems” (Exadata and Exalogic). To be fair, it provided some elucidation (see slide 11 for a concise summary), but it muddied the waters with SDN misdirection, confusing some and antagonizing others.

Perhaps my analysis is too crude, but I see a sharp divergence between the strategic direction VMware is heading with its acquisition of Nicira and the path Oracle is taking with its Exasystems and Xsigo. Remember, Oracle, after the Sun acquisition, became a proprietary hardware vendor. Its focus is on embedding proprietary hooks and competitive differentiation into its hardware, much like Cisco Systems and the other converged-infrastructure players.

VMware’s conception of a software-defined data center is a completely different proposition. Both offer virtualization, both offer programmability, but VMware treats the underlying abstracted hardware as an undifferentiated resource pool. Conversely, Oracle and Cisco want their engineered hardware to play integral roles in data-center virtualization. Engineered hardware is what they do and who they are.

Taking the Malocchio in New Directions

In that vein, I expect Oracle to look increasingly like Cisco, at least on the infrastructure side of the house. Does that mean Oracle soon will acquire a storage player, such as NetApp, or perhaps another networking company to fill out its data-center portfolio? Maybe the latter first, because Xsigo, whatever its merits, is an I/O virtualization vendor, not a switching or routing vendor. Oracle still has a networking gap.

For reasons already belabored, Oracle is an improbable SDN player. I don’t see it as the likeliest buyer of, say, Big Switch Networks. IBM is more likely to take that path, and I might even get around to explaining why in a subsequent post. Instead, I could foresee Oracle taking out somebody like Brocade, presuming the price is right, or perhaps Extreme Networks. Both vendors have been on and off the auction block, and though Oracle’s Larry Ellison once disavowed acquisitive interest in Brocade, circumstances and Oracle’s disposition have changed markedly since then.

Oracle, which has entertained so many bitter adversaries over the years — IBM, SAP, Microsoft, Salesforce, and HP among them — now appears ready to cast its “evil eye” toward Cisco.

Some Thoughts on VMware’s Strategic Acquisition of Nicira

If you were a regular or occasional reader of Nicira Networks CTO Martin Casado’s blog, Network Heresy, you’ll know that his penultimate post dealt with network virtualization, a topic of obvious interest to him and his company. He had written about network virtualization many times, and though Casado would not describe the posts as such, they must have looked like compelling sales pitches to the strategic thinkers at VMware.

Yesterday, as probably everyone reading this post knows, VMware announced its acquisition of Nicira for $1.26 billion. VMware will pay $1.05 billion in cash and $210 million in unvested equity awards. The ubiquitous Frank Quattrone and his Qatalyst Partners, which reportedly had been hired previously to shop Brocade Communications, served as Nicira’s adviser.

Strategic Buy

VMware should have surprised no one when it emphasized that its acquisition of Nicira was a strategic move, likely to pay off in years to come, rather than one that will produce appreciable near-term revenue. As Reuters and the New York Times noted, VMware’s buy price for Nicira was 25 times the amount ($50 million) invested in the company by its financial backers, which include venture-capital firms Andreessen Horowitz, Lightspeed, and NEA. Diane Greene, co-founder and former CEO of VMware — replaced four years ago by Paul Maritz — had an “angel” stake in Nicira, as did Andy Rachleff, a former general partner at Benchmark Capital.

Despite its acquisition of Nicira, VMware says it’s not “at war” with Cisco. Technically, that’s correct. VMware and its parent company, EMC, will continue to do business with Cisco as they add meat to the bones of their data-center virtualization strategy. But the die was cast, and  Cisco should have known it. There were intimations previously that the relationship between Cisco and EMC had been infected by mutual suspicion, and VMware’s acquisition of Nicira adds to the fear and loathing. Will Cisco, as rumored, move into storage? How will Insieme, helmed by Cisco’s aging switching gods, deliver a rebuttal to VMware’s networking aspirations? It won’t be too long before the answers trickle out.

Still, for now, Cisco, EMC, and VMware will protest that it’s business as usual. In some ways, that will be true, but it will also be a type of strategic misdirection. The relationship between EMC and Cisco will not be the same as it was before yesterday’s news hit the wires. When these partners get together for meetings, candor could be conspicuous by its absence.

Acquisitive Roads Not Traveled

Some have posited that Cisco might have acquired Nicira if VMware had not beaten it to the punch. I don’t know about that. Perhaps Cisco might have bought Nicira if the asking price were low, enabling Cisco to effectively kill the startup and be done with it. But Cisco would not have paid $1.26 billion for a company whose approach to networking directly contradicts Cisco’s hardware-based business model and market dominance. One typically doesn’t pay that much to spike a company, though I suppose if the prospective buyer were concerned enough about a strategic technology shift and a major market inflection, it might do so. In this case, though, I suspect Cisco was blindsided by VMware. It just didn’t see this coming — at least not now, not at such an early stage of Nicira’s development.

Similarly, I didn’t see Microsoft or Citrix as buyers of Nicira. Microsoft is distracted by its cloud-service provider aspirations, and the $1.26 billion would have been too rich for Citrix.

IBM’s Moves and Cisco’s Overseas Cash Hoard

One company I had envisioned as a potential (though less likely) acquirer of Nicira was IBM, which already has a vSwitch. IBM might now settle for the SDN-controller technology available from Big Switch Networks. The two have been working together on IBM’s Open Data Center Interoperable Network (ODIN), and Big Switch’s technology fits well with IBM’s PureSystems and its top-down model of having application workloads command and control  virtualized infrastructure. As the second network-virtualization domino to fall, Big Switch likely will go for a lower price than did Nicira.

On Twitter, Dell’s Brad Hedlund asked whether Cisco would use its vast cash hoard to strike back with a bold acquisition of its own. Cisco has two problems here. First, I don’t see an acquisition that would effectively blunt VMware’s move. Second, about 90 percent of Cisco’s cash (more than $42 billion) is offshore, and CEO John Chambers doesn’t want to take a tax hit on its repatriation. He had been hoping for a “tax holiday” from the U.S. government, but that’s not going to happen in the middle of an election campaign, during a macroeconomic slump in which plenty of working Americans are struggling to make ends meet. That means a significant U.S.-based acquisition likely is off the table, unless the target company is very small or is willing to take Cisco stock instead of cash.

Cisco’s Innovator’s Dilemma

Oh, and there’s a third problem for Cisco, mentioned earlier in this prolix post. Cisco doesn’t want to embrace this SDN stuff. Cisco would rather resist it. The Cisco ONE announcement really was about Cisco’s take on network programmability, not about SDN-type virtualization in which overlay networks run atop an underlying physical network.

Cisco is caught in a classic innovator’s dilemma, held captive by the success it has enjoyed selling prodigious amounts of networking gear to its customers, and I don’t think it can extricate itself. It’s built a huge and massively successful business selling a hardware-based value proposition predicated on switches and routers. It has software, but it’s not really a software company.

For Cisco, the customer value, the proprietary hooks, are in its boxes. Its whole business model — which, again, has been tremendously successful — is based around that premise. The entire company is based around that business model.  Cisco eventually will have to reinvent itself, like IBM did after it failed to adapt to client-server computing, but the day of reckoning hasn’t arrived.

On the Defensive

Expect Cisco to continue to talk about the northbound interface (which can provide intelligence from the switch) and about network programmability, but don’t expect networking’s big leopard to change its spots. Cisco will try to portray the situation differently, but it’s defending rather than attacking, trying to hold off the software-based marauders of infrastructure virtualization as long as possible. The doomsday clock on when they’ll arrive in Cisco data centers just moved up a few ticks with VMware’s acquisition of Nicira.

What about the other networking players? Sadly, HP hasn’t figured out what to do about SDN, even though OpenFlow is available on its former ProCurve switches. HP has a toe dipped in the SDN pool, but it doesn’t seem willing to take the initiative. Juniper, which previously displayed ingenuity in bringing forward QFabric, is scrambling for an answer. Brocade is pragmatically embracing hybrid control planes to maintain account presence and margins in the near- to intermediate-term.

Arista Networks, for its part, might be better positioned to compete on networking’s new playing field. Arista Networks’ CEO Jayshree Ullal had the following to say about yesterday’s news:

“It’s exciting to see the return of innovative networking companies and the appreciation for great talent/technology. Software Defined Networking (SDN) is indeed disrupting legacy vendors. As a key partner of VMware and co-innovator in VXLANs, we welcome the interoperability of Nicira and VMWare controllers with Arista EOS.”

Arista’s Options

What’s interesting here is that Arista, which invariably presents its Extensible OS (EOS) as “controller friendly,” earlier this year demonstrated interoperability with controllers from VMware, Big Switch Networks, and Nebula, which has built a cloud controller for OpenStack.

One of Nebula’s investors is Andy Bechtolsheim, whom knowledgeable observers will recognize as the chief development officer (CDO) of, and major investor in, Arista Networks.  It is possible that Bechtolsheim sees a potential fit between the two companies — one building a cloud controller and one delivering cloud networking. To add fuel to this particular fire, which may or may not emit smoke, note that the Nebula cloud controller already features Arista technology, and that Nebula is hiring a senior network engineer, who ideally would have “experience with cloud infrastructure (OpenStack, AWS, etc. . . .  and familiarity with OpenFlow and Open vSwitch.”

 Open or Closed?

Speaking of Open vSwitch, Matt Palmer at SDN Central will feel some vindication now that VMware has purchased a company whose engineering team has made significant contributions to the OVS code. Palmer doubtless will cast a wary eye on VMware’s intentions toward OVS, but both Steve Herrod, VMware’s CTO, and Martin Casado, Nicira’s CTO, have provided written assurances that their companies, now combining, will not retreat from commitments to OVS and to OpenFlow and Quantum, the OpenStack networking project.

Meanwhile, GigaOm’s Derrick Harris thinks it would be bad business for VMware to jilt the open-source community, particularly in relation to hypervisors, which “have to be treated as the workers that merely carry out the management layer’s commands. If all they’re there to do is create virtual machines that are part of a resource pool, the hypervisor shouldn’t really matter.”

This seems about right. In this brave new world of virtualized infrastructure, the ultimate value will reside in an intelligent management layer.

PS: I wrote this post under a slight fever and a throbbing headache, so I would not be surprised to discover belatedly that it contains at least a couple typographical errors. Please accept my apologies in advance.

Debate Over Openness of Open vSwitch

Late last week, the illustrious Ivan Pepelnjak pointed me to a post by Matthew Palmer at SDN Central. Pepelnjak thought the post would interest me, and he was right.

While I encourage you to read Palmer’s post firsthand, I will summarize it briefly. Basically, Palmer makes a two-part argument and then leaves us with unsettled questions. The first part of his argument is that the virtual switch (vSwitch) has become the “prime real-estate for network virtualization within the datacenter.” As such, the vSwitch has become a strategic battleground for vendors and service providers alike.

This brings us to the second part of Palmer’s argument, which is more controversial. Palmer implies that the first part of his argument, about the valuable real-estate inhabited by the vSwitch, wouldn’t be a major point of contention if a genuine and viable open vSwitch — and not just an open-source vSwitch — were available. Alas, he says, that is not the case.

Open . . . or Just Open Source? 

Palmer suggests that Open vSwitch (OVS), which wears the mantle of open-source vSwitch, is a proprietary wolf in sheep’s clothing.  He says Open vSwitch might be open source, but that it is far from open. Instead, he says, it is under the direction of one company, Nicira Networks, which “controls the features, capabilities, and protocols supported within OVS and when they are released.”

Writes Palmer:

“Since OVS is ‘Open’ Nicira will gladly take your free labor to develop on OVS and give you an Apache license to ‘fork’ your own distribution; but they essentially decide which features and protocols, from what contributors will be included in the mainline distribution at what time.  This basically masquerades OVS as the ‘free’ switch in a freemium business model where the vendor locks you in with their better, proprietary, paid for version.  This is why many others in the networking community are looking for alternatives to invest their time and development resources. “

From Naive Newcomer to Proprietary Villain

My first reaction was that Nicira must be making some headway commercially. I don’t think I’ve ever seen a vendor go from virtual-networking upstart to proprietary villain in a shorter period of time. Palmer is an accomplished business-development executive, and he corresponds regularly with a large circle of industry notables. Clearly, Nicira has gotten their attention.

Not long ago, many denizens of that same community dismissed Nicira as a bunch of technically brilliant but commercially ingenuous SDN neophytes who weren’t a serious threat to the networking industry’s status quo. If Palmer’s post is an accurate barometer of industry sentiment, that view has undergone significant revision.

In some ways, Palmer’s post was foreshadowed by a commentary from Dell’s Brad Hedlund earlier this year. Whereas Palmer bemoaned the proprietary stranglehold that Nicira might gain over the Open Networking Foundation (ONF) and large swathes of the SDN community, Hedlund took a different tack. While he, like Palmer, noted that Nicira’s engineers played a defining role in developing Open vSwitch, Hedlund was more interested in how Nicira’s approach to SDN prefigured a “significant shift . . . when it comes to the relevance and role of ‘protocols’ in building next generation virtual data center networks.”

Diverse Project

In light of Palmer’s charges, I thought I’d reach out to Nicira to solicit a reply. Fortunately, Martin Casado, Nicira’s CTO, was kind enough to get back to me with what he termed “off-the-cuff comments” on Palmer’s post.

His first point was that “Nicira doesn’t have a proprietary vSwitch (never has).” In his post, Palmer wrote that Nicira “has their own proprietary version of Open vSwitch . . . . “

Casado also noted that “Nicira’s kernel module is in mainline Linux, which is clearly not controlled by Nicira,” and that “OVS is one of the largest and most diverse open source projects in the world,” with a “profile better and broader than most projects.”

The Nicira CTO also wrote that Open vSwitch is used by “potentially competitive companies,” including Cisco, Big Switch Networks, NEC, and Midokura. Casado wrote that these vendors are “welcome to fork it, or do whatever they want with it.” On that point, he and Palmer appear to be in agreement, though Palmer contends that Nicira essentially controls the direction of OVS.

SDN’s Long, Hot Summer

Finally, though Palmer’s post suggested that Nicira could undermine OpenFlow by swapping it out for a “proprietary (i.e. non-OpenFlow) protocol that only works with Nicira’s vSwitch and controller,” Casado responded as follows: “Development of OpenFlow 1.1 – 1.3 is moving ahead at an extremely aggressive pace. Multiple organizations are working on it (NTT, Google, T-Systems, and Nicira), and much of the implementation is done and has been committed.”

That response, in and of itself, does not close the door on Nicira leveraging another protocol — and we know that Nicira has proposed two variants of OpenFlow, one at the edge and one in the core, to support an MPLS-like SDN fabric — but it also suggests that OpenFlow isn’t in any imminent danger of being sidelined or relegated to oblivion.

Still, Palmer’s post raises compelling questions and demonstrates that, in the summer of 2012, SDN is generating heat as well as light.

SDN Focus Turns to Infrastructure

At this year’s SIGCOMM conference in Helsinki, Finland, a workshop called Hot Topics in Software-Defined Networking (HotSDN) will be held on August 13.  A number of papers will be presented as part of HotSDN’s technical program, but one has been written as a “call to arms for the SDN community.”

The paper is called: “Fabric: A Retrospective on Evolving SDN.” Its authors are Martin Casado, CTO of Nicira Networks; Teemu Koponen of the International Computer Science Institute (ICSI); Scott Shenker, a co-founder of Nicira Networks (along with Casado) and also a professor of computer science at University of California, Berkeley; and Amin Tootoonchian, a PhD candidate at the University of Toronto and a visiting researcher at ICSI.

SDN Fabrics

We’ll get to their definition of fabric soon enough, but let’s set the stage properly by explaining at the outset that the paper discusses SDN’s shortcomings and proposes “how they can be overcome by adopting the insight underlying MPLS,” which is seen as helping to facilitate an era of simple network hardware and flexible network control.

In the paper’s introductory section, the authors contend that “current networks are too expensive, too complicated to manage, too prone to vendor lock-in, and too hard to change. “ They write that the SDN community has done considerable research on network architecture, but not as much on network infrastructure, an omission that they then attempt to rectify.

Network infrastructure, the paper’s authors contend, has two components: the underlying hardware, and the software that controls the overall behavior of the network. Ideally, they write, hardware should be simple, vendor-neutral, and future-proof, while the control plane should be flexible.

Infrastructure Inadequacies

As far as the authors are concerned, today’s network infrastructure doesn’t satisfy any of those criteria, with “the inadequacies in these infrastructural aspects . . .  probably more problematic than the Internet’s architectural deficiencies.” The deficiencies cannot be overcome through today’s SDN alone, but a better SDN can be built by, as mentioned above, “leveraging the insights underlying MPLS.”

And that’s where network fabrics enter the picture. The authors define a network fabric as “a contiguous and coherently controlled portion of the network infrastructure,” and they do not limit its meaning to current commercial fabric offerings. Later, they refer to a network fabric as “a collection of forwarding elements whose primary purpose is packet transport. Under this definition, a network fabric does not provide more complex network services such as filtering or isolation.”

Filtering, isolation, policy, and other network services will be handled in software at the network edge, while the fabric will serve primarily as a “fast and cheap” interconnect. The authors contend that we can’t reach that objective with today’s SDN and OpenFlow.

OpenFlow’s Failings

They write that OpenFlow’s inability to “distinguish between the Host-Network interface (how hosts inform the network of requirements) and the Packet-Switch interface (how a packet identifies itself to a switch)” has resulted in three problems, the first of which is that OpenFlow, in its current form, does not “fulfill the promise of simplified hardware” because the protocol requires switch hardware to support lookups of hundreds of bits.

The second problem relates to flexibility. As host requirements evolve, the paper’s authors anticipate “increased generality in the Host-Network interface,” which will mean increasing the generality in the “matching allowed and the actions supported” on every switch supported by OpenFlow. The authors are concerned that “needing functionality to be present on every switch will bias the decision towards a more limited feature set, reducing OpenFlow’s generality.”

The third problem, similar to the second, is that the current implementation of OpenFlow “couples host requirements to the network core behavior.” Consequently, if there is a change in external network protocols (such as a transition from IPv4 to IPv6), the change in packet matching would necessarily extend into the network core.

Toward a New Infrastructure

Accordingly, the authors propose a network fabric that borrows heavily from MPLS, with its labels and encapsulation, and that also benefits from proposed modifications to the SDN model and to OpenFlow itself. What we get is a model that includes a network fabric as an “architectural building block” within SDN. A diagram illustrating this SDN model shows a source host connecting to an ingress edge switch, which then applies MPLS-like label-based forwarding within the “fabric elements.” On the other side of the fabric, an egress edge switch ensures that packets are delivered to the destination host. The ingress and egress edge switches answer to an “edge controller,” while a “fabric controller” controls the fabric elements.
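To make the division of labor easier to picture, here is a toy sketch in Python. It is my own illustrative rendering of the model described above, with invented class and field names (the paper specifies an architecture, not an API): the ingress edge switch applies policy and pushes an MPLS-like label, the fabric elements forward on nothing but that label, and the egress edge switch pops the label before delivery.

```python
# Toy illustration of the edge/fabric split described in the HotSDN paper.
# All names here are hypothetical; the paper specifies a model, not an API.

from dataclasses import dataclass, field

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int
    labels: list = field(default_factory=list)   # MPLS-like label stack

class IngressEdgeSwitch:
    """Edge: rich classification (security, isolation, policy) happens here, in software."""
    def __init__(self, policy):
        self.policy = policy   # maps (dst_ip, dst_port) -> fabric label, or None to drop

    def ingress(self, pkt: Packet):
        label = self.policy.get((pkt.dst_ip, pkt.dst_port))
        if label is None:
            return None              # edge enforces filtering; the fabric never sees the packet
        pkt.labels.append(label)     # push an MPLS-like label
        return pkt

class FabricElement:
    """Core: forwards on the outer label alone -- no header parsing, no policy."""
    def __init__(self, label_table):
        self.label_table = label_table   # label -> next hop

    def forward(self, pkt: Packet):
        return self.label_table[pkt.labels[-1]]

class EgressEdgeSwitch:
    def egress(self, pkt: Packet):
        pkt.labels.pop()   # strip the fabric label before delivery to the host
        return pkt

# Usage: the edge decides, the fabric merely transports.
edge_in = IngressEdgeSwitch(policy={("10.0.0.5", 80): 42})
core = FabricElement(label_table={42: "egress-switch-7"})
edge_out = EgressEdgeSwitch()

pkt = edge_in.ingress(Packet("10.0.0.9", "10.0.0.5", 80))
print(core.forward(pkt))             # egress-switch-7
print(edge_out.egress(pkt).labels)   # []
```

The point of the exercise is the asymmetry: all of the semantically rich work lives at the edge, so the forwarding elements in the core can remain simple and cheap.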

The key properties associated with the SDN fabric are separation of forwarding and separation of control. Separation of forwarding is intended to simplify the fabric forwarding elements, but also to “allow for independent evolution of fabric and edge.” As for separation of control, I quote from the paper:

While there are multiple reasons to keep the fabric and the edge’s control planes separate, the one we would like to focus on is that they are solving two different problems. The fabric is responsible for packet transport across the network, while the edge is responsible for providing more semantically rich services such as network security, isolation, and mobility. Separating the control planes allows them each to evolve separately, focusing on the specifics of the problem. Indeed, a good fabric should be able to support any number of intelligent edges (even concurrently) and vice versa.

Two OpenFlows?

As the authors then write, “if the fabric interfaces are clearly defined and standardized, then fabrics offer vendor independence and . . . limiting the function of the fabric to forwarding enables simpler switch implementations.”

The paper goes on to address the fabric-service model, fabric path setup, addressing and forwarding in the fabric, and how the edge context is mapped to the fabric (the options are address translation and encapsulation, the latter being the authors’ favored mechanism).

To conclude, the authors look at fabric implications, one of which involves proposed changes to OpenFlow. The authors prescribe an “edge” version of OpenFlow, more general than the current manifestation of the protocol, and a “core” version of OpenFlow that is similar to MPLS forwarding. The authors say the current OpenFlow is an “unhappy medium,” insufficiently general for the edge and not simple enough for the core. The authors say the generic “edge version of OpenFlow should aggressively adopt the assumption that it will be processed in software, and be designed with that freedom in mind.”

Refinements to the Model

In the final analysis, the authors believe their proposal to address infrastructure as well as architecture will result in an SDN model “where the edge processing is done in software and the core in simple (network) hardware,” the latter of which would deliver the joint benefits of reduced costs and “vendor neutrality.”

The paper essentially proposes a refinement to both OpenFlow and to the SDN architectural model. We might call it SDN 2.0, though that might seem a little glib and presumptuous (at least on my part). Regardless of what we call it, it is evident that certain elements in the vanguard of the SDN community continue to work hard to deliver a new type of cloud-era networking that delivers software-based services running over a brawny but relatively simple network infrastructure.

How will the broader SDN community and established vendors in network infrastructure respond? We won’t have to wait long to find out.

Addressing SDN Burnout

In the universe of staccato text bursts that is Twitter, I have diagnosed a recent exhaustion of interest in software-defined networking (SDN).

To a certain degree, the burnout is understandable. It is a relatively nascent space, generating more in the way of passionate sound and fury than in commercial substance. Some Twitter denizens with a networking bent have even questioned whether an SDN market — involving buyers as well as sellers — actually exists.

On that score, the pointed skepticism has been refuted. SDN vendors, including Nicira Networks and Big Switch Networks, increasingly are reporting sales and customer traction. What’s more, market-research firms have detected signs of commercial life. International Data Corporation (IDC), for example, has said the SDN market will be worth a modest $50 million this  year,  but that it will grow to $200 million in 2013 and to $2 billion by 2016. MarketsandMarkets estimates that the global SDN market will expand from $198 million in 2012 to $2.10 billion in 2017, representing a compound annual growth rate (CAGR) of 60.43% during that span.  I’m sure other market measurers will make their projections soon enough.
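As a quick sanity check, the quoted CAGR follows directly from the two endpoints MarketsandMarkets cites; the short calculation below simply verifies the arithmetic.

```python
# Sanity-check the quoted CAGR from the endpoints cited above.
start, end, years = 198e6, 2.10e9, 5   # $198 million in 2012 to $2.10 billion in 2017
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.2%}")   # ~60.4%, consistent with the quoted 60.43% (the small gap is endpoint rounding)
```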

But just what are they counting? SDN isn’t a specific product category, like a switch; it’s an architectural model. In IDC’s case, the numbers include SDN-specific switching and routing as well as services and software (presumably including controllers and the applications that run on them). MarketsandMarkets is counting  SDN “switching, controllers, cloud virtualization applications, and network virtualization security solutions.”

Still, established networking vendors will argue that the SDN hype is out of proportion with on-the-ground reality. In that respect, they can cite recent numbers from Infonetics Research estimating that global revenue derived from sales of data-center network equipment — the market segment in which SDN is likely to make the most headway during the next several years — was worth $2.2 billion in the first quarter of 2012. Those numbers include sales of Ethernet switches, application delivery controllers (ADCs), and WAN-optimization appliances.

This is where things get difficult and admittedly subjective. If we’re considering where the industry and customers stand today, then there’s no question that SDN gets more attention than it warrants. Most of us, including enterprise IT staff, do not wish to live in the past and don’t have the luxury of looking too far into the future.

That said, some people have the job of looking ahead and trying to figure out how the future will be different from the present. In the context of SDN, those constituencies would include the aforementioned market researchers as well as venture capitalists, strategic planners, and technology visionaries. I would also include in this class industry executives at established and emerging vendors, both those directly involved in networking technologies and those that interact with networking infrastructure in areas such as virtualization and data-center management and orchestration.

For these individuals, SDN is more than a sensationalized will-o’-the-wisp.  It’s coming. The only question is when, and getting that timing right will be tremendously important.

I suppose my point here is that some can afford to be dismissive of SDN, but others definitely cannot and should not. Is interest in SDN overdone? That’s subjective, and therefore it’s your call. I, for one, will continue to pay close attention to developments in a realm that is proving refreshingly dynamic, both technologically and as an emerging market.

Putting an ONF Conspiracy Theory to Rest

We know that the Open Networking Foundation (ONF) is controlled by the six major service providers that constitute its board of directors.

It is no secret that the ONF is built this way by design. The board members wanted to make sure that they got what they wanted from the ONF’s deliberations, and they felt that existing standards bodies, such as the IETF and IEEE, were gerrymandered and dominated by vendors with self-serving agendas.

The ONF was devised with a different purpose in mind — not to serve the interests of the vendors, but to further the interests of the service-provider community, especially the service providers who sit on the ONF’s board of directors. In their view, conventional networking was a drag on their innovation and business agility, obstructing progress elsewhere in their data centers and IT operations. Whereas compute and storage resources had been virtualized and orchestrated, networking remained a relatively costly and unwieldy fiefdom ruled by “masters of complexity” rummaging manually through an ever-expanding bag of ad-hoc protocols.

Organizing for Clout

Not getting what they desired from their networking vendors, the service providers decided to seize the initiative. Acting on its own,  Google already had done just that, designing and deploying DIY networking gear.

The study of political elites tells us that an organized minority comprising powerful interests can impose its will on a disorganized majority.  In the past, as individual companies, the ONF board members had been unable to counter the agendas of the networking vendors. Together, they hoped to effect the change they desired.

So, we have the ONF, and it’s unlike the IETF and the IEEE in more ways than one. While not a standards body — the ONF describes itself as a “non-profit consortium dedicated to the transformation of networking through the development and standardization of a unique architecture called Software-Defined Networking (SDN)” — there’s no question that the ONF wants to ensure that it defines and delivers SDN according to its own rules. And at its own pace, too, not tied to the product-release schedules of networking vendors.

In certain respects, the ONF is all about a consortium of customers taking control and dictating what it wants from the vendor community, which, in this case, should be understood to comprise not only OEM networking vendors, but also ODMs, SDN startups, and purveyors of merchant silicon.

Vehicle of Insurrection?

Just to ensure that its leadership could not be subverted, though, the ONF stipulated that vendors would not be permitted to serve on its board of directors. That means that representatives of Cisco, Juniper, and HP Networking, for example, will never be able to serve on the ONF board.

At least within their self-determined jurisdiction, the ONF’s board members call all the shots. Or do they?

Commenting on my earlier post regarding Cisco’s SDN counterstrategy, a reader, who wished to remain anonymous (Anon4This1), wrote the following:

Regarding this point: “Ultimately, [Cisco] does not control the ONF.”

That was one of the key reasons for the creation of the ONF. That is, there was a sense that existing standards bodies were under the collective thumb of large vendors. ONF was created such that only the ONF board can vote on binding decisions, and no vendors are allowed on the board. Done, right? Ah, well, not so fast. The ONF also has a Technical Advisory Group (TAG). For most decisions, the board actually acts on the recommendations of the TAG. The TAG does not have the same membership restrictions that apply to the ONF board. Indeed, the current chairman of the TAG is none other than influential Cisco honcho, Dave Ward. So if the ONF board listens to the TAG, and the TAG listens to its chairman… Who has more control over the ONF than anyone? https://www.opennetworking.org/about/tag

Board’s Iron Grip

If you follow the link provided by my anonymous commenter, you will find an extensive overview of the ONF’s Technical Advisory Group (TAG). Could the TAG, as constituted, be the tail that wags the ONF dog?

My analysis leads me to a different conclusion. As I see it, the TAG serves at the pleasure of the ONF board of directors, individually and collectively. Nobody serves on the TAG without the express consent of the board of directors. Moreover, “TAG term appointments are annual and the chair position rotates quarterly.” While Cisco’s Dave Ward serves as the current chair, his term will expire and somebody else will succeed him.

What about the suggestion that the “board actually acts on recommendations of the TAG,” as my commenter asserts? In many instances, that might be true, but the form and substance of the language on the TAG webpage articulates clearly that the TAG is, as its acronym denotes, an advisory body that reports to (and “responds to requests from”) the ONF board of directors. The TAG offers technical guidance and recommendations, but the board makes the ultimate decisions. If the board doesn’t like what it’s getting from TAG members, annual appointments presumably can be allowed to expire and new members can succeed those who leave.

Currently, two networking-gear OEMs are represented on the ONF’s TAG. Cisco is represented by the aforementioned David Ward, and HP is represented by Jean Tourrilhes, an HP-Labs researcher in Networking and Communication who has worked with OpenFlow since 2008. These gentlemen seem to be on the TAG because those who run the ONF believe they can make meaningful contributions to the development of SDN.

No Coup

It’s instructive to note the company affiliations of the other six members serving on TAG. We find, for instance, Nicira CTO Martin Casado, as well as Verizon’s Dave McDysan, Google’s Amin Vahdat, Microsoft’s Albert Greenberg, Broadcom’s Puneet Agarwal, and Stanford’s Nick McKeown, who also is known as a Nicira co-founder and serves on that company’s board of directors.

If any company has pull, then, on the ONF’s TAG, it would seem to be Nicira Networks, not Cisco Systems. After all, Nicira has two of its corporate directors serving on the ONF’s TAG. Again, though, both gentlemen from Nicira are highly regarded and esteemed SDN proponents, who played critical roles in the advent and development of OpenFlow.

And that’s my point. If you look at who serves on the ONF’s TAG, you can clearly see why they’re in those roles and you can understand why the ONF board members would desire their contributions.

The TAG as a vehicle for an internal coup d’etat at the ONF? That’s one conspiracy theory that I’m definitely not buying.

SDN Controller Ecosystems Critical to Market Success

Software-defined networking (SDN) is a relatively new phenomenon. Consequently, analogies to preceding markets and technologies often are invoked by its proponents to communicate key concepts. One oft-cited analogy involves the server-based solution stack and the nascent SDN stack.

In this comparison, server hardware equates to networking hardware, with the CPU instruction set positioned as analogous to the OpenFlow instruction set. Above those layers, the server operating system is said to be analogous to the SDN controller, which effectively runs a “network operating system.” Above that layer, the analogy extends to similarities between server OS and network OS APIs and to the applications that run atop both stacks.

Analogies and Implications

Let’s consider the comparison of the server operating system to the SDN controller.  While the analogy is apt, it carries implications that prospective early adopters of SDN need to fully understand. As we’ve discussed before, SDN controllers based on OpenFlow today carry no guarantees of interoperability. An application that runs on one controller might not be available (or run) on another controller, just as an application developed for a Windows server might not be available on Linux (and vice versa).

Moreover, we don’t know how difficult it will be to port applications from one OpenFlow-based controller to another. It could be a trivial exercise or an agonizing one. There are many nagging questions, far fewer answers.
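To make the portability concern concrete, here is a deliberately hypothetical sketch: the same trivial “block this host” application written against two imaginary controller APIs. Neither API corresponds to any real controller on the market; the point is simply that, absent a standard northbound interface, even this much logic has to be rewritten for each controller.

```python
# Hypothetical illustration of northbound-API lock-in. Neither "ControllerA" nor
# "ControllerB" is a real product; both APIs are invented to show the shape of the problem.

# --- Written against imaginary ControllerA ---------------------------------
def block_host_a(controller, ip):
    rule = controller.new_rule(match={"ipv4_src": ip}, action="DROP", priority=100)
    controller.push_rule(rule)

# --- The "same" application against imaginary ControllerB ------------------
def block_host_b(session, ip):
    # Different object model, different call sequence, different vocabulary.
    flow = session.flows.create()
    flow.match.source_address = ip
    flow.actions = []            # an empty action list means "drop" in this imaginary API
    flow.priority = 100
    session.flows.commit(flow)

# The OpenFlow messages that eventually reach the switches may be functionally identical,
# but the application code above is not portable between the two controllers.
```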

Keep in mind that this is an entirely different matter from the question of interoperability between OpenFlow-based controllers and switches. Presuming the OpenFlow standard is adhered to and implemented correctly in all cases, OpenFlow-based controllers on the market today should be able to communicate with OpenFlow-based switches.
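For contrast, here is a minimal sketch of the layer where interoperability actually is pinned down. The OpenFlow 1.0 specification fixes a common eight-byte message header (version, type, length, transaction ID), so a compliant controller and switch from different vendors can at least complete the initial HELLO exchange, whatever software sits above that layer. This assumes the published OpenFlow 1.0 header layout and is an illustration, not production code.

```python
import struct

OFP_VERSION_1_0 = 0x01   # OpenFlow 1.0 wire-protocol version
OFPT_HELLO = 0           # message type used in the initial handshake


def ofp_hello(xid: int = 1) -> bytes:
    """Build an OpenFlow 1.0 HELLO message (header only, 8 bytes)."""
    length = 8  # HELLO carries no body in OpenFlow 1.0
    return struct.pack("!BBHI", OFP_VERSION_1_0, OFPT_HELLO, length, xid)


if __name__ == "__main__":
    print(ofp_hello().hex())   # -> 0100000800000001
```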

But interoperability (or lack thereof) is an unwritten book at the layers where the SDN controller (an NOS akin to a server-based operating system) and the NOS APIs reside. This poses a potential problem for the market development of a viable SDN ecosystem, at least for the enterprise market. (It’s not as much of an issue for the gargantuan service providers that drive the agenda of the Open Networking Foundation; those companies have ample resources and will make their own internal standardization decisions relating to controllers and practically everything else that falls under the SDN rubric.)

Controller Derby

At SDNCentral, no fewer than seven open-source OpenFlow controllers are listed. Three of those controllers are Java based: Beacon, Floodlight, and Jaxon. The other open-source OpenFlow controllers listed at SDNCentral are FlowER, NodeFlow, POX, and Trema. Additionally, OpenFlow controllers have been developed by several companies, including NEC, Big Switch Networks (which offers a commercial version of the Floodlight controller), and Nicira Networks, which has built on the foundation of the Onix controller.

Interestingly, Google and Ericsson also have based their controllers on Onix. In a blog post last summer, Nicira CTO Martin Casado described Onix as a “general SDN controller” rather than an OpenFlow controller. Casado admitted that he was devising terminology on the fly, but he defined a “general SDN controller” as “one in which the control application is fully decoupled from both the underlying protocol(s) communicating with the switches, and the protocol(s) exchanging state between controller instances (assuming the controllers can run in a cluster).” So, OpenFlow could be part of the picture, but it doesn’t have to be there; another mechanism could substitute for it.
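A rough sketch may help illustrate the decoupling Casado describes: the control application is written against an abstract southbound interface, and OpenFlow becomes just one pluggable implementation of that interface. All class and method names below are hypothetical illustrations, not the actual Onix (or any vendor’s) API.

```python
from abc import ABC, abstractmethod


class SouthboundDriver(ABC):
    """What the control application sees; the wire protocol is hidden below."""

    @abstractmethod
    def install_path(self, src: str, dst: str, ports) -> None: ...


class OpenFlowDriver(SouthboundDriver):
    def install_path(self, src, dst, ports):
        # Would translate the path into per-switch flow_mod messages.
        print(f"flow_mods for {src}->{dst} via ports {ports}")


class VendorXDriver(SouthboundDriver):
    def install_path(self, src, dst, ports):
        # Same intent, different wire protocol (e.g., a proprietary API).
        print(f"VendorX config for {src}->{dst} via ports {ports}")


def control_app(driver: SouthboundDriver) -> None:
    """The application logic never mentions OpenFlow at all."""
    driver.install_path("h1", "h2", ports=[1, 3, 2])


if __name__ == "__main__":
    control_app(OpenFlowDriver())   # OpenFlow is one option...
    control_app(VendorXDriver())    # ...but not a requirement
```

Under this kind of design, swapping in another southbound protocol means writing a new driver rather than rewriting the control application, which is precisely the flexibility Casado attributes to a general SDN controller.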

Casado conceded that Onix is the right controller for many environments, but not for others. Wrote Casado:

There have been multiple control applications built on Onix, and it is used in large production deployments in the data centers, as well as in the access and core networks. However, it is probably too heavyweight for smaller networks (the home or small enterprise), and it is certainly too complex to use as a basic research tool.

Horses for Courses

So, there are horses for courses, and there are controllers for applications. Early indications suggest that it will not be a one-size-fits-all world. Nonetheless, at the end of his blog post, Casado expressed the opinion that “standards should be kept away” from controller design, and that the market’s natural-selection process should be allowed to run its course.

Perhaps that is the right prescription. It seems too early for leaden-footed standards bodies, such as the IETF and IEEE, to intervene. Nevertheless, customers will have to be wary. They’ll have to do their research, perform due diligence, and thoroughly understand the strengths, weaknesses, and characteristics of candidate controllers. Without assured controller interoperability, customers that adopt and deploy applications on one controller might have considerable difficulty shifting their investment and their software elsewhere.

Of course, if Google and the other major service providers who rule the roost at the ONF want to expedite matters, they could publicly and aggressively endorse one or two controller platforms as de facto standards. But that seems unlikely, for a variety of reasons. Even if it were to happen, as Casado points out, any controller that proves favorable at large cloud service providers might not be the best choice for enterprises, especially smaller ones.

Opening for Networking’s Old Guard

At this point, it’s not clear how the SDN controller market will shake out. SDN controllers will struggle for sustenance not only against each other, but also against networking’s conventional distributed control planes already on the market. They will also contend with so-called hybrid approaches, in which the data path is jointly controlled by conventional box-based control planes and by server-based controllers. Those hybrid approaches will be articulated and promoted by the major networking vendors, all of whom are keen to retard “pure SDN’s” advance from the environs of the largest cloud service providers to those of enterprise buyers. (As mentioned previously, the ONF also regards the hybrid-control approach as a transitional necessity for customers moving from their networks as constituted today to future SDN architectures.)

In that regard, the big networking powers are fortunate that the ONF’s early mandate is focused primarily, if not exclusively, on the requirements of the large cloud-service providers that populate its board of directors. The ambiguity surrounding controllers and their interoperability (or lack thereof) represents another factor that will dissuade enterprise buyers from taking an early leap of faith into the arms of SDN purveyors.

The faster the SDN market sorts out a controller hierarchy — determining the suitability and market prevalence of certain controllers in specific application environments — the sooner valuable ecosystems will form and enterprises will take serious notice.

For now, though, a shakeout doesn’t appear imminent.

Cisco Not Going Anywhere, but Changes Coming to Networking

Initially, I intended not to comment on the Wired article on Nicira Networks. While it contained some interesting quotes and a few good observations, its tone and too much of its focus were misplaced. It was too breathless, trying too hard to make the story fit into a simplistic, sensationalized narrative of outsized personalities and the threatened “irrelevance” of Cisco Systems.

There was not enough focus on how Nicira’s approach to network virtualization and its particular conception of software-defined networking (SDN) might open new horizons and enable new possibilities for the humble network. On his blog, Plexxi’s William Koss, commenting not on the Wired article but on the industry’s broader reaction to SDN, wrote the following:

In my view, SDN is not a tipping point.  SDN is not obsoleting anyone.  SDN is a starting point for a new network.  It is an opportunity to ask if I threw all the crap in my network in the trash and started over what would we build, how would we architect the network and how would it work?  Is there a better way?

Cisco Still There

I think that’s a healthy focus. As Koss writes, and I agree, Cisco isn’t going anywhere; the networking giant will be with us for some time, tending its considerable franchise and moving incrementally forward. It will react more than it “proacts” — yes, I apologize now for the Haigian neologism — but that’s the fate of any industry giant of a certain age, Apple excepted.

Might Cisco, more than a decade from now, be rendered irrelevant?  I, for one, don’t make predictions over such vast swathes of time. Looking that far ahead and attempting to forecast outcomes is a mug’s game. It is nothing but conjecture disguised as foresight, offered by somebody who wants to flash alleged powers of prognostication while knowing full well that nobody else will remember the prediction a month from now, much less years into the future.

As far out as we can see, Cisco will be there. So, we’ll leave ambiguous prophecies to the likes of Nostradamus, who, I believe, forecast the deaths of OS/2, Token Ring, and desktop ATM.

Answers are Coming

Fortunately, I think we’re beginning to get answers as to where and how Nicira’s approaches to network virtualization and SDN can deliver value and open new possibilities. The company has been making news with customer testimonials that include background on how its technology has been deployed. (Interestingly, the company has issued just three press releases in 2012, and all of them deal with customer deployments of its Network Virtualization Platform (NVP).)

There’s a striking contrast between the moderation implicit in Nicira’s choice of press releases and the unchecked grandiosity of the Wired story. Then again, I understand that vendors have little control over what journalists (and bloggers) write about them.

That said, one particular quote in the Wired article provoked some thinking from this quarter. I had thought about the subject previously, but the following excerpt provided some extra grist for my wood-burning mental mill:

In virtualizing the network, Nicira lets you make such changes in software, without touching the underlying hardware gear. “What Nicira has done is take the intelligence that sits inside switches and routers and moved that up into software so that the switches don’t need to know much,” says John Engates, the chief technology officer of Rackspace, which has been working with Nicira since 2009 and is now using the Nicira platform to help drive a new beta version of its cloud service. “They’ve put the power in the hands of the cloud architect rather than the network architect.”

Who Controls the Network?

It’s the last sentence that really signifies a major break with how things have been done until now, and this is where the physical separation of the control plane from the switch has potentially major implications. As Scott Shenker has noted, network architects and network professionals have made their bones by serving as “masters of complexity,” using relatively arcane knowledge of proprietary and industry-standard protocols to keep networks functioning amid the increasing demands of virtualized compute and storage infrastructure.

SDN promises an easier way, one that potentially offers a faster, simpler, less costly approach to network operations. It also offers the creative possibility of unleashing new applications and new ways of optimizing data-center resources. In sum, it can amount to a compelling business case, though not everywhere, at least not yet.

Where it does make sense, however, cloud architects and the devops crowd will gain primacy and control over the network. This trend is reflected already in the press releases from Nicira. Notice that customer quotes from Nicira do not come from network architects, network engineers, or anybody associated with conventional approaches to running a network. Instead, we see encomiums to NVP offered by cloud architects, cloud-architecture executives, and VPs of software development.

Similarly, and not surprisingly, Nicira typically doesn’t sell NVP to the traditional networking professional. It sells to the same “cloudy” types to whom quotes are attributed in its press releases. It’s true, too, that Nicira’s SDN business case and value proposition play better at cloud service providers than at enterprises.

Potentially a Big Deal

This is an area where I think the advent of the programmable server-based controller is a big deal. It changes the customer power dynamic, putting the cloud architects and the programmers in the driver’s seat, effectively placing the network under their control. (Jason Edelman has begun thinking about what the rise of SDN means for the network engineer.) In this model, the network eventually gets subsumed under the broader rubric of computing and becomes just another flexible piece of cloud infrastructure.

Nicira can take this approach because it has nothing to lose and everything to gain. Of course, the same holds true of other startup vendors espousing SDN.

Perhaps that’s why Koss closed his latest post by writing that “the architects, the revolutionaries, the entrepreneurs, the leaders of the next twenty years of networking are not working at the incumbents.”  The word “revolutionaries” seems too strong, and the incumbents will argue that Koss, a VP at startup Plexxi, isn’t an unbiased party.

They’re right, but that doesn’t mean he’s wrong.