Category Archives: OpenStack

Big Switch Emphasizes Ecosystem, Channel

Big Switch Networks made the news very early today — one article was posted precisely at midnight ET — with an announcement of general availability of its SDN controller, two applications that run on it, and an ecosystem of partners.

Customers also are in the picture, though it wasn’t made explicit in the Big Switch press release whether Fidelity Investments and Goldman Sachs are running Big Switch’s products in production networks.  In a Network World article, however, Jim Duffy writes that Fidelity and Goldman Sachs are “production customers for the Big Switch Open SDN product suite.” 

Controller, Applications, Ecosystem

The company’s announced products, encompassed within its Open Software Defined Networking architecture, feature the Big Network Controller, a proprietary version of the open-source Floodlight controller, and the two aforementioned applications. An SDN controller without applications is like, well, an operating system without applications. Accordingly, Big Switch has introduced Big Virtual Switch, an application for network virtualization, and Big Tap, a unified network monitoring application. 
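Since Big Network Controller derives from the open-source Floodlight controller, it is worth recalling how applications drive that controller in practice. The sketch below is a minimal illustration against Floodlight’s open-source “static flow pusher” REST interface, not Big Switch’s commercial APIs; the controller address, switch DPID, and port numbers are placeholders.

```python
# Minimal sketch: pushing a forwarding rule to an open-source Floodlight
# controller via its static flow pusher REST API. All addresses and
# identifiers below are placeholders.
import json
import requests  # third-party HTTP library

CONTROLLER = "http://192.0.2.10:8080"  # hypothetical controller address

flow = {
    "switch": "00:00:00:00:00:00:00:01",  # DPID of the target switch
    "name": "port1-to-port2",             # arbitrary rule name
    "priority": "32768",
    "ingress-port": "1",                  # match traffic arriving on port 1
    "active": "true",
    "actions": "output=2",                # forward matches out port 2
}

resp = requests.post(CONTROLLER + "/wm/staticflowentrypusher/json",
                     data=json.dumps(flow))
print(resp.status_code, resp.text)
```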

Big Virtual Switch is the company’s answer to Nicira’s Network Virtualization Platform (NVP).  Big Switch says the product supports up to 32,000 virtual-network segments and can be integrated with cloud-management platforms such as OpenStack (Quantum), CloudStack, Microsoft System Center, and VMware vCenter.  As Big Switch illustrates on its website, Big Virtual Switch can be deployed on Big Network Controller in pure overlay networks, in pure OpenFlow networks, and in hybrid network-virtualization environments.  

According to the company, Big Virtual Switch can deliver significant CAPEX and OPEX benefits. A graphical figure — tagged “Economics of Big Virtual Switch” — included in a product data sheet claims the company’s L2/L3 network virtualization facilitates “up to 50% more VMs per rack” and delivers CAPEX savings of $500,000 per rack annually and OPEX savings of $30,000 per rack annually. For those estimates, Big Switch assumes a rack size of 40 servers and suggests savings can be accrued across servers, operating-system instances, storage, networking, and operations. 
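Those figures are easier to evaluate with a quick back-of-the-envelope calculation. A minimal sketch follows; the baseline VM density per server is my assumption, not a Big Switch number, so treat the per-VM result as illustrative only.

```python
# Back-of-the-envelope check on the data-sheet claims.
SERVERS_PER_RACK = 40   # per the Big Switch data sheet
VMS_PER_SERVER = 20     # assumed baseline density (not a Big Switch figure)
DENSITY_GAIN = 0.50     # "up to 50% more VMs per rack"

CAPEX_SAVINGS_PER_RACK = 500_000  # claimed, per rack per year
OPEX_SAVINGS_PER_RACK = 30_000    # claimed, per rack per year

baseline_vms = SERVERS_PER_RACK * VMS_PER_SERVER       # 800 VMs per rack
improved_vms = int(baseline_vms * (1 + DENSITY_GAIN))  # 1,200 VMs per rack

total_savings = CAPEX_SAVINGS_PER_RACK + OPEX_SAVINGS_PER_RACK
print(f"VMs per rack: {baseline_vms} -> {improved_vms}")
print(f"Claimed savings per VM per year: ${total_savings / improved_vms:.0f}")
```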

Strategies in Flux

Big Virtual Switch and Big Tap are essential SDN applications, but the company’s ultimate success in the marketplace will turn on the support its Big Network Controller receives from third-party vendors. Big Switch is aware of its external dependencies, which is why it has placed so much emphasis on its ecosystem, which it says includes A10 Networks, Arista Networks, Broadcom, Brocade, Canonical, Cariden Technologies, Citrix, Cloudscaling, Coraid, Dell, Endace, Extreme Networks, F5 Networks, Fortinet, Gigamon, Infoblox, Juniper Networks, Mellanox Technologies, Microsoft, Mirantis, Nebula, Palo Alto Networks, Piston Cloud Computing, Radware, StackOps, ThreatSTOP, and vArmour. The Big Switch press release includes an appendix of “supporting quotes” from those companies, but the company will require more than lip service from its ecosystem. 

Some companies will find that their interests are well aligned with those of Big Switch, but others are likely to be less motivated to put energy and resources into Big Switch’s SDN platform.  If you consider the vendor names listed above, you might deduce that the SDN strategies of more than a few are in flux. Some are considering whether to offer SDN controllers of their own. Even those who have no controller aspirations might be disinclined to bet too heavily or too early on a controller platform. They’ll follow the customers and the money. 

A growing number of commercial controllers are on the market (VMware/Nicira, NEC, and Big Switch) or have been announced as coming to market (IBM, HP, Cisco). Others will follow. Loyalties will shift as controller fortunes wax and wane. 

Courting the Channel 

With that in mind, Big Switch is seeking to enlist channel partners as well as technology partners. In a CRN article, we learn that Big Switch “has begun to recruit systems integrator and data center infrastructure-focused solution providers that can consult and design network architecture using Big Switch software and products from a galaxy of ecosystem partners.” In fact, Big Switch wants all its commercial sales to go through channel partners. 

In the CRN piece, Dave Butler, VP of sales at Big Switch, is candid about the symbiotic relationship the company desires from partners:

“None of our products work well alone in a data center — this is a very rigorous and rich ecosystem of partners. We’ll pay a finder’s fee to anyone who brings the right opportunity to us, but we’re not really a product sale. We need the integrators that can create a bundled solution, because that’s what makes the difference.”

. . . . “We bring them (partners) in as the specialist, and they have probably a greater touch than we might. We are not taking deals direct. Then, you have to do all the work by yourself. This is a perfect solution for their services and expertise. And, they can make money with us.”

Needs a Little Help from Its Friends

The plan is clear. Big Switch’s vendor ecosystem is meant to attract channel partners that already are selling those vendors’ products and are interested in expanding into SDN solutions. The channel partners, including SIs and datacenter-solution providers, will then bring Big Switch’s SDN platform to customers, with whom they have existing relationships. 

In theory, it all coheres. Big Switch knows it can’t go it alone against industry giants. It knows it needs more than a little help from its friends in the vendor community and the channel. 

For Big Switch, the vendor ecosystem expedites channel recruitment, and an effective channel accelerates exposure to customers. Big Switch has to move fast and demonstrate staying power. The controller race is far from over. 

Chinese Merchant-Silicon Vendor Joins ONF, Enters SDN Picture

Switching-silicon ODM/OEM Centec Networks last week became the latest company to join the Open Networking Foundation (ONF).

According to a press release, Centec is “committed to contributing to SDN development as a merchant silicon vendor and to pioneering in the promotion of SDN adoption in China.” From the ONF’s standpoint, the more merchant silicon on the market for OpenFlow switches, the better.  Expansion in China doubtless is a welcome prospect, too.

Established in 2005, Centec has been financed by China-Singapore Suzhou Industrial Park Venture Capital, Delta Venture Enterprise, Infinity I-China Investments (Israel), and Suzhou Rongda. A little more than a year ago, Centec announced a $10.7-million “C” round of financing, in which Delta Venture Enterprise, Infinity I-China Investments (Israel), and Suzhou Rongda participated.

Acquisition Rumor

Before that round was announced, Centec’s CEO James Sun, formerly of Cisco and of Fore Systems, told Light Reading’s Craig Matsumoto that the company aspired to become an alternative supplier to Broadcom in the Ethernet merchant-silicon market. As a Chinese company, Centec not surprisingly has cultivated relationships with Chinese carriers and network-gear vendors. In his Light Reading article, in fact, Matsumoto cited a rumor that Centec had declined an acquisition offer from HiSilicon Technologies Co. Ltd., the semiconductor subsidiary of Huawei Technologies, China’s largest network-equipment vendor.

Huawei has been working not only to bolster its enterprise-networking presence, but also to figure out how best to utilize SDN and OpenFlow (and OpenStack, too).  Like Centec, Huawei is a member of the ONF, and it also has been active in IETF and IRTF discourse relating to SDN. What’s more, Huawei has been hiring SDN-savvy engineers in China and in the U.S.

As for Centec, the company made its debut on the SDN stage early this year at the Ethernet Technology Summit, where CEO James Sun gave a silicon vendor’s perspective on OpenFlow and spoke about the company’s plans to release a reference design based on Centec’s TransWarp switching silicon and an SDK with support for Open vSwitch 1.2. That reference design subsequently was showcased at the Open Networking Summit in April.

It will be interesting to see how Centec develops, both in competitive relation to Broadcom and within the context of the SDN ecosystem.

Some Thoughts on VMware’s Strategic Acquisition of Nicira

If you were a regular or occasional reader of Nicira Networks CTO Martin Casado’s blog, Network Heresy, you’ll know that his penultimate post dealt with network virtualization, a topic of obvious interest to him and his company. He had written about network virtualization many times, and though Casado would not describe the posts as such, they must have looked like compelling sales pitches to the strategic thinkers at VMware.

Yesterday, as probably everyone reading this post knows, VMware announced its acquisition of Nicira for $1.26 billion. VMware will pay $1.05 billion in cash and $210 million in unvested equity awards.  The ubiquitous Frank Quattrone and his Qatalyst Partners, which reportedly had been hired previously to shop Brocade Communications, served as Nicira’s adviser.

Strategic Buy

VMware should have surprised no one when it emphasized that its acquisition of Nicira was a strategic move, likely to pay off in years to come, rather than one that will produce appreciable near-term revenue. As Reuters and the New York Times noted, VMware’s buy price for Nicira was 25 times the amount ($50 million) invested in the company by its financial backers, which include venture-capital firms Andreessen Horowitz, Lightspeed, and NEA. Diane Greene, co-founder and former CEO of VMware — replaced four years ago by Paul Maritz — had an “angel” stake in Nicira, as did Andy Rachleff, a former general partner at Benchmark Capital.

Despite its acquisition of Nicira, VMware says it’s not “at war” with Cisco. Technically, that’s correct. VMware and its parent company, EMC, will continue to do business with Cisco as they add meat to the bones of their data-center virtualization strategy. But the die was cast, and Cisco should have known it. There were intimations previously that the relationship between Cisco and EMC had been infected by mutual suspicion, and VMware’s acquisition of Nicira adds to the fear and loathing. Will Cisco, as rumored, move into storage? How will Insieme, helmed by Cisco’s aging switching gods, deliver a rebuttal to VMware’s networking aspirations? It won’t be too long before the answers trickle out.

Still, for now, Cisco, EMC, and VMware will protest that it’s business as usual. In some ways, that will be true, but it will also be a type of strategic misdirection. The relationship between EMC and Cisco will not be the same as it was before yesterday’s news hit the wires. When these partners get together for meetings, candor could be conspicuous by its absence.

Acquisitive Roads Not Traveled

Some have posited that Cisco might have acquired Nicira if VMware had not beaten it to the punch. I don’t know about that. Perhaps Cisco might have bought Nicira if the asking price were low, enabling Cisco to effectively kill the startup and be done with it. But Cisco would not have paid $1.26 billion for a company whose approach to networking directly contradicts Cisco’s hardware-based business model and market dominance. One typically doesn’t pay that much to spike a company, though I suppose if the prospective buyer were concerned enough about a strategic technology shift and a major market inflection, it might do so. In this case, though, I suspect Cisco was blindsided by VMware. It just didn’t see this coming — at least not now, not at such an early stage of Nicira’s development.

Similarly, I didn’t see Microsoft or Citrix as buyers of Nicira. Microsoft is distracted by its cloud-service provider aspirations, and the $1.26 billion would have been too rich for Citrix.

IBM’s Moves and Cisco’s Overseas Cash Hoard

One company I had envisioned as a potential (though less likely) acquirer of Nicira was IBM, which already has a vSwitch. IBM might now settle for the SDN-controller technology available from Big Switch Networks. The two have been working together on IBM’s Open Data Center Interoperable Network (ODIN), and Big Switch’s technology fits well with IBM’s PureSystems and its top-down model of having application workloads command and control virtualized infrastructure. As the second network-virtualization domino to fall, Big Switch likely will go for a lower price than did Nicira.

On Twitter, Dell’s Brad Hedlund asked whether Cisco would use its vast cash hoard to strike back with a bold acquisition of its own. Cisco has two problems here. First, I don’t see an acquisition that would effectively blunt VMware’s move. Second, about 90 percent of Cisco’s cash (more than $42 billion) is offshore, and CEO John Chambers doesn’t want to take a tax hit on its repatriation. He had been hoping for a “tax holiday” from the U.S. government, but that’s not going to happen in the middle of an election campaign, during a macroeconomic slump in which plenty of working Americans are struggling to make ends meet. That means a significant U.S.-based acquisition likely is off the table, unless the target company is very small or is willing to take Cisco stock instead of cash.

Cisco’s Innovator’s Dilemma

Oh, and there’s a third problem for Cisco, mentioned earlier in this prolix post. Cisco doesn’t want to embrace this SDN stuff. Cisco would rather resist it. The Cisco ONE announcement really was about Cisco’s take on network programmability, not about SDN-type virtualization in which overlay networks run atop an underlying physical network.

Cisco is caught in a classic innovator’s dilemma, held captive by the success it has enjoyed selling prodigious amounts of networking gear to its customers, and I don’t think it can extricate itself. It’s built a huge and massively successful business selling a hardware-based value proposition predicated on switches and routers. It has software, but it’s not really a software company.

For Cisco, the customer value, the proprietary hooks, are in its boxes. Its whole business model — which, again, has been tremendously successful — is based around that premise. The entire company is based around that business model.  Cisco eventually will have to reinvent itself, like IBM did after it failed to adapt to client-server computing, but the day of reckoning hasn’t arrived.

On the Defensive

Expect Cisco to continue to talk about the northbound interface (which can provide intelligence from the switch) and about network programmability, but don’t expect networking’s big leopard to change its spots. Cisco will try to portray the situation differently, but it’s defending rather than attacking, trying to hold off the software-based marauders of infrastructure virtualization as long as possible. The doomsday clock on when they’ll arrive in Cisco data centers just moved up a few ticks with VMware’s acquisition of Nicira.

What about the other networking players? Sadly, HP hasn’t figured out what to do about SDN, even though OpenFlow is available on its former ProCurve switches. HP has a toe dipped in the SDN pool, but it doesn’t seem willing to take the initiative. Juniper, which previously displayed ingenuity in bringing forward QFabric, is scrambling for an answer. Brocade is pragmatically embracing hybrid control planes to maintain account presence and margins in the near- to intermediate-term.

Arista Networks, for its part, might be better positioned to compete on networking’s new playing field. Arista Networks’ CEO Jayshree Ullal had the following to say about yesterday’s news:

“It’s exciting to see the return of innovative networking companies and the appreciation for great talent/technology. Software Defined Networking (SDN) is indeed disrupting legacy vendors. As a key partner of VMware and co-innovator in VXLANs, we welcome the interoperability of Nicira and VMWare controllers with Arista EOS.”

Arista’s Options

What’s interesting here is that Arista, which invariably presents its Extensible OS (EOS) as “controller friendly,” earlier this year demonstrated interoperability with controllers from VMware, Big Switch Networks, and Nebula, which has built a cloud controller for OpenStack.

One of Nebula’s investors is Andy Bechtolsheim, whom knowledgeable observers will recognize as the chief development officer (CDO) of, and major investor in, Arista Networks.  It is possible that Bechtolsheim sees a potential fit between the two companies — one building a cloud controller and one delivering cloud networking. To add fuel to this particular fire, which may or may not emit smoke, note that the Nebula cloud controller already features Arista technology, and that Nebula is hiring a senior network engineer, who ideally would have “experience with cloud infrastructure (OpenStack, AWS, etc.) . . . and familiarity with OpenFlow and Open vSwitch.”

Open or Closed?

Speaking of Open vSwitch, Matt Palmer at SDN Central will feel some vindication now that VMware has purchased a company whose engineering team has made significant contributions to the OVS code. Palmer doubtless will cast a wary eye on VMware’s intentions toward OVS, but both Steve Herrod, VMware’s CTO, and Martin Casado, Nicira’s CTO, have provided written assurances that their companies, now combining, will not retreat from commitments to OVS and to OpenFlow and Quantum, the OpenStack networking project.

Meanwhile, GigaOm’s Derrick Harris thinks it would be bad business for VMware to jilt the open-source community, particularly in relation to hypervisors, which “have to be treated as the workers that merely carry out the management layer’s commands. If all they’re there to do is create virtual machines that are part of a resource pool, the hypervisor shouldn’t really matter.”

This seems about right. In this brave new world of virtualized infrastructure, the ultimate value will reside in an intelligent management layer.

PS: I wrote this post under a slight fever and a throbbing headache, so I would not be surprised to discover belatedly that it contains at least a couple typographical errors. Please accept my apologies in advance.

Cisco’s Storage Trap

Recent commentary from Barclays Capital analyst Jeff Kvaal has me wondering whether Cisco might push into the storage market. In turn, I’ve begun to think about a strategic drift at Cisco that has been apparent for the last few years.

But let’s discuss Cisco and storage first, then consider the matter within a broader context.

Risks, Rewards, and Precedents

Obviously a move into storage would involve significant risks as well as potential rewards. Cisco would have to think carefully, as it presumably has done, about the likely consequences and implications of such a move. The stakes are high, and other parties — current competitors and partners alike — would not sit idly on their hands.

Then again, Cisco has been down this road before, when it chose to start selling servers rather than relying on boxes from partners, such as HP and Dell. Today, of course, Cisco partners with EMC and NetApp for storage gear. Citing the precedent of Cisco’s server incursion, one could make the case that Cisco might be tempted to call the same play.

After all, we’re entering a period of converged and virtualized infrastructure in the data center, where private and public clouds overlap and merge. In such a world, customers might wish to get well-integrated compute, networking, and storage infrastructure from a single vendor. That’s a premise already accepted at HP and Dell. Meanwhile, it seems increasingly likely that data-center infrastructure is coming together, in one way or another, in service of application workloads.

Limits to Growth?

Cisco also has a growth problem. Despite attempts at strategic diversification, including failed ventures in consumer markets (Flip, anyone?), Cisco still hasn’t found a top-line driver that can help it expand the business while supporting its traditional margins. Cisco has pounded the table perennially for videoconferencing and telepresence, but it’s not clear that Cisco will see as much benefit from the proliferation of video collaboration as once was assumed.

To complicate matters, storm clouds are appearing on the horizon, with Cisco’s core businesses of switching and routing threatened by the interrelated developments of service-provider alienation and software-defined networking (SDN). Cisco’s revenues aren’t about to fall off a cliff by any means, but neither are they on the cusp of a second-wind surge.

Such uncertain prospects must concern Cisco’s board of directors, its CEO John Chambers, and its institutional investors.

Suspicious Minds

In storage, Cisco currently has marriages of mutual convenience with EMC (Vblocks and the sometimes-strained VCE joint venture) and with NetApp (the FlexPod reference architecture).  The lyrics of Mark James’ song Suspicious Minds are evocative of what’s transpiring between Cisco and these storage vendors. The problem is not only that Cisco is bigamous, but that the networking giant might have another arrangement in mind that leaves both partners jilted.

Neither EMC nor NetApp is oblivious to the danger, and each has taken care to reduce its strategic reliance on Cisco. Conversely, Cisco would be exposed to substantial risks if it were to abandon its existing partnership in favor of a go-it-alone approach to storage.

I think that’s particularly true in the case of EMC, which is the majority owner of server-virtualization market leader VMware as well as a storage vendor. The corporate tandem of VMware and EMC carries considerable enterprise clout, and Cisco is likely to be understandably reluctant to see the duo become its adversaries.

Caught in a Trap

Still, Cisco has boxed itself into a strategic corner. It needs growth, it hasn’t been able to find it from diversification away from the data center, and it could easily see the potential of broadening its reach from networking and servers to storage. A few years ago, the logical choice might have been for Cisco to acquire EMC. Cisco had the market capitalization and the onshore cash to pull it off five years ago, perhaps even three years ago.

Since then, though, the companies’ market fortunes have diverged. EMC now has a market capitalization of about $54 billion, while Cisco’s is slightly more than $90 billion. Even if Cisco could find a way of repatriating its offshore cash hoard without taking a stiff hit from the U.S. taxman, it wouldn’t have the cash to pull off an acquisition of EMC, whose shareholders doubtless would be disinclined to accept Cisco stock as part of a proposed transaction.

Therefore, even if it wanted to do so, Cisco cannot acquire EMC. It might have been a good move at one time, but it isn’t practical now.

Losing Control

Even NetApp, with a market capitalization of more than $12.1 billion, would rate as the biggest purchase by far in Cisco’s storied history of acquisitions. Cisco could pull it off, but then it would have to try to further counter and commoditize VMware’s virtualization and cloud-management presence through a fervent embrace of something like OpenStack or a potential acquisition of Citrix. I don’t know whether Cisco is ready for either option.

Actually, I don’t see an easy exit from this dilemma for Cisco. It’s mired in somewhat beneficial but inherently limiting and mutually distrustful relationships with two major storage players. It would probably like to own storage just as it owns servers, so that it might offer a full-fledged converged infrastructure stack, but it has let the data-center grass grow under its feet. Just as it missed a beat and failed to harness virtualization and cloud as well as it might have done, it has stumbled similarly on storage.

The status quo is likely to prevail until something breaks. As we all know, however, making no decision effectively is a decision, and it carries consequences. Increasingly, and to an extent that is unprecedented, Cisco is losing control of its strategic destiny.

Nicira Focuses on Value of NVP Deployments, Avoids Fetishization of OpenFlow

The continuing evolution of Nicira Networks has been intriguing to watch. At one point, not so long ago, many speculated on what Nicira, then still in a teasing stealth mode, might be developing behind the scenes. We now know that it was building its Network Virtualization Platform (NVP), and we’re beginning to learn about how the company’s early customers are deploying it.

Back in Nicira’s pre-launch days, the line between OpenFlow and software defined networking (SDN) was blurrier than it is today.  From the outset, though, Nicira was among the vendors that sought to provide clarity on OpenFlow’s role in the SDN hierarchy.  At the time — partly because the company was communicating in stealthy coyness — it didn’t always feel like clarity, but the message was there, nonetheless.

Not the Real Story

For instance, when Alan Cohen first joined Nicira last fall to assume the role of VP marketing, he wrote the following on his personal blog:

Virtualization and the cloud is the most profound change in information technology since client-server and the web overtook mainframes and mini computers.  We believe the full promise of virtualization and the cloud can only be fully realized when the network enables rather than hinders this movement.  That is why it needs to be virtualized.

Oh, by the way, OpenFlow is a really small part of the story.  If people think the big shift in networking is simply about OpenFlow, well, they don’t get it.

A few months before Cohen joined the company, Nicira’s CTO Martin Casado had played down OpenFlow’s role in the company’s conception of SDN. We understand now where Nicira was going, but at the time, when OpenFlow and SDN were invariably conjoined and seemingly inseparable in industry discourse, it might not have seemed as obvious.

Don’t Get Hung Up

That said, a compelling early statement on OpenFlow’s relatively modest role in SDN was delivered in a presentation by Scott Shenker, Nicira’s co-founder and chief scientist (as well as a professor in the Electrical Engineering and Computer Sciences Department at the University of California, Berkeley). I’ve written previously about Shenker’s presentation, “The Future of Networking, and the Past of Protocols,” but here I would just like to quote his comments on OpenFlow:

“OpenFlow is one possible solution (as a configuration mechanism); it’s clearly not the right solution. I mean, it’s a very good solution for now, but there’s nothing that says this is fundamentally the right answer. Think of OpenFlow as x86 instruction set. Is the x86 instruction set correct? Is it the right answer? No, It’s good enough for what we use it for. So why bother changing it? That’s what OpenFlow is. It’s the instruction set we happen to use, but let’s not get hung up on it.”

I still think too many industry types are “hung up” on OpenFlow, and perhaps not focused enough on the controller and above, where the applications will overwhelmingly define the value that SDN delivers.

As an open protocol that facilitates physical separation of the control and data-forwarding planes, OpenFlow has a role to play in SDN. Nonetheless, other mechanisms and protocols can play that role, too, and what really counts can be found at higher altitudes of the SDN value chain.

Minor Roles

In Nicira’s recently announced customer deployments, OpenFlow has played relatively minor supporting roles. Last week, for instance, Nicira announced at the OpenStack Design Summit & Conference that its Network Virtualization Platform (NVP) has been deployed at Rackspace in conjunction with OpenStack’s Quantum networking project. The goal at Rackspace was to automate network services independent of data-center network hardware in a bid to improve operational simplicity and to reduce the cost of managing large, multi-tenant clouds.

According to Brad McConnell, principal architect at Rackspace, Quantum, Open vSwitch, and OpenFlow all were ingredients in the deployment. Quantum was used as the standardized API to describe network connectivity, and OpenFlow served as the underlying protocol that configured and managed Open vSwitch within hypervisors.
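For a sense of what “Quantum as the standardized API” means in practice, here is a minimal sketch using the Quantum v2 Python client of that period (later renamed neutronclient); the credentials and endpoint are placeholders, and Rackspace’s actual tooling is not public.

```python
# Minimal sketch: describing tenant connectivity through the Quantum v2
# API, which NVP then realizes in Open vSwitch via OpenFlow. Credentials
# and endpoint below are placeholders.
from quantumclient.v2_0 import client

quantum = client.Client(username="demo",
                        password="secret",
                        tenant_name="demo-tenant",
                        auth_url="http://keystone.example.com:5000/v2.0")

# Create a logical network and attach an IPv4 subnet to it.
net = quantum.create_network({"network": {"name": "web-tier"}})
net_id = net["network"]["id"]
quantum.create_subnet({"subnet": {"network_id": net_id,
                                  "ip_version": 4,
                                  "cidr": "10.10.0.0/24"}})
print("Created network", net_id)
```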

A week earlier, Nicira announced that cloud-service provider DreamHost would deploy its NVP to reduce costs and accelerate service delivery in its OpenStack datacenter. In the press release, the following quote is attributed to Carl Perry, DreamHost’s cloud architect:

“Nicira’s NVP software enables truly massive leaps in automation and efficiency.  NVP decouples network services from hardware, providing unique flexibility for both DreamHost and our customers.  By sidestepping the old network paradigm, DreamHost can rapidly build powerful features for our cloud.  Network virtualization is a critical component necessary for architecting the next-generation public cloud services.  Nicira’s plug-in technology, coupled with the open source Ceph and OpenStack software, is a technically sound recipe for offering our customers real infrastructure-as-a-service.”

Well-Placed Focus

You will notice that OpenFlow is not mentioned by Nicira in the press releases detailing NVP deployments at DreamHost and Rackspace. While OpenFlow is present at both deployments, Nicira correctly describes its role as a lesser detail on a bigger canvas.

At DreamHost, for example, NVP uses OpenFlow for communication between the controller and Open vSwitch, but Nicira has acknowledged that other protocols, including SNMP, could have performed a similar function.

Reflecting on these deployments, I am reminded of Casado’s earlier statement: “OpenFlow is about as exciting as USB.”

For a long time now, Nicira has eschewed the fetishization of OpenFlow. Instead, it has focused on the bigger-picture value propositions associated with network virtualization and programmable networks. If it continues to do so, it likely will draw more customers to NVP.

Debating SDN, OpenFlow, and Cisco as a Software Company

Greg Ferro writes exceptionally well, is technologically knowledgeable, provides incisive commentary, and invariably makes cogent arguments over at EtherealMind.  Having met him, I can also report that he’s a great guy. So, it is with some surprise that I find myself responding critically to his latest blog post on OpenFlow and SDN.

Let’s start with that particular conjunction of terms. Despite occasional suggestions to the contrary, SDN and OpenFlow are not inseparable or interchangeable. OpenFlow is a protocol, a mechanism that allows a server, known in SDN parlance as a controller, to interact with and program flow tables (for packet forwarding) on switches. It facilitates the separation of the control plane from the data plane in some SDN networks.
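To make the division of labor concrete, here is a toy model of the idea — not any vendor’s API — in which the “controller” installs match/action entries and the “switch” does nothing but look them up:

```python
# Toy model of the control/data-plane split that OpenFlow enables.
# The controller programs the flow table; the switch only matches
# packets against it. Illustrative only, not a real API.

flow_table = []  # entries of the form (priority, match, action)

def controller_install_flow(match, action, priority=0):
    """Controller side: push a forwarding rule down to the switch."""
    flow_table.append((priority, match, action))
    flow_table.sort(key=lambda entry: -entry[0])  # highest priority first

def switch_forward(packet):
    """Switch side: no local intelligence, just a table lookup."""
    for _priority, match, action in flow_table:
        if all(packet.get(field) == value for field, value in match.items()):
            return action
    return "send-to-controller"  # table miss: punt to the controller

controller_install_flow({"dst_mac": "aa:bb:cc:dd:ee:ff"}, "output:2", priority=10)
print(switch_forward({"dst_mac": "aa:bb:cc:dd:ee:ff"}))  # -> output:2
print(switch_forward({"dst_mac": "11:22:33:44:55:66"}))  # -> send-to-controller
```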

But OpenFlow is not SDN, which can be achieved with or without OpenFlow.  In fact, Nicira Networks recently announced two SDN customer deployments of its Network Virtualization Platform (NVP) — at DreamHost and at Rackspace, respectively — and you won’t find mention of OpenFlow in either press release, though OpenStack and its Quantum networking project receive prominent billing. (I’ll be writing more about the Nicira deployments soon.)

A Protocol in the Big Picture 

My point is not to diminish or disparage OpenFlow, which I think can and will be used gainfully in a number of SDN deployments. My point is that we have to be clear that the bigger picture of SDN is not interchangeable with the lower-level functionality of OpenFlow.

In that respect, Ferro is absolutely correct when he says that software-defined networking — specifically, SDN controller and application software — is “where the money is.” He conflates it with OpenFlow — which may or may not be involved, as we already have established — but his larger point is valid.  SDN, at the controller and above, is where all the big changes to the networking model, and to the industry itself, will occur.

Ferro also likely is correct in his assertion that OpenFlow, in and of itself, will not enable “a choice of using low cost network equipment instead of the expensive networking equipment that we use today.” In the near term, at least, I don’t see major prospects for change on that front as long as backward compatibility, interoperability with a bulging bag of networking protocols, and the agendas of the networking old guard are at play.

Cisco as Software Company

However, I think Ferro is wrong when he says that the market-leading vendors in switching and routing, including Cisco and Juniper, are software companies. Before you jump down my throat, presuming that’s what you intend to do, allow me to explain.

As Ferro says, Cisco and Juniper, among others, have placed increasing emphasis on the software features and functionality of their products. I have no objection there. But Ferro pushes his argument too far and suggests that the “networking business today is mostly a software business.”  It’s definitely heading in that direction, but Cisco, for one, isn’t there yet and probably won’t be for some time.  The key word, by the way, is “business.”

Cisco is developing more software these days, and it is placing more emphasis on software features and functionality, but what it overwhelmingly markets and sells to its customers are switches, routers, and other hardware appliances. Yes, those devices contain software, but Cisco sells them as hardware boxes, with box-oriented pricing and box-oriented channel programs, just as it has always done. Nitpickers will note that Cisco also has collaboration and video software, which it actually sells like software, but that remains an exception to the rule.

Talks Like a Hardware Company, Walks Like a Hardware Company

For the most part, in its interactions with its customers and the marketplace in general, Cisco still thinks and acts like a hardware vendor, software proliferation notwithstanding. It might have more software than ever in its products, but Cisco is in the hardware business.

In that respect, Cisco faces the same fundamental challenge that server vendors such as HP, Dell, and — yes — Cisco confront as they address a market that will be radically transformed by the rise of cloud services and ODM-hardware-buying cloud service providers. Can it think, figuratively and literally, outside the box? Just because Cisco develops more software than it did before doesn’t mean the answer is yes, nor does it signify that Cisco has transformed itself into a software vendor.

Let’s look, for example, at Cisco’s approach to SDN. Does anybody really believe that Cisco, with its ongoing attachment to ASIC-based hardware differentiation, will move toward a software-based delivery model that places the primary value on server-based controller software rather than on switches and routers? It’s just not going to happen, because it’s not what Cisco does or how it operates.

Missing the Signs 

And that brings us to my next objection.  In arguing that Cisco and others have followed the market and provided the software their customers want, Ferro writes the following:

“Billion dollar companies don’t usually miss the obvious and have moved to enhance their software to provide customer value.”

Where to begin? Well, billion-dollar companies frequently have missed the obvious and gotten it horribly wrong, often when at least some individuals within the companies in question knew that their employer was getting it horribly wrong.  That’s partly because past and present successes can sow the seeds of future failure. As Clayton M. Christensen explained in his classic book The Innovator’s Dilemma, industry leaders can have their vision blinkered by past successes, which prevent them from detecting disruptive innovations. In other cases, former market leaders get complacent or fail to acknowledge the seriousness of a competitive threat until it is too late.

The list of billion-dollar technology companies that have missed the obvious and failed spectacularly, sometimes disappearing into oblivion, is too long to enumerate here, but some names spring readily to mind. Right at the top (or bottom) of our list of industry ignominy, we find Nortel Networks. Once a company valued at nearly $400 billion, Nortel exists today only in thoroughly digested pieces that were masticated by other companies.

Is Cisco’s Decline Inevitable?

Today, we see a similarly disconcerting situation unfolding at Research In Motion (RIM), where many within the company saw the threat posed by Apple and by the emerging BYOD phenomenon but failed to do anything about it. Going further back into the annals of computing history, we can adduce examples such as Novell and Digital Equipment Corporation, as well as the raft of other minicomputer vendors who perished from the planet after the rise of the PC and client-server computing. Some employees within those companies might even have foreseen their firms’ dark fates, but the organizations in which they toiled were unable to rescue themselves.

They were all huge successes, billion-dollar companies, but, in the face of radical shifts in industry and market dynamics, they couldn’t change who and what they were.  The industry graveyard is full of the carcasses of companies that were once enormously successful.

Am I saying this is what will happen to Cisco in an era of software-defined networking? No, I’m not prepared to make that bet. Cisco should be able to adapt and adjust better than the aforementioned companies were able to do, but it’s not a given. Just because Cisco is dominant in the networking industry today doesn’t mean that it will be dominant forever. As the old investment disclaimer goes, past performance does not guarantee future results. What’s more, Cisco has shown a fallibility of late that was not nearly as apparent in its boom years more than a decade ago.

Early Days, Promising Future

Finally, I’m not sure that Ferro is correct when he says the Open Networking Foundation’s (ONF) board members and its biggest service providers, including Google, will achieve CapEx but not OpEx savings with SDN. We really don’t know whether these companies are deriving OpEx savings because they’re keeping what they do with their operations and infrastructure highly confidential. Suffice it to say, they see compelling reasons to move away from buying their networking gear from the industry’s leading vendors, and they see similarly compelling reasons to embrace SDN.

Ferro ends his piece with two statements, the first of which I agree with wholeheartedly:

“That is the future of Software Defined Networking – better, dynamic, flexible and business focussed networking. But probably not much cheaper in the long run.”

As for that last statement, I believe there is insufficient evidence on which to render a verdict. As we’ve noted before, these are early days for SDN.

Hardware Elephant in the HP Cloud

Taking another run at cloud computing, HP made news today with its strategy for the “Converged Cloud,” which focuses on hybrid cloud environments and provides a common architecture that spans existing data centers as well as private and public clouds.

In finally diving into infrastructure as a service (IaaS), with a public beta of HP Public Infrastructure as a Service slated for May 10, HP will go up against current IaaS market leader Amazon Web Services.

HP will tap OpenStack and hypervisor neutrality as it joins the battle. Not surprisingly, it also will leverage its own hardware portfolio for compute, storage, and networking — HP Converged Infrastructure, which it already has promoted for enterprise data centers — as well as a blend of software and services that is meant to provide bonding agents to keep customers in the HP fold regardless of where and how they want to run their applications.

Trying to Set the Cloud Agenda

In addition to HP Public Infrastructure as a Service — providing on-demand compute instances or virtual machines, online storage capacity, and cached content delivery — HP Cloud Services also will unveil a private beta of a relational database service for MySQL and a block storage service that supports movement of data from one compute instance to another.

While HP has chosen to go up against AWS in IaaS — though it apparently is targeting a different constituency from the one served by Amazon — perhaps a bigger story is that HP also will compete with other service providers, including other OpenStack purveyors.

There’s some risk in that decision, no question, but perhaps not as much as one might think. The long-term trend, already established at the largest cloud service providers on the planet, is to move away from branded, vanity hardware in favor of no-frills boxes from original design manufacturers (ODMs).  This will not only affect servers, but also storage and networking hardware, the latter of which has seen the rise of merchant silicon. HP can read the writing on the data-center wall, and it knows that it must attempt to set the cloud agenda, or cede the floor and watch its hardware sales atrophy.

Software and Services as Hooks

Hybrid clouds are HP’s best bet, though far from a sure thing. Indeed, one can interpret HP’s Converged Cloud as a bulwark against what it would perceive as a premature decline in its hardware business.

Simply packaging and reselling OpenStack and a hypervisor of the customer’s choice wouldn’t achieve HP’s “sticky” business objectives, so it is tapping its software and services for the hooks and proprietary value that will keep customers from straying.

For managing hybrid environments, HP has its new Cloud Maps, which provides a catalogue of prepackaged application templates to speed deployment of enterprise cloud-services applications.

To test the applications, the company offers HP Service Virtualization 2.0, which enables enterprise customers to test quality and performance of cloud or mobile applications without interfering with production systems. Meanwhile, HP Virtual Application Networks — which taps HP’s Intelligent Management Center (IMC) and the IMC Virtual Application Networks (VAN) Manager Module — also makes its debut. It is designed to eliminate network-related cloud-services bottlenecks by speeding application deployment, automating management, and ensuring service levels for virtual and cloud applications on HP’s FlexNetwork architecture.

Maintaining and Growing

HP also will launch two new networking services: HP Virtual Network Protection Service, which leverages best practices and is intended to set a baseline for security of network virtualization; and HP Network Cloud Optimization Service, which is intended to help customers enhance their networks for delivery of cloud services.

For enterprises that don’t want to manage their clouds, the company offers HP Enterprise Cloud Services as well as other services to get enterprises up to speed on how cloud can best be harnessed.

Whether the software and services will add sufficient stickiness to HP’s hardware business remains to be seen, but there’s no question that HP is looking to maintain existing revenue streams while establishing new ones.

Cheriton Sees Opportunity in Infrastructure

When I wrote my first post on this blog, way back in 2006, I assumed that technology infrastructure largely was a spent force. I expected incremental enhancements, gradual advances, but I didn’t anticipate another major boom or a significant disruption of the established order in what once had been a vibrant technology space.

While the technology industry as a whole can suffer from blinkered, willful optimism, perhaps I was afflicted by a different condition entirely. I might have been too pessimistic, too gloomy, dispirited by the technology downturn of the early 2000s and the lack of a meaningful, sustained recovery in the years that immediately followed.

By the way, when I refer to technology, I’m not talking about social networking such as Facebook. I understand that there’s a lot of technology behind the scenes at Facebook, but the customer-facing “social” phenomenon leaves me cold. I never did see the point of Facebook from a user’s perspective, though I understood how it could serve as an unprecedented data-mining machine for advertisers.

Opportunity Renewed

Fortunately, though, I was wrong about the decline and fall of infrastructure. It took a while, but a new era of infrastructure has arisen, based on virtualization, orchestration, and automation. Technological possibilities that we could only dream about more than a decade ago are now possible. In the networking realm, software-defined networking (SDN) is enabling comparatively outmoded network infrastructure to catch up with compute and, to a lesser degree, storage infrastructure as the promise of an application-driven, programmable data center comes into clearer view.

Suddenly, at long last, there’s new opportunity in infrastructure.

You don’t have to take my word for it, either. There are people who’ve designed and developed industry-leading technologies who espouse the same opinion. Some of these people are billionaires, and they’ve backed their convictions with substantial sums of money, investing in technologies and companies with clear mandates to remake IT infrastructure.

Outrageously Wealthy Canuck

One of those people is David Cheriton, a billionaire who wears many hats. He is Professor of Computer Science and Electrical Engineering at Stanford University, where he researches networking and distributed systems, and he also serves as a co-founder and chief scientist at Arista Networks. He’s also an investor in startup companies. Back in 1998, one early-stage company in which he invested, along with Arista co-founder Andy Bechtolsheim, was Google.  The duo made a similar early investment in VMware, so they’ve done okay.

Born in Vancouver, raised in Edmonton, Alberta, and ranked 37th on a Wikipedia list of “richest Canadians”** — Forbes ranks him 21st among outrageously wealthy Canucks  — Cheriton recently spoke about innovation and entrepreneurship at a Churchill Club event in Silicon Valley. The event was co-hosted and organized by the Hua Yuan Science and Technology Association and also featured Ken Xie, who founded NetScreen (acquired by Juniper Networks in 2004) and is now president and CEO of unified-threat-management/firewall vendor Fortinet, a company he also founded.

In addition to his apparent knack as an investor, Cheriton has considerable firsthand experience as an entrepreneur and an innovator. Before he and Bechtolsheim combined forces at Arista Networks,  they founded Granite Systems, a Gigabit-Ethernet switching concern that was acquired by Cisco in 1996 for about $220 million in stock, back when shares of Cisco were continuously on the rise.  Subsequently, after the Google investment, Bechtolsheim and Cheriton combined forces again to found Kealia, which specialized in server technology based on AMD’s Opteron microprocessor.  That company was acquired by Sun Microsystems in 2004, providing technology included in the Sun Fire X4500 storage product.

Room for Improvement

In 2005, Cheriton and Bechtolsheim followed up with Arista, then called Arastra, and its 10-GbE switching technology, which brings us to the approximate present and back to something Cheriton said at the Churchill Club event late last month. Noting that people tend to become preoccupied with the latest developments in social networking and mobility, Cheriton expressed his enthusiasm for infrastructure, as an investment vehicle as well as an area in which he has an abiding technical interest. As quoted in a BusinessWeek article, Cheriton said: “I think there is an opportunity to go back and say, ‘Gee, I think there’s lot of room for improvement in the infrastructure.’ ”

Reinforcing that point, he noted that technology infrastructure today is predicated on ideas that are about 30 years old. The network was the place to start the infrastructure refurbishment, Cheriton believed, and Arista Networks grew from that conviction.

But Cheriton hasn’t stopped there. He also founded a company called Optumsoft, about which not much is known. On its website, Optumsoft is described as an early-stage startup company “taking distributed computing and distributed software development mainstream.” Quoting from the website:

Recent advancements in multi-core computing systems, coupled with the ever increasing functional and performance requirements of software has created an exciting market opportunity for addressing the programmatic and architectural issues involved in modern software development. Optumsoft is addressing this growing market with a novel technology approach that is transparent, scalable, and portable, resulting in significant improvement to the development and maintenance of distributed/parallel structured software systems. Early production usage by commercial clients has validated the technology and value proposition.

Last fall, an anonymous source suggested on Quora that what Optumsoft was building related to “how to structure object-oriented RPC in a way that makes it easy to build robust systems.  The technology behind Arista’s EOS is based on some of these ideas, as was software structure at a previous startup, Kealia.  The technology includes an IDL and a C++ runtime, similar to what you’d get using CORBA.”

Nebula and Tintri

On the investment side, Cheriton and Bechtolsheim have put money into Nebula, which has venture-capital backing from Kleiner Perkins Caufield & Byers and Highland Capital Partners. Built on OpenStack, the Nebula Enterprise Cloud Appliance is designed to provision and configure flexible, scalable cloud-computing infrastructure. Although it doesn’t say so on the Nebula website, previous reports indicated that Arista’s networking technology is included in the Nebula appliance.

According to the BusinessWeek article, Cheriton also has a stake in Tintri, co-founded by Kieran Harty and Mark Gritter. Harty was EVP of R&D at VMware for seven years, and Gritter was one of the first of Cheriton’s employees at Kealia. They’ve assembled a PhD-laden engineering team that has developed a virtual-machine-aware storage appliance designed for virtualized environments, which the company says have been underserved by older storage technology that apparently contributes to “VM stall.”

Another early-stage investment that Cheriton made was in Aster Data Systems, a purveyor of a massively parallel DBMS that runs on clustered commodity servers. Already a minority owner of Aster, Teradata bought the 89% of the company it didn’t own for $263 million last year.

Cheriton has made bets on infrastructure, and he’ll likely make others. It’s an encouraging sign for those of us who gravitate to that part of the industry.

(**No, I am not on the list, but thanks for asking.)

Arista’s Adaptable Approach to SDN

In an earlier post regarding Arista Networks’ march toward an IPO, I wrote that I would provide an overview of the company’s positioning on software-defined networking (SDN), which now follows. I think the subject is worth exploring given the buzz generated both by the IPO-bound Arista, with its notable market successes in high-frequency trading and other application environments requiring low-latency switching, and by SDN itself.

Last fall, when OpenFlow fever reached a boiling point, Arista Networks’ CEO Jayshree Ullal pointed out that it was just one mechanism of many that could be leveraged in the service of SDN. Among the others, she opined, were existing command-line interfaces (CLIs), Simple Network Management Protocol (SNMP), Extensible Messaging and Presence Protocol (XMPP), Network Configuration Protocol (NETCONF), OpenStack (with its Quantum project), as well as APIs in VMware’s vSphere virtualization software.
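To illustrate just one of those alternatives, the sketch below uses NETCONF, via the third-party ncclient Python library, to pull a device’s running configuration; the host and credentials are placeholders. The point is simply that programmatic control of switches does not require OpenFlow.

```python
# Minimal sketch: retrieving a device's running configuration over
# NETCONF with the ncclient library. Host and credentials are placeholders.
from ncclient import manager

with manager.connect(host="192.0.2.1",   # hypothetical switch address
                     port=830,           # standard NETCONF-over-SSH port
                     username="admin",
                     password="secret",
                     hostkey_verify=False) as conn:
    running = conn.get_config(source="running")
    print(running)  # XML representation of the running config
```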

The Four Pillars

On the larger SDN canvas, Arista has propounded its “four pillars” of software-defined cloud networking (SDCN). You can read about Arista’s “four pillars” in a blog post written late last year by Ullal or in a white paper that can be found on Arista’s website. In both, the four pillars are identified as follows:

Pillar 1: Single Point of Management, which Arista believes can be achieved by layering management atop the traditional control plane and data path of a cloud network and by coordinating configurations across multiple otherwise-independent switches. Arista says no fabric technology is required, and it says its CloudVision is up to the challenge.

Pillar 2: Single-image L2/3 Control Plane.  Here, Arista believes “standards-based L2/L3 IETF control-plane specifications plus OpenFlow options (without hype) can be a promising open augmentation for providing single image control planes in the future.”

Pillar 3: Multi-path Active-Active Data Path. The company prescribes scaling cloud networking across multiple chassis with Multi-Chassis Link Aggregation Group (MLAG) at L2 and Equal Cost Multi-pathing (ECMP) at L3. (A toy sketch of the ECMP idea follows this list.)

Pillar 4: Network-Wide Virtualization. Regarding this last pillar, the company says it makes sense to provision the entire network to handle any application seamlessly and so that the economics of virtualization can be properly leveraged “using controllers from VMware and their new paradigm for VMWare’s VXLANS or Open Virtualization Switching (OVS) controllers in the future.”
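As promised under Pillar 3, here is a toy illustration of the ECMP idea: hash a flow’s 5-tuple and pick one of several equal-cost next hops, so all paths carry traffic while packets of a given flow stay on one path. Real switches do this in hardware with their own hash functions; the MD5 and next-hop names here are purely illustrative.

```python
# Toy illustration of ECMP path selection. Real switches hash in
# hardware; MD5 and the next-hop names here are illustrative only.
import hashlib

NEXT_HOPS = ["spine-1", "spine-2", "spine-3", "spine-4"]

def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.md5(key).digest()
    return NEXT_HOPS[int.from_bytes(digest[:4], "big") % len(NEXT_HOPS)]

# The same flow always hashes to the same path (no packet reordering),
# while different flows spread across all four next hops.
print(ecmp_next_hop("10.0.0.5", "10.1.0.9", 49152, 80))
print(ecmp_next_hop("10.0.0.6", "10.1.0.9", 49153, 80))
```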

Best of Both Worlds?

As has been noted above (and in earlier posts), software-defined networking can be implemented in more than one fashion. Some networking vendors — typically industry mainstays with large installed bases of customers and firmly established business models predicated on hardware ASICs, proprietary protocols, and relatively high margins — will opt for an SDN vision that features a distributed control plane. Not for them the dramatic shift to logically centralized server-based controllers, designed to subsume networking within a computing paradigm. To the traditional networking vendor, that road looks treacherous and leads to a diminution of the status and margins associated with the beloved switch.

As neither a raw SDN startup nor a legacy networking company, Arista takes a flexible position on how SDNs can be realized. The company says customers can implement SDNs by using controllers or by using distributed-control mechanisms. Ideally, according to Arista, both means should be employed for comprehensive SDN capabilities. A presentation available online explains the company’s position on this best-of-both-worlds approach to the control plane.

Finally, it probably comes as no surprise that Arista prescribes its own Linux-based Extensible Operating System (EOS) as the appropriate software foundation for its four pillars and for cloud networking in general. It also believes that “good old fashioned Ethernet scaling from 10 gigabits to 40 gigabits to 100 Gigabits and even terabits with well-defined standards and protocols for L2/L3 is the optimal approach.”

In view of the media blitz undertaken by Arista founders Andreas Bechtolsheim and David Cheriton late last year, we should expect the company’s next generation of switches to deliver as much bandwidth as Ethernet and merchant silicon will allow.

Why Many Networking Professionals Will Resist Software-Defined Networking

In the long run, I think software defined networking (SDN) is destined for tremendous success, not only at massive cloud service providers, where it already is finding favor and increased adoption, but also at smaller service providers and even — with time and perseverance — at enterprises.

It just might not happen as quickly as some expect.

Shape of Networking to Come

In a presentation last autumn at the Open Networking Summit, Nicira co-founder Nick McKeown asserted that SDN would shape the future of networking in several key respects. He said it would do so by empowering network owners and operators, by speeding the pace of innovation, by diversifying the supply chain, and by delivering a robust foundation for programmability predicated on a standardized forwarding abstraction and provable network properties.

On the whole, McKeown probably will be right, and his technological reasoning seems entirely reasonable. As in any market, however, the commercial appeal of SDN will be determined by human factors as well as by technological considerations.

The enterprise market will be the toughest nut to crack, though, and not only because the early agenda of SDN, as defined by the board members of the Open Networking Foundation (ONF) and others, has been focused resolutely on providing solutions for the largest of cloud service providers.

Winning Hearts and Minds

Capturing enterprise hearts and minds will be difficult for SDN, and it will be hard not just because of technological challenges, such as backward compatibility with (and investments in) existing network infrastructure, but also because of the cultural milieu and entrenched mindset of enterprise networking professionals.

I’ve written before, on two occasions actually, about how human and institutional resistance to change can strongly inhibit the commercial adoption of technologies with otherwise compelling credentials and qualifications. Generally, people fear change, especially when they suspect that the change in question will affect them adversely.

And make no mistake, software-defined networking will inspire fear and resistance in some quarters, enterprise networking professionals prominent among them.

Networking’s Cultural Artifacts

Jennifer Rexford, professor of computer science at Princeton University and a former AT&T Research staffer, wrote that one of her colleagues once observed that computer-networking people “really loved their artifacts.” Those artifacts probably would include the many distributed routing protocols that have proliferated over the years.

Software-defined networking wants to loosen emotional attachment to those artifacts, just as it wants to jettison the burgeoning bag of protocols that distinguishes networking from computer programming and other disciplines. But many networking professionals, including those in enterprise IT departments, see their mastery of complex protocols as a hallmark of who they are and what they do.

Getting the Network “Out of the Way”

Yet there’s more to it than that. Consider the workplace implications of software-defined networks. The whole idea of SDN is to make networks programmable, to put applications and those who program and manage them in the driver’s seat, and to get the network “out of the way” of the sweeping virtualized progress that has enveloped all other data-center infrastructure.

To survive and thrive in this brave new virtual world, networking professionals might have to become more like programmers. From an organizational standpoint, even though there are compelling business and technological reasons to adopt SDN, resistance from the fraternity of networking professionals will be stiff and difficult to overcome.

In the realm of the super-sized data centers at Google and elsewhere, this isn’t a serious problem. The concepts associated with “DevOps” and with thinking outside boxes, departmental and otherwise, thrive in those precincts. Google long has eschewed the purchase of servers and networking gear from vendors, and it does things its own way. To greater or lesser degrees, other large cloud-service providers now dance to a similar beat. But the enterprise? Well, that’s a different animal altogether.

Vendors in No Hurry

Some of the new SDN startups already are meeting with pockets of resistance. They’re seeing cleavage — schism might be too strong a word, though maybe not — between cloud architects and server-virtualization specialists on one side of the house and network professionals on the opposing side. The two camps see things differently, with perspectives and priorities that are difficult to reconcile. (There are exceptions to the rule, of course, with some networking professionals eager to embrace SDN, but they currently are in the minority.)

As we’ve seen, the board of directors at the Open Networking Foundation (ONF) isn’t concerned about how quickly the enterprise gets with the SDN program. I also would suggest that most networking vendors, which are excluded from the ONF’s board, aren’t in a hurry to push an SDN agenda that features logically centralized, server-based controllers. You’ll see SDN from these vendors, yes, but the control plane will be distributed until such time as enterprises and service providers (not on the ONF board) demand otherwise. That will be a while, I suspect.

Deferred Gratification

We tend to underestimate resistance to change in this industry. Gartner devised the “trough of disillusionment” and the technology hype cycle for good reason. Some technologies remain in that basin longer than others. Some never emerge from what becomes a bottomless pit rather than a trough.

That won’t happen to SDN.  As I wrote earlier, I think it has a bright future. Don’t be surprised, though, if the hype gets ahead of the reality. When it comes to technologies and markets, our inherent optimism occasionally is thwarted by our intrinsic resistance to change.

Peeling the Nicira Onion

Nicira emerged from pseudo-stealth yesterday, drawing plenty of press coverage in the process. “Network virtualization” is the concise, two-word marketing message the company delivered, on its own and through the analysts and journalists who greeted its long-awaited official arrival on the networking scene.

The company’s website opened for business this week replete with a new look and an abundance of new content. Even so, the content seemed short on hard substance, and those covering the company’s launch interpreted Nicira’s message in a surprisingly varied manner, somewhat like blind men groping different parts of an elephant. (Onion in the title, now an elephant; I’m already mixing flora and fauna metaphors.)

VMware of Networking Ambiguity

Many made the point that Nicira aims to become the “VMware of networking.” Interestingly, Big Switch Networks has aspirations to wear that crown, asserting on its website that “networking needs a VMware.” The theme also has been featured in posts on Network Heresy, Nicira CTO Martin Casado’s blog. He and his colleagues have written alternately that networking both doesn’t and does need a VMware. Confused? That’s okay. Many are in the same boat . . . or onion field, as the case may be.

The point Casado and company were trying to make is that network virtualization, while seemingly overdue and necessary, is not the same as server virtualization. As stated in the first in that series of posts at Network Heresy:

“Virtualized servers are effectively self contained in that they are only very loosely coupled to one another (there are a few exceptions to this rule, but even then, the groupings with direct relationships are small). As a result, the virtualization logic doesn’t need to deal with the complexity of state sharing between many entities.

A virtualized network solution, on the other hand, has to deal with all ports on the network, most of which can be assumed to have a direct relationship (the ability to communicate via some service model). Therefore, the virtual networking logic not only has to deal with N instances of N state (assuming every port wants to talk to every other port), but it has to ensure that state is consistent (or at least safely inconsistent) along all of the elements on the path of a packet. Inconsistent state can result in packet loss (not a huge deal) or much worse, delivery of the packet to the wrong location.”
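To put that contrast in concrete terms: if every port on a virtual network can, at least in principle, reach every other port, the state to be kept consistent grows roughly with the square of the port count, whereas the per-VM state of server virtualization grows only linearly. A minimal, purely illustrative Python sketch (the counts are invented):

```python
# Illustrative only: contrast the roughly linear state of server
# virtualization with the pairwise port state a network-virtualization
# layer must keep consistent across the fabric.

def server_virt_state(num_vms: int) -> int:
    """VMs are largely self-contained, so state grows linearly."""
    return num_vms

def network_virt_state(num_ports: int) -> int:
    """Assume every port may talk to every other port, so state grows
    with the number of ordered port pairs, i.e., roughly N^2."""
    return num_ports * (num_ports - 1)

for n in (100, 1_000, 10_000):
    print(f"N={n:>6}: per-VM state ~{server_virt_state(n):>13,}, "
          f"port-pair state ~{network_virt_state(n):>13,}")
```

At 10,000 ports, that is on the order of 100 million port-pair entries to keep consistent (or at least safely inconsistent) along a packet’s path — which is precisely why Casado and company insist the two problems are not the same.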

In Context of SDN Universe

That issue aside, many writers covering the Nicira launch presented information about the company and its overall value proposition consistently. Some articles were more detailed than others. One at MIT’s Technology Review provided good historical background on how Casado first got involved with the challenge of network virtualization and how Nicira was formed to deliver a solution.

Jim Duffy provided a solid piece touching on the company’s origins, its venture-capital investors, and its early adopters and the problems Nicira is solving for them. He also touched on where Nicira appears to fit within the context of the wider SDN universe, which includes established vendors such as Cisco Systems, HP, and Juniper Networks, as well as startups such as Big Switch Networks, Embrane, and Contextream.

In that respect, it’s interesting to note what Embrane co-founder and President Dante Malagrino told Duffy:

 “The introduction of another network virtualization product is further validation that the network is in dire need of increased agility and programmability to support the emergence of a more dynamic data center and the cloud.”

“Traditional networking vendors aren’t delivering this, which is why companies like Nicira and Embrane are so attractive to service providers and enterprises. Embrane’s network services platform can be implemented within the re-architected approach proposed by Nicira, or in traditional network architectures. At the same time, products that address Layer 2-3 and platforms that address Layer 4-7 are not interchangeable and it’s important for the industry to understand the differences as the network catches up to the cloud.”

What’s Nicira Selling?

All of which brings us back to what Nicira actually is delivering to market. The company’s website offers videos, white papers, and product data sheets addressing the Nicira Network Virtualization Platform (NVP) and its Distributed Network Virtualization Infrastructure (DNVI), but I found the most helpful and straightforward explanations, strangely enough, on the Frequently Asked Questions (FAQ) page.

This is an instance of a FAQ page that actually does provide answers to common questions. We learn, for example, that the key components of the Nicira Network Virtualization Platform (NVP) are the following:

- The Controller cluster, a distributed control system

- The Management software, an operations console

- The RESTful API, which integrates with a range of Cloud Management Systems (CMS), including a Quantum plug-in for OpenStack

Those components, which constitute the NVP software suite, are what Nicira sells, albeit in a service-oriented monthly subscription model that scales per virtual network port.
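Because the northbound interface is that RESTful API, provisioning a virtual network presumably reduces to HTTP exchanges with the controller cluster. Here is a minimal sketch of what such an exchange might look like; the controller address, endpoint path, and JSON fields are my own inventions for illustration, not Nicira’s documented API:

```python
# Hypothetical sketch of driving a RESTful network-virtualization API.
# The controller address, endpoint path, and JSON fields are invented
# for illustration; consult the vendor's actual API documentation.
import json
import urllib.request

CONTROLLER = "https://nvp-controller.example.com"  # assumed address

def create_logical_switch(name: str, tenant_id: str) -> dict:
    """POST a logical-switch definition to the controller cluster."""
    payload = json.dumps({"display_name": name, "tenant_id": tenant_id}).encode()
    request = urllib.request.Request(
        f"{CONTROLLER}/api/v1/logical-switches",  # invented path
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

if __name__ == "__main__":
    switch = create_logical_switch("web-tier", "tenant-42")
    print("Created logical switch:", switch)
```

A cloud-management system such as OpenStack would make calls of this general shape through the Quantum plug-in rather than directly, which is what makes the API the integration point of the suite.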

Open vSwitch, Minor Role for OpenFlow 

We then learn that NVP communicates with the physical network indirectly, through Open vSwitch. Ivan Pepelnjak (I always worry that I’ll misspell his name, but not the Ivan part) provides further insight into how Nicira leverages Open vSwitch. As Nicira notes, the NVP Controller communicates directly with Open vSwitch (OVS), which is deployed in server hypervisors. The hypervisor connects to the physical network, and end hosts connect to the vSwitch; as a result, NVP never talks directly to the physical network.

As for OpenFlow, its role is relatively minor. As Nicira explains: “OpenFlow is the communications protocol between the controller and OVS instances at the edge of the network. It does not directly communicate with the physical network elements and is thus not subject to scaling challenges of hardware-dependent, hop-by-hop OpenFlow solutions.”
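In practical terms, that edge-only arrangement amounts to telling each hypervisor’s OVS instance where its controller lives; OpenFlow then flows only between those edge switches and the controller, never to the physical fabric. A minimal sketch, assuming a host with Open vSwitch installed and an integration bridge already created (the bridge name and controller address are illustrative):

```python
# Minimal sketch: point a local Open vSwitch bridge at an OpenFlow
# controller, as an NVP-style deployment would do at the hypervisor edge.
# Assumes Open vSwitch is installed; the bridge name and controller
# address below are illustrative, not taken from Nicira documentation.
import subprocess

BRIDGE = "br-int"                    # integration bridge on the hypervisor
CONTROLLER = "tcp:192.0.2.10:6633"   # example controller; 6633 is the
                                     # classic OpenFlow listening port

def set_controller(bridge: str, controller: str) -> None:
    """Attach the bridge to a controller via the standard ovs-vsctl CLI."""
    subprocess.run(
        ["ovs-vsctl", "set-controller", bridge, controller],
        check=True,  # raise if ovs-vsctl reports an error
    )

if __name__ == "__main__":
    set_controller(BRIDGE, CONTROLLER)
    # Afterward, 'ovs-vsctl show' should list the controller target.
```

Note that nothing in the physical fabric is touched; the top-of-rack switches keep forwarding as they always have.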

Questions About L4-7 Network Services

Nicira sees its Network Virtualization Platform delivering value in a number of different contexts, including the provision of hardware-independent virtual networks; virtual-machine mobility across subnet boundaries (while maintaining L2 adjacency); edge-enforced, dynamic QoS and security policies (filters, tagging, policy routing, etc.) bound to virtual ports; centralized system-wide visibility & monitoring; address space isolation (L2 & L3); and Layer 4-7 services.

Now that last capability provokes some questions that cannot be answered in the FAQ.

Nicira says its NVP can integrate with third-party Layer 4-7 services, but it also says services can be created by Nicira or its customers. Notwithstanding Embrane’s perfectly valid contention that its network-services platform can be delivered in conjunction with Nicira’s architectural model, there is a distinct possibility Nicira might have other plans.

This is something that bears watching, not only by Embrane but also by longstanding Layer 4-7 service-delivery vendors such as F5 Networks. At this point, I don’t pretend to know how far or how fast Nicira’s ambitions extend, but I would imagine they’ll be demarcated, at least partly, by the needs and requirements of its customers.

Nicira’s Early Niche

Speaking of which, Nicira has an impressive list of early adopters, including AT&T, eBay, Fidelity Investments, Rackspace, Deutsche Telekom, and Japan’s NTT. You’ll notice a commonality in the customer profiles, even if their application scenarios vary. Basically, these all are public cloud providers, of one sort or another, and they have what are called “web-scale” data centers.

While Nicira and Big Switch Networks both are purveyors of “network virtualization” and controller platforms — and both proclaim that networking needs a VMware — they’re aiming at different markets. Big Switch is focusing on the enterprise and the private cloud, whereas Nicira is aiming for large public cloud-service providers or big enterprises that provide public-cloud services (such as Fidelity).

Nicira has taken care in selecting its market. An earlier post on Casado’s blog suggests that he and Nicira believe that OpenFlow-based SDNs might be a solution in search of a problem already being addressed satisfactorily within many enterprises. I’m sure the team at Big Switch would argue otherwise.

At the same time, Nicira probably has conceded that it won’t be patronized by Open Networking Foundation (ONF) board members such as Google, Facebook, and Microsoft, each of which is likely to roll its own network-virtualization systems, controller platforms, and SDN applications. These companies not only have the resources to do so, but they also have a business imperative that drives them in that direction. This is especially true for Google, which views its data-center infrastructure as a competitive differentiator.

Telcos Viable Targets

That said, I can see at least a couple of ONF board members that might find Nicira’s pitch compelling. In fact, one, Deutsche Telekom, already is on board, at least in part, and perhaps Verizon will come along later. The telcos are more likely than a Google to need assistance with SDN rollouts.

One last note on Nicira before I end this already-prolix post. In the feature article at Technology Review, Casado says it’s difficult for Nicira to impress a layperson with its technology, that “people do struggle to understand it.” That’s undoubtedly true, but Nicira needs to keep refining its message, for its own sake as well as for the sake of prospective customers and other stakeholders.

That said, the company is stocked with impressive minds, on both the business and technology sides of the house, and I’m confident it will get there.

Reflecting on the Big Acquisition Cisco Didn’t Make

It has been nearly eight years since EMC acquired VMware. The acquisition announcement went over the newswires on December 15, 2003. EMC paid approximately $635 million for VMware, and Joe Tucci, EMC’s president and CEO, had this to say about the deal:

“Customers want help simplifying the management of their IT infrastructures. This is more than a storage challenge. Until now, server and storage virtualization have existed as disparate entities. Today, EMC is accelerating the convergence of these two worlds.”

“We’ve been working with the talented VMware team for some time now, and we understand why they are considered one of the hottest technology companies anywhere. With the resources and commitment of EMC behind VMware’s leading server virtualization technologies and the partnerships that help bring these technologies to market, we look forward to a prosperous future together.”

Virtualization Goldmine

Oh, the future was prosperous . . . and then some. It’s a deal that worked out hugely in EMC’s favor. Even though the storage behemoth has spun out VMware in the interim, allowing it to go public, EMC still retains more than 80 percent ownership of its virtualization goldmine.

Consider that EMC paid just $635 million in 2003 to buy the server-virtualization market leader. VMware’s current market capitalization is more than $38 billion. That means EMC’s stake in VMware is worth more than $30 billion, not including the gains it reaped when it took VMware public. I don’t think it’s hyperbolic to suggest that EMC’s purchase of VMware will be remembered as Tucci’s defining moment as EMC chieftain.
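The back-of-the-envelope arithmetic, using the rounded figures above:

```python
# Back-of-the-envelope check of EMC's return on VMware, using the
# rounded figures cited in this post.
purchase_price = 635e6      # EMC's 2003 purchase price, ~$635 million
vmware_market_cap = 38e9    # VMware market capitalization at this writing
emc_stake = 0.80            # EMC's retained ownership, ~80 percent

stake_value = emc_stake * vmware_market_cap
print(f"EMC stake value: ${stake_value / 1e9:.1f} billion")               # ~$30.4B
print(f"Multiple on purchase price: {stake_value / purchase_price:.0f}x")  # ~48x
```

Roughly a 48-fold return on the purchase price, before counting the IPO proceeds.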

Now, let’s consider another vendor that had an opportunity to acquire VMware back in 2003.

Massive Market Cap, Industry Dominance

A few years earlier, at the pinnacle of the dot-com boom in March 2000, Cisco was the most valuable company in the world, sporting a market capitalization of more than US$500 billion.  It was a networking colossus that bestrode the globe, dominating its realm of the industry as much as any other technology company during any other period. (Its only peers in that regard were IBM in the mainframe era and Microsoft and Intel in the client-server epoch.)

Although Juniper Networks brought its first router to market in the fall of 1998 and began to challenge Cisco for routing patronage at many carriers early in the first decade of the new millennium, Cisco remained relatively unscathed in enterprise networking, where its Catalyst switches grew into a multibillion-dollar franchise after it saw off late-90s competitive challenges from the likes of 3Com, Cabletron, and Nortel.

As had been its wont since its first acquisition, of Crescendo Communications in 1993, Cisco remained an active buyer of technology companies. It bought companies to fortify its technological innovation inorganically and to preclude competitors from gaining footholds among its expanding installed base of customers.

Non-Buyer’s Remorse?

It’s true that the post-boom dot-com bust cooled Cisco’s acquisitive ardor. Nonetheless, the networking giant made nine acquisitions from May 2002 through to the end of 2003. The companies Cisco acquired in that span included Hammerhead Networks, Navarro Networks, AYR Networks, Andiamo Systems, Psionic Software, Okena, SignalWorks, Linksys, and Latitude Communications.

The biggest acquisition in that period involved spin-in play Andiamo Systems, which provided the technological foundation for Cisco’s subsequent push to dominate storage networking. Cisco was at risk of paying as much as $2.5 billion for Andiamo, but the actual price tag for that convoluted spin-in transaction was closer to $750 million by the time it finally closed in 2004. The next-biggest Cisco acquisition during that period involved home-networking vendor Linksys, for which Cisco paid about $500 million.

Cisco announced the acquisitions of Hammerhead Networks and Navarro Networks in a single press release. Hammerhead, for which Cisco exchanged common stock valued at up to $173 million, developed software that accelerated the delivery of IP-based billing, security, and QoS; the company was folded into the Cable Business Unit in Cisco’s Network Edge and Aggregation Routing Group. Navarro Networks, for which Cisco exchanged common stock valued at up to $85 million, designed ASIC components for Ethernet switching.

To acquire AYR Networks, a vendor of “high-performance distributed networking services and highly scalable routing software technologies,” Cisco parted with about $113 million in common stock. AYR’s technology was intended to augment Cisco’s IOS software.

Andiamo Factor

Although the facts probably are familiar to many readers, Cisco’s acquisition of Andiamo was noteworthy for several reasons. It was a spin-in acquisition, in which Cisco funded the company to go off and develop technology on its own, only later to be brought back in-house through acquisition. Andiamo was led by its CEO, Buck Gee, and it included a core group of engineers who also had been at Crescendo Communications. The concept and execution of the spin-in move were highly controversial within Cisco, seen as operationally and strategically innovative by many senior executives even as others claimed it engendered envy and resentment among rank-and-file employees.

No matter: Andiamo was meant to give Cisco market leadership in IP-based storage networking and a means of battering Brocade in Fibre Channel. That plan hasn’t come to fruition, with Brocade still leading in a tenacious Fibre Channel market and Cisco banking on Fibre Channel over Ethernet (FCoE) to go from the edge to the core. (The future of storage networking, including the often entertaining Fibre Channel-versus-FCoE debates, is another matter, and not within the purview of this post.)

While we’re on the topic of Andiamo, its former engineers continue to make news. Just this week, former Andiamo engineers Dante Malagrinò and Marco Di Benedetto officially launched Embrane, a company committed to delivering a platform for virtualized L4-7 network services at large cloud service providers. Those two gentlemen also were involved in Cisco’s last big spin-in move, Nuova Systems, which provided the foundation for Cisco’s Unified Computing System (UCS).

As for Cisco’s post-Andiamo acquisition announcements in 2002, Okena and Psionic both were involved in intrusion-detection technology. Of the two, Okena represented the larger transaction, valued at about $154 million in stock.

Interestingly, not much is available publicly these days regarding Cisco’s announced acquisition of SignalWorks in March of 2003. If you visit the CrunchBase profile for SignalWorks and click on a link that is supposed to take you to a Cisco press release announcing the deal, you’ll get a “Not Found” message. A search of the Cisco website turns up two press releases — relating to financial results in Cisco’s third and fourth quarters of fiscal year 2003, respectively — that obliquely mention the SignalWorks acquisition. The purchase price of the IP-audio company was about $16 million. CNet also covered the acquisition when it first came to light.

Other Strategic Priorities

Cisco’s last announced acquisitions in that timeframe involved home-networking player Linksys, part of Cisco’s ultimately underachieving bid to become a major player in the consumer space, and web-conferencing vendor Latitude Communications.

And now we get to the crux of this post. Cisco announced a number of acquisitions in 2002 and 2003, but it is the one it didn’t make that reverberates to this day. It was a watershed acquisition, a strategic masterstroke, but it was made by EMC, not by Cisco, and its implications probably will continue to ramify for years to come.

Some might contend that Cisco perhaps didn’t grasp the long-term significance of virtualization. Apparently, though, some at Cisco were clamoring for the company to buy VMware. The missed opportunity wasn’t attributable to Cisco failing to see the importance of virtualization — some at Cisco had the prescience to see where the technology would lead — but to the fact that an acquisition of VMware wasn’t considered as high a priority as the spin-in of Andiamo for storage networking and the acquisition of Linksys for home networking.

Cisco placed its bets elsewhere, perhaps thinking that it had more time to develop a coherent and comprehensive strategy for virtualization. Then EMC made its move.

Missed the Big Chance

To this day, in my view, Cisco is paying an exorbitant opportunity cost for failing to take VMware off the market. It left the prize for EMC, ultimately allowing the storage leader, years later, to gain the upper hand in the Virtual Computing Environment (VCE) Company joint venture that delivers UCS-encompassing VBlocks. There’s a rich irony there, too, when one considers that Cisco’s UCS contribution to the VBlock package is represented by technology derived from spin-in Nuova.

But forget about VCE and VBlocks. What about the bigger picture? Although Cisco likes to talk itself up as a leader in virtualization, it’s not nearly as prominent or dominant as it might have been. Is there anybody who would argue that Cisco, had it acquired and then assimilated VMware even half as well as it digested Crescendo, wouldn’t have absolutely thrashed all comers in converged data-center infrastructure and cloud infrastructure?

Cisco eventually recognized its error of omission, but by then it was too late. By 2009, EMC had no interest in selling its majority stake in VMware to Cisco, and Cisco was in no position to obtain it through an acquisition of EMC. In that regard, Cisco’s position has only worsened.

Although EMC’s ownership stake in VMware amounts to about 80 percent (or perhaps just north of that figure), it holds about 98 percent of the voting power in the company, which effectively means EMC steers the ship, regardless of public pronouncements VMware executives might issue about their firm being an autonomous corporate entity.

Keeping Cisco Interested but Contained 

Conversely, Cisco owns approximately five percent of VMware’s Class A shares, none of its Class B shares, and held just one percent of voting power as of March 2011. As of that same date, EMC owned all of VMware’s 330,000,000 Class B shares and 33,066,050 of its 118,462,369 Class A common shares. Cisco has a stake in VMware, but it’s a small one, held at the pleasure of EMC, whose objective is to keep Cisco sufficiently interested that it doesn’t pursue other strategic options in data-center virtualization and cloud infrastructure.
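Those share counts square with the ownership and voting figures if one assumes VMware’s dual-class structure gives each Class B share ten votes to a Class A share’s one — my assumption here, for illustration. A quick check:

```python
# Check the ownership and voting figures cited above, assuming a
# dual-class structure in which each Class B share carries ten votes
# to a Class A share's one (an assumption for illustration).
CLASS_B_TOTAL = 330_000_000   # all held by EMC
CLASS_A_TOTAL = 118_462_369
EMC_CLASS_A = 33_066_050

emc_economic = (CLASS_B_TOTAL + EMC_CLASS_A) / (CLASS_B_TOTAL + CLASS_A_TOTAL)
emc_votes = CLASS_B_TOTAL * 10 + EMC_CLASS_A
total_votes = CLASS_B_TOTAL * 10 + CLASS_A_TOTAL

print(f"EMC economic stake: {emc_economic:.1%}")             # ~81.0%
print(f"EMC voting power:   {emc_votes / total_votes:.1%}")  # ~97.5%
```

That works out to roughly 81 percent economic ownership and about 97.5 percent of the votes — in the same neighborhood as the figures cited above.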

The EMC gambit has worked, up to a point. But Cisco, which missed its big chance in 2003, has been trying ever since to reassert its authority. Nuova, and all that flowed from it, was Cisco’s first attempt to regain lost ground, and now it is partnering, to varying degrees, with VMware and EMC competitors such as NetApp, Citrix, and Microsoft. It also has gotten involved with OpenStack and the oVirt Project in a bid to hedge its virtualization bets.

Yes, some of those moves are indicative of coopetition, and Cisco retains its occasionally strained VCE joint venture with EMC and VMware, but Cisco clearly is playing for time, looking for a way to redefine the rules of the game.

What Cisco is trying to do is break an impasse of its own making, a result of strategic choices it made nearly a decade ago.