
Big Switch Emphasizes Ecosystem, Channel

Big Switch Networks made the news very early today — one article was posted precisely at midnight ET — with an announcement of general availability of its SDN controller, two applications that run on it, and an ecosystem of partners.

Customers also are in the picture, though it wasn’t made explicit in the Big Switch press release whether Fidelity Investments and Goldman Sachs are running Big Switch’s products in production networks.  In a Network World article, however, Jim Duffy writes that Fidelity and Goldman Sachs are “production customers for the Big Switch Open SDN product suite.” 

Controller, Applications, Ecosystem

The company’s announced products, encompassed within its Open Software Defined Networking architecture, feature the Big Network Controller, a proprietary version of the open-source Floodlight controller, and the two aforementioned applications. An SDN controller without applications is like, well, an operating system without applications. Accordingly, Big Switch has introduced Big Virtual Switch, an application for network virtualization, and Big Tap, a unified network monitoring application. 

Big Virtual Switch is the company’s answer to Nicira’s Network Virtualization Platform (NVP).  Big Switch says the product supports up to 32,000 virtual-network segments and can be integrated with cloud-management platforms such as OpenStack (Quantum), CloudStack, Microsoft System Center, and VMware vCenter.  As Big Switch illustrates on its website, Big Virtual Switch can be deployed on Big Network Controller in pure overlay networks, in pure OpenFlow networks, and in hybrid network-virtualization environments.  

According to the company, Big Virtual Switch can deliver significant CAPEX and OPEX benefits. A graphical figure in a product data sheet, tagged “Economics of Big Virtual Switch,” claims the company’s L2/L3 network virtualization facilitates “up to 50% more VMs per rack” and delivers annual CAPEX savings of $500,000 per rack and annual OPEX savings of $30,000 per rack. For those estimates, Big Switch assumes a rack size of 40 servers and suggests the savings accrue across servers, operating-system instances, storage, networking, and operations. 
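To make that arithmetic concrete, here is a minimal sketch that applies the vendor’s own figures. The 40-server rack size, the 50% density gain, and the per-rack savings are Big Switch’s claims; the baseline VM density per server is a hypothetical placeholder, not a number from the data sheet.

```python
# Rough sketch of the rack-economics arithmetic behind Big Switch's claims.
# Rack size, density gain, and savings figures are the vendor's numbers;
# the baseline VM density per server is a hypothetical placeholder.

SERVERS_PER_RACK = 40          # rack size assumed in Big Switch's estimates
BASELINE_VMS_PER_SERVER = 20   # hypothetical baseline density
DENSITY_GAIN = 0.50            # vendor claim: up to 50% more VMs per rack

CAPEX_SAVINGS_PER_RACK = 500_000   # vendor-claimed annual CAPEX savings
OPEX_SAVINGS_PER_RACK = 30_000     # vendor-claimed annual OPEX savings

baseline_vms = SERVERS_PER_RACK * BASELINE_VMS_PER_SERVER
virtualized_vms = int(baseline_vms * (1 + DENSITY_GAIN))

print(f"VMs per rack: {baseline_vms} -> {virtualized_vms} "
      f"(+{virtualized_vms - baseline_vms})")
print(f"Claimed annual savings per rack: "
      f"${CAPEX_SAVINGS_PER_RACK + OPEX_SAVINGS_PER_RACK:,} "
      f"(CAPEX ${CAPEX_SAVINGS_PER_RACK:,} + OPEX ${OPEX_SAVINGS_PER_RACK:,})")
```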

Strategies in Flux

Big Virtual Switch and Big Tap are essential SDN applications, but the company’s ultimate success in the marketplace will turn on the support its Big Network Controller receives from third-party vendors. Big Switch is aware of its external dependencies, which is why it has placed so much emphasis on its ecosystem, which it says includes A10 Networks, Arista Networks, Broadcom, Brocade, Canonical, Cariden Technologies, Citrix, Cloudscaling, Coraid, Dell, Endace, Extreme Networks, F5 Networks, Fortinet, Gigamon, Infoblox, Juniper Networks, Mellanox Technologies, Microsoft, Mirantis, Nebula, Palo Alto Networks, Piston Cloud Computing, Radware, StackOps, ThreatSTOP, and vArmour. The Big Switch press release includes an appendix of “supporting quotes” from those companies, but the company will require more than lip service from its ecosystem. 

Some companies will find that their interests are well aligned with those of Big Switch, but others are likely to be less motivated to put energy and resources into Big Switch’s SDN platform.  If you consider the vendor names listed above, you might deduce that the SDN strategies of more than a few are in flux. Some are considering whether to offer SDN controllers of their own. Even those who have no controller aspirations might be disinclined to bet too heavily or too early on a controller platform. They’ll follow the customers and the money. 

A growing number of commercial controllers are on the market (VMware/Nicira, NEC, and Big Switch) or have been announced as coming to market (IBM, HP, Cisco). Others will follow. Loyalties will shift as controller fortunes wax and wane. 

Courting the Channel 

With that in mind, Big Switch is seeking to enlist channel partners as well as technology partners. In a CRN article, we learn that Big Switch “has begun to recruit systems integrator and data center infrastructure-focused solution providers that can consult and design network architecture using Big Switch software and products from a galaxy of ecosystem partners.” In fact, Big Switch wants all its commercial sales to go through channel partners. 

In the CRN piece, Dave Butler, VP of sales at Big Switch, is candid about the symbiotic relationship the company desires from partners:

“None of our products work well alone in a data center — this is a very rigorous and rich ecosystem of partners. We’ll pay a finder’s fee to anyone who brings the right opportunity to us, but we’re not really a product sale. We need the integrators that can create a bundled solution, because that’s what makes the difference.”

. . . . “We bring them (partners) in as the specialist, and they have probably a greater touch than we might. We are not taking deals direct. Then, you have to do all the work by yourself. This is a perfect solution for their services and expertise. And, they can make money with us.”

Needs a Little Help from Its Friends

The plan is clear. Big Switch’s vendor ecosystem is meant to attract channel partners that already are selling those vendors’ products and are interested in expanding into SDN solutions. The channel partners, including SIs and datacenter-solution providers, will then bring Big Switch’s SDN platform to customers, with whom they have existing relationships. 

In theory, it all coheres. Big Switch knows it can’t go it alone against industry giants. It knows it needs more than a little help from its friends in the vendor community and the channel. 

For Big Switch, the vendor ecosystem expedites channel recruitment, and an effective channel accelerates exposure to customers. Big Switch has to move fast and demonstrate staying power. The controller race is far from over. 

Between What Is and What Will Be

I have refrained from writing about recent developments in software-defined networking (SDN) and in the larger realm of what VMware, now hosting VMworld in San Francisco, calls the  “software-defined data center” (SDDC).

My reticence hasn’t resulted from indifference or from hype fatigue — in fact, these technologies do not possess the jaundiced connotations of “hype” — but from a realization that we’ve entered a period of confusion, deception, misdirection, and murk.  Amidst the tumult, my single, independent voice — though resplendent in its dulcet tones — would be overwhelmed or forgotten.

Choppy Transition

We’re in the midst of a choppy transitional period. Where we’ve been is behind us, where we’re going is ahead of us, and where we find ourselves today is between the two. So-called legacy vendors, in both networking and compute hardware, are trying to slow progress toward the future, which will involve the primacy of software and services and related business models. There will be virtualized infrastructure, but not necessarily converged infrastructure, which is predicated on the development and sale of proprietary hardware by a single vendor or by an exclusive club of vendors.

Obviously, there still will be hardware. You can’t run software without server hardware, and you can’t run a network without physical infrastructure. But the purpose and role of that hardware will change. The closed box will be replaced by an open one, not because of any idealism or panglossian optimism, but because of economic, operational, and technological imperatives that first are remaking the largest of public-cloud data centers and soon will stretch into private clouds at large enterprises.

No Wishful Thinking

After all, the driving purpose of the Open Networking Foundation (ONF) involved shifting the balance of power into the hands of customers, who had their own business and operational priorities to address. Where legacy networking failed them, SDN provided a way forward, saving money on capital expenditures and operational costs while also providing flexibility and responsiveness to changing business and technology requirements.

The same is true for the software-defined data center, where SDN will play a role in creating a fluid pool of virtualized infrastructure that can be utilized to optimal business benefit. What’s important to note is that this development will not be restricted to the public cloud-service providers, including all the big names at the top of the ONF power structure. VMware, which coined the term “software-defined data center,” is aiming directly for the private cloud, as Greg Ferro mentioned in his analysis of VMware’s acquisition of Nicira Networks.

Fighting Inevitability

Still, it hasn’t happened yet, even though it will happen. Senior staff and executives at the incumbent vendors know what’s happening; they know they’re fighting against an inevitability, but fight it they must. Their organizations aren’t built to go with this flow, so they will resist it.

That’s where we find ourselves. The signal-to-noise ratio isn’t great. It’s a time marked by disruption and turmoil. The dust and smoke will clear, though. We can see which way the wind is blowing.

Network-Virtualization Startup PLUMgrid Announces Funding, Reveals Little

Admit it, you thought I’d lost interest in software-defined networking (SDN), didn’t you?

But you know that couldn’t be true. I’m still interested in SDN and how it facilitates network virtualization, network programmability, and what the empire-building folks at EMC/VMware are billing as the software-defined data center, which obviously encompasses more than just networking.

Game On

Apparently I’m not the only one who retains an abiding interest in SDN. In the immediate wake of VMware’s headline-grabbing acquisition of network-virtualization startup Nicira Networks, entrepreneurs and venture capitalists want us to know that the game has just begun.

Last week, for example, we learned that PLUMgrid, a network-virtualization startup in the irritatingly opaque state of development known as stealth mode, has raised $10.7 million in first-round funding led by moneybags VCs U.S. Venture Partners (USVP) and Hummer Winblad Venture Partners. USVP’s Chris Rust and Hummer Winblad’s Lars Leckie have joined PLUMgrid’s board of directors. You can learn more about the individual board members and the company’s executive team, which includes former Cisco employees who were involved in the networking giant’s early dalliance with OpenFlow a few years ago, by perusing the biographies on the PLUMgrid website.

Looking for Clues 

But don’t expect the website to provide a helpful description of the products and technologies that PLUMgrid is developing, apparently in consultation with prospective early customers. We’ll have to wait until the end of this year, or early next year, for PLUMgrid to disclose and discuss its products.

For now, what we get is a game of technology charades, in which PLUMgrid executives, including CEO Awais Nemat, drop hints about what the company might be doing and their media interlocutors then guess at what it all means. It’s amusing at times, but it’s not illuminating.

At SDNCentral, Matt Palmer surmises that PLUMgrid might be playing in “the service orchestration arena for both physical and virtual networks.” In an article written by Jim Duffy at Network World, we learn that PLUMgrid sees its technology as having applicability beyond the parameters of network virtualization. In the same article, PLUMgrid’s Nemat expresses reservations about OpenFlow. To wit:

 “It is a great concept (of decoupling the control plane from the data plane) but it is a demonstration of a concept. Is OpenFlow the right architecture for that separation? That remains to be seen.”

More to Come

That observation is somewhat reminiscent of what Scott Shenker, Nicira co-founder and chief scientist and a professor in the Electrical Engineering and Computer Science Department at the University of California at Berkeley, had to say about OpenFlow last year. (Shenker also is a co-founder and officer of the Open Networking Foundation, a champion and leading proponent of OpenFlow.)

What we know for certain about PLUMgrid is that it is based in Sunnyvale, Calif., and plans to sell its network-virtualization software to businesses that manage physical, virtual, and cloud data centers. In a few months, perhaps before the end of the year, we’ll know more.

Xsigo: Hardware Play for Oracle, Not SDN

When I wrote about Xsigo earlier this year, I noted that many saw Oracle as a potential acquirer of the I/O virtualization vendor. Yesterday morning, Oracle made those observers look prescient, pulling the trigger on a transaction of undisclosed value.

Chris Mellor at The Register calculates that Oracle might have paid about $800 million for Xsigo, but we don’t know. What we do know is that Xsigo’s financial backers were looking for an exit. We also know that Oracle was willing to accommodate it.

For the Love of InfiniBand, It’s Not SDN

Some think Oracle bought a software-defined networking (SDN) company. I was shocked at how many journalists and pundits repeated the mantra that Oracle had moved into SDN with its Xsigo acquisition. That is not right, folks, and knowledgeable observers have tried to rectify that misconception.

I’ve gotten over a killer flu, and I have a residual sinus headache that sours my usually sunny disposition, so I’m in no mood to deliver a remedial primer on the fundamentals of SDN. Suffice it to say, readers of this forum and those familiar with the pronouncements of the ONF will understand that what Xsigo does, namely I/O virtualization, is not SDN. That is not to say that what Xsigo does is not valuable, perhaps especially to Oracle. Nonetheless, it is not SDN.

Incidentally, I have seen a few commentators throwing stones at the Oracle marketing department for depicting Xsigo as an SDN player, comparing it to Nicira Networks, which VMware is in the process of acquiring for the princely sum of $1.26 billion. It’s probably true that Oracle’s marketing mavens are trying to gild their new lily by covering it with splashes of SDN gold, but, truth be told, the marketing team at Xsigo began dressing their company in SDN garb earlier this year, when it became increasingly clear that SDN was a lot more than an ephemeral science project involving OpenFlow and boffins in lab coats.

Why Confuse? It’ll be Obvious Soon Enough

At Network Computing, Howard Marks tries to get everybody onside. I encourage you to read his piece in its entirety, because it provides some helpful background and context, but his superbly understated money quote is this one: “I’ve long been intrigued by the concept of I/O virtualization, but I think calling it software-defined networking is a stretch.”

In this industry, words are stretched and twisted like origami until we can no longer recognize their meaning. The result, more often than not, is befuddlement and confusion, as we witnessed yesterday, an outcome that really doesn’t help anybody. In fact, I would argue that Oracle and Xsigo have done themselves a disservice by playing the SDN card.

As Marks points out, “Xsigo’s use of InfiniBand is a good fit with Oracle’s Exadata and other clustered solutions.” What’s more, Matt Palmer, who notes that Xsigo is “not really an SDN acquisition,” also writes that “Oracle is the perfect home for Xsigo.” Palmer makes the salient point that Xsigo is essentially a hardware play for Oracle, one that aligns with Oracle’s hardware-centric approaches to compute and storage.

Oracle: More Like Cisco Than Like VMware

Oracle could have explained its strategy and detailed the synergies between Xsigo and its family of hardware-engineered “Exasystems” (Exadata and Exalogic) —  and, to be fair, it provided some elucidation (see slide 11 for a concise summary) — but it muddied the waters with SDN misdirection, confusing some and antagonizing others.

Perhaps my analysis is too crude, but I see a sharp divergence between the strategic direction VMware is heading with its acquisition of Nicira and the path Oracle is taking with its Exasystems and Xsigo. Remember, Oracle, after the Sun acquisition, became a proprietary hardware vendor. Its focus is on embedding proprietary hooks and competitive differentiation into its hardware, much like Cisco Systems and the other converged-infrastructure players.

VMware’s conception of a software-defined data center is a completely different proposition. Both offer virtualization, both offer programmability, but VMware treats the underlying abstracted hardware as an undifferentiated resource pool. Conversely, Oracle and Cisco want their engineered hardware to play integral roles in data-center virtualization. Engineered hardware is what they do and who they are.

Taking the Malocchio in New Directions

In that vein, I expect Oracle to look increasingly like Cisco, at least on the infrastructure side of the house. Does that mean Oracle soon will acquire a storage player, such as NetApp, or perhaps another networking company to fill out its data-center portfolio? Maybe the latter first, because Xsigo, whatever its merits, is an I/O virtualization vendor, not a switching or routing vendor. Oracle still has a networking gap.

For reasons already belabored, Oracle is an improbable SDN player. I don’t see it as the likeliest buyer of, say, Big Switch Networks. IBM is more likely to take that path, and I might even get around to explaining why in a subsequent post. Instead, I could foresee Oracle taking out somebody like Brocade, presuming the price is right, or perhaps Extreme Networks. Both vendors have been on and off the auction block, and though Oracle’s Larry Ellison once disavowed acquisitive interest in Brocade, circumstances and Oracle’s disposition have changed markedly since then.

Oracle, which has entertained so many bitter adversaries over the years — IBM, SAP, Microsoft, Salesforce, and HP among them — now appears ready to cast its “evil eye” toward Cisco.

Some Thoughts on VMware’s Strategic Acquisition of Nicira

If you were a regular or occasional reader of Nicira Networks CTO Martin Casado’s blog, Network Heresy, you’ll know that his penultimate post dealt with network virtualization, a topic of obvious interest to him and his company. He had written about network virtualization many times, and though Casado would not describe the posts as such, they must have looked like compelling sales pitches to the strategic thinkers at VMware.

Yesterday, as probably everyone reading this post knows, VMware announced its acquisition of Nicira for $1.26 billion. VMware will pay $1.05 billion in cash and $210 million in unvested equity awards. The ubiquitous Frank Quattrone and his Qatalyst Partners, which reportedly had been hired previously to shop Brocade Communications, served as Nicira’s adviser.

Strategic Buy

VMware should have surprised no one when it emphasized that its acquisition of Nicira was a strategic move, likely to pay off in years to come, rather than one that will produce appreciable near-term revenue. As Reuters and the New York Times noted, VMware’s buy price for Nicira was 25 times the amount ($50 million) invested in the company by its financial backers, which include venture-capital firms Andreessen Horowitz, Lightspeed, and NEA. Diane Greene, co-founder and former CEO of VMware — replaced four years ago by Paul Maritz — had an “angel” stake in Nicira, as did Andy Rachleff, a former general partner at Benchmark Capital.

Despite its acquisition of Nicira, VMware says it’s not “at war” with Cisco. Technically, that’s correct. VMware and its parent company, EMC, will continue to do business with Cisco as they add meat to the bones of their data-center virtualization strategy. But the die was cast, and  Cisco should have known it. There were intimations previously that the relationship between Cisco and EMC had been infected by mutual suspicion, and VMware’s acquisition of Nicira adds to the fear and loathing. Will Cisco, as rumored, move into storage? How will Insieme, helmed by Cisco’s aging switching gods, deliver a rebuttal to VMware’s networking aspirations? It won’t be too long before the answers trickle out.

Still, for now, Cisco, EMC, and VMware will protest that it’s business as usual. In some ways, that will be true, but it will also be a type of strategic misdirection. The relationship between EMC and Cisco will not be the same as it was before yesterday’s news hit the wires. When these partners get together for meetings, candor could be conspicuous by its absence.

Acquisitive Roads Not Traveled

Some have posited that Cisco might have acquired Nicira if VMware had not beaten it to the punch. I don’t know about that. Perhaps Cisco might have bought Nicira if the asking price were low, enabling Cisco to effectively kill the startup and be done with it. But Cisco would not have paid $1.26 billion for a company whose approach to networking directly contradicts Cisco’s hardware-based business model and market dominance. One typically doesn’t pay that much to spike a company, though I suppose if the prospective buyer were concerned enough about a strategic technology shift and a major market inflection, it might do so. In this case, though, I suspect Cisco was blindsided by VMware. It just didn’t see this coming — at least not now, not at such an early stage of Nicira’s development.

Similarly, I didn’t see Microsoft or Citrix as buyers of Nicira. Microsoft is distracted by its cloud-service provider aspirations, and the $1.26 billion would have been too rich for Citrix.

IBM’s Moves and Cisco’s Overseas Cash Hoard

One company I had envisioned as a potential (though less likely) acquirer of Nicira was IBM, which already has a vSwitch. IBM might now settle for the SDN-controller technology available from Big Switch Networks. The two have been working together on IBM’s Open Data Center Interoperable Network (ODIN), and Big Switch’s technology fits well with IBM’s PureSystems and its top-down model of having application workloads command and control  virtualized infrastructure. As the second network-virtualization domino to fall, Big Switch likely will go for a lower price than did Nicira.

On Twitter, Dell’s Brad Hedlund asked whether Cisco would use its vast cash hoard to strike back with a bold acquisition of its own. Cisco has two problems here. First, I don’t see an acquisition that would effectively blunt VMware’s move. Second, about 90 percent of Cisco’s cash (more than $42 billion) is offshore, and CEO John Chambers doesn’t want to take a tax hit on its repatriation. He had been hoping for a “tax holiday” from the U.S. government, but that’s not going to happen in the middle of an election campaign, during a macroeconomic slump in which plenty of working Americans are struggling to make ends meet. That means a significant U.S.-based acquisition likely is off the table, unless the target company is very small or is willing to take Cisco stock instead of cash.

Cisco’s Innovator’s Dilemma

Oh, and there’s a third problem for Cisco, mentioned earlier in this prolix post. Cisco doesn’t want to embrace this SDN stuff. Cisco would rather resist it. The Cisco ONE announcement really was about Cisco’s take on network programmability, not about SDN-type virtualization in which overlay networks run atop an underlying physical network.

Cisco is caught in a classic innovator’s dilemma, held captive by the success it has enjoyed selling prodigious amounts of networking gear to its customers, and I don’t think it can extricate itself. It’s built a huge and massively successful business selling a hardware-based value proposition predicated on switches and routers. It has software, but it’s not really a software company.

For Cisco, the customer value, the proprietary hooks, are in its boxes. Its whole business model — which, again, has been tremendously successful — is based around that premise. The entire company is built around that business model. Cisco eventually will have to reinvent itself, like IBM did after it failed to adapt to client-server computing, but the day of reckoning hasn’t arrived.

On the Defensive

Expect Cisco to continue to talk about the northbound interface (which can provide intelligence from the switch) and about network programmability, but don’t expect networking’s big leopard to change its spots. Cisco will try to portray the situation differently, but it’s defending rather than attacking, trying to hold off the software-based marauders of infrastructure virtualization as long as possible. The doomsday clock on when they’ll arrive in Cisco data centers just moved up a few ticks with VMware’s acquisition of Nicira.

What about the other networking players? Sadly, HP hasn’t figured out what to do about SDN, even though OpenFlow is available on its former ProCurve switches. HP has a toe dipped in the SDN pool, but it doesn’t seem willing to take the initiative. Juniper, which previously displayed ingenuity in bringing forward QFabric, is scrambling for an answer. Brocade is pragmatically embracing hybrid control planes to maintain account presence and margins in the near- to intermediate-term.

Arista Networks, for its part, might be better positioned to compete on networking’s new playing field. Arista Networks’ CEO Jayshree Ullal had the following to say about yesterday’s news:

“It’s exciting to see the return of innovative networking companies and the appreciation for great talent/technology. Software Defined Networking (SDN) is indeed disrupting legacy vendors. As a key partner of VMware and co-innovator in VXLANs, we welcome the interoperability of Nicira and VMWare controllers with Arista EOS.”

Arista’s Options

What’s interesting here is that Arista, which invariably presents its Extensible OS (EOS) as “controller friendly,” earlier this year demonstrated interoperability with controllers from VMware, Big Switch Networks, and Nebula, which has built a cloud controller for OpenStack.

One of Nebula’s investors is Andy Bechtolsheim, whom knowledgeable observers will recognize as the chief development officer (CDO) of, and major investor in, Arista Networks.  It is possible that Bechtolsheim sees a potential fit between the two companies — one building a cloud controller and one delivering cloud networking. To add fuel to this particular fire, which may or may not emit smoke, note that the Nebula cloud controller already features Arista technology, and that Nebula is hiring a senior network engineer, who ideally would have “experience with cloud infrastructure (OpenStack, AWS, etc. . . .  and familiarity with OpenFlow and Open vSwitch.”

 Open or Closed?

Speaking of Open vSwitch, Matt Palmer at SDN Central will feel some vindication now that VMware has purchased a company whose engineering team has made significant contributions to the OVS code. Palmer doubtless will cast a wary eye on VMware’s intentions toward OVS, but both Steve Herrod, VMware’s CTO, and Martin Casado, Nicira’s CTO, have provided written assurances that their companies, now combining, will not retreat from commitments to OVS and to OpenFlow and Quantum, the OpenStack networking project.

Meanwhile, GigaOm’s Derrick Harris thinks it would be bad business for VMware to jilt the open-source community, particularly in relation to hypervisors, which “have to be treated as the workers that merely carry out the management layer’s commands. If all they’re there to do is create virtual machines that are part of a resource pool, the hypervisor shouldn’t really matter.”

This seems about right. In this brave new world of virtualized infrastructure, the ultimate value will reside in an intelligent management layer.

PS: I wrote this post under a slight fever and a throbbing headache, so I would not be surprised to discover belatedly that it contains at least a couple typographical errors. Please accept my apologies in advance.

Dell’s Steady Progression in Converged Infrastructure

With its second annual Dell Storage Forum in Boston providing the backdrop, Dell made a converged-infrastructure announcement this week.  (The company briefed me under embargo late last week.)

The press release is available on the company’s website, but I’d like to draw attention to a few aspects of the announcement that I consider noteworthy.

First off, Dell now is positioned to offer its customers a full complement of converged infrastructure, spanning server, storage, and networking hardware, as well as management software. For customers seeking a single-vendor, one-throat-to-choke solution, this puts Dell at parity with IBM and HP, while Cisco still must partner with EMC or with NetApp for its storage technology.

Bringing the Storage

Until this announcement, Dell was lacking the storage ingredients. Now, with what Dell is calling the Dell Converged Blade Data Center solution, the company is adding its EqualLogic iSCSI Blade Arrays to Dell PowerEdge blade servers and Dell Force10 MXL blade switching. Dell says this package gives customers an entire data center within a single blade enclosure, streamlining operations and management, and thereby saving money.

Dell’s other converged-infrastructure offering is the Dell vStart 1000. For this iteration of vStart, Dell is including, for the first time, its Compellent storage and Force10 networking gear in one integrated rack for private-cloud environments.

The vStart 1000 comes in two configurations: the vStart 1000m and the vStart 1000v. The packages are nearly identical — PowerEdge M620 servers, PowerEdge R620 management servers, Dell Compellent Series 40 storage, Dell Force10 S4810 ToR Networking, plus Brocade 5100 ToR Fibre-Channel Switches — but the vStart 1000m comes with Windows Server 2008 R2 Datacenter (with the Hyper-V hypervisor), whereas the vStart 1000v features trial editions of VMware vCenter and VMware vSphere (with the ESXi hypervisor).

As an aside, it’s worth mentioning that Dell’s inclusion of Brocade’s Fibre-Channel switches confirms that Dell is keeping that partnership alive to satisfy customers’ FC requirements.

Full Value from Acquisitions

In summary, then, Dell is delivering converged infrastructure with both of its in-house storage options, demonstrating that it has fully integrated its major hardware acquisitions into the mix. It’s covering as much converged ground as it can with this announcement.

Nonetheless, it’s fair to ask where Dell will find customers for its converged offerings. During my briefing with Dell, I was told that mid-market was the real sweet spot, though Dell also sees departmental opportunities in large enterprises.

The mid-market, though, is a smart choice, not only because the various technology pieces, individually and collectively, seem well suited to the purpose, but also because Dell, given its roots and lineage, is a natural player in that space. Dell has a strong mandate to contest the mid-market, where it can hold its own against any of its larger converged-infrastructure rivals.

Mid-Market Sweet Spot

What’s more, the mid-market — unlike cloud-service providers today and some large enterprises in the not-too-distant future — is unlikely to have the inclination, resources, and skills to pursue a DIY, software-driven, DevOps-oriented variant of converged infrastructure that might involve bare-bones hardware from Asian ODMs. At the end of the day, converged infrastructure is sold as packaged hardware, and paying customers will need to perceive and realize value from buying the boxes.

The mid-market would seem more than receptive to the value proposition that Dell is selling, which is that its converged infrastructure will reduce the complexity of IT management and deliver operational cost savings.

This finally leads us to a discussion of Dell’s take on converged infrastructure. As noted in an eChannelLine article, Dell’s notion of converged infrastructure encompasses operations management, services management, and applications management. As Dell continues down the acquisition trail, we should expect the company to place greater emphasis on software-based intelligence in those areas.

That, too, would be a smart move. The battle never ends, but Dell — despite its struggles in the PC market — is now more than punching its own weight in converged infrastructure.

Cisco’s Storage Trap

Recent commentary from Barclays Capital analyst Jeff Kvaal has me wondering whether  Cisco might push into the storage market. In turn, I’ve begun to think about a strategic drift at Cisco that has been apparent for the last few years.

But let’s discuss Cisco and storage first, then consider the matter within a broader context.

Risks, Rewards, and Precedents

Obviously a move into storage would involve significant risks as well as potential rewards. Cisco would have to think carefully, as it presumably has done, about the likely consequences and implications of such a move. The stakes are high, and other parties — current competitors and partners alike — would not sit idly on their hands.

Then again, Cisco has been down this road before, when it chose to start selling servers rather than relying on boxes from partners, such as HP and Dell. Today, of course, Cisco partners with EMC and NetApp for storage gear. Citing the precedent of Cisco’s server incursion, one could make the case that Cisco might be tempted to call the same play.

After all, we’re entering a period of converged and virtualized infrastructure in the data center, where private and public clouds overlap and merge. In such a world, customers might wish to get well-integrated compute, networking, and storage infrastructure from a single vendor. That’s a premise already accepted at HP and Dell. Meanwhile, it seems increasingly likely data-center infrastructure is coming together, in one way or another, in service of application workloads.

Limits to Growth?

Cisco also has a growth problem. Despite attempts at strategic diversification, including failed ventures in consumer markets (Flip, anyone?), Cisco still hasn’t found a top-line driver that can help it expand the business while supporting its traditional margins. Cisco has pounded the table perennially for videoconferencing and telepresence, but it’s not clear that Cisco will see as much benefit from the proliferation of video collaboration as once was assumed.

To complicate matters, storm clouds are appearing on the horizon, with Cisco’s core businesses of switching and routing threatened by the interrelated developments of service-provider alienation and software-defined networking (SDN). Cisco’s revenues aren’t about to fall off a cliff by any means, but neither are they on the cusp of a second-wind surge.

Such uncertain prospects must concern Cisco’s board of directors, its CEO John Chambers, and its institutional investors.

Suspicious Minds

In storage, Cisco currently has marriages of mutual convenience with EMC (VBlocks and the sometimes-strained VCE joint venture) and with NetApp (the FlexPod reference architecture).  The lyrics of Mark James’ song Suspicious Minds are evocative of what’s transpiring between Cisco and these storage vendors. The problem is not only that Cisco is bigamous, but that the networking giant might have another arrangement in mind that leaves both partners jilted.

Neither EMC nor NetApp is oblivious to the danger, and each has taken care to reduce its strategic reliance on Cisco. Conversely, Cisco would be exposed to substantial risks if it were to abandon its existing partnerships in favor of a go-it-alone approach to storage.

I think that’s particularly true in the case of EMC, which is the majority owner of server-virtualization market leader VMware as well as a storage vendor. The corporate tandem of VMware and EMC carries considerable enterprise clout, and Cisco is likely to be understandably reluctant to see the duo become its adversaries.

Caught in a Trap

Still, Cisco has boxed itself into a strategic corner. It needs growth, it hasn’t been able to find it from diversification away from the data center, and it could easily see the potential of broadening its reach from networking and servers to storage. A few years ago, the logical choice might have been for Cisco to acquire EMC. Cisco had the market capitalization and the onshore cash to pull it off five years ago, perhaps even three years ago.

Since then, though, the companies’ market fortunes have diverged. EMC now has a market capitalization of about $54 billion, while Cisco’s is slightly more than $90 billion. Even if Cisco could find a way of repatriating its offshore cash hoard without taking a stiff hit from the U.S. taxman, it wouldn’t have the cash to pull off an acquisition of EMC, whose shareholders doubtless would be disinclined to accept Cisco stock as part of a proposed transaction.

Therefore, even if it wanted to do so, Cisco cannot acquire EMC. It might have been a good move at one time, but it isn’t practical now.

Losing Control

Even NetApp, with a market capitalization of more than $12.1 billion, would rate as the biggest purchase by far in Cisco’s storied history of acquisitions. Cisco could pull it off, but then it would have to try to further counter and commoditize VMware’s virtualization and cloud-management presence through a fervent embrace of something like OpenStack or a potential acquisition of Citrix. I don’t know whether Cisco is ready for either option.

Actually, I don’t see an easy exit from this dilemma for Cisco. It’s mired in somewhat beneficial but inherently limiting and mutually distrustful relationships with two major storage players. It would probably like to own storage just as it owns servers, so that it might offer a full-fledged converged infrastructure stack, but it has let the data-center grass grow under its feet. Just as it missed a beat and failed to harness virtualization and cloud as well as it might have done, it has stumbled similarly on storage.

The status quo is likely to prevail until something breaks. As we all know, however, making no decision effectively is a decision, and it carries consequences. Increasingly, and to an extent that is unprecedented, Cisco is losing control of its strategic destiny.

Cheriton Sees Opportunity in Infrastructure

When I wrote my first post on this blog, way back in 2006, I assumed that technology infrastructure largely was a spent force. I expected incremental enhancements, gradual advances, but I didn’t anticipate another major boom or a significant disruption of the established order in what once had been a vibrant technology space.

While the technology industry as a whole can suffer from blinkered, willful optimism, perhaps I was afflicted by a different condition entirely. I might have been too pessimistic, too gloomy, dispirited by the technology downturn of the early 2000s and the lack of a meaningful, sustained recovery in the years that immediately followed.

By the way, when I refer to technology, I’m not talking about social networking such as Facebook. I understand that there’s a lot of technology behind the scenes at Facebook, but the customer-facing “social” phenomenon leaves me cold. I never did see the point of Facebook from a user’s perspective, though I understood how it could serve as an unprecedented data-mining machine for advertisers.

Opportunity Renewed

Fortunately, though, I was wrong about the decline and fall of infrastructure. It took a while, but a new era of infrastructure has arisen, based on virtualization, orchestration, and automation. Technological possibilities that we could only dream about more than a decade ago are now possible. In the networking realm, software-defined networking (SDN) is enabling comparatively outmoded network infrastructure to catch up with compute and, to a lesser degree, storage infrastructure as the promise of an application-driven, programmable data center comes into clearer view.

Suddenly, at long last, there’s new opportunity in infrastructure.

You don’t have to take my word for it, either. There are people who’ve designed and developed industry-leading technologies who espouse the same opinion. Some of these people are billionaires, and they’ve backed their convictions with substantial sums of money, investing in technologies and companies with clear mandates to remake IT infrastructure.

Outrageously Wealthy Canuck

One of those people is David Cheriton, a billionaire who wears many hats. He is Professor of Computer Science and Electrical Engineering at Stanford University, where he researches networking and distributed systems, and he also serves as a co-founder and chief scientist at Arista Networks. He’s also an investor in startup companies. Back in 1998, one early-stage company in which he invested, along with Arista co-founder Andy Bechtolsheim, was Google.  The duo made a similar early investment in VMware, so they’ve done okay.

Born in Vancouver, raised in Edmonton, Alberta, and ranked 37th on a Wikipedia list of “richest Canadians”** — Forbes ranks him 21st among outrageously wealthy Canucks  — Cheriton recently spoke about innovation and entrepreneurship at a Churchill Club event in Silicon Valley. The event was co-hosted and organized by the Hua Yuan Science and Technology Association and also featured Ken Xie, who founded NetScreen (acquired by Juniper Networks in 2004) and is now president and CEO of unified-threat-management/firewall vendor Fortinet, a company he also founded.

In addition to his apparent knack as an investor, Cheriton has considerable firsthand experience as an entrepreneur and an innovator. Before he and Bechtolsheim combined forces at Arista Networks,  they founded Granite Systems, a Gigabit-Ethernet switching concern that was acquired by Cisco in 1996 for about $220 million in stock, back when shares of Cisco were continuously on the rise.  Subsequently, after the Google investment, Bechtolsheim and Cheriton combined forces again to found Kealia, which specialized in server technology based on AMD’s Opteron microprocessor.  That company was acquired by Sun Microsystems in 2004, providing technology included in the Sun Fire X4500 storage product.

Room for Improvement

In 2005, Cheriton and Bechtolsheim followed up with Arista, then called Arastra, and its 10-GbE switching technology, which brings us to the approximate present and back to something Cheriton said at the Churchill Club event late last month. Noting that people tend to become preoccupied with the latest developments in social networking and mobility, Cheriton expressed his enthusiasm for infrastructure, as an investment vehicle as well as an area in which he has an abiding technical interest. As quoted in a BusinessWeek article, Cheriton said: “I think there is an opportunity to go back and say, ‘Gee, I think there’s lot of room for improvement in the infrastructure.’ ”

Reinforcing that point, he noted that technology infrastructure today is predicated on ideas that are about 30 years old. The network was the place to start the infrastructure refurbishment, Cheriton believed, and Arista Networks grew from that conviction.

But Cheriton hasn’t stopped there. He also founded a company called Optumsoft, about which not much is known. On its website, Optumsoft is described as an early-stage startup company “taking distributed computing and distributed software development mainstream.” Quoting from the website:

Recent advancements in multi-core computing systems, coupled with the ever increasing functional and performance requirements of software has created an exciting market opportunity for addressing the programmatic and architectural issues involved in modern software development. Optumsoft is addressing this growing market with a novel technology approach that is transparent, scalable, and portable, resulting in significant improvement to the development and maintenance of distributed/parallel structured software systems. Early production usage by commercial clients has validated the technology and value proposition.

Last fall, an anonymous source suggested on Quora that what Optumsoft was building related to “how to structure object-oriented RPC in a way that makes it easy to build robust systems.  The technology behind Arista’s EOS is based on some of these ideas, as was software structure at a previous startup, Kealia.  The technology includes an IDL and a C++ runtime, similar to what you’d get using CORBA.”

Nebula and Tintri

On the investment side, Cheriton and Bechtolsheim have put money into Nebula, which has venture-capital backing from Kleiner Perkins Caufield & Byers and Highland Capital Partners. Built on OpenStack, the Nebula Enterprise Cloud Appliance is designed to provision and configure flexible, scalable cloud-computing infrastructure. Although it doesn’t say so on the Nebula website, previous reports indicated that Arista’s networking technology is included in the Nebula appliance.

According to the BusinessWeek article,  Cheriton also has a stake in Tintri, co-founded by Kieran Harty and Mark Gritter. Harty was EVP of R&D at VMware for seven years, and Gritter was one of the first of Cheriton’s employees at Kealia. They’ve assembled a PhD-laden engineering team that has developed a virtual-machine-aware storage appliance designed for virtualized environments, which the company says have been underserved by older storage technology that apparently contributes to “VM stall.”

Another early-stage investment that Cheriton made was in Aster Data Systems, a purveyor of a massively parallel DBMS that runs on clustered commodity servers. Already a minority owner of Aster, Teradata bought the 89% of the company it didn’t own for $263 million last year.

Cheriton has made bets on infrastructure, and he’ll likely make others. It’s an encouraging sign for those of us who gravitate to that part of the industry.

(**No, I am not on the list, but thanks for asking.)

Why Nicira Says Networking Doesn’t Need a VMware

At Martin Casado’s Network Heresy blog yesterday, a guest post was offered by Andrew Lambeth, who once led the vDS distributed switching project at VMware but is now, like Casado, ensconced at Nicira.  The post was titled provocatively, “Networking Doesn’t Need a VMWare.”

It was different in substance and tone from Casado’s posts, which typically are balanced, logical, and carefully constructed. I appreciate those qualities. Words matter, and Casado invariably takes the time to choose the right ones and to compose posts that communicate complicated ideas clearly. Even better, he does so without undue vendor bias.

Maybe he’s really a shrewd master of manipulation, but I always get the impression Casado is sincere, that he means what he says and says what he means.  One actually learns something from reading his blog. That’s always refreshing, in this industry or any other.

Defining (or Redefining) Network Virtualization 

As I said, the post from Lambeth was a departure in more ways than one. It was logical and carefully constructed, just like Casado’s writing, but it did not attempt to achieve any sort of balance. Instead, given the venue, it was strikingly partisan and tendentious.

Despite the technical window-dressing, it was devised to differentiate and distinguish Nicira’s approach to network virtualization from those of other players in the space, established vendors and startups alike. It also sought, implicitly if not explicitly, to derogate OpenFlow in the still-unfolding SDN hierarchy of value.

Just to summarize, though I encourage you to read the post yourself, Lambeth argues that, while there’s industry consensus on the desirability of network virtualization, there’s a significant difference of opinion on how it should be achieved. Network virtualization is not at all the same as server virtualization, he writes, citing the need in the former for “scale (lots of it) and distributed state consistency.” He concludes by saying that the current preoccupation with the data path, the realm of OpenFlow, is akin to “worrying about a trivial component of an otherwise enormously challenging problem.”

Positioning and Differentiation

Commenting on Lambeth’s post, Chris Hoff, formerly of Cisco and now with Juniper Networks (and a prolific tweeter,  I might add), concluded correctly that it “smacks of positioning against both OpenFlow as well as other network virtualization startups.”

In issuing that positioning statement, Nicira not only is attempting to distance itself from the OpenFlow crowd; it also has at least a couple specific vendors in mind.

One obvious target is Big Switch Networks. If you visit that vendor’s website,  you will find that it expresses unqualified love for OpenFlow on its home page. It also says candidly that “networking needs a VMware.” Diametrically opposing that view, Nicira says networking doesn’t need a VMware. Furthermore, as I noted in a previous post, Nicira continues to  expend considerable effort to downplay the significance of OpenFlow.

Thinking Beyond Big Switch

But Nicira is thinking about competitors other than Big Switch, too. Readers of this blog will know that one of my recurring themes — some would call it a conspiracy theory — is that the VCE partnership between Cisco and EMC is subject to increasing strain and tension. In short, EMC acquired VMware, Cisco didn’t, and now virtualization — and maybe VMware — is becoming integral to the future of networking.

Nicira’s Lambeth, formerly involved with distributed switching at VMware, and his counterparts at Big Switch agree that network virtualization is important. Where they disagree, perhaps, is in how it should be achieved.

Meanwhile, both vendors at one time or another, as Lambeth concedes at the outset of his post, have espoused variations on the claim that “networking needs a VMware.” Apparently, the team at Nicira has reconsidered that premise and is going in a different direction.

It might have adjusted course for reasons other than (or in addition to) those relating to architecture and technological requirements.

VMware’s Networking Ambitions

You see, VMware seems to believe that networking already has a VMware, whose name, conveniently enough, is VMware. Circumstantial evidence, including a recent post by VMware CTO Steve Herrod, suggests that VMware has ambitions that extend beyond server virtualization and well into network virtualization. Back in June, Greg Ferro also noted VMware’s interest in carving out a significant role for itself in network virtualization. In his commentary, Ferro cited a post by Allwyn Sequeira, security CTO at VMware.

Herrod has predicted that “software-defined networking will become a mainstay of data-center architectures” in 2012. It’s safe to assume that he foresees his company playing a major part in making his prognostication a reality.

Dell’s Bid for Data-Center Distinction

Since Dell’s acquisition of Force10 Networks, many of us have wondered how Dell’s networking business, under the leadership of former Cisco Systems executive Dario Zamarian, would chart a course of distinction in data-center networking.

While Zamarian has talked about adding Layer 4-7 network services, presumably through acquisition, what about the bigger picture? We’ve pondered that question, and some have asked it, including one gentleman who posed the query on the blog of Brad Hedlund, another former Ciscoite now at Dell.

Data Center’s Big Picture

The question surfaced in a string of comments that followed Hedlund’s perceptive analysis of Embrane’s recent Heleos unveiling. Specifically, the commenter asked Hedlund to elucidate Dell’s strategic vision in data-center networking. He wanted Hedlund to provide an exposition on how Dell intended to differentiate itself from the likes of Cisco’s UCS/Nexus, Juniper’s QFabric, and Brocade’s VCS.

I quote Hedlund’s response:

 “This may not be the answer you are looking for right now, but .. Consider for a moment that the examples you cite; Cisco UCS/Nexus; Juniper QFabric; Brocade VCS — all are either network only or network centric strategies. Think about that for a second. Take your network hat off for just a minute and consider the data center as a whole. Is the network at the center of the data center universe? Or is network the piece that facilitates the convergence of compute and storage? Is the physical data center network trending toward a feature/performance discussion, or price/performance?

Yes, Dell now has a Tier 1 data center network offering with Force10. And with Force10, Dell can (and will) win in network only conversations. Now consider for a moment what Dell represents as a whole .. a total IT solutions provider of Compute, Storage, Network, Services, and Software. And now consider Dell’s heritage of providing solutions that are open, capable, and affordable.”

Compare and Contrast

It’s a fair enough answer. By reframing the relevant context to encompass the data center in its entirety, rather than just the network infrastructure, Dell can offer an expansive value-based, one-stop narrative that its rivals — at least those cited by the questioner —  cannot match on their own.

Let’s consider Cisco. For all its work with EMC/VMware and NetApp on Vblocks and FlexPods, respectively, Cisco does not provide its own storage technologies for converged infrastructure. Juniper and Brocade are pure networking vendors, dependent on partners for storage, compute, and complementary software and services.

HP, though not cited by the commenter in his question, is one Dell rival that can offer the same pitch. Like Dell, HP offers data-center compute, storage, networking, software, and services. It’s true, though, that HP also resells networking gear, notably Brocade’s Fibre Channel storage-networking switches. The same, of course, applies to Dell, which also continues to resell Brocade’s Fibre Channel switches and maintains — at least for now — a nominal relationship with Juniper.

IBM also warrants mention. Its home-grown networking portfolio is restricted to the range of products it obtained through its acquisition of Blade Network Technologies last year. Like HP, but to a greater degree, IBM resells and OEMs networking gear from other vendors, including Brocade and Juniper. It also OEMs some of its storage portfolio from NetApp, but it also has a growing stable of orchestration and management software, and it definitely has a prodigious services army.

Full-Course Fare 

Caveats aside, Dell can tell a reasonably credible story about its ability to address the full range of data-center requirements. Dell’s success with that strategy will depend not only on its sales execution, but also on its capacity to continually deliver high-quality solutions across the gamut of compute, storage, networking, software, and services. Offering a moderately tasty data-center repast won’t be good enough. If Dell wants customers to patronize it and return for more, it must deliver a savory menu spanning every course of the meal.

To his credit, Hedlund acknowledges that Dell must be “capable.” He also notes that Dell must  be open and affordable. To be sure, Dell doesn’t have the data-center brand equity to extract the proprietary entitlements derived from vendor lock-in, certainly not in the networking sphere, where even Cisco is finding that game to be harder work these days.

Dell, HP, and IBM each might be able to craft a single-vendor narrative that spans the entire data center, but those pitches are only as credible as the solutions the vendors deliver. For many customers, a multivendor infrastructure, especially in a truly interoperable standards-based world, might be preferable to a soup-to-nuts solution from a single vendor. That’s particularly true if the single-vendor alternative has glaring deficiencies and weaknesses, or if it comes with perpetual proprietary overhead and constraints.

Still Early

I think the real differentiation isn’t so much in whether data-center solutions are delivered by a single vendor or by multiple vendors. I suspect the meaningful differentiation will be delivered in how those environments are further virtualized, automated, orchestrated, and managed as coherent unified entities.

Dell has bought itself a seat at the table where that high-stakes game will unfold. But it isn’t alone, and the big cards have yet to be played.

Reflecting on the Big Acquisition Cisco Didn’t Make

It has been nearly eight years since EMC acquired VMware. The acquisition announcement went over the newswires on December 15, 2003. EMC paid approximately $635 million for VMware, and Joe Tucci, EMC’s president and CEO, had this to say about the deal:

“Customers want help simplifying the management of their IT infrastructures. This is more than a storage challenge. Until now, server and storage virtualization have existed as disparate entities. Today, EMC is accelerating the convergence of these two worlds.”

“We’ve been working with the talented VMware team for some time now, and we understand why they are considered one of the hottest technology companies anywhere. With the resources and commitment of EMC behind VMware’s leading server virtualization technologies and the partnerships that help bring these technologies to market, we look forward to a prosperous future together.”

Virtualization Goldmine

Oh, the future was prosperous . . . and then some. It’s a deal that worked out hugely in EMC’s favor. Even though the storage behemoth has spun out VMware in the interim, allowing it to go public, EMC still retains more than 80 percent ownership of its virtualization goldmine.

Consider that EMC paid just $635 million in 2003 to buy the server-virtualization market leader. VMware’s current market capitalization is more than $38 billion. That means EMC’s stake in VMware is worth more than $30 billion, not including the gains it reaped when it took VMware public. I don’t think it’s hyperbolic to suggest that EMC’s purchase of VMware will be remembered as Tucci’s defining moment as EMC chieftain.
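
For readers who like to see the arithmetic, here’s a quick back-of-envelope sketch — in Python, purely for illustration — using only the round figures cited above; the ownership percentage and market capitalization are approximations, so treat the output as ballpark rather than gospel.

```python
# Back-of-envelope math on EMC's VMware stake, using only the round figures
# cited in this post (illustrative approximations, not precise financial data).

purchase_price = 0.635e9   # what EMC paid for VMware in 2003, ~$635 million
vmw_market_cap = 38e9      # VMware's current market capitalization, ~$38 billion
emc_ownership = 0.80       # EMC's retained stake, "more than 80 percent"

stake_value = emc_ownership * vmw_market_cap
multiple = stake_value / purchase_price

print(f"Implied value of EMC's stake: ${stake_value / 1e9:.1f} billion")
print(f"Return multiple on the 2003 purchase price: ~{multiple:.0f}x")
# => roughly $30 billion, or about a 48x multiple
```

Call it roughly a 48x return, and that’s before counting whatever EMC pocketed when it took VMware public.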

Now, let’s consider another vendor that had an opportunity to acquire VMware back in 2003.

Massive Market Cap, Industry Dominance

A few years earlier, at the pinnacle of the dot-com boom in March 2000, Cisco was the most valuable company in the world, sporting a market capitalization of more than US$500 billion.  It was a networking colossus that bestrode the globe, dominating its realm of the industry as much as any other technology company during any other period. (Its only peers in that regard were IBM in the mainframe era and Microsoft and Intel in the client-server epoch.)

Although Juniper Networks brought its first router to market in the fall of 1998 and began to challenge Cisco for routing patronage at many carriers early in the first decade of the new millennium, Cisco remained relatively unscathed in enterprise networking, where its Catalyst switches grew into a multibillion-dollar franchise after it saw off late-1990s competitive challenges from the likes of 3Com, Cabletron, and Nortel.

As had been its wont since its first acquisition, of Crescendo Communications in 1993, Cisco remained an active buyer of technology companies. It bought companies to fortify its technology portfolio inorganically and to preclude competitors from gaining footholds among its expanding installed base of customers.

Non-Buyer’s Remorse?

It’s true that the dot-com bust cooled Cisco’s acquisitive ardor. Nonetheless, the networking giant made nine acquisitions from May 2002 through the end of 2003. The companies Cisco acquired in that span included Hammerhead Networks, Navarro Networks, AYR Networks, Andiamo Systems, Psionic Software, Okena, SignalWorks, Linksys, and Latitude Communications.

The biggest acquisition in that period involved spin-in play Andiamo Systems, which provided the technological foundation for Cisco’s subsequent push to dominate storage networking. Cisco was at risk of paying as much as $2.5 billion for Andiamo, but the actual price tag for that convoluted spin-in transaction was closer to $750 million by the time it finally closed in 2004. The next-biggest Cisco acquisition during that period involved home-networking vendor Linksys, for which Cisco paid about $500 million.

Cisco announced the acquisitions of Hammerhead Networks and Navarro Networks in a single press release. Hammerhead, for which Cisco exchanged common stock valued at up to $173 million, developed software that accelerated the delivery of IP-based billing, security, and QoS; the company was folded into the Cable Business Unit in Cisco’s Network Edge and Aggregation Routing Group. Navarro Networks, for which Cisco exchanged common stock valued at up to $85 million, designed ASIC components for Ethernet switching.

To acquire AYR Networks, a vendor of “high-performance distributed networking services and highly scalable routing software technologies,” Cisco parted with about $113 million in common stock. AYR’s technology was intended to augment Cisco’s IOS software.

Andiamo Factor

Although the facts probably are familiar to many readers, Cisco’s acquisition of Andiamo was noteworthy for several reasons. It was a spin-in acquisition, in which Cisco funded the company to go off and develop technology on its own, only to bring it back in-house later through acquisition. Andiamo was led by its CEO, Buck Gee, and it included a core group of engineers who also had been at Crescendo Communications. The concept and execution of the spin-in move were highly controversial within Cisco, seen as operationally and strategically innovative by many senior executives even as others claimed it engendered envy, invidiousness, and resentment among rank-and-file employees.

No matter: Andiamo was meant to provide market leadership for Cisco in IP-based storage networking and to give Cisco a means of battering Brocade in Fibre Channel. That plan hasn’t come to fruition, with Brocade still leading in a tenacious Fibre Channel market and Cisco banking on Fibre Channel over Ethernet (FCoE) to go from the edge to the core. (The future of storage networking, including the often entertaining Fibre Channel-versus-FCoE debates, is another matter, and not within the purview of this post.)

While we’re on the topic of Andiamo, its former engineers continue to make news. Just this week, former Andiamo engineers Dante Malagrinò and Marco Di Benedetto officially launched Embrane, a company committed to delivering a platform for virtualized L4-7 network services at large cloud service providers. Those two gentlemen also were involved in Cisco’s last big spin-in move, Nuova Systems, which provided the foundation for Cisco’s Unified Computing System (UCS).

As for Cisco’s post-Andiamo acquisition announcements, Okena and Psionic both were involved in intrusion-detection technology. Of the two, Okena represented the larger transaction, valued at about $154 million in stock.

Interestingly, not much is available publicly these days regarding Cisco’s announced acquisition of SignalWorks in March of 2003. If you visit the CrunchBase profile for SignalWorks and click on a link that is supposed to take you to a Cisco press release announcing the deal, you’ll get a “Not Found” message. A search of the Cisco website turns up two press releases — relating to financial results in Cisco’s third and fourth quarters of fiscal year 2003, respectively — that obliquely mention the SignalWorks acquisition. The purchase price of the IP-audio company was about $16 million. CNet also covered the acquisition when it first came to light.

Other Strategic Priorities

Cisco’s last announced acquisitions in that timeframe involved home-networking player Linksys, part of Cisco’s ultimately underachieving bid to become a major player in the consumer space, and web-conferencing vendor Latitude Communications.

And now we get to the crux of this post. Cisco announced a number of acquisitions in 2002 and 2003, but it is the one it didn’t make that reverberates to this day. It was a watershed acquisition, a strategic masterstroke, but it was made by EMC, not by Cisco. The implications continue to resound and probably will ramify for years to come.

Some might contend that Cisco perhaps didn’t grasp the long-term significance of virtualization. Apparently, though, some at Cisco were clamoring for the company to buy VMware. The missed opportunity wasn’t attributable to Cisco failing to see the importance of virtualization — some at Cisco had the prescience to see where the technology would lead — but to the fact that an acquisition of VMware wasn’t considered as high a priority as the Andiamo spin-in for storage networking and the Linksys acquisition for home networking.

Cisco placed its bets elsewhere, perhaps thinking that it had more time to develop a coherent and comprehensive strategy for virtualization. Then EMC made its move.

Missed the Big Chance

To this day, in my view, Cisco is paying an exorbitant opportunity cost for failing to take VMware off the market, leaving it for EMC and ultimately allowing the storage leader, years later, to gain the upper hand in the Virtual Computing Environment (VCE) Company joint venture that delivers UCS-encompassing Vblocks. There’s a rich irony there, too, when one considers that Cisco’s UCS contribution to the Vblock package is represented by technology derived from spin-in Nuova.

But forget about VCE and Vblocks. What about the bigger picture? Although Cisco likes to talk itself up as a leader in virtualization, it’s not nearly as prominent or dominant as it might have been. Is there anybody who would argue that Cisco, had it acquired VMware and then integrated and assimilated it even half as well as it digested Crescendo, wouldn’t have absolutely thrashed all comers in converged data-center infrastructure and cloud infrastructure?

Cisco belatedly recognized its error of omission, but it was too late. By 2009, EMC was not interested in selling its majority stake in VMware to Cisco, and Cisco was in no position to try to obtain it through an acquisition of EMC. In that regard, Cisco’s position has only worsened.

Although EMC’s ownership stake in VMware amounts to about 80 percent (or perhaps even just north of that), it has roughly 98 percent of the voting power in the company, which effectively means EMC steers the ship, regardless of public pronouncements VMware executives might issue regarding their firm being an autonomous corporate entity.

Keeping Cisco Interested but Contained 

Conversely, Cisco owns approximately five percent of VMware’s Class A shares, but none of its Class B shares, and it held just one percent of voting power as of March 2011. As of that same date, EMC owned all of VMware’s 330,000,000 Class B shares and 33,066,050 of its 118,462,369 Class A common shares. Cisco has a stake in VMware, but it’s a small one, and Cisco holds it at the pleasure of EMC, whose objective is to keep Cisco sufficiently interested that it doesn’t pursue other strategic options in data-center virtualization and cloud infrastructure.
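
For those inclined to check the math, here’s a small sketch that reconstructs EMC’s position from the share counts above. The ten-votes-per-share weighting on Class B stock is my assumption — typical of dual-class structures, not a figure taken from this post — and with it the numbers land close to the roughly 80 percent economic and 98 percent voting split described earlier.

```python
# Rough reconstruction of EMC's economic and voting positions in VMware,
# using the share counts cited above (as of March 2011). The ten-votes-per-
# Class-B-share weighting is an assumption, not a figure from this post.

CLASS_B_TOTAL = 330_000_000        # Class B shares, all held by EMC
CLASS_A_TOTAL = 118_462_369        # Class A shares outstanding
EMC_CLASS_A = 33_066_050           # Class A shares held by EMC
VOTES_PER_B = 10                   # assumed Class B voting weight
VOTES_PER_A = 1

total_shares = CLASS_B_TOTAL + CLASS_A_TOTAL
emc_shares = CLASS_B_TOTAL + EMC_CLASS_A
economic_stake = emc_shares / total_shares

total_votes = CLASS_B_TOTAL * VOTES_PER_B + CLASS_A_TOTAL * VOTES_PER_A
emc_votes = CLASS_B_TOTAL * VOTES_PER_B + EMC_CLASS_A * VOTES_PER_A
voting_power = emc_votes / total_votes

print(f"EMC economic stake: {economic_stake:.1%}")   # ~81%
print(f"EMC voting power:   {voting_power:.1%}")     # ~97.5%
```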

The EMC gambit has worked, up to a point. But Cisco, which missed its big chance in 2003, has been trying ever since to reassert its authority. Nuova, and all that flowed from it, was Cisco’s first attempt to regain lost ground, and now it is partnering, to varying degrees, with VMware and EMC competitors such as NetApp, Citrix, and Microsoft. It also has gotten involved with OpenStack and the oVirt Project in a bid to hedge its virtualization bets.

Yes, some of those moves are indicative of coopetition, and Cisco retains its occasionally strained VCE joint venture with EMC and VMware, but Cisco clearly is playing for time, looking for a way to redefine the rules of the game.

What Cisco is trying to do is break an impasse of its own making, a result of strategic choices it made nearly a decade ago.

Vendors Cite Other Paths to SDNs

Jim Duffy at NetworkWorld wrote an article earlier this month on protocol and API alternatives to OpenFlow as software-defined network (SDN) enablers.

It’s true, of course, that OpenFlow is just one mechanism among many that can be used to bring SDNs to fruition. Many of the alternatives cited by Duffy, who quoted vendors and analysts in his piece, have been around longer than OpenFlow. Accordingly, they have been implemented by network-equipment vendors and deployed in commercial networks by enterprises and service providers. So, you know, they have that going for them, and it is not a paltry consideration.

No Panacea

Among the alternatives to OpenFlow mentioned in that article and in a sidebar companion piece were command-line interfaces (CLIs), Simple Network Management Protocol (SNMP), Extensible Messaging and Presence Protocol (XMPP), Network Configuration Protocol (NETCONF), OpenStack, and virtualization APIs in offerings such as VMware’s vSphere.

I understand that different applications require different approaches to SDNs, and I’m staunchly in the reality-based camp that acknowledges OpenFlow is not a networking panacea. As I’ve noted previously on more than one occasion, the Open Networking Foundation (ONF), steered by a board of directors representing leading cloud-service operators, has designs on OpenFlow that will make it — at least initially — more valuable to so-called “web-scale” service providers than to enterprises. Purveyors of switches also get short shrift from the ONF.

So, no, OpenFlow isn’t all things to all SDNs, but neither are the alternative APIs and protocols cited in the NetworkWorld articles. Reality, even in the realm of SDNs, has more than one manifestation.

OpenFlow Fills the Void

For the most part, however, the alternatives to OpenFlow have legacies on their side. They’re tried and tested, and they have delivered value in real-world deployments. Then again, those legacies are double-edged swords. One might well ask — and I suppose I’m doing so here — if the foregoing alternatives to OpenFlow were so proficient at facilitating SDNs, why is OpenFlow the recipient of such perceived need and demonstrable momentum today?

Those pre-existing protocols did many things right, but it’s obvious that they were not perceived to address at least some of the requirements and application scenarios where OpenFlow offers such compelling technological and market potential. The market abhors a vacuum, and OpenFlow has been called forth to fill a need.

Old-School Swagger

Relative to OpenFlow, CLIs seem a particularly poor choice for the realization of SDN-type programmability. In the NetworkWorld companion piece, Arista Networks CEO Jayshree Ullal is quoted as follows:

“There’s more than one way to be open. And there’s more than one way to scale. CLIs may not be a programmable interface with a (user interface) we are used to; but it’s the way real men build real networks today.”

Notwithstanding Ullal’s blatant appeal to engineering machismo, evoking a networking reprise of Saturday Night Live’s old “¿Quien Es Mas Macho?” sketches, I doubt that even the most red-blooded networking professionals would opt for CLIs as a means of SDN fulfillment. In qualifying her statement, Ullal seems to concede as much.

Rubbishing Pretensions

Over at Big Switch Networks, Omar Baldonado isn’t shy about rubbishing CLI pretensions to SDN superstardom. Granted, Big Switch Networks isn’t a disinterested party when it comes to OpenFlow, but neither are any of the other networking vendors, whether happily ensconced on the OpenFlow bandwagon or throwing rotten tomatoes at it from alleys along the parade route.

Baldonado probably does more than is necessary to hammer home his case against CLIs for SDNs, but I think the following excerpt, in which he stresses that CLIs were and are meant to be used to configure network devices, summarizes his argument pithily:

“The CLI was not designed for layers of software above it to program the network. I think we’d all agree that if we were to put our software hats on and design such a programming API, we would not come up with a CLI!”

That seems about right, and I don’t think we need belabor the point further.
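
To make the contrast concrete, here’s a minimal, hypothetical sketch of the two programming models: driving a device’s CLI over SSH and scraping the text it returns, versus pushing a structured configuration change over NETCONF with the ncclient library. The host address, credentials, and interface payload are placeholders for illustration, not a recipe for any particular vendor’s gear.

```python
# Hypothetical contrast between CLI screen-scraping and a structured
# management protocol. Host, credentials, and payload are placeholders.

import paramiko                      # drive the CLI over SSH
from ncclient import manager         # NETCONF client library

HOST, USER, PASSWORD = "192.0.2.1", "admin", "admin"

# --- CLI style: send a human-oriented command, then parse free-form text ---
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(HOST, username=USER, password=PASSWORD)
_, stdout, _ = ssh.exec_command("show interfaces")
raw_text = stdout.read().decode()    # software above must screen-scrape this
ssh.close()

# --- NETCONF style: send structured, machine-readable configuration ---
INTERFACE_CONFIG = """
<config>
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <name>eth0</name>
      <description>set programmatically over NETCONF</description>
    </interface>
  </interfaces>
</config>
"""

with manager.connect(host=HOST, port=830, username=USER,
                     password=PASSWORD, hostkey_verify=False) as conn:
    conn.edit_config(target="running", config=INTERFACE_CONFIG)
```

The CLI path hands back text meant for human eyes, so any software layered above it is reduced to parsing prompts and tables; the NETCONF path exchanges structured data a controller can validate and act on. That’s the gap Baldonado is pointing at, and it’s the same gap OpenFlow addresses at the forwarding plane.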

Other Options

What about some of the other OpenFlow alternatives, though? As I said, I think OpenFlow is well crafted for the purposes the high priests of the Open Networking Foundation have in store for it, but enterprises are a different matter, at least for the foreseeable future (which is perhaps more foreseeable by some than by others, your humble scribe included).

In a subsequent post — I’d like to say it will be my next one, but something else, doubtless shiny and superficially appealing, will probably intrude to capture my attention — I’ll be looking at OpenStack’s applicability in an SDN context.