Category Archives: Citrix

F5’s Look Ahead

I’ve always admired how F5 Networks built its business. Against what seemed heavy odds at the time, F5 took the fight to Cisco Systems and established market leadership in load balancing, which subsequently morphed into market leadership in application delivery controllers (ADC).

F5 now talks about its “Intelligent Services Platform,” which “connects any user, anywhere, from any device to the best application resources, independent of infrastructure.”

To be sure, as various permutations of cloud computing take hold and mobile devices proliferate, the market is shifting, and F5 is attempting to move with it. To get a feel for how F5 sees the world, where it sees things going, and how it intends to meet new challenges, you might want to have a look at a 211-slide (yes, that many) presentation that company executives made to analysts and investors yesterday. 

By its nature, the presentation is mostly high-level stuff, but it offers interesting nuggets on markets, products, technologies, and partnerships.  

Some Thoughts on VMware’s Strategic Acquisition of Nicira

If you were a regular or even occasional reader of Nicira Networks CTO Martin Casado’s blog, Network Heresy, you’ll know that his penultimate post dealt with network virtualization, a topic of obvious interest to him and his company. He had written about network virtualization many times, and though Casado would not have described the posts as such, they must have looked like compelling sales pitches to the strategic thinkers at VMware.

Yesterday, as probably everyone reading this post knows, VMware announced its acquisition of Nicira for $1.26 billion. VMware will pay $1.05 billion in cash and $210 million in unvested equity awards.  The ubiquitous Frank Quattrone and his Quatalyst Partners, which reportedly had been hired previously to shop Brocade Communications, served as Nicira’s adviser.

Strategic Buy

VMware should have surprised no one when it emphasized that its acquisition of Nicira was a strategic move, likely to pay off in years to come, rather than one that will produce appreciable near-term revenue. As Reuters and the New York Times noted, VMware’s buy price for Nicira was 25 times the amount ($50 million) invested in the company by its financial backers, which include venture-capital firms Andreessen Horowitz, Lightspeed, and NEA. Diane Greene, co-founder and former CEO of VMware — replaced four years ago by Paul Maritz — had an “angel” stake in Nicira, as did Andy Rachleff, a former general partner at Benchmark Capital.

Despite its acquisition of Nicira, VMware says it’s not “at war” with Cisco. Technically, that’s correct. VMware and its parent company, EMC, will continue to do business with Cisco as they add meat to the bones of their data-center virtualization strategy. But the die was cast, and  Cisco should have known it. There were intimations previously that the relationship between Cisco and EMC had been infected by mutual suspicion, and VMware’s acquisition of Nicira adds to the fear and loathing. Will Cisco, as rumored, move into storage? How will Insieme, helmed by Cisco’s aging switching gods, deliver a rebuttal to VMware’s networking aspirations? It won’t be too long before the answers trickle out.

Still, for now, Cisco, EMC, and VMware will protest that it’s business as usual. In some ways, that will be true, but it will also be a type of strategic misdirection. The relationship between EMC and Cisco will not be the same as it was before yesterday’s news hit the wires. When these partners get together for meetings, candor could be conspicuous by its absence.

Acquisitive Roads Not Traveled

Some have posited that Cisco might have acquired Nicira if VMware had not beaten it to the punch. I don’t know about that. Perhaps Cisco might have bought Nicira if the asking price were low, enabling Cisco to effectively kill the startup and be done with it. But Cisco would not have paid $1.26 billion for a company whose approach to networking directly contradicts Cisco’s hardware-based business model and market dominance. One typically doesn’t pay that much to spike a company, though I suppose if the prospective buyer were concerned enough about a strategic technology shift and a major market inflection, it might do so. In this case, though, I suspect Cisco was blindsided by VMware. It just didn’t see this coming — at least not now, not at such an early stage of Nicira’s development.

Similarly, I didn’t see Microsoft or Citrix as buyers of Nicira. Microsoft is distracted by its cloud-service provider aspirations, and the $1.26 billion would have been too rich for Citrix.

IBM’s Moves and Cisco’s Overseas Cash Hoard

One company I had envisioned as a potential (though less likely) acquirer of Nicira was IBM, which already has a vSwitch. IBM might now settle for the SDN-controller technology available from Big Switch Networks. The two have been working together on IBM’s Open Data Center Interoperable Network (ODIN), and Big Switch’s technology fits well with IBM’s PureSystems and its top-down model of having application workloads command and control  virtualized infrastructure. As the second network-virtualization domino to fall, Big Switch likely will go for a lower price than did Nicira.

On Twitter, Dell’s Brad Hedlund asked whether Cisco would use its vast cash hoard to strike back with a bold acquisition of its own. Cisco has two problems here. First, I don’t see an acquisition that would effectively blunt VMware’s move. Second, about 90 percent of Cisco’s cash (more than $42 billion) is offshore, and CEO John Chambers doesn’t want to take a tax hit on its repatriation. He had been hoping for a “tax holiday” from the U.S. government, but that’s not going to happen in the middle of an election campaign, during a macroeconomic slump in which plenty of working Americans are struggling to make ends meet. That means a significant U.S.-based acquisition likely is off the table, unless the target company is very small or is willing to take Cisco stock instead of cash.

Cisco’s Innovator’s Dilemma

Oh, and there’s a third problem for Cisco, mentioned earlier in this prolix post. Cisco doesn’t want to embrace this SDN stuff. Cisco would rather resist it. The Cisco ONE announcement really was about Cisco’s take on network programmability, not about SDN-type virtualization in which overlay networks run atop an underlying physical network.

Cisco is caught in a classic innovator’s dilemma, held captive by the success it has enjoyed selling prodigious amounts of networking gear to its customers, and I don’t think it can extricate itself. It’s built a huge and massively successful business selling a hardware-based value proposition predicated on switches and routers. It has software, but it’s not really a software company.

For Cisco, the customer value, the proprietary hooks, are in its boxes. Its whole business model — which, again, has been tremendously successful — is based around that premise. The entire company is based around that business model. Cisco eventually will have to reinvent itself, as IBM did after it failed to adapt to client-server computing, but the day of reckoning hasn’t arrived.

On the Defensive

Expect Cisco to continue to talk about the northbound interface (which can provide intelligence from the switch) and about network programmability, but don’t expect networking’s big leopard to change its spots. Cisco will try to portray the situation differently, but it’s defending rather than attacking, trying to hold off the software-based marauders of infrastructure virtualization as long as possible. The doomsday clock on when they’ll arrive in Cisco data centers just moved up a few ticks with VMware’s acquisition of Nicira.

What about the other networking players? Sadly, HP hasn’t figured out what to do about SDN, even though OpenFlow is available on its former ProCurve switches. HP has a toe dipped in the SDN pool, but it doesn’t seem willing to take the initiative. Juniper, which previously displayed ingenuity in bringing forward QFabric, is scrambling for an answer. Brocade is pragmatically embracing hybrid control planes to maintain account presence and margins in the near- to intermediate-term.

Arista Networks, for its part, might be better positioned to compete on networking’s new playing field. Its CEO, Jayshree Ullal, had the following to say about yesterday’s news:

“It’s exciting to see the return of innovative networking companies and the appreciation for great talent/technology. Software Defined Networking (SDN) is indeed disrupting legacy vendors. As a key partner of VMware and co-innovator in VXLANs, we welcome the interoperability of Nicira and VMWare controllers with Arista EOS.”

Arista’s Options

What’s interesting here is that Arista, which invariably presents its Extensible OS (EOS) as “controller friendly,” earlier this year demonstrated interoperability with controllers from VMware, Big Switch Networks, and Nebula, which has built a cloud controller for OpenStack.

One of Nebula’s investors is Andy Bechtolsheim, whom knowledgeable observers will recognize as the chief development officer (CDO) of, and major investor in, Arista Networks. It is possible that Bechtolsheim sees a potential fit between the two companies — one building a cloud controller and one delivering cloud networking. To add fuel to this particular fire, which may or may not emit smoke, note that the Nebula cloud controller already features Arista technology, and that Nebula is hiring a senior network engineer, who ideally would have “experience with cloud infrastructure (OpenStack, AWS, etc. . . .) and familiarity with OpenFlow and Open vSwitch.”

Open or Closed?

Speaking of Open vSwitch, Matt Palmer at SDN Central will feel some vindication now that VMware has purchased a company whose engineering team has made significant contributions to the OVS code. Palmer doubtless will cast a wary eye on VMware’s intentions toward OVS, but both Steve Herrod, VMware’s CTO, and Martin Casado, Nicira’s CTO, have provided written assurances that their companies, now combining, will not retreat from commitments to OVS, OpenFlow, and Quantum, the OpenStack networking project.

Meanwhile, GigaOm’s Derrick Harris thinks it would be bad business for VMware to jilt the open-source community, particularly in relation to hypervisors, which “have to be treated as the workers that merely carry out the management layer’s commands. If all they’re there to do is create virtual machines that are part of a resource pool, the hypervisor shouldn’t really matter.”

This seems about right. In this brave new world of virtualized infrastructure, the ultimate value will reside in an intelligent management layer.

PS: I wrote this post under a slight fever and a throbbing headache, so I would not be surprised to discover belatedly that it contains at least a couple of typographical errors. Please accept my apologies in advance.

Cisco’s Storage Trap

Recent commentary from Barclays Capital analyst Jeff Kvaal has me wondering whether  Cisco might push into the storage market. In turn, I’ve begun to think about a strategic drift at Cisco that has been apparent for the last few years.

But let’s discuss Cisco and storage first, then consider the matter within a broader context.

Risks, Rewards, and Precedents

Obviously a move into storage would involve significant risks as well as potential rewards. Cisco would have to think carefully, as it presumably has done, about the likely consequences and implications of such a move. The stakes are high, and other parties — current competitors and partners alike — would not sit idly on their hands.

Then again, Cisco has been down this road before, when it chose to start selling servers rather than relying on boxes from partners, such as HP and Dell. Today, of course, Cisco partners with EMC and NetApp for storage gear. Citing the precedent of Cisco’s server incursion, one could make the case that Cisco might be tempted to call the same play.

After all, we’re entering a period of converged and virtualized infrastructure in the data center, where private and public clouds overlap and merge. In such a world, customers might wish to get well-integrated compute, networking, and storage infrastructure from a single vendor. That’s a premise already accepted at HP and Dell. Meanwhile, it seems increasingly likely that data-center infrastructure is coming together, in one way or another, in service of application workloads.

Limits to Growth?

Cisco also has a growth problem. Despite attempts at strategic diversification, including failed ventures in consumer markets (Flip, anyone?), Cisco still hasn’t found a top-line driver that can help it expand the business while supporting its traditional margins. Cisco has pounded the table perennially for videoconferencing and telepresence, but it’s not clear that Cisco will see as much benefit from the proliferation of video collaboration as once was assumed.

To complicate matters, storm clouds are appearing on the horizon, with Cisco’s core businesses of switching and routing threatened by the interrelated developments of service-provider alienation and software-defined networking (SDN). Cisco’s revenues aren’t about to fall off a cliff by any means, but neither are they on the cusp of a second-wind surge.

Such uncertain prospects must concern Cisco’s board of directors, its CEO John Chambers, and its institutional investors.

Suspicious Minds

In storage, Cisco currently has marriages of mutual convenience with EMC (VBlocks and the sometimes-strained VCE joint venture) and with NetApp (the FlexPod reference architecture). The lyrics of Mark James’ song “Suspicious Minds” are evocative of what’s transpiring between Cisco and these storage vendors. The problem is not only that Cisco is bigamous, but that the networking giant might have another arrangement in mind that leaves both partners jilted.

Neither EMC nor NetApp is oblivious to the danger, and each has taken care to reduce its strategic reliance on Cisco. Conversely, Cisco would be exposed to substantial risks if it were to abandon its existing partnerships in favor of a go-it-alone approach to storage.

I think that’s particularly true in the case of EMC, which is the majority owner of server-virtualization market leader VMware as well as a storage vendor. The corporate tandem of VMware and EMC carries considerable enterprise clout, and Cisco is likely to be understandably reluctant to see the duo become its adversaries.

Caught in a Trap

Still, Cisco has boxed itself into a strategic corner. It needs growth, it hasn’t been able to find it from diversification away from the data center, and it could easily see the potential of broadening its reach from networking and servers to storage. A few years ago, the logical choice might have been for Cisco to acquire EMC. Cisco had the market capitalization and the onshore cash to pull it off five years ago, perhaps even three years ago.

Since then, though, the companies’ market fortunes have diverged. EMC now has a market capitalization of about $54 billion, while Cisco’s is slightly more than $90 billion. Even if Cisco could find a way of repatriating its offshore cash hoard without taking a stiff hit from the U.S. taxman, it wouldn’t have the cash to pull off an acquisition of EMC, whose shareholders doubtless would be disinclined to accept Cisco stock as part of a proposed transaction.

Therefore, even if it wanted to do so, Cisco cannot acquire EMC. It might have been a good move at one time, but it isn’t practical now.

Losing Control

Even NetApp, with a market capitalization of more than $12.1 billion, would rate as the biggest purchase by far in Cisco’s storied history of acquisitions. Cisco could pull it off, but then it would have to try to further counter and commoditize VMware’s virtualization and cloud-management presence through a fervent embrace of something like OpenStack or a potential acquisition of Citrix. I don’t know whether Cisco is ready for either option.

Actually, I don’t see an easy exit from this dilemma for Cisco. It’s mired in somewhat beneficial but inherently limiting and mutually distrustful relationships with two major storage players. It would probably like to own storage just as it owns servers, so that it might offer a full-fledged converged infrastructure stack, but it has let the data-center grass grow under its feet. Just as it missed a beat and failed to harness virtualization and cloud as well as it might have done, it has stumbled similarly on storage.

The status quo is likely to prevail until something breaks. As we all know, however, making no decision effectively is a decision, and it carries consequences. Increasingly, and to an extent that is unprecedented, Cisco is losing control of its strategic destiny.

Reflecting on the Big Acquisition Cisco Didn’t Make

It has been nearly eight years since EMC acquired VMware. The acquisition announcement went over the newswires on December 15, 2003. EMC paid approximately $635 million for VMware, and Joe Tucci, EMC’s president and CEO, had this to say about the deal:

“Customers want help simplifying the management of their IT infrastructures. This is more than a storage challenge. Until now, server and storage virtualization have existed as disparate entities. Today, EMC is accelerating the convergence of these two worlds.”

“We’ve been working with the talented VMware team for some time now, and we understand why they are considered one of the hottest technology companies anywhere. With the resources and commitment of EMC behind VMware’s leading server virtualization technologies and the partnerships that help bring these technologies to market, we look forward to a prosperous future together.”

Virtualization Goldmine

Oh, the future was prosperous . . . and then some. It’s a deal that worked out hugely in EMC’s favor. Even though the storage behemoth has spun out VMware in the interim, allowing it to go public, EMC still retains more than 80 percent ownership of its virtualization goldmine.

Consider that EMC paid just $635 million in 2003 to buy the server-virtualization market leader. VMware’s current market capitalization is more than $38 billion. That means EMC’s stake in VMware is worth more than $30 billion, not including the gains it reaped when it took VMware public. I don’t think it’s hyperbolic to suggest that EMC’s purchase of VMware will be remembered as Tucci’s defining moment as EMC chieftain.

Now, let’s consider another vendor that had an opportunity to acquire VMware back in 2003.

Massive Market Cap, Industry Dominance

A few years earlier, at the pinnacle of the dot-com boom in March 2000, Cisco was the most valuable company in the world, sporting a market capitalization of more than US$500 billion.  It was a networking colossus that bestrode the globe, dominating its realm of the industry as much as any other technology company during any other period. (Its only peers in that regard were IBM in the mainframe era and Microsoft and Intel in the client-server epoch.)

Although Juniper Networks brought its first router to market in the fall of 1998 and began to challenge Cisco for routing patronage at many carriers early in the first decade of the new millennium, Cisco remained relatively unscathed in enterprise networking, where its Catalyst switches grew into a multibillion-dollar franchise after it saw off competitive challenges in the late 90s from companies such as 3Com, Cabletron, Nortel, and others.

As was its wont since its first acquisition, involving Crescendo Communications in 1993, Cisco remained an active buyer of technology companies. It bought companies to fortify its technology inorganically, and to preclude competitors from gaining footholds among its expanding installed base of customers.

Non-Buyer’s Remorse?

It’s true that the post-boom dot-com bust cooled Cisco’s acquisitive ardor. Nonetheless, the networking giant made nine acquisitions from May 2002 through to the end of 2003. The companies Cisco acquired in that span included Hammerhead Networks, Navarro Networks, AYR Networks, Andiamo Systems, Psionic Software, Okena, SignalWorks, Linksys, and Latitude Communications.

The biggest acquisition in that period involved spin-in play Andiamo Systems, which provided the technological foundation for Cisco’s subsequent push to dominate storage networking. Cisco was at risk of paying as much as $2.5 billion for Andiamo, but the actual price tag for that convoluted spin-in transaction was closer to $750 million by the time it finally closed in 2004. The next-biggest Cisco acquisition during that period involved home-networking vendor Linksys, for which Cisco paid about $500 million.

Cisco announced the acquisitions of Hammerhead Networks and Navarro Networks in a single press release. Hammerhead, for which Cisco exchanged common stock valued at up to $173 million, developed software that accelerated the delivery of IP-based billing, security, and QoS; the company was folded into the Cable Business Unit in Cisco’s Network Edge and Aggregation Routing Group. Navarro Networks, for which Cisco exchanged common stock valued at up to $85 million, designed ASIC components for Ethernet switching.

To acquire AYR Networks, a vendor of “high-performance distributed networking services and highly scalable routing software technologies,” Cisco parted with about $113 million in common stock. AYR’s technology was intended to augment Cisco’s IOS software.

Andiamo Factor

Although the facts probably are familiar to many readers, Cisco’s acquisition of Andiamo was noteworthy for several reasons. It was a spin-in acquisition, in which Cisco funded the company to go off and develop technology on its own, only later to be brought back in-house through acquisition. Andiamo was led by its CEO, Buck Gee, and it included a core group of engineers who also were at Crescendo Communications. The concept and execution of the spin-in move at Cisco was highly controversial within the company, seen as operationally and strategically innovative by many senior executives even though others claimed it engendered envy, invidiousness, and resentment among rank-and-file employees.

No matter: Andiamo was meant to provide market leadership for Cisco in IP-based storage networking and to give Cisco a means of battering Brocade in Fibre Channel. That plan hasn’t come to fruition, with Brocade still leading in a tenacious Fibre Channel market and Cisco banking on Fibre Channel over Ethernet (FCoE) to go from the edge to the core. (The future of storage networking, including the often entertaining Fibre Channel-versus-FCoE debates, is another matter, and not within the purview of this post.)

While we’re on the topic of Andiamo, its former engineers continue to make news. Just this week, former Andiamo engineers Dante Malagrinò and Marco Di Benedetto officially launched Embrane, a company that is committed to delivering a platform for virtualized L4-7 network services at large cloud service providers. Those two gentlemen also were involved in Cisco’s last big spin-in move, Nuova Systems, which provided the foundation for Cisco’s Unified Computing System (UCS).

As for Cisco’s post-Andiamo acquisition announcements in 2002, Okena and Psionic both were involved in intrusion-detection technology. Of the two, Okena represented the larger transaction, valued at about $154 million in stock.

Interestingly, not much is available publicly these days regarding Cisco’s announced acquisition of SignalWorks in March of 2003. If you visit the CrunchBase profile for SignalWorks and click on a link that is supposed to take you to a Cisco press release announcing the deal, you’ll get a “Not Found” message. A search of the Cisco website turns up two press releases — relating to financial results in Cisco’s third and fourth quarters of fiscal year 2003, respectively — that obliquely mention the SignalWorks acquisition. The purchase price of the IP-audio company was about $16 million. CNet also covered the acquisition when it first came to light.

Other Strategic Priorities

Cisco’s last announced acquisitions in that timeframe involved home-networking player Linksys, part of Cisco’s ultimately underachieving bid to become a major player in the consumer space, and web-conferencing vendor Latitude Communications.

And now we get to the crux of this post. Cisco announced a number of acquisitions in 2002 and 2003, but it was one it didn’t make that reverberates to this day. It was a watershed acquisition, a strategic masterstroke, but it was made by EMC, not by Cisco. As I said, the implications resound still, and probably will continue to ramify for years to come.

Some might contend that Cisco perhaps didn’t grasp the long-term significance of virtualization. Apparently, though, some at Cisco were clamoring for the company to buy VMware.  The missed opportunity wasn’t attributable to Cisco failing to see the importance of virtualization — some at Cisco had the prescience to see where the technology would lead — but because an acquisition of VMware wasn’t considered as high a priority as the spin-in of Andiamo for storage networking and the acquisition of Linksys for home networking.

Cisco placed its bets elsewhere, perhaps thinking that it had more time to develop a coherent and comprehensive strategy for virtualization. Then EMC made its move.

Missed the Big Chance

To this day, in my view, Cisco is paying an exorbitant opportunity cost for failing to take VMware off the market, leaving it for EMC and ultimately allowing the storage leader, years later, to gain the upper hand in the Virtual Computing Environment (VCE) Company joint venture that delivers UCS-encompassing VBlocks. There’s a rich irony there, too, when one considers that Cisco’s UCS contribution to the VBlock package is represented by technology derived from spin-in Nuova.

But forget about VCE and VBlocks. What about the bigger picture? Although Cisco likes to talk itself up as a leader in virtualization, it’s not nearly as prominent or dominant as it might have been. Is there anybody who would argue that Cisco, if it had acquired and then integrated and assimilated VMware half as well as it digested Crescendo, wouldn’t have absolutely thrashed all comers in converged data-center infrastructure and cloud infrastructure?

Cisco belatedly recognized its error of omission, but it was too late. By 2009, EMC was not interested in selling its majority stake in VMware to Cisco, and Cisco was in no position to try to obtain it through an acquisition of EMC. In that regard, Cisco’s position has only worsened.

Although EMC’s ownership stake in VMware amounts to about 80 percent (or perhaps even just north of that amount), it has 98 percent of the voting shares in the company, which effectively means EMC steers the ship, regardless of public pronouncements VMware executives might issue regarding their firm being an autonomous corporate entity.

Keeping Cisco Interested but Contained 

Conversely, Cisco owns approximately five percent of VMware’s Class A shares, but none of its Class B shares, and it held just one percent of voting power as of March 2011.  As of that same date, EMC owned all of VMware’s 330,000,000 Class B Shares and 33,066,050 of its 118,462,369 shares of Class A common shares. Cisco has a stake in VMware, but it’s a small one and it has it at the pleasure of EMC, whose objective is to keep Cisco sufficiently interested so as not to pursue other strategic options in data-center virtualization and cloud infrastructure.
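The share counts above are enough to reproduce the roughly 80 percent ownership and 98 percent voting figures. A quick sketch, assuming the common dual-class arrangement in which each Class B share carries ten votes (an assumption on my part, not a figure stated in the filings cited here):

```python
# Ownership arithmetic implied by the March 2011 VMware share counts cited above.
# The 10-votes-per-Class-B-share multiplier is assumed, not taken from the post.

CLASS_A_TOTAL = 118_462_369   # VMware Class A common shares outstanding
CLASS_B_TOTAL = 330_000_000   # VMware Class B shares, all held by EMC
EMC_CLASS_A   = 33_066_050    # EMC's portion of the Class A shares

VOTES_PER_B = 10  # assumed voting multiplier for Class B shares

# Economic stake: simple share count, both classes weighted equally.
economic_stake = (EMC_CLASS_A + CLASS_B_TOTAL) / (CLASS_A_TOTAL + CLASS_B_TOTAL)

# Voting power: Class B shares weighted by the assumed multiplier.
voting_power = (EMC_CLASS_A + CLASS_B_TOTAL * VOTES_PER_B) / (
    CLASS_A_TOTAL + CLASS_B_TOTAL * VOTES_PER_B)

print(f"EMC economic stake: {economic_stake:.1%}")   # just north of 80 percent
print(f"EMC voting power:   {voting_power:.1%}")     # roughly 98 percent
```

Under that assumed multiplier, the arithmetic lands at about 81 percent economic ownership and 97.5 percent of the votes, consistent with the figures quoted above.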

The EMC gambit has worked, up to a point. But Cisco, which missed its big chance in 2003, has been trying ever since to reassert its authority. Nuova, and all that flowed from it, was Cisco’s first attempt to regain lost ground, and now it is partnering, to varying degrees, with VMware and EMC competitors such as NetApp, Citrix, and Microsoft. It also has gotten involved with OpenStack and the oVirt Project in a bid to hedge its virtualization bets.

Yes, some of those moves are indicative of coopetition, and Cisco retains its occasionally strained VCE joint venture with EMC and VMware, but Cisco clearly is playing for time, looking for a way to redefine the rules of the game.

What Cisco is trying to do is break an impasse of its own making, a result of strategic choices it made nearly a decade ago.

Embrane Emerges from Stealth, Brings Heleos to Light

I had planned to write about something else today — and I still might get around to it — but then Embrane came out of stealth mode. I feel compelled to comment, partly because I have written about the company previously, but also because what Embrane is doing deserves notice.

Embrane’s Heleos

With regard to the aforementioned post, which dealt with Dell acquisition candidates in Layer 4-7 network services, I am now persuaded that Dell is more likely to pull the trigger on a deal for an A10 Networks, let’s say, than it is to take a more forward-looking leap at venture-funded Embrane. That’s because I now know about Embrane’s technology, product positioning, and strategic direction, and also because I strongly suspect that Dell is looking for a purchase that will provide more immediate payback within its installed base and current strategic orientation.

Still, let’s put Dell aside for now and focus exclusively on Embrane.

The company’s founders, former Andiamo-Cisco lads Dante Malagrinò and Marco Di Benedetto, have taken their company out of the shadows and into the light with their announcement of Heleos, which Embrane calls “the industry’s first distributed software platform for virtualizing layer 4-7 network services.” What that means, according to Embrane, is that cloud service providers (CSPs) and enterprises can use Heleos to build more agile networks to deliver cloud-based infrastructure as a service (IaaS). I can perhaps see the qualified utility of Heleos for the former, but I think the applicability and value for the latter constituency is more tenuous.

Three Wise Men

But I am getting ahead of myself, putting the proverbial cart before the horse. So let’s take a step back and consult some learned minds (including an “ethereal” one) on what Heleos is, how it works, what it does, and where and how it might confer value.

Since the Embrane announcement hit the newswires, I have read expositions on the company and its new product from The 451 Group’s Eric Hanselman, from rock-climbing Ivan Pepelnjak (technical director at NIL Data Communications), and from EtherealMind’s Greg Ferro. Each has provided valuable insight and analysis. If you’re interested in learning about Embrane and Heleos, I encourage you to read what they’ve written on the subject. (Only one of Hanselman’s two 451 Group pieces is available publicly online at no charge.)

Pepelnjak provides an exemplary technical description and overview of Heleos. He sets out the problem it’s trying to solve, considers the pros and cons of the alternative solutions (hardware appliances and virtual appliances), expertly explores Embrane’s architecture, examines use cases, and concludes with a tidy summary. He ultimately takes a positive view of Heleos, depicting Embrane’s architecture as “one of the best proposed solutions” he’s seen hitherto for scalable virtual appliances in public and private cloud environments.

Limited Upside

Ferro reaches a different conclusion, but not before setting the context and providing a compelling description of what Embrane does. After considering Heleos, Ferro ascertains that its management of IP flows equates to “flow balancing as a form of load balancing.” From all that I’ve read and heard, it seems an apt classification. He also notes that Embrane, while using flow management, is not an “OpenFlow/SDN business.” Although I see conceptual similarities between what Embrane is doing and what OpenFlow does, I agree with Ferro, if only because, as I understand it, OpenFlow reaches no higher than the network layer. I suppose the same is true for SDN, but this is where ambiguity enters the frame.

Even as I wrote this piece, there was a kerfuffle on Twitter as to whether or to what extent Embrane’s Heleos can be categorized as the latest manifestation of SDN. (Hours later, at post time, this vigorous exchange of views continues.)

That’s an interesting debate — and I’m sure it will continue — but I’m most intrigued by the business and market implications of what Embrane has delivered. On that score, Ferro sees Embrane’s platform play as having limited upside, restricted to large cloud-service providers with commensurately large data centers. He concludes there’s not much here for enterprises, a view with which I concur.

Competitive Considerations

Hanselman covers some of the same ground that Ferro and Pepelnjak traverse, but he also expends some effort examining the competitive landscape that Embrane is entering. Because Embrane is delivering a virtualization platform for network services, it will be up against Layer 4-7 stalwarts such as F5 Networks, A10 Networks, Riverbed/Zeus, Radware, Brocade, Citrix, and Cisco, among others. F5, the market leader, already recognizes and is acting upon some of the market and technology drivers that doubtless inspired the team that brought Heleos to fruition.

With that in mind, I wish to consider Embrane’s business prospects.

Embrane closed a Series B round of $18 million in August. It was led by New Enterprise Associates and included the involvement of Lightspeed Venture Partners and North Bridge Venture Partners, both of whom participated in a $9-million Series A round in March 2010.

To determine whether Embrane is a good horse to back (hmm, what’s with the horse metaphors today?), one has to consider the applicability of its technology to its addressable market — very large cloud-service providers — and then also project its likelihood of providing a solution that is preferable and superior to alternative approaches and competitors.

Counting the Caveats

While I tend to agree with those who believe Embrane will find favor with at least some large cloud-service providers, I wonder how much favor there is to find. There are three compelling caveats to Embrane’s commercial success:

  1. L4-7 network services, while vitally important to cloud service providers and large enterprises, represent a much smaller market than L2-L3 networking, virtualized or otherwise. As a benchmark, Dell’Oro reported earlier this year that the L2-3 Ethernet switch market would be worth approximately $25 billion in 2015, with the L4-7 application delivery controller (ADC) market expected to reach more than $1.5 billion, though the virtual-appliance segment is expected to show the most growth in that space. Some will say, accurately, that L4-7 network services are growing faster than L2-3 networking. Even so, the gap in size remains notable, which is why SDN and OpenFlow have been drawing so much attention in an increasingly virtualized and “cloudified” world.
  2. Embrane’s focus on large-scale cloud service providers, and not on enterprises (despite what’s stated in the press release), while rational and perfectly understandable, further circumscribes its addressable market.
  3. F5 Networks is a tough competitor, more agile and focused than a Cisco Systems, and will not easily concede customers or market share to a newcomer. Embrane might have to pick up scraps that fall to the floor rather than feasting at the head table. At this point, I don’t think F5 is concerned about Embrane, though that could change if Embrane can use NaviSite — its first customer, now owned by TimeWarner Cable — as a reference account and validator for further business among cloud service providers.

Notwithstanding those reservations, I look forward to seeing more of Embrane as we head into 2012. The company has brought a creative approach and an innovative platform architecture to market, a higher-layer counterpart and analog to what’s happening further down the stack with SDN and OpenFlow.

Exploring OpenStack’s SDN Connections

Pursuant to my last post, I will now examine the role of OpenStack in the context of software-defined networking (SDN). As you will recall, it was one of the alternative SDN enabling technologies mentioned in a recent article and sidebar at Network World.

First, though, I want to note that, contrary to the concerns I expressed in the preceding post, I wasn’t distracted by a shiny object before getting around to writing this installment. I feared I would be, but my powers of concentration and focus held sway. It’s a small victory, but I’ll take it.

Road to Quantum

Now, on to OpenStack, which I’ve written about previously, though admittedly not in the context of SDNs. As for how networking evolved into a distinct facet of OpenStack, Martin Casado, chief technology officer at Nicira, offers a thorough narrative at the Open vSwitch website.

Casado begins by explaining that OpenStack is a “cloud management system (CMS) that orchestrates compute, storage, and networking to provide a platform for building on demand services such as IaaS.” He notes that OpenStack’s primary components were OpenStack Compute (Nova), OpenStack Storage (Swift), and OpenStack Image Services (Glance), and he also provides an overview of their respective roles.

Then he asks, as one might, what about networking? At this point, I will quote directly from his Open vSwitch post:

“Noticeably absent from the list of major subcomponents within OpenStack is networking. The historical reason for this is that networking was originally designed as a part of Nova which supported two networking models:

● Flat Networking – A single global network for workloads hosted in an OpenStack Cloud.

● VLAN based Networking – A network segmentation mechanism that leverages existing VLAN technology to provide each OpenStack tenant, its own private network.

While these models have worked well thus far, and are very reasonable approaches to networking in the cloud, not treating networking as a first class citizen (like compute and storage) reduces the modularity of the architecture.”
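As a toy illustration of the second model (this is my own sketch, not Nova code), VLAN-based networking amounts to carving each tenant its own VLAN ID from a finite pool — which also hints at the scaling ceiling of the approach:

```python
# Toy illustration (not actual Nova code) of the VLAN-based model:
# each tenant gets its own VLAN ID from a fixed pool, giving per-tenant
# isolation at the cost of a hard cap on tenant count.
class VlanAllocator:
    def __init__(self, first=100, last=199):
        self.pool = list(range(first, last + 1))
        self.by_tenant = {}

    def vlan_for(self, tenant_id):
        if tenant_id not in self.by_tenant:
            if not self.pool:
                # The model's inherent scaling limit: VLAN IDs run out.
                raise RuntimeError("VLAN pool exhausted")
            self.by_tenant[tenant_id] = self.pool.pop(0)
        return self.by_tenant[tenant_id]

alloc = VlanAllocator()
print(alloc.vlan_for("tenant-a"))  # 100
print(alloc.vlan_for("tenant-b"))  # 101
print(alloc.vlan_for("tenant-a"))  # 100 again: same tenant, same segment
```

Reasonable as far as it goes, but as Casado argues, baking this logic into the compute service is exactly what kept networking from being a first-class citizen.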

As a result of Nova’s networking shortcomings, which Casado enumerates in detail, Quantum, a standalone networking component, was developed.

Network Connectivity as a Service

The OpenStack wiki defines Quantum as “an incubated OpenStack project to provide ‘network connectivity as a service’ between interface devices (e.g., vNICs) managed by other Openstack services (e.g., nova).” On that same wiki, Quantum is touted as being able to support advanced network topologies beyond the scope of Nova’s FlatManager or VLanManager; as enabling anyone to “build advanced network services (open and closed source) that plug into Openstack networks”; and as enabling new plugins (open and closed source) that introduce advanced network capabilities.

Okay, but how does it relate specifically to SDNs? That’s a good question, and James Urquhart has provided a clear and compelling answer, which later was summarized succinctly by Stuart Miniman at Wikibon. What Urquhart wrote actually connects the dots between OpenStack’s Quantum and OpenFlow-enabled SDNs. Here’s a salient excerpt:

“. . . . how does OpenFlow relate to Quantum? It’s simple, really. Quantum is an application-level abstraction of networking that relies on plug-in implementations to map the abstraction(s) to reality. OpenFlow-based networking systems are one possible mechanism to be used by a plug-in to deliver a Quantum abstraction.

OpenFlow itself does not provide a network abstraction; that takes software that implements the protocol. Quantum itself does not talk to switches directly; that takes additional software (in the form of a plug-in). Those software components may be one and the same, or a Quantum plug-in might talk to an OpenFlow-based controller software via an API (like the Open vSwitch API).”
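Urquhart’s division of labor can be sketched in a few lines (the class and method names below are invented for illustration and are not the actual Quantum API): a service layer that knows only an abstract plug-in interface, and a plug-in that maps the abstraction onto a concrete mechanism such as an OpenFlow controller:

```python
# Hypothetical illustration of Urquhart's point -- names are invented,
# not the real Quantum API. The service defines an abstraction;
# interchangeable plug-ins map it onto a concrete mechanism.
from abc import ABC, abstractmethod

class NetworkPlugin(ABC):
    @abstractmethod
    def create_network(self, tenant_id, name): ...

class OpenFlowPlugin(NetworkPlugin):
    """Maps the abstract network onto a controller-managed mechanism."""
    def __init__(self):
        self.networks = {}

    def create_network(self, tenant_id, name):
        net_id = f"{tenant_id}/{name}"
        # A real plug-in would call out to controller software here
        # (e.g., via an Open vSwitch API); we just record the mapping.
        self.networks[net_id] = {"tenant": tenant_id, "flows": []}
        return net_id

class QuantumLikeService:
    """The abstraction layer: talks only to the plug-in interface."""
    def __init__(self, plugin: NetworkPlugin):
        self.plugin = plugin

    def create_network(self, tenant_id, name):
        return self.plugin.create_network(tenant_id, name)

svc = QuantumLikeService(OpenFlowPlugin())
print(svc.create_network("tenant-a", "web-net"))  # tenant-a/web-net
```

Swap in a different `NetworkPlugin` subclass and the service layer is untouched — which is precisely why OpenFlow is one possible mechanism behind Quantum, not a requirement.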

Cisco’s Contribution

So, that addresses the complementary functionality of OpenStack’s Quantum and OpenFlow, but, as Urquhart noted, OpenFlow is just one mechanism that can be used by a plug-in to deliver a Quantum abstraction. Further to that point, bear in mind that Quantum, as recounted on the OpenStack wiki, can be used  to “build advanced network services (open and closed source) that plug into OpenStack networks” and to facilitate new plugins that introduce advanced network capabilities.

Consequently, when it comes to using OpenStack in SDNs, OpenFlow isn’t the only complementary option available. In fact, Cisco is in on the action, using Quantum to “develop API extensions and plug-in drivers for creating virtual network segments on top of Cisco NX-OS and UCS.”

Cisco portrays itself as a major contributor to OpenStack’s Quantum, and the evidence seems to support that assertion. Cisco also has indicated qualified support for OpenFlow, so there’s a chance OpenStack and OpenFlow might intersect on a Cisco roadmap. That said, Cisco’s initial OpenStack-related networking forays relate to its proprietary technologies and existing products.

Citrix, Nicira, Rackspace . . . and Midokura

Other companies have made contributions to OpenStack’s Quantum, too. In a post at Network World, Alan Shimel of The CISO Group cites the involvement of Nicira, Cisco, Citrix, Midokura, and Rackspace. From what Nicira’s Casado has written and said publicly, we know that OpenFlow is in the mix there. It seems to be in the picture at Rackspace, too. Citrix has posted blog posts about Quantum, including this one, but I’m not sure where they’re going with it, though XenServer, Open vSwitch, and, yes, OpenFlow are likely to be involved.

Finally, we have Midokura, a Japanese company that has a relatively low profile, at least on this side of the Pacific Ocean. According to its website, it was established early in 2010, and it had just 12 employees at the end of April 2011.

If my currency-conversion calculations (from Japanese yen) are correct, Midokura also had about $1.5 million in capital as of that date. Earlier that same month, the company announced seed funding of about $1.3 million. Investors were Bit-Isle, a Japanese data-center company; NTT Investment Partners, an investment vehicle of Nippon Telegraph & Telephone Corp. (NTT); 1st Holdings, a Japanese ISV that specializes in tools and middleware; and various individual investors, including Allen Miner, CEO of SunBridge Corporation.

On its website, Midokura provides an overview of its MidoNet network-virtualization platform, which is billed as providing a solution to the problem of inflexible and expensive large-scale physical networks that tend to lock service providers into a single vendor.

Virtual Network Model in Cloud Stack

In an article published at TechCrunch this spring, at about the time Midokura announced its seed round, the company claimed to be the only one to have “a true virtual network model” in a cloud stack. The TechCrunch piece also said the MidoNet platform could be integrated “into existing products, as a standalone solution, via a NaaS model, or through Midostack, Midokura’s own cloud (IaaS/EC2) distribution of OpenStack (basically the delivery mechanism for Midonet and the company’s main product).”

Although the company was accepting beta customers last spring, it hasn’t updated its corporate blog since December 2010. Its “Events” page, however, shows signs of life, with Midokura indicating that it will be attending or participating in the grand opening of Rackspace’s San Francisco office on December 1.

Perhaps we’ll get an update then on Midokura’s progress.

Rackspace’s Bridge Between Clouds

OpenStack has generated plenty of sound and fury during the past several months, and, with sincere apologies to William Shakespeare, there’s evidence to suggest the frenzied activity actually signifies something.

Precisely what it signifies, and how important OpenStack might become, is open to debate, of which there has been no shortage. OpenStack is generally depicted as an open-source cloud operating system, but that might be a generous interpretation. On the OpenStack website, the following definition is offered:

“OpenStack is a global collaboration of developers and cloud computing technologists producing the ubiquitous open source cloud computing platform for public and private clouds. The project aims to deliver solutions for all types of clouds by being simple to implement, massively scalable, and feature rich. The technology consists of a series of interrelated projects delivering various components for a cloud infrastructure solution.”

Just for fun and giggles (yes, that phrase has been modified so as not to offend reader sensibilities), let’s parse that passage, shall we? In the words of the OpenStackers themselves, their project is an open-source cloud-computing platform for public and private clouds, and it is reputedly simple to implement, massively scalable, and feature rich. Notably, it consists of a “series of interrelated projects delivering various components for a cloud infrastructure solution.”

Simple for Some

Given that description, especially the latter reference to interrelated projects spawning various components, one might wonder exactly how “simple” OpenStack is to implement and by whom. That’s a question others have raised, including David Linthicum in a recent piece at InfoWorld. In that article, Linthicum notes that undeniable vendor enthusiasm and burgeoning market momentum accrue to OpenStack — the community now has 138 member companies (and counting), including big-name players such as HP, Dell, Intel, Rackspace, Cisco, Citrix, Brocade, and others — but he also offers the following caveat:

“So should you consider OpenStack as your cloud computing solution? Not on its own. Like many open source projects, it takes a savvy software vendor, or perhaps cloud provider, to create a workable product based on OpenStack. The good news is that many providers are indeed using OpenStack as a foundation for their products, and most of those are working, or will work, just fine.”

Creating Value-Added Services

Meanwhile, taking issue with a recent InfoWorld commentary by Savio Rodrigues — who argued that OpenStack will falter while its open-source counterpart Eucalyptus will thrive — James Urquhart, formerly of Cisco and now VP of product strategy at enStratus, made this observation:

“OpenStack is not a cloud service, per se, but infrastructure automation tuned to cloud-model services, like IaaS, PaaS and SaaS. Install OpenStack, and you don’t get a system that can instantly bill customers, provide a service catalog, etc. That takes additional software.

What OpenStack represents is the commodity element of cloud services: the VM, object, server image and networking management components. Yeah, there is a dashboard to interact with those commodity elements, but it is not a value-add capability in-and-of itself.

What HP, Dell, Cisco, Citrix, Piston, Nebula and others are doing with OpenStack is creating value-add services on top (or within) the commodity automation. Some focus more on “being commodity”, others focus more on value-add, but they are all building on top of the core OpenStack projects.”

New Revenue Stream for Rackspace

All of which brings us, in an admittedly roundabout fashion, to Rackspace’s recent announcement of its Rackspace Cloud Private Edition, a packaged version of OpenStack components that can be used by enterprises for private-cloud deployments. This move makes sense for Rackspace on a couple of levels.

First off, it opens up a new revenue stream for the company. While Rackspace won’t try to make money on the OpenStack software or the reference designs — featuring a strong initial emphasis on Dell servers and Cisco networking gear for now, though bare-bones OpenCompute servers are likely to be embraced before long — it will provide value-add, revenue-generating managed services to customers of Rackspace Cloud Private Edition. These managed services will comprise installation of OpenStack updates, analysis of system issues, and assistance with specific questions relating to systems engineering. Some security-related services also will be offered. While the reference architecture and the software are available now, Rackspace’s managed services won’t be available until January.

Building a Bridge

The launch of Rackspace Cloud Private Edition is a diversification initiative for Rackspace, which hitherto has made its money by hosting and managing applications and computing services for others in its own data centers. The OpenStack bundle takes it into the realm of providing managed services in its customers’ data centers.

As mentioned above, this represents a new revenue stream for Rackspace, but it also provides a technological bridge that will allow customers who aren’t ready for multi-tenant public cloud services today to make an easy transition to Rackspace’s data centers at some future date. It’s a smart move, preventing prospective customers from moving to another platform for private cloud deployment, ensuring in the process that said customers don’t enter the orbit of another vendor’s long-term gravitational pull.

The business logic coheres. For each customer engagement, Rackspace gets a payoff today, and potentially a larger one at a later date.

Assessing Dell’s Layer 4-7 Options

As it continues to integrate and assimilate its acquisition of Force10 Networks, Dell is thinking about its next networking move.

Based on what has been said recently by Dario Zamarian, Dell’s GM and SVP of networking, the company definitely will be making that move soon. In an article covering Dell’s transition from box pusher to data-center and cloud contender, Zamarian told Fritz Nelson of InformationWeek that “Dell needs to offer Layer 4 and Layer 7 network services, citing security, load balancing, and overall orchestration as its areas of emphasis.”

Zamarian didn’t say whether the move into Layer 4-7 network services would occur through acquisition, internal development, or partnership. However, as I invoke deductive reasoning that would make Sherlock Holmes green with envy (or not), I think it’s safe to conclude an acquisition is the most likely route.

F5 Connection

Why? Well, Dell already has partnerships that cover Layer 4-7 services. F5 Networks, the leader in application-delivery controllers (ADCs), is a significant Dell partner in the Layer 4-7 sphere. Dell and F5 have partnered for 10 years, and Dell bills itself as the largest reseller of F5 solutions. If you consider what Zamarian described as Dell’s next networking priority, F5 certainly fits the bill.

There’s one problem. F5 probably isn’t selling at any price Dell would be willing to pay.  As of today, F5 has a market capitalization of more than $8.5 billion. Dell has the cash, about $16 billion and counting, to buy F5 at a premium, but it’s unlikely Dell would be willing to fork over more than $11 billion — which, presuming mutual interest, might be F5’s absolute minimum asking price — to close the deal. Besides, observers have been thinking F5 would be acquired since before the Internet bubble of 2000 burst. It’s not likely to happen this time either.

Dell could see whether one of its other partners, Citrix, is willing to sell its NetScaler business. I’m not sure that’s likely to happen, though. I definitely can’t envision Dell buying Citrix outright. Citrix’s market cap, at more than $13.7 billion, is too high, and there are pieces of the business Dell probably wouldn’t want to own.

Shopping Not Far From Home?

Who else is in the mix? Radware is an F5 competitor that Dell might consider, but I don’t see that happening. Dell’s networking group is based in the Bay Area, and I think they’ll be looking for something closer to home, easier to integrate.

That brings us to F5 rival A10 Networks. Force10 Networks, which Dell now owns, had a partnership with A10, and there’s a possibility Dell might inherit and expand upon that relationship.

Then again, maybe not. Generally, A10 is seen as a purveyor of cost-effective ADCs. It is not typically perceived as an innovator and trailblazer, and it isn’t thought to have the best solutions for complex enterprise or data-center environments, exactly the areas where Dell wants to press its advantage. It’s also worth bearing in mind that A10 has been involved in exchanges of not-so-friendly litigious fire — yes, lawsuits volleyed back and forth furiously — with F5 and others.

All in all, A10 doesn’t seem a perfect fit for Dell’s needs, though the price might be right.

Something Programmable 

Another candidate, one that’s quite intriguing in many respects, is Embrane. The company is bringing programmable network services, delivered on commodity x86 servers, to the upper layers of the stack, addressing many of the areas in which Zamarian expressed interest. Embrane is focusing on virtualized data centers where Dell wants to be a player, but initially its appeal will be with service providers rather than with enterprises.

In an article written by Stacey Higginbotham and published at GigaOM this summer, Embrane CEO Dante Malagrinò explained that his company’s technology would enable hosting companies to provide virtualized services at Layers 4 through 7, including load balancing, firewalls, and virtual private networking (VPN), among others.

Some of you might see similarities between what Embrane is offering and OpenFlow-enabled software-defined networking (SDN). Indeed, there are similarities, but, as Embrane points out, OpenFlow promises network virtualization and programmability at Layers 2 and 3 of the stack, not at Layers 4 through 7.

Higher-Layer Complement to OpenFlow

Dell, as we know, has talked extensively about the potential of OpenFlow to deliver operational cost savings and innovative services to data centers at service providers and enterprises. One could see what Embrane does as a higher-layer complement to OpenFlow’s network programmability. Both technologies take intelligence away from specialized networking gear and place it at the edge of the network, running in software on industry-standard hardware.

Interestingly, there aren’t many degrees of separation between the principals at Embrane and Dell’s Zamarian. It doesn’t take much sleuthing to learn that Zamarian knows both Malagrinò and Marco Di Benedetto, Embrane’s CTO. They worked together at Cisco Systems. Moreover, Zamarian and Malagrinò both studied at the Politecnico di Torino, though a decade or so apart.  Zamarian also has connections to Embrane board members.

Play an Old Game, Or Define a New One

In and of themselves, those connections don’t mean anything. Dell would have to see value in what Embrane offers, and Embrane and its backers would have to want to sell. The company announced in August that it had closed an $18-million Series-B financing round, led by New Enterprise Associates (NEA). Lightspeed Venture Partners and North Bridge Venture Partners also took part in the round, which followed their initial lead investments in the company’s $9-million Series-A funding.

Embrane’s product has been in beta, but the company planned a commercial launch before the end of this year. Its blog has been quiet since August.

I would be surprised to see Dell acquire F5, and I don’t think Citrix will part with NetScaler. If Dell is thinking about plugging L4-7 holes cost-effectively, it might opt for an acquisition of A10, but, if it’s thinking more ambitiously — if it really is transforming itself into a solutions provider for cloud providers and data centers — then it might reach for something with the potential to establish a new game rather than play at an old one.

OVA Members Hope to Close Ground

I discussed the fast-growing Open Virtualization Alliance (OVA) in a recent post about its primary objective, which is to commoditize VMware’s daunting market advantage. In catching up on my reading, I came across an excellent piece by InformationWeek’s Charles Babcock that puts the emergence of OVA into historical perspective.

As Babcock writes, the KVM-centric OVA might not have come into existence at all if an earlier alliance supporting another open-source hypervisor hadn’t foundered first. Quoting Babcock regarding OVA’s vanguard members:

Hewlett-Packard, IBM, Intel, AMD, Red Hat, SUSE, BMC, and CA Technologies are examples of the muscle supporting the alliance. As a matter of fact, the first five used to be big backers of the open source Xen hypervisor and Xen development project. Throw in the fact Novell was an early backer of Xen as the owner of SUSE, and you have six of the same suspects. What happened to support for Xen? For one, the company behind the project, XenSource, got acquired by Citrix. That took Xen out of the strictly open source camp and moved it several steps closer to the Microsoft camp, since Citrix and Microsoft have been close partners for over 20 years.

Xen is still open source code, but its backers found reasons (faster than you can say vMotion) to move on. The Open Virtualization Alliance still shares one thing in common with the Xen open source project. Both groups wish to slow VMware’s rapid advance.

Wary Eyes

Indeed, that is the goal. Most of the industry, with the notable exception of VMware’s parent EMC, is casting a wary eye at the virtualization juggernaut, wondering how far and wide its ambitions will extend and how they will impact the market.

As Babcock points out, however, by moving in mid race from one hypervisor horse (Xen) to another (KVM), the big backers of open-source virtualization might have surrendered insurmountable ground to VMware, and perhaps even to Microsoft. Much will depend on whether VMware abuses its market dominance, and whether Microsoft is successful with its mid-market virtualization push into its still-considerable Windows installed base.

Long Way to Go

Last but perhaps not least, KVM and the Open Virtualization Alliance (OVA) will have a say in the outcome. If OVA members wish to succeed, they’ll not only have to work exceptionally hard, but they’ll also have to work closely together.

Coming from behind is never easy, and, as Babcock contends, just trying to ride Linux’s coattails will not be enough. KVM will have to continue to define its own value proposition, and it will need all the marketing and technological support its marquee backers can deliver. One area of particular importance is operations management in the data center.

KVM’s market share, as reported by Gartner earlier this year, was less than one percent in server virtualization. It has a long way to go before it causes VMware’s executives any sleepless nights. That it wasn’t the first choice of its proponents, and that it has lost so much time and ground, doesn’t help the cause.

OVA Aims to Commoditize VMware’s Advantage

Although it’s no threat to VMware yet, the growth of the Open Virtualization Alliance (OVA) has been impressive. Formally announced in May, the OVA has grown from its original seven founding members — its four Governing Members (Red Hat, Intel, HP, and IBM), plus BMC, Eucalyptus Systems, and Novell (SUSE) — expanding with the addition of 65 new members in June, and finally encompassing more than 200 members as of yesterday.

The overriding objective of the OVA is to popularize the open-source Kernel-based Virtual Machine (KVM) so that it can become a viable alternative to proprietary server-virtualization offerings, namely market leader VMware.  To achieve that goal, OVA is counting on broad-based industry support from large and small players alike as it works to accelerate the development of an ecosystem of KVM-based third-party solutions. In conjunction with that effort, OVA also is encouraging interoperability, promoting best practices, spotlighting customer successes, and generally raising awareness of KVM through marketing events and initiatives.

Give the People What They Want 

While VMware isn’t breaking out in a cold sweat or losing sleep over OVA, it’s clear that many members of OVA are anxious about the potential stranglehold VMware could gain in cloud infrastructure if its virtualization hegemony goes unchecked. In that regard, it’s notable that certain VMware partners — IBM and HP among them — are at the forefront of OVA.

If customers are demanding VMware, as they clearly have been doing, then that’s what IBM and HP will give them. It’s good business practice for service-based solution providers to give customers what they want. But circumstances can change — customers might be persuaded to accept alternatives to VMware — and IBM and HP probably wouldn’t mind if they did.

Certainly VMware recognizes that its partners also can be its competitors. There’s even a well-worn industry phrase for it: coopetition. At the same time, though, IBM and HP would welcome customer demand for an open-source alternative to VMware, which explains their avidity for and evangelization of KVM.

Client-Server Reprise?

An early lead in a strategic market can result in long-term industry dominance. That’s what VMware wants to achieve, and it’s what nearly everybody else — excluding VMware’s majority shareholder, EMC — would like to prevent. Industry giants IBM and HP have seen this script play out in the client-server era with Microsoft’s Windows, and they’re not keen to relive the experience in cloud computing.

VMware’s customer appeal and market differentiation derive from its dominance in server virtualization, a foundation that allows it to extend up and out into areas that could give it a stranglehold on cloud computing’s most valuable technologies. Nearly every vendor with a stake in the data center is keeping a wary eye on VMware. Some, such as Microsoft and Oracle, are outright competitors seeking to cut into VMware’s market lead, while others — such as HP, IBM, and Cisco — are partnering pragmatically with VMware while pursuing strategic alternatives and contingency plans.

Commoditizing Competitor’s Edge

In promoting an open-source alternative as a means of undercutting a competitor’s competitive advantage, IBM and its OVA cohorts are taking a page from a well-worn strategic handbook. This is what Google unleashed against Apple in mobile operating systems with Android, and what Facebook is trying to achieve against Google in cloud data centers with its Open Compute Project. For OVA’s charter members, it’s all about attempting to commoditize a market leader’s competitive differentiation to level the playing field — and perhaps to eventually tilt it to your advantage.

IBM and HP have integration prowess and professional-services capabilities that VMware lacks. If they can nullify virtualization as a strategic asset by commoditizing it, they relegate VMware to a lesser role. However, if they fail and VMware’s differentiation is maintained and extended further, they risk losing a great deal of long-term account control in a burgeoning market.

KVM Rather than XenServer

Some might wonder why the open-source server virtualization alternative became KVM and not, say, XenServer, whose custodian, XenSource, is owned by Citrix. One of the reasons could be Citrix’s relatively warm embrace by Microsoft. When Gartner released its Magic Quadrant for x86 Server Virtualization Infrastructure this summer, it questioned whether Citrix’s ties to Microsoft could result in XenServer being compromised. Microsoft, of course, has its own server-virtualization entry in Hyper-V.

In the end, the OVA gang put down its money on KVM rather than XenServer, seeing the former as a less-complicated proposition than the latter. That appears to have been the right move.

Clearly OVA has experienced striking growth in just a few months, but it has a long way to go before it meets the strategic mandate envisioned by its founders.

Will Cisco Leave VCE Marriage of Convenience?

Because I am in a generous mood, I will use this post to provide heaping helpings of rumor and speculation, a pairing that can lead to nowhere or to valuable insights. Unfortunately, the tandem usually takes us to the former more than the latter, but let’s see whether we can beat the odds.

The topic today is the Virtual Computing Environment (VCE) Company, a joint venture formed by Cisco and EMC, with investments from VMware and Intel.  VCE is intended to accelerate the adoption of converged infrastructure, reducing customer costs related to IT deployment and management while also expediting customers’ time to revenue.

VCE provides fully assembled and tested Vblocks: integrated platforms that combine Cisco’s UCS servers and Nexus switches, EMC’s storage, and VMware’s virtualization. Integration services and management software are provided by VCE, which considers the orchestration layer its pièce de résistance.

VCE Layoffs?

As a company, VCE was formed at the beginning of this year. Before then, it existed as a “coalition” of vendors providing reference architectures in conjunction with a professional-services operation called Acadia. Wikibon’s Stuart Miniman provided a commendable summary of the evolution of VCE in January.

If you look at official pronouncements from EMC and — to a lesser extent — Cisco, you might think that all is well behind the corporate facade of VCE. After all, sales are up, the business continues to ramp, the value proposition is cogent, and the dour macroeconomic picture would seem to argue for further adoption of solutions, such as VCE, that have the potential to deliver reductions in capital and operating expenditures.

What, then, are we to make of rumored layoffs at VCE? Nobody from Cisco or EMC has confirmed the rumors, but the scuttlebutt has been coming steadily enough to suggest that there’s fire behind the smoke. If there’s substance to the rumors, what might have started the fire?

Second Thoughts for Cisco?

Well, now that I’ve given you the rumor, I’ll give you some speculation. It could be — and you’ll notice that I’ve already qualified my position — that Cisco is having second thoughts about VCE. EMC contributes more than Cisco does to VCE and its ownership stake is commensurately greater, as Miniman explains in a post today at Wikibon:

 “According to company 10Q forms, Cisco (May ’11) owns approximately 35% outstanding equity of VCE with $100M invested and EMC (Aug ’11) owns approximately 58% outstanding equity of VCE with $173.5M invested. The companies are not disclosing revenue of the venture, except that it passed $100M in revenue in about 6 months and as of December 2010 had 65 “major customers” and was growing that number rapidly. In July 2011, EMC reported that VCE YTD revenue had surpassed all of 2010 revenue and CEO Joe Tucci stated that the companies “expect Vblock sales to hit the $1 billion run rate mark in a next several quarters.” EMC sees the VCE investment as strategic to increasing its importance (and revenue) in a changing IT landscape.”

Indeed, I agree that EMC views its VCE investment through a strategic prism. What I wonder about is Cisco’s long-term commitment to VCE.

Marriage of Convenience

There already have been rumblings that Cisco isn’t pleased with its cut of VCE profits. In this context, it’s important to remember how VCE is structured. The revenue it generates flows directly to its parent companies; it doesn’t keep any of it.  Thus, VCE is built purely as a convenient integration and delivery vehicle, not as a standalone business that will pursue its own exit strategy.

Relationships of convenience, such as the one that spawned VCE, often do not prove particularly durable. As long as the interests of the constituent partners remain aligned, VCE will remain unchanged. If interests diverge, though, as they might be doing now, all bets are off. When the convenient becomes inconvenient for one or more of the partners, it’s over.

It’s telling that Cisco is playing second fiddle to EMC in VCE. In its glory days, Cisco didn’t play second fiddle to anybody.

In the not-too-distant past, Cisco CEO John Chambers had the run of the corporate house. Nobody questioned his strategic acuity, and he and his team were allowed to do as they pleased. Since then, the composition of his team has changed — many of Cisco’s top executives of just a few short years ago are no longer with the company — and several notable investors and analysts, and perhaps one or two board members, have begun to wonder whether Chambers can author the prescription that will cure Cisco’s ills. Doubts creep into the minds of investors after a decade of stock stagnancy, reduced growth horizons, a failed foray into consumer markets, and slow but steady market-share erosion.

Alternatives to Playing Second Fiddle

Meanwhile, Cisco has another storage partner, NetApp. The two companies also have combined to deliver converged infrastructure. Cisco says there is little channel conflict between VCE’s Vblocks and NetApp’s FlexPods, and that both serve to expand Cisco’s UCS footprint.

That’s likely true. It’s also likely that Cisco will never control VCE. EMC holds the upper hand now, and that probably won’t change.

Once upon a time, Cisco might have been able to change that dynamic. Back then, it could have acquired EMC. Now, though? I wouldn’t bet on it. EMC’s market capitalization is up to nearly $48 billion and Cisco’s stands at less than $88 billion. Even if Cisco repatriated all of its offshore cash hoard, that money still wouldn’t be enough to buy EMC. In fact, when one considers the premium that would have to be paid in such a deal, Cisco would fall well short of the mark. It would have to do a cash-and-stock deal, and that would go over like the Hindenburg with EMC shareholders.

So, if Cisco is to get more profit from sales of converged infrastructure, it has to explore other options. NetApp is definitely one, and some logic behind a potential acquisition was explored earlier this year in a piece by Derrick Harris at GigaOm. In that post, Harris also posited that Cisco might consider an acquisition of Citrix, primarily for its virtualization technologies. If Cisco acquired NetApp and Citrix, it would be able to offer a complete converged-infrastructure stack without the assistance of EMC or its majority-owned VMware. It’s just the sort of bold move that might put Chambers back in the good graces of investors and analysts.

Irreconcilable Differences 

Could it be done? The math seems plausible. Before it announced its latest quarterly results, Cisco had $43.4 billion in cash, 89 percent of which was held overseas. Supposing that Cisco could repatriate its foreign cash hoard without taking too much of a tax hit — Cisco and others are campaigning hard for a repatriation tax holiday — it would be in position to make all-cash acquisitions of Citrix (with an $11.5 billion market capitalization) and NetApp (with a $16.4 billion market capitalization). Even with premiums factored into the equation, the deals could be done overwhelmingly, if not exclusively, with cash.
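A quick back-of-envelope check bears the math out. The cash and market-cap figures below come from the post itself; the 25 percent takeover premium is my own illustrative assumption, not a number from any source:

```python
# Back-of-envelope check of the Citrix + NetApp acquisition math.
# Market caps and cash figures are from the post; the takeover premium
# is a hypothetical assumption for illustration.
cisco_cash = 43.4        # total cash, in $B
overseas_share = 0.89    # fraction of that cash held offshore

citrix_cap = 11.5        # market capitalization, $B
netapp_cap = 16.4        # market capitalization, $B
premium = 0.25           # assumed takeover premium (hypothetical)

deal_cost = (citrix_cap + netapp_cap) * (1 + premium)
overseas_cash = cisco_cash * overseas_share

print(f"Combined cost with {premium:.0%} premium: ${deal_cost:.1f}B")
print(f"Total cash: ${cisco_cash:.1f}B (${overseas_cash:.1f}B of it overseas)")
print(f"All-cash deal feasible: {deal_cost <= cisco_cash}")
```

Even at that premium, the combined price tag of roughly $34.9 billion sits comfortably inside the $43.4 billion cash pile, which is why the scenario reads as plausible on paper.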

I know the above scenario is not without risk to Cisco. But I also know that the status quo isn’t going to get Cisco where it needs to be in converged infrastructure. Something has to give. The VCE marriage of convenience could be destined to founder on the rocks of irreconcilable differences.

ONF Board Members Call OpenFlow Tune

The concept of software-defined networking (SDN) has generated considerable interest during the last several months.  Although SDNs can be realized in more than one way, the OpenFlow protocol seems to have drawn a critical mass of prospective customers (mainly cloud-service providers with vast data centers) and solicitous vendors.

If you aren’t up to speed with the basics of software-defined networking and OpenFlow, I suggest you visit the Open Networking Foundation (ONF) and OpenFlow websites to familiarize yourself with the underlying ideas. Others have written excellent articles on the technology, its perceived value, and its potential implications.

Concisely Defined

In a recent piece he wrote originally for GigaOm, Kyle Forster of Big Switch Networks offers this concise definition:

“At its most basic level, OpenFlow is a protocol for server software (a “controller”) to send instructions to OpenFlow-enabled switches, where these instructions give direct control over how those switches forward traffic through the network.

“I think of OpenFlow like an x86 instruction set for the network – it’s low-level, but it’s very powerful. Continuing that analogy, if you read the x86 instruction set for the first time, you might walk away thinking it could be useful if you need to build a fancy calculator, but using it to build Linux, Apache, Microsoft Word or World of Warcraft wouldn’t exactly be obvious. Ditto for OpenFlow. It isn’t the protocol that is interesting by itself, but rather all of the layers of software that are starting to emerge on top of it, similar to the emergence of operating systems, development environments, middleware and applications on top of x86.”
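The controller-to-switch split Forster describes can be caricatured in a few lines of Python. This is a toy abstraction of the idea — rules pushed down from server software, a switch that merely matches and forwards — not the actual OpenFlow wire protocol or any real controller’s API:

```python
# Toy model of the OpenFlow idea: the switch holds a flow table but makes
# no forwarding decisions of its own; a "controller" (server software)
# installs match/action rules. Purely illustrative -- not a real OpenFlow API.
class Switch:
    def __init__(self):
        self.flow_table = []  # (match_fn, action) rules, in priority order

    def install_rule(self, match_fn, action):
        # In real OpenFlow this would arrive as a flow-modification
        # message sent by the controller over the control channel.
        self.flow_table.append((match_fn, action))

    def forward(self, packet):
        for match_fn, action in self.flow_table:
            if match_fn(packet):
                return action
        return "send-to-controller"  # table miss: punt to the controller

# The "controller" here is just code pushing rules down to the switch.
switch = Switch()
switch.install_rule(lambda p: p["dst"] == "10.0.0.2", "out-port-2")
switch.install_rule(lambda p: p["dst"].startswith("10.0."), "out-port-1")

print(switch.forward({"dst": "10.0.0.2"}))      # -> out-port-2
print(switch.forward({"dst": "10.0.9.9"}))      # -> out-port-1
print(switch.forward({"dst": "192.168.1.1"}))   # -> send-to-controller
```

The interesting part, as Forster says, isn’t this low-level match/action mechanism itself but the layers of software that can be built on top of a network that is programmable in this way.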

Increased Network Functionality, Lower Network Operating Costs

The Open Networking Foundation’s charter summarizes its objectives and the value proposition that advocates of SDN and OpenFlow believe they can deliver:

 “The Open Networking Foundation is a nonprofit organization dedicated to promoting a new approach to networking called Software-Defined Networking (SDN). SDN allows owners and operators of networks to control and manage their networks to best serve their users’ needs. ONF’s first priority is to develop and use the OpenFlow protocol. Through simplified hardware and network management, OpenFlow seeks to increase network functionality while lowering the cost associated with operating networks.”

That last part is the key to understanding the composition of ONF’s board of directors, which includes Deutsche Telekom, Facebook, Google, Microsoft, Verizon, and Yahoo. All of these companies are major cloud-service providers with multiple, sizable data centers. (Yes, Microsoft also is a cloud-technology purveyor, but what it has in common with the other board members is its status as a cloud-service provider that owns and runs data centers.)

Underneath the board of directors are member companies. Most of these are vendors seeking to serve the needs of the ONF board members and similar cloud-service providers that share their business objective: boosting network functionality while reducing the costs associated with network operations.

Who’s Who of Networking

Among the vendor members are a veritable who’s who of the networking industry: Cisco, HP, Juniper, Brocade, Dell/Force10, IBM, Huawei, Nokia Siemens Networks, Riverbed, Extreme, and others. Also members, not surprisingly, are virtualization vendors such as VMware and Citrix, as well as the aforementioned Microsoft. There’s a smattering of SDN/OpenFlow startups, too, such as Big Switch Networks and Nicira Networks.

Of course, membership does not necessarily entail avid participation. Some vendors, including Cisco, likely would not be thrilled at any near-term prospect of OpenFlow’s widespread market adoption. Cisco would be pleased to see the networking status quo persist for as long as possible, and its involvement in ONF probably is more that of vigilant observer than of fervent proponent. In fact, many vendors are taking a wait-and-see approach to OpenFlow. Some members, including Force10, are bearish and have suggested that the protocol is a long way from delivering the maturity and scalability that would satisfy enterprise customers.

Vendors Not In Charge

Still, the board members are steering the ONF ship, not the vendors. Regardless of when OpenFlow or something like it comes of age, the rise of software-defined networking seems inevitable. Servers and storage gear have been virtualized and have become more application-driven, but networks haven’t changed much in the last several years. They’re faster, yes, but they’re still provisioned in the traditional manner, configured rather than programmed. That takes time, consumes resources, and costs money.

Major cloud-service providers, such as those on the ONF board, want network infrastructure to become more elastic, flexible, and dynamic. Vendors will have to respond accordingly, whether with OpenFlow or with some other approach that delivers similar operational outcomes and business benefits.

I’ll be following these developments closely, watching to see how the business concerns of the cloud providers and the business interests of the networking-vendor community ultimately reconcile.