
F5’s Look Ahead

I’ve always admired how F5 Networks built its business. Against what seemed heavy odds at the time, F5 took the fight to Cisco Systems and established market leadership in load balancing, which subsequently morphed into market leadership in application delivery controllers (ADC).

F5 now talks about its “Intelligent Services Platform,” which “connects any user, anywhere, from any device to the best application resources, independent of infrastructure.”

To be sure, as various permutations of cloud computing take hold and mobile devices proliferate, the market is shifting, and F5 is attempting to move with it. To get a feel for how F5 sees the world, where it sees things going, and how it intends to meet new challenges, you might want to have a look at a 211-slide (yes, that many) presentation that company executives made to analysts and investors yesterday. 

By its nature, the presentation is mostly high-level stuff, but it offers interesting nuggets on markets, products, technologies, and partnerships.  

Some Thoughts on VMware’s Strategic Acquisition of Nicira

If you were a regular or occasional reader of Nicira Networks CTO Martin Casado’s blog, Network Heresy, you’ll know that his penultimate post dealt with network virtualization, a topic of obvious interest to him and his company. He had written about network virtualization many times, and though Casado would not describe the posts as such, they must have looked like compelling sales pitches to the strategic thinkers at VMware.

Yesterday, as probably everyone reading this post knows, VMware announced its acquisition of Nicira for $1.26 billion. VMware will pay $1.05 billion in cash and $210 million in unvested equity awards. The ubiquitous Frank Quattrone and his Qatalyst Partners, which reportedly had been hired previously to shop Brocade Communications, served as Nicira’s adviser.

Strategic Buy

VMware should have surprised no one when it emphasized that its acquisition of Nicira was a strategic move, likely to pay off in years to come, rather than one that will produce appreciable near-term revenue. As Reuters and the New York Times noted, VMware’s buy price for Nicira was 25 times the amount ($50 million) invested in the company by its financial backers, which include venture-capital firms Andreessen Horowitz, Lightspeed, and NEA. Diane Greene, co-founder and former CEO of VMware — replaced four years ago by Paul Maritz — had an “angel” stake in Nicira, as did Andy Rachleff, a former general partner at Benchmark Capital.

Despite its acquisition of Nicira, VMware says it’s not “at war” with Cisco. Technically, that’s correct. VMware and its parent company, EMC, will continue to do business with Cisco as they add meat to the bones of their data-center virtualization strategy. But the die was cast, and  Cisco should have known it. There were intimations previously that the relationship between Cisco and EMC had been infected by mutual suspicion, and VMware’s acquisition of Nicira adds to the fear and loathing. Will Cisco, as rumored, move into storage? How will Insieme, helmed by Cisco’s aging switching gods, deliver a rebuttal to VMware’s networking aspirations? It won’t be too long before the answers trickle out.

Still, for now, Cisco, EMC, and VMware will protest that it’s business as usual. In some ways, that will be true, but it will also be a type of strategic misdirection. The relationship between EMC and Cisco will not be the same as it was before yesterday’s news hit the wires. When these partners get together for meetings, candor could be conspicuous by its absence.

Acquisitive Roads Not Traveled

Some have posited that Cisco might have acquired Nicira if VMware had not beaten it to the punch. I don’t know about that. Perhaps Cisco might have bought Nicira if the asking price were low, enabling Cisco to effectively kill the startup and be done with it. But Cisco would not have paid $1.26 billion for a company whose approach to networking directly contradicts Cisco’s hardware-based business model and market dominance. One typically doesn’t pay that much to spike a company, though I suppose if the prospective buyer were concerned enough about a strategic technology shift and a major market inflection, it might do so. In this case, though, I suspect Cisco was blindsided by VMware. It just didn’t see this coming — at least not now, not at such an early stage of Nicira’s development.

Similarly, I didn’t see Microsoft or Citrix as buyers of Nicira. Microsoft is distracted by its cloud-service provider aspirations, and the $1.26 billion would have been too rich for Citrix.

IBM’s Moves and Cisco’s Overseas Cash Hoard

One company I had envisioned as a potential (though less likely) acquirer of Nicira was IBM, which already has a vSwitch. IBM might now settle for the SDN-controller technology available from Big Switch Networks. The two have been working together on IBM’s Open Data Center Interoperable Network (ODIN), and Big Switch’s technology fits well with IBM’s PureSystems and its top-down model of having application workloads command and control  virtualized infrastructure. As the second network-virtualization domino to fall, Big Switch likely will go for a lower price than did Nicira.

On Twitter, Dell’s Brad Hedlund asked whether Cisco would use its vast cash hoard to strike back with a bold acquisition of its own. Cisco has two problems here. First, I don’t see an acquisition that would effectively blunt VMware’s move. Second, about 90 percent of Cisco’s cash (more than $42 billion) is offshore, and CEO John Chambers doesn’t want to take a tax hit on its repatriation. He had been hoping for a “tax holiday” from the U.S. government, but that’s not going to happen in the middle of an election campaign, during a macroeconomic slump in which plenty of working Americans are struggling to make ends meet. That means a significant U.S.-based acquisition likely is off the table, unless the target company is very small or is willing to take Cisco stock instead of cash.

Cisco’s Innovator’s Dilemma

Oh, and there’s a third problem for Cisco, mentioned earlier in this prolix post. Cisco doesn’t want to embrace this SDN stuff. Cisco would rather resist it. The Cisco ONE announcement really was about Cisco’s take on network programmability, not about SDN-type virtualization in which overlay networks run atop an underlying physical network.

Cisco is caught in a classic innovator’s dilemma, held captive by the success it has enjoyed selling prodigious amounts of networking gear to its customers, and I don’t think it can extricate itself. It’s built a huge and massively successful business selling a hardware-based value proposition predicated on switches and routers. It has software, but it’s not really a software company.

For Cisco, the customer value, the proprietary hooks, are in its boxes. Its whole business model — which, again, has been tremendously successful — is based around that premise. The entire company is based around that business model.  Cisco eventually will have to reinvent itself, like IBM did after it failed to adapt to client-server computing, but the day of reckoning hasn’t arrived.

On the Defensive

Expect Cisco to continue to talk about the northbound interface (which can provide intelligence from the switch) and about network programmability, but don’t expect networking’s big leopard to change its spots. Cisco will try to portray the situation differently, but it’s defending rather than attacking, trying to hold off the software-based marauders of infrastructure virtualization as long as possible. The doomsday clock on when they’ll arrive in Cisco data centers just moved up a few ticks with VMware’s acquisition of Nicira.

What about the other networking players? Sadly, HP hasn’t figured out what to do about SDN, even though OpenFlow is available on its former ProCurve switches. HP has a toe dipped in the SDN pool, but it doesn’t seem willing to take the initiative. Juniper, which previously displayed ingenuity in bringing forward QFabric, is scrambling for an answer. Brocade is pragmatically embracing hybrid control planes to maintain account presence and margins in the near- to intermediate-term.

Arista Networks, for its part, might be better positioned to compete on networking’s new playing field. Arista Networks’ CEO Jayshree Ullal had the following to say about yesterday’s news:

“It’s exciting to see the return of innovative networking companies and the appreciation for great talent/technology. Software Defined Networking (SDN) is indeed disrupting legacy vendors. As a key partner of VMware and co-innovator in VXLANs, we welcome the interoperability of Nicira and VMWare controllers with Arista EOS.”

Arista’s Options

What’s interesting here is that Arista, which invariably presents its Extensible OS (EOS) as “controller friendly,” earlier this year demonstrated interoperability with controllers from VMware, Big Switch Networks, and Nebula, which has built a cloud controller for OpenStack.

One of Nebula’s investors is Andy Bechtolsheim, whom knowledgeable observers will recognize as the chief development officer (CDO) of, and major investor in, Arista Networks. It is possible that Bechtolsheim sees a potential fit between the two companies — one building a cloud controller and one delivering cloud networking. To add fuel to this particular fire, which may or may not emit smoke, note that the Nebula cloud controller already features Arista technology, and that Nebula is hiring a senior network engineer, who ideally would have “experience with cloud infrastructure (OpenStack, AWS, etc.) . . . and familiarity with OpenFlow and Open vSwitch.”

 Open or Closed?

Speaking of Open vSwitch, Matt Palmer at SDN Central will feel some vindication now that VMware has purchased a company whose engineering team has made significant contributions to the OVS code. Palmer doubtless will cast a wary eye on VMware’s intentions toward OVS, but both Steve Herrod, VMware’s CTO, and Martin Casado, Nicira’s CTO, have provided written assurances that their companies, now combining, will not retreat from commitments to OVS and to OpenFlow and Quantum, the OpenStack networking project.

Meanwhile, GigaOm’s Derrick Harris thinks it would be bad business for VMware to jilt the open-source community, particularly in relation to hypervisors, which “have to be treated as the workers that merely carry out the management layer’s commands. If all they’re there to do is create virtual machines that are part of a resource pool, the hypervisor shouldn’t really matter.”

This seems about right. In this brave new world of virtualized infrastructure, the ultimate value will reside in an intelligent management layer.

PS: I wrote this post under a slight fever and a throbbing headache, so I would not be surprised to discover belatedly that it contains at least a couple typographical errors. Please accept my apologies in advance.

Cisco’s Storage Trap

Recent commentary from Barclays Capital analyst Jeff Kvaal has me wondering whether  Cisco might push into the storage market. In turn, I’ve begun to think about a strategic drift at Cisco that has been apparent for the last few years.

But let’s discuss Cisco and storage first, then consider the matter within a broader context.

Risks, Rewards, and Precedents

Obviously a move into storage would involve significant risks as well as potential rewards. Cisco would have to think carefully, as it presumably has done, about the likely consequences and implications of such a move. The stakes are high, and other parties — current competitors and partners alike — would not sit idly on their hands.

Then again, Cisco has been down this road before, when it chose to start selling servers rather than relying on boxes from partners, such as HP and Dell. Today, of course, Cisco partners with EMC and NetApp for storage gear. Citing the precedent of Cisco’s server incursion, one could make the case that Cisco might be tempted to call the same play.

After all, we’re entering a period of converged and virtualized infrastructure in the data center, where private and public clouds overlap and merge. In such a world, customers might wish to get well-integrated compute, networking, and storage infrastructure from a single vendor. That’s a premise already accepted at HP and Dell. Meanwhile, it seems increasingly likely data-center infrastructure is coming together, in one way or another, in service of application workloads.

Limits to Growth?

Cisco also has a growth problem. Despite attempts at strategic diversification, including failed ventures in consumer markets (Flip, anyone?), Cisco still hasn’t found a top-line driver that can help it expand the business while supporting its traditional margins. Cisco has pounded the table perennially for videoconferencing and telepresence, but it’s not clear that Cisco will see as much benefit from the proliferation of video collaboration as once was assumed.

To complicate matters, storm clouds are appearing on the horizon, with Cisco’s core businesses of switching and routing threatened by the interrelated developments of service-provider alienation and software-defined networking (SDN). Cisco’s revenues aren’t about to fall off a cliff by any means, but nor are they on the cusp of a second-wind surge.

Such uncertain prospects must concern Cisco’s board of directors, its CEO John Chambers, and its institutional investors.

Suspicious Minds

In storage, Cisco currently has marriages of mutual convenience with EMC (VBlocks and the sometimes-strained VCE joint venture) and with NetApp (the FlexPod reference architecture).  The lyrics of Mark James’ song Suspicious Minds are evocative of what’s transpiring between Cisco and these storage vendors. The problem is not only that Cisco is bigamous, but that the networking giant might have another arrangement in mind that leaves both partners jilted.

Neither EMC nor NetApp is oblivious to the danger, and each has taken care to reduce its strategic reliance on Cisco. Conversely, Cisco would be exposed to substantial risks if it were to abandon its existing partnership in favor of a go-it-alone approach to storage.

I think that’s particularly true in the case of EMC, which is the majority owner of server-virtualization market leader VMware as well as a storage vendor. The corporate tandem of VMware and EMC carries considerable enterprise clout, and Cisco is likely to be understandably reluctant to see the duo become its adversaries.

Caught in a Trap

Still, Cisco has boxed itself into a strategic corner. It needs growth, it hasn’t been able to find it from diversification away from the data center, and it could easily see the potential of broadening its reach from networking and servers to storage. A few years ago, the logical choice might have been for Cisco to acquire EMC. Cisco had the market capitalization and the onshore cash to pull it off five years ago, perhaps even three years ago.

Since then, though, the companies’ market fortunes have diverged. EMC now has a market capitalization of about $54 billion, while Cisco’s is slightly more than $90 billion. Even if Cisco could find a way of repatriating its offshore cash hoard without taking a stiff hit from the U.S. taxman, it wouldn’t have the cash to pull off an acquisition of EMC, whose shareholders doubtless would be disinclined to accept Cisco stock as part of a proposed transaction.

Therefore, even if it wanted to do so, Cisco cannot acquire EMC. It might have been a good move at one time, but it isn’t practical now.

Losing Control

Even NetApp, with a market capitalization of more than $12.1 billion, would rate as the biggest purchase by far in Cisco’s storied history of acquisitions. Cisco could pull it off, but then it would have to try to further counter and commoditize VMware’s virtualization and cloud-management presence through a fervent embrace of something like OpenStack or a potential acquisition of Citrix. I don’t know whether Cisco is ready for either option.

Actually, I don’t see an easy exit from this dilemma for Cisco. It’s mired in somewhat beneficial but inherently limiting and mutually distrustful relationships with two major storage players. It would probably like to own storage just as it owns servers, so that it might offer a full-fledged converged infrastructure stack, but it has let the data-center grass grow under its feet. Just as it missed a beat and failed to harness virtualization and cloud as well as it might have done, it has stumbled similarly on storage.

The status quo is likely to prevail until something breaks. As we all know, however, making no decision effectively is a decision, and it carries consequences. Increasingly, and to an extent that is unprecedented, Cisco is losing control of its strategic destiny.

Reflecting on the Big Acquisition Cisco Didn’t Make

It has been nearly eight years since EMC acquired VMware. The acquisition announcement went over the newswires on December 15, 2003. EMC paid approximately $635 million for VMware, and Joe Tucci, EMC’s president and CEO, had this to say about the deal:

“Customers want help simplifying the management of their IT infrastructures. This is more than a storage challenge. Until now, server and storage virtualization have existed as disparate entities. Today, EMC is accelerating the convergence of these two worlds.”

“We’ve been working with the talented VMware team for some time now, and we understand why they are considered one of the hottest technology companies anywhere. With the resources and commitment of EMC behind VMware’s leading server virtualization technologies and the partnerships that help bring these technologies to market, we look forward to a prosperous future together.”

Virtualization Goldmine

Oh, the future was prosperous . . . and then some. It’s a deal that worked out hugely in EMC’s favor. Even though the storage behemoth has spun out VMware in the interim, allowing it to go public, EMC still retains more than 80 percent ownership of its virtualization goldmine.

Consider that EMC paid just $635 million in 2003 to buy the server-virtualization market leader. VMware’s current market capitalization is more than $38 billion. That means EMC’s stake in VMware is worth more than $30 billion, not including the gains it reaped when it took VMware public. I don’t think it’s hyperbolic to suggest that EMC’s purchase of VMware will be remembered as Tucci’s defining moment as EMC chieftain.

Now, let’s consider another vendor that had an opportunity to acquire VMware back in 2003.

Massive Market Cap, Industry Dominance

A few years earlier, at the pinnacle of the dot-com boom in March 2000, Cisco was the most valuable company in the world, sporting a market capitalization of more than US$500 billion.  It was a networking colossus that bestrode the globe, dominating its realm of the industry as much as any other technology company during any other period. (Its only peers in that regard were IBM in the mainframe era and Microsoft and Intel in the client-server epoch.)

Although Juniper Networks brought its first router to market in the fall of 1998 and began to challenge Cisco for routing patronage at many carriers early in the first decade of the new millennium, Cisco remained relatively unscathed in enterprise networking, where its Catalyst switches grew into a multibillion-dollar franchise after it saw off competitive challenges in the late 90s from companies such as 3Com, Cabletron, Nortel, and others.

As was its wont since its first acquisition, involving Crescendo Communications in 1993, Cisco remained an active buyer of technology companies. It bought companies to inorganically fortify its technological innovation, and to preclude competitors from gaining footholds among its expanding installed base of customers.

Non-Buyer’s Remorse?

It’s true that the post-boom dot-com bust cooled Cisco’s acquisitive ardor. Nonetheless, the networking giant made nine acquisitions from May 2002 through to the end of 2003. The companies Cisco acquired in that span included Hammerhead Networks, Navarro Networks, AYR Networks, Andiamo Systems, Psionic Software, Okena, SignalWorks, Linksys, and Latitude Communications.

The biggest acquisition in that period involved spin-in play Andiamo Systems, which provided the technological foundation for Cisco’s subsequent push to dominate storage networking. Cisco was at risk of paying as much as $2.5 billion for Andiamo, but the actual price tag for that convoluted spin-in transaction was closer to $750 million by the time it finally closed in 2004. The next-biggest Cisco acquisition during that period involved home-networking vendor Linksys, for which Cisco paid about $500 million.

Cisco announced the acquisitions of Hammerhead Networks and Navarro Networks in a single press release. Hammerhead, for which Cisco exchanged common stock valued at up to $173 million, developed software that accelerated the delivery of IP-based billing, security, and QoS; the company was folded into the Cable Business Unit in Cisco’s Network Edge and Aggregation Routing Group. Navarro Networks, for which Cisco exchanged common stock valued at up to $85 million, designed ASIC components for Ethernet switching.

To acquire AYR Networks, a vendor of “high-performance distributed networking services and highly scalable routing software technologies,” Cisco parted with about $113 million in common stock. AYR’s technology was intended to augment Cisco’s IOS software.

Andiamo Factor

Although the facts probably are familiar to many readers, Cisco’s acquisition of Andiamo was noteworthy for several reasons. It was a spin-in acquisition, in which Cisco funded the company to go off and develop technology on its own, only later to be brought back in-house through acquisition. Andiamo was led by its CEO Buck Gee, and it included a core group of engineers who also were at Crescendo Communications. The concept and execution of the spin-in move at Cisco were highly controversial within the company, seen as operationally and strategically innovative by many senior executives even though others claimed it engendered envy, invidiousness, and resentment among rank-and-file employees.

No matter: Andiamo was meant to provide market leadership for Cisco in IP-based storage networking and to give Cisco a means of battering Brocade in Fibre Channel. That plan hasn’t come to fruition, with Brocade still leading in a tenacious Fibre Channel market and Cisco banking on Fibre Channel over Ethernet (FCoE) to go from the edge to the core. (The future of storage networking, including the often entertaining Fibre Channel-versus-FCoE debates, is another matter, and not within the purview of this post.)

While we’re on the topic of Andiamo, its former engineers continue to make news. Just this week, former Andiamo engineers Dante Malagrinò and Marco Di Benedetto officially launched Embrane, a company that is committed to delivering a platform for virtualized L4-7 network services at large cloud service providers. Those two gentlemen also were involved in Cisco’s last big spin-in move, Nuova Systems, which provided the foundation for Cisco’s Unified Computing System (UCS).

As for Cisco’s post-Andiamo acquisition announcements in 2002, Okena and Psionic both were involved in intrusion-detection technology. Of the two, Okena represented the larger transaction, valued at about $154 million in stock.

Interestingly, not much is available publicly these days regarding Cisco’s announced acquisition of SignalWorks in March of 2003. If you visit the CrunchBase profile for SignalWorks and click on a link that is supposed to take you to a Cisco press release announcing the deal, you’ll get a “Not Found” message. A search of the Cisco website turns up two press releases — relating to financial results in Cisco’s third and fourth quarters of fiscal year 2003, respectively — that obliquely mention the SignalWorks acquisition. The purchase price of the IP-audio company was about $16 million. CNet also covered the acquisition when it first came to light.

Other Strategic Priorities

Cisco’s last announced acquisitions in that timeframe involved home-networking player Linksys, part of Cisco’s ultimately underachieving bid to become a major player in the consumer space, and web-conferencing vendor Latitude Communications.

And now we get to the crux of this post. Cisco announced a number of acquisitions in 2002 and 2003, but it was one it didn’t make that reverberates to this day. It was a watershed acquisition, a strategic masterstroke, but it was made by EMC, not by Cisco, and its implications probably will continue to ramify for years to come.

Some might contend that Cisco perhaps didn’t grasp the long-term significance of virtualization. Apparently, though, some at Cisco were clamoring for the company to buy VMware. The missed opportunity wasn’t attributable to Cisco failing to see the importance of virtualization — some at Cisco had the prescience to see where the technology would lead — but to the fact that an acquisition of VMware wasn’t considered as high a priority as the spin-in of Andiamo for storage networking and the acquisition of Linksys for home networking.

Cisco placed its bets elsewhere, perhaps thinking that it had more time to develop a coherent and comprehensive strategy for virtualization. Then EMC made its move.

Missed the Big Chance

To this day, in my view, Cisco is paying an exorbitant opportunity cost for failing to take VMware off the market, leaving it for EMC and ultimately allowing the storage leader, years later, to gain the upper hand in the Virtual Computing Environment (VCE) Company joint venture that delivers UCS-encompassing VBlocks. There’s a rich irony there, too, when one considers that Cisco’s UCS contribution to the VBlock package is represented by technology derived from spin-in Nuova.

But forget about VCE and VBlocks. What about the bigger picture? Although Cisco likes to talk itself up as a leader in virtualization, it’s not nearly as prominent or dominant as it might have been. Is there anybody who would argue that Cisco, if it had acquired and then integrated and assimilated VMware half as well as it digested Crescendo, wouldn’t have absolutely thrashed all comers in converged data-center infrastructure and cloud infrastructure?

Cisco belatedly recognized its error of omission, but it was too late. By 2009, EMC was not interested in selling its majority stake in VMware to Cisco, and Cisco was in no position to try to obtain it through an acquisition of EMC. In that regard, Cisco’s position has only worsened.

Although EMC’s ownership stake in VMware amounts to about 80 percent (or perhaps even just north of that amount), it has 98 percent of the voting shares in the company, which effectively means EMC steers the ship, regardless of public pronouncements VMware executives might issue regarding their firm being an autonomous corporate entity.

Keeping Cisco Interested but Contained 

Conversely, Cisco owns approximately five percent of VMware’s Class A shares, but none of its Class B shares, and it held just one percent of voting power as of March 2011. As of that same date, EMC owned all of VMware’s 330,000,000 Class B shares and 33,066,050 of its 118,462,369 Class A common shares. Cisco has a stake in VMware, but it’s a small one, held at the pleasure of EMC, whose objective is to keep Cisco sufficiently interested that it does not pursue other strategic options in data-center virtualization and cloud infrastructure.

The EMC gambit has worked, up to a point. But Cisco, which missed its big chance in 2003, has been trying ever since then to reassert its authority. Nuova, and all that flowed from it, was Cisco’s first attempt to regain lost ground, and now it is partnering, to varying degrees, with VMware and EMC competitors such as NetApp, Citrix, and Microsoft. It also has gotten involved with OpenStack and the oVirt Project in a bid to hedge its virtualization bets.

Yes, some of those moves are indicative of coopetition, and Cisco retains its occasionally strained VCE joint venture with EMC and VMware, but Cisco clearly is playing for time, looking for a way to redefine the rules of the game.

What Cisco is trying to do is break an impasse of its own making, a result of strategic choices it made nearly a decade ago.

Embrane Emerges from Stealth, Brings Heleos to Light

I had planned to write about something else today — and I still might get around to it — but then Embrane came out of stealth mode. I feel compelled to comment, partly because I have written about the company previously, but also because what Embrane is doing deserves notice.

Embrane’s Heleos

With regard to the aforementioned previous post, which dealt with Dell acquisition candidates in Layer 4-7 network services, I am now persuaded that Dell is more likely to pull the trigger on a deal for an A10 Networks, let’s say, than it is to take a more forward-looking leap at venture-funded Embrane. That’s because I now know about Embrane’s technology, product positioning, and strategic direction, and also because I strongly suspect that Dell is looking for a purchase that will provide more immediate payback within its installed base and current strategic orientation.

Still, let’s put Dell aside for now and focus exclusively on Embrane.

The company’s founders, former Andiamo-Cisco lads Dante Malagrinò and Marco Di Benedetto, have taken their company out of the shadows and into the light with their announcement of Heleos, which Embrane calls “the industry’s first distributed software platform for virtualizing layer 4-7 network services.” What that means, according to Embrane, is that cloud service providers (CSPs) and enterprises can use Heleos to build more agile networks to deliver cloud-based infrastructure as a service (IaaS). I can perhaps see the qualified utility of Heleos for the former, but I think the applicability and value for the latter constituency is more tenuous.

Three Wise Men

But I am getting ahead of myself, putting the proverbial cart before the horse. So let’s take a step back and consult some learned minds (including an “ethereal” one) on what Heleos is, how it works, what it does, and where and how it might confer value.

Since the Embrane announcement hit the newswires, I have read expositions on the company and its new product from The 451 Group’s Eric Hanselman, from rock-climbing Ivan Pepelnjak (technical director at NIL Data Communications), and from EtherealMind’s Greg Ferro. Each has provided valuable insight and analysis. If you’re interested in learning about Embrane and Heleos, I encourage you to read what they’ve written on the subject. (Only one of Hanselman’s two 451 Group pieces is available publicly online at no charge.)

Pepelnjak provides an exemplary technical description and overview of Heleos. He sets out the problem it’s trying to solve, considers the pros and cons of the alternative solutions (hardware appliances and virtual appliances), expertly explores Embrane’s architecture, examines use cases, and concludes with a tidy summary. He ultimately takes a positive view of Heleos, depicting Embrane’s architecture as “one of the best proposed solutions” he’s seen hitherto for scalable virtual appliances in public and private cloud environments.

Limited Upside

Ferro reaches a different conclusion, but not before setting the context and providing a compelling description of what Embrane does. After considering Heleos, Ferro ascertains that its management of IP flows equates to “flow balancing as a form of load balancing.” From all that I’ve read and heard, it seems an apt classification. He also notes that Embrane, while using flow management, is not an “OpenFlow/SDN business.” Although I see conceptual similarities between what Embrane is doing and what OpenFlow does, I agree with Ferro, if only because, as I understand it, OpenFlow reaches no higher than the network layer. I suppose the same is true for SDN, but this is where ambiguity enters the frame.

Even as I wrote this piece, there was a kerfuffle on Twitter as to whether or to what extent Embrane’s Heleos can be categorized as the latest manifestation of SDN. (Hours later, at post time, this vigorous exchange of views continues.)

That’s an interesting debate — and I’m sure it will continue — but I’m most intrigued by the business and market implications of what Embrane has delivered. On that score, Ferro sees Embrane’s platform play as having limited upside, restricted to large cloud-service providers with commensurately large data centers. He concludes there’s not much here for enterprises, a view with which I concur.

Competitive Considerations

Hanselman covers some of the same ground that Ferro and Pepelnjak traverse, but he also expends some effort examining the competitive landscape that Embrane is entering. Because Embrane is delivering a virtualization platform for network services, it will be up against Layer 4-7 stalwarts such as F5 Networks, A10 Networks, Riverbed/Zeus, Radware, Brocade, Citrix, and Cisco, among others. F5, the market leader, already recognizes and is acting upon some of the market and technology drivers that doubtless inspired the team that brought Heleos to fruition.

With that in mind, I wish to consider Embrane’s business prospects.

Embrane closed a Series B round of $18 million in August. It was led by New Enterprise Associates and included the involvement of Lightspeed Venture Partners and North Bridge Venture Partners, both of whom participated in a $9-million Series A round in March 2010.

To determine whether Embrane is a good horse to back (hmm, what’s with the horse metaphors today?), one has to consider the applicability of its technology to its addressable market — very large cloud-service providers — and then also project its likelihood of providing a solution that is preferable and superior to alternative approaches and competitors.

Counting the Caveats

While I tend to agree with those who believe Embrane will find favor with at least some large cloud-service providers, I wonder how much favor there is to find. There are three compelling caveats to Embrane’s commercial success:

  1. L4-7 network services, while vitally important to cloud service providers and large enterprises, represent a much smaller market than L2-L3 networking, virtualized or otherwise. Just as a benchmark, Dell’Oro reported earlier this year that the L2-3 Ethernet Switch market would be worth approximately $25 billion in 2015, with the L4-7 application delivery controller (ADC) market expected to reach more than $1.5 billion, though the virtual-appliance segment is expected to show the most growth in that space. Some will say, accurately, that L4-7 network services are growing faster than L2-3 networking. Even so, the gap in size remains notable, which is why SDN and OpenFlow have been drawing so much attention in an increasingly virtualized and “cloudified” world.
  2. Embrane’s focus on large-scale cloud service providers, and not on enterprises (despite what’s stated in the press release), while rational and perfectly understandable, further circumscribes its addressable market.
  3. F5 Networks is a tough competitor, more agile and focused than a Cisco Systems, and will not easily concede customers or market share to a newcomer. Embrane might have to pick up scraps that fall to the floor rather than feasting at the head table. At this point, I don’t think F5 is concerned about Embrane, though that could change if Embrane can use NaviSite — its first customer, now owned by TimeWarner Cable — as a reference account and validator for further business among cloud service providers.

Notwithstanding those reservations, I look forward to seeing more of Embrane as we head into 2012. The company has brought a creative approach and an innovative platform architecture to market, a higher-layer counterpart and analog to what’s happening further down the stack with SDN and OpenFlow.

Exploring OpenStack’s SDN Connections

Pursuant to my last post, I will now examine the role of OpenStack in the context of software-defined networking (SDN). As you will recall, it was one of the alternative SDN enabling technologies mentioned in a recent article and sidebar at Network World.

First, though, I want to note that, contrary to the concerns I expressed in the preceding post, I wasn’t distracted by a shiny object before getting around to writing this installment. I feared I would be, but my powers of concentration and focus held sway. It’s a small victory, but I’ll take it.

Road to Quantum

Now, on to OpenStack, which I’ve written about previously, though admittedly not in the context of SDNs. As for how networking evolved into a distinct facet of OpenStack, Martin Casado, chief technology officer at Nicira, offers a thorough narrative at the Open vSwitch website.

Casado begins by explaining that OpenStack is a “cloud management system (CMS) that orchestrates compute, storage, and networking to provide a platform for building on demand services such as IaaS.” He notes that OpenStack’s primary components were OpenStack Compute (Nova), OpenStack Storage (Swift), and OpenStack Image Services (Glance), and he also provides an overview of their respective roles.

Then he asks, as one might, what about networking? At this point, I will quote directly from his Open vSwitch post:

“Noticeably absent from the list of major subcomponents within OpenStack is networking. The historical reason for this is that networking was originally designed as a part of Nova which supported two networking models:

● Flat Networking – A single global network for workloads hosted in an OpenStack Cloud.

● VLAN based Networking – A network segmentation mechanism that leverages existing VLAN technology to provide each OpenStack tenant its own private network.

While these models have worked well thus far, and are very reasonable approaches to networking in the cloud, not treating networking as a first class citizen (like compute and storage) reduces the modularity of the architecture.”
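
To make the distinction concrete, here is a minimal, illustrative Python sketch (not Nova code; the class names, tenant identifiers, and address ranges are hypothetical) of how the two models treat tenants: a flat allocator hands every workload the same shared segment, while a VLAN-based allocator carves out a separate VLAN and subnet per tenant.

    # Illustrative only -- not Nova source. Contrasts the two models Casado
    # describes: one shared global network versus per-tenant VLAN segmentation.
    class FlatAllocator:
        """Flat networking: every instance lands on a single global network."""
        def __init__(self, cidr="10.0.0.0/16"):
            self.cidr = cidr

        def network_for(self, tenant_id):
            # Same answer for every tenant: one big shared segment.
            return {"tenant": tenant_id, "cidr": self.cidr, "vlan": None}

    class VlanAllocator:
        """VLAN-based networking: each tenant gets its own VLAN and subnet."""
        def __init__(self, first_vlan=100):
            self.next_vlan = first_vlan
            self.assigned = {}

        def network_for(self, tenant_id):
            if tenant_id not in self.assigned:
                vlan = self.next_vlan
                self.next_vlan += 1
                self.assigned[tenant_id] = {
                    "tenant": tenant_id,
                    "cidr": "10.%d.0.0/24" % (vlan - 90),  # toy per-tenant subnet
                    "vlan": vlan,
                }
            return self.assigned[tenant_id]

    if __name__ == "__main__":
        flat, vlan = FlatAllocator(), VlanAllocator()
        for tenant in ("tenant-a", "tenant-b"):
            print("flat:", flat.network_for(tenant))
            print("vlan:", vlan.network_for(tenant))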

As a result of Nova’s networking shortcomings, which Casado enumerates in detail,  Quantum, a standalone networking component, was developed.

Network Connectivity as a Service

The OpenStack wiki defines Quantum as “an incubated OpenStack project to provide ‘network connectivity as a service’ between interface devices (e.g., vNICs) managed by other Openstack services (e.g., nova).” On that same wiki, Quantum is touted as being able to support advanced network topologies beyond the scope of Nova’s FlatManager or VlanManager; as enabling anyone to “build advanced network services (open and closed source) that plug into Openstack networks”; and as enabling new plugins (open and closed source) that introduce advanced network capabilities.
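
For a rough sense of what “network connectivity as a service” looks like from a tenant’s perspective, here is a hedged Python sketch of the kind of calls such an API exposes. The client class and method names below are hypothetical stand-ins, not the actual Quantum client library; the point is only that networks and ports become first-class resources to which other services (a Nova-like compute service, say) attach vNICs.

    # Hypothetical Quantum-style workflow; the class and methods are invented
    # for illustration and are not the real OpenStack client API. A tenant
    # creates a private network, adds a port, and a compute service plugs a
    # VM's vNIC into that port.
    class QuantumLikeClient:
        def __init__(self):
            self._networks, self._ports = {}, {}

        def create_network(self, tenant_id, name):
            net_id = "net-%d" % (len(self._networks) + 1)
            self._networks[net_id] = {"tenant": tenant_id, "name": name}
            return net_id

        def create_port(self, net_id):
            port_id = "port-%d" % (len(self._ports) + 1)
            self._ports[port_id] = {"network": net_id, "attachment": None}
            return port_id

        def plug_interface(self, port_id, vnic_id):
            # In a real deployment, a plug-in would program the underlying
            # virtual or physical switches to realize this attachment.
            self._ports[port_id]["attachment"] = vnic_id
            return self._ports[port_id]

    client = QuantumLikeClient()
    net = client.create_network("tenant-a", "web-tier")
    port = client.create_port(net)
    print(client.plug_interface(port, "vnic-1234"))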

Okay, but how does it relate specifically to SDNs? That’s a good question, and James Urquhart has provided a clear and compelling answer, which later was summarized succinctly by Stuart Miniman at Wikibon. What Urquhart wrote actually connects the dots between OpenStack’s Quantum and OpenFlow-enabled SDNs. Here’s a salient excerpt:

“. . . . how does OpenFlow relate to Quantum? It’s simple, really. Quantum is an application-level abstraction of networking that relies on plug-in implementations to map the abstraction(s) to reality. OpenFlow-based networking systems are one possible mechanism to be used by a plug-in to deliver a Quantum abstraction.

OpenFlow itself does not provide a network abstraction; that takes software that implements the protocol. Quantum itself does not talk to switches directly; that takes additional software (in the form of a plug-in). Those software components may be one and the same, or a Quantum plug-in might talk to an OpenFlow-based controller software via an API (like the Open vSwitch API).”
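
Urquhart’s layering (abstraction in Quantum, realization in a plug-in, OpenFlow as one possible mechanism underneath) can be sketched in a few lines of hypothetical Python. Nothing below is actual Quantum or controller code; the plug-in interface, the controller URL, and the REST endpoint are assumptions included only to show where an OpenFlow-based controller would slot in.

    # Hypothetical sketch of the layering described above. The plug-in API,
    # controller URL, and payload are invented for illustration; they do not
    # match the real Quantum plug-in interface or any specific controller.
    import json
    import urllib.request

    class QuantumPluginBase:
        """Abstract view: Quantum itself only knows about logical networks."""
        def create_network(self, tenant_id, name):
            raise NotImplementedError

    class OpenFlowBackedPlugin(QuantumPluginBase):
        """Maps the abstraction onto an OpenFlow controller via its API."""
        def __init__(self, controller_url):
            self.controller_url = controller_url

        def create_network(self, tenant_id, name):
            # Ask the controller to set up an isolated logical segment; the
            # controller, in turn, pushes flow rules down to the switches.
            payload = json.dumps({"tenant": tenant_id, "name": name}).encode()
            req = urllib.request.Request(
                self.controller_url + "/logical-networks",  # hypothetical endpoint
                data=payload,
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)

    # Usage sketch (assumes a controller is listening at this address):
    # plugin = OpenFlowBackedPlugin("http://controller.example:8080")
    # plugin.create_network("tenant-a", "web-tier")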

Cisco’s Contribution

So, that addresses the complementary functionality of OpenStack’s Quantum and OpenFlow, but, as Urquhart noted, OpenFlow is just one mechanism that can be used by a plug-in to deliver a Quantum abstraction. Further to that point, bear in mind that Quantum, as recounted on the OpenStack wiki, can be used  to “build advanced network services (open and closed source) that plug into OpenStack networks” and to facilitate new plugins that introduce advanced network capabilities.

Consequently, when it comes to using OpenStack in SDNs, OpenFlow isn’t the only complementary option available. In fact, Cisco is in on the action, using Quantum to “develop API extensions and plug-in drivers for creating virtual network segments on top of Cisco NX-OS and UCS.”

Cisco portrays itself as a major contributor to OpenStack’s Quantum, and the evidence seems to support that assertion. Cisco also has indicated qualified support for OpenFlow, so there’s a chance OpenStack and OpenFlow might intersect on a Cisco roadmap. That said, Cisco’s initial OpenStack-related networking forays relate to its proprietary technologies and existing products.

Citrix, Nicira, Rackspace . . . and Midokura

Other companies have made contributions to OpenStack’s Quantum, too. In a post at Network World, Alan Shimel of The CISO Group cites the involvement of Nicira, Cisco, Citrix, Midokura, and Rackspace. From what Nicira’s Casado has written and said publicly, we know that OpenFlow is in the mix there. It seems to be in the picture at Rackspace, too. Citrix has published blog posts about Quantum, including this one, but I’m not sure where the company is going with it, though XenServer, Open vSwitch, and, yes, OpenFlow are likely to be involved.

Finally, we have Midokura, a Japanese company that has a relatively low profile, at least on this side of the Pacific Ocean. According to its website, it was established early in 2010, and it had just 12 employees at the end of April 2011.

If my currency-conversion calculations (from Japanese yen) are correct, Midokura also had about $1.5 million in capital as of that date. Earlier that same month, the company announced seed funding of about $1.3 million. Investors were Bit-Isle, a Japanese data-center company; NTT Investment Partners, an investment vehicle of  Nippon Telegraph & Telephone Corp. (NTT); 1st Holdings, a Japanese ISV that specializes in tools and middleware; and various individual investors, including Allen Miner, CEO of SunBridge Corporation.

On its website, Midokura provides an overview of its MidoNet network-virtualization platform, which is billed as providing a solution to the problem of inflexible and expensive large-scale physical networks that tend to lock service providers into a single vendor.

Virtual Network Model in Cloud Stack

In an article published  at TechCrunch this spring, at about the time Midokura announced its seed round, the company claimed to be the only one to have “a true virtual network model” in a cloud stack. The TechCrunch piece also said the MidoNet platform could be integrated “into existing products, as a standalone solution, via a NaaS model, or through Midostack, Midokura’s own cloud (IaaS/EC2) distribution of OpenStack (basically the delivery mechanism for Midonet and the company’s main product).”

Although the company was accepting beta customers last spring, it hasn’t updated its corporate blog since December 2010. Its “Events” page, however, shows signs of life, with Midokura indicating that it will be attending or participating in the grand opening of Rackspace’s San Francisco office on December 1.

Perhaps we’ll get an update then on Midokura’s progress.

Rackspace’s Bridge Between Clouds

OpenStack has generated plenty of sound and fury during the past several months, and, with sincere apologies to William Shakespeare, there’s evidence to suggest the frenzied activity actually signifies something.

Precisely what it signifies, and how important OpenStack might become, is open to debate, of which there has been no shortage. OpenStack is generally depicted as an open-source cloud operating system, but that might be a generous interpretation. On the OpenStack website, the following definition is offered:

“OpenStack is a global collaboration of developers and cloud computing technologists producing the ubiquitous open source cloud computing platform for public and private clouds. The project aims to deliver solutions for all types of clouds by being simple to implement, massively scalable, and feature rich. The technology consists of a series of interrelated projects delivering various components for a cloud infrastructure solution.”

Just for fun and giggles (yes, that phrase has been modified so as not to offend reader sensibilities), let’s parse that passage, shall we? In the words of the OpenStackers themselves, their project is an open-source cloud-computing platform for public and private clouds, and it is reputedly simple to implement, massively scalable, and feature rich. Notably, it consists of a “series of interrelated projects delivering various components for a cloud infrastructure solution.”

Simple for Some

Given that description, especially the latter reference to interrelated projects spawning various components, one might wonder exactly how “simple” OpenStack is to implement and by whom. That’s a question others have raised, including David Linthicum in a recent piece at InfoWorld. In that article, Linthicum notes that undeniable vendor enthusiasm and burgeoning market momentum accrue to OpenStack — the community now has 138 member companies (and counting), including big-name players such as HP, Dell, Intel, Rackspace, Cisco, Citrix, Brocade, and others — but he also offers the following caveat:

“So should you consider OpenStack as your cloud computing solution? Not on its own. Like many open source projects, it takes a savvy software vendor, or perhaps cloud provider, to create a workable product based on OpenStack. The good news is that many providers are indeed using OpenStack as a foundation for their products, and most of those are working, or will work, just fine.”

Creating Value-Added Services

Meanwhile, taking issue with a recent InfoWorld commentary by Savio Rodrigues — who argued that OpenStack will falter while its open-source counterpart Eucalyptus will thrive — James Urquhart, formerly of Cisco and now VP of product strategy at enStratus, made this observation:

“OpenStack is not a cloud service, per se, but infrastructure automation tuned to cloud-model services, like IaaS, PaaS and SaaS. Install OpenStack, and you don’t get a system that can instantly bill customers, provide a service catalog, etc. That takes additional software.

What OpenStack represents is the commodity element of cloud services: the VM, object, server image and networking management components. Yeah, there is a dashboard to interact with those commodity elements, but it is not a value-add capability in-and-of itself.

What HP, Dell, Cisco, Citrix, Piston, Nebula and others are doing with OpenStack is creating value-add services on top (or within) the commodity automation. Some focus more on “being commodity”, others focus more on value-add, but they are all building on top of the core OpenStack projects.”

New Revenue Stream for Rackspace

All of which brings us, in an admittedly roundabout fashion, to Rackspace’s recent announcement of its Rackspace Cloud Private Edition, a packaged version of OpenStack components that can be used by enterprises for private-cloud deployments. This move makes sense for Rackspace on a couple of levels.

First off, it opens up a new revenue stream for the company. While Rackspace won’t try to make money on the OpenStack software or the reference designs — featuring a strong initial emphasis on Dell servers and Cisco networking gear for now, though bare-bones OpenCompute servers are likely to be embraced before long —  it will provide value-add, revenue-generating managed services to customers of Rackspace Cloud Private Edition. These managed services will comprise installation of OpenStack updates, analysis of system issues, and assistance with specific questions relating to systems engineering. Some security-related services also will be offered. While the reference architecture and the software are available now, Rackspace’s managed services won’t be available until January.

Building a Bridge

The launch of Rackspace Cloud Private Edition is a diversification initiative for Rackspace, which hitherto has made its money by hosting and managing applications and computing services for others in its own data centers. The OpenStack bundle takes it into the realm of providing managed services in its customers’ data centers.

As mentioned above, this represents a new revenue stream for Rackspace, but it also provides a technological bridge that will allow customers who aren’t ready for multi-tenant public cloud services today to make an easy transition to Rackspace’s data centers at some future date. It’s a smart move, preventing prospective customers from moving to another platform for private cloud deployment, ensuring in the process that said customers don’t enter the orbit of another vendor’s long-term gravitational pull.

The business logic coheres. For each customer engagement, Rackspace gets a payoff today, and potentially a larger one at a later date.