Category Archives: OpenStack

Exploring OpenStack’s SDN Connections

Pursuant to my last post, I will now examine the role of OpenStack in the context of software-defined networking (SDN). As you will recall, it was one of the alternative SDN enabling technologies mentioned in a recent article and sidebar at Network World.

First, though, I want to note that, contrary to the concerns I expressed in the preceding post, I wasn’t distracted by a shiny object before getting around to writing this installment. I feared I would be, but my powers of concentration and focus held sway. It’s a small victory, but I’ll take it.

Road to Quantum

Now, on to OpenStack, which I’ve written about previously, though admittedly not in the context of SDNs. As for how networking evolved into a distinct facet of OpenStack, Martin Casado, chief technology officer at Nicira, offers a thorough narrative at the Open vSwitch website.

Casado begins by explaining that OpenStack is a “cloud management system (CMS) that orchestrates compute, storage, and networking to provide a platform for building on demand services such as IaaS.” He notes that OpenStack’s primary components were OpenStack Compute (Nova), OpenStack Storage (Swift), and OpenStack Image Service (Glance), and he also provides an overview of their respective roles.
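To make that division of labor a bit more concrete, here is a minimal sketch, in Python, of the sort of provisioning flow a CMS orchestrates across those components. The client objects and method names are hypothetical stand-ins for illustration, not the real Glance, Nova, or Swift APIs:

    # Illustrative only: hypothetical clients standing in for Glance (images),
    # Nova (compute), and Swift (object storage); names are assumptions.
    def provision_instance(glance, nova, swift, image_name, flavor):
        image = glance.find_image(image_name)            # image catalog (Glance)
        server = nova.boot(name="web-01",                # compute (Nova)
                           image=image,
                           flavor=flavor)
        swift.put_object("backups", "web-01-manifest",   # object storage (Swift)
                         server.describe())
        return server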

Then he asks, as one might, what about networking? At this point, I will quote directly from his Open vSwitch post:

“Noticeably absent from the list of major subcomponents within OpenStack is networking. The historical reason for this is that networking was originally designed as a part of Nova which supported two networking models:

● Flat Networking – A single global network for workloads hosted in an OpenStack Cloud.

● VLAN based Networking – A network segmentation mechanism that leverages existing VLAN technology to provide each OpenStack tenant its own private network.

While these models have worked well thus far, and are very reasonable approaches to networking in the cloud, not treating networking as a first class citizen (like compute and storage) reduces the modularity of the architecture.”

As a result of Nova’s networking shortcomings, which Casado enumerates in detail, Quantum, a standalone networking component, was developed.
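For readers who never touched Nova’s networking directly, here is roughly what choosing between those two models looked like in a nova.conf of that era. The flag names below are recalled from the Cactus/Diablo-era releases and may vary by version, so treat this as an illustrative sketch rather than a canonical configuration:

    # Flat networking: one shared, global network for every workload
    --network_manager=nova.network.manager.FlatManager
    --flat_network_bridge=br100
    --fixed_range=10.0.0.0/12

    # VLAN-based networking: a private, VLAN-backed segment per tenant
    --network_manager=nova.network.manager.VlanManager
    --vlan_start=100
    --network_size=256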

Network Connectivity as a Service

The OpenStack wiki defines Quantum as “an incubated OpenStack project to provide ‘network connectivity as a service’ between interface devices (e.g., vNICs) managed by other Openstack services (e.g., nova).” On that same wiki, Quantum is touted as being able to support advanced network topologies beyond the scope of Nova’s FlatManager or VlanManager; as enabling anyone to “build advanced network services (open and closed source) that plug into Openstack networks”; and as enabling new plugins (open and closed source) that introduce advanced network capabilities.
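In code terms, that abstraction looks something like the following Python sketch. The class and method names are invented for illustration (this is not the actual Quantum API), but the point stands: tenants manipulate logical networks and ports, and Nova-managed vNICs plug into those ports:

    # Hypothetical names, for illustration only; not the real Quantum API.
    class QuantumLikeService:
        def create_network(self, tenant_id, name):
            """Return the ID of a new logical network owned by the tenant."""
            ...

        def create_port(self, network_id):
            """Return the ID of a new attachment point on that network."""
            ...

        def plug_interface(self, network_id, port_id, vnic_id):
            """Attach a Nova-managed vNIC to the logical port."""
            ...

    def attach_vm_to_private_network(api, tenant_id, vnic_id):
        net_id = api.create_network(tenant_id, name="private")
        port_id = api.create_port(net_id)
        api.plug_interface(net_id, port_id, vnic_id)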

Okay, but how does it relate specifically to SDNs? That’s a good question, and James Urquhart has provided a clear and compelling answer, which later was summarized succinctly by Stuart Miniman at Wikibon. What Urquhart wrote actually connects the dots between OpenStack’s Quantum and OpenFlow-enabled SDNs. Here’s a salient excerpt:

“. . . . how does OpenFlow relate to Quantum? It’s simple, really. Quantum is an application-level abstraction of networking that relies on plug-in implementations to map the abstraction(s) to reality. OpenFlow-based networking systems are one possible mechanism to be used by a plug-in to deliver a Quantum abstraction.

OpenFlow itself does not provide a network abstraction; that takes software that implements the protocol. Quantum itself does not talk to switches directly; that takes additional software (in the form of a plug-in). Those software components may be one and the same, or a Quantum plug-in might talk to an OpenFlow-based controller software via an API (like the Open vSwitch API).”
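A plug-in for a service like the one sketched above might then map those logical operations onto an OpenFlow controller. Again, the controller methods below are invented for illustration; the takeaway is that the plug-in, not Quantum itself, is what ends up programming the switches, whether directly or through a controller’s API:

    # Hypothetical plug-in: translates Quantum-style calls into controller calls.
    class OpenFlowBackedPlugin:
        def __init__(self, controller):
            self.controller = controller      # e.g., a client for a controller's API

        def create_network(self, tenant_id, name):
            # A logical network becomes an isolation segment the controller enforces.
            return self.controller.allocate_segment(tenant_id, name)

        def plug_interface(self, segment_id, port_id, vnic_id):
            # Push flow rules so this vNIC's traffic stays within its segment.
            self.controller.install_flows(segment_id, attach_point=vnic_id)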

Cisco’s Contribution

So, that addresses the complementary functionality of OpenStack’s Quantum and OpenFlow, but, as Urquhart noted, OpenFlow is just one mechanism that can be used by a plug-in to deliver a Quantum abstraction. Further to that point, bear in mind that Quantum, as recounted on the OpenStack wiki, can be used to “build advanced network services (open and closed source) that plug into OpenStack networks” and to facilitate new plugins that introduce advanced network capabilities.

Consequently, when it comes to using OpenStack in SDNs, OpenFlow isn’t the only complementary option available. In fact, Cisco is in on the action, using Quantum to “develop API extensions and plug-in drivers for creating virtual network segments on top of Cisco NX-OS and UCS.”

Cisco portrays itself as a major contributor to OpenStack’s Quantum, and the evidence seems to support that assertion. Cisco also has indicated qualified support for OpenFlow, so there’s a chance OpenStack and OpenFlow might intersect on a Cisco roadmap. That said, Cisco’s initial OpenStack-related networking forays relate to its proprietary technologies and existing products.

Citrix, Nicira, Rackspace . . . and Midokura

Other companies have made contributions to OpenStack’s Quantum, too. In a post at Network World, Alan Shimel of The CISO Group cites the involvement of Nicira, Cisco, Citrix, Midokura, and Rackspace. From what Nicira’s Casado has written and said publicly, we know that OpenFlow is in the mix there. It seems to be in the picture at Rackspace, too. Citrix has published blog posts about Quantum, including this one, but I’m not sure where they’re going with it, though XenServer, Open vSwitch, and, yes, OpenFlow are likely to be involved.

Finally, we have Midokura, a Japanese company that has a relatively low profile, at least on this side of the Pacific Ocean. According to its website, it was established early in 2010, and it had just 12 employees at the end of April 2011.

If my currency-conversion calculations (from Japanese yen) are correct, Midokura also had about $1.5 million in capital as of that date. Earlier that same month, the company announced seed funding of about $1.3 million. Investors were Bit-Isle, a Japanese data-center company; NTT Investment Partners, an investment vehicle of Nippon Telegraph & Telephone Corp. (NTT); 1st Holdings, a Japanese ISV that specializes in tools and middleware; and various individual investors, including Allen Miner, CEO of SunBridge Corporation.

On its website, Midokura provides an overview of its MidoNet network-virtualization platform, which is billed as providing a solution to the problem of inflexible and expensive large-scale physical networks that tend to lock service providers into a single vendor.

Virtual Network Model in Cloud Stack

In an article published at TechCrunch this spring, at about the time Midokura announced its seed round, the company claimed to be the only one to have “a true virtual network model” in a cloud stack. The TechCrunch piece also said the MidoNet platform could be integrated “into existing products, as a standalone solution, via a NaaS model, or through Midostack, Midokura’s own cloud (IaaS/EC2) distribution of OpenStack (basically the delivery mechanism for Midonet and the company’s main product).”

Although the company was accepting beta customers last spring, it hasn’t updated its corporate blog since December 2010. Its “Events” page, however, shows signs of life, with Midokura indicating that it will be attending or participating in the grand opening of Rackspace’s San Francisco office on December 1.

Perhaps we’ll get an update then on Midokura’s progress.

Vendors Cite Other Paths to SDNs

Jim Duffy at NetworkWorld wrote an article earlier this month on protocol and API alternatives to OpenFlow as software-defined network (SDN) enablers.

It’s true, of course, that OpenFlow is just one mechanism among many that can be used to bring SDNs to fruition. Many of the alternatives cited by Duffy, who quoted vendors and analysts in his piece, have been around longer than OpenFlow. Accordingly, they have been implemented by network-equipment vendors and deployed in commercial networks by enterprises and service providers. So, you know, they have that going for them, and it is not a paltry consideration.

No Panacea

Among the alternatives to OpenFlow mentioned in that article and in a sidebar companion piece were command-line interfaces (CLIs), Simple Network Management Protocol (SNMP), Extensible Messaging and Presence Protocol (XMPP), Network Configuration Protocol (NETCONF), OpenStack, and virtualization APIs in offerings such as VMware’s vSphere.

I understand that different applications require different approaches to SDNs, and I’m staunchly in the reality-based camp that acknowledges OpenFlow is not a networking panacea. As I’ve noted previously on more than one occasion, the Open Networking Foundation (ONF), steered by a board of directors representing leading cloud-service operators, has designs on OpenFlow that will make it — at least initially — more valuable to so-called “web-scale” service providers than to enterprises. Purveyors of switches also get short shrift from the ONF.

So, no, OpenFlow isn’t all things to all SDNs, but neither are the alternative APIs and protocols cited in the NetworkWorld articles. Reality, even in the realm of SDNs, has more than one manifestation.

OpenFlow Fills the Void

For the most part, however, the alternatives to OpenFlow have legacies on their side. They’re tried and tested, and they have delivered value in real-world deployments. Then again, those legacies are double-edged swords. One might well ask — and I suppose I’m doing so here — if those foregoing alternatives to OpenFlow were so proficient at facilitating SDNs, then why is OpenFlow the recipient of such perceived need and demonstrable momentum today?

Those pre-existing protocols did many things right, but it’s obvious that they were not perceived to address at least some of the requirements and application scenarios where OpenFlow offers such compelling technological and market potential. The market abhors a vacuum, and OpenFlow has been called forth to fill a need.

Old-School Swagger

Relative to OpenFlow, CLIs seem a particularly poor choice for the realization of SDN-type programmability. In the NetworkWorld companion piece, Arista Networks CEO Jayshree Ullal is quoted as follows:

“There’s more than one way to be open. And there’s more than one way to scale. CLIs may not be a programmable interface with a (user interface) we are used to; but it’s the way real men build real networks today.”

Notwithstanding Ullal’s blatant appeal to engineering machismo, evoking a networking reprise of Saturday Night Live’s old “¿Quien Es Mas Macho?” sketches, I doubt that even the most red-blooded networking professionals would opt for CLIs as a means of SDN fulfillment. In qualifying her statement, Ullal seems to concede as much.

Rubbishing Pretensions

Over at Big Switch Networks, Omar Baldonado isn’t shy about rubbishing CLI pretensions to SDN superstardom. Granted, Big Switch Networks isn’t a disinterested party when it comes to OpenFlow, but neither are any of the other networking vendors, whether happily ensconced on the OpenFlow bandwagon or throwing rotten tomatoes at it from alleys along the parade route.

Baldonado probably does more than is necessary to hammer home his case against CLIs for SDNs, but I think the following excerpt, in which he stresses that CLIs were and are meant to be used to configure network devices, summarizes his argument pithily:

“The CLI was not designed for layers of software above it to program the network. I think we’d all agree that if we were to put our software hats on and design such a programming API, we would not come up with a CLI!”
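For the sake of illustration, here is that contrast in miniature, in Python, using entirely hypothetical interfaces: software that drives the network through a CLI ends up screen-scraping text meant for human eyes, whereas a purpose-built API hands back structured data:

    import re

    # Driving a switch through its CLI: parse free-form text written for humans.
    def vlan_exists_via_cli(run_cli_command, vlan_id):
        output = run_cli_command("show vlan brief")     # returns a text blob
        return re.search(rf"^\s*{vlan_id}\s", output, re.MULTILINE) is not None

    # Driving it through a programmatic API (hypothetical client): structured data.
    def vlan_exists_via_api(api_client, vlan_id):
        return any(v["id"] == vlan_id for v in api_client.list_vlans())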

That seems about right, and I don’t think we need to belabor the point further.

Other Options

What about some of the other OpenFlow alternatives, though? As I said, I think OpenFlow is well crafted for the purposes the high priests of the Open Networking Foundation have in store for it, but enterprises are a different matter, at least for the foreseeable future (which is perhaps more foreseeable by some than by others, your humble scribe included).

In a subsequent post — I’d like to say it will be my next one, but something else, doubtless shiny and superficially appealing, will probably intrude to capture my attentions — I’ll be looking at OpenStack’s applicability in an SDN context.

Rackspace’s Bridge Between Clouds

OpenStack has generated plenty of sound and fury during the past several months, and, with sincere apologies to William Shakespeare, there’s evidence to suggest the frenzied activity actually signifies something.

Precisely what it signifies, and how important OpenStack might become, is open to debate, of which there has been no shortage. OpenStack is generally depicted as an open-source cloud operating system, but that might be a generous interpretation. On the OpenStack website, the following definition is offered:

“OpenStack is a global collaboration of developers and cloud computing technologists producing the ubiquitous open source cloud computing platform for public and private clouds. The project aims to deliver solutions for all types of clouds by being simple to implement, massively scalable, and feature rich. The technology consists of a series of interrelated projects delivering various components for a cloud infrastructure solution.”

Just for fun and giggles (yes, that phrase has been modified so as not to offend reader sensibilities), let’s parse that passage, shall we? In the words of the OpenStackers themselves, their project is an open-source cloud-computing platform for public and private clouds, and it is reputedly simple to implement, massively scalable, and feature rich. Notably, it consists of a “series of interrelated projects delivering various components for a cloud infrastructure solution.”

Simple for Some

Given that description, especially the latter reference to interrelated projects spawning various components, one might wonder exactly how “simple” OpenStack is to implement and by whom. That’s a question others have raised, including David Linthicum in a recent piece at InfoWorld. In that article, Linthicum notes that undeniable vendor enthusiasm and burgeoning market momentum accrue to OpenStack — the community now has 138 member companies (and counting), including big-name players such as HP, Dell, Intel, Rackspace, Cisco, Citrix, Brocade, and others — but he also offers the following caveat:

“So should you consider OpenStack as your cloud computing solution? Not on its own. Like many open source projects, it takes a savvy software vendor, or perhaps cloud provider, to create a workable product based on OpenStack. The good news is that many providers are indeed using OpenStack as a foundation for their products, and most of those are working, or will work, just fine.”

Creating Value-Added Services

Meanwhile, taking issue with a recent InfoWorld commentary by Savio Rodrigues — who argued that OpenStack will falter while its open-source counterpart Eucalyptus will thrive — James Urquhart, formerly of Cisco and now VP of product strategy at enStratus, made this observation:

“OpenStack is not a cloud service, per se, but infrastructure automation tuned to cloud-model services, like IaaS, PaaS and SaaS. Install OpenStack, and you don’t get a system that can instantly bill customers, provide a service catalog, etc. That takes additional software.

What OpenStack represents is the commodity element of cloud services: the VM, object, server image and networking management components. Yeah, there is a dashboard to interact with those commodity elements, but it is not a value-add capability in-and-of itself.

What HP, Dell, Cisco, Citrix, Piston, Nebula and others are doing with OpenStack is creating value-add services on top (or within) the commodity automation. Some focus more on “being commodity”, others focus more on value-add, but they are all building on top of the core OpenStack projects.”

New Revenue Stream for Rackspace

All of which brings us, in an admittedly roundabout fashion, to Rackspace’s recent announcement of its Rackspace Cloud Private Edition, a packaged version of OpenStack components that can be used by enterprises for private-cloud deployments. This move makes sense for Rackspace on a couple of levels.

First off, it opens up a new revenue stream for the company. While Rackspace won’t try to make money on the OpenStack software or the reference designs — featuring a strong initial emphasis on Dell servers and Cisco networking gear for now, though bare-bones Open Compute servers are likely to be embraced before long — it will provide value-add, revenue-generating managed services to customers of Rackspace Cloud Private Edition. These managed services will comprise installation of OpenStack updates, analysis of system issues, and assistance with specific questions relating to systems engineering. Some security-related services also will be offered. While the reference architecture and the software are available now, Rackspace’s managed services won’t be available until January.

Building a Bridge

The launch of Rackspace Cloud Private Edition is a diversification initiative for Rackspace, which hitherto has made its money by hosting and managing applications and computing services for others in its own data centers. The OpenStack bundle takes it into the realm of providing managed services in its customers’ data centers.

As mentioned above, this represents a new revenue stream for Rackspace, but it also provides a technological bridge that will allow customers who aren’t ready for multi-tenant public cloud services today to make an easy transition to Rackspace’s data centers at some future date. It’s a smart move, preventing prospective customers from moving to another platform for private cloud deployment, ensuring in the process that said customers don’t enter the orbit of another vendor’s long-term gravitational pull.

The business logic coheres. For each customer engagement, Rackspace gets a payoff today, and potentially a larger one at a later date.

Like OpenFlow, Open Compute Signals Shift in Industry Power

I’ve written quite a bit recently about OpenFlow and the Open Networking Foundation (ONF). For a change of pace, I will focus today on the Open Compute Project.

In many ways, even though OpenFlow deals with networking infrastructure and Open Compute deals with computing infrastructure, they are analogous movements, springing from the same fundamental set of industry dynamics.

Open Compute was introduced formally to the world in April. Its ostensible goal was “to develop servers and data centers following the model traditionally associated with open-source software projects.” That’s true as far as it goes, but it’s only part of the story. The stated goal actually is a means to an end, which is to devise an operational template that allows cloud behemoths such as Facebook to save lots of money on computing infrastructure. It’s all about commoditizing and optimizing the operational efficiency of the hardware encompassed within many of the largest cloud data centers that don’t belong to Google.

Speaking of Google, it is not involved with Open Compute. That’s primarily because Google had been taking a DIY approach to its data centers long before Facebook began working on the blueprint for the Open Compute Project.

Google as DIY Trailblazer

For Google, its ability to develop and deliver its own data-center technologies — spanning computing, networking and storage infrastructure — became a source of competitive advantage. By using off-the-shelf hardware components, Google was able to provide itself with cost- and energy-efficient data-center infrastructure that did exactly what it needed to do — and no more. Moreover, Google no longer had to pay a premium to technology vendors that offered products that weren’t ideally suited to its requirements and that offered extraneous “higher-value” (pricier) features and functionality.

Observing how Google had used its scale and its ample resources to fashion its cost-saving infrastructure, Facebook considered how it might follow suit. The goal at Facebook was to save money, of course, but also to mitigate or perhaps eliminate the infrastructure-based competitive advantage Google had developed. Facebook realized that it could never compete with Google at scale in the infrastructure cost-saving game, so it sought to enlist others in the cause.

And so the Open Compute Project was born. The aim is to have a community of shared interest deliver cost-saving open-hardware innovations that can help Facebook scale its infrastructure at an operational efficiency approximating Google’s. If others besides Facebook benefit, so be it. That’s not a concern.

Collateral Damage

As Facebook seeks to boost its advertising revenue, it is effectively competing with Google. The search giant still derives nearly 97 percent of its revenue from advertising, and its Google+ is intended to distract, if not derail, Facebook’s core business, just as Google Apps is meant to keep Microsoft focused on protecting one of its crown jewels rather than on allocating more corporate resources to search and search advertising.

There’s nothing particularly striking about that. Cloud service providers are expected to compete against each other by developing new revenue-generating services and by achieving new cost-saving operational efficiencies. In that context, the Open Compute Project can be seen, at least in one respect, as Facebook’s open-source bid to level the infrastructure playing field and undercut, as previously noted, what has been a Google competitive advantage.

But there’s another dynamic at play. As the leading cloud providers with their vast data centers increasingly seek to develop their own hardware infrastructure — or to create an open-source model that facilitates its delivery — we will witness some significant collateral damage. Those taking the hit, as is becoming apparent, will be the hardware systems vendors, including HP, IBM, Oracle (Sun), Dell, and even Cisco. That’s only on the computing side of the house, of course. In networking, as software-defined networking (SDN) and OpenFlow find ready embrace among the large cloud shops, Cisco and others will be subject to the loss of revenue and profit margin, though how much and how soon remain to be seen.

Who’s Steering the OCP Ship?

So, who, aside from Facebook, will set the strategic agenda of Open Compute? To answer that question, we need only consult the identities of those named to the Open Compute Project Foundation’s board of directors:

  • Chairman/President – Frank Frankovsky, Director, Technical Operations at Facebook
  • Jason Waxman, General Manager, High Density Computing, Data Center Group, Intel
  • Mark Roenigk, Chief Operating Officer, Rackspace Hosting
  • Andy Bechtolsheim, Industry Guru
  • Don Duet, Managing Director, Goldman Sachs

It’s no shocker that Facebook retains the chairman’s role. Facebook didn’t launch this initiative to have somebody else steer the ship.

Similarly, it’s not a surprise that Intel is involved. Intel benefits regardless of whether cloud shops build their own systems, buy them from HP or Dell, or even get them from a Taiwanese or Chinese ODM.

As for the Rackspace representation, that makes sense, too. Rackspace already has OpenStack, open-source software for private and public clouds, and the Open Compute approach provides a logical hardware complement to that effort.

After that, though, the board membership of the Open Compute Project Foundation gets rather interesting.

Examining Bechtolsheim’s Involvement

First, there’s the intriguing presence of Andy Bechtolsheim. Those who follow the networking industry will know that Andy Bechtolsheim is more than an “industry guru,” whatever that means. Among his many roles, Bechtolsheim serves as the chief development officer and co-founder of Arista Networks, a growing rival to Cisco in low-latency data-center switching, especially at cloud-scale web shops and financial-services companies. It bears repeating that Open Compute’s mandate does not extend to network infrastructure, which is the preserve of the analogous OpenFlow.

Bechtolsheim’s history is replete with successes, as a technologist and as an investor. He was one of the earliest investors in Google, which makes his involvement in Open Compute deliciously ironic.

More recently, he disclosed a seed-stage investment in Nebula, which, as Derrick Harris at GigaOM wrote this summer, has “developed a hardware appliance pre-loaded with customized OpenStack software and Arista networking tools, designed to manage racks of commodity servers as a private cloud.” The reference architectures for the commodity servers comprise Dell’s PowerEdge C Micro Servers and servers that adhere to Open Compute specifications.

We know, then, why Bechtolsheim is on the board. He’s a high-profile presence that I’m sure Open Compute was only too happy to welcome with open arms (pardon the pun), and he also has business interests that would benefit from a furtherance of Open Compute’s agenda. Not to put too fine a point on it, but there’s an Arista and a Nebula dimension to Bechtolsheim’s board role at the Open Compute Project Foundation.

OpenStack Angle for Rackspace, Dell

Interestingly, the board presence of both Bechtolsheim and Rackspace’s Mark Roenigk emphasizes OpenStack considerations, as does Dell’s involvement with Open Compute. Dell doesn’t have a board seat — at least not according to the Open Compute website — but it seems to think it can build a business for solutions based on Open Compute and OpenStack among second-tier purveyors of public-cloud services and among those pursuing large private or hybrid clouds. Both will become key strategic markets for Dell as its SMB installed base migrates applications and spending to the cloud.

Dell notably lost a chunk of server business when Facebook chose to go the DIY route, in conjunction with Taiwanese ODM Quanta Computer, for servers in its data center in Prineville, Oregon. Through its involvement in Open Compute, Dell might be trying to regain lost ground at Facebook, but I suspect that ship has sailed. Instead, Dell probably is attempting to ensure that it prevents or mitigates potential market erosion among smaller service providers and enterprise customers.

What Goldman Sachs Wants

The other intriguing presence on the Open Compute Project Foundation board is Don Duet from Goldman Sachs. Here’s what Duet had to say about his firm’s involvement with Open Compute:

“We build a lot of our own technology, but we are not at the hyperscale of Google or Facebook. We are a mid-scale company with a large global footprint. The work done by the OCP has the potential to lower the TCO [total cost of ownership] and we are extremely interested in that.”

Indeed, that perspective probably worries major server vendors more than anything else about Open Compute. Once Goldman Sachs goes this route, other financial-services firms will be inclined to follow, and nobody knows where the market attrition will end, presuming it ends at all.

Like Facebook, Goldman Sachs saw what Google was doing with its home-brewed, scale-out data-center infrastructure, and wondered how it might achieve similar business benefits. That has to be disconcerting news for major server vendors.

Welcome to the Future

The big takeaway for me, as I absorb these developments, is how the power axis of the industry is shifting. The big systems vendors used to set the agenda, promoting and pushing their products and influencing the influencers so that enterprise buyers kept the vendors’ growth rates on the uptick. Now, though, a combination of factors — widespread data-center virtualization, the rise of cloud computing, a persistent and protracted global economic downturn (which has placed unprecedented emphasis on IT cost containment) — is reshaping the IT universe.

Welcome to the future. Some might like it more than others, but there’s no going back.

Cisco Hedges Virtualization Bets

Pursuant to my post last week on the impressive growth of the Open Virtualization Alliance (OVA), which aims to commoditize VMware’s virtualization advantage by offering a viable open-virtualization alternative to the market leader, I note that Red Hat and five other major players have founded the oVirt Project, established to transform Red Hat Enterprise Virtualization Manager (RHEV-M) into a feature-rich virtualization management platform with well-defined APIs.

Cisco to Host Workshop

According to coverage at The Register, Red Hat has been joined on the oVirt Project by Cisco, IBM, Intel, NetApp and SuSE, all of which have committed to building a KVM-based pluggable hypervisor management framework along with an ecosystem of plug-in partners.

Although Cisco will be hosting an oVirt workshop on November 1-3 at its main campus in San Jose, the article at The Register suggests that the networking giant is the only one of the six founding companies not on the oVirt Project’s governance board. Indeed, the sole reference to Cisco on the oVirt Project website relates to the workshop.

Nonetheless, Cisco’s participation in oVirt warrants attention.

Insurance Policies and Contingency Plans

Realizing that VMware could increasingly eat into the value, and hence the margins, associated with its network infrastructure as cloud computing proliferates, Cisco seems to be devising insurance policies and contingency plans in the event that its relationship with the virtualization market leader becomes, well, more complicated.

To be sure, the oVirt Project isn’t Cisco’s only backup plan. Cisco also is involved with OpenStack, the open-source cloud-computing project that effectively competes with oVirt — and which Red Hat assails as a community “owned” by its co-founder and driving force, Rackspace — and it has announced that its Cisco Nexus 1000V distributed virtual switch and the Cisco Unified Computing System with Virtual Machine Fabric Extender (VM-FEX) capabilities will support the Windows Server Hyper-V hypervisor to be released with Microsoft Windows Server 8.

Increasingly, Cisco is spreading its virtualization bets across the board, though it still has (and makes) most of its money on VMware.

Will Cisco Leave VCE Marriage of Convenience?

Because I am in a generous mood, I will use this post to provide heaping helpings of rumor and speculation, a pairing that can lead to nowhere or to valuable insights. Unfortunately, the tandem usually takes us to the former more than the latter, but let’s see whether we can beat the odds.

The topic today is the Virtual Computing Environment (VCE) Company, a joint venture formed by Cisco and EMC, with investments from VMware and Intel. VCE is intended to accelerate the adoption of converged infrastructure, reducing customer costs related to IT deployment and management while also expediting customers’ time to revenue.

VCE provides fully assembled and tested Vblocks, integrated platforms that include Cisco’s UCS servers and Nexus switches, EMC’s storage, and VMware’s virtualization. Integration services and management software are provided by VCE, which considers the orchestration layer the pièce de résistance.

VCE Layoffs?

As a company, VCE was formed at the beginning of this year. Before then, it existed as a “coalition” of vendors providing reference architectures in conjunction with a professional-services operation called Acadia. Wikibon’s Stuart Miniman provided a commendable summary of the evolution of VCE in January.

If you look at official pronouncements from EMC and — to a lesser extent — Cisco, you might think that all is well behind the corporate facade of VCE. After all, sales are up, the business continues to ramp, the value proposition is cogent, and the dour macroeconomic picture would seem to argue for further adoption of solutions, such as VCE, that have the potential to deliver reductions in capital and operating expenditures.

What, then, are we to make of rumored layoffs at VCE? Nobody from Cisco or EMC has confirmed the rumors, but the scuttlebutt has been coming steadily enough to suggest that there’s fire behind the smoke. If there’s substance to the rumors, what might have started the fire?

Second Thoughts for Cisco?

Well, now that I’ve given you the rumor, I’ll give you some speculation. It could be — and you’ll notice that I’ve already qualified my position — that Cisco is having second thoughts about VCE. EMC contributes more than Cisco does to VCE and its ownership stake is commensurately greater, as Miniman explains in a post today at Wikibon:

“According to company 10Q forms, Cisco (May ’11) owns approximately 35% outstanding equity of VCE with $100M invested and EMC (Aug ’11) owns approximately 58% outstanding equity of VCE with $173.5M invested. The companies are not disclosing revenue of the venture, except that it passed $100M in revenue in about 6 months and as of December 2010 had 65 “major customers” and was growing that number rapidly. In July 2011, EMC reported that VCE YTD revenue had surpassed all of 2010 revenue and CEO Joe Tucci stated that the companies “expect Vblock sales to hit the $1 billion run rate mark in the next several quarters.” EMC sees the VCE investment as strategic to increasing its importance (and revenue) in a changing IT landscape.”

Indeed, I agree that EMC views its VCE investment through a strategic prism. What I wonder about is Cisco’s long-term commitment to VCE.

Marriage of Convenience

There already have been rumblings that Cisco isn’t pleased with its cut of VCE profits. In this context, it’s important to remember how VCE is structured. The revenue it generates flows directly to its parent companies; it doesn’t keep any of it. Thus, VCE is built purely as a convenient integration and delivery vehicle, not as a standalone business that will pursue its own exit strategy.

Relationships of convenience, such as the one that spawned VCE, often do not prove particularly durable. As long as the interests of the constituent partners remain aligned, VCE will remain unchanged. If interests diverge, though, as they might be doing now, all bets are off. When the convenient becomes inconvenient for one or more of the partners, it’s over.

It’s salient to me that Cisco is playing second fiddle to EMC in VCE. In its glory days, Cisco didn’t play second fiddle to anybody.

In the not-too-distant past, Cisco CEO John Chambers had the run of the corporate house. Nobody questioned his strategic acuity, and he and his team were allowed to do as they pleased. Since then, the composition of his team has changed — many of Cisco’s top executives of just a few short years ago are no longer with the company — and several notable investors and analysts, and perhaps one or two board members, have begun to wonder whether Chambers can author the prescription that will cure Cisco’s ills. Doubts creep into the minds of investors after a decade of stock stagnancy, reduced growth horizons, a failed foray into consumer markets, and slow but steady market-share erosion.

Alternatives to Playing Second Fiddle

Meanwhile, Cisco has another storage partner, NetApp. The two companies also have combined to deliver converged infrastructure. Cisco says the relationships involving VCE’s Vblocks and NetApp’s FlexPods don’t see much channel conflict and that they both work to increase Cisco’s UCS footprint.

That’s likely true. It’s also likely that Cisco will never control VCE. EMC holds the upper hand now, and that probably won’t change.

Once upon a time, Cisco might have been able to change that dynamic. Back then, it could have acquired EMC. Now, though? I wouldn’t bet on it. EMC’s market capitalization is up to nearly $48 billion and Cisco’s stands at less than $88 billion. Even if Cisco repatriated all of its offshore cash hoard, that money still wouldn’t be enough to buy EMC. In fact, when one considers the premium that would have to be paid in such a deal, Cisco would fall well short of the mark. It would have to do a cash-and-stock deal, and that would go over like the Hindenburg with EMC shareholders.

So, if Cisco is to get more profit from sales of converged infrastructure, it has to explore other options. NetApp is definitely one, and some logic behind a potential acquisition was explored earlier this year in a piece by Derrick Harris at GigaOm. In that post, Harris also posited that Cisco consider an acquisition of Citrix, primarily for its virtualization technologies. If Cisco acquired NetApp and Citrix, it would be able to offer a complete set of converged infrastructure, without the assistance of EMC or its majority-owned VMware. It’s just the sort of bold move that might put Chambers back in the good graces of investors and analysts.

Irreconcilable Differences 

Could it be done? The math seems plausible. Before it announced its latest quarterly results, Cisco had $43.4 billion in cash, 89 percent of which was overseas. Supposing that Cisco could repatriate its foreign cash hoard without taking too much of a tax hit — Cisco and others are campaigning hard for a repatriation tax holiday — Cisco would be in a position to make all-cash acquisitions of Citrix (with an $11.5 billion market capitalization) and NetApp (with a $16.4 billion market capitalization). Even with premiums factored into the equation, the deals could be done overwhelmingly, if not exclusively, with cash.
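A quick back-of-the-envelope check, in Python, using the figures above (the 30 percent takeover premium is purely my assumption for illustration) suggests the arithmetic holds up:

    # Rough sanity check; all figures in billions of dollars.
    cisco_cash  = 43.4
    cash_abroad = cisco_cash * 0.89            # roughly 38.6 held overseas
    citrix_cap  = 11.5
    netapp_cap  = 16.4
    premium     = 0.30                         # assumed takeover premium

    deal_cost = (citrix_cap + netapp_cap) * (1 + premium)
    print(round(deal_cost, 1))                 # about 36.3, still under the cash pile
    print(round(cisco_cash - deal_cost, 1))    # about 7.1 left over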

I know the above scenario is not without risk to Cisco. But I also know that the status quo isn’t going to get Cisco to where it needs to be in converged infrastructure. Something has to give. The VCE open marriage of convenience could be destined to founder on the rocks of irreconcilable differences.

Prescribing Dell’s Next Networking Move

Now that it has announced its acquisition of Force10 Networks, Dell is poised to make its next networking move.

Should that be another acquisition? No, I don’t think so. Dell needs time to integrate and assimilate Force10 before it considers another networking acquisition. Indeed, I think integration, not just of Force10, is the key to understanding what Dell ought to do next.

One problem, certainly for some of Dell’s biggest data-center customers, is that networking has been its own silo, relatively unaffected by the broad sweep of virtualization. While server hardware has been virtualized comprehensively — resulting in significant cost savings for data centers — and storage is now following suit, switches and routers have remained vertically integrated, comparatively proprietary boxes, largely insulated from the winds of change.

Dell and OpenStack

Perhaps because it is so eager to win cloud business — seeing the cloud not only as the next big thing but also as the ultimate destination for many SMB applications — Dell has been extremely solicitous in attempting to address the requirements flagged by the likes of Rackspace, Microsoft, Facebook, and Google. Dell sees these customers as big public-cloud purveyors (which they are), but also as early adopters of data-center solutions that could be offered subsequently to other cloud-oriented service providers and large enterprises.

That’s why Dell has been such a big proponent of OpenStack. A longtime member of the OpenStack community, Dell recently introduced the Dell OpenStack Cloud Solution, which includes the OpenStack cloud operating system, Dell PowerEdge C servers, the Dell-developed “Crowbar” OpenStack installer, plus services from Dell and Rackspace Cloud Builders.

The rollout of the Dell OpenStack Cloud Solution is intended to make it easy for cloud purveyors and large enterprises to adopt and deploy open-source infrastructure as a service (IaaS).

Promise of OpenFlow

Interestingly, many of the same cloud and service providers that see promise in heavily virtualized, open-source IaaS technologies, as represented by OpenStack, also see considerable potential in OpenFlow, a protocol that allows a switch data plane to be programmed directly by a separate flow controller. Until now, the data plane and the control plane have existed in the same switch hardware. OpenFlow removes control-plane responsibilities from the switch and places them in software that can run elsewhere, presumably on an industry-standard server (or on a cluster of servers).

OpenFlow is one means of realizing software-defined networking, which holds the promise of making network infrastructure programmable and virtualized.
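In rough code terms, the split looks like the sketch below. The objects and method names are hypothetical (this is not any real controller framework); the point is simply that the switch keeps only a flow table of match/action entries, while forwarding decisions are made by controller software running on ordinary servers:

    from dataclasses import dataclass

    @dataclass
    class FlowEntry:
        match: dict       # e.g., {"in_port": 1, "dl_dst": "aa:bb:cc:dd:ee:ff"}
        actions: list     # e.g., ["output:2"]

    # Hypothetical controller: the control plane, running off-switch.
    class Controller:
        def __init__(self, switches):
            self.switches = switches          # data-plane devices it programs

        def on_packet_in(self, switch, pkt):
            # The forwarding decision is made in software, then pushed down as a
            # flow rule the switch's data plane applies to subsequent packets.
            out_port = self.pick_port(switch, pkt)
            entry = FlowEntry(match={"in_port": pkt["in_port"],
                                     "dl_dst": pkt["dl_dst"]},
                              actions=["output:" + str(out_port)])
            switch.install(entry)

        def pick_port(self, switch, pkt):
            return 2                          # placeholder forwarding decision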

Some vendors already have perceived merit in the data-center combination of OpenStack and OpenFlow. Earlier this year in a blog post, Brocade Communications’ CTO Dave Stevens and Ken Cheng, VP of service provider products, wrote the following about the joint value of OpenStack and OpenFlow:

“There are now two promising industry efforts that go a long way in promoting industry-wide interoperability and open architectures for both virtualization and cloud computing. Specifically, they are the OpenFlow initiative driven by the Open Networking Foundation (ONF), which is hosted by Stanford University with 17 member companies currently, and the OpenStack cloud software project backed by a consortium of more than 50 private and public sector organisations.

We won’t belabor the charters and goals of either initiative as that information is widely available and listed in detail on both Web sites. The key idea we want to convey from Brocade’s point of view is that OpenFlow and OpenStack should not be regarded as discrete, unrelated projects. Indeed, we view them as three legs of a stool with OpenFlow serving as the networking leg while OpenStack serves as the other two legs through its compute and object storage software projects. Only by working together can these industry initiatives truly enable customers to virtualize their physical network assets and migrate smoothly to open, highly interoperable cloud architectures.”

Much in Common

Indeed, the architectural, philosophical, and technological foundations of OpenFlow and OpenStack have much in common. They also deliver similar business benefits for cloud shops and large data centers, which could run their programmable, virtualized infrastructure (servers, storage, and networking) on industry-standard hardware.

Large cloud providers are understandably motivated to want to see the potential of OpenFlow and OpenStack come to fruition. Both provide the promise of substantial cost savings, not only capex but also opex. There’s more to both than cost savings, of course, but the cost savings alone could provide ROI justification for many prospective customers.

That’s something Dell, now the proud owner of Force10 Networks, ought to be considering. Dell has been quick to point out that its networking acquisition now gives it the converged infrastructure for data centers that Cisco and HP already had. Still, even if we accept that argument at face value, Dell is at a disadvantage facing those vendors in a proprietary game on a level playing field. Both Cisco and HP have bigger, stronger networking assets, and both have more marketing, sales, and technological resources at their disposal. Unless it changes the game, Dell has little chance of winning.

Changing the Game

So, how can Dell change the game? It could become the converged infrastructure player that wholeheartedly embraces OpenStack and OpenFlow, following the lead its data-center customers have provided while also leading them to new possibilities.

I realize that the braintrust at Force10 recently took a wait-and-see stance toward OpenFlow. However, now that Dell owns Force10, that position should be reviewed in a new, larger context.

Given that Dell reportedly passed over Brocade on its way to the altar with Force10, it would be ironic if Dell were to execute on an OpenStack-OpenFlow vision that Brocade eloquently articulated.