Monthly Archives: November 2011

Revisiting the Nicira Break-In

While doing research on my last post, I spent some time on Martin Casado’s thought-provoking blog, Network Heresy. He doesn’t generate posts prolifically — he’s preoccupied with other matters, including his job as chief technology officer at Nicira Networks — but his commentaries typically are detailed, illuminating, intelligent, and invariably honest.

One of his relatively recent posts, Origins and Evolution of OpenFlow/SDN, features a video of his keynote at the Open Networking Summit, where, as the title of the blog post advertises, he explained how SDNs and OpenFlow have advanced. His salient point is that it’s the community,  not the technology, that makes the SDN movement so meaningful.  The technology, he believes, will progress as it should, but the key to SDN’s success will be the capacity of the varied community of interests to cohere and thrive. It’s a valid point.

Serious Work

That said, that’s not the only thing that caught my interest in the keynote video. Early in that presentation, speaking about how he and others got involved with SDNs and OpenFlow, he talks about his professional past. I quote directly:

“Back in 2002-2003, post-9/11, I used to work for the feds. I worked in the intelligence sector. The team I worked with, we were responsible for auditing and securing some of the most sensitive networks in the United States. This is pretty serious stuff. Literally, if these guys got broken into, people died . . . We took our jobs pretty seriously.”

It doesn’t surprise me that OpenFlow-enabled SDNs might have had at least some of their roots in the intelligence world. Many technologies have been conceived and cultivated in the shadowy realms of defense and intelligence agencies. The Internet itself grew from the Advanced Research Projects Agency Network (ARPANET),  which was funded by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense.

Old-School Break-In

When I heard those words, however, I was reminded of the armed break-in that Nicira suffered last spring, first reported in a Newsweek cover story on the so-called “Code War” and cyber-espionage published in July.  What was striking about the breach at Nicira, both in and of itself and within the context of the Newsweek article, is that it was a physical, old-school break-in, not a cyber attack. An armed burglar wearing a ski mask broke into Nicira Networks and made his way purposefully to the desk of “one of the company’s top engineers.” The perpetrator then grabbed a computer, apparently containing source code, and took flight.

Palo Alto constabulary portrayed the crime as a bog-standard smash and grab, but “people close to the company” and national-intelligence investigators suspect it was a professional job executed by someone with ties to Russia or China. The objective, as one might guess, was to purloin intellectual property.

The involvement of national-intelligence investigators in the case served as a red flag signaling that the crime was not committed by a crank-addled junkie hoping to sell a stolen computer. There’s a bigger story, and Newsweek touched on it before heading off in a different direction to explore cyber espionage, hack attacks, and the code-warrior industry.

Nicira’s Stealth Mode

Last month, the New York Times mentioned the Nicira break-in during the course of an article titled “What Is Nicira Up To?”.

Indeed, that is a fair question to ask. There still isn’t much meat on the bones of Nicira’s website, though we know the company is developing a network-virtualization platform that decouples network services from the underlying hardware, “like a server hypervisor separates physical servers from virtual machines.”

It’s essentially software-defined networking (SDN), with OpenFlow in the mix, though Nicira has assiduously refrained from using those words in its marketing messages. On the other hand, as we’ve already seen, CTO Martin Casado isn’t shy about invoking the SDN acronym, or providing learned expositions on its underlying technologies, when addressing technical audiences.

Mystery Remains 

Let’s return to the break-in, however, because the New York Times provided some additional information. We learn that a significant amount of Nicira’s intellectual property was on the purloined computer, though CEO Steven Mullaney said it was “very early stuff, nothing like what we’ve got now.”

Still, the supposition remained that the thief was an agent of a foreign government. We also learned more about Casado’s professional background and about the genesis of the technology that eventually would be developed further and commercialized at Nicira.  Casado’s government work took place at Lawrence Livermore National Laboratory, where he was asked by U.S. intelligence agencies to design a global network that would dynamically change its levels of security and authorization.

We might never discover who broke into Nicira last May. As the Newsweek story recounted, government investigators have advised those familiar with the incident not to discuss it. Questions remain, but the mystery is likely to remain unsolved, at least publicly.

Exploring OpenStack’s SDN Connections

Pursuant to my last post, I will now examine the role of OpenStack in the context of software-defined networking (SDN). As you will recall, it was one of the alternative SDN enabling technologies mentioned in a recent article and sidebar at Network World.

First, though, I want to note that, contrary to the concerns I expressed in the preceding post, I wasn’t distracted by a shiny object before getting around to writing this installment. I feared I would be, but my powers of concentration and focus held sway. It’s a small victory, but I’ll take it.

Road to Quantum

Now, on to OpenStack, which I’ve written about previously, though admittedly not in the context of SDNs. As for how networking evolved into a distinct facet of OpenStack, Martin Casado, chief technology officer at Nicira, offers a thorough narrative at the Open vSwitch website.

Casado begins by explaining that OpenStack is a “cloud management system (CMS) that orchestrates compute, storage, and networking to provide a platform for building on demand services such as IaaS.” He notes that OpenStack’s primary components were OpenStack Compute (Nova), OpenStack Storage (Swift), and OpenStack Image Service (Glance), and he also provides an overview of their respective roles.

Then he asks, as one might, what about networking? At this point, I will quote directly from his Open vSwitch post:

“Noticeably absent from the list of major subcomponents within OpenStack is networking. The historical reason for this is that networking was originally designed as a part of Nova which supported two networking models:

● Flat Networking – A single global network for workloads hosted in an OpenStack Cloud.

● VLAN based Networking – A network segmentation mechanism that leverages existing VLAN technology to provide each OpenStack tenant, its own private network.

While these models have worked well thus far, and are very reasonable approaches to networking in the cloud, not treating networking as a first class citizen (like compute and storage) reduces the modularity of the architecture.”

As a result of Nova’s networking shortcomings, which Casado enumerates in detail, a standalone networking component, Quantum, was developed.
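In the early Quantum API, networks and ports were exposed as tenant-scoped REST resources, in contrast to Nova’s baked-in flat and VLAN models. The sketch below shows roughly what a client request looks like; the endpoint paths and payload shapes are my own illustrative assumptions, not an authoritative Quantum client.

```python
import json

# Rough sketch of early Quantum-style REST requests: networks and ports as
# tenant-scoped resources. Paths and payload shapes here are assumptions
# for illustration only.

API_ROOT = "/v1.1/tenants/{tenant_id}"

def create_network_request(tenant_id, label):
    """Build the (method, path, body) triple for creating a tenant network."""
    path = API_ROOT.format(tenant_id=tenant_id) + "/networks"
    body = json.dumps({"network": {"name": label}})
    return ("POST", path, body)

def create_port_request(tenant_id, network_id, state="ACTIVE"):
    """Build the request for adding a port (a vNIC attachment point) to a network."""
    path = API_ROOT.format(tenant_id=tenant_id) + f"/networks/{network_id}/ports"
    body = json.dumps({"port": {"state": state}})
    return ("POST", path, body)

method, path, body = create_network_request("acme", "private-net")
print(method, path)  # POST /v1.1/tenants/acme/networks
```

The point is the shape of the thing: the tenant owns its own logical networks as first-class resources, rather than inheriting one of two hard-wired models from the compute service.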

Network Connectivity as a Service

The OpenStack wiki defines Quantum as “an incubated OpenStack project to provide ‘network connectivity as a service’ between interface devices (e.g., vNICs) managed by other Openstack services (e.g., nova).” On that same wiki, Quantum is touted as being able to support advanced network topologies beyond the scope of Nova’s FlatManager or VlanManager; as enabling anyone to “build advanced network services (open and closed source) that plug into Openstack networks”; and as enabling new plugins (open and closed source) that introduce advanced network capabilities.

Okay, but how does it relate specifically to SDNs? That’s a good question, and James Urquhart has provided a clear and compelling answer, which later was summarized succinctly by Stuart Miniman at Wikibon. What Urquhart wrote actually connects the dots between OpenStack’s Quantum and OpenFlow-enabled SDNs. Here’s a salient excerpt:

“. . . . how does OpenFlow relate to Quantum? It’s simple, really. Quantum is an application-level abstraction of networking that relies on plug-in implementations to map the abstraction(s) to reality. OpenFlow-based networking systems are one possible mechanism to be used by a plug-in to deliver a Quantum abstraction.

OpenFlow itself does not provide a network abstraction; that takes software that implements the protocol. Quantum itself does not talk to switches directly; that takes additional software (in the form of a plug-in). Those software components may be one and the same, or a Quantum plug-in might talk to an OpenFlow-based controller software via an API (like the Open vSwitch API).”
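The division of labor Urquhart describes, Quantum as the abstraction and a plug-in as the implementation, can be sketched as a simple interface. The class and method names below are illustrative inventions, not Quantum’s actual plugin contract.

```python
# Sketch of the split Urquhart describes: Quantum defines an abstract network
# API; a plug-in maps those calls onto a concrete backend, which may (or may
# not) be an OpenFlow controller. All names here are illustrative.
from abc import ABC, abstractmethod

class QuantumPlugin(ABC):
    """The abstraction layer: what Quantum asks of any backend."""
    @abstractmethod
    def create_network(self, tenant_id: str, name: str) -> str: ...

class OpenFlowPlugin(QuantumPlugin):
    """One possible implementation: delegate to an OpenFlow controller API."""
    def __init__(self, controller):
        self.controller = controller  # e.g., a client for an OVS-style controller

    def create_network(self, tenant_id, name):
        # Map the logical network onto flow-table state via the controller.
        return self.controller.provision_logical_network(tenant_id, name)

class FakeController:
    """Stand-in for controller software that speaks OpenFlow southbound."""
    def provision_logical_network(self, tenant_id, name):
        return f"net-{tenant_id}-{name}"

plugin: QuantumPlugin = OpenFlowPlugin(FakeController())
print(plugin.create_network("acme", "web-tier"))  # net-acme-web-tier
```

A plug-in backed by VLAN provisioning, or by any other mechanism, would satisfy the same interface, which is exactly why OpenFlow is one option among several here rather than a requirement.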

Cisco’s Contribution

So, that addresses the complementary functionality of OpenStack’s Quantum and OpenFlow, but, as Urquhart noted, OpenFlow is just one mechanism that can be used by a plug-in to deliver a Quantum abstraction. Further to that point, bear in mind that Quantum, as recounted on the OpenStack wiki, can be used  to “build advanced network services (open and closed source) that plug into OpenStack networks” and to facilitate new plugins that introduce advanced network capabilities.

Consequently, when it comes to using OpenStack in SDNs, OpenFlow isn’t the only complementary option available. In fact, Cisco is in on the action, using Quantum to “develop API extensions and plug-in drivers for creating virtual network segments on top of Cisco NX-OS and UCS.”

Cisco portrays itself as a major contributor to OpenStack’s Quantum, and the evidence seems to support that assertion. Cisco also has indicated qualified support for OpenFlow, so there’s a chance OpenStack and OpenFlow might intersect on a Cisco roadmap. That said, Cisco’s initial OpenStack-related networking forays relate to its proprietary technologies and existing products.

Citrix, Nicira, Rackspace . . . and Midokura

Other companies have made contributions to OpenStack’s Quantum, too. In a post at Network World, Alan Shimel of The CISO Group cites the involvement of Nicira, Cisco, Citrix, Midokura, and Rackspace. From what Nicira’s Casado has written and said publicly, we know that OpenFlow is in the mix there. It seems to be in the picture at Rackspace, too. Citrix has published blog posts about Quantum, including this one, but I’m not sure where the company is going with it, though XenServer, Open vSwitch, and, yes, OpenFlow are likely to be involved.

Finally, we have Midokura, a Japanese company that has a relatively low profile, at least on this side of the Pacific Ocean. According to its website, it was established early in 2010, and it had just 12 employees at the end of April 2011.

If my currency-conversion calculations (from Japanese yen) are correct, Midokura also had about $1.5 million in capital as of that date. Earlier that same month, the company announced seed funding of about $1.3 million. Investors were Bit-Isle, a Japanese data-center company; NTT Investment Partners, an investment vehicle of  Nippon Telegraph & Telephone Corp. (NTT); 1st Holdings, a Japanese ISV that specializes in tools and middleware; and various individual investors, including Allen Miner, CEO of SunBridge Corporation.

On its website, Midokura provides an overview of its MidoNet network-virtualization platform, which is billed as providing a solution to the problem of inflexible and expensive large-scale physical networks that tend to lock service providers into a single vendor.

Virtual Network Model in Cloud Stack

In an article published  at TechCrunch this spring, at about the time Midokura announced its seed round, the company claimed to be the only one to have “a true virtual network model” in a cloud stack. The TechCrunch piece also said the MidoNet platform could be integrated “into existing products, as a standalone solution, via a NaaS model, or through Midostack, Midokura’s own cloud (IaaS/EC2) distribution of OpenStack (basically the delivery mechanism for Midonet and the company’s main product).”

Although the company was accepting beta customers last spring, it hasn’t updated its corporate blog since December 2010. Its “Events” page, however, shows signs of life, with Midokura indicating that it will be attending or participating in the grand opening of Rackspace’s San Francisco office on December 1.

Perhaps we’ll get an update then on Midokura’s progress.

Vendors Cite Other Paths to SDNs

Jim Duffy at NetworkWorld wrote an article earlier this month on protocol and API alternatives to OpenFlow as software-defined network (SDN) enablers.

It’s true, of course, that OpenFlow is just one mechanism among many that can be used to bring SDNs to fruition. Many of the alternatives cited by Duffy, who quoted vendors and analysts in his piece, have been around longer than OpenFlow. Accordingly, they have been implemented by network-equipment vendors and deployed in commercial networks by enterprises and service providers. So, you know, they have that going for them, and it is not a paltry consideration.

No Panacea

Among the alternatives to OpenFlow mentioned in that article and in a sidebar companion piece were command-line interfaces (CLIs), Simple Network Management Protocol (SNMP), Extensible Messaging and Presence Protocol (XMPP), Network Configuration Protocol (NETCONF), OpenStack, and virtualization APIs in offerings such as VMware’s vSphere.
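Of the protocols listed above, NETCONF is the most directly comparable, since it was expressly designed for programmatic configuration. As a minimal sketch, here is how a NETCONF `<edit-config>` RPC body can be assembled; the base-1.0 namespace is standard, but the `<interface>` configuration leaf is a hypothetical data model invented for illustration.

```python
# Minimal sketch of a NETCONF <edit-config> RPC body. The base-1.0 namespace
# is the standard one; the <interface> config payload is a hypothetical
# vendor data model used only for illustration.
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

def edit_config(target, config_xml):
    """Wrap a config fragment in an <edit-config> RPC against a datastore."""
    rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": "101"})
    edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
    tgt = ET.SubElement(edit, f"{{{NC}}}target")
    ET.SubElement(tgt, f"{{{NC}}}{target}")  # e.g., <running/> or <candidate/>
    cfg = ET.SubElement(edit, f"{{{NC}}}config")
    cfg.append(config_xml)
    return ET.tostring(rpc, encoding="unicode")

# Hypothetical config fragment: set an interface MTU.
iface = ET.Element("interface")
ET.SubElement(iface, "name").text = "eth0"
ET.SubElement(iface, "mtu").text = "9000"

print(edit_config("candidate", iface))
```

Note the contrast with OpenFlow: NETCONF manipulates device *configuration* over a session, whereas OpenFlow programs forwarding state directly, which is part of why the two keep being proposed for different layers of the SDN problem.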

I understand that different applications require different approaches to SDNs, and I’m staunchly in the reality-based camp that acknowledges OpenFlow is not a networking panacea. As I’ve noted previously on more than one occasion, the Open Networking Foundation (ONF), steered by a board of directors representing leading cloud-service operators, has designs on OpenFlow that will make it — at least initially — more valuable to so-called “web-scale” service providers than to enterprises. Purveyors of switches also get short shrift from the ONF.

So, no, OpenFlow isn’t all things to all SDNs, but neither are the alternative APIs and protocols cited in the NetworkWorld articles. Reality, even in the realm of SDNs, has more than one manifestation.

OpenFlow Fills the Void

For the most part, however, the alternatives to OpenFlow have legacies on their side. They’re tried and tested, and they have delivered value in real-world deployments. Then again, those legacies are double-edged swords. One might well ask — and I suppose I’m doing so here — if those alternatives to OpenFlow were so proficient at facilitating SDNs, why does OpenFlow command such perceived need and demonstrable momentum today?

Those pre-existing protocols did many things right, but it’s obvious that they were not perceived to address at least some of the requirements and application scenarios where OpenFlow offers such compelling technological and market potential. The market abhors a vacuum, and OpenFlow has been called forth to fill a need.

Old-School Swagger

Relative to OpenFlow, CLIs seem a particularly poor choice for the realization of SDN-type programmability. In the NetworkWorld companion piece, Arista Networks CEO Jayshree Ullal is quoted as follows:

“There’s more than one way to be open. And there’s more than one way to scale. CLIs may not be a programmable interface with a (user interface) we are used to; but it’s the way real men build real networks today.”

Notwithstanding Ullal’s blatant appeal to engineering machismo, evoking a networking reprise of Saturday Night Live’s old “¿Quién Es Más Macho?” sketches, I doubt that even the most red-blooded networking professionals would opt for CLIs as a means of SDN fulfillment. In qualifying her statement, Ullal seems to concede as much.

Rubbishing Pretensions

Over at Big Switch Networks, Omar Baldonado isn’t shy about rubbishing CLI pretensions to SDN superstardom. Granted, Big Switch Networks isn’t a disinterested party when it comes to OpenFlow, but neither are any of the other networking vendors, whether happily ensconced on the OpenFlow bandwagon or throwing rotten tomatoes at it from alleys along the parade route.

Baldonado probably does more than is necessary to hammer home his case against CLIs for SDNs, but I think the following excerpt, in which he stresses that CLIs were and are meant to be used to configure network devices, summarizes his argument pithily:

“The CLI was not designed for layers of software above it to program the network. I think we’d all agree that if we were to put our software hats on and design such a programming API, we would not come up with a CLI!”

That seems about right, and I don’t think we need belabor the point further.
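Baldonado’s point is easy to make concrete. Software driving a network through a CLI must screen-scrape text meant for human eyes, and any cosmetic change to that output breaks the parser; a real programming API hands back structured data with no parsing step at all. The `show vlan`-style output below is invented for illustration.

```python
# Why a CLI makes a poor programmatic interface: software above it must
# screen-scrape human-oriented text, and any cosmetic change breaks the
# parser. The 'show vlan'-style output below is invented for illustration.
import re

cli_output = """VLAN Name        Status
---- ----------- ------
10   web         active
20   db          active
"""

def vlans_from_cli(text):
    """Fragile: depends on column layout, headers, and separator lines."""
    rows = []
    for line in text.splitlines():
        m = re.match(r"^(\d+)\s+(\S+)\s+(\S+)$", line)
        if m:
            rows.append({"id": int(m.group(1)),
                         "name": m.group(2),
                         "status": m.group(3)})
    return rows

# What a purpose-built programming API would return directly: structured
# data, no scraping step to break.
api_response = [{"id": 10, "name": "web", "status": "active"},
                {"id": 20, "name": "db", "status": "active"}]

assert vlans_from_cli(cli_output) == api_response
print("CLI scrape matches API response -- until the CLI format changes")
```

The two approaches agree today; they stop agreeing the moment a firmware update widens a column or renames a header, which is exactly the brittleness Baldonado is gesturing at.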

Other Options

What about some of the other OpenFlow alternatives, though? As I said, I think OpenFlow is well crafted for the purposes the high priests of the Open Networking Foundation have in store for it, but enterprises are a different matter, at least for the foreseeable future (which is perhaps more foreseeable by some than by others, your humble scribe included).

In a subsequent post — I’d like to say it will be my next one, but something else, doubtless shiny and superficially appealing, will probably intrude to capture my attentions — I’ll be looking at OpenStack’s applicability in an SDN context.

Alcatel-Lucent Banks on Carrier Clouds

Late last week, I had the opportunity to speak with David Fratura, Alcatel-Lucent’s senior director of strategy for cloud solutions, about his company’s new foray into cloud computing, CloudBand, which is designed to give Alcatel-Lucent’s carrier customers a competitive edge in delivering cloud services to their enterprise clientele and — perhaps to a lesser extent — to consumers, too.

Like so many others in the telecommunications-equipment market, Alcatel-Lucent is under pressure on multiple fronts. In a protracted period of global economic uncertainty, carriers are understandably circumspect about their capital spending, focusing investments primarily on areas that will result in near-term reduced operating costs or similarly immediate new service revenues. Carriers are reluctant to spend much in hopeful anticipation of future growth for existing services; instead, they’re preoccupied with squeezing more value from the infrastructure they already own or with finding entirely new streams of service-based revenue growth, preferably at the lowest-possible cost of market entry.

Big Stakes, Complicated Game

Complicating the situation for Alcatel-Lucent — as well as for Nokia Siemens Networks and longtime wireless-gear market leader Ericsson — are the steady competitive advances being made into both developed and developing markets by Chinese telco-equipment vendors Huawei and ZTE. That competitive dynamic is putting downward pressure on hardware margins for the old-guard vendors, compelling them to look to software and services for diversification, differentiation, and future growth.

For its part, Alcatel-Lucent has sought to establish itself as a vendor that can help its operator customers derive new revenue from mobile software and services and, increasingly, from cloud computing.

Alcatel-Lucent CEO Ben Verwaayen is banking on those initiatives to save his job as well as to revive the company’s growth profile. Word from sources close to the company, as reported first by the Wall Street Journal, is that the boardroom knives are out for the man in Alcatel’s big chair, though Alcatel-Lucent chairman Philippe Camus felt compelled to respond to the intensifying scuttlebutt by giving Verwaayen a qualified vote of confidence.

Looking Up 

With Verwaayen counting on growth markets such as cloud computing to pull him and Alcatel-Lucent out of the line of fire, CloudBand can be seen as something more than the standard product announcement. There’s a bigger context, encompassing not only Alcatel-Lucent’s ambitions but also the evolution of the broader telecommunications industry.

CloudBand, according to a company-issued press release, is designed to deliver a “foundation for a new class of ‘carrier cloud’ services that will enable communications service providers to bring the benefits of the cloud to their own networks and business operations, and put them in an ideal position to offer a new range of high-performance cloud services to enterprises and consumers.”

In a world where everybody is trying to contribute to or be the cloud, that’s a tall order, so let’s take a look at the architecture Alcatel-Lucent has brought forward to create its “carrier cloud.”

CloudBand Architecture

CloudBand comprises two distinct elements. First up is the CloudBand Management System, derived from research work at the venerable Bell Labs, which delivers orchestration and optimization of services between the communications network and the cloud. The second element is the CloudBand Node, which provides computing, storage, and networking hardware and associated software to host a wide range of cloud services. Alcatel-Lucent’s “secret sauce,” and hence its potential to draw meaningful long-term business from its installed base of carrier customers, is the former, but the latter also is of interest.

Hewlett-Packard, as part of a ten-year strategic global agreement with Alcatel-Lucent, will provide converged data-center infrastructure for the CloudBand nodes, including compute, storage, and networking technologies. While Alcatel-Lucent has said it can accommodate gear from other vendors in the nodes, HP’s offerings will be positioned as the default option in the CloudBand nodes. Alcatel-Lucent’s relationship with HP was intended to help “bridge the gap between the data center and the network,” and the CloudBand node definitely fits within that mandate.

Virtualized Network Elements in “Carrier Clouds”

By enabling operators to shift to a cloud-based delivery model, CloudBand is intended to help service providers market and deliver new services to customers quickly, with improved quality of service and at lower cost. Carriers can use CloudBand to virtualize their network elements, converting them to software and running them on demand in their “carrier clouds.” As a result, service providers  presumably will derive improved utilization from their network resources, saving money on the delivery of existing services — such as SMS and video — and testing and introducing new ones at lower costs.

If carriers embrace CloudBand only for this reason — to virtualize and better manage their network elements and resources for more efficient and cost-effective delivery of existing services — Alcatel-Lucent should do well with the offering. Nonetheless, the company has bigger ambitions for CloudBand.

Alcatel-Lucent has done market research indicating that enterprise IT decision makers’ primary concern about the cloud involves performance rather than security, though both ranked highly. Alcatel-Lucent also found that those same enterprise IT decision makers believe their communications service providers — yes, carriers — are best equipped to deliver the required performance and quality of service.

Helping Carriers Capture Cloud Real Estate 

Although Alcatel-Lucent talks a bit about consumer-oriented cloud services, it’s clear that the enterprise is where it really believes it can help its carrier customers gain traction. That’s an important distinction, too, because it means Alcatel-Lucent might be able to help its customers carve out a niche beyond consumer-focused cloud purveyors such as Google, Facebook, Apple, and even Microsoft. It also means it might be able to assist carriers in differentiating themselves from infrastructure-as-a-service (IaaS) leader Amazon Web Services (AWS), which has become the service of choice for technology startups, and from the likes of Rackspace.

As Alcatel-Lucent’s Fratura emphasized, many businesses, from SMBs up to large enterprises, already obtain hosted services and software-as-a-service (SaaS) offerings from carriers today. What Alcatel-Lucent proposes with CloudBand is designed to help those carriers capture more of the cloud market.

It just might work, but it won’t be easy. As Ray Le Maistre at LightReading wrote, cloud solutions on this scale are not a walk on the beach or a day at the park (yes, you saw what I did there). What’s more, Alcatel-Lucent will have to hope that a sufficient number of its carrier customers can deploy, operate, and manage CloudBand to full effect. That’s not a given, even if Alcatel-Lucent offers CloudBand as a managed service and even though it already sells and delivers professional services to carriers.

Alcatel-Lucent says CloudBand will be available for deployment in the first half of 2012.  At first, CloudBand will run exclusively on Alcatel-Lucent technology, but the company claims to be working with the Alliance for Telecommunications Industry Solutions (ATIS)  and the Internet Engineering Task Force (IETF) to establish standards to enable CloudBand to run on gear from other vendors.

With CloudBand, Alcatel-Lucent, at least in the context of its main telecommunications-equipment competitors, is seen as having the first run at the potentially lucrative market opportunity of cloud-enabling the carrier community. Much now will depend on how well it executes and on how effectively its competitors respond to the initiative.

The Carrier Factor

In addition, of course, the carriers themselves are a factor. Although they undoubtedly want to get their hands around the cloud business opportunity, there’s some question as to whether they have the wherewithal to get the job done. The rise of cloud services from Google, Apple, Facebook, and Amazon was partly a result of carriers missing a golden opportunity. One would like to think they’ve learned from those sobering experiences, but one also can’t be sure they won’t revert to prior form.

From what I have heard and seen, the Alcatel-Lucent vision for CloudBand is compelling. It brings the benefits of virtualization and orchestration to carrier network infrastructure, enabling carriers to manage their resources cost-effectively and innovatively. If they seize the opportunity, they’ll save money on their own existing services and be in a great position to deliver a range of cloud-based enterprise services to their business customers.

Alcatel-Lucent should find a receptive audience for CloudBand among its carrier installed base. The question is whether those Alcatel-Lucent customers will be able to get full measure from the technology and from the business opportunity the cloud represents.

Rackspace’s Bridge Between Clouds

OpenStack has generated plenty of sound and fury during the past several months, and, with sincere apologies to William Shakespeare, there’s evidence to suggest the frenzied activity actually signifies something.

Precisely what it signifies, and how important OpenStack might become, is open to debate, of which there has been no shortage. OpenStack is generally depicted as an open-source cloud operating system, but that might be a generous interpretation. On the OpenStack website, the following definition is offered:

“OpenStack is a global collaboration of developers and cloud computing technologists producing the ubiquitous open source cloud computing platform for public and private clouds. The project aims to deliver solutions for all types of clouds by being simple to implement, massively scalable, and feature rich. The technology consists of a series of interrelated projects delivering various components for a cloud infrastructure solution.”

Just for fun and giggles (yes, that phrase has been modified so as not to offend reader sensibilities), let’s parse that passage, shall we? In the words of the OpenStackers themselves, their project is an open-source cloud-computing platform for public and private clouds, and it is reputedly simple to implement, massively scalable, and feature rich. Notably, it consists of a “series of interrelated projects delivering various components for a cloud infrastructure solution.”

Simple for Some

Given that description, especially the latter reference to interrelated projects spawning various components, one might wonder exactly how “simple” OpenStack is to implement and by whom. That’s a question others have raised, including David Linthicum in a recent piece at InfoWorld. In that article, Linthicum notes that undeniable vendor enthusiasm and burgeoning market momentum accrue to OpenStack — the community now has 138 member companies (and counting), including big-name players such as HP, Dell, Intel, Rackspace, Cisco, Citrix, Brocade, and others — but he also offers the following caveat:

“So should you consider OpenStack as your cloud computing solution? Not on its own. Like many open source projects, it takes a savvy software vendor, or perhaps cloud provider, to create a workable product based on OpenStack. The good news is that many providers are indeed using OpenStack as a foundation for their products, and most of those are working, or will work, just fine.”

Creating Value-Added Services

Meanwhile, taking issue with a recent InfoWorld commentary by Savio Rodrigues — who argued that OpenStack will falter while its open-source counterpart Eucalyptus will thrive — James Urquhart, formerly of Cisco and now VP of product strategy at enStratus, made this observation:

“OpenStack is not a cloud service, per se, but infrastructure automation tuned to cloud-model services, like IaaS, PaaS and SaaS. Install OpenStack, and you don’t get a system that can instantly bill customers, provide a service catalog, etc. That takes additional software.

What OpenStack represents is the commodity element of cloud services: the VM, object, server image and networking management components. Yeah, there is a dashboard to interact with those commodity elements, but it is not a value-add capability in-and-of itself.

What HP, Dell, Cisco, Citrix, Piston, Nebula and others are doing with OpenStack is creating value-add services on top (or within) the commodity automation. Some focus more on “being commodity”, others focus more on value-add, but they are all building on top of the core OpenStack projects.”

New Revenue Stream for Rackspace

All of which brings us, in an admittedly roundabout fashion, to Rackspace’s recent announcement of its Rackspace Cloud Private Edition, a packaged version of OpenStack components that can be used by enterprises for private-cloud deployments. This move makes sense for Rackspace on a couple of levels.

First off, it opens up a new revenue stream for the company. While Rackspace won’t try to make money on the OpenStack software or the reference designs — featuring a strong initial emphasis on Dell servers and Cisco networking gear for now, though bare-bones OpenCompute servers are likely to be embraced before long —  it will provide value-add, revenue-generating managed services to customers of Rackspace Cloud Private Edition. These managed services will comprise installation of OpenStack updates, analysis of system issues, and assistance with specific questions relating to systems engineering. Some security-related services also will be offered. While the reference architecture and the software are available now, Rackspace’s managed services won’t be available until January.

Building a Bridge

The launch of Rackspace Cloud Private Edition is a diversification initiative for Rackspace, which hitherto has made its money by hosting and managing applications and computing services for others in its own data centers. The OpenStack bundle takes it into the realm of providing managed services in its customers’ data centers.

As mentioned above, this represents a new revenue stream for Rackspace, but it also provides a technological bridge that will allow customers who aren’t ready for multi-tenant public cloud services today to make an easy transition to Rackspace’s data centers at some future date. It’s a smart move, preventing prospective customers from moving to another platform for private cloud deployment, ensuring in the process that said customers don’t enter the orbit of another vendor’s long-term gravitational pull.

The business logic coheres. For each customer engagement, Rackspace gets a payoff today, and potentially a larger one at a later date.

Assessing Dell’s Layer 4-7 Options

As it continues to integrate and assimilate its acquisition of Force10 Networks, Dell is thinking about its next networking move.

Based on what has been said recently by Dario Zamarian, Dell’s GM and SVP of networking, the company definitely will be making that move soon. In an article covering Dell’s transition from box pusher to data-center and cloud contender, Zamarian told Fritz Nelson of InformationWeek that “Dell needs to offer Layer 4 and Layer 7 network services, citing security, load balancing, and overall orchestration as its areas of emphasis.”

Zamarian didn’t say whether the move into Layer 4-7 network services would occur through acquisition, internal development, or partnership. However, as I invoke deductive reasoning that would make Sherlock Holmes green with envy (or not), I think it’s safe to conclude an acquisition is the most likely route.

F5 Connection

Why? Well, Dell already has partnerships that cover Layer 4-7 services. F5 Networks, the leader in application-delivery controllers (ADCs), is a significant Dell partner in the Layer 4-7 sphere. Dell and F5 have partnered for 10 years, and Dell bills itself as the largest reseller of F5 solutions. If you consider what Zamarian described as Dell’s next networking priority, F5 certainly fits the bill.

There’s one problem. F5 probably isn’t selling at any price Dell would be willing to pay.  As of today, F5 has a market capitalization of more than $8.5 billion. Dell has the cash, about $16 billion and counting, to buy F5 at a premium, but it’s unlikely Dell would be willing to fork over more than $11 billion — which, presuming mutual interest, might be F5’s absolute minimum asking price — to close the deal. Besides, observers have been thinking F5 would be acquired since before the Internet bubble of 2000 burst. It’s not likely to happen this time either.

Dell could see whether one of its other partners, Citrix, is willing to sell its NetScaler business. I’m not sure that’s likely to happen, though. I definitely can’t envision Dell buying Citrix outright. Citrix’s market cap, at more than $13.7 billion, is too high, and there are pieces of the business Dell probably wouldn’t want to own.

Shopping Not Far From Home?

Who else is in the mix? Radware is an F5 competitor that Dell might consider, but I don’t see that happening. Dell’s networking group is based in the Bay Area, and I think they’ll be looking for something closer to home, easier to integrate.

That brings us to F5 rival A10 Networks. Force10 Networks, which Dell now owns, had a partnership with A10, and there’s a possibility Dell might inherit and expand upon that relationship.

Then again, maybe not. Generally, A10 is seen as a purveyor of cost-effective ADCs. It is not typically perceived as an innovator and trailblazer, and it isn’t thought to have the best solutions for complex enterprise or data-center environments, exactly the areas where Dell wants to press its advantage. It’s also worth bearing in mind that A10 has been involved in exchanges of not-so-friendly litigious fire — yes, lawsuits volleyed back and forth furiously — with F5 and others.

All in all, A10 doesn’t seem a perfect fit for Dell’s needs, though the price might be right.

Something Programmable 

Another candidate, one that’s quite intriguing in many respects, is Embrane. The company is bringing programmable network services, delivered on commodity x86 servers, to the upper layers of the stack, addressing many of the areas in which Zamarian expressed interest. Embrane is focusing on virtualized data centers where Dell wants to be a player, but initially its appeal will be with service providers rather than with enterprises.

In an article written by Stacey Higginbotham and published at GigaOM this summer, Embrane CEO Dante Malagrinò explained that his company’s technology would enable hosting companies to provide virtualized services at Layers 4 through 7, including load balancing, firewalls, and virtual private networking (VPN), among others.

Some of you might see similarities between what Embrane is offering and OpenFlow-enabled software-defined networking (SDN). Indeed, there are similarities, but, as Embrane points out, OpenFlow promises network virtualization and programmability at Layers 2 and 3 of the stack, not at Layers 4 through 7.
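To make that layer distinction concrete, here is a rough illustrative sketch in Python. The field names are loosely modeled on OpenFlow 1.0 match fields, while the L4-7 policy is an invented example; neither represents Embrane’s actual product interface or any vendor’s real API.

```python
# An OpenFlow-style flow entry matches on packet-header fields (Layer 2
# MAC addresses, Layer 3 IP prefixes, at most TCP/UDP ports) and applies
# a simple forwarding action. It has no notion of sessions or server health.
openflow_rule = {
    "match": {
        "dl_src": "00:1b:21:3a:b2:10",   # Layer 2: source MAC address
        "nw_dst": "10.0.0.0/24",         # Layer 3: destination IP prefix
        "tp_dst": 80,                    # transport port (the edge of L4)
    },
    "action": ("output", 3),             # forward out switch port 3
}

# An L4-7 service policy, by contrast, reasons about virtual services,
# server pools, and application behavior -- state that a flow table
# doesn't carry.
l4_7_policy = {
    "virtual_service": {"vip": "192.0.2.10", "port": 443},
    "load_balancing": {"method": "least_connections",
                       "pool": ["10.0.0.11", "10.0.0.12"]},
    "health_check": {"type": "http", "path": "/healthz", "interval_s": 5},
}

def layers_touched(config):
    """Rough classifier: which layers does a config object concern?"""
    if "match" in config:
        return "L2-L4 packet forwarding"
    return "L4-L7 application services"
```

The point of the contrast: everything in the first dictionary is visible in a packet header, while the second requires ongoing session and health state, which is why the two technologies complement rather than duplicate each other.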

Higher-Layer Complement to OpenFlow

Dell, as we know, has talked extensively about the potential of OpenFlow to deliver operational cost savings and innovative services to data centers at service providers and enterprises. One could see what Embrane does as a higher-layer complement to OpenFlow’s network programmability. Both technologies take intelligence away from specialized networking gear and place it at the edge of the network, running in software on industry-standard hardware.

Interestingly, there aren’t many degrees of separation between the principals at Embrane and Dell’s Zamarian. It doesn’t take much sleuthing to learn that Zamarian knows both Malagrinò and Marco Di Benedetto, Embrane’s CTO. They worked together at Cisco Systems. Moreover, Zamarian and Malagrinò both studied at the Politecnico di Torino, though a decade or so apart.  Zamarian also has connections to Embrane board members.

Play an Old Game, Or Define a New One

In and of themselves, those connections don’t mean anything. Dell would have to see value in what Embrane offers, and Embrane and its backers would have to want to sell. The company announced in August that it had closed an $18-million Series B financing round, led by New Enterprise Associates (NEA). Lightspeed Venture Partners and North Bridge Ventures also took part in the round, which followed initial lead investments in the company’s $9-million Series A funding.

Embrane’s product has been in beta, but the company planned a commercial launch before the end of this year. Its blog has been quiet since August.

I would be surprised to see Dell acquire F5, and I don’t think Citrix will part with NetScaler. If Dell is thinking about plugging L4-7 holes cost-effectively, it might opt for an acquisition of A10, but, if it’s thinking more ambitiously — if it really is transforming itself into a solutions provider for cloud providers and data centers — then it might reach for something with the potential to establish a new game rather than play at an old one.

Amazon’s Advantageous Model for Cloud Investments

While catching up with industry developments earlier this week, I came across a Reuters piece on Amazon’s now well-established approach toward investments in startup companies. If you haven’t seen it, I recommend that you give it a read.

As its Amazon Web Services (AWS) cloud operations approach the threshold of a $1-billion business, the company once known exclusively as an online bookshop continues to search for money-making opportunities well beyond Internet retailing.

Privileged Insights

An article at GigaOM by Barb Darrow quotes Amazon CEO Jeff Bezos explaining that his company stumbled unintentionally into the cloud-services business, but the Reuters item makes clear that Amazon is putting considerably more thought into its cloud endeavors these days. In fact, Amazon’s investment methodology, which sees it invest in startup companies that are AWS customers, is an exercise in calculated risk mitigation.

That’s because, before making those investments, Amazon gains highly detailed and extremely valuable insights into startup companies’ dynamic requirements for computing infrastructure and resources. It can then draw inferences about the popularity and market appeal of the services those companies supply. All in all, it seems like an inherently logical and sound investment model, one that gives Amazon privileged insights into companies before it decides to bet on their long-term health and prosperity.
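As a purely illustrative sketch of that screening logic, one could imagine flagging AWS customers whose infrastructure consumption is growing fastest. All names and numbers below are invented, and this is not Amazon’s actual methodology; it simply makes the reasoning concrete.

```python
# Hypothetical monthly compute-hours consumed by two startup customers.
monthly_compute_hours = {
    "startup_a": [1200, 1800, 2900, 4700],   # rapid month-over-month growth
    "startup_b": [5000, 5100, 5050, 5200],   # essentially flat usage
}

def avg_growth(series):
    """Average month-over-month growth rate of a usage series."""
    rates = [(b - a) / a for a, b in zip(series, series[1:])]
    return sum(rates) / len(rates)

def investment_candidates(usage, threshold=0.30):
    """Startups whose average monthly usage growth exceeds the threshold."""
    return sorted(name for name, series in usage.items()
                  if avg_growth(series) > threshold)
```

Here `startup_a`, growing well over 30 percent a month, would surface as a candidate for a closer look, while `startup_b` would not; the cloud provider sees this signal long before outside investors do.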

That fact has not been lost on a number of prominent venture-capital firms, which have joined with Amazon to back the likes of Yieldex, Sonian, Engine Yard, and Animoto, all of whom, at one time or another, were AWS customers.

Mutual Benefits

Now that nearly every startup is likely to begin its business life using cloud-based computing infrastructure, either from AWS or another cloud purveyor, I wonder whether Amazon’s investment model might be mimicked by others with similar insights into their business customers’ resource utilization and growth rates.

There’s no question that such investments deliver mutual benefit. The startup companies get the financial backing to accelerate their growth, establish and maintain competitive differentiation, and speed toward market leadership. Meanwhile, Amazon and its VC partners get stakes in fast-growing companies that seem destined for bigger things, including potentially lucrative exits. Amazon also gets to maintain relationships with customers that might otherwise outgrow AWS and leave the relationship behind. Last but not least, the investment program serves a promotional purpose for Amazon, demonstrating a commitment and dedication to its AWS customers that can extend well beyond operational support.

It isn’t just Amazon that can derive an investment edge from how its customers are using its cloud services. SaaS providers such as Salesforce and Google also can gain useful insights into how customers and customer segments are faring during good and bad economic times, and PaaS providers would also stand to derive potentially useful knowledge about how and where customers are adopting their services.

Various Scenarios

Also on the SaaS side of the ledger, in the realm of social networking — I’m thinking of Facebook, but others fit the bill — subscriber data can be mined for the benefit of advertisers seeking to deliver targeted campaigns to specific demographic segments.

In a different vein, Google’s search business could potentially give it the means to develop high-probability or weighted analytics based on the prevalence, intensity, nature, and specificity of search queries. Such data could be applied to and mined for probability markets. One application scenario might involve insiders searching online to ascertain whether prior knowledge of a transaction has been leaked to the wider world. By searching for the terms in question, they would effectively signal that an event might take place. (This would be more granular than Google Trends, and different from it in other respects, too.) There are a number of other examples and scenarios that one could envision.

Getting back to Amazon, though, what it is doing with its investment model clearly makes a lot of sense, giving it unique insights and a clear advantage as it weighs where to place its bets. As I said, it would be no surprise to see other cloud providers, even those not of the same scale as Amazon, consider similar investment models.

Google Move Could Cause Collateral Damage for RIM

In a move that demonstrates Google’s willingness to embrace mobile-device heterogeneity in the larger context of a strategic mandate, Google today announced that it would bring improved mobile-device management (MDM) functionality to its Google Apps business customers.

No Extra Charge

On the Official Google Enterprise Blog, Hong Zhang, a Google software engineer, wrote:

“Starting today, comprehensive mobile device management is available at no extra charge to Google Apps for Business, Government and Education users. Organizations large and small can manage Android, iOS and Windows Mobile devices right from the Google Apps control panel, with no special hardware or software to manage.

In addition to our existing mobile management capabilities, IT administrators can now see a holistic overview of all mobile devices that are syncing with Google Apps, and revoke access to individual devices as needed.

Organizations can also now define mobile policies such as password requirements and roaming sync preferences on a granular basis by user group.

Also available today, administrators have the ability to gain insights into mobile productivity within their organizations, complete with trends and analytics.”

Gradual Enhancements

Google gradually has enhanced its MDM functionality for Google Apps. In the summer of 2010, the company announced several basic MDM controls for Google Apps, and today’s announcement adds to those capabilities.

Addressing the bring-your-own-device (BYOD) phenomenon and the larger theme of the consumerization of IT amid proliferating enterprise mobility, Google appears to be getting into the heterogeneous (not just Android) MDM space as a means of retaining current Google Apps business subscribers and attracting new ones.

Means Rather Than End

At least for now, Google is offering its MDM services at no charge to Google Apps business subscribers. That suggests Google sees MDM as a means of providing support for Google Apps rather than as a lucrative market in its own right. Google isn’t trying to crush standalone MDM vendors. Instead, its goal seems to be to preclude Microsoft, and perhaps even Apple, from making mobile inroads against Google Apps.

Of course, many VC-funded MDM vendors do see a lucrative market in what they do, and they might be concerned about Google’s encroachment on their turf. Officially, they’ll doubtless contend that Google is offering a limited range of MDM functionality exclusively on its Google Apps platform. They might also point out that Google, at least for now, isn’t offering support for RIM BlackBerry devices. On those counts, strictly speaking, they’d be right.

Nonetheless, many Google Apps subscribers might feel that the MDM services Google provides, even without further enhancements, are good enough for their purposes. If that happens, it will cut into the revenue and profitability of standalone MDM vendors.

Not Worrying About RIM

Those vendors will still have an MDM market beyond the Google Apps universe in which to play, but one wonders whether Microsoft, in defense of its expansive Office and Office 365 territory, might follow Google’s lead. Apple, which derives so much of its revenue from its iOS-based devices and comparatively little from Internet advertising or personal-productivity applications, would seem less inclined to embrace heterogeneous mobile-device management.

Finally, there’s the question of RIM. As mentioned above, Google has not indicated MDM support for RIM’s BlackBerry devices, whether of the legacy variety or the forthcoming BBX vintage. Larry Dignan at ZDNet thinks Google has jolted RIM’s MDM aspirations, but I think that’s an incidental rather than desired outcome. The sad fact is, I don’t think Google spends many cycles worrying about RIM.

Like OpenFlow, Open Compute Signals Shift in Industry Power

I’ve written quite a bit recently about OpenFlow and the Open Networking Foundation (ONF). For a change of pace, I will focus today on the Open Compute Project.

In many ways, even though OpenFlow deals with networking infrastructure and Open Compute deals with computing infrastructure, they are analogous movements, springing from the same fundamental set of industry dynamics.

Open Compute was introduced formally to the world in April. Its ostensible goal was “to develop servers and data centers following the model traditionally associated with open-source software projects.” That’s true as far as it goes, but it’s only part of the story. The stated goal is actually a means to an end, which is to devise an operational template that allows cloud behemoths such as Facebook to save lots of money on computing infrastructure. It’s all about commoditizing and optimizing the operational efficiency of the hardware encompassed within many of the largest cloud data centers that don’t belong to Google.

Speaking of Google, it is not involved with Open Compute. That’s primarily because Google had been taking a DIY approach to its data centers since long before Facebook began working on the blueprint for the Open Compute Project.

Google as DIY Trailblazer

For Google, its ability to develop and deliver its own data-center technologies — spanning computing, networking and storage infrastructure — became a source of competitive advantage. By using off-the-shelf hardware components, Google was able to provide itself with cost- and energy-efficient data-center infrastructure that did exactly what it needed to do — and no more. Moreover, Google no longer had to pay a premium to technology vendors that offered products that weren’t ideally suited to its requirements and that offered extraneous “higher-value” (pricier) features and functionality.

Observing how Google had used its scale and its ample resources to fashion its cost-saving infrastructure, Facebook  considered how it might follow suit. The goal at Facebook was to save money, of course, but also to mitigate or perhaps eliminate the infrastructure-based competitive advantage Google had developed. Facebook realized that it could never compete with Google at scale in the infrastructure cost-saving game, so it sought to enlist others in the cause.

And so the Open Compute Project was born. The aim is to have a community of shared interest deliver cost-saving open-hardware innovations that can help Facebook scale its infrastructure at an operational efficiency approximating Google’s. If others besides Facebook benefit, so be it. That’s not a concern.

Collateral Damage

As Facebook seeks to boost its advertising revenue, it is effectively competing with Google. The search giant still derives nearly 97 percent of its revenue from advertising, and its Google+ is intended to distract, if not derail, Facebook’s core business, just as Google Apps is meant to keep Microsoft focused on protecting one of its crown jewels rather than on allocating more corporate resources to search and search advertising.

There’s nothing particularly striking about that. Cloud service providers are expected to compete against each other by developing new revenue-generating services and by achieving new cost-saving operational efficiencies. In that context, the Open Compute Project can be seen, at least in one respect, as Facebook’s open-source bid to level the infrastructure playing field and undercut, as previously noted, what has been a Google competitive advantage.

But there’s another dynamic at play. As the leading cloud providers with their vast data centers increasingly seek to develop their own hardware infrastructure — or to create an open-source model that facilitates its delivery — we will witness some significant collateral damage. Those taking the hit, as is becoming apparent, will be the hardware systems vendors, including HP, IBM, Oracle (Sun), Dell, and even Cisco. That’s only on the computing side of the house, of course. In networking, as software-defined networking (SDN) and OpenFlow find ready embrace among the large cloud shops, Cisco and others will be subject to the loss of revenue and profit margin, though how much and how soon remain to be seen.

Who’s Steering the OCP Ship?

So, who, aside from Facebook, will set the strategic agenda of Open Compute? To answer that question, we need only consult the identities of those named to the Open Compute Project Foundation’s board of directors:

  • Chairman/President – Frank Frankovsky, Director, Technical Operations at Facebook
  • Jason Waxman, General Manager, High Density Computing, Data Center Group, Intel
  • Mark Roenigk, Chief Operating Officer, Rackspace Hosting
  • Andy Bechtolsheim, Industry Guru
  • Don Duet, Managing Director, Goldman Sachs

It’s no shocker that Facebook retains the chairman’s role. Facebook didn’t launch this initiative to have somebody else steer the ship.

Similarly, it’s not a surprise that Intel is involved. Intel benefits regardless of whether cloud shops build their own systems, buy them from HP or Dell, or even get them from a Taiwanese or Chinese ODM.

As for the Rackspace representation, that makes sense, too. Rackspace already has OpenStack, open-source software for private and public clouds, and the Open Compute approach provides a logical hardware complement to that effort.

After that, though, the board membership of the Open Compute Project Foundation gets rather interesting.

Examining Bechtolsheim’s Involvement

First, there’s the intriguing presence of Andy Bechtolsheim. Those who follow the networking industry will know that Andy Bechtolsheim is more than an “industry guru,” whatever that means. Among his many roles, Bechtolsheim serves as the chief development officer and co-founder of Arista Networks, a growing rival to Cisco in low-latency data-center switching, especially at cloud-scale web shops and financial-services companies. It bears repeating that Open Compute’s mandate does not extend to network infrastructure, which is the preserve of the analogous OpenFlow.

Bechtolsheim’s history is replete with successes, as a technologist and as an investor. He was one of the earliest investors in Google, which makes his involvement in Open Compute deliciously ironic.

More recently, he disclosed a seed-stage investment in Nebula, which, as Derrick Harris at GigaOM wrote this summer, has “developed a hardware appliance pre-loaded with customized OpenStack software and Arista networking tools, designed to manage racks of commodity servers as a private cloud.” The reference architectures for the commodity servers comprise Dell’s PowerEdge C Micro Servers and servers that adhere to Open Compute specifications.

We know, then, why Bechtolsheim is on the board. He’s a high-profile presence that I’m sure Open Compute was only too happy to welcome with open arms (pardon the pun), and he also has business interests that would benefit from a furtherance of Open Compute’s agenda. Not to put too fine a point on it, but there’s an Arista and a Nebula dimension to Bechtolsheim’s board role at the Open Compute Project Foundation.

OpenStack Angle for Rackspace, Dell

Interestingly, the board presence of both Bechtolsheim and Rackspace’s Mark Roenigk emphasizes OpenStack considerations, as does Dell’s involvement with Open Compute. Dell doesn’t have a board seat — at least not according to the Open Compute website — but it seems to think it can build a business for solutions based on Open Compute and OpenStack among second-tier purveyors of public-cloud services and among those pursuing large private or hybrid clouds. Both will become key strategic markets for Dell as its SMB installed base migrates applications and spending to the cloud.

Dell notably lost a chunk of server business when Facebook chose to go the DIY route, in conjunction with Taiwanese ODM Quanta Computer, for servers in its data center in Prineville, Oregon. Through its involvement in Open Compute, Dell might be trying to regain lost ground at Facebook, but I suspect that ship has sailed. Instead, Dell probably is attempting to ensure that it prevents or mitigates potential market erosion among smaller service providers and enterprise customers.

What Goldman Sachs Wants

The other intriguing presence on the Open Compute Project Foundation board is Don Duet from Goldman Sachs. Here’s what Duet had to say about his firm’s involvement with Open Compute:

“We build a lot of our own technology, but we are not at the hyperscale of Google or Facebook. We are a mid-scale company with a large global footprint. The work done by the OCP has the potential to lower the TCO [total cost of ownership] and we are extremely interested in that.”

Indeed, that perspective probably worries major server vendors more than anything else about Open Compute. Once Goldman Sachs goes this route, other financial-services firms will be inclined to follow, and nobody knows where the market attrition will end, presuming it ends at all.

Like Facebook, Goldman Sachs saw what Google was doing with its home-brewed, scale-out data-center infrastructure, and wondered how it might achieve similar business benefits. That has to be disconcerting news for major server vendors.

Welcome to the Future

The big takeaway for me, as I absorb these developments, is how the power axis of the industry is shifting. The big systems vendors used to set the agenda, promoting and pushing their products and influencing the influencers so that enterprise buying kept their growth rates on the uptick. Now, though, a combination of factors — widespread data-center virtualization, the rise of cloud computing, a protracted global economic downturn (which has placed unprecedented emphasis on IT cost containment) — is reshaping the IT universe.

Welcome to the future. Some might like it more than others, but there’s no going back.

HP Launches Its Moonshot Amid Changing Industry Dynamics

As I read about HP’s new Project Moonshot, which was covered extensively by the trade press, I wondered about the vendor’s strategic end game. Where was it going with this technology initiative, and does it have a realistic likelihood of meeting its objectives?

Those questions led me to consider how drastically the complexion of the IT industry has changed as cloud computing takes hold. Everything is in flux, advancing toward an ultimate galactic configuration that, in many respects, will be far different from what we’ve known previously.

What’s the Destination?

It seems to me that Project Moonshot, with its emphasis on a power-sipping and space-saving server architecture for web-scale processing, represents an effort by HP to re-establish a reputation for innovation and thought leadership in a burgeoning new market. But what, exactly, is the market HP has in mind?

Contrary to some of what I’ve seen written on the subject, HP doesn’t really have a serious chance of using this technology to wrest meaningful patronage from the behemoths of the cloud service-provider world. Google won’t be queuing up for these ARM-based, Calxeda-designed, HP-branded “micro servers.” Nor will Facebook or Microsoft. Amazon or Yahoo probably won’t be in the market for them, either.

The biggest of the big cloud providers are heading in a different direction, as evidenced by their aggressive patronage of open-source hardware initiatives that, when one really thinks about it, are designed to reduce their dependence on traditional vendors of server, storage, and networking hardware. They’re breaking that dependence — in some ways, they see it as taking back their data centers — for a variety of reasons, but their behavior is invariably motivated by their desire to significantly reduce operating expenditures on data-center infrastructure while freeing themselves to innovate on the fly.

When Customers Become Competitors

We’ve reached an inflection point where the largest cloud players — the Googles, the Facebooks, the Amazons, some of the major carriers who have given thought to such matters — have figured out that they can build their own hardware infrastructure, or order it off the shelf from ODMs, and get it to do everything they need it to do (they have relatively few revenue-generating applications to consider) at a lower operating cost than if they kept buying relatively feature-laden, more-expensive gear from hardware vendors.
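The arithmetic behind that inflection point can be sketched in a few lines. All prices, server counts, and power figures below are invented placeholders, not actual vendor or ODM numbers; the sketch only shows how purchase price and power draw compound at scale.

```python
def fleet_cost(unit_price, servers, power_watts, years=3, usd_per_kwh=0.08):
    """Total cost of a server fleet: purchase price plus electricity
    consumed over the assumed service life."""
    hours = years * 365 * 24
    energy_cost = servers * (power_watts / 1000.0) * hours * usd_per_kwh
    return servers * unit_price + energy_cost

# A feature-laden vendor server versus a stripped-down ODM design that
# does exactly what's needed -- and no more (hypothetical figures).
vendor = fleet_cost(unit_price=6000, servers=10000, power_watts=450)
diy    = fleet_cost(unit_price=3500, servers=10000, power_watts=300)

savings = vendor - diy   # tens of millions of dollars at this scale
```

Even with made-up inputs, the shape of the result explains the behavior: across ten thousand servers, modest per-unit savings in price and wattage add up to a figure large enough to justify taking hardware design in-house.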

As one might imagine, this represents a major business concern for the likes of HP, as well as for Cisco and others who’ve built a considerable business selling hardware at sustainable margins to customers in those markets. An added concern is that enterprise customers, starting with many SMBs, have begun transitioning their application workloads to cloud-service providers. The vendor problem, then, is not only that the cloud market is growing, but also that segments of the enterprise market are at risk.

Attempt to Reset Technology Agenda

The vendors recognize the problem, and they’re doing what they can to adapt to changing circumstances. If the biggest web-scale cloud providers are moving away from reliance on them, then hardware vendors must find buyers elsewhere. Scores of cloud service providers are not as big, or as specialized, or as resourceful as Google, Facebook, or Microsoft. Those companies might be considering the paths their bigger brethren have forged, with initiatives such as the Open Compute Project and OpenFlow (for computing and networking infrastructure, respectively), but perhaps they’re not entirely sold on those models or don’t think they’re quite right  for their requirements just yet.

This represents an opportunity for vendors such as HP to reset the technology agenda, at least for these sorts of customers. Hence, Project Moonshot, which, while clearly ambitious, remains a work in progress consisting of the Redstone Server Development Platform, an HP Discovery Lab (the first one is in Houston), and HP Pathfinder, a program designed to create open standards and third-party technology support for the overall effort.

I’m not sure I understand who will buy the initial batch of HP’s “extreme low-power servers” based on Calxeda’s EnergyCore ARM server-on-a-chip processors. As I said before, and as an article at Ars Technica explains, those buyers are unlikely to be the masters of the cloud universe, for both technological and business reasons. For now, buyers might not even come from the constituency of smaller cloud providers.

Friends Become Foes, Foes Become Friends (Sort Of)

But HP is positioning itself for that market and to be involved in buying decisions relating to energy-efficient system architectures. Its Project Moonshot also will embrace energy-efficient microprocessors from Intel and AMD.

Incidentally, what’s most interesting here is not that HP adopted an ARM-based chip architecture before opting for an Intel server chipset — though that does warrant notice — but that Project Moonshot has been devised not so much to compete against other server vendors as to provide a rejoinder to an open-computing model advanced by Facebook and others.

Just a short time ago, industry dynamics were relatively easy to discern. Hardware and system vendors competed against one another for the patronage of service providers and enterprises. Now, as cloud computing grows and its business model gains ascendance, hardware vendors also find themselves competing against a new threat represented by mammoth cloud service providers and their cost-saving DIY ethos.