
Hardware Elephant in the HP Cloud

Taking another run at cloud computing, HP made news today with its strategy for the “Converged Cloud,” which focuses on hybrid cloud environments and provides a common architecture that spans existing data centers as well as private and public clouds.

In finally diving into infrastructure as a service (IaaS), with a public beta of HP Public Infrastructure as a Service slated for May 10, HP will go up against current IaaS market leader Amazon Web Services.

HP will tap OpenStack and hypervisor neutrality as it joins the battle. Not surprisingly, it also will leverage its own hardware portfolio for compute, storage, and networking — HP Converged Infrastructure, which it already has promoted for enterprise data centers — as well as a blend of software and services that is meant to provide bonding agents to keep customers in the HP fold regardless of where and how they want to run their applications.

Trying to Set the Cloud Agenda

In addition to HP Public Infrastructure as a Service — providing on-demand compute instances or virtual machines, online storage capacity, and cached content delivery — HP Cloud Services also will unveil a private beta of a relational database service for MySQL and a block storage service that supports movement of data from one compute instance to another.
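
Since HP is tapping OpenStack, provisioning those compute instances presumably will look much like it does on any OpenStack cloud. By way of a rough sketch (not HP’s documented interface), here is how one might authenticate against a Keystone v2.0 identity endpoint and boot an instance through the Nova v2 compute API in Python; every URL, credential, and resource ID below is a hypothetical placeholder.

```python
import requests

# All URLs, credentials, and IDs are hypothetical placeholders, not HP's
# actual endpoints or account details.
AUTH_URL = "https://identity.example-cloud.com/v2.0/tokens"
COMPUTE_URL = "https://compute.example-cloud.com/v2/TENANT_ID"

# Authenticate against the Keystone v2.0 identity API to obtain a token.
auth_request = {
    "auth": {
        "passwordCredentials": {"username": "demo", "password": "secret"},
        "tenantName": "demo-project",
    }
}
token = requests.post(AUTH_URL, json=auth_request).json()["access"]["token"]["id"]

# Boot an on-demand compute instance via the Nova v2 API.
server_request = {
    "server": {
        "name": "web-01",
        "imageRef": "IMAGE_UUID",  # placeholder image ID
        "flavorRef": "100",        # placeholder flavor (instance size) ID
    }
}
response = requests.post(
    COMPUTE_URL + "/servers",
    json=server_request,
    headers={"X-Auth-Token": token},
)
print(response.json()["server"]["id"])
```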

While HP has chosen to go up against AWS in IaaS — though it apparently is targeting a different constituency from the one served by Amazon — perhaps a bigger story is that HP also will compete with other service providers, including other OpenStack purveyors.

There’s some risk in that decision, no question, but perhaps not as much as one might think. The long-term trend, already established at the largest cloud service providers on the planet, is to move away from branded, vanity hardware in favor of no-frills boxes from original design manufacturers (ODMs). This will affect not only servers but also storage and networking hardware, the latter of which has seen the rise of merchant silicon. HP can read the writing on the data-center wall, and it knows that it must attempt to set the cloud agenda or cede the floor and watch its hardware sales atrophy.

Software and Services as Hooks

Hybrid clouds are HP’s best bet, though far from a sure thing. Indeed, one can interpret HP’s Converged Cloud as a bulwark against what it would perceive as a premature decline in its hardware business.

Simply packaging and reselling OpenStack and a hypervisor of the customer’s choice wouldn’t achieve HP’s “sticky” business objectives, so it is tapping its software and services for the hooks and proprietary value that will keep customers from straying.

For managing hybrid environments, HP has its new Cloud Maps, which provides a catalogue of prepackaged application templates to speed deployment of enterprise cloud-services applications.

To test the applications, the company offers HP Service Virtualization 2.0, which enables enterprise customers to test quality and performance of cloud or mobile applications without interfering with production systems. Meanwhile, HP Virtual Application Networks — which taps HP’s Intelligent Management Center (IMC) and the IMC Virtual Application Networks (VAN) Manager Module — also makes its debut. It is designed to eliminate network-related cloud-services bottlenecks by speeding application deployment, automating management, and ensuring service levels for virtual and cloud applications on HP’s FlexNetwork architecture.

Maintaining and Growing

HP also will launch two new networking services: HP Virtual Network Protection Service, which leverages best practices and is intended to set a baseline for security of network virtualization; and HP Network Cloud Optimization Service, which is intended to help customers enhance their networks for delivery of cloud services.

For enterprises that don’t want to manage their clouds, the company offers HP Enterprise Cloud Services as well as other services to get them up to speed on how cloud can best be harnessed.

Whether the software and services will add sufficient stickiness to HP’s hardware business remains to be seen, but there’s no question that HP is looking to maintain existing revenue streams while establishing new ones.


Networking Vendors Tilt at ONF Windmill

Closely following the latest developments and continuing progress of software-defined networking (SDN), I am reminded of what somebody who shall remain nameless said not long ago about why he chose to leave Cisco to pursue his career elsewhere.

He basically said that Cisco, as a huge networking company, is having trouble reconciling itself to the reality that the growing force known as cloud computing is not “network centric.” His words stuck with me, and I’ve been giving them a lot of thought since then.

All Computing Now

His opinion was validated earlier this week at a NetEvents symposium in Garmisch, Germany, where Dan Pitt, executive director of the Open Networking Foundation (ONF), made some statements about SDN that, while entirely consistent with what we’ve heard before from that community’s most fervent proponents, also seemed surprisingly provocative. Quoting Pitt, from a blog post published at ZDNet UK:

“In future, networking will become just an integral part of computing, using same tools as the rest of computing. Enterprises will get out of managing plumbing, operators will become software companies, IT will add more business value, and there will be more network startups from Generation Y.”

Pitt was asked what impact this architectural shift would have on network performance. He said that a 30,000-user campus could be supported by a four-year-old Dell PC.

Redefining Architecture, Redefining Value

Naturally, networking vendors can’t be elated at that prospect. Under the SDN master plan, the intelligence (and hence the value) of switching and routing gets moved to a server, or to a cluster of servers, on the edge of the network. Whether this is done with OpenFlow, Open vSwitch, or some other mechanism between the control plane and the switch doesn’t really matter in the big picture. What matters is that networking architectures will be redefined, and networking value will migrate into (and be subsumed within) a computing paradigm. Not to put too fine a point on it, but networking value will be inherent in applications and control-plane software, not in the dumb, physical hardware that will be relegated to shunting packets on the network.
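
To make that division of labor concrete, here is a deliberately simplified Python sketch, my own abstraction rather than any vendor’s product: a logically centralized controller computes forwarding state, while the switches do nothing but match packets against the flow entries pushed down to them.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class FlowEntry:
    match_dst: str  # destination address to match on
    out_port: int   # port to forward matching packets to

@dataclass
class Switch:
    """A 'dumb' forwarding element: no local routing logic, just a flow table."""
    name: str
    flow_table: list = field(default_factory=list)

    def install(self, entry):
        self.flow_table.append(entry)

    def forward(self, dst):
        for entry in self.flow_table:
            if entry.match_dst == dst:
                return entry.out_port
        return None  # table miss: in a real SDN, punt to the controller

class Controller:
    """Centralized control plane: computes paths and pushes flow entries."""
    def __init__(self, switches):
        self.switches = switches

    def provision_path(self, dst, hops):
        # hops is a list of (switch_name, egress_port) pairs along the path
        for switch_name, port in hops:
            self.switches[switch_name].install(FlowEntry(dst, port))

switches = {"s1": Switch("s1"), "s2": Switch("s2")}
controller = Controller(switches)
controller.provision_path("10.0.0.5", [("s1", 2), ("s2", 1)])
print(switches["s1"].forward("10.0.0.5"))  # -> 2
```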

At that same NetEvents symposium in Germany, a Computerworld UK story quoted Pitt saying something very similar to, though perhaps less eloquent than, what Berkeley professor and Nicira co-founder Scott Shenker said about network-protocol complexity.

Said Pitt:

“There are lots of networking protocols which make it very labour intensive to manage a network. There are too many “band aids” being used to keep a network working, and these band aids can actually cause many of the problems elsewhere in the network.”

Politics of ONF

I’ve written previously about the political dynamics of the Open Networking Foundation (ONF).

Just to recap, if you look at the composition of the board of directors at the ONF, you’ll know all you need to know about who wields power in that organization. The ONF board members are Google, Yahoo, Verizon, Deutsche Telekom, NTT, and Microsoft. Make no mistake about Microsoft’s presence. It is there as a cloud service provider, not as a vendor of technology products.

The ONF is run by large cloud service providers, and it’s run for large cloud service providers, though it’s conceivable that much of what gets done in the ONF will have applicability and value to cloud shops of smaller size and stature. I suppose it’s also conceivable that some of the ONF’s works will prove valuable at some point to large enterprises, though it should be noted that the enterprise isn’t a constituency that is foremost in the ONF’s mind.

Vendors Not Driving

One thing is certain: Networking vendors are not steering the ONF ship. I’ve written that before, and I’ll no doubt write it again. In fact, I’ll quote Dan Pitt to that effect right now:

“No vendors are allowed on the (ONF) board. Only the board can found a working group, approve standards, and appoint chairs of working groups. Vendors can be on the groups but not chair them. So users are in the driving seat.”

And those users — really the largest of the cloud service providers — aren’t about to move over. In fact, the power elite that governs the ONF has a definite vision in mind for the future of networking, a future that — as we’ve already seen — will make networking subservient to applications, programmability, and computing.

Transition on the Horizon

As the SDN vision moves downstream from the largest service providers, such as those who run the show at the ONF, to smaller service providers and then to large enterprises, networking companies will have to transform themselves into software vendors — with software business models.

Can they do that? Some of them probably can, but others — including probably the largest of all — will have a difficult time making the transition, prisoners of their own past success and circumscribed by the classic “innovator’s dilemma.” Cisco, a networking colossus, has built a thriving franchise and dominant market position, replete with a full-fledged business model and an enormous sales machine. It will be hard to move away from a formula that’s filled the coffers all these years.

Still, move they must, though timing, as it often does, will count for a lot. The SDN wave won’t inundate the marketplace overnight, but, regardless of the underlying protocols and mechanisms that might run alongside or supersede OpenFlow, SDN seems set to eventually win adherents in CFO and CIO offices beyond the realm of the companies represented on the ONF’s board of directors. It will take some time, probably many years, but it’s a movement that will gain followers and momentum as it delivers quantifiable business benefits to those that adopt it.

Enterprise As Last Redoubt

The enterprise will be the last redoubt of conventional networking infrastructure, and it’s not difficult to envision Cisco doing everything in its power to keep it that way for as long as possible. Expect networking’s old guard to strongly resist the siren song of SDN. That’s only natural, even if — in the very long haul — it seems a vain pursuit and, ultimately, a losing battle.

At this point, I just want to emphasize that SDN need not lead to the commoditization of networking. Granted, it might lead to the commoditization of certain types of networking hardware, but there’s still value, much of it proprietary, that software-centric networking vendors can bring to the SDN table. But, as I said earlier, for many vendors that will mean a shift in business model, product focus, and go-to-market strategy.

In that Computerworld piece, some wonder whether networking vendors could prevent the rise of software-defined networking by refusing to play along.

Not Going Away

Again, I can easily imagine the vendors slowing and impeding the ascent of SDN within enterprises, but there’s absolutely no way for them to forestall its adoption at the major service providers represented by the ONF board members. Those players have the capital and the operational resources, to say nothing of the business motivation, to roll their own switches, perhaps with the help of ODMs, and to program their own applications and networks. That train has left the station, and it can’t be recalled by even the largest of networking vendors, who really have no leverage or say in the matter. They can play along and try to find a niche where they can continue to add value, or they can dig in their heels and get circumvented entirely. It’s their choice.

Either way, the tension between the ONF and the traditional networking vendors is palpable. In the IETF, the vendors are casting glances and sometimes aspersions at the ONF, trying to figure out how they can mount a counterattack. The battle will be joined, but the ONF rules its own roost — and it isn’t going away.

Peeling the Nicira Onion

Nicira emerged from pseudo-stealth yesterday, drawing plenty of press coverage in the process. “Network virtualization” is the concise, two-word marketing message the company delivered, on its own and through the analysts and journalists who greeted its long-awaited official arrival on the networking scene.

The company’s website opened for business this week replete with a new look and an abundance of new content. Even so, the content seemed short on hard substance, and those covering the company’s launch interpreted Nicira’s message in a surprisingly varied manner, somewhat like blind men groping different parts of an elephant. (Onion in the title, now an elephant; I’m already mixing flora and fauna metaphors.)

VMware of Networking Ambiguity

Many made the point that Nicira aims to become the “VMware of networking.” Interestingly, Big Switch Networks has aspirations to wear that crown, asserting on its website that “networking needs a VMware.” The theme also has been featured in posts on Network Heresy, Nicira CTO Martin Casado’s blog. He and his colleagues have written alternately that networking both doesn’t and does need a VMware. Confused? That’s okay. Many are in the same boat . . . or onion field, as the case may be.

The point Casado and company were trying to make is that network virtualization, while seemingly overdue and necessary, is not the same as server virtualization. As stated in the first in that series of posts at Network Heresy:

“Virtualized servers are effectively self contained in that they are only very loosely coupled to one another (there are a few exceptions to this rule, but even then, the groupings with direct relationships are small). As a result, the virtualization logic doesn’t need to deal with the complexity of state sharing between many entities.

A virtualized network solution, on the other hand, has to deal with all ports on the network, most of which can be assumed to have a direct relationship (the ability to communicate via some service model). Therefore, the virtual networking logic not only has to deal with N instances of N state (assuming every port wants to talk to every other port), but it has to ensure that state is consistent (or at least safely inconsistent) along all of the elements on the path of a packet. Inconsistent state can result in packet loss (not a huge deal) or much worse, delivery of the packet to the wrong location.”
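
The scaling asymmetry described in that passage is easy to quantify. Here is a back-of-the-envelope Python sketch, using the quoted assumption that every port wants to talk to every other port (the small group size for loosely coupled virtual servers is my own illustrative guess):

```python
def server_virt_state(num_vms, group_size=3):
    # Server virtualization: VMs are only loosely coupled, so shared state
    # stays within small groups (group_size is an illustrative assumption).
    return num_vms * (group_size - 1)

def network_virt_state(num_ports):
    # Network virtualization: with every port potentially talking to every
    # other port, pairwise state grows as N * (N - 1).
    return num_ports * (num_ports - 1)

for n in (100, 1000, 10000):
    print(n, server_virt_state(n), network_virt_state(n))

# At 10,000 ports, that is roughly 100 million pairwise relationships to
# keep consistent, versus about 20,000 entries in the loosely coupled case.
```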

In Context of SDN Universe

That issue aside, many writers covering the Nicira launch presented information about the company and its overall value proposition consistently. Some articles were more detailed than others. One at MIT’s Technology Review provided good historical background on how Casado first got involved with the challenge of network virtualization and how Nicira was formed to deliver a solution.

Jim Duffy provided a solid piece touching on the company’s origins, its venture-capital investors, and its early adopters and the problems Nicira is solving for them. He also touched on where Nicira appears to fit within the context of the wider SDN universe, which includes established vendors such as Cisco Systems, HP, and Juniper Networks, as well as startups such as Big Switch Networks, Embrane, and Contextream.

In that respect, it’s interesting to note what Embrane co-founder and President Dante Malagrino told Duffy:

 “The introduction of another network virtualization product is further validation that the network is in dire need of increased agility and programmability to support the emergence of a more dynamic data center and the cloud.”

“Traditional networking vendors aren’t delivering this, which is why companies like Nicira and Embrane are so attractive to service providers and enterprises. Embrane’s network services platform can be implemented within the re-architected approach proposed by Nicira, or in traditional network architectures. At the same time, products that address Layer 2-3 and platforms that address Layer 4-7 are not interchangeable and it’s important for the industry to understand the differences as the network catches up to the cloud.”

What’s Nicira Selling?

All of which brings us back to what Nicira actually is delivering to market. The company’s website offers videos, white papers, and product data sheets addressing the Nicira Network Virtualization Platform (NVP) and its Distributed Network Virtualization Infrastructure (DNVI), but I found the most helpful and straightforward explanations, strangely enough, on the Frequently Asked Questions (FAQ) page.

This is an instance of a FAQ page that actually does provide answers to common questions. We learn, for example, that the key components of the Nicira Network Virtualization Platform (NVP) are the following:

– The Controller cluster, a distributed control system

– The Management software, an operations console

– The RESTful API that integrates into a range of Cloud Management Systems (CMS), including a Quantum plug-in for OpenStack.

Those components, which constitute the NVP software suite, are what Nicira sells, albeit in a service-oriented monthly subscription model that scales per virtual network port.
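
Nicira’s public materials don’t spell out the API itself, so the following Python sketch is only a guess at the general shape a RESTful network-virtualization interface implies; the host, paths, payload fields, and credentials are all invented for illustration.

```python
import requests

# Hypothetical controller address, paths, fields, and credentials; this is
# not Nicira's documented NVP API.
NVP_API = "https://nvp-controller.example.net/api"
session = requests.Session()
session.auth = ("admin", "secret")  # placeholder credentials

# Create a logical network, decoupled from the physical topology.
net = session.post(
    NVP_API + "/logical-networks",
    json={"name": "tenant-42-net"},
).json()

# Attach a VM's virtual interface to a logical port on that network.
port = session.post(
    NVP_API + "/logical-networks/%s/ports" % net["id"],
    json={"attachment": "vif-0a1b2c"},
).json()
print(port["id"])
```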

Open vSwitch, Minor Role for OpenFlow 

We then learn that the NVP communicates with the physical network indirectly, through Open vSwitch. Ivan Pepelnjak (I always worry that I’ll misspell his name, but not the Ivan part) provides further insight into how Nicira leverages Open vSwitch. As Nicira notes, the NVP Controller communicates directly with Open vSwitch (OVS), which is deployed in server hypervisors. The server hypervisor then connects to the physical network and end hosts connect to the vswitch. As a result, NVP does not talk directly to the physical network.

As for OpenFlow, its role is relatively minor. As Nicira explains: “OpenFlow is the communications protocol between the controller and OVS instances at the edge of the network. It does not directly communicate with the physical network elements and is thus not subject to scaling challenges of hardware-dependent, hop-by-hop OpenFlow solutions.”
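
That edge-only design is what sidesteps the hardware-scaling problem: the vswitch in each hypervisor encapsulates tenant traffic in tunnels to other hypervisors, so the physical fabric sees only tunnel endpoints and carries no per-tenant state. A conceptual Python sketch, my own simplification rather than Nicira’s code:

```python
from dataclasses import dataclass

@dataclass
class TunneledPacket:
    outer_src: str      # hypervisor tunnel endpoint (all the fabric sees)
    outer_dst: str
    tenant_id: int      # logical-network ID carried in the encapsulation
    inner_frame: bytes  # the tenant's original frame, untouched

# Controller-supplied mapping of logical ports to hypervisor addresses
# (names and addresses are illustrative).
PORT_LOCATION = {"vm-a": "192.0.2.11", "vm-b": "192.0.2.12"}

def vswitch_encap(src_hypervisor, dst_port, tenant_id, frame):
    """The edge vswitch wraps the tenant frame in a tunnel header; the
    physical network just routes between hypervisors and never needs to
    understand the logical networks riding on top."""
    return TunneledPacket(src_hypervisor, PORT_LOCATION[dst_port],
                          tenant_id, frame)

packet = vswitch_encap("192.0.2.11", "vm-b", tenant_id=42, frame=b"...")
print(packet.outer_dst)  # 192.0.2.12 is all the fabric needs to know
```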

Questions About L4-7 Network Services

Nicira sees its Network Virtualization Platform delivering value in a number of different contexts, including:

– hardware-independent virtual networks

– virtual-machine mobility across subnet boundaries (while maintaining L2 adjacency)

– edge-enforced, dynamic QoS and security policies (filters, tagging, policy routing, etc.) bound to virtual ports

– centralized, system-wide visibility and monitoring

– address-space isolation (L2 and L3)

– Layer 4-7 services

Now that last capability provokes some questions that cannot be answered in the FAQ.

Nicira says its NVP can integrate with third-party Layer 3-7 services, but it also says services can be created by Nicira or its customers.  Notwithstanding Embrane’s perfectly valid contention that its network-services platform can be delivered in conjunction with Nicira’s architectural model, there is a distinct possibility Nicira might have other plans.

This is something that bears watching, not only by Embrane but also by longstanding Layer 4-7 service-delivery vendors such as F5 Networks. At this point, I don’t pretend to know how far or how fast Nicira’s ambitions extend, but I would imagine they’ll be demarcated, at least partly, by the needs and requirements of its customers.

Nicira’s Early Niche

Speaking of which, Nicira has an impressive list of early adopters, including AT&T, eBay, Fidelity Investments, Rackspace, Deutsche Telekom, and Japan’s NTT. You’ll notice a commonality in the customer profiles, even if their application scenarios vary. Basically, these all are public cloud providers, of one sort or another, and they have what are called “web-scale” data centers.

While Nicira and Big Switch Networks both are purveyors of “network virtualization”  and controller platforms — and both proclaim that networking needs a VMware — they’re aiming at different markets. Big Switch is focusing on the enterprise and the private cloud, whereas Nicira is aiming for large public cloud-service providers or big enterprises that provide public-cloud services (such as Fidelity).

Nicira has taken care in selecting its market. An earlier post on Casado’s blog suggests that he and Nicira believe that OpenFlow-based SDNs might be a solution in search of a problem already being addressed satisfactorily within many enterprises. I’m sure the team at Big Switch would argue otherwise.

At the same time, Nicira probably has conceded that it won’t be patronized by Open Networking Foundation (ONF) board members such as Google, Facebook, and Microsoft, each of which is likely to roll its own network-virtualization systems, controller platforms, and SDN applications. These companies not only have the resources to do so, but they also have a business imperative that drives them in that direction. This is especially true for Google, which views its data-center infrastructure as a competitive differentiator.

Telcos Viable Targets

That said, I can see at least a couple of ONF board members that might find Nicira’s pitch compelling. In fact, one, Deutsche Telekom, already is on board, at least in part, and perhaps Verizon will come along later. The telcos are more likely than a Google to need assistance with SDN rollouts.

One last note on Nicira before I end this already-prolix post. In the feature article at Technology Review, Casado says it’s difficult for Nicira to impress a layperson with its technology, that “people do struggle to understand it.” That’s undoubtedly true, but Nicira needs to keep trying to refine its message, for its own sake as well as for the sake of prospective customers and other stakeholders.

That said, the company is stocked with impressive minds, on both the business and technology sides of the house, and I’m confident it will get there.

Embrane Emerges from Stealth, Brings Heleos to Light

I had planned to write about something else today — and I still might get around to it — but then Embrane came out of stealth mode. I feel compelled to comment, partly because I have written about the company previously, but also because what Embrane is doing deserves notice.

Embrane’s Heleos

With regard to the aforementioned post, which dealt with Dell acquisition candidates in Layer 4-7 network services, I am now persuaded that Dell is more likely to pull the trigger on a deal for an A10 Networks, let’s say, than it is to take a more forward-looking leap at venture-funded Embrane. That’s because I now know about Embrane’s technology, product positioning, and strategic direction, and also because I strongly suspect that Dell is looking for a purchase that will provide more immediate payback within its installed base and current strategic orientation.

Still, let’s put Dell aside for now and focus exclusively on Embrane.

The company’s founders, former Andiamo-Cisco lads Dante Malagrinò and Marco Di Benedetto, have taken their company out of the shadows and into the light with their announcement of Heleos, which Embrane calls “the industry’s first distributed software platform for virtualizing layer 4-7 network services.” What that means, according to Embrane, is that cloud service providers (CSPs) and enterprises can use Heleos to build more agile networks to deliver cloud-based infrastructure as a service (IaaS). I can perhaps see the qualified utility of Heleos for the former, but I think the applicability and value for the latter constituency are more tenuous.

Three Wise Men

But I am getting ahead of myself, putting the proverbial cart before the horse. So let’s take a step back and consult some learned minds (including an “ethereal” one) on what Heleos is, how it works, what it does, and where and how it might confer value.

Since the Embrane announcement hit the newswires, I have read expositions on the company and its new product from The 451 Group’s Eric Hanselman, from rock-climbing Ivan Pepelnjak (technical director at NIL Data Communications), and from EtherealMind’s Greg Ferro. Each has provided valuable insight and analysis. If you’re interested in learning about Embrane and Heleos, I encourage you to read what they’ve written on the subject. (Only one of Hanselman’s two 451 Group pieces is publicly available online at no charge.)

Pepelnjak provides an exemplary technical description and overview of Heleos. He sets out the problem it’s trying to solve, considers the pros and cons of the alternative solutions (hardware appliances and virtual appliances), expertly explores Embrane’s architecture, examines use cases, and concludes with a tidy summary. He ultimately takes a positive view of Heleos, depicting Embrane’s architecture as “one of the best proposed solutions” he’s seen hitherto for scalable virtual appliances in public and private cloud environments.

Limited Upside

Ferro reaches a different conclusion, but not before setting the context and providing a compelling description of what Embrane does. After considering Heleos, Ferro ascertains that its management of IP flows equates to “flow balancing as a form of load balancing.” From all that I’ve read and heard, it seems an apt classification. He also notes that Embrane, while using flow management, is not an “OpenFlow/SDN business.” Although I see conceptual similarities between what Embrane is doing and what OpenFlow does, I agree with Ferro, if only because, as I understand it, OpenFlow reaches no higher than the network layer. I suppose the same is true for SDN, but this is where ambiguity enters the frame.
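
Ferro’s “flow balancing as a form of load balancing” lends itself to a quick illustration. Here is a minimal Python sketch of the general idea (mine, not Embrane’s implementation): hash each flow’s 5-tuple so that every packet belonging to a given flow consistently lands on the same back-end instance.

```python
import hashlib

BACKENDS = ["instance-a", "instance-b", "instance-c"]  # hypothetical pool

def pick_backend(src_ip, dst_ip, src_port, dst_port, protocol):
    """Hash the 5-tuple so all packets of one flow map to one backend."""
    key = "%s|%s|%d|%d|%s" % (src_ip, dst_ip, src_port, dst_port, protocol)
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return BACKENDS[digest % len(BACKENDS)]

# Every packet of this TCP flow gets the same answer:
print(pick_backend("10.0.0.7", "192.0.2.10", 53120, 443, "tcp"))
```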

Even as I wrote this piece, there was a kerfuffle on Twitter as to whether or to what extent Embrane’s Heleos can be categorized as the latest manifestation of SDN. (Hours later, at post time, this vigorous exchange of views continues.)

That’s an interesting debate — and I’m sure it will continue — but I’m most intrigued by the business and market implications of what Embrane has delivered. On that score, Ferro sees Embrane’s platform play as having limited upside, restricted to large cloud-service providers with commensurately large data centers. He concludes there’s not much here for enterprises, a view with which I concur.

Competitive Considerations

Hanselman covers some of the same ground that Ferro and Pepelnjak traverse, but he also expends some effort examining the competitive landscape that Embrane is entering. Because Embrane is delivering a virtualization platform for network services, it will be up against Layer 4-7 stalwarts such as F5 Networks, A10 Networks, Riverbed/Zeus, Radware, Brocade, Citrix, and Cisco, among others. F5, the market leader, already recognizes and is acting upon some of the market and technology drivers that doubtless inspired the team that brought Heleos to fruition.

With that in mind, I wish to consider Embrane’s business prospects.

Embrane closed a Series B round of $18 million in August. It was led by New Enterprise Associates and included the involvement of Lightspeed Venture Partners and North Bridge Venture Partners, both of which participated in a $9-million Series A round in March 2010.

To determine whether Embrane is a good horse to back (hmm, what’s with the horse metaphors today?), one has to consider the applicability of its technology to its addressable market — very large cloud-service providers — and then also project its likelihood of providing a solution that is preferable and superior to alternative approaches and competitors.

Counting the Caveats

While I tend to agree with those who believe Embrane will find favor with at least some large cloud-service providers, I wonder how much favor there is to find. There are three compelling caveats to Embrane’s commercial success:

  1. L4-7 network services, while vitally important to cloud service providers and large enterprises, represent a much smaller market than L2-3 networking, virtualized or otherwise. Just as a benchmark, Dell’Oro reported earlier this year that the L2-3 Ethernet switch market would be worth approximately $25 billion in 2015, with the L4-7 application delivery controller (ADC) market expected to reach more than $1.5 billion, though the virtual-appliance segment is expected to show the most growth in that space. Some will say, accurately, that L4-7 network services are growing faster than L2-3 networking. Even so, the gap in size remains notable, which is why SDN and OpenFlow have been drawing so much attention in an increasingly virtualized and “cloudified” world.
  2. Embrane’s focus on large-scale cloud service providers, and not on enterprises (despite what’s stated in the press release), while rational and perfectly understandable, further circumscribes its addressable market.
  3. F5 Networks is a tough competitor, more agile and focused than a Cisco Systems, and will not easily concede customers or market share to a newcomer. Embrane might have to pick up scraps that fall to the floor rather than feasting at the head table. At this point, I don’t think F5 is concerned about Embrane, though that could change if Embrane can use NaviSite — its first customer, now owned by TimeWarner Cable — as a reference account and validator for further business among cloud service providers.

Notwithstanding those reservations, I look forward to seeing more of Embrane as we head into 2012. The company has brought a creative approach and an innovative platform architecture to market, a higher-layer counterpart and analog to what’s happening further down the stack with SDN and OpenFlow.

Rackspace’s Bridge Between Clouds

OpenStack has generated plenty of sound and fury during the past several months, and, with sincere apologies to William Shakespeare, there’s evidence to suggest the frenzied activity actually signifies something.

Precisely what it signifies, and how important OpenStack might become, is open to debate, of which there has been no shortage. OpenStack is generally depicted as an open-source cloud operating system, but that might be a generous interpretation. On the OpenStack website, the following definition is offered:

“OpenStack is a global collaboration of developers and cloud computing technologists producing the ubiquitous open source cloud computing platform for public and private clouds. The project aims to deliver solutions for all types of clouds by being simple to implement, massively scalable, and feature rich. The technology consists of a series of interrelated projects delivering various components for a cloud infrastructure solution.”

Just for fun and giggles (yes, that phrase has been modified so as not to offend reader sensibilities), let’s parse that passage, shall we? In the words of the OpenStackers themselves, their project is an open-source cloud-computing platform for public and private clouds, and it is reputedly simple to implement, massively scalable, and feature rich. Notably, it consists of a “series of interrelated projects delivering various components for a cloud infrastructure solution.”

Simple for Some

Given that description, especially the latter reference to interrelated projects spawning various components, one might wonder exactly how “simple” OpenStack is to implement and by whom. That’s a question others have raised, including David Linthicum in a recent piece at InfoWorld. In that article, Linthicum notes that undeniable vendor enthusiasm and burgeoning market momentum accrue to OpenStack — the community now has 138 member companies (and counting), including big-name players such as HP, Dell, Intel, Rackspace, Cisco, Citrix, Brocade, and others — but he also offers the following caveat:

“So should you consider OpenStack as your cloud computing solution? Not on its own. Like many open source projects, it takes a savvy software vendor, or perhaps cloud provider, to create a workable product based on OpenStack. The good news is that many providers are indeed using OpenStack as a foundation for their products, and most of those are working, or will work, just fine.”

Creating Value-Added Services

Meanwhile, taking issue with a recent InfoWorld commentary by Savio Rodrigues — who argued that OpenStack will falter while its open-source counterpart Eucalyptus will thrive — James Urquhart, formerly of Cisco and now VP of product strategy at enStratus, made this observation:

“OpenStack is not a cloud service, per se, but infrastructure automation tuned to cloud-model services, like IaaS, PaaS and SaaS. Install OpenStack, and you don’t get a system that can instantly bill customers, provide a service catalog, etc. That takes additional software.

What OpenStack represents is the commodity element of cloud services: the VM, object, server image and networking management components. Yeah, there is a dashboard to interact with those commodity elements, but it is not a value-add capability in-and-of itself.

What HP, Dell, Cisco, Citrix, Piston, Nebula and others are doing with OpenStack is creating value-add services on top (or within) the commodity automation. Some focus more on “being commodity”, others focus more on value-add, but they are all building on top of the core OpenStack projects.”

New Revenue Stream for Rackspace

All of which brings us, in an admittedly roundabout fashion, to Rackspace’s recent announcement of its Rackspace Cloud Private Edition, a packaged version of OpenStack components that can be used by enterprises for private-cloud deployments. This move makes sense for Rackspace on a couple of levels.

First off, it opens up a new revenue stream for the company. While Rackspace won’t try to make money on the OpenStack software or the reference designs — featuring a strong initial emphasis on Dell servers and Cisco networking gear for now, though bare-bones OpenCompute servers are likely to be embraced before long —  it will provide value-add, revenue-generating managed services to customers of Rackspace Cloud Private Edition. These managed services will comprise installation of OpenStack updates, analysis of system issues, and assistance with specific questions relating to systems engineering. Some security-related services also will be offered. While the reference architecture and the software are available now, Rackspace’s managed services won’t be available until January.

Building a Bridge

The launch of Rackspace Cloud Private Edition is a diversification initiative for Rackspace, which hitherto has made its money by hosting and managing applications and computing services for others in its own data centers. The OpenStack bundle takes it into the realm of providing managed services in its customers’ data centers.

As mentioned above, this represents a new revenue stream for Rackspace, but it also provides a technological bridge that will allow customers who aren’t ready for multi-tenant public cloud services today to make an easy transition to Rackspace’s data centers at some future date. It’s a smart move, preventing prospective customers from moving to another platform for private cloud deployment, ensuring in the process that said customers don’t enter the orbit of another vendor’s long-term gravitational pull.

The business logic coheres. For each customer engagement, Rackspace gets a payoff today, and potentially a larger one at a later date.

Amazon’s Advantageous Model for Cloud Investments

While catching up with industry developments earlier this week, I came across a Reuters piece on Amazon’s now well-established approach toward investments in startup companies. If you haven’t seen it, I recommend that you give it a read.

As its Amazon Web Services (AWS) cloud operations approach the threshold of a $1-billion business, the company once known exclusively as an online bookshop continues to search for money-making opportunities well beyond Internet retailing.

Privileged Insights

An article at GigaOM by Barb Darrow quotes Amazon CEO Jeff Bezos explaining that his company stumbled unintentionally into the cloud-services business, but the Reuters item makes clear that Amazon is putting considerably more thought into its cloud endeavors these days. In fact, Amazon’s investment methodology, which sees it invest in startup companies that are AWS customers, is an exercise in calculated risk mitigation.

That’s because, before making those investments, Amazon gains highly detailed and extremely valuable insights into startup companies’ dynamic requirements for computing infrastructure and resources. It can then draw inferences about the popularity and market appeal of the services those companies supply. All in all, it seems like an inherently logical and sound investment model, one that gives Amazon privileged insights into companies before it decides to bet on their long-term health and prosperity.

That fact has not been lost on a number of prominent venture-capital firms, which have joined with Amazon to back the likes of Yieldex, Sonian, Engine Yard, and Animoto, all of whom, at one time or another, were AWS customers.

Mutual Benefits

Now that nearly every startup is likely to begin its business life using cloud-based computing infrastructure, either from AWS or another cloud purveyor, I wonder whether Amazon’s investment model might be mimicked by others with similar insights into their business customers’ resource utilization and growth rates.

There’s no question that such investments deliver mutual benefit. The startup companies get the financial backing to accelerate their growth, establish and maintain competitive differentiation, and speed toward market leadership. Meanwhile, Amazon and its VC partners get stakes in fast-growing companies that seem destined for bigger things, including potentially lucrative exits. Amazon also gets to maintain relationships with customers that might otherwise outgrow AWS and leave the relationship behind. Last but not least, the investment program serves a promotional purpose for Amazon, demonstrating a commitment and dedication to its AWS customers that can extend well beyond operational support.

It isn’t just Amazon that can derive an investment edge from how its customers are using its cloud services. SaaS providers such as Salesforce and Google also can gain useful insights into how customers and customer segments are faring during good and bad economic times, and PaaS providers stand to derive potentially useful knowledge about how and where customers are adopting their services.

Various Scenarios

Also on the SaaS side of the ledger, in the realm of social networking — I’m thinking of Facebook, but others fit the bill — subscriber data can be mined for the benefit of advertisers seeking to deliver targeted campaigns to specific demographic segments.

In a different vein, Google’s search business could potentially give it the means to develop high-probability or weighted analytics based on the prevalence, intensity, nature, and specificity of search queries. Such data could be applied to and mined for probability markets. One application scenario might involve insiders searching online to ascertain whether prior knowledge of a transaction has been leaked to the wider world. By searching for the terms in question, they would effectively signal that an event might take place. (This would be more granular than Google Trends, and different from it in other respects, too.) There are a number of other examples and scenarios that one could envision.

Getting back to Amazon, though, what it is doing with its investment model clearly makes a lot of sense, giving it unique insights and a clear advantage as it weighs where to place its bets. As I said, it would be no surprise to see other cloud providers, even those not of the same scale as Amazon, consider similar investment models.