Monthly Archives: November 2011

Revisiting the Nicira Break-In

While doing research on my last post, I spent some time on Martin Casado’s thought-provoking blog, Network Heresy. He doesn’t generate posts prolifically — he’s preoccupied with other matters, including his job as chief technology officer at Nicira Networks — but his commentaries typically are detailed, illuminating, intelligent, and invariably honest.

One of his relatively recent posts, Origins and Evolution of OpenFlow/SDN, features a video of his keynote at the Open Networking Summit, where, as the title of the blog post advertises, he explained how SDNs and OpenFlow have advanced. His salient point is that it’s the community, not the technology, that makes the SDN movement so meaningful. The technology, he believes, will progress as it should, but the key to SDN’s success will be the capacity of the varied community of interests to cohere and thrive. It’s a valid point.

Serious Work

That said, that’s not the only thing that caught my interest in the keynote video. Early in that presentation, speaking about how he and others got involved with SDNs and OpenFlow, he talks about his professional past. I quote directly:

“Back in 2002-2003, post-9/11, I used to work for the feds. I worked in the intelligence sector. The team I worked with, we were responsible for auditing and securing some of the most sensitive networks in the United States. This is pretty serious stuff. Literally, if these guys got broken into, people died . . . We took our jobs pretty seriously.”

It doesn’t surprise me that OpenFlow-enabled SDNs might have had at least some of their roots in the intelligence world. Many technologies have been conceived and cultivated in the shadowy realms of defense and intelligence agencies. The Internet itself grew from the Advanced Research Projects Agency Network (ARPANET), which was funded by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense.

Old-School Break-In

When I heard those words, however, I was reminded of the armed break-in that Nicira suffered last spring, first reported in July in a Newsweek cover story on the so-called “Code War” and cyber-espionage. What was striking about the breach at Nicira, both in and of itself and within the context of the Newsweek article, is that it was a physical, old-school break-in, not a cyber attack. An armed burglar wearing a ski mask broke into Nicira Networks and made his way purposefully to the desk of “one of the company’s top engineers.” The perpetrator then grabbed a computer, apparently containing source code, and took flight.

Palo Alto constabulary portrayed the crime as a bog-standard smash and grab, but “people close to the company” and national-intelligence investigators suspect it was a professional job executed by someone with ties to Russia or China. The objective, as one might guess, was to purloin intellectual property.

The involvement of national-intelligence investigators in the case served as a red flag signaling that the crime was not committed by a crank-addled junkie hoping to sell a stolen computer. There’s a bigger story, and Newsweek touched on it before heading off in a different direction to explore cyber espionage, hack attacks, and the code-warrior industry.

Nicira’s Stealth Mode

Last month, the New York Times mentioned the Nicira break-in in an article titled “What Is Nicira Up To?”

Indeed, that is a fair question to ask. There still isn’t much meat on the bones of Nicira’s website, though we know the company is developing a network-virtualization platform that decouples network services from the underlying hardware, “like a server hypervisor separates physical servers from virtual machines.”

It’s essentially software-defined networking (SDN), with OpenFlow in the mix, though Nicira refrained assiduously from using those words in its marketing messages. On the other hand, as we’ve already seen, CTO Martin Casado isn’t shy about invoking the SDN acronym, or providing learned expositions on its underlying technologies, when addressing technical audiences.

Mystery Remains 

Let’s return to the break-in, however, because the New York Times provided some additional information. We learn that a significant amount of Nicira’s intellectual property was on the purloined computer, though CEO Steven Mullaney said it was “very early stuff, nothing like what we’ve got now.”

Still, the supposition remained that the thief was an agent of a foreign government. We also learned more about Casado’s professional background and about the genesis of the technology that eventually would be developed further and commercialized at Nicira. Casado’s government work took place at Lawrence Livermore National Laboratory, where he was asked by U.S. intelligence agencies to design a global network that would dynamically change its levels of security and authorization.

We might never discover who broke into Nicira last May. As the Newsweek story recounted, government investigators have advised those familiar with the incident not to discuss it. Questions remain, but the mystery is likely to remain unsolved, at least publicly.

Exploring OpenStack’s SDN Connections

Pursuant to my last post, I will now examine the role of OpenStack in the context of software-defined networking (SDN). As you will recall, it was one of the alternative SDN-enabling technologies mentioned in a recent article and sidebar at Network World.

First, though, I want to note that, contrary to the concerns I expressed in the preceding post, I wasn’t distracted by a shiny object before getting around to writing this installment. I feared I would be, but my powers of concentration and focus held sway. It’s a small victory, but I’ll take it.

Road to Quantum

Now, on to OpenStack, which I’ve written about previously, though admittedly not in the context of SDNs. As for how networking evolved into a distinct facet of OpenStack, Martin Casado, chief technology officer at Nicira, offers a thorough narrative at the Open vSwitch website.

Casado begins by explaining that OpenStack is a “cloud management system (CMS) that orchestrates compute, storage, and networking to provide a platform for building on demand services such as IaaS.” He notes that OpenStack’s primary components were OpenStack Compute (Nova), OpenStack Storage (Swift), and OpenStack Image Services (Glance), and he also provides an overview of their respective roles.

Then he asks, as one might, what about networking? At this point, I will quote directly from his Open vSwitch post:

“Noticeably absent from the list of major subcomponents within OpenStack is networking. The historical reason for this is that networking was originally designed as a part of Nova which supported two networking models:

● Flat Networking – A single global network for workloads hosted in an OpenStack Cloud.

● VLAN based Networking – A network segmentation mechanism that leverages existing VLAN technology to provide each OpenStack tenant, its own private network.

While these models have worked well thus far, and are very reasonable approaches to networking in the cloud, not treating networking as a first class citizen (like compute and storage) reduces the modularity of the architecture.”
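To make Casado’s modularity point concrete: in the Nova of that era, choosing between these two models came down to a deployment-wide configuration flag rather than a pluggable service. A minimal sketch, assuming Diablo-era flag names (the exact flags vary by release):

--network_manager=nova.network.manager.FlatManager

or, for per-tenant VLAN segmentation:

--network_manager=nova.network.manager.VlanManager
--vlan_start=100

Either way, the setting applied to the entire deployment; the network model was baked in, not something a tenant or a service could select.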

As a result of Nova’s networking shortcomings, which Casado enumerates in detail, Quantum was developed as a standalone networking component.

Network Connectivity as a Service

The OpenStack wiki defines Quantum as “an incubated OpenStack project to provide ‘network connectivity as a service’ between interface devices (e.g., vNICs) managed by other Openstack services (e.g., nova).” On that same wiki, Quantum is touted as being able to support advanced network topologies beyond the scope of Nova’s FlatManager or VlanManager; as enabling anyone to “build advanced network services (open and closed source) that plug into Openstack networks”; and as enabling new plugins (open and closed source) that introduce advanced network capabilities.

Okay, but how does it relate specifically to SDNs? That’s a good question, and James Urquhart has provided a clear and compelling answer, which later was summarized succinctly by Stuart Miniman at Wikibon. What Urquhart wrote actually connects the dots between OpenStack’s Quantum and OpenFlow-enabled SDNs. Here’s a salient excerpt:

“. . . . how does OpenFlow relate to Quantum? It’s simple, really. Quantum is an application-level abstraction of networking that relies on plug-in implementations to map the abstraction(s) to reality. OpenFlow-based networking systems are one possible mechanism to be used by a plug-in to deliver a Quantum abstraction.

OpenFlow itself does not provide a network abstraction; that takes software that implements the protocol. Quantum itself does not talk to switches directly; that takes additional software (in the form of a plug-in). Those software components may be one and the same, or a Quantum plug-in might talk to an OpenFlow-based controller software via an API (like the Open vSwitch API).”
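To illustrate the division of labor Urquhart describes, here is a minimal, hypothetical sketch in Python (Python 2, in keeping with the era). Nothing below reproduces Quantum’s actual plug-in contract; the class, method, and endpoint names are all assumptions, meant only to show how an abstract “create a network, plug in a vNIC” API can delegate to an OpenFlow controller behind the scenes:

import json
import urllib2
import uuid

class OpenFlowQuantumPlugin(object):
    """Hypothetical plug-in mapping Quantum-style abstractions onto a
    REST-speaking OpenFlow controller. Illustrative only."""

    def __init__(self, controller_url):
        # e.g., the management API of an OpenFlow controller
        self.controller_url = controller_url

    def _post(self, path, payload):
        request = urllib2.Request(self.controller_url + path,
                                  json.dumps(payload),
                                  {"Content-Type": "application/json"})
        return urllib2.urlopen(request).read()

    def create_network(self, tenant_id, name):
        # Quantum's "network" is purely logical; the controller decides
        # how to realize it (flow entries, tunnels, VLANs, and so on).
        network_id = str(uuid.uuid4())
        self._post("/networks", {"id": network_id,
                                 "tenant": tenant_id,
                                 "name": name})
        return network_id

    def plug_interface(self, network_id, port_id, vif_id):
        # Attach a vNIC to a logical port; the controller installs the
        # flows that connect that vNIC to the rest of the network.
        self._post("/networks/%s/ports/%s/attach" % (network_id, port_id),
                   {"vif": vif_id})

The point of the sketch is the separation Urquhart identifies: the caller never mentions switches or flow tables, and the plug-in never dictates the abstraction.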

Cisco’s Contribution

So, that addresses the complementary functionality of OpenStack’s Quantum and OpenFlow, but, as Urquhart noted, OpenFlow is just one mechanism that can be used by a plug-in to deliver a Quantum abstraction. Further to that point, bear in mind that Quantum, as recounted on the OpenStack wiki, can be used to “build advanced network services (open and closed source) that plug into OpenStack networks” and to facilitate new plugins that introduce advanced network capabilities.

Consequently, when it comes to using OpenStack in SDNs, OpenFlow isn’t the only complementary option available. In fact, Cisco is in on the action, using Quantum to “develop API extensions and plug-in drivers for creating virtual network segments on top of Cisco NX-OS and UCS.”

Cisco portrays itself as a major contributor to OpenStack’s Quantum, and the evidence seems to support that assertion. Cisco also has indicated qualified support for OpenFlow, so there’s a chance OpenStack and OpenFlow might intersect on a Cisco roadmap. That said, Cisco’s initial OpenStack-related networking forays relate to its proprietary technologies and existing products.

Citrix, Nicira, Rackspace . . . and Midokura

Other companies have made contributions to OpenStack’s Quantum, too. In a post at Network World, Alan Shimel of The CISO Group cites the involvement of Nicira, Cisco, Citrix, Midokura, and Rackspace. From what Nicira’s Casado has written and said publicly, we know that OpenFlow is in the mix there. It seems to be in the picture at Rackspace, too. Citrix has published blog posts about Quantum, including this one, but I’m not sure where the company is going with it, though XenServer, Open vSwitch, and, yes, OpenFlow are likely to be involved.

Finally, we have Midokura, a Japanese company that has a relatively low profile, at least on this side of the Pacific Ocean. According to its website, it was established early in 2010, and it had just 12 employees at the end of April 2011.

If my currency-conversion calculations (from Japanese yen) are correct, Midokura also had about $1.5 million in capital as of that date. Earlier that same month, the company announced seed funding of about $1.3 million. Investors were Bit-Isle, a Japanese data-center company; NTT Investment Partners, an investment vehicle of Nippon Telegraph & Telephone Corp. (NTT); 1st Holdings, a Japanese ISV that specializes in tools and middleware; and various individual investors, including Allen Miner, CEO of SunBridge Corporation.

On its website, Midokura provides an overview of its MidoNet network-virtualization platform, which is billed as providing a solution to the problem of inflexible and expensive large-scale physical networks that tend to lock service providers into a single vendor.

Virtual Network Model in Cloud Stack

In an article published at TechCrunch this spring, at about the time Midokura announced its seed round, the company claimed to be the only one to have “a true virtual network model” in a cloud stack. The TechCrunch piece also said the MidoNet platform could be integrated “into existing products, as a standalone solution, via a NaaS model, or through Midostack, Midokura’s own cloud (IaaS/EC2) distribution of OpenStack (basically the delivery mechanism for Midonet and the company’s main product).”

Although the company was accepting beta customers last spring, it hasn’t updated its corporate blog since December 2010. Its “Events” page, however, shows signs of life, with Midokura indicating that it will be attending or participating in the grand opening of Rackspace’s San Francisco office on December 1.

Perhaps we’ll get an update then on Midokura’s progress.

Vendors Cite Other Paths to SDNs

Jim Duffy at NetworkWorld wrote an article earlier this month on protocol and API alternatives to OpenFlow as software-defined network (SDN) enablers.

It’s true, of course, that OpenFlow is just one mechanism among many that can be used to bring SDNs to fruition. Many of the alternatives cited by Duffy, who quoted vendors and analysts in his piece, have been around longer than OpenFlow. Accordingly, they have been implemented by network-equipment vendors and deployed in commercial networks by enterprises and service providers. So, you know, they have that going for them, and it is not a paltry consideration.

No Panacea

Among the alternatives to OpenFlow mentioned in that article and in a sidebar companion piece were command-line interfaces (CLIs), Simple Network Management Protocol (SNMP), Extensible Messaging and Presence Protocol (XMPP), Network Configuration Protocol (NETCONF), OpenStack, and virtualization APIs in offerings such as VMware’s vSphere.
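Most of these predate OpenFlow as mechanisms for programming network devices. NETCONF, for instance, already offered a structured, transactional alternative to screen-scraping a CLI. A rough sketch using the open-source ncclient library (the host, credentials, and XML payload here are placeholders, and the configuration schema is vendor-specific):

from ncclient import manager  # open-source NETCONF client

# Placeholder payload: the actual XML schema depends on the device vendor.
CONFIG = """<config>
  <interfaces>
    <interface>
      <name>eth0</name>
      <description>configured over NETCONF, not a CLI</description>
    </interface>
  </interfaces>
</config>"""

with manager.connect(host="192.0.2.1", port=830,
                     username="admin", password="secret",
                     hostkey_verify=False) as conn:
    conn.edit_config(target="running", config=CONFIG)

What none of these protocols provides on its own, however, is OpenFlow’s particular trick: direct, standardized control of a switch’s forwarding tables.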

I understand that different applications require different approaches to SDNs, and I’m staunchly in the reality-based camp that acknowledges OpenFlow is not a networking panacea. As I’ve noted previously on more than one occasion, the Open Networking Foundation (ONF), steered by a board of directors representing leading cloud-service operators, has designs on OpenFlow that will make it — at least initially — more valuable to so-called “web-scale” service providers than to enterprises. Purveyors of switches also get short shrift from the ONF.

So, no, OpenFlow isn’t all things to all SDNs, but neither are the alternative APIs and protocols cited in the NetworkWorld articles. Reality, even in the realm of SDNs, has more than one manifestation.

OpenFlow Fills the Void

For the most part, however, the alternatives to OpenFlow have legacies on their side. They’re tried and tested, and they have delivered value in real-world deployments. Then again, those legacies are double-edged swords. One might well ask — and I suppose I’m doing so here — if the foregoing alternatives to OpenFlow were so proficient at facilitating SDNs, then why is OpenFlow the recipient of such perceived need and demonstrable momentum today?

Those pre-existing protocols did many things right, but it’s obvious that they were not perceived to address at least some of the requirements and application scenarios where OpenFlow offers such compelling technological and market potential. The market abhors a vacuum, and OpenFlow has been called forth to fill a need.

Old-School Swagger

Relative to OpenFlow, CLIs seem a particularly poor choice for the realization of SDN-type programmability. In the NetworkWorld companion piece, Arista Networks CEO Jayshree Ullal is quoted as follows:

“There’s more than one way to be open. And there’s more than one way to scale. CLIs may not be a programmable interface with a (user interface) we are used to; but it’s the way real men build real networks today.”

Notwithstanding Ullal’s blatant appeal to engineering machismo, evoking a networking reprise of Saturday Night Live’s old “¿Quien Es Mas Macho?” sketches, I doubt that even the most red-blooded networking professionals would opt for CLIs as a means of SDN fulfillment. In qualifying her statement, Ullal seems to concede as much.

Rubbishing Pretensions

Over at Big Switch Networks, Omar Baldonado isn’t shy about rubbishing CLI pretensions to SDN superstardom. Granted, Big Switch Networks isn’t a disinterested party when it comes to OpenFlow, but neither are any of the other networking vendors, whether happily ensconced on the OpenFlow bandwagon or throwing rotten tomatoes at it from alleys along the parade route.

Baldonado probably does more than is necessary to hammer home his case against CLIs for SDNs, but I think the following excerpt, in which he stresses that CLIs were and are meant to be used to configure network devices, summarizes his argument pithily:

“The CLI was not designed for layers of software above it to program the network. I think we’d all agree that if we were to put our software hats on and design such a programming API, we would not come up with a CLI!”

That seems about right, and a brief illustration makes the point concrete.
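Here is what driving a CLI from software typically looks like in practice: screen-scraping with a tool like pexpect, where prompts and error text have to be pattern-matched by hand. A hedged sketch (the device address, credentials, prompts, and commands below are generic placeholders):

import pexpect  # drives interactive programs by pattern-matching their output

# Placeholder host and credentials; prompts vary by vendor and by mode,
# which is exactly what makes this approach brittle.
child = pexpect.spawn("ssh admin@192.0.2.1")
child.expect("[Pp]assword:")
child.sendline("secret")
child.expect("#")                      # hope the prompt really is "#"
child.sendline("configure terminal")
child.expect(r"\(config\)#")
child.sendline("interface eth0")
child.expect(r"\(config-if\)#")
child.sendline("description set by a script, not an API")
child.expect(r"\(config-if\)#")
child.sendline("end")

Every expect() call is a guess about output formatting; a firmware update that changes a prompt or inserts a confirmation dialog breaks the script. That, in miniature, is Baldonado’s argument.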

Other Options

What about some of the other OpenFlow alternatives, though? As I said, I think OpenFlow is well crafted for the purposes the high priests of the Open Networking Foundation have in store for it, but enterprises are a different matter, at least for the foreseeable future (which is perhaps more foreseeable by some than by others, your humble scribe included).

In a subsequent post — I’d like to say it will be my next one, but something else, doubtless shiny and superficially appealing, will probably intrude to capture my attentions — I’ll be looking at OpenStack’s applicability in an SDN context.

Alcatel-Lucent Banks on Carrier Clouds

Late last week, I had the opportunity to speak with David Fratura, Alcatel-Lucent’s senior director of strategy for cloud solutions, about his company’s new foray into cloud computing, CloudBand, which is designed to give Alcatel-Lucent’s carrier customers a competitive edge in delivering cloud services to their enterprise clientele and — perhaps to a lesser extent — to consumers, too.

Like so many others in the telecommunications-equipment market, Alcatel-Lucent is under pressure on multiple fronts. In a protracted period of global economic uncertainty, carriers are understandably circumspect about their capital spending, focusing investments primarily on areas that will result in near-term reduced operating costs or similarly immediate new service revenues. Carriers are reluctant to spend much in hopeful anticipation of future growth for existing services; instead, they’re preoccupied with squeezing more value from the infrastructure they already own or with finding entirely new streams of service-based revenue growth, preferably at the lowest-possible cost of market entry.

Big Stakes, Complicated Game

Complicating the situation for Alcatel-Lucent — as well as for Nokia Siemens Networks and longtime wireless-gear market leader Ericsson — are the steady competitive advances being made into both developed and developing markets by Chinese telco-equipment vendors Huawei and ZTE. That competitive dynamic is putting downward pressure on hardware margins for the old-guard vendors, compelling them to look to software and services for diversification, differentiation, and future growth.

For its part, Alcatel-Lucent has sought to establish itself as a vendor that can help its operator customers derive new revenue from mobile software and services and, increasingly, from cloud computing.

Alcatel-Lucent CEO Ben Verwaayen is banking on those initiatives to save his job as well as to revive the company’s growth profile. Word from sources close to the company, as reported first by the Wall Street Journal, is that the boardroom knives are out for the man in Alcatel’s big chair, though Alcatel-Lucent chairman Philippe Camus felt compelled to respond to the intensifying scuttlebutt by providing Verwaayen with a qualified vote of confidence.

Looking Up 

With Verwaayen counting on growth markets such as cloud computing to pull him and Alcatel-Lucent out of the line of fire, CloudBand can be seen as something more than the standard product announcement. There’s a bigger context, encompassing not only Alcatel-Lucent’s ambitions but also the evolution of the broader telecommunications industry.

CloudBand, according to a company-issued press release, is designed to deliver a “foundation for a new class of ‘carrier cloud’ services that will enable communications service providers to bring the benefits of the cloud to their own networks and business operations, and put them in an ideal position to offer a new range of high-performance cloud services to enterprises and consumers.”

In a world where everybody is trying to contribute to or be the cloud, that’s a tall order, so let’s take a look at the architecture Alcatel-Lucent has brought forward to create its “carrier cloud.”

CloudBand Architecture

CloudBand comprises two distinct elements. First up is the CloudBand Management System, derived from research work at the venerable Bell Labs, which delivers orchestration and optimization of services between the communications network and the cloud. The second element is the CloudBand Node, which provides computing, storage, and networking hardware and associated software to host a wide range of cloud services. Alcatel-Lucent’s “secret sauce,” and hence its potential to draw meaningful long-term business from its installed base of carrier customers, is the former, but the latter also is of interest.

Hewlett-Packard, as part of a ten-year strategic global agreement with Alcatel-Lucent, will provide converged data-center infrastructure for the CloudBand nodes, including compute, storage, and networking technologies. While Alcatel-Lucent has said it can accommodate gear from other vendors in the nodes, HP’s offerings will be positioned as the default option in the CloudBand nodes. Alcatel-Lucent’s relationship with HP was intended to help “bridge the gap between the data center and the network,” and the CloudBand node definitely fits within that mandate.

Virtualized Network Elements in “Carrier Clouds”

By enabling operators to shift to a cloud-based delivery model, CloudBand is intended to help service providers market and deliver new services to customers quickly, with improved quality of service and at lower cost. Carriers can use CloudBand to virtualize their network elements, converting them to software and running them on demand in their “carrier clouds.” As a result, service providers presumably will derive improved utilization from their network resources, saving money on the delivery of existing services — such as SMS and video — and testing and introducing new ones at lower costs.

If carriers embrace CloudBand only for this reason — to virtualize and better manage their network elements and resources for more efficient and cost-effective delivery of existing services — Alcatel-Lucent should do well with the offering. Nonetheless, the company has bigger ambitions for CloudBand.

Alcatel-Lucent has done market research indicating that enterprise IT decision makers’ primary concern about the cloud involves performance rather than security, though both ranked highly. Alcatel-Lucent also found that those same enterprise IT decision makers believe their communications service providers — yes, carriers — are best equipped to deliver the required performance and quality of service.

Helping Carriers Capture Cloud Real Estate 

Although Alcatel-Lucent talks a bit about consumer-oriented cloud services, it’s clear that the enterprise is where it really believes it can help its carrier customers gain traction. That’s an important distinction, too, because it means Alcatel-Lucent might be able to help its customers carve out a niche beyond consumer-focused cloud purveyors such as Google, Facebook, Apple, and even Microsoft. It also means it might be able to assist carriers in differentiating themselves from infrastructure-as-a-service (IaaS) leader Amazon Web Services (AWS), which has become the service of choice for technology startups, and from the likes of Rackspace.

As Alcatel-Lucent’s Fratura emphasized, many businesses, from SMBs up to large enterprises, already obtain hosted services and software-as-a-service (SaaS) offerings from carriers today. What Alcatel-Lucent proposes with CloudBand is designed to help carriers capture more of the cloud market.

It just might work, but it won’t be easy. As Ray Le Maistre at LightReading wrote, cloud solutions on this scale are not a walk on the beach or a day at the park (yes, you saw what I did there). What’s more, Alcatel-Lucent will have to hope that a sufficient number of its carrier customers can deploy, operate, and manage CloudBand to full effect. That’s not a given, even if Alcatel-Lucent offers CloudBand as a managed service and even though it already sells and delivers professional services to carriers.

Alcatel-Lucent says CloudBand will be available for deployment in the first half of 2012. At first, CloudBand will run exclusively on Alcatel-Lucent technology, but the company claims to be working with the Alliance for Telecommunications Industry Solutions (ATIS) and the Internet Engineering Task Force (IETF) to establish standards to enable CloudBand to run on gear from other vendors.

With CloudBand, Alcatel-Lucent, at least within the context of its main telecommunications-equipment competitors, is seen as having the first run at the potentially lucrative market opportunity of cloud-enabling the carrier community. Much now will depend on how well it executes and on how effectively its competitors respond to the initiative.

The Carrier Factor

In addition, of course, the carriers themselves are a factor. Although they undoubtedly want to get their hands around the cloud business opportunity, there’s some question as to whether they have the wherewithal to get the job done. The rise of cloud services from Google, Apple, Facebook, and Amazon was partly a result of carriers missing a golden opportunity. One would like to think they’ve learned from those sobering experiences, but one also can’t be sure they won’t run true to prior form.

From what I have heard and seen, the Alcatel-Lucent vision for CloudBand is compelling. It brings the benefits of virtualization and orchestration to carrier network infrastructure, enabling carriers to manage their resources cost-effectively and innovatively. If they seize the opportunity, they’ll save money on their own existing services and be in a great position to deliver a range of cloud-based enterprise services to their business customers.

Alcatel-Lucent should find a receptive audience for CloudBand among its carrier installed base. The question is whether those Alcatel-Lucent customers will be able to get full measure from the technology and from the business opportunity the cloud represents.

Rackspace’s Bridge Between Clouds

OpenStack has generated plenty of sound and fury during the past several months, and, with sincere apologies to William Shakespeare, there’s evidence to suggest the frenzied activity actually signifies something.

Precisely what it signifies, and how important OpenStack might become, is open to debate, of which there has been no shortage. OpenStack is generally depicted as an open-source cloud operating system, but that might be a generous interpretation. On the OpenStack website, the following definition is offered:

“OpenStack is a global collaboration of developers and cloud computing technologists producing the ubiquitous open source cloud computing platform for public and private clouds. The project aims to deliver solutions for all types of clouds by being simple to implement, massively scalable, and feature rich. The technology consists of a series of interrelated projects delivering various components for a cloud infrastructure solution.”

Just for fun and giggles (yes, that phrase has been modified so as not to offend reader sensibilities), let’s parse that passage, shall we? In the words of the OpenStackers themselves, their project is an open-source cloud-computing platform for public and private clouds, and it is reputedly simple to implement, massively scalable, and feature rich. Notably, it consists of a “series of interrelated projects delivering various components for a cloud infrastructure solution.”

Simple for Some

Given that description, especially the latter reference to interrelated projects spawning various components, one might wonder exactly how “simple” OpenStack is to implement and by whom. That’s a question others have raised, including David Linthicum in a recent piece at InfoWorld. In that article, Linthicum notes that undeniable vendor enthusiasm and burgeoning market momentum accrue to OpenStack — the community now has 138 member companies (and counting), including big-name players such as HP, Dell, Intel, Rackspace, Cisco, Citrix, Brocade, and others — but he also offers the following caveat:

“So should you consider OpenStack as your cloud computing solution? Not on its own. Like many open source projects, it takes a savvy software vendor, or perhaps cloud provider, to create a workable product based on OpenStack. The good news is that many providers are indeed using OpenStack as a foundation for their products, and most of those are working, or will work, just fine.”

Creating Value-Added Services

Meanwhile, taking issue with a recent InfoWorld commentary by Savio Rodrigues — who argued that OpenStack will falter while its open-source counterpart Eucalyptus will thrive — James Urquhart, formerly of Cisco and now VP of product strategy at enStratus, made this observation:

“OpenStack is not a cloud service, per se, but infrastructure automation tuned to cloud-model services, like IaaS, PaaS and SaaS. Install OpenStack, and you don’t get a system that can instantly bill customers, provide a service catalog, etc. That takes additional software.

What OpenStack represents is the commodity element of cloud services: the VM, object, server image and networking management components. Yeah, there is a dashboard to interact with those commodity elements, but it is not a value-add capability in-and-of itself.

What HP, Dell, Cisco, Citrix, Piston, Nebula and others are doing with OpenStack is creating value-add services on top (or within) the commodity automation. Some focus more on “being commodity”, others focus more on value-add, but they are all building on top of the core OpenStack projects.”

New Revenue Stream for Rackspace

All of which brings us, in an admittedly roundabout fashion, to Rackspace’s recent announcement of its Rackspace Cloud Private Edition, a packaged version of OpenStack components that can be used by enterprises for private-cloud deployments. This move makes sense for Rackspace on a couple of levels.

First off, it opens up a new revenue stream for the company. While Rackspace won’t try to make money on the OpenStack software or the reference designs — featuring a strong initial emphasis on Dell servers and Cisco networking gear for now, though bare-bones OpenCompute servers are likely to be embraced before long — it will provide value-add, revenue-generating managed services to customers of Rackspace Cloud Private Edition. These managed services will comprise installation of OpenStack updates, analysis of system issues, and assistance with specific questions relating to systems engineering. Some security-related services also will be offered. While the reference architecture and the software are available now, Rackspace’s managed services won’t be available until January.

Building a Bridge

The launch of Rackspace Cloud Private Edition is a diversification initiative for Rackspace, which hitherto has made its money by hosting and managing applications and computing services for others in its own data centers. The OpenStack bundle takes it into the realm of providing managed services in its customers’ data centers.

As mentioned above, this represents a new revenue stream for Rackspace, but it also provides a technological bridge that will allow customers who aren’t ready for multi-tenant public cloud services today to make an easy transition to Rackspace’s data centers at some future date. It’s a smart move, preventing prospective customers from moving to another platform for private cloud deployment, ensuring in the process that said customers don’t enter the orbit of another vendor’s long-term gravitational pull.

The business logic coheres. For each customer engagement, Rackspace gets a payoff today, and potentially a larger one at a later date.

Assessing Dell’s Layer 4-7 Options

As it continues to integrate and assimilate its acquisition of Force10 Networks, Dell is thinking about its next networking move.

Based on what has been said recently by Dario Zamarian, Dell’s GM and SVP of networking, the company definitely will be making that move soon. In an article covering Dell’s transition from box pusher to data-center and cloud contender, Zamarian told Fritz Nelson of InformationWeek that “Dell needs to offer Layer 4 and Layer 7 network services, citing security, load balancing, and overall orchestration as its areas of emphasis.”

Zamarian didn’t say whether the move into Layer 4-7 network services would occur through acquisition, internal development, or partnership. However, as I invoke deductive reasoning that would make Sherlock Holmes green with envy (or not), I think it’s safe to conclude an acquisition is the most likely route.

F5 Connection

Why? Well, Dell already has partnerships that cover Layer 4-7 services. F5 Networks, the leader in application-delivery controllers (ADCs), is a significant Dell partner in the Layer 4-7 sphere. Dell and F5 have partnered for 10 years, and Dell bills itself as the largest reseller of F5 solutions. If you consider what Zamarian described as Dell’s next networking priority, F5 certainly fits the bill.

There’s one problem. F5 probably isn’t selling at any price Dell would be willing to pay. As of today, F5 has a market capitalization of more than $8.5 billion. Dell has the cash, about $16 billion and counting, to buy F5 at a premium, but it’s unlikely Dell would be willing to fork over more than $11 billion — which, presuming mutual interest, might be F5’s absolute minimum asking price — to close the deal. Besides, observers have been thinking F5 would be acquired since before the Internet bubble of 2000 burst. It’s not likely to happen this time either.

Dell could see whether one of its other partners, Citrix, is willing to sell its NetScaler business. I’m not sure that’s likely to happen, though. I definitely can’t envision Dell buying Citrix outright. Citrix’s market cap, at more than $13.7 billion, is too high, and there are pieces of the business Dell probably wouldn’t want to own.

Shopping Not Far From Home?

Who else is in the mix? Radware is an F5 competitor that Dell might consider, but I don’t see that happening. Dell’s networking group is based in the Bay Area, and I think they’ll be looking for something closer to home, easier to integrate.

That brings us to F5 rival A10 Networks. Force10 Networks, which Dell now owns, had a partnership with A10, and there’s a possibility Dell might inherit and expand upon that relationship.

Then again, maybe not. Generally, A10 is seen as a purveyor of cost-effective ADCs. It is not typically perceived as an innovator and trailblazer, and it isn’t thought to have the best solutions for complex enterprise or data-center environments, exactly the areas where Dell wants to press its advantage. It’s also worth bearing in mind that A10 has been involved in exchanges of not-so-friendly litigious fire — yes, lawsuits volleyed back and forth furiously — with F5 and others.

All in all, A10 doesn’t seem a perfect fit for Dell’s needs, though the price might be right.

Something Programmable 

Another candidate, one that’s quite intriguing in many respects, is Embrane. The company is bringing programmable network services, delivered on commodity x86 servers, to the upper layers of the stack, addressing many of the areas in which Zamarian expressed interest. Embrane is focusing on virtualized data centers where Dell wants to be a player, but initially its appeal will be with service providers rather than with enterprises.

In an article written by Stacey Higginbotham and published at GigaOM this summer, Embrane CEO Dante Malagrinò explained that his company’s technology would enable hosting companies to provide virtualized services at Layers 4 through 7, including load balancing, firewalls, and virtual private networking (VPN), among others.

Some of you might see similarities between what Embrane is offering and OpenFlow-enabled software-defined networking (SDN). Indeed, there are similarities, but, as Embrane points out, OpenFlow promises network virtualization and programmability at Layers 2 and 3 of the stack, not at Layers 4 through 7.

Higher-Layer Complement to OpenFlow

Dell, as we know, has talked extensively about the potential of OpenFlow to deliver operational cost savings and innovative services to data centers at service providers and enterprises. One could see what Embrane does as a higher-layer complement to OpenFlow’s network programmability. Both technologies take intelligence away from specialized networking gear and place it at the edge of the network, running in software on industry-standard hardware.

Interestingly, there aren’t many degrees of separation between the principals at Embrane and Dell’s Zamarian. It doesn’t take much sleuthing to learn that Zamarian knows both Malagrinò and Marco Di Benedetto, Embrane’s CTO. They worked together at Cisco Systems. Moreover, Zamarian and Malagrinò both studied at the Politecnico di Torino, though a decade or so apart. Zamarian also has connections to Embrane board members.

Play an Old Game, Or Define a New One

In and of themselves, those connections don’t mean anything. Dell would have to see value in what Embrane offers, and Embrane and its backers would have to want to sell. The company announced in August that it had closed an $18-million financing round, led by New Enterprise Associates (NEA). Lightspeed Venture Partners and North Bridge Ventures also took part in the round, which followed initial lead investments in the company’s $9-million Series A funding.

Embrane’s product has been in beta, but the company planned a commercial launch before the end of this year. Its blog has been quiet since August.

I would be surprised to see Dell acquire F5, and I don’t think Citrix will part with NetScaler. If Dell is thinking about plugging L4-7 holes cost-effectively, it might opt for an acquisition of A10, but, if it’s thinking more ambitiously — if it really is transforming itself into a solutions provider for cloud providers and data centers — then it might reach for something with the potential to establish a new game rather than play at an old one.

Amazon’s Advantageous Model for Cloud Investments

While catching up with industry developments earlier this week, I came across a Reuters piece on Amazon’s now well-established approach toward investments in startup companies. If you haven’t seen it, I recommend that you give it a read.

As its Amazon Web Services (AWS) cloud operations approach the threshold of a $1-billion business, the company once known exclusively as an online bookshop continues to search for money-making opportunities well beyond Internet retailing.

Privileged Insights

An article at GigaOM by Barb Darrow quotes Amazon CEO Jeff Bezos explaining that his company stumbled unintentionally into the cloud-services business, but the Reuters item makes clear that Amazon is putting considerably more thought into its cloud endeavors these days. In fact, Amazon’s investment methodology, which sees it invest in startup companies that are AWS customers, is an exercise in calculated risk mitigation.

That’s because, before making those investments, Amazon gains highly detailed and extremely valuable insights into startup companies’ dynamic requirements for computing infrastructure and resources. It can then draw inferences about the popularity and market appeal of the services those companies supply. All in all, it seems like an inherently logical and sound investment model, one that gives Amazon privileged insights into companies before it decides to bet on their long-term health and prosperity.

That fact has not been lost on a number of prominent venture-capital firms, which have joined with Amazon to back the likes of Yieldex, Sonian, Engine Yard, and Animoto, all of whom, at one time or another, were AWS customers.

Mutual Benefits

Now that nearly every startup is likely to begin its business life using cloud-based computing infrastructure, either from AWS or another cloud purveyor, I wonder whether Amazon’s investment model might be mimicked by others with similar insights into their business customers’ resource utilization and growth rates.

There’s no question that such investments deliver mutual benefit. The startup companies get the financial backing to accelerate their growth, establish and maintain competitive differentiation, and speed toward market leadership. Meanwhile, Amazon and its VC partners get stakes in fast-growing companies that seem destined for bigger things, including potentially lucrative exits. Amazon also gets to maintain relationships with customers that might otherwise outgrow AWS and leave the relationship behind. Last but not least, the investment program serves a promotional purpose for Amazon, demonstrating a commitment and dedication to its AWS customers that can extend well beyond operational support.

It isn’t just Amazon that can derive an investment edge from how customers are using its cloud services. SaaS providers such as Salesforce and Google also can gain useful insights into how customers and customer segments are faring during good and bad economic times, and PaaS providers also stand to derive potentially useful knowledge about how and where customers are adopting their services.

Various Scenarios

Also on the SaaS side of the ledger, in the realm of social networking — I’m thinking of Facebook, but others fit the bill — subscriber data can be mined for the benefit of advertisers seeking to deliver targeted campaigns to specific demographic segments.

In a different vein, Google’s search business could potentially give it the means to develop high-probability or weighted analytics based on the prevalence, intensity, nature, and specificity of search queries. Such data could be applied to and mined for probability markets. One application scenario might involve insiders searching online to ascertain whether prior knowledge of a transaction has been leaked to the wider world. By searching for the terms in question, they would effectively signal that an event might take place. (This would be more granular than Google Trends, and different from it in other respects, too.) There are a number of other examples and scenarios that one could envision.

Getting back to Amazon, though, what it is doing with its investment model clearly makes a lot of sense, giving it unique insights and a clear advantage as it weighs where to place its bets. As I said, it would be no surprise to see other cloud providers, even those not of the same scale as Amazon, consider similar investment models.