Prescribing Dell’s Next Networking Move

Now that it has announced its acquisition of Force10 Networks, Dell is poised to make its next networking move.

Should that be another acquisition? No, I don’t think so. Dell needs time to integrate and assimilate Force10 before it considers another networking acquisition. Indeed, I think integration, not just of Force10, is the key to understanding what Dell ought to do next.

One problem, certainly for some of Dell’s biggest data-center customers, is that networking has been its own silo, relatively unaffected by the broad sweep of virtualization. While server hardware has been virtualized comprehensively — resulting in significant cost savings for data centers — and storage is now following suit, switches and routers have remained vertically integrated, comparatively proprietary boxes, largely insulated from the winds of change.

Dell and OpenStack

Perhaps because it is so eager to win cloud business — seeing the cloud not only as the next big thing but also as the ultimate destination for many SMB applications — Dell has been extremely solicitous in attempting to address the requirements flagged by the likes of Rackspace, Microsoft, Facebook, and Google. Dell sees these customers as big public-cloud purveyors (which they are), but also as early adopters of data-center solutions that could be offered subsequently to other cloud-oriented service providers and large enterprises.

That’s why Dell has been such a big proponent of OpenStack. A longtime member of the OpenStack community, Dell recently introduced the Dell OpenStack Cloud Solution, which includes the OpenStack cloud operating system, Dell PowerEdge C servers, the Dell-developed “Crowbar” OpenStack installer, plus services from Dell and Rackspace Cloud Builders.

The rollout of the Dell OpenStack Cloud Solution is intended to make it easy for cloud purveyors and large enterprises to adopt and deploy open-source infrastructure as a service (IaaS).

Promise of OpenFlow

Interestingly, many of the same cloud and service providers that see promise in heavily virtualized, open-source IaaS technologies, as represented by OpenStack, also see considerable potential in OpenFlow, a protocol that allows a switch data plane to be programmed directly by a separate flow controller. Until now, the data plane and the control plane have existed in the same switch hardware. OpenFlow removes control-plane responsibilities from the switch and places them in software that can run elsewhere, presumably on an industry-standard server (or on a cluster of servers).

OpenFlow is one means of realizing software-defined networking, which holds the promise of making network infrastructure programmable and virtualized.
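To make the separation concrete, here is a toy Python sketch of the control-plane/data-plane split described above. Every class, method, and address in it is invented for illustration; this is not the OpenFlow wire protocol or any real controller API, just the general pattern of a switch that forwards by table lookup and punts unmatched traffic to an external controller, which then programs a rule back into the switch.

```python
# Hypothetical sketch of SDN-style separation; names are illustrative only.

class FlowTable:
    """Match-action rules pushed down by an external controller."""
    def __init__(self):
        self.rules = {}  # destination address -> output port

    def install(self, dst, port):
        self.rules[dst] = port

    def lookup(self, dst):
        return self.rules.get(dst)


class Switch:
    """Data plane only: forwards by table lookup, punts misses upward."""
    def __init__(self, controller):
        self.table = FlowTable()
        self.controller = controller

    def receive(self, dst):
        port = self.table.lookup(dst)
        if port is None:
            # Table miss: ask the controller (the "packet-in" pattern).
            port = self.controller.packet_in(self, dst)
        return port


class Controller:
    """Control plane running on a commodity server, not in the switch."""
    def __init__(self, policy):
        self.policy = policy  # destination address -> port, set by software

    def packet_in(self, switch, dst):
        port = self.policy[dst]
        switch.table.install(dst, port)  # program the data plane for next time
        return port


controller = Controller({"10.0.0.2": 3})
switch = Switch(controller)
print(switch.receive("10.0.0.2"))       # first packet: controller decides -> 3
print(switch.table.lookup("10.0.0.2"))  # rule is now in the switch -> 3
```

The point of the pattern, and of OpenFlow itself, is that forwarding policy lives in ordinary software: change the controller’s policy and every switch it manages follows, without touching vertically integrated switch firmware.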

Some vendors already have perceived merit in the data-center combination of OpenStack and OpenFlow. Earlier this year in a blog post, Brocade Communications’ CTO Dave Stevens and Ken Cheng, VP of service provider products, wrote the following about the joint value of OpenStack and OpenFlow:

“There are now two promising industry efforts that go a long way in promoting industry-wide interoperability and open architectures for both virtualization and cloud computing. Specifically, they are the OpenFlow initiative driven by the Open Networking Foundation (ONF), which is hosted by Stanford University with 17 member companies currently, and the OpenStack cloud software project backed by a consortium of more than 50 private and public sector organisations.

We won’t belabor the charters and goals of either initiative as that information is widely available and listed in detail on both Web sites. The key idea we want to convey from Brocade’s point of view is that OpenFlow and OpenStack should not be regarded as discrete, unrelated projects. Indeed, we view them as three legs of a stool with OpenFlow serving as the networking leg while OpenStack serves as the other two legs through its compute and object storage software projects. Only by working together can these industry initiatives truly enable customers to virtualize their physical network assets and migrate smoothly to open, highly interoperable cloud architectures.”

Much in Common

Indeed, the architectural, philosophical, and technological foundations of OpenFlow and OpenStack have much in common. They also deliver similar business benefits for cloud shops and large data centers, which could run their programmable, virtualized infrastructure (servers, storage, and networking) on industry-standard hardware.

Large cloud providers are understandably motivated to want to see the potential of OpenFlow and OpenStack come to fruition. Both provide the promise of substantial cost savings, not only capex but also opex. There’s more to both than cost savings, of course, but the cost savings alone could provide ROI justification for many prospective customers.

That’s something Dell, now the proud owner of Force10 Networks, ought to be considering. Dell has been quick to point out that its networking acquisition now gives it the converged infrastructure for data centers that Cisco and HP already had. Still, even if we accept that argument at face value, Dell is at a disadvantage playing a proprietary game against those vendors on their terms. Both Cisco and HP have bigger, stronger networking assets, and both have more marketing, sales, and technological resources at their disposal. Unless it changes the game, Dell has little chance of winning.

Changing the Game

So, how can Dell change the game? It could become the converged infrastructure player that wholeheartedly embraces OpenStack and OpenFlow, following the lead its data-center customers have provided while also leading them to new possibilities.

I realize that the braintrust at Force10 recently took a wait-and-see stance toward OpenFlow. However, now that Dell owns Force10, that position should be reviewed in a new, larger context.

Given that Dell reportedly passed over Brocade on its way to the altar with Force10, it would be ironic if Dell were to execute on an OpenStack-OpenFlow vision that Brocade eloquently articulated.
