Category Archives: Open Source

Amazon-RIM: Summer Reunion?

Think back to last December, just before the holidays. You might recall a Reuters report, quoting “people with knowledge of the situation,” claiming that Research in Motion (RIM) rejected takeover propositions from Amazon.com and others.

The report wasn’t clear on whether the informal discussions resulted in any talk of price between Amazon and RIM, but apparently no formal offer was made. RIM, then still under the stewardship of former co-CEOs Jim Balsillie and Mike Lazaridis, reportedly preferred to remain independent and to address its challenges alone.

I Know What You Discussed Last Summer

Since then, a lot has happened. When the Reuters report was published — on December 20, 2011 — RIM’s market value had plunged 77 percent during the previous year, sitting then at about $6.8 billion. Today, RIM’s market capitalization is $3.7 billion. What’s more, the company now has Thorsten Heins as its CEO, not Balsillie and Lazaridis, who were adamantly opposed to selling the company. We also have seen recent reports that IBM approached RIM regarding a potential acquisition of the Waterloo, Ontario-based company’s enterprise business, and rumors have surfaced that RIM might sell its handset business to Amazon or Facebook.

Meanwhile, RIM’s prospects for long-term success aren’t any brighter than they were last winter, and activist shareholders, not interested in a protracted turnaround effort, continue to lobby for a sale of the company.

As for Amazon, it is said to be on the cusp of entering the smartphone market, presumably using a forked version of Android, which is what it runs on the Kindle tablet. From the vantage point of the boardroom at Amazon, that might not be a sustainable long-term plan. Google is looking more like an Amazon competitor, and the future trajectory of Android is clouded by Google’s strategic considerations and by legal imbroglios relating to patents. Those presumably were among the reasons behind the approach that Reuters reported last December.

Uneasy Bedfellows

It’s no secret that Amazon and Google are uneasy Android bedfellows. As Eric Jackson wrote just after the Reuters story hit the wires:

Amazon has never been a big supporter of Google’s Android OS for its Kindle. And Google’s never been keen on promoting Amazon as part of the Android ecosystem. It seems that both companies know this is just a matter of time before each leaves the other.

Yes, there’s some question as to how much value inheres in RIM’s patents. Estimates on their worth are all over the map. Nevertheless, RIM’s QNX mobile-operating system could look compelling to Amazon. With QNX and with RIM’s patents, Amazon would have something more than a contingency plan against any strategic machinations by Google or any potential litigiousness by Apple (or others).  The foregoing case, of course, rests on the assumption that QNX, rechristened BlackBerry 10, is as far along as RIM claims. It also rests on the assumption that Amazon wants a mobile platform all its own.

It was last summer when Amazon reportedly made its informal approach to RIM. It would not be surprising to learn that a reprise of discussions occurred this summer. RIM might be more disposed to consider a formal offer this time around.

Cisco’s SDN Response: Mission Accomplished, but Long Battle Ahead

In concluding my last post, I said I would write a subsequent note on whether Cisco achieved its objectives in its rejoinder to software-defined networking (SDN) at the Cisco Live conference last week in San Diego.

Cisco is the largest player in network infrastructure, and its words carry considerable weight. When Cisco talks, its customers (and the industry ecosystem) listen. Accordingly, we witnessed extensive coverage of the company’s Cisco Open Network Environment (Cisco ONE) proclamations last week.

Really, what Cisco announced with Cisco ONE was relatively modest and wholly unsurprising. What was surprising was the broad spectrum of reactions to what was effectively a positioning statement from the networking market’s leading vendor.

Mission Accomplished . . . For Now

And that positioning statement wasn’t so much about SDN, or about the switch-control protocol OpenFlow, but about something more specific to Cisco, whose installed base of customers, especially in the enterprise, is increasingly curious about SDN. Indeed, Cisco’s response to SDN should be seen, first and foremost, as a response to its customers. One could construe it as a cynical gesture to “freeze the market,” but that would not do full justice to the rationale. Instead, let’s just say that Cisco’s customers wanted to know how their vendor of choice would respond to SDN, and Cisco was more than willing to oblige.

In that regard, it was mission accomplished. Cisco gave its enterprise customers enough reason to put off a serious dalliance with SDN, at least for the foreseeable future (which isn’t that long). But that’s all it did. I didn’t see a vision from Cisco. What I saw was an effective counterpunch — but definitely not a knockout — against a long-term threat to its core market.

Cisco achieved its objective partly by offering its own take on network programmability, replete with a heavy emphasis on APIs and northbound interfaces; but it also did it partly by bashing OpenFlow, the open  protocol that effects physical separation of the network-element control and forwarding planes.

Conflating OpenFlow and SDN

In its criticism of OpenFlow, Cisco sought to conflate the protocol with the larger SDN architecture. As I and many others have noted repeatedly, OpenFlow is not SDN;  the two are not inseparable. It is possible to deliver an SDN architecture without OpenFlow. Even when OpenFlow is included, it’s a small part of the overall picture.  SDN is more than a mechanism by which a physically separate control plane directs packet forwarding on a switch.
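
To make the boundary concrete, consider a minimal sketch of the part OpenFlow actually covers: a controller writing a match/action entry into a switch’s flow table. This is my own illustration in Python, not any vendor’s API and not the OpenFlow wire format; everything else claimed for SDN happens above this narrow exchange.

```python
# Illustrative only: the names below are invented for this sketch and are not
# the OpenFlow wire format. The point is the narrow scope of the exchange:
# a controller writes match/action state into a switch's flow table, period.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class FlowMod:
    """Loosely analogous to an OpenFlow flow-mod message."""
    match: Dict[str, str]          # e.g. {"dst_ip": "10.0.0.5"}
    actions: List[str]             # e.g. ["output:2"]
    priority: int = 100


@dataclass
class Switch:
    dpid: str
    flow_table: List[FlowMod] = field(default_factory=list)

    def handle_flow_mod(self, msg: FlowMod) -> None:
        # Controller-to-switch programming of forwarding state is the whole
        # job OpenFlow performs; policy, virtualization, and the rest of the
        # SDN story live in the layers above this call.
        self.flow_table.append(msg)


if __name__ == "__main__":
    sw = Switch(dpid="00:00:00:00:00:01")
    sw.handle_flow_mod(FlowMod({"dst_ip": "10.0.0.5"}, ["output:2"]))
    print(f"{sw.dpid} now holds {len(sw.flow_table)} flow entries")
```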

If you listened to Cisco last week, however, you would have gotten the distinct impression that OpenFlow and SDN are indistinguishable, and that all that’s happening in SDN is a southbound conversation between a server-based software controller and OpenFlow-capable switches. That’s not true, but the Open Networking Foundation (ONF), the custodian of SDN and OpenFlow, has left an opening that Cisco is only too happy to exploit.

The fact is, the cloud service-provider principals steering the ONF see SDN playing a much bigger role than Cisco would have you believe. OpenFlow is a starting point. It is a means to, well, another means — because SDN is an enabler, too. What SDN enables is network virtualization and network programmability, though not by the route Cisco would like its customers to take.

Cisco Knows SDN Is More Than OpenFlow

To illustrate my point, I refer you to the relatively crude ONF SDN architectural stack showcased in a white paper, Software-Defined Networking: The New Norm for Networks. If you consult the diagram in that document, you will see that OpenFlow is the connective tissue between the controller and the switch — what ONF’s Dan Pitt has described as an “open interface to packet forwarding” — but you will also see that there are abstraction layers that reside well above OpenFlow.

If you want an even more detailed look at a “modern” SDN architecture, you can consult a presentation given by Cisco’s David Meyer earlier this year. That presentation features physical hardware at the base, with SDN components in the middle. These SDN components include the “forwarding interface abstraction” represented by OpenFlow, a network operating system (NOS) running on a controller (server), a “nypervisor” (network hypervisor), and a global management abstraction that interfaces with the control logic of higher-layer application (control) programs.
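
To make those layers a little more tangible, here’s a toy Python sketch of how such a stack might compose, with each layer exposing a simpler abstraction to the one above it. The class names are my shorthand for the components Meyer describes, not code from any real controller, and the details are assumptions made for illustration.

```python
# A purely illustrative layering: forwarding abstraction at the bottom,
# network OS above it, a network hypervisor ("nypervisor") above that, and a
# global management abstraction that application control logic talks to.

class ForwardingAbstraction:
    """OpenFlow-like interface: program per-switch forwarding state."""
    def program(self, dpid: str, match: dict, actions: list) -> None:
        print(f"[{dpid}] {match} -> {actions}")


class NetworkOS:
    """Controller/NOS: holds a global view and hides per-switch detail."""
    def __init__(self, fwd: ForwardingAbstraction) -> None:
        self.fwd = fwd

    def install_path(self, path: list, match: dict) -> None:
        for dpid, out_port in path:
            self.fwd.program(dpid, match, [f"output:{out_port}"])


class Nypervisor:
    """Network hypervisor: presents each tenant a virtual slice of the network."""
    def __init__(self, nos: NetworkOS) -> None:
        self.nos = nos

    def connect(self, tenant: str, src: str, dst: str, path: list) -> None:
        self.nos.install_path(path, {"tenant": tenant, "src": src, "dst": dst})


class ManagementAbstraction:
    """Global management layer through which control programs express intent."""
    def __init__(self, nyp: Nypervisor) -> None:
        self.nyp = nyp

    def request(self, intent: dict) -> None:
        self.nyp.connect(intent["tenant"], intent["src"], intent["dst"],
                         intent["path"])


if __name__ == "__main__":
    stack = ManagementAbstraction(Nypervisor(NetworkOS(ForwardingAbstraction())))
    stack.request({"tenant": "blue", "src": "10.0.0.1", "dst": "10.0.0.9",
                   "path": [("s1", 2), ("s2", 3)]})
```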

So, Cisco clearly knows that SDN comprises more than OpenFlow, but, in its statements last week at Cisco Live, the company preferred to use the protocol as a strawman in its arguments for Cisco-centric network programmability. You can’t blame Cisco, though. It has customers to serve — and to keep in the revenue- and profit-generating fold — and an enterprise-networking franchise to protect.

Mind the Gap

But why did the ONF leave this gap for Cisco to fill? It’s partly because the ONF isn’t overly concerned with the enterprise and partly because the ONF sees OpenFlow as an open, essential precondition for the higher, richer layers of the SDN architectural model.

Without the physical separation of the control plane from the forwarding plane, after all, some of the ONF’s service-provider constituency might not have been able to break free of vendor hegemony in their networks. What’s more, they wouldn’t be able to set the stage for low-priced, ODM-manufactured networking hardware built with merchant silicon.

As you can imagine, that is not the sort of change that Cisco can get behind, much less lead. Therefore, Cisco breaks out the brickbats and goes in hot pursuit of OpenFlow, which it then portrays as deficient for the purposes of far-reaching, north-and-south network programmability.

Exiting (Not Exciting) Plumbing

Make no mistake, though. The ONF has a vision, and it extends well beyond OpenFlow. At a conference in Garmisch, Germany, earlier this year, Dan Pitt, the ONF’s executive director, offered a presentation called “A Revolution in Networking and Standards,” and made the following comments:

“I think networking is going to become an integral part of computing in a way that makes it less important, because it’s less of a problem. It’s not the black sheep any longer. And the same tools you use to create an IT computing infrastructure or virtualization, performance, and policy will flow through to the network component of that as well, without special effort.

I think enterprises are going to be exiting technology – or exiting plumbing. They are not going to care about the plumbing, whether it’s their networks or the cloud networks that increasingly meet their needs, and the cloud services. They’re going to say, here’s the function or the feature I want for my business goal, and you make it happen. And somebody worries about the plumbing, but not as many people who worry about plumbing today. And if you’ve got this virtualized view, you don’t have to look at the plumbing. . . .

The operators are gradually becoming software companies and internet companies. They are bulking up on those skills. They want to be able to add those services and features themselves instead of relying on the vendors, and doing it quickly for their customers. It gives opportunities to operators that they didn’t have before of operating more diverse services and experimenting at low cost with new services.”

No Cartwheels

Again, this is not a vision that would have John Chambers doing cartwheels across the expansive Cisco campus.

While the ONF is making plans to address the northbound interfaces that are a major element in Cisco’s network programmability, it hasn’t done so yet. Even when it does, the ONF is unlikely to standardize higher-layer APIs, at least in the near term. Instead, those APIs will be associated with the controllers that get deployed in customer networks. In other words, the ONF will let the market decide.

On that tenet, Cisco can agree with the ONF. It, too, would like the market to decide, especially since its market presence — the investments customers have made in its routers and switches, and in its protocols and management tools — towers imperiously over the meager real estate being claimed in the nascent SDN market.

With all that Cisco network infrastructure deployed in customer networks, Cisco believes it’s in a commanding position to set the terms for how the network will deliver software intelligence to programmers of applications and management systems. Theoretically, that’s true, but the challenge for Cisco will be in successfully engaging a programming constituency that isn’t its core audience. Can Cisco do it? It will be a stretch.

Do They Get It?

All the while, the ONF and its service-provider backers will be advancing and promoting the SDN model and the network virtualization and programmability that accompany it. The question for the ONF is not whether its movers and shakers understand programmers — it’s pretty clear that Google, Facebook, Microsoft, and Yahoo are familiar with programmers — but whether the ONF understands and cares enough about the enterprise to make that market a priority in its technology roadmap.

If the ONF leaves the enterprise to the dictates of the Internet Engineering Task Force (IETF) and Institute of Electrical and Electronics Engineers (IEEE), Cisco is likely to maintain its enterprise dominance with an approach that provides some benefits of network programmability without the need for server-based controllers.

Meanwhile, as Tom Nolle, president of CIMI Corporation, has pointed out, Cisco ONE also serves as a challenge to Cisco’s conventional networking competitors, which are devising their own answers to SDN.

But that is a different thread, and this one is too long already.

Direct from ODMs: The Hardware Complement to SDN

Subsequent to my return from Network Field Day 3, I read an interesting article published by Wired that dealt with the Internet giants’ shift toward buying networking gear from original design manufacturers (ODMs) rather than from brand-name OEMs such as Cisco, HP Networking, Juniper, and Dell’s Force10 Networks.

The development isn’t new — Andrew Schmitt, now an analyst at Infonetics, wrote about Google designing its own 10-GbE switches a few years ago — but the story confirmed that the trend is gaining momentum and drawing a crowd, which includes brokers and custom suppliers as well as increasing numbers of buyers.

In the Wired article, Google, Microsoft, Amazon, and Facebook were explicitly cited as web giants buying their switches directly from ODMs based in Taiwan and China. These same buyers previously procured their servers directly from ODMs, circumventing brand-name server vendors such as HP and Dell.  What they’re now doing with networking hardware, then, is a variation on an established theme.

The ONF Connection

Just as with servers, the web titans have their reasons for going directly to ODMs for their networking hardware. Sometimes they want a simpler switch than the brand-name networking vendors offer, and sometimes they want certain functionality that networking vendors do not provide in their commercial products. Most often, though, they’re looking for cheap commodity switches based on merchant silicon, which has become more than capable of handling the requirements the big service providers have in mind.

Software is part of the picture, too, but the Wired story didn’t touch on it. Look at the names of the Internet companies that have gone shopping for ODM switches: Google, Microsoft, Facebook, and Amazon.

What do those companies have in common besides their status as Internet giants and their purchases of copious amounts of networking gear? Yes, it’s true that they’re also cloud service providers. But there’s something else, too.

With the exception of Amazon, the other three are board members in good standing of the Open Networking Foundation (ONF). What’s more, even though Amazon is not an ONF board member (or even a member), it shares the ONF’s philosophical outlook: networking infrastructure should be more flexible and responsive, less complex and costly, and generally kept out of the way of critical data-center processes.

Pica8 and Cumulus

So, yes, software-defined networking (SDN) is the software complement to cloud-service providers’ direct procurement of networking hardware from ODMs.  In the ONF’s conception of SDN, the server-based controller maps application-driven traffic flows to switches running OpenFlow or some other mechanism that provides interaction between the controller and the switch. Therefore, switches for SDN environments don’t need to be as smart as conventional “vertically integrated” switches that combine packet forwarding and the control plane in the same box.
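
As a rough illustration of why such switches can afford to be simple: in this model the data plane is little more than a table lookup followed by an action, with table misses punted up to the controller. The Python below is my own toy model of that behavior, not merchant-silicon firmware or any vendor’s code.

```python
# Illustrative model of a "thin" SDN switch data path: match each packet
# against controller-installed entries, apply the highest-priority hit's
# action, and hand table misses to the controller. Names are invented.
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional


@dataclass
class FlowEntry:
    match: Dict[str, str]
    action: str                     # e.g. "output:2" or "drop"
    priority: int = 0


def lookup(table: List[FlowEntry], packet: Dict[str, str]) -> Optional[FlowEntry]:
    """Return the highest-priority entry whose match fields all agree with the packet."""
    hits = [e for e in sorted(table, key=lambda e: -e.priority)
            if all(packet.get(k) == v for k, v in e.match.items())]
    return hits[0] if hits else None


def forward(table: List[FlowEntry], packet: Dict[str, str],
            packet_in: Callable[[Dict[str, str]], None]) -> str:
    entry = lookup(table, packet)
    if entry is None:
        packet_in(packet)           # table miss: ask the controller what to do
        return "sent to controller"
    return entry.action


if __name__ == "__main__":
    table = [FlowEntry({"dst_ip": "10.0.0.5"}, "output:2", priority=10)]
    print(forward(table, {"dst_ip": "10.0.0.5"}, print))    # hit -> output:2
    print(forward(table, {"dst_ip": "10.0.0.9"}, print))    # miss -> controller
```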

This isn’t just guesswork on my part. Two companies are cited in the Wired article as “brokers” and “arms dealers” between switch buyers and ODM suppliers. Pica8 is one, and Cumulus Networks is the other.

If you visit the Pica8 website, you’ll see that the company’s goal is “to commoditize the network industry and to make the network platforms easy to program, robust to operate, and low-cost to procure.” The company says it is “committed to providing high-quality open software with commoditized switches to break the current performance/price barrier of the network industry.” The company’s latest switch, the Pronto 3920, uses Broadcom’s Trident+ chipset, which Pica8 says can be found in other top-of-rack (ToR) switches, including the Cisco Nexus 3064, Force10 S4810, IBM G8264, Arista 7050S, and Juniper QFX3500.

That “high-quality open software” to which Pica8 refers? It features XORP open-source routing code, support for Open vSwitch and OpenFlow, and Linux. Pica8 also is a relatively longstanding member of ONF.

Hardware and Software Pedigrees

Cumulus Networks is the other switch arms dealer mentioned in the Wired article. There hasn’t been much public disclosure about Cumulus, and there isn’t much to see on the company’s website. From background information on the professional pasts of the company’s six principals, though, a picture emerges of a company that would be capable of putting together bespoke switch offerings, sourced directly from ODMs, much like those Pica8 delivers.

The co-founders of Cumulus are J.R. Rivers, quoted extensively in the Wired article, and Nolan Leake. A perusal of their LinkedIn profiles reveals that both describe Cumulus as “satisfying the networking needs of large Internet service clusters with high-performance, cost-effective networking equipment.”

Both men also worked at Cisco spin-in venture Nuova Systems, where Rivers served as vice president of systems architecture and Leake served in the “Office of the CTO.” Rivers has a hardware heritage, whereas Leake has a software background: he began his career building a Java IDE and held senior positions at VMware and 3Leaf Networks before joining Nuova.

Some of you might recall that 3Leaf’s assets were nearly acquired by Huawei, before the Chinese networking company withdrew its offer after meeting with strenuous objections from the Committee on Foreign Investment in the United States (CFIUS). It was just the latest setback for Huawei in its recurring and unsuccessful attempts to acquire American assets. 3Com, anyone?

For the record, Leake’s LinkedIn profile shows that his work at 3Leaf entailed leading “the development of a distributed virtual machine monitor that leveraged a ccNUMA ASIC to run multiple large (many-core) single system image OSes on a Infiniband-connected cluster of commodity x86 nodes.”

For Companies Not Named Google

Also at Cumulus is Shrijeet Mukherjee, who serves as the startup company’s vice president of software engineering. He was at Nuova, too, and worked at Cisco right up until early this year. At Cisco, Mukherjee focused on “virtualization-acceleration technologies, low-latency Ethernet solutions, Fibre Channel over Ethernet (FCoE), virtual switching, and data center networking technologies.” He boasts of having led the team that delivered the Cisco Virtualized Interface Card (vNIC) for the UCS server platform.

Another Nuova alumnus at Cumulus is Scott Feldman, who was employed at Cisco until May of last year. Among other projects, he served in a leading role on development of “Linux/ESX drivers for Cisco’s UCS vNIC.” (Do all these former Nuova guys at Cumulus realize that Cisco reportedly is offering big-bucks inducements to those who join its latest spin-in venture, Insieme?)

Before moving to Nuova and then to Cisco, J.R. Rivers was involved with Google’s in-house switch design. In the Wired article, Rivers explains the rationale behind Google’s switch design and the company’s evolving relationship with ODMs. Google originally bought switches designed by the ODMs, but now it designs its own switches and has the ODMs manufacture them to its specifications, similar to how Apple designs its iPads and iPhones, then contracts with Foxconn for assembly.

Rivers notes, not without reason, that Google is an unusual company. It can easily design its own switches, but other service providers possess neither the engineering expertise nor the desire to pursue that option. Nonetheless, they still might want the cost savings that accrue from buying bare-bones switches directly from an ODM. This is the market Cumulus wishes to serve.

Enterprise/Cloud-Service Provider Split

Quoting Rivers from the Wired story:

“We’ve been working for the last year on opening up a supply chain for traditional ODMs who want to sell the hardware on the open market for whoever wants to buy. For the buyers, there can be some very meaningful cost savings. Companies like Cisco and Force10 are just buying from these same ODMs and marking things up. Now, you can go directly to the people who manufacture it.”

It has appeal, but only for large service providers, and perhaps also for very large companies that run prodigious server farms, such as some financial-services concerns. There’s no imminent danger of irrelevance for Cisco, Juniper, HP, or Dell, who still have the vast enterprise market and even many service providers to serve.

But this is a trend worth watching, illustrating the growing chasm between the DIY hardware and software mentality of the biggest cloud shops and the more conventional approach to networking taken by enterprises.

SDN’s Continuing Evolution

At the risk of understatement, I’ll begin this post by acknowledging that we are witness to intensifying discussion about the applicability and potential of software-defined networking (SDN). Frequently, such discourse is conjoined and conflated with discussion of OpenFlow.

But the two, as we know, are neither the same nor necessarily inextricable. Software-defined networking is a big-picture concept involving controller-driven programmable networks whereas OpenFlow is a protocol that enables interaction between a control plane and the data plane of a switch.

Not Necessarily Inextricable

A salient point to remember — there are others, I’m sure, but I’m leaning toward minimalism today — is that, while SDN and OpenFlow often are presented as joined at the hip, they need not be. You can have SDN without OpenFlow. Furthermore, it’s worth bearing in mind that the real magic of SDN resides beyond OpenFlow’s reach, at a higher layer of abstraction in the SDN value hierarchy.

So, with that in mind, let’s take a brief detour into SDN history, to see whether the past can inform the present and illuminate the future. I was fortunate enough to have some help on this journey from  Amin Tootoonchian, a PhD student in the Systems and Networking Group, Department of Computer Science, University of Toronto.

Tootoonchian is actively involved in research projects related to software-defined networking and OpenFlow. He wrote a paper in conjunction with Yashar Ganjali, his advisor and an assistant professor at the University of Toronto, on HyperFlow, an application that runs on the open-source NOX controller to create a logically centralized but physically distributed control plane for OpenFlow. Tootoonchian developed and implemented HyperFlow, and he also is working on the next release of NOX. Recently, he spent six months pursuing SDN research at the University of California Berkeley.
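
As I understand the paper, the central idea in HyperFlow is that each controller instance serves its local switches directly but publishes network events to its peers, so every instance converges on the same network-wide view. Here’s a much-simplified Python sketch of that publish/subscribe pattern; it is my own toy model under stated assumptions, not HyperFlow’s implementation, which builds on NOX and a distributed event-propagation layer.

```python
# Toy illustration of a logically centralized, physically distributed control
# plane: controllers handle their own switches locally but replay events
# published by their peers, so all instances converge on one global view.
from typing import Dict, List


class EventChannel:
    """Stand-in for the publish/subscribe medium shared by the controllers."""
    def __init__(self) -> None:
        self.subscribers: List["Controller"] = []

    def publish(self, source: "Controller", event: Dict[str, str]) -> None:
        for ctrl in self.subscribers:
            if ctrl is not source:
                ctrl.apply(event)


class Controller:
    def __init__(self, name: str, channel: EventChannel) -> None:
        self.name, self.channel = name, channel
        self.network_view: List[Dict[str, str]] = []    # converges across peers
        channel.subscribers.append(self)

    def local_event(self, event: Dict[str, str]) -> None:
        """An event observed from a locally attached switch."""
        self.apply(event)
        self.channel.publish(self, event)    # let the peers replay it

    def apply(self, event: Dict[str, str]) -> None:
        self.network_view.append(event)


if __name__ == "__main__":
    bus = EventChannel()
    c1, c2 = Controller("ctrl-1", bus), Controller("ctrl-2", bus)
    c1.local_event({"type": "link_up", "switch": "s1", "port": "3"})
    c2.local_event({"type": "host_join", "switch": "s7", "mac": "aa:bb:cc:dd:ee:ff"})
    assert c1.network_view == c2.network_view    # both hold the same view
    print("both controllers see", len(c1.network_view), "events")
```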

His ongoing research has afforded insights into the origins and evolution of SDN. During a discussion over coffee, he kindly recommended some reference material for my edification and enlightenment. I’m all for generosity here, so I’m going to share those recommendations with you in what might become a series of posts. (I’d like to be more definitive, I really would, but I never know where I’m going to steer this thing I call a blog. It all comes down to time, opportunity, circumstances, and whether I get hit by a bus.)

Anyway, let’s start, strangely enough, at the beginning, with SDN concepts that ultimately led to the development of the OpenFlow protocol.

4D and Ethane: SDN Milestones 

Tootoonchian pointed me to papers and previous research involving academic projects such as 4D and Ethane, which served as recent antecedents to OpenFlow. There are other papers and initiatives he mentioned, a few of which I will reference, if all goes according to my current plan, in forthcoming posts.

Before 4D and Ethane, however, there were other SDN predecessors, most of which were captured in a presentation by Edward Crabbe, network architect at Google. Helpfully titled “The (Long) Road to SDN,” Crabbe’s presentation was given at a Tech Field Day last autumn.

Crabbe draws an SDN evolutionary line from Ipsilon’s General Switch Management Protocol (GSMP) in 1996 through a number of subsequent initiatives — including the IETF’s Forwarding and Control Element Separation (ForCES) and Path Computation Element (PCE) working groups — gradually progressing toward the advent of OpenFlow in 2008. He points to common threads in SDN that include partitioning of resources and control within network elements, as well as minimization of the network-element local control plane, involving offline control of forwarding state and of network-element resource allocation.

As for why SDN has drawn growing interest, development, and support, Crabbe cites two main reasons: cost and “innovation velocity.” I (and others) have touched on the cost savings previously, but Crabbe’s particular view from the parapets of Google warrants attention.

Capex and Opex Savings 

In his presentation, Crabbe cites cost savings relating to both capital and operating expenditures.

On the capex side, he notes that SDN can deliver efficient use of  IT infrastructure resources, which, I note, results in the need to purchase fewer new resources. He makes particular mention of how efficient resource utilization applies to network element CPU and memory as well as to underlying network capacity. He also notes SDN’s facility at moving the “heaviest workloads off expensive, relatively slow embedded systems to cheap, fast, commodity hardware.” Unstated, but seemingly implicit, is that the former are often proprietary whereas the latter are not.

Crabbe also mentions that capex savings can accrue from SDN’s ability to “provide visibility into, and synchronized control of, network state, such that underlying capacity may be used more efficiently.” Again, efficient utilization of the resources one owns means one derives full value from them before having to allocate spending to the purchase of new ones.

As for lower operating expenditures, Crabbe broadly states that SDN enables reduced network complexity, which results in less operational overhead and fewer outages. He offers a number of supporting examples, and the case he makes is straightforward and valid. If you can reduce network complexity, you will mitigate operational risk, save time, boost network-related productivity, and perhaps get the opportunity to allocate valuable resources to other, potentially more productive uses.

Enterprise Narrative Just Beginning 

Speaking of which, that brings us to Crabbe’s assertion that SDN confers “innovation velocity.” He cites several examples of how and where such innovation can be expedited, including faster feature implementation and deployment; partitioning of resources and control for relatively safe experimentation; and implementations on “relatively simple, well-known systems with well-defined interfaces.” Finally, he also emphasizes that the decoupling of the control plane from the network element facilitates “novel decision algorithms and hardware uses.”

It makes sense, all of it, at least insofar as Google is concerned. Crabbe’s points, of course, are similarly valid for other web-scale, cloud service providers.  But what about enterprises, large and small? Well, that’s a question still to be explored and answered, though the early adopters IBM and NEC brought forward earlier this week indicate that SDN also has a future in at least a few enterprise application environments.

IBM and NEC Find Early Adopters for OpenFlow-based SDNs

News arrived today that IBM and NEC have joined forces to work on OpenFlow deployments. The two companies’ joint solution integrates IBM’s OpenFlow-enabled RackSwitch G8264 10/40GbE top-of-rack switch with NEC’s ProgrammableFlow Controller, PF5240 1/10 Gigabit Ethernet Switch, and PF5820 10/40 Gigabit Ethernet Switch.

What’s more, the two technology partners boast early adopters, who are using OpenFlow-based software-defined networks (SDNs) for real-world applications.

Actual Deployments by Early Adopters

Granted, one of those organizations, Stanford University, is firmly ensconced in academia, but the other two are commercial concerns, which are using the technology for applications that apparently confer significant business value. As Stacey Higginbotham writes at GigaOm, these deployments validate the commercial potential of SDNs that utilize the OpenFlow protocol in enterprise environments.

The three early adopters cover some intriguing application scenarios. Tervela, a purveyor of a distributed data fabric, says the joint solution delivers dynamic networking that ensures predictable Big Data performance for complex, demanding applications such as global trading, risk analysis, and e-commerce.

Another early adopter is Selerity Corporation. At Network Computing, Mike Fratto provides an excellent overview of how Selerity — which provides real-time, machine-readable financial information to its subscribers — is using the technology to save money and reduce complexity by replacing a convoluted set of VLANs, high-end firewalls, and  application-level processes with flow rules defined on NEC’s Programmable Flow Controller.
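
To give a flavor of what replacing VLANs and firewall rules with flow rules can look like, here’s an illustrative Python sketch that compiles a small whitelist of permitted (source, destination, port) tuples into flow entries backed by a default drop. The particulars are invented for illustration; this is not Selerity’s or NEC’s actual configuration.

```python
# Illustrative only: compile an access whitelist into flow entries with a
# default-drop catch-all, the kind of segmentation that might otherwise be
# assembled from VLANs plus firewall policy. The tuples are invented examples.
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class FlowEntry:
    match: Dict[str, str]
    action: str
    priority: int


def compile_whitelist(permits: List[Tuple[str, str, int]]) -> List[FlowEntry]:
    entries = [
        FlowEntry({"src_ip": src, "dst_ip": dst, "tcp_dst": str(port)},
                  action="forward", priority=100)
        for src, dst, port in permits
    ]
    # Lowest-priority catch-all: anything not explicitly permitted is dropped.
    entries.append(FlowEntry(match={}, action="drop", priority=0))
    return entries


if __name__ == "__main__":
    permits = [
        ("10.1.1.10", "10.2.2.20", 443),    # feed ingest -> distribution tier
        ("10.2.2.20", "10.3.3.30", 8443),   # distribution -> subscriber gateway
    ]
    for entry in compile_whitelist(permits):
        print(entry)
```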

More to Come

Stanford, which, along with the University of California Berkeley, first developed the OpenFlow protocol, is using the NEC-IBM networking gear to  deploy a campus-wide experimental network that will run alongside its production backbone network. As Higginbotham writes (see link above), Stanford is using network programmability to provision bandwidth on demand for campus researchers.

It’s good to read details about OpenFlow deployments and about how bigger-picture SDNs can be applied for real-world benefits. I suspect we’ll be reading about more SDN deployments as the year progresses.

One quibble I have with the IBM press release is that it does not clearly demarcate where OpenFlow ends at the controller and where SDN abstraction and higher-layer application intelligence take over.

Applications Drive Adoption

Reading about these early deployments, I couldn’t help but conclude that most of the value — and doubtless professional-service revenue for IBM — is derived through the application logic that informs the controller. Those applications ride above OpenFlow, which only serves the purpose of allowing the controller to communicate with the switch so that it forwards packets in a prescribed manner.

Put another way, as pointed out by those with more technical acumen than your humble scribe, OpenFlow is a protocol for interaction between the control and the forwarding plane. It serves a commendable purpose, but it’s a purpose that can be fulfilled in other ways.

What’s compelling and potentially unique about emerging SDNs are the new applications that drive their adoption. Others have written about where SDNs do and don’t make sense, and now we’re beginning to see tangible confirmation from the marketplace, the ultimate arbiter of all things commercial.

Big Switch Hopes Floodlight Draws Crowd

As the curtain came down on 2011, software-defined networking (SDN) and its open-source enabling protocol, OpenFlow, continued to draw plenty of attention. So far, 2012 has been no different, with SDN serving as a locus of intense activity, heady discourse, and steady technological advance.

Just last week, for instance, Big Switch Networks announced the release of Floodlight, a Java-based, Apache-licensed OpenFlow controller. In making Floodlight available under the Apache license, which allows the code to be reused for both research and commercial purposes, Big Switch hopes to establish the controller as a platform for OpenFlow application development.

Big Switch acknowledges that other OpenFlow controllers are available — the company even asks rhetorically, in a blog post accompanying the announcement, whether the world really needs another OpenFlow controller — but it believes that Floodlight is differentiated through its ease of use, extensibility, and robustness.

Controller as Platform 

I think we all realize by now that OpenFlow is just an SDN protocol. It allows data-path flow tables on switches to be programmed by a software-based controller, represented by the likes of Floodlight.  While OpenFlow might be essential as a mechanism for the realization of software-defined networks, it is not where SDN business value will be delivered or where vendors will find their pots of gold.
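
For a concrete sense of what programming those flow tables looks like in practice, here’s a hedged sketch that pushes a single static flow entry to a Floodlight controller through its REST interface, using only the Python standard library. The endpoint path and JSON field names are recalled from Floodlight’s Static Flow Pusher and vary between releases, so treat them as assumptions to be checked against the documentation of the build you’re running.

```python
# A hedged sketch: push one static flow entry to a Floodlight controller over
# REST. The endpoint path and field names are assumptions from memory of the
# Static Flow Pusher module and may differ in your Floodlight release.
import json
import urllib.request

CONTROLLER = "http://127.0.0.1:8080"          # assumes Floodlight's default REST port
ENDPOINT = "/wm/staticflowentrypusher/json"   # assumed Static Flow Pusher path

flow = {
    "switch": "00:00:00:00:00:00:00:01",      # DPID of the target switch
    "name": "demo-flow-1",
    "priority": "100",
    "ingress-port": "1",                      # assumed field name in this release
    "active": "true",
    "actions": "output=2",
}

request = urllib.request.Request(
    CONTROLLER + ENDPOINT,
    data=json.dumps(flow).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Requires a running controller; prints the controller's JSON reply on success.
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))
```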

Next up in the hierarchy of SDN value are the controllers. As Big Switch recognizes, they can serve as platforms for SDN application development. Many vendors, including HP, believe that applications will define the value (and hence the money-making potential) in the SDN universe. That’s a fair assumption.

Big Switch Networks has indicated that it wants to be the “VMware of networking,” delivering network virtualization and providing enterprise-oriented OpenFlow applications. If it can establish its controller as a popular platform for OpenFlow application development, it will set a foundation both for its own commercial success and for enterprise OpenFlow in general.

Seeking Enterprise Value

The key to success, of course, will be the degree to which the applications, and the business value that accrues from them, are compelling. We’ll also see management and orchestration, perhaps integrated with the controller(s), but the commercial acceptance of the applications will determine the need and scope for automated management of the overall SDN environment. This is particularly true in the enterprise market that Big Switch has targeted.

What will those enterprise applications be? Well, if I knew the answer to that question, I might be on a personal trajectory to obscene wealth, membership in an exclusive secret society, and perhaps ownership of a professional sports team (or, at minimum, a racehorse).

Service Providers Have Different Agenda

Meanwhile, in the rarefied heights of the largest cloud providers, such as the companies that populate the board at the Open Networking Foundation (ONF), I suspect that nearly everything of meaningful business value connected with OpenFlow and SDN will be done internally. Google and Facebook, for instance, will design and build (perhaps through ODMs) their own bare-bones servers and switches, and they will develop their own SDN controllers and applications. Their network infrastructure is a business asset, even a competitive advantage, and they will prefer to build and customize their own SDN environments rather than procure products and solutions from networking vendors, whether established players or startups.

Most enterprises, though, will be inclined to look toward the vendor community to equip them with SDN-related products, technologies, and expertise. This is presuming, of course, that an enterprise market for OpenFlow-based SDNs actually finds its legs.

Plenty of Work Ahead

So, again, it all comes back to the power and value of the applications, and this is why Big Switch is so keen to open-source its controller.  The enterprise market for OpenFlow-based SDNs won’t grow unless IT departments are comfortable adopting it. Vendors such as Big Switch will have to demonstrate that they are safe bets, capable of providing unprecedented value at minimal risk.

It’s a daunting challenge. OpenFlow definitely possesses long-term enterprise potential, but today it remains a long way from being able to check all the enterprise boxes. Big Switch, not to mention the enterprise OpenFlow community, needs a meaningful ecosystem to materialize sooner rather than later.

Like OpenFlow, Open Compute Signals Shift in Industry Power

I’ve written quite a bit recently about OpenFlow and the Open Networking Foundation (ONF). For a change of pace, I will focus today on the Open Compute Project.

In many ways, even though OpenFlow deals with networking infrastructure and Open Compute deals with computing infrastructure, they are analogous movements, springing from the same fundamental set of industry dynamics.

Open Compute was introduced formally to the world in April. Its ostensible goal was “to develop servers and data centers following the model traditionally associated with open-source software projects.”  That’s true insofar as it goes, but it’s only part of the story. The stated goal actually is a means to an end, which is to devise an operational template that allows cloud behemoths such as Facebook to save lots of money on computing infrastructure. It’s all about commoditizing and optimizing the operational efficiency of the hardware encompassed within many of the largest cloud data centers that don’t belong to Google.

Speaking of Google, it is not involved with Open Compute. That’s primarily because Google was taking a DIY approach to its data centers long before Facebook began working on the blueprint for the Open Compute Project.

Google as DIY Trailblazer

For Google, its ability to develop and deliver its own data-center technologies — spanning computing, networking and storage infrastructure — became a source of competitive advantage. By using off-the-shelf hardware components, Google was able to provide itself with cost- and energy-efficient data-center infrastructure that did exactly what it needed to do — and no more. Moreover, Google no longer had to pay a premium to technology vendors that offered products that weren’t ideally suited to its requirements and that offered extraneous “higher-value” (pricier) features and functionality.

Observing how Google had used its scale and its ample resources to fashion its cost-saving infrastructure, Facebook  considered how it might follow suit. The goal at Facebook was to save money, of course, but also to mitigate or perhaps eliminate the infrastructure-based competitive advantage Google had developed. Facebook realized that it could never compete with Google at scale in the infrastructure cost-saving game, so it sought to enlist others in the cause.

And so the Open Compute Project was born. The aim is to have a community of shared interest deliver cost-saving open-hardware innovations that can help Facebook scale its infrastructure at an operational efficiency approximating Google’s. If others besides Facebook benefit, so be it. That’s not a concern.

Collateral Damage

As Facebook seeks to boost its advertising revenue, it is effectively competing with Google. The search giant still derives nearly 97 percent of its revenue from advertising, and its Google+ is intended to distract, if not derail, Facebook’s core business, just as Google Apps is meant to keep Microsoft focused on protecting one of its crown jewels rather than on allocating more corporate resources to search and search advertising.

There’s nothing particularly striking about that. Cloud service providers are expected to compete against each other by developing new revenue-generating services and by achieving new cost-saving operational efficiencies. In that context, the Open Compute Project can be seen, at least in one respect, as Facebook’s open-source bid to level the infrastructure playing field and undercut, as previously noted, what has been a Google competitive advantage.

But there’s another dynamic at play. As the leading cloud providers with their vast data centers increasingly seek to develop their own hardware infrastructure — or to create an open-source model that facilitates its delivery — we will witness some significant collateral damage. Those taking the hit, as is becoming apparent, will be the hardware systems vendors, including HP, IBM, Oracle (Sun), Dell, and even Cisco. That’s only on the computing side of the house, of course. In networking, as software-defined networking (SDN) and OpenFlow find ready embrace among the large cloud shops, Cisco and others will be subject to the loss of revenue and profit margin, though how much and how soon remain to be seen.

Who’s Steering the OCP Ship?

So, who, aside from Facebook, will set the strategic agenda of Open Compute? To answer that question, we need only consult the identities of those named to the Open Compute Project Foundation’s board of directors:

  • Chairman/President – Frank Frankovsky, Director, Technical Operations at Facebook
  • Jason Waxman, General Manager, High Density Computing, Data Center Group, Intel
  • Mark Roenigk, Chief Operating Officer, Rackspace Hosting
  • Andy Bechtolsheim, Industry Guru
  • Don Duet, Managing Director, Goldman Sachs

It’s no shocker that Facebook retains the chairman’s role. Facebook didn’t launch this initiative to have somebody else steer the ship.

Similarly, it’s not a surprise that Intel is involved. Intel benefits regardless of whether cloud shops build their own systems, buy them from HP or Dell, or even get them from a Taiwanese or Chinese ODM.

As for the Rackspace representation, that makes sense, too. Rackspace already has OpenStack, open-source software for private and public clouds, and the Open Compute approach provides a logical hardware complement to that effort.

After that, though, the board membership of the Open Compute Project Foundation gets rather interesting.

Examining Bechtolsheim’s Involvement

First, there’s the intriguing presence of Andy Bechtolsheim. Those who follow the networking industry will know that Andy Bechtolsheim is more than an “industry guru,” whatever that means. Among his many roles, Bechtolsheim serves as the chief development officer and co-founder of Arista Networks, a growing rival to Cisco in low-latency data-center switching, especially at cloud-scale web shops and financial-services companies. It bears repeating that Open Compute’s mandate does not extend to network infrastructure, which is the preserve of the analogous OpenFlow.

Bechtolsheim’s history is replete with successes, as a technologist and as an investor. He was one of the earliest investors in Google, which makes his involvement in Open Compute deliciously ironic.

More recently, he disclosed a seed-stage investment in Nebula, which, as Derrick Harris at GigaOM wrote this summer, has “developed a hardware appliance pre-loaded with customized OpenStack software and Arista networking tools, designed to manage racks of commodity servers as a private cloud.” The reference architectures for the commodity servers comprise Dell’s PowerEdge C Micro Servers and servers that adhere to Open Compute specifications.

We know, then, why Bechtolsheim is on the board. He’s a high-profile presence that I’m sure Open Compute was only too happy to welcome with open arms (pardon the pun), and he also has business interests that would benefit from a furtherance of Open Compute’s agenda. Not to put too fine a point on it, but there’s an Arista and a Nebula dimension to Bechtolsheim’s board role at the Open Compute Project Foundation.

OpenStack Angle for Rackspace, Dell

Interestingly, the board presence of Bechtolsheim and of Rackspace’s Mark Roenigk emphasizes OpenStack considerations, as does Dell’s involvement with Open Compute. Dell doesn’t have a board seat — at least not according to the Open Compute website — but it seems to think it can build a business for solutions based on Open Compute and OpenStack among second-tier purveyors of public-cloud services and among those pursuing large private or hybrid clouds. Both will become key strategic markets for Dell as its SMB installed base migrates applications and spending to the cloud.

Dell notably lost a chunk of server business when Facebook chose to go the DIY route, in conjunction with Taiwanese ODM Quanta Computer, for servers in its data center in Prineville, Oregon. Through its involvement in Open Compute, Dell might be trying to regain lost ground at Facebook, but I suspect that ship has sailed. Instead, Dell probably is attempting to ensure that it prevents or mitigates potential market erosion among smaller service providers and enterprise customers.

What Goldman Sachs Wants

The other intriguing presence on the Open Compute Project Foundation board is Don Duet from Goldman Sachs. Here’s what Duet had to say about his firm’s involvement with Open Compute:

“We build a lot of our own technology, but we are not at the hyperscale of Google or Facebook. We are a mid-scale company with a large global footprint. The work done by the OCP has the potential to lower the TCO [total cost of ownership] and we are extremely interested in that.”

Indeed, that perspective probably worries major server vendors more than anything else about Open Compute. Once Goldman Sachs goes this route, other financial-services firms will be inclined to follow, and nobody knows where the market attrition will end, presuming it ends at all.

Like Facebook, Goldman Sachs saw what Google was doing with its home-brewed, scale-out data-center infrastructure, and wondered how it might achieve similar business benefits. That has to be disconcerting news for major server vendors.

Welcome to the Future

The big takeaway for me, as I absorb these developments, is how the power axis of the industry is shifting. The big systems vendors used to set the agenda, promoting and pushing their products and influencing the influencers so that enterprise buyers kept their growth rates on the uptick. Now, though, a combination of factors — widespread data-center virtualization, the rise of cloud computing, a persistent and protracted global economic downturn (which has placed unprecedented emphasis on IT cost containment) — is reshaping the IT universe.

Welcome to the future. Some might like it more than others, but there’s no going back.