
Amazon-RIM: Summer Reunion?

Think back to last December, just before the holidays. You might recall a Reuters report, quoting “people with knowledge of the situation,” claiming that Research in Motion (RIM) rejected takeover propositions from Amazon.com and others.

The report wasn’t clear on whether the informal discussions resulted in any talk of price between Amazon and RIM, but apparently no formal offer was made. RIM, then still under the stewardship of former co-CEOs Jim Balsillie and Mike Lazaridis, reportedly preferred to remain independent and to address its challenges alone.

I Know What You Discussed Last Summer

Since then, a lot has happened. When the Reuters report was published — on December 20, 2011 — RIM’s market value had plunged 77 percent during the previous year, sitting then at about $6.8 billion. Today, RIM’s market capitalization is $3.7 billion. What’s more, the company now has Thorsten Heins as its CEO, not Balsillie and Lazaridis, who were adamantly opposed to selling the company. We also have seen recent reports that IBM approached RIM regarding a potential acquisition of the Waterloo, Ontario-based company’s enterprise business, and rumors have surfaced that RIM might sell its handset business to Amazon or Facebook.

Meanwhile, RIM’s prospects for long-term success aren’t any brighter than they were last winter, and activist shareholders, not interested in a protracted turnaround effort, continue to lobby for a sale of the company.

As for Amazon, it is said to be on the cusp of entering the smartphone market, presumably using a forked version of Android, which is what it runs on the Kindle tablet.  From the vantage point of the boardroom at Amazon, that might not be a sustainable long-term plan. Google is looking more like an Amazon competitor, and the future trajectory of Android is clouded by Google’s strategic considerations and by legal imbroglios relating to patents. Those presumably were among the reasons Amazon approached RIM last December.

Uneasy Bedfellows

It’s no secret that Amazon and Google are uneasy Android bedfellows. As Eric Jackson wrote just after the Reuters story hit the wires:

Amazon has never been a big supporter of Google’s Android OS for its Kindle. And Google’s never been keen on promoting Amazon as part of the Android ecosystem. It seems that both companies know this is just a matter of time before each leaves the other.

Yes, there’s some question as to how much value inheres in RIM’s patents. Estimates on their worth are all over the map. Nevertheless, RIM’s QNX mobile-operating system could look compelling to Amazon. With QNX and with RIM’s patents, Amazon would have something more than a contingency plan against any strategic machinations by Google or any potential litigiousness by Apple (or others).  The foregoing case, of course, rests on the assumption that QNX, rechristened BlackBerry 10, is as far along as RIM claims. It also rests on the assumption that Amazon wants a mobile platform all its own.

It was last summer when Amazon reportedly made its informal approach to RIM. It would not be surprising to learn that a reprise of discussions occurred this summer. RIM might be more disposed to consider a formal offer this time around.

Inevitability of Virtualized Infrastructure

As a previous post, Infrastructure Virtualization Versus Converged Infrastructure, attests, I strongly believe that virtualization is leading us to a future in which underlying hardware becomes largely undifferentiated and interchangeable. Applications and orchestration will reside in software riding atop the virtualization layer, which effectively will function as an abstraction buffer above hardware infrastructure. The latter will eventually include hardware for compute, networking, and storage.

Vendors wedded to hardware-based business models will have trouble adapting to this new reality. Many of these companies have hordes of software developers and software engineers, but they inextricably intertwine their software and hardware as a matter of business practice, selling the latter as proprietary boxes that often cannot interoperate with, or be swapped out for, competing hardware. It’s classic hardware-based vendor lock-in, and it’s been with us for many years. This applies to vendors that sell all three main types of hardware infrastructure, and to those that sell them tied together as converged infrastructure.

Loosening a Tenacious Grip

Proprietary data-center hardware would appear to be running on borrowed time, though it will not disappear overnight. Its grip will be especially tenacious in the enterprise, though the pull of the cloud eventually will weaken its hold. Proprietary compute infrastructure will be the first to succumb, but networking and storage will fall, too. The economic and operational logic powering the transition is inexorable, so it’s a question of when, not whether, it will happen.

While CapEx savings are an obvious benefit, operational flexibility (shifting workloads with agility and less effort) and OpEx savings also are factors. Infrastructure hardware will be cheaper, as well as easier and less costly to run. Pools of industry-standard hardware will be reallocated on demand to serve the needs of application workloads. Data-center customers no longer will be constrained by the hardware-release schedules of their previous vendors of choice. Customers also will be able to take advantage of the latest industry-standard chipsets, which will power hardware with improved energy efficiency and better cooling characteristics.

In servers, and now in storage, Facebook’s Open Compute Project (OCP) has sought to expedite the move to off-the-shelf hardware. Last week at OSCON, Frank Frankovsky, a vice president at Facebook and the chairman and president of the OCP, rallied the open-source troops by arguing that proprietary x86 systems are “gratuitously differentiated.” He called for all hardware-design specifications to be open.

OCP as Competitive Cudgel

That would benefit Facebook, which launched OCP as a vehicle to help it lower data-center CapEx and OpEx, boost operational flexibility, and — last but not least — mitigate a competitive advantage held by Google, which had a massive head start in rationalizing and fine-tuning its data centers and IT infrastructure. In fact, Google cloaks its IT operations in extreme secrecy, believing that its practices and technologies deliver substantial competitive advantage over its main rivals, including Facebook. The latter must agree, because the animating idea behind Open Compute is to create a market, demand and supply, for commodity server hardware that will reduce or eliminate Google’s edge.

Some have wondered why Google hasn’t joined OCP, but the answer should be obvious. Google believes it has cracked the infrastructure code, and it is therefore disinclined to share its insights and best practices with its competitors. Google isn’t a fan of proprietary vanity hardware — it’s been designing its own gear, then going to server and network ODMs, for some time now — but Google feels it has nothing to gain, and much to lose, from opening its kimono to the OCP crowd.

With networking, though, Google felt it needed a little help from its friends — as well as from its enemies. That explains why it allied with Facebook and other cloud-service providers in the Open Networking Foundation (ONF), which I have written about here on many occasions. The goal of the ONF, as with OCP, is to slip the proprietary shackles of hardware vendors, whose gear functions as an impediment to operational agility as well as a cost that could be reduced through SDN-style network virtualization. Google’s communitarian approach to addressing the network-virtualization riddle suggests that it believes it cannot achieve the desired outcome on its own.

Cracking the Nut

Whereas compute hardware was well on its way to standardization, networking hardware, until the ONF, was akin to a vertically integrated mainframe system, replete with a proliferating number of both proprietary and industry-standard protocols. Networking is a bigger, and tougher, nut to crack.

But crack it will, first at the big cloud-service providers, then, as the cloud gains momentum, at enterprises.

PS: I will post something tomorrow about VMware’s just-announced acquisition of Nicira, which is big news no matter how you slice it.  I wrote the above post before I learned of the acquisition.

Cisco’s SDN Response: Mission Accomplished, but Long Battle Ahead

In concluding my last post, I said I would write a subsequent note on whether Cisco achieved its objectives in its rejoinder to software-defined networking (SDN) at the Cisco Live conference last week in San Diego.

Because Cisco is the largest player in network infrastructure, its words carry considerable weight. When Cisco talks, its customers (and the industry ecosystem) listen. Accordingly, we witnessed extensive coverage of the company’s Cisco Open Network Environment (Cisco ONE) proclamations last week.

Really, what Cisco announced with Cisco ONE was relatively modest and wholly unsurprising. What was surprising was the broad spectrum of reactions to what was effectively a positioning statement from the networking market’s leading vendor.

Mission Accomplished . . . For Now

And that positioning statement wasn’t so much about SDN, or about the switch-control protocol OpenFlow, but about something more specific to Cisco, whose installed base of customers, especially in the enterprise, is increasingly curious about SDN. Indeed, Cisco’s response to SDN should be seen, first and foremost, as a response to its customers. One could construe it as a cynical gesture to “freeze the market,” but that would not do full justice to the rationale. Instead, let’s just say that Cisco’s customers wanted to know how their vendor of choice would respond to SDN, and Cisco was more than willing to oblige.

In that regard, it was mission accomplished. Cisco gave its enterprise customers enough reason to put off a serious dalliance with SDN, at least for the foreseeable future (which isn’t that long). But that’s all it did. I didn’t see a vision from Cisco. What I saw was an effective counterpunch — but definitely not a knockout — against a long-term threat to its core market.

Cisco achieved its objective partly by offering its own take on network programmability, replete with a heavy emphasis on APIs and northbound interfaces; but it also did it partly by bashing OpenFlow, the open  protocol that effects physical separation of the network-element control and forwarding planes.

Conflating OpenFlow and SDN

In its criticism of OpenFlow, Cisco sought to conflate the protocol with the larger SDN architecture. As I and many others have noted repeatedly, OpenFlow is not SDN; the two are separable. It is possible to deliver an SDN architecture without OpenFlow, and even when OpenFlow is included, it’s a small part of the overall picture. SDN is more than a mechanism by which a physically separate control plane directs packet forwarding on a switch.

If you listened to Cisco last week, however, you would have gotten the distinct impression that OpenFlow and SDN are indistinguishable, and that all that’s happening in SDN is a southbound conversation between a server-based software controller and OpenFlow-capable switches. That’s not true, but the Open Networking Foundation (ONF), the custodian of SDN and OpenFlow, has left an opening that Cisco is only too happy to exploit.

The fact is, the cloud service-provider principals steering the ONF see SDN playing a much bigger role than Cisco would have you believe. OpenFlow is a starting point. It is a means to, well, another means — because SDN is an enabler, too. What SDN enables is network virtualization and network programmability, though not by the route Cisco would like its customers to take.

Cisco Knows SDN Is More Than OpenFlow

To illustrate my point, I refer you to the relatively crude ONF SDN architectural stack showcased in a white paper, Software-Defined Networking: The New Norm for Networks. If you consult the diagram in that document, you will see that OpenFlow is the connective tissue between the controller and the switch — what ONF’s Dan Pitt has described as an “open interface to packet forwarding” — but you will also see that there are abstraction layers that reside well above OpenFlow.

If you want an even more detailed look at a “modern” SDN architecture, you can consult a presentation given by Cisco’s David Meyer earlier this year. That presentation features physical hardware at the base, with SDN components in the middle. These SDN components include the “forwarding interface abstraction” represented by OpenFlow, a network operating system (NOS) running on a controller (server), a “nypervisor” (network hypervisor), and a global management abstraction that interfaces with the control logic of higher-layer application (control) programs.

So, Cisco clearly knows that SDN comprises more than OpenFlow, but, in its statements last week at Cisco Live, the company preferred to use the protocol as a strawman in its arguments for Cisco-centric network programmability. You can’t blame Cisco, though. It has customers to serve — and to keep in the revenue- and profit-generating fold — and an enterprise-networking franchise to protect.

Mind the Gap

But why did the ONF leave this gap for Cisco to fill? It’s partly because the ONF isn’t overly concerned with the enterprise and partly because the ONF sees OpenFlow as an open, essential precondition for the higher, richer layers of the SDN architectural model.

Without the physical separation of the control plane from the forwarding plane, after all, some of the ONF’s service-provider constituency might not have been able to break free of vendor hegemony in their networks. What’s more, they wouldn’t be able to set the stage for low-priced, ODM-manufactured networking hardware built with merchant silicon.

As you can imagine, that is not the sort of change that Cisco can get behind, much less lead. Therefore, Cisco breaks out the brickbats and goes in hot pursuit of OpenFlow, which it then portrays as deficient for the purposes of far-reaching, north-and-south network programmability.

Exiting (Not Exciting) Plumbing

Make no mistake, though. The ONF has a vision, and it extends well beyond OpenFlow. At a conference in Garmisch, Germany, earlier this year, Dan Pitt, the ONF’s executive director, offered a presentation called “A Revolution in Networking and Standards,” and made the following comments:

“I think networking is going to become an integral part of computing in a way that makes it less important, because it’s less of a problem. It’s not the black sheep any longer. And the same tools you use to create an IT computing infrastructure or virtualization, performance, and policy will flow through to the network component of that as well, without special effort.

I think enterprises are going to be exiting technology – or exiting plumbing. They are not going to care about the plumbing, whether it’s their networks or the cloud networks that increasingly meet their needs, and the cloud services. They’re going to say, here’s the function or the feature I want for my business goal, and you make it happen. And somebody worries about the plumbing, but not as many people who worry about plumbing today. And if you’ve got this virtualized view, you don’t have to look at the plumbing. . . .

The operators are gradually becoming software companies and internet companies. They are bulking up on those skills. They want to be able to add those services and features themselves instead of relying on the vendors, and doing it quickly for their customers. It gives opportunities to operators that they didn’t have before of operating more diverse services and experimenting at low cost with new services.”

No Cartwheels

Again, this is not a vision that would have John Chambers doing cartwheels across the expansive Cisco campus.

While the ONF is making plans to address the northbound interfaces that are a major element in Cisco’s network programmability, it hasn’t done so yet. Even when it does, the ONF is unlikely to standardize higher-layer APIs, at least in the near term. Instead, those APIs will be associated with the controllers that get deployed in customer networks. In other words, the ONF will let the market decide.

On that tenet, Cisco can agree with the ONF. It, too, would like the market to decide, especially since its market presence — the investments customers have made in its routers and switches, and in its protocols and management tools — towers imperiously over the meager real estate being claimed in the nascent SDN market.

With all that Cisco network infrastructure deployed in customer networks, Cisco believes it’s in a commanding position to set the terms for how the network will deliver software intelligence to programmers of applications and management systems. Theoretically, that’s true, but the challenge for Cisco will be in successfully engaging a programming constituency that isn’t its core audience. Can Cisco do it? It will be a stretch.

Do They Get It?

All the while, the ONF and its service-provider backers will be advancing and promoting the SDN model and the network virtualization and programmability that accompany it. The question for the ONF is not whether its movers and shakers understand programmers — it’s pretty clear that Google, Facebook, Microsoft, and Yahoo are familiar with programmers — but whether the ONF understands and cares enough about the enterprise to make that market a priority in its technology roadmap.

If the ONF leaves the enterprise to the dictates of the Internet Engineering Task Force (IETF) and Institute of Electrical and Electronics Engineers (IEEE), Cisco is likely to maintain its enterprise dominance with an approach that provides some benefits of network programmability without the need for server-based controllers.

Meanwhile, as Tom Nolle, president of CIMI Corporation, has pointed out, Cisco ONE also serves as a challenge to Cisco’s conventional networking competitors, which are devising their own answers to SDN.

But that is a different thread, and this one is too long already.

Why Established Networking Vendors Aren’t Leading SDN Charge

Expressing equal parts exasperation and incredulity, Greg Ferro wonders why industry-leading networking vendors aren’t taking the innovative initiative in offering compelling strategies for software-defined networking (SDN).

The answer seems clear enough.

Although applications will be critical to the long-term commercial success of SDN, Google and the other movers and shakers that direct the affairs of the Open Networking Foundation (ONF) originally were drawn to SDN because they were frustrated with the lack of responsiveness and innovation from established vendors. As a result, they devised a networking model that not only separated the control and data planes of network elements, but that also, in the words of Google’s Amin Vahdat, separated the “evolution path for (network) hardware and software.”

Two Paths

Until now, those evolutionary paths have been converged and constrained inside the largely proprietary boxes of networking vendors. Google and its confreres in the ONF perceived that state of affairs as the yoke of vendor oppression. The network, slow to evolve and innovate, was getting in the way of progress. All the combustible ingredients of a cloud-service-provider insurrection had cohered. Google, taking the lead in organizing the other major service providers under the rubric of the ONF, lit the fuse.

The effects of the explosion are just being felt, and the reverberations will echo for some time. The big service providers, and perhaps many smaller ones, are gravitating away from the orbit of networking’s ancien regime. The question now is whether enterprises will follow. At some point, that probably will happen, but how and when it will unfold are less clear. Enterprises, unlike the board members of the ONF, are too diverse and numerous to organize in pursuit of common interests. Accordingly, vendors are still able to set the enterprise agenda.

But enterprises will notice the benefits that SDN is capable of conferring, and the ONF’s overlords will seek to cultivate and sustain an ecosystem that can deliver parallel hardware and software innovation. Google, for example, has indicated that while it develops its own networking hardware today, it would be amenable to buying OpenFlow switches from the vendor community. Those switches, likely to carry lower margins and prices than the gear sold by the major networking vendors, will probably come from ODMs using merchant silicon from Broadcom, Marvell, Fulcrum (Intel), and others.

Money’s in the Software

The major networking vendors are saying that the cleavage of the control and data planes is not a big deal, that it is neither necessary nor critical for innovation and network programmability. Perhaps there is some merit to their arguments, but there’s no question that the separation of the control and data planes is not in their business interests. If some of their assertions have merit, they also are self-serving.

Cisco, as we’ve discussed before, might be able to develop software, but its business model is predicated on the sale of routers and switches. Effectively, it would have to remake itself comprehensively to recast itself as a vendor of server-based controllers (software) and the applications that run on them. A proprietary hardware box, whether a server or switch, isn’t what the ONF wants.

If the ONF’s SDN vision prevails, the money is in software: server-based controllers, applications, management/orchestration frameworks, and so on. Successful vendors not only will have to be proficient at developing software; they’ll also have to be skilled at marketing and selling it. They’ll have to build their businesses around it.

This is the challenge the major networking vendors confront. It’s why they aren’t leading the SDN charge, and it also is why they are attempting to co-opt and subvert it.

Putting an ONF Conspiracy Theory to Rest

We know that the Open Networking Foundation (ONF) is controlled by the six major service providers that constitute its board of directors.

It is no secret that the ONF is built this way by design. The board members wanted to make sure that they got what they wanted from the ONF’s deliberations, and they felt that existing standards bodies, such as the IETF and IEEE, were gerrymandered and dominated by vendors with self-serving agendas.

The ONF was devised with a different purpose in mind — not to serve the interests of the vendors, but to further the interests of the service-provider community, especially the service providers who sit on the ONF’s board of directors. In their view, conventional networking was a drag on their innovation and business agility, obstructing progress elsewhere in their data centers and IT operations. Whereas compute and storage resources had been virtualized and orchestrated, networking remained a relatively costly and unwieldy fiefdom ruled by “masters of complexity” rummaging manually through an ever-expanding bag of ad-hoc protocols.

Organizing for Clout

Not getting what they desired from their networking vendors, the service providers decided to seize the initiative. Acting on its own,  Google already had done just that, designing and deploying DIY networking gear.

The study of political elites tells us that an organized minority comprising powerful interests can impose its will on a disorganized majority.  In the past, as individual companies, the ONF board members had been unable to counter the agendas of the networking vendors. Together, they hoped to effect the change they desired.

So, we have the ONF, and it’s unlike the IETF and the IEEE in more ways than one. While not a standards body — the ONF describes itself as a “non-profit consortium dedicated to the transformation of networking through the development and standardization of a unique architecture called Software-Defined Networking (SDN)” — there’s no question that the ONF wants to ensure that it defines and delivers SDN according to its own rules. And at its own pace, too, not tied to the product-release schedules of networking vendors.

In certain respects, the ONF is all about a consortium of customers taking control and dictating what they want from the vendor community, which, in this case, should be understood to comprise not only OEM networking vendors, but also ODMs, SDN startups, and purveyors of merchant silicon.

Vehicle of Insurrection?

Just to ensure that its leadership could not be subverted, though, the ONF stipulated that vendors would not be permitted to serve on its board of directors. That means that representatives of Cisco, Juniper, and HP Networking, for example, will never be able to serve on the ONF board.

At least within their self-determined jurisdiction, the ONF’s board members call all the shots. Or do they?

Commenting on my earlier post regarding Cisco’s SDN counterstrategy, a reader, who wished to remain anonymous (Anon4This1), wrote the following:

Regarding this point: “Ultimately, [Cisco] does not control the ONF.”

That was one of the key reasons for the creation of the ONF. That is, there was a sense that existing standards bodies were under the collective thumb of large vendors. ONF was created such that only the ONF board can vote on binding decisions, and no vendors are allowed on the board. Done, right? Ah, well, not so fast. The ONF also has a Technical Advisory Group (TAG). For most decisions, the board actually acts on the recommendations of the TAG. The TAG does not have the same membership restrictions that apply to the ONF board. Indeed, the current chairman of the TAG is none other than influential Cisco honcho, Dave Ward. So if the ONF board listens to the TAG, and the TAG listens to its chairman… Who has more control over the ONF than anyone? https://www.opennetworking.org/about/tag

Board’s Iron Grip

If you follow the link provided by my anonymous commenter, you will find an extensive overview of the ONF’s Technical Advisory Group (TAG). Could the TAG, as constituted, be the tail that wags the ONF dog?

My analysis leads me to a different conclusion. As I see it, the TAG serves at the pleasure of the ONF board of directors, individually and collectively. Nobody serves on the TAG without the express consent of the board of directors. Moreover, “TAG term appointments are annual and the chair position rotates quarterly.” Though Cisco’s Dave Ward serves as the current chair, his term will expire and somebody else will succeed him.

What about the suggestion that the “board actually acts on recommendations of the TAG,” as my commenter asserts? In many instances, that might be true, but the language on the TAG webpage articulates clearly that the TAG is, as its acronym denotes, an advisory body that reports to (and “responds to requests from”) the ONF board of directors. The TAG offers technical guidance and recommendations, but the board makes the ultimate decisions. If the board doesn’t like what it’s getting from TAG members, annual appointments presumably can be allowed to expire and new members can succeed those who leave.

Currently, two networking-gear OEMs are represented on the ONF’s TAG. Cisco is represented by the aforementioned David Ward, and HP is represented by Jean Tourrilhes, an HP Labs researcher in networking and communication who has worked with OpenFlow since 2008. These gentlemen seem to be on the TAG because those who run the ONF believe they can make meaningful contributions to the development of SDN.

No Coup

It’s instructive to note the company affiliations of the other six members serving on the TAG. We find, for instance, Nicira CTO Martin Casado, as well as Verizon’s Dave McDysan, Google’s Amin Vahdat, Microsoft’s Albert Greenberg, Broadcom’s Puneet Agarwal, and Stanford’s Nick McKeown, who also is known as a Nicira co-founder and serves on that company’s board of directors.

If any company has pull, then, on the ONF’s TAG, it would seem to be Nicira Networks, not Cisco Systems. After all, Nicira has two of its principals serving on the ONF’s TAG. Again, though, both gentlemen from Nicira are highly regarded SDN proponents who played critical roles in the advent and development of OpenFlow.

And that’s my point. If you look at who serves on the ONF’s TAG, you can clearly see why they’re in those roles and you can understand why the ONF board members would desire their contributions.

The TAG as a vehicle for an internal coup d’etat at the ONF? That’s one conspiracy theory that I’m definitely not buying.

SDN Controller Ecosystems Critical to Market Success

Software-defined networking (SDN) is a relatively new phenomenon. Consequently, analogies to preceding markets and technologies often are invoked by its proponents to communicate key concepts. One oft-cited analogy involves the server-based solution stack and the nascent SDN stack.

In this comparison, server hardware equates to networking hardware, with the CPU instruction set positioned as analogous to the OpenFlow instruction set. Above those layers, the server operating system is said to be analogous to the SDN controller, which effectively runs a “network operating system.” Above that layer, the analogy extends to similarities between server OS and network OS APIs and to the applications that run atop both stacks.

Analogies and Implications

Let’s consider the comparison of the server operating system to the SDN controller.  While the analogy is apt, it carries implications that prospective early adopters of SDN need to fully understand. As we’ve discussed before, SDN controllers based on OpenFlow today carry no guarantees of interoperability. An application that runs on one controller might not be available (or run) on another controller, just as an application developed for a Windows server might not be available on Linux (and vice versa).

Moreover, we don’t know how difficult it will be to port applications from one OpenFlow-based controller to another. It could be a trivial exercise or an agonizing one. There are many nagging questions, far fewer answers.

Keep in mind that this is an entirely different matter from the question of interoperability between OpenFlow-based controllers and switches. Presuming the OpenFlow standard is adhered to and implemented correctly in all cases, OpenFlow-based controllers on the market today should be able to communicate with OpenFlow-based switches.

But interoperability (or lack thereof) is an unwritten book at the layers where the SDN controller (an NOS akin to a server-based operating system) and the NOS APIs reside. This poses a potential problem for the market development of a viable SDN ecosystem, at least for the enterprise market. (It’s not as much of an issue for the gargantuan service providers that drive the agenda of the Open Networking Foundation; those companies have ample resources and will make their own internal standardization decisions relating to controllers and practically everything else that falls under the SDN rubric.)
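To make the portability concern concrete, consider what even a trivial application looks like when written against one controller’s API. The sketch below uses POX, one of the open-source controllers discussed in the next section. It installs a single rule that turns a connecting switch into a dumb hub, and every name in it (the core object, the event registration, the message classes) is POX-specific; moving even this toy logic to a Java-based controller such as Floodlight or Beacon would be a rewrite, not a port. Treat it as an illustrative sketch, not production code.

```python
# A toy "hub" application for the open-source POX controller.
# Everything here is specific to POX's API; other controllers
# expose entirely different interfaces for the same job.
from pox.core import core
import pox.openflow.libopenflow_01 as of

def _handle_ConnectionUp(event):
    # A switch has connected. Push one flow entry telling it to
    # flood every packet out all ports (i.e., behave like a hub).
    msg = of.ofp_flow_mod()
    msg.actions.append(of.ofp_action_output(port=of.OFPP_FLOOD))
    event.connection.send(msg)

def launch():
    # POX invokes launch() when the module starts, e.g.: ./pox.py my_hub
    core.openflow.addListenerByName("ConnectionUp", _handle_ConnectionUp)
```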

Controller Derby

At SDNCentral, no fewer than seven open-source OpenFlow controllers are listed. Three of those controllers are Java based: Beacon, Floodlight, and Jaxon. The other open-source OpenFlow controllers listed at SDNCentral are FlowER, NodeFlow, POX, and Trema. Additionally, OpenFlow controllers have been developed by several companies, including NEC, Big Switch Networks (which offers a commercial version of the Floodlight controller), and Nicira Networks, which has built on the foundation of the Onix controller.

Interestingly, Google and Ericsson also have based their controllers on Onix. In a blog post last summer, Nicira CTO Martin Casado described Onix as a “general SDN controller” rather than an OpenFlow controller. Casado admitted that he was devising terminology on the fly, but he defined a “general SDN controller” as “one in which the control application is fully decoupled from both the underlying protocol(s) communicating with the switches, and the protocol(s) exchanging state between controller instances (assuming the controllers can run in a cluster).” So, OpenFlow could be part of the picture, but it doesn’t have to be there; another mechanism could substitute for it.

Casado conceded that Onix is the right controller for many environments, but not for others. Wrote Casado:

There have been multiple control applications built on Onix, and it is used in large production deployments in the data centers, as well as in the access and core networks. However, it is probably too heavyweight for smaller networks (the home or small enterprise), and it is certainly too complex to use as a basic research tool.

Horses for Courses

So, there are horses for courses, and there are controllers for applications. Early indications suggest that it will not be a one-size-fits-all world. Nonetheless, at the end of his blog post, Casado expressed the opinion that “standards should be kept away” from controller design, and that the market’s natural-selection process should be allowed to run its course.

Perhaps that is the right prescription. It seems too early for leaden-footed standards bodies, such as the IETF and IEEE, to intervene. Nevertheless, customers will have to be wary. They’ll have to do their research, perform due diligence, and thoroughly understand the strengths, weaknesses, and characteristics of candidate controllers. Without assured controller interoperability, customers that adopt and deploy applications on one controller might have considerable difficulty shifting their investment and their software elsewhere.

Of course, if Google and the other major service providers who rule the roost at the ONF want to expedite matters, they could publicly and aggressively endorse one or two controller platforms as de facto standards. But that’s unlikely, for a variety of reasons. Even if it were to happen, as Casado points out, any controller that proves favorable at large cloud service providers might not be the best choice for enterprises, especially smaller ones.

Opening for Networking’s Old Guard

At this point, it’s not clear how the SDN controller market will shake out. SDN controllers will struggle for sustenance not only against each other, but also against networking’s conventional distributed control planes already on the market. They will contend, too, with so-called hybrid approaches — whereby the data path is jointly controlled by conventional box-based control planes and by server-based controllers — that will be articulated and promoted by the major networking vendors, all of whom are keen to retard “pure SDN’s” advance from the environs of the largest cloud service providers to those of enterprise buyers. (As mentioned previously, the hybrid-control approach also is perceived by the ONF as a transitional necessity for customers seeking to move from their networks as they are constituted today to future SDN architectures.)

In that regard, the big networking powers are fortunate that the ONF’s early mandate is focused primarily, if not exclusively, on the requirements of the large cloud-service providers that populate its board of directors. The ambiguity surrounding controllers and their interoperability (or lack thereof) represents another factor that will dissuade enterprise buyers from taking an early leap of faith into the arms of SDN purveyors.

The faster the SDN market sorts out a controller hierarchy — determining the suitability and market prevalence of certain controllers in specific application environments — the sooner valuable ecosystems will form and enterprises will take serious notice.

For now, though, a shakeout doesn’t appear imminent.

Distributed, Hybrid, Northbound: Key Words in Cisco’s SDN Counterstrategy

When it has broached the topic of software-defined networking (SDN) recently, Cisco has attempted to reframe the discussion within the larger context of programmable networks. In Cisco’s conception of the evolving networking universe, the programmable network encompasses SDN, which in turn envelops OpenFlow.

We know by now that OpenFlow is a relatively small part of SDN. OpenFlow is a protocol that provides for the physical separation of the control and data planes, which heretofore have been combined within a switch or router. As such, OpenFlow enables server-based software (a controller) to determine how packets should be forwarded by network elements. As has been mentioned before, here and elsewhere, mechanisms other than OpenFlow could be used for the same purpose.
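For readers who want that division of labor made concrete, here is a minimal, self-contained sketch in plain Python. It models the architecture just described rather than the OpenFlow wire protocol itself: the switch only matches packets against a flow table, and on a table miss it punts the decision to a logically centralized controller, which computes a rule and pushes it back down. All class and field names are invented for illustration.

```python
# Toy illustration of control/data plane separation. This is not the
# OpenFlow wire protocol, just the division of labor it enables.

class Switch:
    """Data plane only: match packets against a flow table and forward."""
    def __init__(self, name, controller):
        self.name = name
        self.flow_table = {}          # destination address -> output port
        self.controller = controller

    def handle_packet(self, dst):
        if dst in self.flow_table:    # table hit: the fast path
            return self.flow_table[dst]
        # Table miss: punt to the controller ("packet-in"), which
        # decides and installs a rule ("flow-mod").
        return self.controller.packet_in(self, dst)

class Controller:
    """Control plane: all forwarding decisions are made here, in software."""
    def __init__(self, topology):
        self.topology = topology      # (switch name, dst) -> output port

    def packet_in(self, switch, dst):
        port = self.topology[(switch.name, dst)]
        switch.flow_table[dst] = port  # push the rule down to the switch
        return port

# Wire up one controller and one switch, then forward two packets.
ctrl = Controller({("s1", "10.0.0.2"): 2})
s1 = Switch("s1", ctrl)
print(s1.handle_packet("10.0.0.2"))  # miss: controller decides, returns 2
print(s1.handle_packet("10.0.0.2"))  # hit: the switch handles it alone
```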

Logical Outcome

SDN is bigger than OpenFlow. It deals not only with the abstraction of the data plane, but also with higher-layer abstractions, at the control plane and above. The whole idea behind SDN is to put the applications, and the services they deliver, in the driver’s seat, so that the network does not become a costly encumbrance that impedes business agility and operational efficiency. In that sense, Cisco is right to suggest that programmable networks are a logical outcome that can and should result from the rise of SDN.

That said, the devil can always be found in the details, and we should note that Cisco’s definition of SDN, to the extent that it might invoke that acronym rather than one of its own, is at variance with the definition that has been proffered by the Open Networking Foundation (ONF), which is controlled by the world’s largest cloud-service providers rather than by the world’s largest networking vendors. Cisco’s understanding of SDN looks a lot more like conventional networking, with a distributed or hybrid control plane instead of the logically centralized control plane favored by the ONF.

This post isn’t about value judgments, though. I am not here to bash Cisco, or anybody else for that matter, but to understand and interpret Cisco’s motivations as it formulates a counterstrategy to the ONF’s plans.

Bog-Standard Switches

Given the context, then, it’s easy to understand why Cisco favors the retention of the distributed — or, failing that, even a hybrid — control plane. Cisco is the market leader in switches and routers, and it owns a lot of valuable real estate on its customers’ networks.  If OpenFlow succeeds, not only in service-provider networks but also in the enterprise, Cisco is at risk of losing the market dominance it has worked so long and hard to build.

Frankly, there isn’t much differentiation to be achieved in bog-standard OpenFlow switches. If the Googles of the world get their way, the merchant silicon vendors all will support OpenFlow on their chipsets, and industry-standard boxes will be available from a number of ODMs and OEMs. It will be a prototypical buyer’s market, perhaps advancing quickly toward commoditization, and that’s not a prospect that Cisco shareholders and executives wish to entertain.

As Cisco comes to grips with SDN, then, it needs to rediscover the sort of leverage that it had before the advent of the ONF.  After all, if SDN is all about putting applications and other software literally in control of networks composed of industry-standard boxes, then network hardware will suffer a significant margin-squeezing demotion in the value hierarchy of customers.  And Cisco, as we’ve discussed before, develops more than its fair share of software, but remains a company wedded to a hardware-based business model.

Compromise and Accommodation 

Cisco would like to resist and undermine any potential market shift to the ONF’s server-based controllers. Fortunately for Cisco, many within the ONF are willing to acquiesce, at least initially and up to a point. A general consensus seems to have developed about the need for a hybrid control plane, which would accommodate both logically centralized controllers and distributed boxes. The ONF’s braintrust sees this move as a necessary compromise that will facilitate a long-term transition to a server-based model. It seems a logical and rational deduction — there’s a lot of networking gear installed out there that does not support the ONF’s conception of SDN — but it’s an opening for Cisco, nonetheless.

Beyond the issue of physical separation of the data plane and the control plane, Cisco has at least one other card to play. You might have noticed that Cisco representatives have talked a lot during the past couple of months about a “northbound interface” for SDN. As currently constituted, OpenFlow is a “southbound” interface, in that it serves as a mechanism for a controller to program a switch. On a network diagram, that communication flows downward (hence southbound).

In SDN, a northbound interface would go upward, extending from the switch to the control plane and potentially beyond to applications and management/orchestration software. This is a discussion Cisco wants to have with the industry, at the ONF and elsewhere. Whereas southbound interfaces are all about what is done to a switch by external software, the northbound interface is a conduit by which the switch confers value — in the form of information intrinsic to the network — to the higher layers of abstraction.
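Because, as noted below, no standard northbound protocols or APIs yet exist, any concrete example is necessarily speculative. The sketch below imagines the kind of exchange being described: an application reads network state from a controller over REST, then asks for a path that meets its needs, leaving the controller to program the switches southbound. The host, URLs, and JSON fields are all invented for illustration; they correspond to no actual controller’s API.

```python
# Hypothetical sketch of a northbound exchange between an application
# and an SDN controller. No standard northbound API exists; the host,
# URLs, and JSON fields below are invented purely for illustration.
import json
import urllib.request

CONTROLLER = "http://sdn-controller.example.com:8080"  # hypothetical

def get_link_utilization():
    # Northbound "up" direction: the network reports state to the app.
    with urllib.request.urlopen(CONTROLLER + "/networkstate/links") as resp:
        return json.load(resp)

def request_path(src, dst, min_bandwidth_mbps):
    # The app reacts to that state by requesting a path with the
    # properties it needs; the controller would program the switches
    # southbound (e.g., via OpenFlow) to realize it.
    body = json.dumps({"src": src, "dst": dst,
                       "min_bandwidth_mbps": min_bandwidth_mbps}).encode()
    req = urllib.request.Request(CONTROLLER + "/paths", data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    links = get_link_utilization()
    congested = [l for l in links if l.get("utilization", 0) > 0.8]
    if congested:
        request_path("10.0.0.1", "10.0.0.2", min_bandwidth_mbps=100)
```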

Northbound Traffic

For now, the ONF has chosen not to define standard protocols or APIs for northbound interfaces, which could run from the networking devices up to the control plane and to higher layers of abstraction. Cisco, as the vendor with the largest installed base of gear in customer networks, finds itself in a logical position to play a role in helping to define those northbound interfaces.

Ideally, if programmable networks and SDN fulfill their potential, we’ll see the development of a virtuous feedback loop at the highest layers of abstraction, with software programming an underlying virtualized network and the network sending back state and other data that dynamically allows applications to perform even better.

Therefore, the northbound interface will be an important element in the future of SDN. Cisco hopes to leverage it, but more for the sustenance of its own business model than for the furtherance of the ONF’s objectives. Cisco holds some interesting cards, but it should be careful not to overplay them. Ultimately, it does not control the ONF.

As the SDN discourse elevates beyond OpenFlow, watch the traffic in the northbound lanes.