Category Archives: Google

Amazon-RIM: Summer Reunion?

Think back to last December, just before the holidays. You might recall a Reuters report, quoting “people with knowledge of the situation,” claiming that Research in Motion (RIM) rejected takeover propositions from Amazon.com and others.

The report wasn’t clear on whether the informal discussions resulted in any talk of price between Amazon and RIM, but apparently no formal offer was made. RIM, then still under the stewardship of former co-CEOs Jim Balsillie and Mike Lazaridis, reportedly preferred to remain independent and to address its challenges alone.

I Know What You Discussed Last Summer

Since then, a lot has happened. When the Reuters report was published — on December 20, 2011 — RIM’s market value had plunged 77 percent during the previous year, sitting then at about $6.8 billion. Today, RIM’s market capitalization is $3.7 billion. What’s more, the company now has Thorsten Heins as its CEO, not Balsillie and Lazaridis, who were adamantly opposed to selling the company. We also have seen recent reports that IBM approached RIM regarding a potential acquisition of the Waterloo, Ontario-based company’s enterprise business, and rumors have surfaced that RIM might sell its handset business to Amazon or Facebook.

Meanwhile, RIM’s prospects for long-term success aren’t any brighter than they were last winter, and activist shareholders, not interested in a protracted turnaround effort, continue to lobby for a sale of the company.

As for Amazon, it is said to be on the cusp of entering the smartphone market, presumably using a forked version of Android, which is what it runs on the Kindle Fire tablet. From the vantage point of the boardroom at Amazon, that might not be a sustainable long-term plan. Google is looking more like an Amazon competitor, and the future trajectory of Android is clouded by Google’s strategic considerations and by legal imbroglios relating to patents. Those presumably were among the reasons Amazon approached RIM last December.

Uneasy Bedfellows

It’s no secret that Amazon and Google are uneasy Android bedfellows. As Eric Jackson wrote just after the Reuters story hit the wires:

Amazon has never been a big supporter of Google’s Android OS for its Kindle. And Google’s never been keen on promoting Amazon as part of the Android ecosystem. It seems that both companies know this is just a matter of time before each leaves the other.

Yes, there’s some question as to how much value inheres in RIM’s patents. Estimates on their worth are all over the map. Nevertheless, RIM’s QNX mobile-operating system could look compelling to Amazon. With QNX and with RIM’s patents, Amazon would have something more than a contingency plan against any strategic machinations by Google or any potential litigiousness by Apple (or others).  The foregoing case, of course, rests on the assumption that QNX, rechristened BlackBerry 10, is as far along as RIM claims. It also rests on the assumption that Amazon wants a mobile platform all its own.

It was last summer when Amazon reportedly made its informal approach to RIM. It would not be surprising to learn that a reprise of discussions occurred this summer. RIM might be more disposed to consider a formal offer this time around.

Inevitability of Virtualized Infrastructure

As a previous post, Infrastructure Virtualization Versus Converged Infrastructure, attests, I strongly believe that virtualization is leading us to a future in which underlying hardware becomes largely undifferentiated and interchangeable. Applications and orchestration will reside in software riding atop the virtualization layer, which effectively will function as an abstraction buffer above hardware infrastructure. The latter eventually will include hardware for compute, networking, and storage.

Vendors that ride hardware-based business models will have trouble adapting to this new reality. Many of these companies have hordes of software developers and software engineers, but they inextricably intertwine their software and hardware as a matter of business practice, selling the latter as proprietary boxes that often cannot interoperate with, or be swapped out for, competing hardware. It’s classic hardware-based vendor lock-in, and it’s been with us for many years. This applies to vendors that sell all three main types of hardware infrastructure, and to those that sell them tied together as converged infrastructure.

Loosening a Tenacious Grip

Proprietary data-center hardware would appear to be running on borrowed time, though it will not disappear overnight. Its grip will be especially tenacious in the enterprise, though the pull of the cloud eventually will weaken its hold. Proprietary compute infrastructure will be the first to succumb, but networking and storage will fall, too. The economic and operational logic powering the transition is inexorable, so it’s a question of when, not whether, it will happen.

While CapEx cost savings are an obvious benefit, operational flexibility (shifting workloads with agility and less effort) and OpEx savings also are factors. Infrastructure hardware will be cheaper, as well as easier and less costly to run. Pools of industry-standard hardware will be reallocated on demand to serve the needs of application workloads. Data-center customers no longer will be constrained by the hardware-release schedules of their previous vendors of choice. Customers also will be able to take advantage of the latest industry-standard chipsets, which will power hardware with improved energy efficiency and better cooling characteristics.

In servers, and now in storage, Facebook’s Open Compute Project (OCP) has sought to expedite the move to off-the-shelf hardware. Last week at OSCON, Frank Frankovsky, a vice president at Facebook and the chairman and president of the OCP, rallied the open-source troops by arguing that proprietary x86 systems are “gratuitously differentiated.” He called for all hardware-design specifications to be open.

OCP as Competitive Cudgel

That would benefit Facebook, which launched OCP as a vehicle to help it lower data-center CapEx and OpEx, boost operational flexibility, and — last but not least — mitigate a competitive advantage held by Google, which had a massive head start in rationalizing and fine-tuning its data centers and IT infrastructure. In fact, Google cloaks its IT operations in extreme secrecy, believing that its practices and technologies deliver substantial competitive advantage over its main rivals, including Facebook. The latter must agree, because the animating idea behind Open Compute is to create a market, demand and supply, for commodity server hardware that will reduce or eliminate Google’s edge.

Some have wondered why Google hasn’t joined OCP, but the answer should be obvious. Google believes it has cracked the infrastructure code, and it is therefore disinclined to share its insights and best practices with its competitors. Google isn’t a fan of proprietary vanity hardware — it’s been designing its own gear, then going to server and network ODMs, for some time now — but Google feels it has nothing to gain, and much to lose, from opening its kimono to the OCP crowd.

With networking, though, Google felt it needed a little help from its friends — as well as from its enemies. That explains why it allied with Facebook and other cloud-service providers in the Open Networking Foundation (ONF), which I have written about here on many occasions. The goal of the ONF, as with OCP, is to slip the proprietary shackles of hardware vendors, whose gear functions as an impediment to operational agility as well as a cost that could be reduced through SDN-style network virtualization. Google’s communitarian approach to addressing the network-virtualization riddle suggests that it believes it cannot achieve the desired outcome on its own.

Cracking the Nut

Whereas compute hardware was well on its way to standardization, networking hardware, until the ONF, was akin to a vertically integrated mainframe system, replete with a proliferating number of both proprietary and industry-standard protocols. Networking is a bigger, and tougher, nut to crack.

But crack it will, first at the big cloud-service providers, then, as the cloud gains momentum, at enterprises.

PS: I will post something tomorrow about VMware’s just-announced acquisition of Nicira, which is big news no matter how you slice it.  I wrote the above post before I learned of the acquisition.

Cisco’s SDN Response: Mission Accomplished, but Long Battle Ahead

In concluding my last post, I said I would write a subsequent note on whether Cisco achieved its objectives in its rejoinder to software-defined networking (SDN) at the Cisco Live conference last week in San Diego.

Cisco is the largest player in network infrastructure, and its words carry considerable weight. When Cisco talks, its customers (and the industry ecosystem) listen. As such, we witnessed extensive coverage of the company’s Cisco Open Network Environment (Cisco ONE) proclamations last week.

Really, what Cisco announced with Cisco ONE was relatively modest and wholly unsurprising. What was surprising was the broad spectrum of reactions to what was effectively a positioning statement from the networking market’s leading vendor.

Mission Accomplished . . . For Now

And that positioning statement wasn’t so much about SDN, or about the switch-control protocol OpenFlow, but about something more specific to Cisco, whose installed base of customers, especially in the enterprise, is increasingly curious about SDN. Indeed, Cisco’s response to SDN should be seen, first and foremost, as a response to its customers. One could construe it as a cynical gesture to “freeze the market,” but that would not do full justice to the rationale. Instead, let’s just say that Cisco’s customers wanted to know how their vendor of choice would respond to SDN, and Cisco was more than willing to oblige.

In that regard, it was mission accomplished. Cisco gave its enterprise customers enough reason to put off a serious dalliance with SDN, at least for the foreseeable future (which isn’t that long). But that’s all it did. I didn’t see a vision from Cisco. What I saw was an effective counterpunch — but definitely not a knockout — against a long-term threat to its core market.

Cisco achieved its objective partly by offering its own take on network programmability, replete with a heavy emphasis on APIs and northbound interfaces; but it also did it partly by bashing OpenFlow, the open  protocol that effects physical separation of the network-element control and forwarding planes.

Conflating OpenFlow and SDN

In its criticism of OpenFlow, Cisco sought to conflate the protocol with the larger SDN architecture. As I and many others have noted repeatedly, OpenFlow is not SDN;  the two are not inseparable. It is possible to deliver an SDN architecture without OpenFlow. Even when OpenFlow is included, it’s a small part of the overall picture.  SDN is more than a mechanism by which a physically separate control plane directs packet forwarding on a switch.

If you listened to Cisco last week, however, you would have gotten the distinct impression that OpenFlow and SDN are indistinguishable, and that all that’s happening in SDN is a southbound conversation between a server-based software controller and OpenFlow-capable switches. That’s not true, but the Open Networking Foundation (ONF), the custodian of SDN and OpenFlow, has left an opening that Cisco is only too happy to exploit.

The fact is, the cloud service-provider principals steering the ONF see SDN playing a much bigger role than Cisco would have you believe. OpenFlow is a starting point. It is a means to, well, another means — because SDN is an enabler, too. What SDN enables is network virtualization and network programmability, though not in the way Cisco would like its customers to get there.

Cisco Knows SDN Is More Than OpenFlow

To illustrate my point, I refer you to the relatively crude ONF SDN architectural stack showcased in a white paper, Software-Defined Networking: The New Norm for Networks. If you consult the diagram in that document, you will see that OpenFlow is the connective tissue between the controller and the switch — what ONF’s Dan Pitt has described as an “open interface to packet forwarding” — but you will also see that there are abstraction layers that reside well above OpenFlow.

If you want an even more detailed look at a “modern” SDN architecture, you can consult a presentation given by Cisco’s David Meyer earlier this year. That presentation features physical hardware at the base, with SDN components in the middle. These SDN components include the “forwarding interface abstraction” represented by OpenFlow, a network operating system (NOS) running on a controller (server), a “nypervisor” (network hypervisor), and a global management abstraction that interfaces with the control logic of higher-layer application (control) programs.

So, Cisco clearly knows that SDN comprises more than OpenFlow, but, in its statements last week at Cisco Live, the company preferred to use the protocol as a strawman in its arguments for Cisco-centric network programmability. You can’t blame Cisco, though. It has customers to serve — and to keep in the revenue- and profit-generating fold — and an enterprise-networking franchise to protect.

Mind the Gap

But why did the ONF leave this gap for Cisco to fill? It’s partly because the ONF isn’t overly concerned with the enterprise and partly because the ONF sees OpenFlow as an open, essential precondition for the higher, richer layers of the SDN architectural model.

Without the physical separation of the control plane from the forwarding plane, after all, some of the ONF’s service-provider constituency might not have been able to break free of vendor hegemony in their networks. What’s more, they wouldn’t be able to set the stage for low-priced, ODM-manufactured networking hardware built with merchant silicon.

As you can imagine, that is not the sort of change that Cisco can get behind, much less lead. Therefore, Cisco breaks out the brickbats and goes in hot pursuit of OpenFlow, which it then portrays as deficient for the purposes of far-reaching, north-and-south network programmability.

Exiting (Not Exciting) Plumbing

Make no mistake, though. The ONF has a vision, and it extends well beyond OpenFlow. At a conference in Garmisch, Germany, earlier this year, Dan Pitt, the ONF’s executive director, offered a presentation called “A Revolution in Networking and Standards,” and made the following comments:

“I think networking is going to become an integral part of computing in a way that makes it less important, because it’s less of a problem. It’s not the black sheep any longer. And the same tools you use to create an IT computing infrastructure or virtualization, performance, and policy will flow through to the network component of that as well, without special effort.

I think enterprises are going to be exiting technology – or exiting plumbing. They are not going to care about the plumbing, whether it’s their networks or the cloud networks that increasingly meet their needs, and the cloud services. They’re going to say, here’s the function or the feature I want for my business goal, and you make it happen. And somebody worries about the plumbing, but not as many people who worry about plumbing today. And if you’ve got this virtualized view, you don’t have to look at the plumbing. . . .

The operators are gradually becoming software companies and internet companies. They are bulking up on those skills. They want to be able to add those services and features themselves instead of relying on the vendors, and doing it quickly for their customers. It gives opportunities to operators that they didn’t have before of operating more diverse services and experimenting at low cost with new services.”

No Cartwheels

Again, this is not a vision that would have John Chambers doing cartwheels across the expansive Cisco campus.

While the ONF is making plans to address the northbound interfaces that are a major element in Cisco’s network programmability, it hasn’t done so yet. Even when it does, the ONF is unlikely to standardize higher-layer APIs, at least in the near term. Instead, those APIs will be associated with the controllers that get deployed in customer networks. In other words, the ONF will let the market decide.

On that tenet, Cisco can agree with the ONF. It, too, would like the market to decide, especially since its market presence — the investments customers have made in its routers and switches, and in its protocols and management tools — towers imperiously over the meager real estate being claimed in the nascent SDN market.

With all that Cisco network infrastructure deployed in customer networks, Cisco believes it’s in a commanding position to set the terms for how the network will deliver software intelligence to programmers of applications and management systems. Theoretically, that’s true, but the challenge for Cisco will be in successfully engaging a programming constituency that isn’t its core audience. Can Cisco do it? It will be a stretch.

Do They Get It?

All the while, the ONF and its service-provider backers will be advancing and promoting the SDN model and the network virtualization and programmability that accompany it. The question for the ONF is not whether its movers and shakers understand programmers — it’s pretty clear that Google, Facebook, Microsoft, and Yahoo are familiar with programmers — but whether the ONF understands and cares enough about the enterprise to make that market a priority in its technology roadmap.

If the ONF leaves the enterprise to the dictates of the Internet Engineering Task Force (IETF) and Institute of Electrical and Electronics Engineers (IEEE), Cisco is likely to maintain its enterprise dominance with an approach that provides some benefits of network programmability without the need for server-based controllers.

Meanwhile, as Tom Nolle, president of CIMI Corporation, has pointed out, Cisco ONE also serves as a challenge to Cisco’s conventional networking competitors, which are devising their own answers to SDN.

But that is a different thread, and this one is too long already.

Why Established Networking Vendors Aren’t Leading SDN Charge

Expressing equal parts exasperation and incredulity, Greg Ferro wonders why industry-leading networking vendors aren’t taking the innovative initiative in offering compelling strategies for software-defined networking (SDN).

The answer seems clear enough.

Although applications will be critical to the long-term commercial success of SDN, Google and the other movers and shakers that direct the affairs of the Open Networking Foundation (ONF) originally were drawn to SDN because they were frustrated with the lack of responsiveness and innovation from established vendors. As a result, they devised a networking model that not only separated the control and data planes of network elements, but that also, in the words of Google’s Amin Vahdat, separated the “evolution path for (network) hardware and software.”

Two Paths

Until now, those evolutionary paths have been converged and constrained inside the largely proprietary boxes of networking vendors. Google and its confreres in the ONF perceived that state of affairs as the yoke of vendor oppression. The network, slow to evolve and innovate, was getting in the way of progress. All the combustible ingredients of a cloud-service provider insurrection had cohered. Google, taking the lead in organizing the other major service providers under the rubric of the ONF, lit the fuse.

The effects of the explosion are just being felt, and the reverberations will echo for some time. The big service providers, and perhaps many smaller ones, are gravitating away from the orbit of networking’s ancien regime. The question now is whether enterprises will follow. At some point, that probably will happen, but how and when it will unfold are less clear. Enterprises, unlike the board members of the ONF, are too diverse and numerous to organize in pursuit of common interests. Accordingly, vendors are still able to set the enterprise agenda.

But enterprises will notice the benefits that SDN is capable of conferring, and the ONF’s overlords will seek to cultivate and sustain an ecosystem that can deliver parallel hardware and software innovation. Google, for example, has indicated that while it develops its own networking hardware today, it would be amenable to buying OpenFlow switches from the vendor community. Those switches, likely to carry lower margins and prices than the gear sold by the major networking vendors, will probably come from ODMs using merchant silicon from Broadcom, Marvell, Fulcrum (Intel), and others.

Money’s in the Software

The major networking vendors are saying that the cleavage of the control and data planes is not a big deal, that it is neither necessary nor a critical requirement for innovation and network programmability. Perhaps there is some merit to their arguments, but there’s no question that the separation of the control and data planes is not in their business interests. If some of their assertions have merit, they also are self-serving.

Cisco, as we’ve discussed before, might be able to develop software, but its business model is predicated on the sale of routers and switches. Effectively, it would have to remake itself comprehensively to become a vendor of server-based controllers (software) and the applications that run on them. A proprietary hardware box, whether a server or switch, isn’t what the ONF wants.

If the ONF’s SDN vision prevails, the money is in software: server-based controllers, applications, management/orchestration frameworks, and so on. Successful vendors not only will have to be proficient at developing software; they’ll also have to be skilled at marketing and selling it. They’ll have to build their businesses around it.

This is the challenge the major networking vendors confront. It’s why they aren’t leading the SDN charge, and it also is why they are attempting to co-opt and subvert it.

Putting an ONF Conspiracy Theory to Rest

We know that the Open Networking Foundation (ONF) is controlled by the six major service providers that constitute its board of directors.

It is no secret that the ONF is built this way by design. The board members wanted to make sure that they got what they wanted from the ONF’s deliberations, and they felt that existing standards bodies, such as the IETF and IEEE, were gerrymandered and dominated by vendors with self-serving agendas.

The ONF was devised with a different purpose in mind — not to serve the interests of the vendors, but to further the interests of the service-provider community, especially the service providers who sit on the ONF’s board of directors. In their view, conventional networking was a drag on their innovation and business agility, obstructing progress elsewhere in their data centers and IT operations. Whereas compute and storage resources had been virtualized and orchestrated, networking remained a relatively costly and unwieldy fiefdom ruled by “masters of complexity” rummaging manually through an ever-expanding bag of ad-hoc protocols.

Organizing for Clout

Not getting what they desired from their networking vendors, the service providers decided to seize the initiative. Acting on its own,  Google already had done just that, designing and deploying DIY networking gear.

The study of political elites tells us that an organized minority comprising powerful interests can impose its will on a disorganized majority.  In the past, as individual companies, the ONF board members had been unable to counter the agendas of the networking vendors. Together, they hoped to effect the change they desired.

So, we have the ONF, and it’s unlike the IETF and the IEEE in more ways than one. While not a standards body — the ONF describes itself as a “non-profit consortium dedicated to the transformation of networking through the development and standardization of a unique architecture called Software-Defined Networking (SDN)” — there’s no question that the ONF wants to ensure that it defines and delivers SDN according to its own rules. And at its own pace, too, not tied to the product-release schedules of networking vendors.

In certain respects, the ONF is all about a consortium of customers taking control and dictating what it wants from the vendor community, which, in this case, should be understood to comprise not only OEM networking vendors, but also ODMs, SDN startups, and purveyors of merchant silicon.

Vehicle of Insurrection?

Just to ensure that its leadership could not be subverted, though, the ONF stipulated that vendors would not be permitted to serve on its board of directors. That means that representatives of Cisco, Juniper, and HP Networking, for example, will never be able to serve on the ONF board.

At least within their self-determined jurisdiction, the ONF’s board members call all the shots. Or do they?

Commenting on my earlier post regarding Cisco’s SDN counterstrategy, a reader, who wished to remain anonymous (Anon4This1), wrote the following:

Regarding this point: “Ultimately, [Cisco] does not control the ONF.”

That was one of the key reasons for the creation of the ONF. That is, there was a sense that existing standards bodies were under the collective thumb of large vendors. ONF was created such that only the ONF board can vote on binding decisions, and no vendors are allowed on the board. Done, right? Ah, well, not so fast. The ONF also has a Technical Advisory Group (TAG). For most decisions, the board actually acts on the recommendations of the TAG. The TAG does not have the same membership restrictions that apply to the ONF board. Indeed, the current chairman of the TAG is none other than influential Cisco honcho, Dave Ward. So if the ONF board listens to the TAG, and the TAG listens to its chairman… Who has more control over the ONF than anyone? https://www.opennetworking.org/about/tag

Board’s Iron Grip

If you follow the link provided by my anonymous commenter, you will find an extensive overview of the ONF’s Technical Advisory Group (TAG). Could the TAG, as constituted, be the tail that wags the ONF dog?

My analysis leads me to a different conclusion. As I see it, the TAG serves at the pleasure of the ONF board of directors, individually and collectively. Nobody serves on the TAG without the express consent of the board of directors. Moreover, “TAG term appointments are annual and the chair position rotates quarterly.” Cisco’s Dave Ward serves as the current chair, but his term will expire and somebody else will succeed him.

What about the suggestion that the “board actually acts on recommendations of the TAG,” as my commenter asserts? In many instances, that might be true, but the form and substance of the language on the TAG webpage make clear that the TAG is, as its acronym denotes, an advisory body that reports to (and “responds to requests from”) the ONF board of directors. The TAG offers technical guidance and recommendations, but the board makes the ultimate decisions. If the board doesn’t like what it’s getting from TAG members, annual appointments presumably can be allowed to expire and new members can succeed those who leave.

Currently, two networking-gear OEMs are represented on the ONF’s TAG. Cisco is represented by the aforementioned David Ward, and HP is represented by Jean Tourrilhes, an HP Labs researcher in networking and communication who has worked with OpenFlow since 2008. These gentlemen seem to be on the TAG because those who run the ONF believe they can make meaningful contributions to the development of SDN.

No Coup

It’s instructive to note the company affiliations of the other six members serving on the TAG. We find, for instance, Nicira CTO Martin Casado, as well as Verizon’s Dave McDysan, Google’s Amin Vahdat, Microsoft’s Albert Greenberg, Broadcom’s Puneet Agarwal, and Stanford’s Nick McKeown, who also is known as a Nicira co-founder and serves on that company’s board of directors.

If any company has pull, then, on the ONF’s TAG, it would seem to be Nicira Networks, not Cisco Systems. After all, Nicira has two of its corporate directors serving on the ONF’s TAG. Again, though, both gentlemen from Nicira are highly regarded and esteemed SDN proponents, who played critical roles in the advent and development of OpenFlow.

And that’s my point. If you look at who serves on the ONF’s TAG, you can clearly see why they’re in those roles and you can understand why the ONF board members would desire their contributions.

The TAG as a vehicle for an internal coup d’etat at the ONF? That’s one conspiracy theory that I’m definitely not buying.

SDN Controller Ecosystems Critical to Market Success

Software-defined networking (SDN) is a relatively new phenomenon. Consequently, analogies to preceding markets and technologies often are invoked by its proponents to communicate key concepts. One oft-cited analogy involves the server-based solution stack and the nascent SDN stack.

In this comparison, server hardware equates to networking hardware, with the CPU instruction set positioned as analogous to the OpenFlow instruction set. Above those layers, the server operating system is said to be analogous to the SDN controller, which effectively runs a “network operating system.” Above that layer, the analogy extends to similarities between server OS and network OS APIs and to the applications that run atop both stacks.

Analogies and Implications

Let’s consider the comparison of the server operating system to the SDN controller.  While the analogy is apt, it carries implications that prospective early adopters of SDN need to fully understand. As we’ve discussed before, SDN controllers based on OpenFlow today carry no guarantees of interoperability. An application that runs on one controller might not be available (or run) on another controller, just as an application developed for a Windows server might not be available on Linux (and vice versa).

Moreover, we don’t know how difficult it will be to port applications from one OpenFlow-based controller to another. It could be a trivial exercise or an agonizing one. There are many nagging questions, far fewer answers.
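
To make the porting concern concrete, consider the rough sketch below. Both controller APIs are entirely hypothetical (invented here for illustration, not drawn from any shipping product), but they show how the same forwarding intent might have to be rewritten when moving an application from one controller to another.

```python
# Hypothetical controller A: flow-centric API (names invented for illustration).
class ControllerA:
    def add_flow(self, dpid: int, match: dict, actions: list) -> None:
        print(f"[A] dpid={dpid} match={match} actions={actions}")

# Hypothetical controller B: policy-centric API with different abstractions.
class ControllerB:
    def create_policy(self, name: str) -> dict:
        return {"name": name, "rules": []}

    def attach_rule(self, policy: dict, selector: str, action: str) -> None:
        policy["rules"].append((selector, action))
        print(f"[B] policy={policy['name']} rules={policy['rules']}")

# The same intent, "steer web traffic out port 2," written twice.
def steer_web_traffic_a(ctrl: ControllerA) -> None:
    ctrl.add_flow(dpid=1, match={"tcp_dst": 80}, actions=["output:2"])

def steer_web_traffic_b(ctrl: ControllerB) -> None:
    policy = ctrl.create_policy("web-out")
    ctrl.attach_rule(policy, selector="tcp.dst == 80", action="forward(port=2)")

steer_web_traffic_a(ControllerA())
steer_web_traffic_b(ControllerB())
```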

Keep in mind that this is an entirely different matter from the question of interoperability between OpenFlow-based controllers and switches. Presuming the OpenFlow standard is adhered to and implemented correctly in all cases, OpenFlow-based controllers on the market today should be able to communicate with OpenFlow-based switches.

But interoperability (or lack thereof) is an unwritten book at the layers where the SDN controller (an NOS akin to a server-based operating system) and the NOS APIs reside. This poses a potential problem for the market development of a viable SDN ecosystem, at least for the enterprise market. (It’s not as much of an issue for the gargantuan service providers that drive the agenda of the Open Networking Foundation; those companies have ample resources and will make their own internal standardization decisions relating to controllers and practically everything else that falls under the SDN rubric.)

Controller Derby

At SDNCentral, no fewer than seven open-source OpenFlow controllers are listed. Three of those controllers are Java-based: Beacon, Floodlight, and Jaxon. The other open-source OpenFlow controllers listed at SDNCentral are FlowER, NodeFlow, POX, and Trema. Additionally, OpenFlow controllers have been developed by several companies, including NEC, Big Switch Networks (which offers a commercial version of the Floodlight controller), and Nicira Networks, which has built on the foundation of the Onix controller.

Interestingly, Google and Ericsson also have based their controllers on Onix. In a blog post last summer, Nicira CTO Martin Casado described Onix as a “general SDN controller” rather than an OpenFlow controller. Casado admitted that he was devising terminology on the fly, but he defined a “general SDN controller” as “one in which the control application is fully decoupled from both the underlying protocol(s) communicating with the switches, and the protocol(s) exchanging state between controller instances (assuming the controllers can run in a cluster).” So, OpenFlow could be part of the picture, but it doesn’t have to be there; another mechanism could substitute for it.
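
A minimal sketch of the decoupling Casado describes might look like the following. It assumes nothing about how Onix is actually built; the class and method names are hypothetical, and the point is only that the control application talks to an abstract southbound interface, with OpenFlow as just one possible implementation behind it.

```python
from abc import ABC, abstractmethod

class SouthboundDriver(ABC):
    """Whatever protocol actually programs the switches lives behind this."""
    @abstractmethod
    def push_forwarding_state(self, switch_id: str, rules: list) -> None: ...

class OpenFlowDriver(SouthboundDriver):
    def push_forwarding_state(self, switch_id, rules):
        print(f"[openflow] flow-mod to {switch_id}: {rules}")

class VendorApiDriver(SouthboundDriver):
    """Some other mechanism could substitute for OpenFlow entirely."""
    def push_forwarding_state(self, switch_id, rules):
        print(f"[vendor-api] configuring {switch_id}: {rules}")

class ControlApp:
    """The control application never sees the wire protocol."""
    def __init__(self, driver: SouthboundDriver):
        self.driver = driver

    def isolate_tenant(self, switch_id, tenant_vlan):
        rules = [{"match": {"vlan": tenant_vlan}, "actions": ["output:tenant_port"]}]
        self.driver.push_forwarding_state(switch_id, rules)

# The same application logic runs unchanged over either driver.
ControlApp(OpenFlowDriver()).isolate_tenant("tor-3", tenant_vlan=42)
ControlApp(VendorApiDriver()).isolate_tenant("tor-3", tenant_vlan=42)
```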

Casado conceded that Onix is the right controller for many environments, but not for others. Wrote Casado:

There have been multiple control applications built on Onix, and it is used in large production deployments in the data centers, as well as in the access and core networks. However, it is probably too heavyweight for smaller networks (the home or small enterprise), and it is certainly too complex to use as a basic research tool.

Horses for Courses

So, there are horses for courses, and there are controllers for applications. Early indications suggest that it will not be a one-size-fits-all world. Nonetheless, at the end of his blog post, Casado expressed the opinion that “standards should be kept away” from controller design, and that the market’s natural-selection process should be allowed to run its course.

Perhaps that is the right prescription. It seems too early for leaden-footed standards bodies, such as the IETF and IEEE, to intervene. Nevertheless, customers will have to be wary. They’ll have to do their research, perform due diligence, and thoroughly understand the strengths, weaknesses, and characteristics of candidate controllers. Without assured controller interoperability, customers that adopt and deploy applications on one controller might have considerable difficulty shifting their investment and their software elsewhere.

Of course, if Google and the other major service providers who rule the roost at ONF want to expedite matters, they could publicly and aggressively endorse one or two controller platforms as de facto standards. But that’s probably unlikely, for a variety of reasons. Even if it were to happen, as Casado points out, any controller that proves favorable at large cloud service providers might not be the best choice for enterprises, especially smaller ones.

Opening for Networking’s Old Guard

At this point, it’s not clear how the SDN controller market will shake out. SDN controllers will struggle for sustenance not only against each other, but also against networking’s conventional distributed control planes already on the market, as well as so-called hybrid approaches — whereby the data path is jointly controlled by the conventional box-based control planes as well as by server-based controllers — that will be articulated and promoted by the major networking vendors, all of whom are keen to retard “pure SDN’s” advance from the environs of the largest cloud service providers to those of enterprise buyers. (As mentioned previously, the hybrid-control approach also is perceived by the ONF as a transitional necessity for customers seeking to move from their networks as they are constituted today to future SDN architectures.)

In that regard, the big networking powers are fortunate that the ONF’s early mandate is focused primarily, if not exclusively, on the requirements of the large cloud-service providers that populate its board of directors. The ambiguity surrounding controllers and their interoperability (or lack thereof) represents another factor that will dissuade enterprise buyers from taking an early leap of faith into the arms of SDN purveyors.

The faster the SDN market sorts out a controller hierarchy — determining the suitability and market prevalence of certain controllers in specific application environments — the sooner valuable ecosystems will form and enterprises will take serious notice.

For now, though, a shakeout doesn’t appear imminent.

Distributed, Hybrid, Northbound: Key Words in Cisco’s SDN Counterstrategy

When it has broached the topic of software-defined networking (SDN) recently, Cisco has attempted to reframe the discussion within the larger context of programmable networks. In Cisco’s conception of the evolving networking universe, the programmable network encompasses SDN, which in turn envelops OpenFlow.

We know by now that OpenFlow is a relatively small part of SDN. OpenFlow is a protocol that provides for the physical separation of the control and data planes, which heretofore have been combined within a switch or router. As such, OpenFlow enables server-based software (a controller) to determine how packets should be forwarded by network elements. As has been mentioned before, here and elsewhere, mechanisms other than OpenFlow could be used for the same purpose.
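
As a rough illustration of that mechanism, here is a toy Python model of a controller programming match-action flow entries on a switch whose only job is to apply the table it is given. The classes and field names are invented for this sketch and are not taken from the OpenFlow specification.

```python
from dataclasses import dataclass

@dataclass
class FlowRule:
    """One match-action entry: if a packet matches, apply the actions."""
    match: dict      # e.g. {"in_port": 1, "dst_ip": "10.0.0.5"}
    actions: list    # e.g. ["output:2"] or ["drop"]
    priority: int = 0

class Switch:
    """Forwarding element: it only holds and applies the table it is given."""
    def __init__(self, name):
        self.name = name
        self.flow_table = []

    def install(self, rule: FlowRule):
        self.flow_table.append(rule)
        self.flow_table.sort(key=lambda r: r.priority, reverse=True)

    def forward(self, packet: dict):
        for rule in self.flow_table:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.actions
        return ["send_to_controller"]   # table miss: punt to the control plane

class Controller:
    """Server-based control plane: forwarding decisions are made here."""
    def __init__(self, switches):
        self.switches = switches

    def program_path(self, dst_ip, out_port):
        for sw in self.switches:
            sw.install(FlowRule(match={"dst_ip": dst_ip},
                                actions=[f"output:{out_port}"],
                                priority=10))

# The controller, not the switch, decides how traffic is forwarded.
edge = Switch("edge-1")
Controller([edge]).program_path("10.0.0.5", out_port=2)
print(edge.forward({"dst_ip": "10.0.0.5"}))   # ['output:2']
print(edge.forward({"dst_ip": "10.0.0.9"}))   # ['send_to_controller']
```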

Logical Outcome

SDN is bigger than OpenFlow. It deals not only with the abstraction of the data plane, but also with higher-layer abstractions, at the control plane and above. The whole idea behind SDN is to put the applications, and the services they deliver, in the driver’s seat, so that the network does not become a costly encumbrance that impedes business agility and operational efficiency. In that sense, Cisco is right to suggest that programmable networks are a logical outcome that can and should result from the rise of SDN.

That said, the devil can always be found in the details, and we should note that Cisco’s definition of SDN, to the extent that it might invoke that acronym rather than one of its own, is at variance with the definition that has been proffered by the Open Networking Foundation (ONF), which is controlled by the world’s largest cloud-service providers rather than by the world’s largest networking vendors. Cisco’s understanding of SDN looks a lot more like conventional networking, with a distributed or hybrid control plane instead of the logically centralized control plane favored by the ONF.

This post isn’t about value judgments, though. I am not here to bash Cisco, or anybody else for that matter, but to understand and interpret Cisco’s motivations as it formulates a counterstrategy to the ONF’s plans.

Bog-Standard Switches

Given the context, then, it’s easy to understand why Cisco favors the retention of the distributed — or, failing that, even a hybrid — control plane. Cisco is the market leader in switches and routers, and it owns a lot of valuable real estate on its customers’ networks.  If OpenFlow succeeds, not only in service-provider networks but also in the enterprise, Cisco is at risk of losing the market dominance it has worked so long and hard to build.

Frankly, there isn’t much differentiation to be achieved in bog-standard OpenFlow switches. If the Googles of the world get their way, the merchant silicon vendors all will support OpenFlow on their chipsets, and industry-standard boxes will be available from a number of ODMs and OEMs. It will be a prototypical buyer’s market, perhaps advancing quickly toward commoditization, and that’s not a prospect that Cisco shareholders and executives wish to entertain.

As Cisco comes to grips with SDN, then, it needs to rediscover the sort of leverage that it had before the advent of the ONF.  After all, if SDN is all about putting applications and other software literally in control of networks composed of industry-standard boxes, then network hardware will suffer a significant margin-squeezing demotion in the value hierarchy of customers.  And Cisco, as we’ve discussed before, develops more than its fair share of software, but remains a company wedded to a hardware-based business model.

Compromise and Accommodation 

Cisco would like to resist and undermine any potential market shift to the ONF’s server-based controllers. Fortunately for Cisco, many within the ONF are willing to acquiesce, at least initially and up to a point. A general consensus seems to have developed about the need for a hybrid control plane, which would accommodate both logically centralized controllers and distributed boxes. The ONF’s braintrust sees this move as a necessary compromise that will facilitate a long-term transition to a server-based model. It seems a logical and rational deduction — there’s a lot of networking gear installed out there that does not support the ONF’s conception of SDN — but it’s an opening for Cisco, nonetheless.

Beyond the issue of physical separation of the data plane and the control plane, Cisco has at least one other card to play. You might have noticed that Cisco representatives have talked a lot during the past couple of months about a “northbound interface” for SDN. As currently constituted, OpenFlow is a “southbound” interface, in that it serves as a mechanism for a controller to program a switch. On a network diagram, that communication flows downward (hence southbound).

In SDN, a northbound interface would go upward, extending from the switch to the control plane and potentially beyond to applications and management/orchestration software. This is a discussion Cisco wants to have with the industry, at the ONF and elsewhere. Whereas southbound interfaces are all about what is done to a switch by external software, the northbound interface is a conduit by which the switch confers value — in the form of information intrinsic to the network — to the higher layers of abstraction.

Northbound Traffic

For now, the ONF has chosen not to define standard protocols or APIs for northbound interfaces, which could run from the networking devices up to the control plane and to higher layers of abstraction. Cisco, as the vendor with the largest installed base of gear in customer networks, finds itself in a logical position to play a role in helping to define those northbound interfaces.

Ideally, if programmable networks and SDN fulfill their potential, we’ll see the development of a virtuous feedback loop at the highest layers of abstraction, with software programming an underlying virtualized network and the network sending back state and other data that dynamically allows applications to perform even better.
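
A hedged sketch of that feedback loop, using an entirely hypothetical northbound API (no real controller exposes exactly this interface), might look like this: the application reads network state exposed by the controller and then asks the controller to program the network accordingly.

```python
class ControllerNorthboundAPI:
    """Hypothetical northbound surface: exposes network state upward."""
    def __init__(self):
        self._link_utilization = {"path-a": 0.85, "path-b": 0.30}  # fraction busy

    def get_link_utilization(self) -> dict:
        return dict(self._link_utilization)

    def request_path(self, src: str, dst: str, min_headroom: float) -> str:
        # A real controller would translate this southbound into flow rules;
        # here we simply pick the least-utilized path with enough headroom.
        candidates = {p: u for p, u in self._link_utilization.items()
                      if 1.0 - u >= min_headroom}
        return min(candidates, key=candidates.get) if candidates else "best-effort"

class BulkTransferApp:
    """Application layer: consumes network state, asks for what it needs."""
    def __init__(self, nb_api: ControllerNorthboundAPI):
        self.nb = nb_api

    def schedule_copy(self, src, dst):
        utilization = self.nb.get_link_utilization()              # network informs app
        path = self.nb.request_path(src, dst, min_headroom=0.5)   # app programs network
        print(f"utilization seen: {utilization}; copying {src}->{dst} via {path}")

BulkTransferApp(ControllerNorthboundAPI()).schedule_copy("dc-east", "dc-west")
```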

Therefore, the northbound interface will be an important element in the future of SDN. Cisco hopes to leverage it, but more for the sustenance of its own business model than for the furtherance of the ONF’s objectives. Cisco holds some interesting cards, but it should be careful not to overplay them. Ultimately, it does not control the ONF.

As the SDN discourse elevates beyond OpenFlow, watch the traffic in the northbound lanes.

Why Google Isn’t A Networking Vendor

Invariably trenchant and always worth reading, Ivan Pepelnjak today explores what he believes Google is doing with OpenFlow. As it turns out, Pepelnjak posits that Google is doing more with other technologies than it is with OpenFlow, seemingly building a modern routing platform and a traffic-engineering application deserving universal envy and admiration.

In assessing what Google is doing, Pepelnjak would seem to get it right, as he usually does, but I would like to offer modest commentary on a couple minor points. Let’s start with his assessment of how Google is using OpenFlow:

“Google is using OpenFlow between controller and adjacent chassis switches because (like every other vendor) they need a protocol between the control plane and forwarding planes, and they decided to use an already-documented one instead of inventing their own (the extra OpenFlow hype could also persuade hardware vendors to implement more OpenFlow capabilities in their next-generation chipsets).”

OpenFlow: Just A Piece of the Puzzle

First off, Pepelnjak is essentially right. I’m not going to quarrel with his central point, which is that Google adopted OpenFlow as a communication protocol between (and that separates) the control plane and the forwarding plane. That’s OpenFlow’s purpose, its raison d’être, so it’s not surprising that Google would use it that way. As Chris Rock might say, that’s what OpenFlow is supposed to do.

Larger claims made on behalf of OpenFlow are not its fault. Subsequently, Pepelnjak states that OpenFlow is but a small piece of the networking puzzle at Google, and he’s right there, too. I don’t think it’s possible for OpenFlow to be a bigger piece. As a protocol between the control and forwarding planes, OpenFlow is what it is.

Beyond that, though, Pepelnjak refers to Google as a “vendor,” which I find odd.

Not a Networking Vendor

In many ways, Google is a vendor. It’s a cloud vendor, it’s an advertising vendor, it’s a SaaS vendor, and so on. But, in this particular context, Pepelnjak seems to be classifying Google as a networking vendor. That would be an incorrect designation, and here’s why: Vendors sell things, they vend. Google doesn’t sell the homegrown networking hardware and software that it implements internally. It’s doing it only for itself, not as a business proposition that would involve it proffering the technology to customers. As such, it should not be tossed into the same networking-vendor bucket as a Cisco, a Juniper, or an HP.

In fact, Google is going the roll-your-own route with its network infrastructure precisely because it couldn’t get what it wanted from networking vendors. In that respect, it is the anti-vendor. Google and the other gargantuan cloud-service providers who steer the Open Networking Foundation (ONF) promulgated software-defined networking (SDN) and espoused OpenFlow because they wanted network infrastructure to be different from the conventional approaches advanced by networking vendors and the traditional networking industry.

Whatever else one might think of the ONF, it’s difficult not to conclude that it represents an instance of customers (in this case, cloud-service providers) attempting to wrest strategic control from vendors to set a technological agenda. Google, a networking vendor? Only if one misunderstands the origins and purpose of ONF.

Creating a Market

Nonetheless, Google might have a hidden agenda here, and Pepelnjak touches on it when he writes parenthetically that “the extra OpenFlow hype could also persuade hardware vendors to implement more OpenFlow capabilities in their next-generation chipsets.”

Well, yes. Just because Google has chosen to roll its own and doesn’t like what the networking industry is selling today, it doesn’t necessarily mean that it has closed the door to buying from vendors in the future, presuming said vendors jump on the ONF bandwagon and start developing the sorts of products Google wants. Google doesn’t want to disclose the particulars of its network infrastructure, which it views as a source of competitive advantage and differentiation, but it is not averse to hyping OpenFlow in a bid to spur the supply side of the market to get with the SDN program.

Later in his post, Pepelnjak notes that Google used “standard protocols (BGP and IS-IS) like everyone else and their traffic engineering implementation (and probably the northbound API) is proprietary. How is that different (from the openness perspective) from networks built from Juniper’s or Cisco’s gear?”

Critical Distinction

Again, my point is that Google is not a vendor. It is a customer building network technologies for its own use. By the very nature of that implicit (non)-transaction, the technologies in question will be proprietary. They’re not going anywhere other than Google’s data-center network. Google owns them, and it is in full control of defining them and releasing them on a schedule that suits Google’s internal objectives.

It’s rather different for vendors, who profit — if they’re doing it right — from the commercial sale of products and technologies to customers. There might be value in proprietary products and technologies in that context, but customers need to ensure that the proprietary value outweighs the proprietary risks, typically represented by vendor lock-in and upgrade cycles dictated by the vendor’s product-release schedule.

Google is not a vendor, and neither are the other companies driving the agenda of the ONF. I think it’s critical to make that distinction in the context of SDN and, to a lesser extent, OpenFlow.

What the Battle for “SDN” Reveals

As Mike Fratto notes in an excellent piece over at Network Computing, “software-defined networking” has become a semantical battleground, with the term getting pushed and pulled in various directions.

For good reason, Fratto was concerned that the proliferating definitions of software-defined networking (SDN) were in danger of shearing the term of all meaning. He compared what was happening to SDN to what happened previously to terms such as cloud computing, and he opined that once a term means anything, it means nothing.

Setting Record Straight

Not wanting to be a passive observer of such linguistic nihilism, Fratto decided to set the record straight. He rightly observes that software-defined networking (SDN), as we understand it today, derives its provenance from the Open Networking Foundation (ONF). As such, the ONF’s definition of SDN should be the one that holds sway.

Citing an ONF white paper, “Software-Defined Networking:  The New Norm for Networks,” Fratto notes that, properly understood, SDN emphasizes three key features:

  • Separation of the control plane from the data plane
  • A centralized controller and view of the network
  • Programmability of the network by external applications

Why the Fuss?

I agree that the ONF’s definition is the one that should be authoritative and, well, definitive. What other vendors are doing in areas such as network virtualization and network programmability might be interesting — and perhaps even commendable and valuable to their customers — but unless what they are doing meets the ONF criteria, it should not qualify as SDN.  Furthermore, if what they’re doing doesn’t qualify as SDN, they should call it something else and explain its architectural principles and value clearly. An ambiguous, perhaps even disingenuous, linkage with SDN ought to be avoided.

What Fratto does not explore is why certain parties are attempting to muddy the SDN waters. In my experience, when vendors contest terminology, it suggests the linguistic real estate in question is uncommonly valuable, either strategically or monetarily. I posit that SDN is both.

Like “cloud” before it, everybody seemingly recognizes that SDN has struck a resounding chord. There’s hype attached to SDN, sure, but it also has genuine appeal and has generated considerable interest. As the composition of the ONF’s board of directors has suggested, and as the growing number of cloud service-provider deployments attest, SDN is not a passing fad. At least in the large cloud shops, it already has practical utility and business value.

The Value of Words

That value is likely to grow over time, and, while the enterprise will be a tough nut to crack for more than one reason, it’s certainly conceivable that SDN eventually will find favor among at least certain enterprise demographics. The timeline for that outcome is not imminent, and, as I’ve written previously, Cisco isn’t about to “do a Nortel” and hold a going-out-of-business sale. Nonetheless, the auguries suggest that the ONF’s SDN will be with us for a long time and represents a significant business threat to networking’s status quo.

In this context, language really is power. If entrenched interests — such as the status quo of networking — don’t like an emerging trend, one way they can attempt to derail it is by co-opting it or subverting it. After all, it’s only an emerging trend, not yet entrenched, so therefore its terminology is nascent, too. If, as a major vendor with industry clout, you can change the meaning of the terminology, or make it so ambiguous that practically anything qualifies for inclusion, you can reassert control and dilute the threat.

In the past, this gambit — change the meaning, change the game — has accrued a decent track record. It works to impede change and to give entrenched interests more time to plot effective countermeasures.

Different This Time

What’s different this time — and Fratto’s piece provides corroborating evidence — is the existence of the ONF, a strong, customer-driven consortium that is (in its own words) “dedicated to the transformation of networking through the development and standardization of a unique architecture called Software-Defined Networking (SDN), which brings direct software programmability to networks worldwide. The mission of the Foundation is to commercialize and promote SDN and the underlying technologies as a disruptive approach to networking that will change how virtually every company with a network operates.”

If the ONF hadn’t existed, if it hadn’t already established an incontrovertible definition of SDN, the old “change the meaning, change the game” play might have worked.

But, as Fratto’s piece illustrates, it probably won’t work now.

Debating SDN, OpenFlow, and Cisco as a Software Company

Greg Ferro writes exceptionally well, is technologically knowledgeable, provides incisive commentary, and invariably makes cogent arguments over at EtherealMind.  Having met him, I can also report that he’s a great guy. So, it is with some surprise that I find myself responding critically to his latest blog post on OpenFlow and SDN.

Let’s start with that particular conjunction of terms. Despite occasional suggestions to the contrary, SDN and OpenFlow are not inseparable or interchangeable. OpenFlow is a protocol, a mechanism that allows a server, known in SDN parlance as a controller, to interact with and program flow tables (for packet forwarding) on switches. It facilitates the separation of the control plane from the data plane in some SDN networks.

But OpenFlow is not SDN, which can be achieved with or without OpenFlow.  In fact, Nicira Networks recently announced two SDN customer deployments of its Network Virtualization Platform (NVP) — at DreamHost and at Rackspace, respectively — and you won’t find mention of OpenFlow in either press release, though OpenStack and its Quantum networking project receive prominent billing. (I’ll be writing more about the Nicira deployments soon.)

A Protocol in the Big Picture 

My point is not to diminish or disparage OpenFlow, which I think can and will be used gainfully in a number of SDN deployments. My point is that we have to be clear that the bigger picture of SDN is not interchangeable with the lower-level functionality of OpenFlow.

In that respect, Ferro is absolutely correct when he says that software-defined networking, and specifically SDN controller and application software, are “where the money is.” He conflates it with OpenFlow — which may or may not be involved, as we already have established — but his larger point is valid.  SDN, at the controller and above, is where all the big changes to the networking model, and to the industry itself, will occur.

Ferro also likely is correct in his assertion that OpenFlow, in and of itself, will not enable “a choice of using low cost network equipment instead of the expensive networking equipment that we use today.” In the near term, at least, I don’t see major prospects for change on that front as long as backward compatibility, interoperability with a bulging bag of networking protocols, and the agendas of the networking old guard are at play.

Cisco as Software Company

However, I think Ferro is wrong when he says that the market-leading vendors in switching and routing, including Cisco and Juniper, are software companies. Before you jump down my throat, presuming that’s what you intend to do, allow me to explain.

As Ferro says, Cisco and Juniper, among others, have placed increasing emphasis on the software features and functionality of their products. I have no objection there. But Ferro pushes his argument too far and suggests that the “networking business today is mostly a software business.”  It’s definitely heading in that direction, but Cisco, for one, isn’t there yet and probably won’t be for some time.  The key word, by the way, is “business.”

Cisco is developing more software these days, and it is placing more emphasis on software features and functionality, but what it overwhelmingly markets and sells to its customers are switches, routers, and other hardware appliances. Yes, those devices contain software, but Cisco sells them as hardware boxes, with box-oriented pricing and box-oriented channel programs, just as it has always done. Nitpickers will note that Cisco also has collaboration and video software, which it actually sells like software, but that remains an exception to the rule.

Talks Like a Hardware Company, Walks Like a Hardware Company

For the most part, in its interactions with its customers and the marketplace in general, Cisco still thinks and acts like a hardware vendor, software proliferation notwithstanding. It might have more software than ever in its products, but Cisco is in the hardware business.

In that respect, Cisco faces the same fundamental challenge that server vendors such as HP, Dell, and — yes — Cisco confront as they address a market that will be radically transformed by the rise of cloud services and ODM-hardware-buying cloud service providers. Can it think, figuratively and literally, outside the box? Just because Cisco develops more software than it did before doesn’t mean the answer is yes, nor does it signify that Cisco has transformed itself into a software vendor.

Let’s look, for example, at Cisco’s approach to SDN. Does anybody really believe that Cisco, with its ongoing attachment to ASIC-based hardware differentiation, will move toward a software-based delivery model that places the primary value on server-based controller software rather than on switches and routers? It’s just not going to happen, because  it’s not what Cisco does or how it operates.

Missing the Signs 

And that brings us to my next objection. In arguing that Cisco and others have followed the market and provided the software their customers want, Ferro writes the following:

“Billion dollar companies don’t usually miss the obvious and have moved to enhance their software to provide customer value.”

Where to begin? Well, billion-dollar companies frequently have missed the obvious and gotten it horribly wrong, often when at least some individuals within the companies in question knew that their employer was getting it horribly wrong. That’s partly because past and present successes can sow the seeds of future failure. As Clayton M. Christensen argues in his classic book The Innovator’s Dilemma, industry leaders can have their vision blinkered by past successes, which prevent them from detecting disruptive innovations. In other cases, former market leaders get complacent or fail to acknowledge the seriousness of a competitive threat until it is too late.

The list of billion-dollar technology companies that have missed the obvious and failed spectacularly, sometimes disappearing into oblivion, is too long to enumerate here, but some names spring readily to mind. Right at the top (or bottom) of our list of industry ignominy, we find Nortel Networks. Once a company valued at nearly $400 billion, Nortel exists today only as pieces that were masticated and digested by other companies.

Is Cisco’s Decline Inevitable?

Today, we see a similarly disconcerting situation unfolding at Research In Motion (RIM), where many within the company saw the threat posed by Apple and by the emerging BYOD phenomenon but failed to do anything about it. Going further back into the annals of computing history, we can adduce examples such as Novell and Digital Equipment Corporation, along with the raft of other minicomputer vendors that perished from the planet after the rise of the PC and client-server computing. Some employees within those companies might even have foreseen their firms’ dark fates, but the organizations in which they toiled were unable to rescue themselves.

They were all huge successes, billion-dollar companies, but, in the face of radical shifts in industry and market dynamics, they couldn’t change who and what they were. The industry graveyard is full of the carcasses of companies that were once enormously successful.

Am I saying this is what will happen to Cisco in an era of software-defined networking? No, I’m not prepared to make that bet. Cisco should be able to adapt and adjust better than the aforementioned companies were able to do, but it’s not a given. Just because Cisco is dominant in the networking industry today doesn’t mean that it will be dominant forever. As the old investment disclaimer goes, past performance does not guarantee future results. What’s more, Cisco has shown a fallibility of late that was not nearly as apparent in its boom years more than a decade ago.

Early Days, Promising Future

Finally, I’m not sure that Ferro is correct when he says the Open Networking Foundation’s (ONF) board members and its biggest service providers, including Google, will achieve CapEx but not OpEx savings with SDN. We really don’t know whether these companies are deriving OpEx savings, because they’re keeping what they do with their operations and infrastructure highly confidential. Suffice it to say, they see compelling reasons to move away from buying their networking gear from the industry’s leading vendors, and they see similarly compelling reasons to embrace SDN.

Ferro ends his piece with two statements, the first of which I agree with wholeheartedly:

“That is the future of Software Defined Networking – better, dynamic, flexible and business focussed networking. But probably not much cheaper in the long run.”

As for that last statement, I believe there is insufficient evidence on which to render a verdict. As we’ve noted before, these are early days for SDN.

Direct from ODMs: The Hardware Complement to SDN

Subsequent to my return from Network Field Day 3, I read an interesting article published by Wired that dealt with the Internet giants’ shift toward buying networking gear from original design manufacturers (ODMs) rather than from brand-name OEMs such as Cisco, HP Networking, Juniper, and Dell’s Force10 Networks.

The development isn’t new — Andrew Schmitt, now an analyst at Infonetics, wrote about Google designing its own 10-GbE switches a few years ago — but the story confirmed that the trend is gaining momentum and drawing a crowd, which includes brokers and custom suppliers as well as increasing numbers of buyers.

In the Wired article, Google, Microsoft, Amazon, and Facebook were explicitly cited as web giants buying their switches directly from ODMs based in Taiwan and China. These same buyers previously procured their servers directly from ODMs, circumventing brand-name server vendors such as HP and Dell.  What they’re now doing with networking hardware, then, is a variation on an established theme.

The ONF Connection

Just as with servers, the web titans have their reasons for going directly to ODMs for their networking hardware. Sometimes they want a simpler switch than the brand-name networking vendors offer, and sometimes they want certain functionality that networking vendors do not provide in their commercial products. Most often, though, they’re looking for cheap commodity switches based on merchant silicon, which has become more than capable of handling the requirements the big service providers have in mind.

Software is part of the picture, too, but the Wired story didn’t touch on it. Look at the names of the Internet companies that have gone shopping for ODM switches: Google, Microsoft, Facebook, and Amazon.

What do those companies have in common besides their status as Internet giants and their purchases of copious amounts of networking gear? Yes, it’s true that they’re also cloud service providers. But there’s something else, too.

With the exception of Amazon, the other three are board members in good standing of the Open Networking Foundation (ONF). What’s more, even though Amazon is not an ONF board member (or even a member), it shares the ONF’s philosophical outlook: making networking infrastructure more flexible and responsive, less complex and costly, and generally getting it out of the way of critical data-center processes.

Pica8 and Cumulus

So, yes, software-defined networking (SDN) is the software complement to cloud-service providers’ direct procurement of networking hardware from ODMs.  In the ONF’s conception of SDN, the server-based controller maps application-driven traffic flows to switches running OpenFlow or some other mechanism that provides interaction between the controller and the switch. Therefore, switches for SDN environments don’t need to be as smart as conventional “vertically integrated” switches that combine packet forwarding and the control plane in the same box.

This isn’t just guesswork on my part. Two companies are cited in the Wired article as “brokers” and “arms dealers” between switch buyers and ODM suppliers. Pica8 is one, and Cumulus Networks is the other.

If you visit the Pica8 website, you’ll see that the company’s goal is “to commoditize the network industry and to make the network platforms easy to program, robust to operate, and low-cost to procure.” The company says it is “committed to providing high-quality open software with commoditized switches to break the current performance/price barrier of the network industry.” The company’s latest switch, the Pronto 3920, uses Broadcom’s Trident+ chipset, which Pica8 says can be found in other ToR switches, including the Cisco Nexus 3064, Force10 S4810, IBM G8264, Arista 7050S, and Juniper QFX3500.

That “high-quality open software” to which Pica8 refers? It features XORP open-source routing code, support for Open vSwitch and OpenFlow, and Linux. Pica8 also is a relatively longstanding member of ONF.
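To see what that combination buys a purchaser of bare-bones hardware, here is a minimal sketch, assuming a commodity switch running Open vSwitch on Linux; the bridge name, port names, and controller address are illustrative placeholders of mine, not details from Pica8 or the Wired article. The local configuration amounts to little more than pointing the switch’s OpenFlow agent at an external, server-based controller.

```python
# Sketch: delegate a commodity switch's control plane to an external
# SDN controller by configuring Open vSwitch via ovs-vsctl.
import subprocess

def ovs(*args):
    """Run an ovs-vsctl command and raise if it fails."""
    cmd = ["ovs-vsctl", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

CONTROLLER = "tcp:192.0.2.10:6653"   # hypothetical controller address

# Create a bridge, attach two front-panel ports, and hand control of the
# bridge to the external controller.
ovs("add-br", "br0")
ovs("add-port", "br0", "eth1")
ovs("add-port", "br0", "eth2")
ovs("set-controller", "br0", CONTROLLER)

# In "secure" fail mode the switch makes no forwarding decisions of its
# own if the controller is unreachable, rather than falling back to
# standalone learning-switch behavior.
ovs("set-fail-mode", "br0", "secure")
```

With the fail mode set to secure, the box itself decides nothing; the intelligence lives off-box in the controller, which is exactly why the underlying hardware can be an inexpensive ODM switch rather than a vertically integrated one.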

Hardware and Software Pedigrees

Cumulus Networks is the other switch arms dealer mentioned in the Wired article. There hasn’t been much public disclosure about Cumulus, and there isn’t much to see on the company’s website. From background information on the professional pasts of the company’s six principals, though, a picture emerges of a company that would be capable of putting together bespoke switch offerings, sourced directly from ODMs, much like those Pica8 delivers.

The co-founders of Cumulus are J.R. Rivers, quoted extensively in the Wired article, and Nolan Leake. A perusal of their LinkedIn profiles reveals that both describe Cumulus as “satisfying the networking needs of large Internet service clusters with high-performance, cost-effective networking equipment.”

Both men also worked at Cisco spin-in venture Nuova Systems, where Rivers served as vice president of systems architecture and Leake served in the “Office of the CTO.” Rivers has a hardware heritage, whereas Leake has a software background, beginning his career building a Java IDE and holding senior positions at VMware and 3Leaf Networks before joining Nuova.

Some of you might recall that 3Leaf’s assets were nearly acquired by Huawei, before the Chinese networking company withdrew its offer after meeting with strenuous objections from the Committee on Foreign Investment in the United States (CFIUS). It was just the latest setback for Huawei in its recurring and unsuccessful attempts to acquire American assets. 3Com, anyone?

For the record, Leake’s LinkedIn profile shows that his work at 3Leaf entailed leading “the development of a distributed virtual machine monitor that leveraged a ccNUMA ASIC to run multiple large (many-core) single system image OSes on a Infiniband-connected cluster of commodity x86 nodes.”

For Companies Not Named Google

Also at Cumulus is Shrijeet Mukherjee, who serves as the startup company’s vice president of software engineering. He was at Nuova, too, and worked at Cisco right up until early this year. At Cisco, Mukherjee focused on “virtualization-acceleration technologies, low-latency Ethernet solutions, Fibre Channel over Ethernet (FCoE), virtual switching, and data center networking technologies.” He boasts of having led the team that delivered the Cisco Virtualized Interface Card (vNIC) for the UCS server platform.

Another Nuova alumnus at Cumulus is Scott Feldman, who was employed at Cisco until May of last year. Among other projects, he served in a leading role on development of “Linux/ESX drivers for Cisco’s UCS vNIC.” (Do all these former Nuova guys at Cumulus realize that Cisco reportedly is offering big-bucks inducements to those who join its latest spin-in venture, Insieme?)

Before moving to Nuova and then to Cisco, J.R. Rivers was involved with Google’s in-house switch design. In the Wired article, Rivers explains the rationale behind Google’s switch design and the company’s evolving relationship with ODMs. Google originally bought switches designed by the ODMs, but now it designs its own switches and has the ODMs manufacture them to its specifications, much as Apple designs its iPads and iPhones and then contracts with Foxconn for assembly.

Rivers notes, not without reason, that Google is an unusual company. It can easily design its own switches, but other service providers possess neither the engineering expertise nor the desire to pursue that option. Nonetheless, they still might want the cost savings that accrue from buying bare-bones switches directly from an ODM. This is the market Cumulus wishes to serve.

Enterprise/Cloud-Service Provider Split

Quoting Rivers from the Wired story:

“We’ve been working for the last year on opening up a supply chain for traditional ODMs who want to sell the hardware on the open market for whoever wants to buy. For the buyers, there can be some very meaningful cost savings. Companies like Cisco and Force10 are just buying from these same ODMs and marking things up. Now, you can go directly to the people who manufacture it.”

It has appeal, but only for large service providers, and perhaps also for very large companies that run prodigious server farms, such as some financial-services concerns. There’s no imminent danger of irrelevance for Cisco, Juniper, HP, or Dell, who still have the vast enterprise market and even many service providers to serve.

But this is a trend worth watching, illustrating the growing chasm between the DIY hardware and software mentality of the biggest cloud shops and the more conventional approach to networking taken by enterprises.

Cheriton Sees Opportunity in Infrastructure

When I wrote my first post on this blog, way back in 2006, I assumed that technology infrastructure largely was a spent force. I expected incremental enhancements, gradual advances, but I didn’t anticipate another major boom or a significant disruption of the established order in what once had been a vibrant technology space.

While the technology industry as a whole can suffer from blinkered, willful optimism, perhaps I was afflicted by a different condition entirely. I might have been too pessimistic, too gloomy, dispirited by the technology downturn of the early 2000s and the lack of a meaningful, sustained recovery in the years that immediately followed.

By the way, when I refer to technology, I’m not talking about social networking such as Facebook. I understand that there’s a lot of technology behind the scenes at Facebook, but the customer-facing “social” phenomenon leaves me cold. I never did see the point of Facebook from a user’s perspective, though I understood how it could serve as an unprecedented data-mining machine for advertisers.

Opportunity Renewed

Fortunately, though, I was wrong about the decline and fall of infrastructure. It took a while, but a new era of infrastructure has arisen, based on virtualization, orchestration, and automation. Technological advances that we could only dream about more than a decade ago are now within reach. In the networking realm, software-defined networking (SDN) is enabling comparatively outmoded network infrastructure to catch up with compute and, to a lesser degree, storage infrastructure as the promise of an application-driven, programmable data center comes into clearer view.

Suddenly, at long last, there’s new opportunity in infrastructure.

You don’t have to take my word for it, either. People who’ve designed and developed industry-leading technologies espouse the same opinion. Some of them are billionaires, and they’ve backed their convictions with substantial sums of money, investing in technologies and companies with clear mandates to remake IT infrastructure.

Outrageously Wealthy Canuck

One of those people is David Cheriton, a billionaire who wears many hats. He is Professor of Computer Science and Electrical Engineering at Stanford University, where he researches networking and distributed systems, and he also serves as a co-founder and chief scientist at Arista Networks. He’s also an investor in startup companies. Back in 1998, one early-stage company in which he invested, along with Arista co-founder Andy Bechtolsheim, was Google.  The duo made a similar early investment in VMware, so they’ve done okay.

Born in Vancouver, raised in Edmonton, Alberta, and ranked 37th on a Wikipedia list of “richest Canadians”** — Forbes ranks him 21st among outrageously wealthy Canucks  — Cheriton recently spoke about innovation and entrepreneurship at a Churchill Club event in Silicon Valley. The event was co-hosted and organized by the Hua Yuan Science and Technology Association and also featured Ken Xie, who founded NetScreen (acquired by Juniper Networks in 2004) and is now president and CEO of unified-threat-management/firewall vendor Fortinet, a company he also founded.

In addition to his apparent knack as an investor, Cheriton has considerable firsthand experience as an entrepreneur and an innovator. Before he and Bechtolsheim combined forces at Arista Networks, they founded Granite Systems, a Gigabit-Ethernet switching concern that was acquired by Cisco in 1996 for about $220 million in stock, back when shares of Cisco were continuously on the rise. Subsequently, after the Google investment, Bechtolsheim and Cheriton teamed up again to found Kealia, which specialized in server technology based on AMD’s Opteron microprocessor. That company was acquired by Sun Microsystems in 2004, providing technology included in the Sun Fire X4500 storage product.

Room for Improvement

In 2005, Cheriton and Bechtolsheim followed up with Arista, then called Arastra, and its 10-GbE switching technology, which brings us to the approximate present and back to something Cheriton said at the Churchill Club event late last month. Noting that people tend to become preoccupied with the latest developments in social networking and mobility, Cheriton expressed his enthusiasm for infrastructure, as an investment vehicle as well as an area in which he has an abiding technical interest. As quoted in a BusinessWeek article, Cheriton said: “I think there is an opportunity to go back and say, ‘Gee, I think there’s lot of room for improvement in the infrastructure.’”

Reinforcing that point, he noted that technology infrastructure today is predicated on ideas that are about 30 years old. The network was the place to start the infrastructure refurbishment, Cheriton believed, and Arista Networks grew from that conviction.

But Cheriton hasn’t stopped there. He also founded a company called Optumsoft, about which not much is known. On its website, Optumsoft is described as an early-stage startup company “taking distributed computing and distributed software development mainstream.” Quoting from the website:

Recent advancements in multi-core computing systems, coupled with the ever increasing functional and performance requirements of software has created an exciting market opportunity for addressing the programmatic and architectural issues involved in modern software development. Optumsoft is addressing this growing market with a novel technology approach that is transparent, scalable, and portable, resulting in significant improvement to the development and maintenance of distributed/parallel structured software systems. Early production usage by commercial clients has validated the technology and value proposition.

Last fall, an anonymous source suggested on Quora that what Optumsoft was building related to “how to structure object-oriented RPC in a way that makes it easy to build robust systems.  The technology behind Arista’s EOS is based on some of these ideas, as was software structure at a previous startup, Kealia.  The technology includes an IDL and a C++ runtime, similar to what you’d get using CORBA.”

Nebula and Tintri

On the investment side, Cheriton and Bechtolsheim have put money into Nebula, which has venture-capital backing from Kleiner Perkins Caufield & Byers and Highland Capital Partners. Built on OpenStack, the Nebula Enterprise Cloud Appliance is designed to provision and configure flexible, scalable cloud-computing infrastructure. Although it doesn’t say so on the Nebula website, previous reports indicated that Arista’s networking technology is included in the Nebula appliance.

According to the BusinessWeek article,  Cheriton also has a stake in Tintri, co-founded by Kieran Harty and Mark Gritter. Harty was EVP of R&D at VMware for seven years, and Gritter was one of the first of Cheriton’s employees at Kealia. They’ve assembled a PhD-laden engineering team that has developed a virtual-machine-aware storage appliance designed for virtualized environments, which the company says have been underserved by older storage technology that apparently contributes to “VM stall.”

Another early-stage investment that Cheriton made was in Aster Data Systems, a purveyor of a massively parallel DBMS that runs on clustered commodity servers. Already a minority owner of Aster, Teradata bought the 89% of the company it didn’t own for $263 million last year.

Cheriton has made bets on infrastructure, and he’ll likely make others. It’s an encouraging sign for those of us who gravitate to that part of the industry.

(**No, I am not on the list, but thanks for asking.)