Category Archives: Open Networking Foundation

Questioning SDN Cynicism

A few months ago, I noticed that the networking cognoscenti were becoming jaded about software-defined networking (SDN). To be fair, the networking cognoscenti can skew toward disgruntlement, so it was no surprise to see this restive bunch cast a jaundiced eye toward networking’s greatest, latest hope.

I consider myself among the skeptical and wary, always cognizant that vendors can be inclined to advance a self-serving agenda that sometimes is designed to satisfy their own near-term interests over the long-term objectives of their customers. That works particularly well when the vendors can trick the customers into believing that they’re actually looking out for them. As our ancient forebears knew, caveat emptor was more than a catchphrase.

Asking Why

All of which brings me to a puzzling aspect of the current disaffection with SDN, expressed most recently in a highly readable and strongly recommended post by Ethan Banks of PacketPushers fame. My question, which I put to Banks and to everyone else for whom SDN has become an annoyance, is simple: Are you really upset with SDN, or are you actually frustrated with the way the term has been used and abused by the vendor community?

It’s not an academic or an idle question.

One should remember that SDN, properly defined and understood, is a creation of a customer-centric consortium, the Open Networking Foundation (ONF), not a marketing or technical construct espoused by a given networking vendor or even by a group of vendors. If the term “SDN” is being bastardized and demeaned, it is not the ONF that is doing it. More directly, if the term is being cheapened, the devaluation is occurring at the hands of vendors.

But why? There are at least two possibilities. One is that certain networking vendors want to exploit the positive connotations, the afterglow, that surrounded SDN. According to this theory, the damage they’re inflicting on the SDN brand is unintentional and ironic: They wanted to ride SDN’s relatively pristine coattails, not pull it into a seedy gutter of disrepute. I would be inclined to accept this theory if vendors adopted SDN definitions that accorded with that of the ONF, but, for the most part, that’s not what’s happened.

Agents Provocateurs: Back in Action 

Instead, vendors typically recast SDN in forms that correspond with product roadmaps and company-specific strategic objectives.  The result has been market confusion and cynicism, understandably so. When a term is spun to mean practically anything to anyone, it risks losing its specificity and its relevance.

Allow me to suggest that at least a few vendors would be neither inconvenienced nor unduly troubled to see SDN’s identity fractured and splintered like a broken mirror. It would not be the first time that fear, uncertainty, and doubt were deployed as agents provocateurs in a commercial context.

Nonetheless, coming back to my question above, I would counsel that we think carefully about whether our annoyance is really with SDN or with the way the term “SDN” is being manipulated and distorted by the vendor community.

As always, it is helpful not only to diagnose what is happening, but also to understand why it is happening.

Northbound API: The Standardization Debate

Over the last several months, a number of extremely informative articles and posts have been written about the significance of the northbound API (or NB API) within the context of software-defined networking (SDN).

We’ve seen two posts on the topic at SDN Central, one written in April by David Lenrow and another written by Roy Chua in early July.  Brent Salisbury, on his blog NetworkStatic, offered an excellent exegesis on the northbound API in June, and he touched on the topic again in a subsequent post in July that dealt with how he believes SDN APIs will evolve. At GigaOm, Stacey Higginbotham also has written on the subject, as have I both here and at TechTarget’s SearchNetworking.

Recently, Greg Ferro, of EtherealMind renown, provided an instructive overview on SDN APIs, opining that it is “unlikely that Northbound APIs will never standardise but I’m not aware of any initiatives in this area.”

I don’t know whether northbound APIs, as Greg suggests, will never standardize, but I do know that most knowledgeable observers (including the aforementioned parties) believe that there should be no headlong rush toward standardization. The consensus is that SDN’s northbound APIs should be given an opportunity to flourish first, and that the market ultimately should vote with its feet and with its wallets.

Too Early?

That said, there are those who believe standards bodies should play a role, even at this nascent stage, in defining SDN’s northbound API.  In fact, the matter was raised yesterday on a discussion thread for the IETF’s software-driven network protocol (SDNP) BOF mailing list, where some argued that the Open Networking Foundation’s (ONF) reluctance to begin standardization work on the northbound API — the ONF reportedly will incorporate northbound-API discussions into deliberations of its recently formed architecture workgroup — opened the door for IETF involvement.

Often, but not always, proponents of near-term northbound-API standardization are representatives of legacy vendors familiar with the standards-definition process. (At this point, I feel strangely compelled to invoke the quote often misattributed to Otto von Bismarck regarding the similarity of laws to sausages: “Laws are like sausages. You should never watch them being made.” I believe this maxim also applies to IETF standards.)

The point here, though, isn’t to render a value judgment on who’s right and who’s wrong. What’s salient is that there is stark disagreement on whether the question of the northbound API can and should be settled by market forces or by vendor comity (and committee). Watching to see which players line up on either side of the divide, and how they defend their positions, will be instructive.

Between What Is and What Will Be

I have refrained from writing about recent developments in software-defined networking (SDN) and in the larger realm of what VMware, now hosting VMworld in San Francisco, calls the  “software-defined data center” (SDDC).

My reticence hasn’t resulted from indifference or from hype fatigue — in fact, these technologies do not possess the jaundiced connotations of “hype” — but from a realization that we’ve entered a period of confusion, deception, misdirection, and murk.  Amidst the tumult, my single, independent voice — though resplendent in its dulcet tones — would be overwhelmed or forgotten.

Choppy Transition

We’re in the midst of a choppy transitional period. Where we’ve been is behind us, where we’re going is ahead of us, and where we find ourselves today is between the two. So-called legacy vendors, in both networking and compute hardware, are trying to slow progress toward the future, which will involve the primacy of software and services and related business models. There will be virtualized infrastructure, but not necessarily converged infrastructure, which is predicated on the development and sale of proprietary hardware by a single vendor or by an exclusive club of vendors.

Obviously, there still will be hardware. You can’t run software without server hardware, and you can’t run a network without physical infrastructure. But the purpose and role of that hardware will change. The closed box will be replaced by an open one, not because of any idealism or Panglossian optimism, but because of economic, operational, and technological imperatives that first are remaking the largest of public-cloud data centers and soon will stretch into private clouds at large enterprises.

No Wishful Thinking

After all, the driving purpose of the Open Networking Foundation (ONF) involved shifting the balance of power into the hands of customers, who had their own business and operational priorities to address. Where legacy networking failed them, SDN provided a way forward, saving money on capital expenditures and operational costs while also providing flexibility and responsiveness to changing business and technology requirements.

The same is true for the software-defined data center, where SDN will play a role in creating a fluid pool of virtualized infrastructure that can be utilized to optimal business benefit. What’s important to note is that this development will not be restricted to the public cloud-service providers, including all the big names at the top of the ONF power structure. VMware, which coined the term “software-defined data center,” is aiming directly for the private cloud, as Greg Ferro mentioned in his analysis of VMware’s acquisition of Nicira Networks.

Fighting Inevitability

Still, it hasn’t happened yet, even though it will happen. Senior staff and executives at the incumbent vendors know what’s happening; they know that they’re fighting against an inevitability, but fight it they must. Their organizations aren’t built to go with this flow, so they will resist it.

That’s where we find ourselves. The signal-to-noise ratio isn’t great. It’s a time marked by disruption and turmoil. The dust and smoke will clear, though. We can see which way the wind is blowing.

Chinese Merchant-Silicon Vendor Joins ONF, Enters SDN Picture

Switching-silicon ODM/OEM Centec Networks last week became the latest company to join the Open Networking Foundation (ONF).

According to a press release, Centec is “committed to contributing to SDN development as a merchant silicon vendor and to pioneering in the promotion of SDN adoption in China.” From the ONF’s standpoint, the more merchant silicon on the market for OpenFlow switches, the better.  Expansion in China doubtless is a welcome prospect, too.

Established in 2005, Centec has been financed by China-Singapore Suzhou Industrial Park Venture Capital, Delta Venture Enterprise, Infinity I-China Investments (Israel), and Suzhou Rongda. A little more than a year ago, Centec announced a $10.7-million “C” round of financing, in which Delta Venture Enterprise, Infinity I-China Investments (Israel), and Suzhou Rongda participated.

Acquisition Rumor

Before that round was announced, Centec’s CEO James Sun, formerly of Cisco and of Fore Systems, told Light Reading’s Craig Matsumoto that the company aspired to become an alternative supplier to Broadcom in the Ethernet merchant-silicon market. As a Chinese company, Centec not surprisingly has cultivated relationships with Chinese carriers and network-gear vendors. In his Light Reading article, in fact, Matsumoto cited a rumor that Centec had declined an acquisition offer from HiSilicon Technologies Co. Ltd., the semiconductor subsidiary of Huawei Technologies, China’s largest network-equipment vendor.

Huawei has been working not only to bolster its enterprise-networking presence, but also to figure out how best to utilize SDN and OpenFlow (and OpenStack, too).  Like Centec, Huawei is a member of the ONF, and it also has been active in IETF and IRTF discourse relating to SDN. What’s more, Huawei has been hiring SDN-savvy engineers in China and in the U.S.

As for Centec, the company made its debut on the SDN stage early this year at the Ethernet Technology Summit, where Sun gave a silicon vendor’s perspective on OpenFlow and spoke about the company’s plans to release a reference design based on Centec’s TransWarp switching silicon and an SDK with support for Open vSwitch 1.2. That reference design subsequently was showcased at the Open Networking Summit in April.

It will be interesting to see how Centec develops, both in competitive relation to Broadcom and within the context of the SDN ecosystem.

Network-Virtualization Startup PLUMgrid Announces Funding, Reveals Little

Admit it, you thought I’d lost interest in software-defined networking (SDN), didn’t you?

But you know that couldn’t be true. I’m still interested in SDN and how it facilitates network virtualization, network programmability, and what the empire-building folks at EMC/VMware are billing as the software-defined data center, which obviously encompasses more than just networking.

Game On

Apparently I’m not the only one who retains an abiding interest in SDN. In the immediate wake of VMware’s headline-grabbing acquisition of network-virtualization startup Nicira Networks, entrepreneurs and venture capitalists want us to know that the game has just begun.

Last week, for example, we learned that PLUMgrid, a network-virtualization startup in the irritatingly opaque state of development known as stealth mode, has raised $10.7 million in first-round funding led by moneybags VCs U.S. Venture Partners (USVP) and Hummer Winblad Venture Partners. USVP’s Chris Rust and Hummer Winblad’s Lars Leckie have joined PLUMgrid’s board of directors. You can learn more about the individual board members and the company’s executive team, which includes former Cisco employees who were involved in the networking giant’s early dalliance with OpenFlow a few years ago, by perusing the biographies on the PLUMgrid website.

Looking for Clues 

But don’t expect the website to provide a helpful description of the products and technologies that PLUMgrid is developing, apparently in consultation with prospective early customers. We’ll have to wait until the end of this year, or early next year, for PLUMgrid to disclose and discuss its products.

For now, what we get is a game of technology charades, in which PLUMgrid executives, including CEO Awais Nemat, drop hints about what the company might be doing and their media interlocutors then guess at what it all means. It’s amusing at times, but it’s not illuminating.

At SDNCentral, Matt Palmer surmises that PLUMgrid might be playing in “the service orchestration arena for both physical and virtual networks.” In an article written by Jim Duffy at Network World, we learn that PLUMgrid sees its technology as having applicability beyond the parameters of network virtualization. In the same article, PLUMgrid’s Nemat expresses reservations about OpenFlow. To wit:

 “It is a great concept (of decoupling the control plane for the data plane) but it is a demonstration of a concept. Is OpenFlow the right architecture for that separation? That remains to be seen.”

More to Come

That observation is somewhat reminiscent of what Scott Shenker, Nicira co-founder and chief scientist and a professor in the Electrical Engineering and Computer Science Department at the University of California at Berkeley, had to say about OpenFlow last year. (Shenker also is a co-founder and officer of the Open Networking Foundation, a champion and leading proponent of OpenFlow.)

What we know for certain about PLUMgrid is that it is based in Sunnyvale, Calif., and plans to sell its network-virtualization software to businesses that manage physical, virtual, and cloud data centers. In a few months, perhaps before the end of the year, we’ll know more.

Xsigo: Hardware Play for Oracle, Not SDN

When I wrote about Xsigo earlier this year, I noted that many saw Oracle as a potential acquirer of the I/O virtualization vendor. Yesterday morning, Oracle made those observers look prescient, pulling the trigger on a transaction of undisclosed value.

Chris Mellor at The Register calculates that Oracle might have paid about $800 million for Xsigo, but we don’t know. What we do know is that Xsigo’s financial backers were looking for an exit. We also know that Oracle was willing to accommodate them.

For the Love of InfiniBand, It’s Not SDN

Some think Oracle bought a software-defined networking (SDN) company. I was shocked at how many journalists and pundits repeated the mantra that Oracle had moved into SDN with its Xsigo acquisition. That is not right, folks, and knowledgeable observers have tried to rectify that misconception.

I’ve gotten over a killer flu, and I have a residual sinus headache that sours my usually sunny disposition, so I’m in no mood to deliver a remedial primer on the fundamentals of SDN. Suffice it to say, readers of this forum and those familiar with the pronouncements of the ONF will understand that what Xsigo does, namely I/O virtualization, is not SDN. That is not to say that what Xsigo does is not valuable, perhaps especially to Oracle. Nonetheless, it is not SDN.

Incidentally, I have seen a few commentators throwing stones at the Oracle marketing department for depicting Xsigo as an SDN player, comparing it to Nicira Networks, which VMware is in the process of acquiring for a princely sum of $1.26 billion. It’s probably true that Oracle’s marketing mavens are trying to gild their new lily by covering it with splashes of SDN gold, but, truth be told, the marketing team at Xsigo began dressing their company in SDN garb earlier this year, when it became increasingly clear that SDN was a lot more than an ephemeral science project involving OpenFlow and boffins in lab coats.

Why Confuse? It’ll be Obvious Soon Enough

At Network Computing, Howard Marks tries to get everybody onside. I encourage you to read his piece in its entirety, because it provides some helpful background and context, but his superbly understated money quote is this one: “I’ve long been intrigued by the concept of I/O virtualization, but I think calling it software-defined networking is a stretch.”

In this industry, words are stretched and twisted like origami until we can no longer recognize their meaning. The result, more often than not, is befuddlement and confusion, as we witnessed yesterday, an outcome that really doesn’t help anybody. In fact, I would argue that Oracle and Xsigo have done themselves a disservice by playing the SDN card.

As Marks points out, “Xsigo’s use of InfiniBand is a good fit with Oracle’s Exadata and other clustered solutions.” What’s more, Matt Palmer, who notes that Xsigo is “not really an SDN acquisition,” also writes that “Oracle is the perfect home for Xsigo.” Palmer makes the salient point that Xsigo is essentially a hardware play for Oracle, one that aligns with Oracle’s hardware-centric approaches to compute and storage.

Oracle: More Like Cisco Than Like VMware

Oracle could have explained its strategy and detailed the synergies between Xsigo and its family of hardware-engineered “Exasystems” (Exadata and Exalogic) —  and, to be fair, it provided some elucidation (see slide 11 for a concise summary) — but it muddied the waters with SDN misdirection, confusing some and antagonizing others.

Perhaps my analysis is too crude, but I see a sharp divergence between the strategic direction VMware is heading with its acquisition of Nicira and the path Oracle is taking with its Exasystems and Xsigo. Remember, Oracle, after the Sun acquisition, became a proprietary hardware vendor. Its focus is on embedding proprietary hooks and competitive differentiation into its hardware, much like Cisco Systems and the other converged-infrastructure players.

VMware’s conception of a software-defined data center is a completely different proposition. Both offer virtualization, both offer programmability, but VMware treats the underlying abstracted hardware as an undifferentiated resource pool. Conversely, Oracle and Cisco want their engineered hardware to play integral roles in data-center virtualization. Engineered hardware is what they do and who they are.

Taking the Malocchio in New Directions

In that vein, I expect Oracle to look increasingly like Cisco, at least on the infrastructure side of the house. Does that mean Oracle soon will acquire a storage player, such as NetApp, or perhaps another networking company to fill out its data-center portfolio? Maybe the latter first, because Xsigo, whatever its merits, is an I/O virtualization vendor, not a switching or routing vendor. Oracle still has a networking gap.

For reasons already belabored, Oracle is an improbable SDN player. I don’t see it as the likeliest buyer of, say, Big Switch Networks. IBM is more likely to take that path, and I might even get around to explaining why in a subsequent post. Instead, I could foresee Oracle taking out somebody like Brocade, presuming the price is right, or perhaps Extreme Networks. Both vendors have been on and off the auction block, and though Oracle’s Larry Ellison once disavowed acquisitive interest in Brocade, circumstances and Oracle’s disposition have changed markedly since then.

Oracle, which has made so many bitter adversaries over the years — IBM, SAP, Microsoft, Salesforce, and HP among them — now appears ready to cast its “evil eye” toward Cisco.

Inevitability of Virtualized Infrastructure

As a previous post, Infrastructure Virtualization Versus Converged Infrastructure, attests, I strongly believe that virtualization is leading us to a future in which underlying hardware becomes largely undifferentiated and interchangeable. Applications and orchestration will reside in software riding atop the virtualization layer, which effectively will function as an abstraction buffer above hardware infrastructure. The latter will eventually include hardware for compute, networking, and storage.

Vendors that ride hardware-based business models will have trouble adapting to this new reality. Many of these companies have hordes of software developers and software engineers, but they inextricably intertwine their software and hardware as a matter of business practice, selling the latter as proprietary boxes that often cannot interoperate with, or be swapped out for, competing hardware. It’s classic hardware-based vendor lock-in, and it’s been with us for many years. This applies to vendors that sell all three main types of hardware infrastructure, and to those that sell them tied together as converged infrastructure.

Loosening a Tenacious Grip

Proprietary data-center hardware would appear to be running on borrowed time, though it will not disappear overnight. Its grip will be especially tenacious in the enterprise, though the pull of the cloud eventually will weaken its hold. Proprietary compute infrastructure will be the first to succumb, but networking and storage will fall, too. The economic and operational logic powering the transition is inexorable, so it’s a question of when, not whether, it will happen.

While CapEx savings are an obvious benefit, operational flexibility (shifting workloads with agility and less effort) and OpEx savings also are factors. Infrastructure hardware will be cheaper, as well as easier and less costly to run. Pools of industry-standard hardware will be reallocated on demand to serve the needs of application workloads. Data-center customers no longer will be constrained by the hardware-release schedules of their previous vendors of choice. Customers also will be able to take advantage of the latest industry-standard chipsets, which will power hardware with improved energy efficiency and better cooling characteristics.

In servers, and now in storage, Facebook’s Open Compute Project (OCP) has sought to expedite the move to off-the-shelf hardware. Last week at OSCON, Frank Frankovsky, a vice president at Facebook and the chairman and president of the OCP, rallied the open-source troops by arguing that proprietary x86 systems are “gratuitously differentiated.” He called for all hardware-design specifications to be open.

OCP as Competitive Cudgel

That would benefit Facebook, which launched OCP as a vehicle to help it lower data-center CapEx and OpEx, boost operational flexibility, and — last but not least — mitigate a competitive advantage held by Google, which had a massive head start in rationalizing and fine-tuning its data centers and IT infrastructure. In fact, Google cloaks its IT operations in extreme secrecy, believing that its practices and technologies deliver substantial competitive advantage over its main rivals, including Facebook. The latter must agree, because the animating idea behind Open Compute is to create a market, demand and supply, for commodity server hardware that will reduce or eliminate Google’s edge.

Some have wondered why Google hasn’t joined OCP, but the answer should be obvious. Google believes it has cracked the infrastructure code, and it is therefore disinclined to share its insights and best practices with its competitors. Google isn’t a fan of proprietary vanity hardware — it’s been designing its own gear, then going to server and network ODMs, for some time now — but Google feels it has nothing to gain, and much to lose, from opening its kimono to the OCP crowd.

With networking, though, Google felt it needed a little help from its friends — as well as from its enemies. That explains why it allied with Facebook and other cloud-service providers in the Open Networking Foundation (ONF), which I have written about here on many occasions. The goal of the ONF, as with OCP, is to slip the proprietary shackles of hardware vendors, whose gear functions as an impediment to operational agility as well as a cost that could be reduced through SDN-style network virtualization. Google’s communitarian approach to addressing the network-virtualization riddle suggests that it believes it cannot achieve the desired outcome on its own.

Cracking the Nut

Whereas compute hardware was well on its way to standardization, networking hardware, until the ONF, was akin to a vertically integrated mainframe system, replete with a proliferating number of both proprietary and industry-standard protocols. Networking is a bigger, and tougher, nut to crack.

But crack it will, first at the big cloud-service providers, then, as the cloud gains momentum, at enterprises.

PS: I will post something tomorrow about VMware’s just-announced acquisition of Nicira, which is big news no matter how you slice it.  I wrote the above post before I learned of the acquisition.

Infrastructure Virtualization Versus Converged Infrastructure

While writing about software-defined networking (SDN) and what it makes possible, I have been thinking about how its essential premise, and the premise behind infrastructure virtualization, conflicts with visions of converged infrastructure promulgated by the leading systems vendors in the information-technology (IT) industry.

According to the Wikipedia definition, converged infrastructure encompasses servers, storage, networking gear, and software for IT infrastructure management, automation, and orchestration. Accordingly, converged infrastructure leverages pooled IT resources to facilitate automated resource provisioning in support of dynamic application workloads.

Hardware Pedigrees in Software World

Leading vendors, most with more hardware than software pedigrees, have sought to offer proprietary converged-infrastructure offerings that closely integrate the hardware elements with software-based management attributes. In this regard, we can cite vendors such as Cisco (with a storage assist from EMC or NetApp), Hewlett-Packard, Dell, Hitachi Data Systems, Oracle (though networking remains an open question there), and, perhaps to a lesser extent, IBM.

Now, let’s think about SDN and where it ultimately leads. Cisco would like us to believe that SDN, if it leads anywhere, will eventually take us to network programmability, with a heavy emphasis on the significance of a northbound API (or APIs). Cisco says that the means — in this case, SDN — are not as important as the desired end, network programmability, and many of Cisco’s enterprise customers will doubtless agree.

SDN End Games

Another SDN outcome is network virtualization, which admittedly can also be achieved through other means. But an interesting aspect of SDN’s approach to network virtualization, with its decoupling of the network’s control and data planes, is that it results in the abstracting of software-based network intelligence from the underlying hardware-based network brawn. It’s a software paradigm taken to a logical extreme, with server-based software running at the network edge controlling an abstracted pool of no-frills networking hardware.
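For the code-inclined, here is a minimal sketch of that decoupling, with every name invented for illustration (a toy, not a real controller and not the OpenFlow protocol): a central controller with a global view of the topology computes forwarding tables, and the switches do nothing but look them up.

```python
# Toy illustration of control/data-plane separation: the "controller"
# holds all the intelligence; each "switch" only consults a pushed table.
from collections import deque

class Switch:
    """Data plane: forwards based solely on a table pushed from above."""
    def __init__(self, name):
        self.name = name
        self.table = {}                  # destination -> next hop

    def forward(self, dst):
        return self.table.get(dst)       # no local smarts, just a lookup

class Controller:
    """Control plane: global topology view; computes and pushes tables."""
    def __init__(self, topology):
        self.topology = topology         # switch name -> neighbor names

    def compute_tables(self, switches):
        for src in switches:
            # BFS from src yields a next hop toward every destination.
            parent, seen = {}, {src.name}
            queue = deque([src.name])
            while queue:
                node = queue.popleft()
                for nbr in self.topology[node]:
                    if nbr not in seen:
                        seen.add(nbr)
                        parent[nbr] = node
                        queue.append(nbr)
            for dst in seen - {src.name}:
                hop = dst
                while parent[hop] != src.name:
                    hop = parent[hop]
                src.table[dst] = hop     # push the computed rule down

topology = {"s1": {"s2"}, "s2": {"s1", "s3"}, "s3": {"s2"}}
switches = [Switch(n) for n in topology]
Controller(topology).compute_tables(switches)
print(switches[0].forward("s3"))         # -> 's2'
```

The division of labor is the whole point: every decision lives in the controller, and the switch is reduced to a table lookup, which is why the underlying hardware can afford to be no-frills.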

Indeed, this is one end game for SDN, first playing out in the data centers of the major cloud service providers that guide the affairs of the Open Networking Foundation (ONF), and then — at some indeterminate future point too difficult to forecast without a Ouija board and a bottle of scotch  — also at large enterprises worldwide.

Let’s elaborate further. SDN facilitates network virtualization, which in turn is harnessed and orchestrated by cloud-management software, which also manages virtualized compute and storage infrastructure. As we’ve seen already in the compute world of servers, it’s getting increasingly difficult for a vanity hardware vendor to earn a buck in a virtualized world. Many service providers have found that they can get boxes that satisfy their needs, at lower prices, directly from ODMs that often build servers for name-brand OEMs.  Storage is being virtualized, too.

Network’s Turn

And now it is the network’s turn.

In such a world, how much longer will it make sense for customers to achieve converged infrastructure from single-source vendors that equip their hardware with proprietary fripperies and hooks to facilitate lock-in? Again, we can see this trend playing out at large service providers. Some have begun buying their networking hardware off the rack from ODMs, saving not only on capital expenditures (certainly the case for servers), but also on operating expenses relating to the ongoing management of network infrastructure. It’s true that they’re trading one sort of complexity for another, pushing it up the stack and into software rather than leaving it in operational hardware, but it’s a trade-off they’re clearly willing to make, probably because they have the resources and skill sets to make it work (and pay).

Obviously that is not a recipe for everybody, certainly not for most enterprises today. But times are changing, and it isn’t inconceivable to foresee a day when the enterprise will be able to avail itself of third-party private-cloud software and management tools that will allow it to exploit a similar model of virtualized infrastructure.

Prescience Pays Off

In the big picture, as far as the established networking vendors are concerned, the ONF’s conception of SDN is about more than just OpenFlow, and even about more than network programmability. It’s about how SDN supports a model of network virtualization, in service to infrastructure virtualization, that significantly enfeebles hardware-based business models. Some of these hardware-oriented vendors will not successfully pivot to a model of virtualized infrastructure and software primacy.

On the other hand, some vendors have had the prescience to see this trend approaching on the horizon; they understand its inevitability, and they have positioned themselves better than others to survive, and perhaps even thrive, after the eventual market transition.

We’ll look at one of those vendors in a subsequent post.

SDN Focus Turns to Infrastructure

At this year’s SIGCOMM conference in Helsinki, Finland, a workshop called Hot Topics in Software-Defined Networking (HotSDN) will be held on August 13.  A number of papers will be presented as part of HotSDN’s technical program, but one has been written as a “call to arms for the SDN community.”

The paper is titled “Fabric: A Retrospective on Evolving SDN.” Its authors are Martin Casado, CTO of Nicira Networks; Teemu Koponen of the International Computer Science Institute (ICSI); Scott Shenker, a co-founder of Nicira Networks (along with Casado) and also a professor of computer science at the University of California, Berkeley; and Amin Tootoonchian, a PhD candidate at the University of Toronto and a visiting researcher at ICSI.

SDN Fabrics

We’ll get to their definition of fabric soon enough, but let’s set the stage properly by explaining at the outset that the paper discusses SDN’s shortcomings and proposes “how they can be overcome by adopting the insight underlying MPLS,” which is seen as helping to facilitate an era of simple network hardware and flexible network control.

In the paper’s introductory section, the authors contend that “current networks are too expensive, too complicated to manage, too prone to vendor lock-in, and too hard to change.” They write that the SDN community has done considerable research on network architecture, but not as much on network infrastructure, an omission that they then attempt to rectify.

Network infrastructure, the paper’s authors contend, has two components: the underlying hardware, and the software that controls the overall behavior of the network. Ideally, they write, hardware should be simple, vendor-neutral, and future-proof, while the control plane should be flexible.

Infrastructure Inadequacies

As far as the authors are concerned, today’s network infrastructure doesn’t satisfy any of those criteria, with “the inadequacies in these infrastructural aspects . . .  probably more problematic than the Internet’s architectural deficiencies.” The deficiencies cannot be overcome through today’s SDN alone, but a better SDN can be built by, as mentioned above, “leveraging the insights underlying MPLS.”

And that’s where network fabrics enter the picture. The authors define a network fabric as “a contiguous and coherently controlled portion of the network infrastructure, and do not limit its meaning to current commercial fabric offerings.” Later, they refer to a network fabric as “a collection of forwarding elements whose primary purpose is packet transport. Under this definition, a network fabric does not provide more complex network services such as filtering or isolation.”

Filtering, isolation, policy, and other network services will be handled in software at the network edge, while the fabric will serve primarily as a “fast and cheap” interconnect. The authors contend that we can’t reach that objective with today’s SDN and OpenFlow.

OpenFlow’s Failings

They write that OpenFlow’s inability to “distinguish between the Host-Network interface (how hosts inform the network of requirements) and the Packet-Switch interface (how a packet identifies itself to a switch)” has resulted in three problems, the first of which is that OpenFlow, in its current form, does not “fulfill the promise of simplified hardware” because the protocol requires switch hardware to support lookups of hundreds of bits.

The second problem relates to flexibility. As host requirements evolve, the paper’s authors anticipate “increased generality in the Host-Network interface,” which will mean increasing the generality in . . . “matching allowed and the actions supported” on every switch supported by OpenFlow. The authors are concerned that “needing functionality to be present on every switch will bias the decision towards a more limited feature set, reducing OpenFlow’s generality.”

The third problem, similar to the second, is that the current implementation of OpenFlow “couples host requirements to the network core behavior.” Consequently, if there is a change in external network protocols (such as a transition from IPv4 to IPv6), the change in packet matching would necessarily extend into the network core.
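A bit of back-of-envelope arithmetic shows why that first complaint stings. The field widths below are the standard header sizes for the sort of 12-tuple match early OpenFlow specified; the precise wire encoding differs, so treat the totals as rough orders of magnitude rather than protocol-accurate figures.

```python
# Rough comparison of lookup width: an OpenFlow 1.0-style 12-tuple match
# versus an MPLS-style label. Widths are the familiar header-field sizes;
# they are approximations for illustration, not an exact protocol spec.
openflow_match_bits = {
    "ingress_port": 16,
    "eth_src": 48, "eth_dst": 48, "eth_type": 16,
    "vlan_id": 12, "vlan_pcp": 3,
    "ip_src": 32, "ip_dst": 32, "ip_proto": 8, "ip_tos": 6,
    "l4_src_port": 16, "l4_dst_port": 16,
}
mpls_label_bits = 20  # the label field in an MPLS shim header

print(sum(openflow_match_bits.values()), "match bits per OpenFlow-style rule")
print(mpls_label_bits, "lookup bits per MPLS-style fabric hop")
```

Roughly 250 bits of match per rule versus 20 bits of label per hop; that order-of-magnitude gap is the hardware simplification the authors are after.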

Toward a New Infrastructure

Accordingly, the authors propose a network fabric that borrows heavily from MPLS, with its labels and encapsulation, and that also benefits from proposed modifications to the SDN model and to OpenFlow itself. What we get is a model that includes a network fabric as an “architectural building block” within SDN. A diagram illustrating this SDN model shows a source host connecting to an ingress edge switch, which then applies MPLS-like label-based forwarding within the “fabric elements.” On the other side of the fabric, an egress edge switch ensures that packets are delivered to the destination host. The ingress and egress edge switches answer to an “edge controller,” while a “fabric controller” controls the fabric elements.
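For illustration, here is a toy sketch of that pipeline in Python, with every name and structure invented for the example (none of it comes from the paper or any real implementation): the ingress edge classifies and encapsulates, the fabric elements forward on the label alone, and the egress edge decapsulates.

```python
# Toy model of the fabric described above. The edge does the semantically
# rich work; the core sees nothing but labels. All names are invented.

def ingress_edge(packet, label_map):
    # Classify the packet and push an MPLS-like label onto it.
    label = label_map[(packet["src"], packet["dst"])]
    return {"label": label, "payload": packet}

def fabric_element(frame, label_table):
    # A fabric element's entire job: look up the label, pick a next hop.
    return label_table[frame["label"]]

def egress_edge(frame):
    # Pop the label; the fabric's addressing never leaks to the host.
    return frame["payload"]

label_map = {("h1", "h2"): 42}             # installed by the edge controller
label_table = {42: "port-3"}               # installed by the fabric controller

frame = ingress_edge({"src": "h1", "dst": "h2", "data": "hello"}, label_map)
print(fabric_element(frame, label_table))  # -> port-3
print(egress_edge(frame))                  # -> the original packet
```

Note how the two lookup tables are populated by different controllers, which is exactly the separation of control the authors describe next.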

The key properties associated with the SDN fabric are separation of forwarding and separation of control. Separation of forwarding is intended to simplify the fabric forwarding elements, but also to “allow for independent evolution of fabric and edge.” As for separation of control, I quote from the paper:

While there are multiple reasons to keep the fabric and the edge’s control planes separate, the one we would like to focus on is that they are solving two different problems. The fabric is responsible for packet transport across the network, while the edge is responsible for providing more semantically rich services such as network security, isolation, and mobility. Separating the control planes allows them each to evolve separately, focusing on the specifics of the problem. Indeed, a good fabric should be able to support any number of intelligent edges (even concurrently) and vice versa.

Two OpenFlows?

As the authors then write, “if the fabric interfaces are clearly defined and standardized, then fabrics offer vendor independence and . . . limiting the function of the fabric to forwarding enables simpler switch implementations.”

The paper goes on to address the fabric-service model, fabric path setup, addressing and forwarding in the fabric, and how the edge context is mapped to the fabric (the options are address translation and encapsulation, the latter being the authors’ favored mechanism).

To conclude, the authors look at fabric implications, one of which involves proposed changes to OpenFlow. They prescribe an “edge” version of OpenFlow, more general than the current manifestation of the protocol, and a “core” version that is similar to MPLS forwarding. The current OpenFlow, they say, is an “unhappy medium,” insufficiently general for the edge and not simple enough for the core; the generic “edge version of OpenFlow should aggressively adopt the assumption that it will be processed in software, and be designed with that freedom in mind.”

Refinements to the Model

In the final analysis, the authors believe their proposal to address infrastructure as well as architecture will result in an SDN model “where the edge processing is done in software and the core in simple (network) hardware,” the latter of which would deliver the joint benefits of reduced costs and “vendor neutrality.”

The paper essentially proposes a refinement to both OpenFlow and to the SDN architectural model. We might call it SDN 2.0, though that might seem a little glib and presumptuous (at least on my part). Regardless of what we call it, it is evident that certain elements in the vanguard of the SDN community continue to work hard to deliver a new type of cloud-era networking that delivers software-based services running over a brawny but relatively simple network infrastructure.

How will the broader SDN community and established vendors in network infrastructure respond? We won’t have to wait long to find out.

Cisco’s SDN Response: Mission Accomplished, but Long Battle Ahead

In concluding my last post, I said I would write a subsequent note on whether Cisco achieved its objectives in its rejoinder to software-defined networking (SDN) at the Cisco Live conference last week in San Diego.

As the largest player in network infrastructure, Cisco carries considerable weight: when it talks, its customers (and the industry ecosystem) listen. As such, we witnessed extensive coverage of the company’s Cisco Open Network Environment (Cisco ONE) proclamations last week.

Really, what Cisco announced with Cisco ONE was relatively modest and wholly unsurprising. What was surprising was the broad spectrum of reactions to what was effectively a positioning statement from the networking market’s leading vendor.

Mission Accomplished . . . For Now

And that positioning statement wasn’t so much about SDN, or about the switch-control protocol OpenFlow, but about something more specific to Cisco, whose installed base of customers, especially in the enterprise, is increasingly curious about SDN. Indeed, Cisco’s response to SDN should be seen, first and foremost, as a response to its customers. One could construe it as a cynical gesture to “freeze the market,” but that would not do full justice to the rationale. Instead, let’s just say that Cisco’s customers wanted to know how their vendor of choice would respond to SDN, and Cisco was more than willing to oblige.

In that regard, it was mission accomplished. Cisco gave its enterprise customers enough reason to put off a serious dalliance with SDN, at least for the foreseeable future (which isn’t that long). But that’s all it did. I didn’t see a vision from Cisco. What I saw was an effective counterpunch — but definitely not a knockout — against a long-term threat to its core market.

Cisco achieved its objective partly by offering its own take on network programmability, replete with a heavy emphasis on APIs and northbound interfaces; but it also did it partly by bashing OpenFlow, the open  protocol that effects physical separation of the network-element control and forwarding planes.

Conflating OpenFlow and SDN

In its criticism of OpenFlow, Cisco sought to conflate the protocol with the larger SDN architecture. As I and many others have noted repeatedly, OpenFlow is not SDN; the two are not synonymous. It is possible to deliver an SDN architecture without OpenFlow. Even when OpenFlow is included, it’s a small part of the overall picture. SDN is more than a mechanism by which a physically separate control plane directs packet forwarding on a switch.

If you listened to Cisco last week, however, you would have gotten the distinct impression that OpenFlow and SDN are indistinguishable, and that all that’s happening in SDN is a southbound conversation from a server-based software controller and OpenFlow-capable switches. That’s not true, but the Open Networking Foundation (ONF), the custodians of SDN and OpenFlow, has left an opening that Cisco is only too happy to exploit.

The fact is, the cloud service-provider principals steering the ONF see SDN playing a much bigger role than Cisco would have you believe. OpenFlow is a starting point. It is a means to, well, another means — because SDN is an enabler, too. What SDN enables is network virtualization and network programmability, though not in the way Cisco would like its customers to get there.

Cisco Knows SDN Is More Than OpenFlow

To illustrate my point, I refer you to the relatively crude ONF SDN architectural stack showcased in a white paper, Software-Defined Networking: The New Norm for Networks. If you consult the diagram in that document, you will see that OpenFlow is the connective tissue between the controller and the switch — what ONF’s Dan Pitt has described as an “open interface to packet forwarding” — but you will also see that there are abstraction layers that reside well above OpenFlow.

If you want an even more detailed look at a “modern” SDN architecture, you can consult a presentation given by Cisco’s David Meyer earlier this year. That presentation features physical hardware at the base, with SDN components in the middle. These SDN components include the “forwarding interface abstraction” represented by OpenFlow, a network operating system (NOS) running on a controller (server), a “nypervisor” (network hypervisor), and a global management abstraction that interfaces with the control logic of higher-layer application (control) programs.
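For illustration only, here is a toy rendering of that layering in code (the class names are shorthand for the concepts in Meyer’s slides, not real software): a control program expresses intent against its virtual network, and each layer beneath translates that intent downward until it becomes per-switch rules.

```python
# Toy layering: control logic -> nypervisor -> NOS -> forwarding interface.
# Everything here is invented shorthand for the concepts named above.

class ForwardingInterface:
    """The OpenFlow-like southbound API to the switches."""
    def install_rule(self, switch, match, action):
        print(f"{switch}: match {match} -> {action}")

class NOS:
    """Network operating system running on the controller."""
    def __init__(self):
        self.southbound = ForwardingInterface()

    def program_path(self, switches, match, action):
        for sw in switches:
            self.southbound.install_rule(sw, match, action)

class Nypervisor:
    """Maps virtual networks onto the physical substrate."""
    def __init__(self, nos, placement):
        self.nos, self.placement = nos, placement

    def realize(self, vnet, match, action):
        # Translate a tenant-level request into rules on real switches.
        self.nos.program_path(self.placement[vnet], match, action)

# The "control program" speaks only in terms of its own virtual network.
nypervisor = Nypervisor(NOS(), {"tenant-a": ["s1", "s4", "s7"]})
nypervisor.realize("tenant-a", match="dst=10.0.0.5", action="fwd:port2")
```

The point of the sketch is only that several abstraction layers sit between application control logic and the OpenFlow-style interface at the bottom.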

So, Cisco clearly knows that SDN comprises more than OpenFlow, but, in its statements last week at Cisco Live, the company preferred to use the protocol as a strawman in its arguments for Cisco-centric network programmability. You can’t blame Cisco, though. It has customers to serve — and to keep in the revenue- and profit-generating fold — and an enterprise-networking franchise to protect.

Mind the Gap

But why did the ONF leave this gap for Cisco to fill? It’s partly because the ONF isn’t overly concerned with the enterprise and partly because the ONF sees OpenFlow as an open, essential precondition for the higher, richer layers of the SDN architectural model.

Without the physical separation of the control plane from the forwarding plane, after all, some of the ONF’s service-provider constituency might not have been able to break free of vendor hegemony in their networks. What’s more, they wouldn’t be able to set the stage for low-priced, ODM-manufactured networking hardware built with merchant silicon.

As you can imagine, that is not the sort of change that Cisco can get behind, much less lead. Therefore, Cisco breaks out the brickbats and goes in hot pursuit of OpenFlow, which it then portrays as deficient for the purposes of far-reaching, north-and-south network programmability.

Exiting (Not Exciting) Plumbing

Make no mistake, though. The ONF has a vision, and it extends well beyond OpenFlow. At a conference in Garmisch, Germany, earlier this year, Dan Pitt, the ONF’s executive director, offered a presentation called “A Revolution in Networking and Standards,” and made the following comments:

“I think networking is going to become an integral part of computing in a way that makes it less important, because it’s less of a problem. It’s not the black sheep any longer. And the same tools you use to create an IT computing infrastructure or virtualization, performance, and policy will flow through to the network component of that as well, without special effort.

I think enterprises are going to be exiting technology – or exiting plumbing. They are not going to care about the plumbing, whether it’s their networks or the cloud networks that increasingly meet their needs, and the cloud services. They’re going to say, here’s the function or the feature I want for my business goal, and you make it happen. And somebody worries about the plumbing, but not as many people who worry about plumbing today. And if you’ve got this virtualized view, you don’t have to look at the plumbing. . . .

The operators are gradually becoming software companies and internet companies. They are bulking up on those skills. They want to be able to add those services and features themselves instead of relying on the vendors, and doing it quickly for their customers. It gives opportunities to operators that they didn’t have before of operating more diverse services and experimenting at low cost with new services.”

No Cartwheels

Again, this is not a vision that would have John Chambers doing cartwheels across the expansive Cisco campus.

While the ONF is making plans to address the northbound interfaces that are a major element in Cisco’s network programmability, it hasn’t done so yet. Even when it does, the ONF is unlikely to standardize higher-layer APIs, at least in the near term. Instead, those APIs will be associated with the controllers that get deployed in customer networks. In other words, the ONF will let the market decide.

On that tenet, Cisco can agree with the ONF. It, too, would like the market to decide, especially since its market presence — the investments customers have made in its routers and switches, and in its protocols and management tools — towers imperiously over the meager real estate being claimed in the nascent SDN market.

With all that Cisco network infrastructure deployed in customer networks, Cisco believes it’s in a commanding position to set the terms for how the network will deliver software intelligence to programmers of applications and management systems. Theoretically, that’s true, but the challenge for Cisco will be in successfully engaging a programming constituency that isn’t its core audience. Can Cisco do it? It will be a stretch.

Do They Get It?

All the while, the ONF and its service-provider backers will be advancing and promoting the SDN model and the network virtualization and programmability that accompany it. The question for the ONF is not whether its movers and shakers understand programmers — it’s pretty clear that Google, Facebook, Microsoft, and Yahoo are familiar with programmers — but whether the ONF understands and cares enough about the enterprise to make that market a priority in its technology roadmap.

If the ONF leaves the enterprise to the dictates of the Internet Engineering Task Force (IETF) and Institute of Electrical and Electronics Engineers (IEEE), Cisco is likely to maintain its enterprise dominance with an approach that provides some benefits of network programmability without the need for server-based controllers.

Meanwhile, as Tom Nolle, president of CIMI Corporation, has pointed out, Cisco ONE also serves as a challenge to Cisco’s conventional networking competitors, which are devising their own answers to SDN.

But that is a different thread, and this one is too long already.

Understanding Cisco’s Relationship to SDN Market

Analysts and observers have variously applauded or denounced Cisco for its Cisco ONE network-programmability pronouncements last week. Some pilloried the company for being tentative in its approach to SDN, contrasting the industry giant’s perceived reticence with its aggressive pursuit of previous emerging technology markets such as IP PBX, videoconferencing, and converged infrastructure (servers).

Conversely, others have lauded Cisco’s approach to SDN as far more aggressive than its lackluster reply to challenges in market segments such as application-delivery controllers (ADCs) and WAN optimization, where F5 and Riverbed, respectively, demonstrated how a tightly focused strategy and expertise above the network layer could pay off against Cisco.

Different This Time

But I think they’ve missed a very important point about Cisco’s relationship to the emerging SDN market.  Analogies and comparisons should be handled with care. Close inspection reveals that SDN and the applications it enables represent a completely different proposition from the markets mentioned above.

Let’s break this down by examining Cisco’s aggressive pursuit of IP-based voice and video. It’s not a mystery as to why Cisco chose to charge headlong into those markets. They were opportunities for Cisco to pursue its classic market adjacencies in application-related extensions to its hegemony in routing and switching. Cisco also saw video as synergistic with its core network-infrastructure business because it generated bandwidth-intensive traffic that filled up existing pipes and required new, bigger ones.

Meanwhile, Cisco’s move into UCS servers was driven by strategic considerations. Cisco wanted the extra revenue servers provided, but it also wanted to preemptively seize the advantage over its former server partners (HP, Dell, IBM) before they decided to take the fight to Cisco. What’s more, all the aforementioned vendors confronted the challenge of continuing to grow their businesses and public-market stock prices in markets that were maturing and slowing.

Cisco’s reticence to charge into WAN optimization and ADCs also is explicable. Strategically, at the highest echelons within Cisco, the company viewed these markets as attractive, but not as essential extensions to its core business. The difficulty was not only that Cisco didn’t possess the DNA or the acumen to play in higher-layer network services — though that was definitely a problem — but also that Cisco did not perceive those markets as conferring sufficiently compelling rewards or strategic advantages to warrant the focus and resources necessary for market domination. Hence, we have F5 Networks and its ADC market leadership, though certainly F5’s razor-sharp focus and sustained execution factored heavily into the result.

To Be Continued

Now, let’s look at SDN. For Cisco, what sort of market does it represent? Is it an opportunity to extend its IP-based hegemony, like voice, video, and servers? No, not at all. Is it an adjunct market, such as ADCs and WAN optimization, that would be nice to own but isn’t seen as strategically critical or sufficiently large to move the networking giant’s stock-price needle? No, that’s not it, either.

So, what is SDN’s market relationship to Cisco?

Simply put, it is a potential existential threat, which makes it unlike IP PBXes, videoconferencing, compute hardware, ADCs, and WAN optimization. SDN is a different sort of beast, for reasons that have been covered here and elsewhere many times.  Therefore, it necessitates a different sort of response — carefully calculated, precisely measured, and thoroughly plotted. For Cisco, the ONF-sanctioned approach to SDN is not an opportunity that the networking giant can seize,  but an incipient threat to the lifeblood of its business that it must blunt and contain — and, whatever else, keep out of its enterprise redoubt.

Did Cisco achieve its objective? That’s for a subsequent post.

In Assessing SDN’s Future, Take Care in Picking Precedents

Software-defined networking (SDN) continues to generate considerable attention and commentary, with this humble corner of the Internet contributing to the hubbub. There’s always a danger, especially with new technologies, that the hype cycle will result in a scenario in which proponents will overpromise and the technologies, understandably, will underdeliver.

When that happens, disappointment ensues. Gartner calls it the “trough of disillusionment,” which often serves as the darkness before the market dawn.

Certainly many caveats have been raised as expectation moderators to SDN. These caveats often come with references to preceding technologies that didn’t quite evolve according to originator intent or market plan. Lately, in fact, some have cited the slow adoption of IPv6 as a cautionary tale for SDN.

Not Analogous

In more than one respect, however, the comparison of IPv6 with SDN doesn’t fly.

As the existence of the Open Networking Foundation (ONF) attests, large cloud service providers clearly perceive compelling business reasons  for the development and deployment of SDN solutions. Conversely, IPv6 was seen as something enterprises and service providers would have to do eventually as opposed to something they wanted to do.

Where the switch to IPv6 from IPv4 was driven by fear, the transition from conventional networking to software-defined networking (SDN), at least for large service providers, is being driven by the desire for business benefits and increased operational efficiency. While the purveyors of IPv6 sternly wielded a threatening stick to drive compliance, the champions of SDN at the ONF waved the carrots of network programmability and reduced operating expenditures. SDN is something its adopters want, not something fear compels them to do.

Yes, I know that there always were good business reasons for enterprises and service providers to adopt IPv6, but those reasons often were articulated poorly or inadequately. Instead, fear took center stage, attempting to browbeat and threaten its audience into abject fealty.

Only Works for the Mob

Nobody likes to be threatened. Negative sales campaigns, predicated on implicit or explicit threats of impending doom, are less likely to resonate than those that are positive and inspiring. (Unless, of course, you’re running a protection racket for the mob, in which case your threats might be pretty damn effective, at least for a while.) IPv6 was all about the approach of darkening storm clouds, whereas SDN offers the promise of sunny innovation and a bright future.

As technologies and as market phenomena, IPv6 and SDN have little in common. It seems folly to cite the slow rate of adoption of IPv6 as a predictive precursor for SDN.

So, while SDN might not live up to its promise — and it will meet particularly strong headwinds in the enterprise — it will not face the same problems that confronted IPv6. They are qualitatively different technologies, and SDN will experience a market trajectory quite different from that of IPv6.