Category Archives: OpenFlow

Why Established Networking Vendors Aren’t Leading SDN Charge

Expressing equal parts exasperation and incredulity, Greg Ferro wonders why industry-leading networking vendors aren’t seizing the initiative and offering compelling strategies for software-defined networking (SDN).

The answer seems clear enough.

Although applications will be critical to the long-term commercial success of SDN, Google and the other movers and shakers that direct the affairs of the Open Networking Foundation (ONF) originally were drawn to SDN because they were frustrated with the lack of responsiveness and innovation from established vendors. As a result, they devised a networking model that not only separated the control and data planes of network elements, but that also, in the words of Google’s Amin Vahdat, separated the “evolution path for (network) hardware and software.”

Two Paths

Until now, those evolutionary paths have been converged and constrained inside the largely proprietary boxes of networking vendors. Google and its confreres in the ONF perceived that state of affairs as the yoke of vendor oppression. The network, slow to evolve and innovate, was getting in the way of progress. All the combustible ingredients of a cloud-service provider insurrection had cohered. Google, taking the lead in organizing the other major service providers under the rubric of the ONF, lit the fuse.

The effects of the explosion are just being felt, and the reverberations will echo for some time. The big service providers, and perhaps many smaller ones, are gravitating away from the orbit of networking’s ancien regime. The question now is whether enterprises will follow. At some point, that probably will happen, but how and when it will unfold are less clear. Enterprises, unlike the board members of the ONF, are too diverse and numerous to organize in pursuit of common interests. Accordingly, vendors are still able to set the enterprise agenda.

But enterprises will notice the benefits that SDN is capable of conferring, and the ONF’s overlords will seek to cultivate and sustain an ecosystem that can deliver parallel hardware and software innovation. Google, for example, has indicated that while it develops its own networking hardware today, it would be amenable to buying OpenFlow switches from the vendor community. Those switches, likely to carry lower margins and prices than the gear sold by the major networking vendors, will probably come from ODMs using merchant silicon from Broadcom, Marvell, Fulcrum (Intel), and others.

Money’s in the Software

The major networking vendors are saying that the cleavage of the control and data planes is not a big deal, that it is neither necessary nor a critical requirement for innovation and network programmability. Perhaps there is some merit to their arguments, but there’s no question that the separation of the control and data planes is not in their business interests. If some of their assertions have merit, they also are self-serving.

Cisco, as we’ve discussed before, might be able to develop software, but its business model is predicated on the sale of routers and switches. It would have to remake itself comprehensively as a vendor of server-based controllers (software) and the applications that run on them. A proprietary hardware box, whether a server or switch, isn’t what the ONF wants.

If the ONF’s SDN vision prevails, the money is in software: server-based controllers, applications, management/orchestration frameworks, and so on. Successful vendors not only will have to be proficient at developing software; they’ll also have to be skilled at marketing and selling it. They’ll have to build their businesses around it.

This is the challenge the major networking vendors confront. It’s why they aren’t leading the SDN charge, and it also is why they are attempting to co-opt and subvert it.

Tidbits: Oracle-Arista Rumor, Controller Complexity, More Cisco SDN

This Week’s Rumor

Rumors that Oracle is considering an acquisition of Arista Networks have circulated this week. They’re likely nothing more than idle chatter. Arista has rejected takeover overtures previously, and it seems determined to go the IPO route.

Controller Complexity

Lori MacVittie provides consistently excellent blogging at F5 Networks’ DevCentral. In a post earlier this week, she examined the challenges and opportunities facing OpenFlow-based SDN controllers. Commenting on the code complexity of controllers, she writes the following:

This likely means unless there are some guarantees regarding the quality and thoroughness of testing (and thus reliability) of OpenFlow controllers, network operators are likely to put up a fight at the suggestion said controllers be put into the network. Which may mean that the actual use of OpenFlow will be limited to an ecosystem of partners offering “certified” (aka guaranteed by the vendor) controllers.

It’s a thought-provoking read, raising valid questions, especially in the context of enterprise customers.

Cisco SDN

Last week, Cisco and Morgan Stanley hosted a conference call on Cisco’s SDN strategy. (To the best of my knowledge, Morgan Stanley doesn’t have one — yet.)  Cisco was represented on the call by David Ward, VP and chief architect of the company’s Service Provider Division; and by Shashi Kiran, senior director of market management for Data Center/Virtualization and Enterprise Switching Group.

The presentation is available online. It doesn’t contain any startling revelations, and it functions partly as a teaser for forthcoming product announcements at CiscoLive in San Diego. Still, it’s worth a perusal for those of you seeking clues on where Cisco is going with its SDN plans. If you do check it out, you’ll notice that slide three features a number of headlines attesting to the industry buzz surrounding SDN. Two bloggers are cited in that slide: Greg Ferro (EtherealMind) and, yes, yours truly, who gets cited for a recent interpretation of Cisco’s SDN maneuverings.

Putting an ONF Conspiracy Theory to Rest

We know that the Open Networking Foundation (ONF) is controlled by the six major service providers that constitute its board of directors.

It is no secret that the ONF is built this way by design. The board members wanted to make sure that they got what they wanted from the ONF’s deliberations, and they felt that existing standards bodies, such as the IETF and IEEE, were gerrymandered and dominated by vendors with self-serving agendas.

The ONF was devised with a different purpose in mind — not to serve the interests of the vendors, but to further the interests of the service-provider community, especially the service providers who sit on the ONF’s board of directors. In their view, conventional networking was a drag on their innovation and business agility, obstructing progress elsewhere in their data centers and IT operations. Whereas compute and storage resources had been virtualized and orchestrated, networking remained a relatively costly and unwieldy fiefdom ruled by “masters of complexity” rummaging manually through an ever-expanding bag of ad-hoc protocols.

Organizing for Clout

Not getting what they desired from their networking vendors, the service providers decided to seize the initiative. Acting on its own, Google already had done just that, designing and deploying DIY networking gear.

The study of political elites tells us that an organized minority comprising powerful interests can impose its will on a disorganized majority.  In the past, as individual companies, the ONF board members had been unable to counter the agendas of the networking vendors. Together, they hoped to effect the change they desired.

So, we have the ONF, and it’s unlike the IETF and the IEEE in more ways than one. While not a standards body — the ONF describes itself as a “non-profit consortium dedicated to the transformation of networking through the development and standardization of a unique architecture called Software-Defined Networking (SDN)” — there’s no question that the ONF wants to ensure that it defines and delivers SDN according to its own rules. And at its own pace, too, not tied to the product-release schedules of networking vendors.

In certain respects, the ONF is all about a consortium of customers taking control and dictating what it wants from the vendor community, which, in this case, should be understood to comprise not only OEM networking vendors, but also ODMs, SDN startups, and purveyors of merchant silicon.

Vehicle of Insurrection?

Just to ensure that its leadership could not be subverted, though, the ONF stipulated that vendors would not be permitted to serve on its board of directors. That means that representatives of Cisco, Juniper, and HP Networking, for example, will never be able to serve on the ONF board.

At least within their self-determined jurisdiction, the ONF’s board members call all the shots. Or do they?

Commenting on my earlier post regarding Cisco’s SDN counterstrategy, a reader, who wished to remain anonymous (Anon4This1), wrote the following:

Regarding this point: “Ultimately, [Cisco] does not control the ONF.”

That was one of the key reasons for the creation of the ONF. That is, there was a sense that existing standards bodies were under the collective thumb of large vendors. ONF was created such that only the ONF board can vote on binding decisions, and no vendors are allowed on the board. Done, right? Ah, well, not so fast. The ONF also has a Technical Advisory Group (TAG). For most decisions, the board actually acts on the recommendations of the TAG. The TAG does not have the same membership restrictions that apply to the ONF board. Indeed, the current chairman of the TAG is none other than influential Cisco honcho, Dave Ward. So if the ONF board listens to the TAG, and the TAG listens to its chairman… Who has more control over the ONF than anyone? https://www.opennetworking.org/about/tag

Board’s Iron Grip

If you follow the link provided by my anonymous commenter, you will find an extensive overview of the ONF’s Technical Advisory Group (TAG). Could the TAG, as constituted, be the tail that wags the ONF dog?

My analysis leads me to a different conclusion. As I see it, the TAG serves at the pleasure of the ONF board of directors, individually and collectively. Nobody serves on the TAG without the express consent of the board of directors. Moreover, “TAG term appointments are annual and the chair position rotates quarterly.” Whereas Cisco’s Dave Ward serves as the current chair, his term will expire and somebody else will succeed him.

What about the suggestion that the “board actually acts on recommendations of the TAG,” as my commenter asserts? In many instances, that might be true, but the form and substance of the language on the TAG webpage articulates clearly that the TAG is, as its acronym denotes, an advisory body that reports to (and “responds to requests from”) the ONF board of directors. The TAG offers technical guidance and recommendations, but the board makes the ultimate decisions. If the board doesn’t like what it’s getting from TAG members, annual appointments presumably can be allowed to expire and new members can succeed those who leave.

Currently, two networking-gear OEMs are represented on the ONF’s TAG. Cisco is represented by the aforementioned David Ward, and HP is represented by Jean Tourrilhes, an HP Labs researcher in Networking and Communication who has worked with OpenFlow since 2008. These gentlemen seem to be on the TAG because those who run the ONF believe they can make meaningful contributions to the development of SDN.

No Coup

It’s instructive to note the company affiliations of the other six members serving on TAG. We find, for instance, Nicira CTO Martin Casado, as well as Verizon’s Dave McDysan, Google’s Amin Vahdat, Microsoft’s Albert Greenberg, Broadcom’s Puneet Agarwal, and Stanford’s Nick McKeown, who also is known as a Nicira co-founder and serves on that company’s board of directors.

If any company has pull, then, on the ONF’s TAG, it would seem to be Nicira Networks, not Cisco Systems. After all, Nicira has two of its corporate directors serving on the ONF’s TAG. Again, though, both gentlemen from Nicira are highly regarded SDN proponents who played critical roles in the advent and development of OpenFlow.

And that’s my point. If you look at who serves on the ONF’s TAG, you can clearly see why they’re in those roles and you can understand why the ONF board members would desire their contributions.

The TAG as a vehicle for an internal coup d’etat at the ONF? That’s one conspiracy theory that I’m definitely not buying.

SDN Controller Ecosystems Critical to Market Success

Software-defined networking (SDN) is a relatively new phenomenon. Consequently, analogies to preceding markets and technologies often are invoked by its proponents to communicate key concepts. One oft-cited analogy involves the server-based solution stack and the nascent SDN stack.

In this comparison, server hardware equates to networking hardware, with the CPU instruction set positioned as analogous to the OpenFlow instruction set. Above those layers, the server operating system is said to be analogous to the SDN controller, which effectively runs a “network operating system.” Above that layer, the analogy extends to similarities between server OS and network OS APIs and to the applications that run atop both stacks.

Analogies and Implications

Let’s consider the comparison of the server operating system to the SDN controller.  While the analogy is apt, it carries implications that prospective early adopters of SDN need to fully understand. As we’ve discussed before, SDN controllers based on OpenFlow today carry no guarantees of interoperability. An application that runs on one controller might not be available (or run) on another controller, just as an application developed for a Windows server might not be available on Linux (and vice versa).
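To make the portability concern concrete, consider what a trivial application looks like when written against one particular controller. The sketch below is modeled on the Python API of the open-source POX controller (discussed further down); it simply floods every packet that a switch sends up to the controller. Nothing in it would carry over to a Java-based controller such as Beacon or Floodlight, which expose entirely different interfaces.

    # A minimal sketch of a "flood everything" application written against
    # POX's Python API. An equivalent application for a Java-based
    # controller (Beacon, Floodlight) would share none of this code.
    from pox.core import core
    import pox.openflow.libopenflow_01 as of

    def _handle_PacketIn(event):
        # Re-emit the packet the switch sent up to the controller,
        # telling the switch to flood it out all ports.
        msg = of.ofp_packet_out()
        msg.data = event.ofp
        msg.actions.append(of.ofp_action_output(port=of.OFPP_FLOOD))
        event.connection.send(msg)

    def launch():
        # POX invokes launch() when the component is loaded.
        core.openflow.addListenerByName("PacketIn", _handle_PacketIn)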

Moreover, we don’t know how difficult it will be to port applications from one OpenFlow-based controller to another. It could be a trivial exercise or an agonizing one. There are many nagging questions, far fewer answers.

Keep in mind that this is an entirely different matter from the question of interoperability between OpenFlow-based controllers and switches. Presuming the OpenFlow standard is adhered to and implemented correctly in all cases, OpenFlow-based controllers on the market today should be able to communicate with OpenFlow-based switches.
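That wire-level interoperability can be illustrated with a short sketch. Open vSwitch, for example, can be pointed at whichever OpenFlow controller an operator chooses by way of its standard ovs-vsctl tool (wrapped in Python here for convenience); the bridge name and controller address below are placeholders.

    # A hedged sketch: attaching an Open vSwitch bridge to an arbitrary
    # OpenFlow controller. Any controller that implements the protocol
    # correctly should be able to manage the bridge from that point on.
    import subprocess

    def attach_to_controller(bridge, controller):
        subprocess.check_call(["ovs-vsctl", "set-controller", bridge, controller])

    # 6633 is the conventional OpenFlow listening port; the address is hypothetical.
    attach_to_controller("br0", "tcp:192.0.2.10:6633")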

But interoperability (or lack thereof) is an unwritten book at the layers where the SDN controller (an NOS akin to a server-based operating system) and the NOS APIs reside. This poses a potential problem for the market development of a viable SDN ecosystem, at least for the enterprise market. (It’s not as much of an issue for the gargantuan service providers that drive the agenda of the Open Networking Foundation; those companies have ample resources and will make their own internal standardization decisions relating to controllers and practically everything else that falls under the SDN rubric.)

Controller Derby

At SDNCentral, no fewer than seven open-source OpenFlow controllers are listed. Three of those controllers are Java based: Beacon, Floodlight, and Jaxon. The other open-source OpenFlow controllers listed at SDNCentral are FlowER, NodeFlow, POX, and Trema. Additionally, OpenFlow controllers have been developed by several companies, including NEC, Big Switch Networks (which offers a commercial version of the Floodlight controller), and Nicira Networks, which has built on the foundation of the Onix controller.

Interestingly, Google and Ericsson also have based their controllers on Onix. In a blog post last summer, Nicira CTO Martin Casado described Onix as a “general SDN controller” rather than an OpenFlow controller. Casado admitted that he was devising terminology on the fly, but he defined a “general SDN controller” as “one in which the control application is fully decoupled from both the underlying protocol(s) communicating with the switches, and the protocol(s) exchanging state between controller instances (assuming the controllers can run in a cluster).” So, OpenFlow could be part of the picture, but it doesn’t have to be there; another mechanism could substitute for it.

Casado conceded that Onix is the right controller for many environments, but not for others. Wrote Casado:

There have been multiple control applications built on Onix, and it is used in large production deployments in the data centers, as well as in the access and core networks. However, it is probably too heavyweight for smaller networks (the home or small enterprise), and it is certainly too complex to use as a basic research tool.

Horses for Courses

So, there are horses for courses, and there are controllers for applications. Early indications suggest that it will not be a one-size-fits-all world. Nonetheless, at the end of his blog post, Casado expressed the opinion that “standards should be kept away” from controller design, and that the market’s natural-selection process should be allowed to run its course.

Perhaps that is the right prescription. It seems too early for leaden-footed standards bodies, such as the IETF and IEEE, to intervene. Nevertheless, customers will have to be wary. They’ll have to do their research, perform due diligence, and thoroughly understand the strengths, weaknesses, and characteristics of candidate controllers. Without assured controller interoperability, customers that adopt and deploy applications on one controller might have considerable difficulty shifting their investment and their software elsewhere.

Of course, if Google and the other major service providers who rule the roost at the ONF want to expedite matters, they could publicly and aggressively endorse one or two controller platforms as de facto standards. But that’s unlikely, for a variety of reasons. Even if it were to happen, as Casado points out, any controller that proves favorable at large cloud service providers might not be the best choice for enterprises, especially smaller ones.

Opening for Networking’s Old Guard

At this point, it’s not clear how the SDN controller market will shake out. SDN controllers will struggle for sustenance not only against each other, but also against networking’s conventional distributed control planes already on the market. They will contend, too, with so-called hybrid approaches, whereby the data path is jointly controlled by conventional box-based control planes and by server-based controllers. The major networking vendors will articulate and promote those hybrid approaches, keen as they all are to retard “pure SDN’s” advance from the environs of the largest cloud service providers to those of enterprise buyers. (As mentioned previously, the ONF also perceives the hybrid-control approach as a transitional necessity for customers seeking to move from their networks as constituted today to future SDN architectures.)

In that regard, the big networking powers are fortunate that the ONF’s early mandate is focused primarily, if not exclusively, on the requirements of the large cloud-service providers that populate its board of directors. The ambiguity surrounding controllers and their interoperability (or lack thereof) represents another factor that will dissuade enterprise buyers from taking an early leap of faith into the arms of SDN purveyors.

The faster the SDN market sorts out a controller hierarchy — determining the suitability and market prevalence of certain controllers in specific application environments — the sooner valuable ecosystems will form and enterprises will take serious notice.

For now, though, a shakeout doesn’t appear imminent.

Distributed, Hybrid, Northbound: Key Words in Cisco’s SDN Counterstrategy

When it has broached the topic of software-defined networking (SDN) recently, Cisco has attempted to reframe the discussion within the larger context of programmable networks. In Cisco’s conception of the evolving networking universe, the programmable network encompasses SDN, which in turn envelops OpenFlow.

We know by now that OpenFlow is a relatively small part of SDN. OpenFlow is a protocol that provides for the physical separation of the control and data planes, which heretofore have been combined within a switch or router. As such, OpenFlow enables server-based software (a controller) to determine how packets should be forwarded by network elements. As has been mentioned before, here and elsewhere, mechanisms other than OpenFlow could be used for the same purpose.
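In concrete terms, “determining how packets should be forwarded” means the controller writes entries into a switch’s flow tables. Here is a minimal sketch of that interaction, again modeled on the Python API of the open-source POX controller; the connection object and port numbers are hypothetical.

    # A hedged sketch: installing a flow entry from a POX-style controller
    # so that traffic arriving on one port is forwarded out another.
    import pox.openflow.libopenflow_01 as of

    def install_rule(connection, in_port, out_port):
        fm = of.ofp_flow_mod()                  # an OpenFlow flow-table modification
        fm.match.in_port = in_port              # match packets arriving on in_port
        fm.actions.append(of.ofp_action_output(port=out_port))
        connection.send(fm)                     # program the switch's data plane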

Logical Outcome

SDN is bigger than OpenFlow. It deals not only with the abstraction of the data plane, but also with higher-layer abstractions, at the control plane and above. The whole idea behind SDN is to put the applications, and the services they deliver, in the driver’s seat, so that the network does not become a costly encumbrance that impedes business agility and operational efficiency. In that sense, Cisco is right to suggest that programmable networks are a logical outcome that can and should result from the rise of SDN.

That said, the devil can always be found in the details, and we should note that Cisco’s definition of SDN, to the extent that it might invoke that acronym rather than one of its own, is at variance with the definition that has been proffered by the Open Networking Foundation (ONF), which is controlled by the world’s largest cloud-service providers rather than by the world’s largest networking vendors. Cisco’s understanding of SDN looks a lot more like conventional networking, with a distributed or hybrid control plane instead of the logically centralized control plane favored by the ONF.

This post isn’t about value judgments, though. I am not here to bash Cisco, or anybody else for that matter, but to understand and interpret Cisco’s motivations as it formulates a counterstrategy to the ONF’s plans.

Bog-Standard Switches

Given the context, then, it’s easy to understand why Cisco favors the retention of the distributed — or, failing that, even a hybrid — control plane. Cisco is the market leader in switches and routers, and it owns a lot of valuable real estate on its customers’ networks.  If OpenFlow succeeds, not only in service-provider networks but also in the enterprise, Cisco is at risk of losing the market dominance it has worked so long and hard to build.

Frankly, there isn’t much differentiation to be achieved in bog-standard OpenFlow switches. If the Googles of the world get their way, the merchant silicon vendors all will support OpenFlow on their chipsets, and industry-standard boxes will be available from a number of ODMs and OEMs. It will be a prototypical buyer’s market, perhaps advancing quickly toward commoditization, and that’s not a prospect that Cisco shareholders and executives wish to entertain.

As Cisco comes to grips with SDN, then, it needs to rediscover the sort of leverage that it had before the advent of the ONF.  After all, if SDN is all about putting applications and other software literally in control of networks composed of industry-standard boxes, then network hardware will suffer a significant margin-squeezing demotion in the value hierarchy of customers.  And Cisco, as we’ve discussed before, develops more than its fair share of software, but remains a company wedded to a hardware-based business model.

Compromise and Accommodation 

Cisco would like to resist and undermine any potential market shift to the ONF’s server-based controllers. Fortunately for Cisco, many within the ONF are willing to acquiesce, at least initially and up to a point. A general consensus seems to have developed about the need for a hybrid control plane, which would accommodate both logically centralized controllers and distributed boxes. The ONF’s braintrust sees this move as a necessary compromise that will facilitate a long-term transition to a server-based model. It seems a logical and rational deduction — there’s a lot of networking gear installed out there that does not support the ONF’s conception of SDN — but it’s an opening for Cisco, nonetheless.

Beyond the issue of physical separation of the data plane and the control plane, Cisco has at least one other card to play. You might have noticed that Cisco representatives have talked a lot during the past couple of months about a “northbound interface” for SDN. As currently constituted, OpenFlow is a “southbound” interface, in that it serves as a mechanism for a controller to program a switch. On a network diagram, that communication flows downward (hence southbound).

In SDN, a northbound interface would go upward, extending from the switch to the control plane and potentially beyond to applications and management/orchestration software. This is a discussion Cisco wants to have with the industry, at the ONF and elsewhere. Whereas southbound interfaces are all about what is done to a switch by external software, the northbound interface is a conduit by which the switch confers value — in the form of information intrinsic to the network — to the higher layers of abstraction.

Northbound Traffic

For now, the ONF has chosen not to define standard protocols or APIs for northbound interfaces, which could run from the networking devices up to the control plane and to higher layers of abstraction. Cisco, as the vendor with the largest installed base of gear in customer networks, finds itself in a logical position to play a role in helping to define those northbound interfaces.
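Absent a standard, each controller exposes its own northbound interface, typically a REST API. As a rough illustration of the pattern, the sketch below shows an application pulling network state from a controller; the endpoints approximate Floodlight’s REST API of this era and should be treated as illustrative rather than authoritative.

    # A hedged sketch of northbound traffic: an application querying a
    # controller's REST API for network state. The URLs approximate
    # Floodlight's API and are illustrative only.
    import requests

    CONTROLLER = "http://controller.example.com:8080"

    switches = requests.get(CONTROLLER + "/wm/core/controller/switches/json").json()
    links = requests.get(CONTROLLER + "/wm/topology/links/json").json()

    # With state in hand, the application can make traffic-engineering or
    # placement decisions and push the results back down (southbound).
    print("switches:", len(switches), "links:", len(links))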

Ideally, if programmable networks and SDN fulfill their potential, we’ll see the development of a virtuous feedback loop at the highest layers of abstraction, with software programming an underlying virtualized network and the network sending back state and other data that dynamically allows applications to perform even better.

Therefore, the northbound interface will be an important element in the future of SDN. Cisco hopes to leverage it, but more for the sustenance of its own business model than for the furtherance of the ONF’s objectives. Cisco holds some interesting cards, but it should be careful not to overplay them. Ultimately, it does not control the ONF.

As the SDN discourse elevates beyond OpenFlow, watch the traffic in the northbound lanes.

Why Google Isn’t A Networking Vendor

Invariably trenchant and always worth reading, Ivan Pepelnjak today explores what he believes Google is doing with OpenFlow. As it turns out, Pepelnjak posits that Google is doing more with other technologies than it is with OpenFlow, seemingly building a modern routing platform and a traffic-engineering application deserving universal envy and admiration.

In assessing what Google is doing, Pepelnjak would seem to get it right, as he usually does, but I would like to offer modest commentary on a couple minor points. Let’s start with his assessment of how Google is using OpenFlow:

“Google is using OpenFlow between controller and adjacent chassis switches because (like every other vendor) they need a protocol between the control plane and forwarding planes, and they decided to use an already-documented one instead of inventing their own (the extra OpenFlow hype could also persuade hardware vendors to implement more OpenFlow capabilities in their next-generation chipsets).”

OpenFlow: Just A Piece of the Puzzle

First off, Pepelnjak is essentially right. I’m not going to quarrel with his central point, which is that Google adopted OpenFlow as a communication protocol between (and that separates) the control plane and the forwarding plane. That’s OpenFlow’s purpose, its raison d’être, so it’s not surprising that Google would use it that way. As Chris Rock might say, that’s what OpenFlow is supposed to do.

Larger claims made on behalf of OpenFlow are not its fault. Subsequently, Pepelnjak states that OpenFlow is but a small piece of the networking puzzle at Google, and he’s right there, too. I don’t think it’s possible for OpenFlow to be a bigger piece. As a protocol between the control and forwarding planes, OpenFlow is what it is.

Beyond that, though, Pepelnjak refers to Google as a “vendor,” which I find odd.

Not a Networking Vendor

In many ways, Google is a vendor. It’s a cloud vendor, it’s an advertising vendor, it’s a SaaS vendor, and so on. But, in this particular context, Pepelnjak seems to be classifying Google as a networking vendor. That would be an incorrect designation, and here’s why: vendors sell things; they vend. Google doesn’t sell the homegrown networking hardware and software that it implements internally. It’s doing it only for itself, not as a business proposition that would involve it proffering the technology to customers. As such, it should not be tossed into the same networking-vendor bucket as a Cisco, a Juniper, or an HP.

In fact, Google is going the roll-your-own route with its network infrastructure precisely because it couldn’t get what it wanted from networking vendors. In that respect, it is the anti-vendor. Google and the other gargantuan cloud-service providers who steer the Open Networking Foundation (ONF) promulgated software-defined networking (SDN) and espoused OpenFlow because they wanted network infrastructure to be different from the conventional approaches advanced by networking vendors and the traditional networking industry.

Whatever else one might think of the ONF, it’s difficult not to conclude that it represents an instance of customers (in this case, cloud-service providers) attempting to wrest strategic control from vendors to set a technological agenda. Google, a networking vendor? Only if one misunderstands the origins and purpose of ONF.

Creating a Market

Nonetheless, Google might have a hidden agenda here, and Pepelnjak touches on it when he writes parenthetically that “the extra OpenFlow hype could also persuade hardware vendors to implement more OpenFlow capabilities in their next-generation chipsets.”

Well, yes. Just because Google has chosen to roll its own and doesn’t like what the networking industry is selling today, it doesn’t necessarily mean that it has closed the door to buying from vendors in the future, presuming said vendors jump on the ONF bandwagon and start developing the sorts of products Google wants. Google doesn’t want to disclose the particulars of its network infrastructure, which it views as a source of competitive advantage and differentiation, but it is not averse to hyping OpenFlow in a bid to spur the supply side of the market to get with the SDN program.

Later in his post, Pepelnjak notes that Google used “standard protocols (BGP and IS-IS) like everyone else and their traffic engineering implementation (and probably the northbound API) is proprietary. How is that different (from the openness perspective) from networks built from Juniper’s or Cisco’s gear?”

Critical Distinction

Again, my point is that Google is not a vendor. It is a customer building network technologies for its own use. By the very nature of that implicit (non)transaction, the technologies in question will be proprietary. They’re not going anywhere other than Google’s data-center network. Google owns them, and it is in full control of defining them and releasing them on a schedule that suits Google’s internal objectives.

It’s rather different for vendors, who profit — if they’re doing it right — from the commercial sale of products and technologies to customers. There might be value in proprietary products and technologies in that context, but customers need to ensure that the proprietary value outweighs the proprietary risks, typically represented by vendor lock-in and upgrade cycles dictated by the vendor’s product-release schedule.

Google is not a vendor, and neither are the other companies driving the agenda of the ONF. I think it’s critical to make that distinction in the context of SDN and, to a lesser extent, OpenFlow.

What the Battle for “SDN” Reveals

As Mike Fratto notes in an excellent piece over at Network Computing, “software-defined networking” has become a semantic battleground, with the term getting pushed and pulled in various directions.

For good reason, Fratto was concerned that the proliferating definitions of software-defined networking (SDN) were in danger of shearing the term of all meaning. He compared what was happening to SDN to what happened previously to terms such as cloud computing, and he opined that once a term means anything, it means nothing.

Setting Record Straight

Not wanting to be a passive observer to such linguistic nihilism, Fratto decided to set the record straight. He rightly observes that software-defined networking (SDN), as we understand it today, derives its provenance from the Open Networking Foundation (ONF). As such, the ONF’s definition of SDN should be the one that holds sway.

Citing an ONF white paper, “Software-Defined Networking: The New Norm for Networks,” Fratto notes that, properly understood, SDN emphasizes three key features:

  • Separation of the control plane from the data plane
  • A centralized controller and view of the network
  • Programmability of the network by external applications

Why the Fuss?

I agree that the ONF’s definition is the one that should be authoritative and, well, definitive. What other vendors are doing in areas such as network virtualization and network programmability might be interesting — and perhaps even commendable and valuable to their customers — but unless what they are doing meets the ONF criteria, it should not qualify as SDN.  Furthermore, if what they’re doing doesn’t qualify as SDN, they should call it something else and explain its architectural principles and value clearly. An ambiguous, perhaps even disingenuous, linkage with SDN ought to be avoided.

What Fratto does not explore is why certain parties are attempting to muddy the SDN waters. In my experience, when vendors contest terminology, it suggests the linguistic real estate in question is uncommonly valuable, either strategically or monetarily. I posit that SDN is both.

Like “cloud” before it, everybody seemingly recognizes that SDN has struck a resounding chord. There’s hype attached to SDN, sure, but it also has genuine appeal and has generated considerable interest. As the composition of the ONF’s board of directors has suggested, and as the growing number of cloud service-provider deployments attest, SDN is not a passing fad. At least in the large cloud shops, it already has practical utility and business value.

The Value of Words

That value is likely to grow over time, and, while the enterprise will be a tough nut to crack for more than one reason, it’s certainly conceivable that SDN eventually will find favor among at least certain enterprise demographics. The timeline for that outcome is not imminent, and, as I’ve written previously, Cisco isn’t about to “do a Nortel” and hold a going-out-of-business sale. Nonetheless, the auguries suggest that the ONF’s SDN will be with us for a long time and represents a significant business threat to networking’s status quo.

In this context, language really is power. If entrenched interests — such as the status quo of networking — don’t like an emerging trend, one way they can attempt to derail it is by co-opting it or subverting it. After all, it’s only an emerging trend, not yet entrenched, so therefore its terminology is nascent, too. If, as a major vendor with industry clout, you can change the meaning of the terminology, or make it so ambiguous that practically anything qualifies for inclusion, you can reassert control and dilute the threat.

In the past, this gambit — change the meaning, change the game — has accrued a decent track record. It works to impede change and to give entrenched interests more time to plot effective countermeasures.

Different This Time

What’s different this time — and Fratto’s piece provides corroborating evidence — is the existence of the ONF, a strong, customer-driven consortium that is (in its own words) “dedicated to the transformation of networking through the development and standardization of a unique architecture called Software-Defined Networking (SDN), which brings direct software programmability to networks worldwide. The mission of the Foundation is to commercialize and promote SDN and the underlying technologies as a disruptive approach to networking that will change how virtually every company with a network operates.”

If the ONF hadn’t existed, if it hadn’t already established an incontrovertible definition of SDN, the old “change the meaning, change the game” play might have worked.

But, as Fratto’s piece illustrates, it probably won’t work now.

Time for HP to Show Its SDN Hand

Although HP has demonstrated mounting support for OpenFlow, it has yet to formulate what I would call a full-fledged strategy for software-defined networking (SDN). Yes, HP offers OpenFlow-capable switches, but there’s more to SDN than OpenFlow. Indeed, there’s definitely more to SDN than the packet-shunting hardware at the bottom of the value chain.

The word “software” is prominent in the SDN acronym for a reason, and HP hasn’t told us much about its plans in that area. I am not able to attend this week’s Interop in Las Vegas, but I am hoping HP takes the opportunity this week to disclose a meaningful SDN strategy.

HP could start by telling us what it plans to do on the controller front. Does its strategy involve taking a wait-and-see attitude, working with the likes of Big Switch Networks? Does HP have a controller of its own in the works? As an august publication once trumpeted in a long-ago advertising campaign, inquiring minds want to know.

Above the controller, how does HP see the ecosystem developing? Does it plan to provide applications, management, orchestration? I think we have a reasonably good idea where Cisco is going with its SDN strategy — though Cisco would rather talk about network programmability (more on which later) — but HP has yet to play its hand.

HP is in Las Vegas this week. It’s as good a time as any to put its SDN cards on the table.

At Dell, Networking’s Role Secondary but Integral

Dell made a networking announcement last week, and, for the most part, reaction was muted. That’s partly because Dell’s networking narrative is evolving and in transition, and partly because the announcements related to incremental, though notable, progress.

To be fair, Dell’s networking narrative is part of a larger story the company is telling in the data center. Networking is integral to that story, but it’s not the centerpiece and never will be. Dell is working from the blueprint of its Virtual Network Architecture (VNA), so its purchase and stewardship of Force10 is framed within a bigger picture that involves not just converged infrastructure, but also workload-driven orchestration of virtualized environments.

Integration and Assimilation

Some good news for Dell is that its integration and assimilation of Force10 Networks seems to have gone well and is now complete. Dell’s OpenManage Network Manager (OMNM) 5.0 offers a new look and support for the full line of Dell networking products, including the Force10 portfolio. What’s more, in the Dell Force10 MXL blade interconnect, a 40Gb Ethernet switch for the M1000e blade chassis, Dell delivers an apt metaphor as well as a blade-server switch.

In that sense, it’s helpful to recall that Dell’s acquisition of Force10 was motivated by a desire to integrate networking into an automated, orchestrated data center in which it already offered compute and storage. Dell concluded that it needed to own networking technology just as it owned server and storage technology. It further deduced that it needed a comprehensive networking portfolio, extending across SAN and LAN environments. Just as it moved previously to shake its dependence on storage partners, it would do likewise in networking.

Dell sees networking as an integral enabling technology, but not as an end in itself. Dell believes it can be more flexible than HP and IBM in certain enterprise demographics, and it believes it can outflank Cisco by being less “network centric” and more open to developments such as software-defined networking (SDN). Force10, which was thought to be between a rock and a hard place just before being acquired, understands and accepts its role in the Dell universe.

Fitting Into VNA

The key to understanding Dell’s data-center strategy is Virtual Network Architecture (VNA). The announcement of the new blade-server switch fits into that plan. Dell says VNA’s purpose is to virtualize, automate, and orchestrate network services so that they can adapt readily to application and business requirements. Core elements of VNA include the following:

  • High-performance switching systems for the campus and the data center
  • Virtualized Layer 4-7 services
  • Comprehensive automation & orchestration software
  • Open workload/hypervisor interfaces

So, what does it all mean? It means Dell is taking an approach that it believes will be differentiated and add considerable value in customers’ and prospective customers’ data centers. On the networking front, Dell believes it has espoused a strategy that encompasses and envelops the rise of SDN while also taking an accommodating approach to the networking gear already present in customer accounts.

Workload-Oriented Approach

In an article at The VAR Guy, Nathan Eddy quotes Dario Zamarian, VP and GM of Dell Networking, as follows:

“We are taking a workload-oriented approach — as in, ‘What does each require first?’ as opposed to starting with the network first [and] then trying to fit the application to it. In other words, networking is the enabler. The ultimate goal of VNA is to make networking as simple to set up, automate, operate, and manage as servers. VNA is doing for networking what VMware did for servers.”

Well, that’s the plan. In theory, in a slide show, all the pieces are there, but Dell has to execute and deliver on the vision. One can identify holes in the structure, places where Dell will need to buy, partner, or build to close the gaps. It’s clearly doing that, though, as the Force10 acquisition and others recently attest.

Taking Force10’s technology forward in alignment with its plans, Dell did more than announce a 40GbE-enabled blade-server switch. It also introduced fabric- and network-management tools to simplify operations in the data center and the campus, and it announced data-center enhancements (stacking technology, L2 multipathing, data-center bridging, automated workload mobility through auto-provisioning of VLANs) to Force10’s FTOS for its S4810 10/40G switching platform.

Encompassing SDN

On the SDN front, Dell announced interoperability with Big Switch Networks’ Open SDN architecture and its OpenFlow-based Floodlight controller. That interoperability will be showcased next week in joint demonstrations at Interop, with the application emphasis on cloud multi-tenancy.

Regardless of where Dell goes with SDN, and regardless of how quickly (or slowly) SDN makes encroachments into the enterprise, Dell’s VNA model accounts for it and much else besides. Dell believes it can win in workload and network orchestration, with its Advanced Infrastructure Manager (AIM) providing virtual-network programming interfaces and doubtless with some forthcoming orchestration technologies it has yet to introduce (or buy).

Dell’s VNA seems a viable plan. But can the company continue to execute on it? Dell would have more focus and resources to do so if it jettisoned its woebegone consumer business, but that divestiture doesn’t seem to be in the cards.

Cisco Not Going Anywhere, but Changes Coming to Networking

Initially, I intended not to comment on the Wired article on Nicira Networks. While it contained some interesting quotes and a few good observations, its tone and too much of its focus were misplaced. It was too breathless, trying too hard to make the story fit into a simplistic, sensationalized narrative of outsized personalities and the threatened “irrelevance” of Cisco Systems.

There was not enough focus on how Nicira’s approach to network virtualization and its particular conception of software-defined networking (SDN) might open new horizons and enable new possibilities for the humble network. On his blog, Plexxi’s William Koss, commenting not about the Wired article but about reaction to SDN from the industry in general, wrote the following:

In my view, SDN is not a tipping point.  SDN is not obsoleting anyone.  SDN is a starting point for a new network.  It is an opportunity to ask if I threw all the crap in my network in the trash and started over what would we build, how would we architect the network and how would it work?  Is there a better way?

Cisco Still There

I think that’s a healthy focus. As Koss writes, and I agree, Cisco isn’t going anywhere; the networking giant will be with us for some time, tending its considerable franchise and moving incrementally forward. It will react more than it “proacts” — yes, I apologize now for the Haigian neologism — but that’s the fate of any industry giant of a certain age, Apple excepted.

Might Cisco, more than a decade from now, be rendered irrelevant?  I, for one, don’t make predictions over such vast swathes of time. Looking that far ahead and attempting to forecast outcomes is a mug’s game. It is nothing but conjecture disguised as foresight, offered by somebody who wants to flash alleged powers of prognostication while knowing full well that nobody else will remember the prediction a month from now, much less years into the future.

As far out as we can see, Cisco will be there. So, we’ll leave ambiguous prophecies to the likes of Nostradamus, whom I believe forecast the deaths of OS/2, Token Ring, and desktop ATM.

Answers are Coming

Fortunately, I think we’re beginning to get answers as to where and how Nicira’s approaches to network virtualization and SDN can deliver value and open new possibilities. The company has been making news with customer testimonials that include background on how its technology has been deployed. (Interestingly, the company has issued just three press releases in 2012, and all of them deal with customer deployments of its Network Virtualization Platform (NVP).)

There’s a striking contrast between the moderation implicit in Nicira’s choice of press releases and the unchecked grandiosity of the Wired story. Then again, I understand that vendors have little control over what journalists (and bloggers) write about them.

That said, one particular quote in the Wired article provoked some thinking from this quarter. I had thought about the subject previously, but the following excerpt provided some extra grist for my wood-burning mental mill:

In virtualizing the network, Nicira lets you make such changes in software, without touching the underlying hardware gear. “What Nicira has done is take the intelligence that sits inside switches and routers and moved that up into software so that the switches don’t need to know much,” says John Engates, the chief technology officer of Rackspace, which has been working with Nicira since 2009 and is now using the Nicira platform to help drive a new beta version of its cloud service. “They’ve put the power in the hands of the cloud architect rather than the network architect.”

Who Controls the Network?

It’s the last sentence that really signifies a major break with how things have been done until now, and this is where the physical separation of the control plane from the switch has potentially major implications.  As Scott Shenker has noted, network architects and network professionals have made their bones by serving as “masters of complexity,” using relatively arcane knowledge of proprietary and industry-standard protocols to keep networks functioning amid increasing demands of virtualized compute and storage infrastructure.

SDN promises an easier way, one that potentially offers a faster, simpler, less costly approach to network operations. It also offers the creative possibility of unleashing new applications and new ways of optimizing data-center resources. In sum, it can amount to a compelling business case, though not everywhere, at least not yet.

Where it does make sense, however, cloud architects and the devops crowd will gain primacy and control over the network. This trend is reflected already in the press releases from Nicira. Notice that customer quotes from Nicira do not come from network architects, network engineers, or anybody associated with conventional approaches to running a network. Instead, we see encomiums to NVP offered by cloud architects, cloud-architecture executives, and VPs of software development.

Similarly, and not surprisingly, Nicira typically doesn’t sell NVP to the traditional networking professional. It sells to the same “cloudy” types to whom quotes are attributed in its press releases. It’s true, too, that Nicira’s SDN business case and value proposition play better at cloud service providers than at enterprises.

Potentially a Big Deal

This is an area where I think the advent of the programmable server-based controller is a big deal. It changes the customer power dynamic, putting the cloud architects and the programmers in the driver’s seat, effectively placing the network under their control. (Jason Edelman has begun thinking about what the rise of SDN means for the network engineer.) In this model, the network eventually gets subsumed under the broader rubric of computing and becomes just another flexible piece of cloud infrastructure.

Nicira can take this approach because it has nothing to lose and everything to gain. Of course, the same holds true of other startup vendors espousing SDN.

Perhaps that’s why Koss closed his latest post by writing that “the architects, the revolutionaries, the entrepreneurs, the leaders of the next twenty years of networking are not working at the incumbents.”  The word “revolutionaries” seems too strong, and the incumbents will argue that Koss, a VP at startup Plexxi, isn’t an unbiased party.

They’re right, but that doesn’t mean he’s wrong.

Nicira Focuses on Value of NVP Deployments, Avoids Fetishization of OpenFlow

The continuing evolution of Nicira Networks has been intriguing to watch. At one point, not so long ago, many speculated on what Nicira, then still in a teasing stealth mode, might be developing behind the scenes. We now know that it was building its Network Virtualization Platform (NVP), and we’re beginning to learn about how the company’s early customers are deploying it.

Back in Nicira’s pre-launch days, the line between OpenFlow and software-defined networking (SDN) was blurrier than it is today. From the outset, though, Nicira was among the vendors that sought to provide clarity on OpenFlow’s role in the SDN hierarchy. At the time — partly because the company was communicating in stealthy coyness — it didn’t always feel like clarity, but the message was there, nonetheless.

Not the Real Story

For instance, when Alan Cohen first joined Nicira last fall to assume the role of VP marketing, he wrote the following on his personal blog:

Virtualization and the cloud is the most profound change in information technology since client-server and the web overtook mainframes and mini computers.  We believe the full promise of virtualization and the cloud can only be fully realized when the network enables rather than hinders this movement.  That is why it needs to be virtualized.

Oh, by the way, OpenFlow is a really small part of the story.  If people think the big shift in networking is simply about OpenFlow, well, they don’t get it.

A few months before Cohen joined the company, Nicira’s CTO Martin Casado had played down OpenFlow’s role in the company’s conception of SDN. We understand now where Nicira was going, but at the time, when OpenFlow and SDN were invariably conjoined and seemingly inseparable in industry discourse, it might not have seemed as obvious.

Don’t Get Hung Up

That said, a compelling early statement on OpenFlow’s relatively modest role in SDN was delivered in a presentation by Scott Shenker, Nicira’s co-founder and chief scientist (as well as a professor of electrical engineering in the University of California at Berkeley’s Computer Science Department). I’ve written previously about Shenker’s presentation, “The Future of Networking, and the Past of Protocols,” but here I would just like to quote his comments on OpenFlow:

“OpenFlow is one possible solution (as a configuration mechanism); it’s clearly not the right solution. I mean, it’s a very good solution for now, but there’s nothing that says this is fundamentally the right answer. Think of OpenFlow as x86 instruction set. Is the x86 instruction set correct? Is it the right answer? No, It’s good enough for what we use it for. So why bother changing it? That’s what OpenFlow is. It’s the instruction set we happen to use, but let’s not get hung up on it.”

I still think too many industry types are “hung up” on OpenFlow, and perhaps not focused enough on the controller and above, where the applications will overwhelmingly define the value that SDN delivers.

As an open protocol that facilitates physical separation of the control and data-forwarding planes, OpenFlow has a role to play in SDN. Nonetheless, other mechanisms and protocols can play that role, too, and what really counts can be found at higher altitudes of the SDN value chain.

Minor Roles

In Nicira’s recently announced customer deployments, OpenFlow has played relatively minor supporting roles. Last week, for instance, Nicira announced at the OpenStack Design Summit & Conference that its Network Virtualization Platform (NVP) has been deployed at Rackspace in conjunction with OpenStack’s Quantum networking project. The goal at Rackspace was to automate network services independent of data-center network hardware in a bid to improve operational simplicity and to reduce the cost of managing large, multi-tenant clouds.

According to Brad McConnell, principal architect at Rackspace, Quantum, Open vSwitch, and OpenFlow all were ingredients in the deployment. Quantum was used as the standardized API to describe network connectivity, and OpenFlow served as the underlying protocol that configured and managed Open vSwitch within hypervisors.
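For a sense of what “standardized API” means here, the sketch below shows the rough shape of a Quantum v2.0 REST call describing a tenant network. The endpoint, token, and payload are illustrative, not a record of the Rackspace deployment; whichever plugin sits behind the API (Nicira’s NVP, in this case) realizes the request on the actual infrastructure.

    # A hedged sketch of the Quantum v2.0 API pattern: describing network
    # connectivity through a standard REST call, independent of the plugin
    # (e.g., Nicira NVP) that implements it underneath. Details are illustrative.
    import json
    import requests

    QUANTUM = "http://quantum.example.com:9696/v2.0"
    HEADERS = {"X-Auth-Token": "TOKEN", "Content-Type": "application/json"}

    body = {"network": {"name": "tenant-net-1", "admin_state_up": True}}
    resp = requests.post(QUANTUM + "/networks", headers=HEADERS, data=json.dumps(body))
    print(resp.json())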

A week earlier, Nicira announced that cloud-service provider DreamHost would deploy its NVP to reduce costs and accelerate service delivery in its OpenStack datacenter. In the press release, the following quote is attributed to Carl Perry, DreamHost’s cloud architect:

“Nicira’s NVP software enables truly massive leaps in automation and efficiency.  NVP decouples network services from hardware, providing unique flexibility for both DreamHost and our customers.  By sidestepping the old network paradigm, DreamHost can rapidly build powerful features for our cloud.  Network virtualization is a critical component necessary for architecting the next-generation public cloud services.  Nicira’s plug-in technology, coupled with the open source Ceph and OpenStack software, is a technically sound recipe for offering our customers real infrastructure-as-a-service.”

Well-Placed Focus

You will notice that OpenFlow is not mentioned by Nicira in the press releases detailing NVP deployments at DreamHost and Rackspace. While OpenFlow is present at both deployments, Nicira correctly describes its role as a lesser detail on a bigger canvas.

At DreamHost, for example, NVP uses OpenFlow for communication between the controller and Open vSwitch, but Nicira has acknowledged that other protocols, including SNMP, could have performed a similar function.

Reflecting on these deployments, I am reminded of Casado’s earlier statement: “OpenFlow is about as exciting as USB.”

For a long time now, Nicira has eschewed the fetishization of OpenFlow. Instead, it has focused on the bigger-picture value propositions associated with network virtualization and programmable networks. If it continues to do so, it likely will draw more customers to NVP.

Debating SDN, OpenFlow, and Cisco as a Software Company

Greg Ferro writes exceptionally well, is technologically knowledgeable, provides incisive commentary, and invariably makes cogent arguments over at EtherealMind.  Having met him, I can also report that he’s a great guy. So, it is with some surprise that I find myself responding critically to his latest blog post on OpenFlow and SDN.

Let’s start with that particular conjunction of terms. Despite occasional suggestions to the contrary, SDN and OpenFlow are not inseparable or interchangeable. OpenFlow is a protocol, a mechanism that allows a server, known in SDN parlance as a controller, to interact with and program flow tables (for packet forwarding) on switches. It facilitates the separation of the control plane from the data plane in some SDN networks.

But OpenFlow is not SDN, which can be achieved with or without OpenFlow.  In fact, Nicira Networks recently announced two SDN customer deployments of its Network Virtualization Platform (NVP) — one at DreamHost and one at Rackspace — and you won’t find mention of OpenFlow in either press release, though OpenStack and its Quantum networking project receive prominent billing. (I’ll be writing more about the Nicira deployments soon.)

A Protocol in the Big Picture 

My point is not to diminish or disparage OpenFlow, which I think can and will be used gainfully in a number of SDN deployments. My point is that we have to be clear that the bigger picture of SDN is not interchangeable with the lower-level functionality of OpenFlow.

In that respect, Ferro is absolutely correct when he says that software-defined networking, and specifically SDN controller and application software, is “where the money is.” He conflates SDN with OpenFlow — which may or may not be involved, as we already have established — but his larger point is valid.  SDN, at the controller and above, is where all the big changes to the networking model, and to the industry itself, will occur.

Ferro also likely is correct in his assertion that OpenFlow, in and of itself, will not enable “a choice of using low cost network equipment instead of the expensive networking equipment that we use today.” In the near term, at least, I don’t see major prospects for change on that front as long as backward compatibility, interoperability with a bulging bag of networking protocols, and the agendas of the networking old guard are at play.

Cisco as Software Company

However, I think Ferro is wrong when he says that the market-leading vendors in switching and routing, including Cisco and Juniper, are software companies. Before you jump down my throat, presuming that’s what you intend to do, allow me to explain.

As Ferro says, Cisco and Juniper, among others, have placed increasing emphasis on the software features and functionality of their products. I have no objection there. But Ferro pushes his argument too far and suggests that the “networking business today is mostly a software business.”  It’s definitely heading in that direction, but Cisco, for one, isn’t there yet and probably won’t be for some time.  The key word, by the way, is “business.”

Cisco is developing more software these days, and it is placing more emphasis on software features and functionality, but what it overwhelmingly markets and sells to its customers are switches, routers, and other hardware appliances. Yes, those devices contain software, but Cisco sells them as hardware boxes, with box-oriented pricing and box-oriented channel programs, just as it has always done. Nitpickers will note that Cisco also has collaboration and video software, which it actually sells like software, but that remains an exception to the rule.

Talks Like a Hardware Company, Walks Like a Hardware Company

For the most part, in its interactions with its customers and the marketplace in general, Cisco still thinks and acts like a hardware vendor, software proliferation notwithstanding. It might have more software than ever in its products, but Cisco is in the hardware business.

In that respect, Cisco faces the same fundamental challenge that server vendors such as HP, Dell, and — yes — Cisco confront as they address a market that will be radically transformed by the rise of cloud services and ODM-hardware-buying cloud service providers. Can it think, figuratively and literally, outside the box? Just because Cisco develops more software than it did before doesn’t mean the answer is yes, nor does it signify that Cisco has transformed itself into a software vendor.

Let’s look, for example, at Cisco’s approach to SDN. Does anybody really believe that Cisco, with its ongoing attachment to ASIC-based hardware differentiation, will move toward a software-based delivery model that places the primary value on server-based controller software rather than on switches and routers? It’s just not going to happen, because it’s not what Cisco does or how it operates.

Missing the Signs 

And that brings us to my next objection.  In arguing that Cisco and others have followed the market and provided the software their customers want, Ferro writes the following:

“Billion dollar companies don’t usually miss the obvious and have moved to enhance their software to provide customer value.”

Where to begin? Well, billion-dollar companies frequently have missed the obvious and gotten it horribly wrong, often even as at least some individuals within those companies knew that their employer was veering badly off course.  That’s partly because past and present successes can sow the seeds of future failure. As Clayton M. Christensen argued in his classic book The Innovator’s Dilemma, industry leaders can have their vision blinkered by past successes, which prevent them from detecting disruptive innovations. In other cases, former market leaders grow complacent or fail to acknowledge the seriousness of a competitive threat until it is too late.

The list of billion-dollar technology companies that have missed the obvious and failed spectacularly, sometimes disappearing into oblivion, is too long to enumerate here, but some names spring readily to mind. Right at the top (or bottom) of our list of industry ignominy, we find Nortel Networks. Once a company valued at nearly $400 billion, Nortel exists today only in pieces masticated and digested by other companies.

Is Cisco’s Decline Inevitable?

Today, we see a similarly disconcerting situation unfolding at Research In Motion (RIM), where many within the company saw the threat posed by Apple and by the emerging BYOD phenomenon but failed to do anything about it. Going further back into the annals of computing history, we can adduce examples such as Novell and Digital Equipment Corporation, the latter just one of a raft of minicomputer vendors that perished from the planet after the rise of the PC and client-server computing. Some employees within those companies might even have foreseen their firms’ dark fates, but the organizations in which they toiled were unable to rescue themselves.

They were all huge successes, billion-dollar companies, but, in the face of radical shifts in industry and market dynamics, they couldn’t change who and what they were.  The industry graveyard is full of the carcasses of companies that were once enormously successful.

Am I saying this is what will happen to Cisco in an era of software-defined networking? No, I’m not prepared to make that bet. Cisco should be able to adapt and adjust better than the aforementioned companies were able to do, but it’s not a given. Just because Cisco is dominant in the networking industry today doesn’t mean that it will be dominant forever. As the old investment disclaimer goes, past performance does not guarantee future results. What’s more, Cisco has shown a fallibility of late that was not nearly as apparent in its boom years more than a decade ago.

Early Days, Promising Future

Finally, I’m not sure that Ferro is correct when he says that the Open Networking Foundation’s (ONF) board members and the biggest service providers, including Google, will achieve CapEx but not OpEx savings with SDN. We really don’t know whether these companies are deriving OpEx savings, because they’re keeping what they do with their operations and infrastructure highly confidential. Suffice it to say, they see compelling reasons to move away from buying their networking gear from the industry’s leading vendors, and they see similarly compelling reasons to embrace SDN.

Ferro ends his piece with two statements, the first of which I agree with wholeheartedly:

“That is the future of Software Defined Networking – better, dynamic, flexible and business focussed networking. But probably not much cheaper in the long run.”

As for that last statement, I believe there is insufficient evidence on which to render a verdict. As we’ve noted before, these are early days for SDN.