Monthly Archives: April 2012

Cisco Not Going Anywhere, but Changes Coming to Networking

Initially, I intended not to comment on the Wired article on Nicira Networks. While it contained some interesting quotes and a few good observations, its tone and too much of its focus were misplaced. It was too breathless, trying too hard to make the story fit into a simplistic, sensationalized narrative of outsized personalities and the threatened “irrelevance” of Cisco Systems.

There was not enough focus on how Nicira’s approach to network virtualization and its particular conception of software defined networking (SDN) might open new horizons and enable new possibilities for the humble network. On his blog, Plexxi’s William Koss, commenting not about the Wired article but about reaction to SDN from the industry in general, wrote the following:

In my view, SDN is not a tipping point.  SDN is not obsoleting anyone.  SDN is a starting point for a new network.  It is an opportunity to ask if I threw all the crap in my network in the trash and started over what would we build, how would we architect the network and how would it work?  Is there a better way?

Cisco Still There

I think that’s a healthy focus. As Koss writes, and I agree, Cisco isn’t going anywhere; the networking giant will be with us for some time, tending its considerable franchise and moving incrementally forward. It will react more than it “proacts” — yes, I apologize now for the Haigian neologism — but that’s the fate of any industry giant of a certain age, Apple excepted.

Might Cisco, more than a decade from now, be rendered irrelevant?  I, for one, don’t make predictions over such vast swathes of time. Looking that far ahead and attempting to forecast outcomes is a mug’s game. It is nothing but conjecture disguised as foresight, offered by somebody who wants to flash alleged powers of prognostication while knowing full well that nobody else will remember the prediction a month from now, much less years into the future.

As far out as we can see, Cisco will be there. So, we’ll leave ambiguous prophecies to the likes of Nostradamus, who, I believe, forecast the deaths of OS/2, Token Ring, and desktop ATM.

Answers are Coming

Fortunately, I think we’re beginning to get answers as to where and how Nicira’s approaches to network virtualization and SDN can deliver value and open new possibilities. The company has been making news with customer testimonials that include background on how its technology has been deployed. (Interestingly, the company has issued just three press releases in 2012, and all of them deal with customer deployments of its Network Virtualization Platform (NVP).)

There’s a striking contrast between the moderation implicit in Nicira’s choice of press releases and the unchecked grandiosity of the Wired story. Then again, I understand that vendors have little control over what journalists (and bloggers) write about them.

That said, one particular quote in the Wired article provoked some thinking from this quarter. I had thought about the subject previously, but the following excerpt provided some extra grist for my wood-burning mental mill:

In virtualizing the network, Nicira lets you make such changes in software, without touching the underlying hardware gear. “What Nicira has done is take the intelligence that sits inside switches and routers and moved that up into software so that the switches don’t need to know much,” says John Engates, the chief technology officer of Rackspace, which has been working with Nicira since 2009 and is now using the Nicira platform to help drive a new beta version of its cloud service. “They’ve put the power in the hands of the cloud architect rather than the network architect.”

Who Controls the Network?

It’s the last sentence that really signifies a major break with how things have been done until now, and this is where the physical separation of the control plane from the switch has potentially major implications.  As Scott Shenker has noted, network architects and network professionals have made their bones by serving as “masters of complexity,” using relatively arcane knowledge of proprietary and industry-standard protocols to keep networks functioning amid increasing demands of virtualized compute and storage infrastructure.

SDN promises an easier way, one that potentially offers a faster, simpler, less costly approach to network operations. It also offers the creative possibility of unleashing new applications and new ways of optimizing data-center resources. In sum, it can amount to a compelling business case, though not everywhere, at least not yet.

Where it does make sense, however, cloud architects and the devops crowd will gain primacy and control over the network. This trend is already reflected in Nicira’s press releases. Notice that the customer quotes do not come from network architects, network engineers, or anybody associated with conventional approaches to running a network. Instead, we see encomiums to NVP offered by cloud architects, cloud-architecture executives, and VPs of software development.

Similarly, and not surprisingly, Nicira typically doesn’t sell NVP to the traditional networking professional. It goes to the same “cloudy” types to whom quotes are attributed in its press releases. It’s true, too, that Nicira’s SDN business case and value proposition play better at cloud service providers than at enterprises.

Potentially a Big Deal

This is an area where I think the advent of  the programmable server-based controller is a big deal. It changes the customer power dynamic, putting the cloud architects and the programmers in the driver’s seat, effectively placing the network under their control. (Jason Edelman has begun thinking about what the rise of SDN means for the network engineer.) In this model, the network eventually gets subsumed under the broader rubric of computing and becomes just another flexible piece of cloud infrastructure.

Nicira can take this approach because it has nothing to lose and everything to gain. Of course, the same holds true of other startup vendors espousing SDN.

Perhaps that’s why Koss closed his latest post by writing that “the architects, the revolutionaries, the entrepreneurs, the leaders of the next twenty years of networking are not working at the incumbents.”  The word “revolutionaries” seems too strong, and the incumbents will argue that Koss, a VP at startup Plexxi, isn’t an unbiased party.

They’re right, but that doesn’t mean he’s wrong.

As Insieme Emerges, Former Cisco Spin-In Exec Leaves

As Cisco spin-venture Insieme Networks emerges from the shadows, reports indicate that a co-founder of earlier Cisco spin-in property Nuova Systems has left the mothership.

Soni Jiandani, a longtime Cisco executive who joined the company in 1994 and last served as SVP for Cisco’s Server Access and Virtualization Technology Group, reportedly left the company in March, according to sources familiar with the situation.

Previous Player in Spin-In Crews

Her departure seems to have occurred just as news about Insieme began to circulate widely in business reports and the trade press. In the Nuova spin-venture, Jiandani teamed with Mario Mazzola, Luca Cafiero, and Prem Jain, all of whom now are ringmasters at Insieme.

Earlier in her Cisco career, Jiandani served as VP and GM of Cisco’s LAN and SAN switching business unit, part of the Data Center, Switching and Wireless Technology Group. Before that, she was involved with earlier Cisco spin-in venture Andiamo Systems, where her partners in invention again were Mazzola, Cafiero, and Jain, among others.

Notwithstanding her previous involvement with SAN-switching spin-in Andiamo and data-center switching spin-in Nuova, Jiandani was not reported to be involved with Insieme, which will deliver at least part of Cisco’s answer to the incipient threat posed by software defined networking (SDN).

Postscript: On Twitter, Juniper Networks’ Michael C. Leonard just stated that Jiandani is at Insieme. No doubt confirmation (or not) will be forthcoming, here or elsewhere.

Nicira Focuses on Value of NVP Deployments, Avoids Fetishization of OpenFlow

The continuing evolution of Nicira Networks has been intriguing to watch. At one point, not so long ago, many speculated on what Nicira, then still in a teasing stealth mode, might be developing behind the scenes. We now know that it was building its Network Virtualization Platform (NVP), and we’re beginning to learn about how the company’s early customers are deploying it.

Back in Nicira’s pre-launch days, the line between OpenFlow and software defined networking (SDN) was blurrier than it is today.  From the outset, though, Nicira was among the vendors that sought to provide clarity on OpenFlow’s role in the SDN hierarchy.  At the time — partly because the company was communicating in stealthy coyness  — it didn’t always feel like clarity, but the message was there, nonetheless.

Not the Real Story

For instance, when Alan Cohen first joined Nicira last fall to assume the role of VP marketing, he wrote the following on his personal blog:

Virtualization and the cloud is the most profound change in information technology since client-server and the web overtook mainframes and mini computers.  We believe the full promise of virtualization and the cloud can only be fully realized when the network enables rather than hinders this movement.  That is why it needs to be virtualized.

Oh, by the way, OpenFlow is a really small part of the story.  If people think the big shift in networking is simply about OpenFlow, well, they don’t get it.

A few months before Cohen joined the company, Nicira’s CTO Martin Casado had played down OpenFlow’s role  in the company’s conception of SDN. We understand now where Nicira was going, but at the time, when OpenFlow and SDN were invariably conjoined and seemingly inseparable in industry discourse, it might not have seemed as obvious.

Don’t Get Hung Up

That said, a compelling early statement on OpenFlow’s relatively modest role in SDN was delivered in a presentation by Scott Shenker, Nicira’s co-founder and chief scientist (as well as a professor in the Electrical Engineering and Computer Sciences department at the University of California, Berkeley). I’ve written previously about Shenker’s presentation, “The Future of Networking, and the Past of Protocols,” but here I would just like to quote his comments on OpenFlow:

“OpenFlow is one possible solution (as a configuration mechanism); it’s clearly not the right solution. I mean, it’s a very good solution for now, but there’s nothing that says this is fundamentally the right answer. Think of OpenFlow as the x86 instruction set. Is the x86 instruction set correct? Is it the right answer? No, it’s good enough for what we use it for. So why bother changing it? That’s what OpenFlow is. It’s the instruction set we happen to use, but let’s not get hung up on it.”

I still think too many industry types are “hung up” on OpenFlow, and perhaps not focused enough on the controller and above, where the applications will overwhelmingly define the value that SDN delivers.

As an open protocol that facilitates physical separation of the control and data-forwarding planes, OpenFlow has a role to play in SDN. Nonetheless, other mechanisms and protocols can play that role, too, and what really counts can be found at higher altitudes of the SDN value chain.
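To make that division of labor concrete, here is a toy sketch of the pattern: a "dumb" switch keeps only a flow table, while all the forwarding intelligence lives in a separate, server-based controller that answers table misses. This is a deliberately simplified model, not OpenFlow itself, and every name in it is illustrative.

```python
# Toy model of a decoupled control plane and data plane -- a simplified
# sketch, not the OpenFlow protocol itself. All names are illustrative.

class Controller:
    """Server-based control plane: decides where traffic should go."""
    def __init__(self, routes):
        self.routes = routes  # policy: destination address -> output port

    def handle_miss(self, switch, dst):
        # Compute a forwarding decision and push it down as a flow entry,
        # much as an OpenFlow controller responds to a table miss.
        port = self.routes.get(dst, "drop")
        switch.install_flow(dst, port)
        return port

class Switch:
    """Data plane: matches packets against flows pushed from above."""
    def __init__(self, controller):
        self.controller = controller
        self.flow_table = {}

    def install_flow(self, dst, port):
        self.flow_table[dst] = port

    def forward(self, dst):
        if dst in self.flow_table:  # fast path: local flow-table match
            return self.flow_table[dst]
        # Slow path: ask the controller, which installs a flow for next time.
        return self.controller.handle_miss(self, dst)

ctl = Controller(routes={"10.0.0.2": "port1", "10.0.0.3": "port2"})
sw = Switch(ctl)
print(sw.forward("10.0.0.2"))  # miss -> controller decides -> port1
print(sw.forward("10.0.0.2"))  # now answered from the local flow table
```

The point of the sketch is that the channel between the two halves (here, a method call; in practice, OpenFlow or some other protocol) is the least interesting part; the value lives in what the controller decides.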

Minor Roles

In Nicira’s recently announced customer deployments, OpenFlow has played relatively minor supporting roles. Last week, for instance, Nicira announced at the OpenStack Design Summit & Conference that its Network Virtualization Platform (NVP) has been deployed at Rackspace in conjunction with OpenStack’s Quantum networking project. The goal at Rackspace was to automate network services independent of data-center network hardware in a bid to improve operational simplicity and to reduce the cost of managing large, multi-tenant clouds.

According to Brad McConnell, principal architect at Rackspace, Quantum, Open vSwitch, and OpenFlow all were ingredients in the deployment. Quantum was used as the standardized API to describe network connectivity, and OpenFlow served as the underlying protocol that configured and managed Open vSwitch within hypervisors.

A week earlier, Nicira announced that cloud-service provider DreamHost would deploy its NVP to reduce costs and accelerate service delivery in its OpenStack datacenter. In the press release, the following quote is attributed to Carl Perry, DreamHost’s cloud architect:

“Nicira’s NVP software enables truly massive leaps in automation and efficiency.  NVP decouples network services from hardware, providing unique flexibility for both DreamHost and our customers.  By sidestepping the old network paradigm, DreamHost can rapidly build powerful features for our cloud.  Network virtualization is a critical component necessary for architecting the next-generation public cloud services.  Nicira’s plug-in technology, coupled with the open source Ceph and OpenStack software, is a technically sound recipe for offering our customers real infrastructure-as-a-service.”

Well-Placed Focus

You will notice that OpenFlow is not mentioned by Nicira in the press releases detailing NVP deployments at DreamHost and Rackspace. While OpenFlow is present at both deployments, Nicira correctly describes its role as a lesser detail on a bigger canvas.

At DreamHost, for example, NVP uses  OpenFlow for communication between the controller and Open vSwitch, but Nicira has acknowledged that other protocols, including SNMP, could have performed a similar function.

Reflecting on these deployments, I am reminded of Casado’s earlier statement: “OpenFlow is about as exciting as USB.”

For a long time now, Nicira has eschewed the fetishization of OpenFlow. Instead, it has focused on the bigger-picture value propositions associated with network virtualization and programmable networks. If it continues to do so, it likely will draw more customers to NVP.

Cisco’s SDN Strategy: Meet the New Boss, Same as the Old Boss

Like Om Malik, I received and read the memo that Cisco distributed internally regarding the company’s plans for spin-in Insieme and software-defined networking (SDN). Om has published the memo in its entirety, so there’s no need for me to do the same here.

As for Insieme, the memo informs us that Cisco has made an investment of $100 million in the “early-stage company focused on research and development in the datacenter market.” It also notes that Insieme was founded by Mario Mazzola, Luca Cafiero, and Prem Jain in February 2012, and that “Cisco has the right to purchase the remaining interests of Insieme, with a potential payout range of up to $750 million that will be based primarily on the sales and profitability of Insieme products through Cisco.”

Cisco emphasizes that Insieme’s product-development efforts are “complementary” to its current and planned internal efforts, and it notes that further details regarding Insieme will be disclosed in “Cisco’s upcoming 10-Q filing in May.”

Mystery No More

But we don’t have to wait until then to discern how Cisco will position itself in relation to SDN and programmable networks. If we were in need of additional clues as to how Cisco will play its hand, the memo contains more than enough information from which to deduce the company’s strategy.

As far as Cisco is concerned, there isn’t actually anything new to see in SDN. This is where the marketing battle over words and meanings will ensue, because Cisco’s definition of SDN will bear an uncanny resemblance to what it already does today.

In the memo, Padmasree Warrior, Cisco CTO and co-leader of engineering, makes the following statement: “Cisco believes SDN is part of our vision of the intelligent network that is more open, programmable, and application aware—a vision in which the network is transformed into a more effective business enabler.”

Cisco’s SDN

It’s an ambiguous and innocuous opening salvo, and it could mean almost anything. As the memo proceeds, however, Cisco increasingly qualifies what it means by the term SDN.  It also tells us how Insieme fits into the picture.

Here’s what I see as the memo’s money shot:

“Because SDN is still in its embryonic stage, a consensus has yet to be reached on its exact definition. Some equate SDN with OpenFlow or decoupling of control and data planes. Cisco’s view transcends this definition.”

If you want the gist of the memo in a nutshell, it’s all there. Cisco will (and does) contend that the “decoupling of control and data planes” — in other words, server-based software deciding how packets should be routed across networks — does not define SDN.

Don’t Change

This should not come as a surprise. It’s in Cisco’s interest — and, the company will argue, its customers’ interests as well — for it to resist the decoupling of the control and data planes. You won’t get ridiculous hyperbole from me, so I won’t say that such a decoupling represents an existential threat to Cisco. That would be exaggeration for effect, and I don’t play that game. So let me put it another way: It is a business problem that Cisco would rather not have to address.

Could Cisco deal with that problem? Probably, given the resources at its disposal. But it would be a hassle and a headache, and it would require Cisco to change into something different from what it is today. If you’re Cisco, not having to deal with the problem seems a better option.

Later in the Cisco memo, the company tips its hand further. Quoting directly:

While SDN concepts like network virtualization may sound new, Cisco has played a leadership role in this market for many years leveraging its build, buy, partner strategy.  For example, Cisco’s Nexus 1000V series switches—which provide sophisticated NX-OS networking capabilities in virtualized environment down to the virtual machine level—are built upon a controller/agent architecture, a fundamental building block of SDN solutions. With more than 5,000 customers today, Cisco has been shipping this technology for a long time.

“SDN plays into at least two of Cisco’s top five priorities—core routing/switching and data center/virtualization/cloud,” says Warrior.

Cisco has the opportunity to shape and define the SDN market because it is still perceived as an emerging technology, Warrior says. In fact, Cisco innovation will be much deeper than just SDN.

Cisco is operating from established positions of strength, which include the scale of its operating systems, superior ASICs, unique embedded intelligence, experienced engineering expertise, and an expansive installed base—most of which has no interest in completely replacing what it has already invested in so heavily.

Pouring the Grappa

So, Cisco’s future SDN, including whatever Insieme eventually delivers to market, will look a lot like the “SDN” that Cisco delivers today in the Nexus 1000V series switches and the NX-OS. When one considers that some engineers now on the Insieme team worked on the Nexus 1000V, and that Insieme is licensed to use the NX-OS, it does not take a particularly athletic leap of logic to conclude that Insieme will be building a Nexus-like switch, though perhaps one on steroids.

Insieme, as I’ve written before, will represent an evolution for Cisco, not a revolution. It will be fortified switching wine in an SDN bottle. (Mario Mazzola is fond of giving Italian names to his spin-in companies. He should have called this one “Grappa.”)

Commenting on Cisco’s SDN memo and the company’s decision to tap spin-in venture Insieme as a vehicle in the space, Om Malik interpreted it as “a tactical admission that it (Cisco) has become so big, so bureaucratic and so broken that it cannot count on internal teams to build any ground breaking products.”

Bigger This Time

That might be an accurate assessment, but it’s also possible to see Insieme as business as usual at Cisco. Clearly Cisco never retired its spin-in move, as I once thought it did, but merely put it on prolonged sabbatical, holding it in reserve for when it would be needed again. Malik himself notes that Cisco has gone to the spin-in well before, with this particular trio of all-star engineers now involved in their third such venture.

For good or ill, maybe this is how Cisco gets difficult things done in its dotage. It calls together a bunch of proven quantities and old engineering hands and has them build a bigger datacenter switch than they built the last time.

Is that SDN? It’s Cisco’s SDN. The company’s customers ultimately will decide whether it’s theirs, too.

Departures from Avaya’s Mahogany Row Thicken IPO Plot

My plan was to continue writing posts about software defined networking (SDN). And why not?

SDN is controversial (at least in some quarters), innovative, intriguing, and potentially disruptive to network-infrastructure economics and to the industry’s status quo. What’s more, the Open Networking Summit (ONS) took place this week in Santa Clara, California, serving up a veritable geyser of news, commentary, and vigorous debate.

But before I dive back into the overflowing SDN pool, I feel compelled to revisit Avaya. Ahh, yes, Avaya. Whenever I think I’m finished writing about that company, somebody or something pulls me back in.

Executive Tumult

I have written about Avaya’s long-pending IPO, which might not happen at all, and about the challenges the company faces as it navigates shifting technological seas and changing industry dynamics. Avaya’s heavy debt load, its uncertain growth prospects, its seemingly shattered strategic compass, and its occasionally complicated relationship with its channel partners all militate against a successful IPO. Some believe the company might be forced into selling itself, in whole or in part, if not into possible bankruptcy.

I will not make a prediction here, but I have some news to report that suggests that something is afoot (executives, mainly) on Avaya’s mahogany row. Sources with knowledge of the situation report a sequence of executive departures at the company, many of which have been confirmed.

On April 12, for example, Avaya disclosed in a regulatory filing with the SEC that “Mohamad S. Ali will step down as Senior Vice President and President, Avaya Client Services, to pursue other opportunities.” Ali’s departure was effective April 13. Sources also inform me that a vice president who worked for Ali also left Avaya recently. Sure enough, if you check the LinkedIn profile of Martin Ingram, you will find that he left his role as vice president of global services this month after spending more than six years with the company. He has found employment as SVP and CIO at Arise Virtual Solutions Inc.

As they say in infomercials, that’s not all.

Change Only Constant

Sources say Alan Baratz, who came to Avaya from Cisco Systems nearly four years ago, has left the company. Baratz, formerly SVP and president of Avaya’s Global Communications Solutions, had taken the role of SVP for  corporate development and strategy amid another in a long line of Avaya executive shuffles that had channel partners concerned about the stability of the company’s executive team.

Sources also report that Dan Berg, Avaya’s VP for R&D, who served as Skype’s CTO from January 2009 until joining Avaya in February 2011, will leave the company at the end of this month.

Furthermore, sources say that David Downing, VP of worldwide technical operations, left the company this week. Downing was said to have reported to Joel Hackney, Avaya’s SVP for global sales and marketing and the president of field operations.

On the other side of the pond, it was reported yesterday in TechTarget’s MicroScope that Andrew Shepperd, Avaya’s managing director for the UK, left after just eight months on the job. Shepperd’s departure was preceded by other executive leave-takings earlier this year.

Vanishing IPO?

So, what does all this tumult mean, if anything? It’s possible that all these executives, perhaps like those before them, simply decided individually and separately that it was time for a change. Maybe this cluster of departures and defections is random. That’s one interpretation.

Another interpretation is that these departures are related to the dimming prospects for an IPO this year or next year. With no remunerative payoff above and beyond salary and bonuses on the horizon, these executives, or at least some of them, might have decided that the time was right to seek greener pastures. The company is facing a range of daunting challenges, some beyond its immediate control, and it wouldn’t be surprising to find that many executives have chosen to leave.

Fortunately, we won’t have to wait much longer for clarity from Avaya on where it is going and how it will get there. Sources tell me that Kevin Kennedy, president and CEO, has called an “all-hands meeting” on May 18.

For you SDN aficionados, fret not. We will now return to regularly scheduled programming.

LineRate’s L4-7 Pitch Tailored to Cloud

I’ve written previously about the growing separation between how large cloud service providers see their networks and how enterprises perceive theirs. The chasm seems to get wider by the day, with the major cloud shops adopting innovative approaches to reduce network-related costs and to increase service agility, while their enterprise brethren seem to be  assuming the role of conservative traditionalists — not that there’s anything inherently or necessarily wrong with that.

The truth is, the characteristics and requirements of those networks and the applications that ride on them have diverged, though ultimately a cloud-driven reconvergence is destined to occur.  For now, though, the cloudy service providers are going one way, and the enterprises — and most definitely the networking professionals within them — are standing firm on familiar ground.

It’s no surprise, then, to see LineRate Systems, which is bringing a software-on-commodity-box approach to L4-7 network services, target big cloud shops with its new all-software LineRate Proxy.

Targeting Cloud Shops

LineRate says its eponymous Proxy delivers a broad range of full-proxy Layer 4-7 network services, including load balancing, content switching, content filtering, SSL termination and origination, ACL/IP filtering, TCP optimization, DDoS blocking, application-performance visibility, server-health monitoring, and an IPv4/v6 translation gateway. The product has snared a customer — the online photo- and video-sharing service Photobucket — willing to sing its praises, and the company apparently has two other customers onboard.

As a hook to get those customers and others to adopt its product, LineRate offers pay-for-capacity subscription licensing and a performance guarantee that it says eliminates upfront capital expenditures and does away with the risks associated with capacity planning and the costs of over-provisioning. It’s a great way to overcome, or at least mitigate, the new-tech jitters that prospective customers might experience when approached by a startup.

I’ll touch on the company’s “secret sauce” shortly, but let’s first explain how LineRate got to where it is now. As CEO Steve Georgis explained in an interview late last week, LineRate has been around since 2008. It is a VC-backed company, based in Boulder, Colorado, which grew from research conducted at the University of Colorado by John Giacomoni, now LineRate’s chief technology officer (CTO), and by Manish Vachharajani, LineRate’s chief software architect.

Replacing L4-7 Hardware Appliances 

As reported by the Boulder County Business Report, LineRate closed a $4.75 million Series A round in April 2011, in which Boulder Ventures was the lead investor. Including seed investments, LineRate has raised about $5.4 million in aggregate, and it is reportedly raising a Series B round.

LineRate calls what it does “software defined network services” (SDNS) and company CEO Georgis says the overall SDN market comprises three layers: the Layer 2-3 network fabric, the Layer 4-7 network services, and the applications and web services that run above everything else. LineRate, obviously, plays in the middle, a neighborhood it shares with Embrane, among others.

LineRate contends that software is the new data path. As such, its raison d’être is to eliminate the need for specialized Layer 4-7 hardware appliances by replacing them with software, which it provides, running on industry-standard hardware, which can be and are provided by ODMs and OEMs alike.

LineRate’s Secret Sauce

The company’s software, and its aforementioned secret sauce, is called the LineRate Operating System (LROS). As mentioned above, it was developed from research work that Giacomoni and Vachharajani completed in high-performance computing (HPC), where their focus was on optimizing resource utilization of off-the-shelf hardware.

Based on FreeBSD but augmented with LineRate’s own TCP stack, LROS has been optimized to squeeze maximum performance from the x86 architecture. As a result, Georgis says, LROS can provide 5-10x more network performance than a general-purpose operating system, such as Linux or BSD. LineRate claims its software delivers sufficiently impressive performance — 20 to 40 Gbps network processing on a commodity x86 server, with what the company describes as “high session scalability” — to obviate the need for specialized L4-7 hardware appliances.

This sort of story is one that service providers are likely to find intriguing. We have seen variations on this theme at the big cloud shops, first with virtualized servers, then with switches and routers, and now — if LineRate has its way — with L4-7 appliances.

LineRate says it can back up its bluster with the ability to support hundreds of thousands of full-proxy L7 connections per second, amounting to two million concurrent active flows. As such, LineRate claims LROS’s support for scale-out high availability and its inherent multi-tenancy make it well qualified for the needs of cloud-service providers. The LineRate Proxy uses a REST API-based architecture, which the company says allows it to integrate with any cloud orchestration or data-center management framework.

Wondering About Service Reach?

At Photobucket, which has 23 million users who upload about four million photos and videos per day, the LineRate Proxy has been employed as a L7 HTTP load balancer and intelligent-content switch in a 10-Gbps network. The LineRate software runs on a pair of low-cost, high-availability x86 servers, doing away with the need to do a forklift upgrade on a legacy hardware solution that Georgis said included products from “a market-leading load-balancing vendor and a vendor that was once a market leader in the space.”
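For readers unfamiliar with the combination, here is a toy sketch of what an L7 content switch plus load balancer does: it inspects the HTTP request path (not just IP addresses and ports, as an L4 device would) to pick a backend pool, then spreads requests across that pool. This is illustrative only; the pool layout and server names are made up, and a real proxy such as LineRate’s operates on live TCP/HTTP connections, not strings.

```python
# Toy L7 content switch with round-robin load balancing -- an illustrative
# sketch; all pool and backend names here are hypothetical.
from itertools import cycle

class ContentSwitch:
    def __init__(self, pools):
        # pools: URL path prefix -> list of backend servers.
        # cycle() gives us simple round-robin within each pool.
        self.pools = {prefix: cycle(servers) for prefix, servers in pools.items()}

    def pick_backend(self, path):
        # The L7 decision: match on the HTTP path, longest-listed prefix first.
        for prefix, pool in self.pools.items():
            if path.startswith(prefix):
                return next(pool)  # round-robin within the matching pool
        return next(self.pools["/"])  # fall through to the default pool

lb = ContentSwitch({
    "/photos": ["img-server-1", "img-server-2"],
    "/videos": ["vid-server-1"],
    "/": ["web-server-1"],
})
print(lb.pick_backend("/photos/cat.jpg"))   # img-server-1
print(lb.pick_backend("/photos/dog.jpg"))   # img-server-2
print(lb.pick_backend("/videos/clip.mp4"))  # vid-server-1
```

The same two x86 servers running software like this can replace a dedicated load-balancing appliance, which is the substance of LineRate’s pitch.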

LineRate claims its scalable subscription model also paid off for Photobucket, by eliminating the need for long-term capacity planning and up-front capital expenditures. It says Photobucket benefits from its “guaranteed performance,” and that on-demand scaling has eliminated risks associated with under- or over-provisioning. On the whole, LineRate says its solution offered an entry cost 70 percent lower than that of a competing hardware-appliance purchase.

When the company first emerged, the founders indicated that load balancing would be the first L4-7 network service that it would target. It will be interesting to see whether its other early-adopter customers also are using the LineRate Proxy for load balancing. Will the product prove more specialized than the L4-7 Ginsu knife the company is positioning?

It’s too early to say. The answer will be provided by future deployments.

The estimable Ivan Pepelnjak offers his perspective, including astute commentary on how and where the LineRate Proxy is likely to find favor.

Not Just a Marketing Overlay

Ivan pokes gentle fun at LineRate’s espousal of SDNS, and his wariness is understandable. Even the least likely of networking vendors seem to be cloaking themselves in SDN garb these days, both to draw the fickle attention of trend-chasing venture capitalists and to catch the preoccupied eyes of the service providers that actually employ SDN technologies.

Nonetheless, there are aspects to what LineRate does that undeniably have a lot in common with what I will call an SDN ethos (sorry to be so effete). One of the key value propositions that LineRate promotes — in addition to its comparatively low cost of entry, its service-based pricing, and its performance guarantee — is the simple scale-out approach it offers to service providers.

As Ivan points out, “ . . . whenever you need more bandwidth, you can take another server from your compute pool and repurpose it as a networking appliance.” That’s definitely a page from the SDN playbook that the big cloud-service providers, such as those who run the Open Networking Foundation (ONF), are following. Ideally, they’d like to use virtualization and SDN to run everything on commodity boxes, perhaps sourced directly from ODMs, and then reallocate hardware dynamically as circumstances dictate.

In a comment on Ivan’s post, Brad Hedlund, formerly of Cisco and now of Dell, offers another potential SDN connection for the LineRate Proxy. Hedlund writes that it “would be really cool if they ran the Open vSwitch on the southbound interfaces, and partnered with Nicira and/or Big Switch, so that the appliance could be used as a gateway in overlay-based clouds such as, um, Rackspace.”

He might have something there. So, maybe, in the final analysis, the SDNS terminology is more than a marketing overlay.

Debating SDN, OpenFlow, and Cisco as a Software Company

Greg Ferro writes exceptionally well, is technologically knowledgeable, provides incisive commentary, and invariably makes cogent arguments over at EtherealMind.  Having met him, I can also report that he’s a great guy. So, it is with some surprise that I find myself responding critically to his latest blog post on OpenFlow and SDN.

Let’s start with that particular conjunction of terms. Despite occasional suggestions to the contrary, SDN and OpenFlow are not inseparable or interchangeable. OpenFlow is a protocol, a mechanism that allows a server, known in SDN parlance as a controller, to interact with and program flow tables (for packet forwarding) on switches. It facilitates the separation of the control plane from the data plane in some SDN networks.
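The flow-table idea is easier to see in miniature. The toy model below captures what a controller programs into a switch: prioritized entries that match packet header fields and map them to forwarding actions, with unmatched packets punted to the controller. This is a conceptual sketch of the mechanism, not the OpenFlow wire protocol itself, and the field names are simplified.

```python
# Toy model of an OpenFlow-style flow table. A controller installs
# entries; the switch applies the highest-priority matching entry.
class FlowTable:
    def __init__(self):
        self.entries = []  # (priority, match_fields, actions)

    def add_flow(self, priority, match, actions):
        """Install a flow entry, as a controller's flow-mod message would."""
        self.entries.append((priority, match, actions))
        # Keep highest-priority entries first.
        self.entries.sort(key=lambda e: -e[0])

    def lookup(self, packet):
        """Return the actions of the first (highest-priority) match."""
        for priority, match, actions in self.entries:
            if all(packet.get(k) == v for k, v in match.items()):
                return actions
        return ["send_to_controller"]  # table miss: punt to the controller

table = FlowTable()
# Forward web traffic out port 2; drop everything from a blocked host.
table.add_flow(10, {"tcp_dst": 80}, ["output:2"])
table.add_flow(20, {"ip_src": "10.0.0.99"}, ["drop"])

print(table.lookup({"ip_src": "10.0.0.5", "tcp_dst": 80}))   # ['output:2']
print(table.lookup({"ip_src": "10.0.0.99", "tcp_dst": 80}))  # ['drop']
print(table.lookup({"ip_src": "10.0.0.5", "tcp_dst": 22}))   # ['send_to_controller']
```

The separation of planes is visible even at this scale: the logic that decides which entries to install lives in the controller, while the switch merely executes lookups.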

But OpenFlow is not SDN, which can be achieved with or without OpenFlow.  In fact, Nicira Networks recently announced two SDN customer deployments of its Network Virtualization Platform (NVP) — at DreamHost and at Rackspace, respectively — and you won’t find mention of OpenFlow in either press release, though OpenStack and its Quantum networking project receive prominent billing. (I’ll be writing more about the Nicira deployments soon.)

A Protocol in the Big Picture 

My point is not to diminish or disparage OpenFlow, which I think can and will be used gainfully in a number of SDN deployments. My point is that we have to be clear that the bigger picture of SDN is not interchangeable with the lower-level functionality of OpenFlow.

In that respect, Ferro is absolutely correct when he says that software-defined networking, and specifically SDN controller and application software, is “where the money is.” He conflates it with OpenFlow — which may or may not be involved, as we already have established — but his larger point is valid.  SDN, at the controller and above, is where all the big changes to the networking model, and to the industry itself, will occur.

Ferro also likely is correct in his assertion that OpenFlow, in and of itself, will not enable “a choice of using low cost network equipment instead of the expensive networking equipment that we use today.” In the near term, at least, I don’t see major prospects for change on that front as long as backward compatibility, interoperability with a bulging bag of networking protocols, and the agendas of the networking old guard are at play.

Cisco as Software Company

However, I think Ferro is wrong when he says that the market-leading vendors in switching and routing, including Cisco and Juniper, are software companies. Before you jump down my throat, presuming that’s what you intend to do, allow me to explain.

As Ferro says, Cisco and Juniper, among others, have placed increasing emphasis on the software features and functionality of their products. I have no objection there. But Ferro pushes his argument too far and suggests that the “networking business today is mostly a software business.”  It’s definitely heading in that direction, but Cisco, for one, isn’t there yet and probably won’t be for some time.  The key word, by the way, is “business.”

Cisco is developing more software these days, and it is placing more emphasis on software features and functionality, but what it overwhelmingly markets and sells to its customers are switches, routers, and other hardware appliances. Yes, those devices contain software, but Cisco sells them as hardware boxes, with box-oriented pricing and box-oriented channel programs, just as it has always done. Nitpickers will note that Cisco also has collaboration and video software, which it actually sells like software, but that remains an exception to the rule.

Talks Like a Hardware Company, Walks Like a Hardware Company

For the most part, in its interactions with its customers and the marketplace in general, Cisco still thinks and acts like a hardware vendor, software proliferation notwithstanding. It might have more software than ever in its products, but Cisco is in the hardware business.

In that respect, Cisco faces the same fundamental challenge that server vendors such as HP, Dell, and — yes — Cisco confront as they address a market that will be radically transformed by the rise of cloud services and ODM-hardware-buying cloud service providers. Can it think, figuratively and literally, outside the box? Just because Cisco develops more software than it did before doesn’t mean the answer is yes, nor does it signify that Cisco has transformed itself into a software vendor.

Let’s look, for example, at Cisco’s approach to SDN. Does anybody really believe that Cisco, with its ongoing attachment to ASIC-based hardware differentiation, will move toward a software-based delivery model that places the primary value on server-based controller software rather than on switches and routers? It’s just not going to happen, because it’s not what Cisco does or how it operates.

Missing the Signs 

And that brings us to my next objection. In arguing that Cisco and others have followed the market and provided the software their customers want, Ferro writes the following:

“Billion dollar companies don’t usually miss the obvious and have moved to enhance their software to provide customer value.”

Where to begin? Well, billion-dollar companies frequently have missed the obvious and gotten it horribly wrong, often when at least some individuals within the companies in question knew that their employer was getting it horribly wrong.  That’s partly because past and present successes can sow the seeds of future failure. As Clayton M. Christensen argued in his classic book The Innovator’s Dilemma, industry leaders can have their vision blinkered by past successes, which prevent them from detecting disruptive innovations. In other cases, former market leaders get complacent or fail to acknowledge the seriousness of a competitive threat until it is too late.

The list of billion-dollar technology companies that have missed the obvious and failed spectacularly, sometimes disappearing into oblivion, is too long to enumerate here, but some names spring readily to mind. Right at the top (or bottom) of our list of industry ignominy, we find Nortel Networks. Once a company valued at nearly $400 billion, Nortel exists today only in thoroughly digested pieces that were masticated by other companies.

Is Cisco Decline Inevitable?

Today, we see a similarly disconcerting situation unfolding at Research In Motion (RIM), where many within the company saw the threat posed by Apple and by the emerging BYOD phenomenon but failed to do anything about it. Going further back into the annals of computing history, we can adduce examples such as Novell and Digital Equipment Corporation, as well as the raft of other minicomputer vendors that perished from the planet after the rise of the PC and client-server computing. Some employees within those companies might even have foreseen their firms’ dark fates, but the organizations in which they toiled were unable to rescue themselves.

They were all huge successes, billion-dollar companies, but, in the face of radical shifts in industry and market dynamics, they couldn’t change who and what they were.  The industry graveyard is full of the carcasses of companies that were once enormously successful.

Am I saying this is what will happen to Cisco in an era of software-defined networking? No, I’m not prepared to make that bet. Cisco should be able to adapt and adjust better than the aforementioned companies were able to do, but it’s not a given. Just because Cisco is dominant in the networking industry today doesn’t mean that it will be dominant forever. As the old investment disclaimer goes, past performance does not guarantee future results. What’s more, Cisco has shown a fallibility of late that was not nearly as apparent in its boom years more than a decade ago.

Early Days, Promising Future

Finally, I’m not sure that Ferro is correct when he says the Open Networking Foundation’s (ONF) board members and its biggest service providers, including Google, will achieve CapEx but not OpEx savings with SDN. We really don’t know whether these companies are deriving OpEx savings, because they’re keeping what they do with their operations and infrastructure highly confidential. Suffice it to say, they see compelling reasons to move away from buying their networking gear from the industry’s leading vendors, and they see similarly compelling reasons to embrace SDN.

Ferro ends his piece with two statements, the first of which I agree with wholeheartedly:

“That is the future of Software Defined Networking – better, dynamic, flexible and business focussed networking. But probably not much cheaper in the long run.”

As for that last statement, I believe there is insufficient evidence on which to render a verdict. As we’ve noted before, these are early days for SDN.