Monthly Archives: April 2012

Cisco Not Going Anywhere, but Changes Coming to Networking

Initially, I intended not to comment on the Wired article on Nicira Networks. While it contained some interesting quotes and a few good observations, its tone and too much of its focus were misplaced. It was too breathless, trying too hard to make the story fit into a simplistic, sensationalized narrative of outsized personalities and the threatened “irrelevance” of Cisco Systems.

There was not enough focus on how Nicira’s approach to network virtualization and its particular conception of software defined networking (SDN) might open new horizons and enable new possibilities for the humble network. On his blog, Plexxi’s William Koss, commenting not about the Wired article but about reaction to SDN from the industry in general, wrote the following:

In my view, SDN is not a tipping point.  SDN is not obsoleting anyone.  SDN is a starting point for a new network.  It is an opportunity to ask if I threw all the crap in my network in the trash and started over what would we build, how would we architect the network and how would it work?  Is there a better way?

Cisco Still There

I think that’s a healthy focus. As Koss writes, and I agree, Cisco isn’t going anywhere; the networking giant will be with us for some time, tending its considerable franchise and moving incrementally forward. It will react more than it “proacts” — yes, I apologize now for the Haigian neologism — but that’s the fate of any industry giant of a certain age, Apple excepted.

Might Cisco, more than a decade from now, be rendered irrelevant?  I, for one, don’t make predictions over such vast swathes of time. Looking that far ahead and attempting to forecast outcomes is a mug’s game. It is nothing but conjecture disguised as foresight, offered by somebody who wants to flash alleged powers of prognostication while knowing full well that nobody else will remember the prediction a month from now, much less years into the future.

As far out as we can see, Cisco will be there. So, we’ll leave ambiguous prophecies to the likes of Nostradamus, who, I believe, forecast the deaths of OS/2, Token Ring, and desktop ATM.

Answers are Coming

Fortunately, I think we’re beginning to get answers as to where and how Nicira’s approaches to network virtualization and SDN can deliver value and open new possibilities. The company has been making news with customer testimonials that include background on how its technology has been deployed. (Interestingly, the company has issued just three press releases in 2012, and all of them deal with customer deployments of its Network Virtualization Platform (NVP).)

There’s a striking contrast between the moderation implicit in Nicira’s choice of press releases and the unchecked grandiosity of the Wired story. Then again, I understand that vendors have little control over what journalists (and bloggers) write about them.

That said, one particular quote in the Wired article provoked some thinking from this quarter. I had thought about the subject previously, but the following excerpt provided some extra grist for my wood-burning mental mill:

In virtualizing the network, Nicira lets you make such changes in software, without touching the underlying hardware gear. “What Nicira has done is take the intelligence that sits inside switches and routers and moved that up into software so that the switches don’t need to know much,” says John Engates, the chief technology officer of Rackspace, which has been working with Nicira since 2009 and is now using the Nicira platform to help drive a new beta version of its cloud service. “They’ve put the power in the hands of the cloud architect rather than the network architect.”

Who Controls the Network?

It’s the last sentence that really signifies a major break with how things have been done until now, and this is where the physical separation of the control plane from the switch has potentially major implications.  As Scott Shenker has noted, network architects and network professionals have made their bones by serving as “masters of complexity,” using relatively arcane knowledge of proprietary and industry-standard protocols to keep networks functioning amid increasing demands of virtualized compute and storage infrastructure.

SDN promises an easier way, one that potentially offers a faster, simpler, less costly approach to network operations. It also offers the creative possibility of unleashing new applications and new ways of optimizing data-center resources. In sum, it can amount to a compelling business case, though not everywhere, at least not yet.

Where it does make sense, however, cloud architects and the devops crowd will gain primacy and control over the network. This trend is reflected already in the press releases from Nicira. Notice that customer quotes from Nicira do not come from network architects, network engineers, or anybody associated with conventional approaches to running a network. Instead, we see encomiums to NVP offered by cloud architects, cloud-architecture executives, and VPs of software development.

Similarly, and not surprisingly, Nicira typically doesn’t sell NVP to the traditional networking professional. It sells to the same “cloudy” types to whom quotes are attributed in its press releases. It’s true, too, that Nicira’s SDN business case and value proposition play better at cloud service providers than at enterprises.

Potentially a Big Deal

This is an area where I think the advent of  the programmable server-based controller is a big deal. It changes the customer power dynamic, putting the cloud architects and the programmers in the driver’s seat, effectively placing the network under their control. (Jason Edelman has begun thinking about what the rise of SDN means for the network engineer.) In this model, the network eventually gets subsumed under the broader rubric of computing and becomes just another flexible piece of cloud infrastructure.

Nicira can take this approach because it has nothing to lose and everything to gain. Of course, the same holds true of other startup vendors espousing SDN.

Perhaps that’s why Koss closed his latest post by writing that “the architects, the revolutionaries, the entrepreneurs, the leaders of the next twenty years of networking are not working at the incumbents.”  The word “revolutionaries” seems too strong, and the incumbents will argue that Koss, a VP at startup Plexxi, isn’t an unbiased party.

They’re right, but that doesn’t mean he’s wrong.

As Insieme Emerges, Former Cisco Spin-In Exec Leaves

As Cisco spin-venture Insieme Networks emerges from the shadows, reports indicate that a co-founder of earlier Cisco spin-in property Nuova Systems has left the mothership.

Soni Jiandani, a longtime Cisco executive who joined the company in 1994 and last served as SVP for Cisco’s Server Access and Virtualization Technology Group, reportedly left the company in March, according to sources familiar with the situation.

Previous Player in Spin-In Crews

Her departure seems to have occurred just as news about Insieme began to circulate widely in business reports and the trade press. In the Nuova spin-venture, Jiandani teamed with Mario Mazzola, Luca Cafiero, and Prem Jain, all of whom now are ringmasters at Insieme.

Earlier in her Cisco career, Jiandani served as VP and GM of Cisco’s LAN and SAN switching business unit, part of the Data Center, Switching and Wireless Technology Group. Before that, she was involved with earlier Cisco spin-in venture Andiamo Systems, where her partners in invention again were Mazzola, Cafiero, and Jain, among others.

Notwithstanding her previous involvement with SAN-switching spin-in Andiamo and data-center switching spin-in Nuova, Jiandani was not reported to be involved with Insieme, which will deliver at least part of Cisco’s answer to the incipient threat posed by software defined networking (SDN).

Postscript: On Twitter, Juniper Networks’ Michael C. Leonard just stated that Jiandani is at Insieme. No doubt confirmation (or not) will be forthcoming, here or elsewhere.

Nicira Focuses on Value of NVP Deployments, Avoids Fetishization of OpenFlow

The continuing evolution of Nicira Networks has been intriguing to watch. At one point, not so long ago, many speculated on what Nicira, then still in a teasing stealth mode, might be developing behind the scenes. We now know that it was building its Network Virtualization Platform (NVP), and we’re beginning to learn about how the company’s early customers are deploying it.

Back in Nicira’s pre-launch days, the line between OpenFlow and software defined networking (SDN) was blurrier than it is today.  From the outset, though, Nicira was among the vendors that sought to provide clarity on OpenFlow’s role in the SDN hierarchy.  At the time — partly because the company was communicating in stealthy coyness  — it didn’t always feel like clarity, but the message was there, nonetheless.

Not the Real Story

For instance, when Alan Cohen first joined Nicira last fall to assume the role of VP marketing, he wrote the following on his personal blog:

Virtualization and the cloud is the most profound change in information technology since client-server and the web overtook mainframes and mini computers.  We believe the full promise of virtualization and the cloud can only be fully realized when the network enables rather than hinders this movement.  That is why it needs to be virtualized.

Oh, by the way, OpenFlow is a really small part of the story.  If people think the big shift in networking is simply about OpenFlow, well, they don’t get it.

A few months before Cohen joined the company, Nicira’s CTO Martin Casado had played down OpenFlow’s role  in the company’s conception of SDN. We understand now where Nicira was going, but at the time, when OpenFlow and SDN were invariably conjoined and seemingly inseparable in industry discourse, it might not have seemed as obvious.

Don’t Get Hung Up

That said, a compelling early statement on OpenFlow’s relatively modest role in SDN was delivered in a presentation by Scott Shenker, Nicira’s co-founder and chief scientist (as well as a professor in the Electrical Engineering and Computer Sciences department at the University of California, Berkeley). I’ve written previously about Shenker’s presentation, “The Future of Networking, and the Past of Protocols,” but here I would just like to quote his comments on OpenFlow:

“OpenFlow is one possible solution (as a configuration mechanism); it’s clearly not the right solution. I mean, it’s a very good solution for now, but there’s nothing that says this is fundamentally the right answer. Think of OpenFlow as x86 instruction set. Is the x86 instruction set correct? Is it the right answer? No, it’s good enough for what we use it for. So why bother changing it? That’s what OpenFlow is. It’s the instruction set we happen to use, but let’s not get hung up on it.”

I still think too many industry types are “hung up” on OpenFlow, and perhaps not focused enough on the controller and above, where the applications will overwhelmingly define the value that SDN delivers.

As an open protocol that facilitates physical separation of the control and data-forwarding planes, OpenFlow has a role to play in SDN. Nonetheless, other mechanisms and protocols can play that role, too, and what really counts can be found at higher altitudes of the SDN value chain.

Minor Roles

In Nicira’s recently announced customer deployments, OpenFlow has played relatively minor supporting roles. Last week, for instance, Nicira announced at the OpenStack Design Summit & Conference that its Network Virtualization Platform (NVP) has been deployed at Rackspace in conjunction with OpenStack’s Quantum networking project. The goal at Rackspace was to automate network services independent of data-center network hardware in a bid to improve operational simplicity and to reduce the cost of managing large, multi-tenant clouds.

According to Brad McConnell, principal architect at Rackspace, Quantum, Open vSwitch, and OpenFlow all were ingredients in the deployment. Quantum was used as the standardized API to describe network connectivity, and OpenFlow served as the underlying protocol that configured and managed Open vSwitch within hypervisors.
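
To make the division of labor concrete, here is a rough, hypothetical sketch of what a Quantum-style request looks like from the consumer’s side: the caller simply asks the API for a network, and the plug-in beneath it (NVP, in this case) decides how to realize that network on each hypervisor’s Open vSwitch. The endpoint, token, and payload are illustrative assumptions patterned on Quantum’s v2.0 conventions, not details of the Rackspace deployment.

    # Hypothetical sketch: creating an isolated tenant network via Quantum's
    # HTTP API. The plug-in beneath the API (Nicira's NVP, in this scenario)
    # decides how to realize the network on each hypervisor's Open vSwitch.
    import requests

    QUANTUM = "http://quantum.example.com:9696/v2.0"  # assumed endpoint
    HEADERS = {"X-Auth-Token": "PLACEHOLDER-TOKEN"}   # assumed auth token

    resp = requests.post(QUANTUM + "/networks",
                         json={"network": {"name": "tenant-net-1"}},
                         headers=HEADERS)
    resp.raise_for_status()
    print("Created network:", resp.json()["network"]["id"])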

A week earlier, Nicira announced that cloud-service provider DreamHost would deploy its NVP to reduce costs and accelerate service delivery in its OpenStack datacenter. In the press release, the following quote is attributed to Carl Perry, DreamHost’s cloud architect:

“Nicira’s NVP software enables truly massive leaps in automation and efficiency.  NVP decouples network services from hardware, providing unique flexibility for both DreamHost and our customers.  By sidestepping the old network paradigm, DreamHost can rapidly build powerful features for our cloud.  Network virtualization is a critical component necessary for architecting the next-generation public cloud services.  Nicira’s plug-in technology, coupled with the open source Ceph and OpenStack software, is a technically sound recipe for offering our customers real infrastructure-as-a-service.”

Well-Placed Focus

You will notice that OpenFlow is not mentioned by Nicira in the press releases detailing NVP deployments at DreamHost and Rackspace. While OpenFlow is present at both deployments, Nicira correctly describes its role as a lesser detail on a bigger canvas.

At DreamHost, for example, NVP uses OpenFlow for communication between the controller and Open vSwitch, but Nicira has acknowledged that other protocols, including SNMP, could have performed a similar function.

Reflecting on these deployments, I am reminded of Casado’s earlier statement: “OpenFlow is about as exciting as USB.”

For a long time now, Nicira has eschewed the fetishization of OpenFlow. Instead, it has focused on the bigger-picture value propositions associated with network virtualization and programmable networks. If it continues to do so, it likely will draw more customers to NVP.

Cisco’s SDN Strategy: Meet the New Boss, Same as the Old Boss

Like Om Malik, I received and read the memo that Cisco distributed internally regarding the company’s plans for spin-in Insieme and software-defined networking (SDN). Om has published the memo in its entirety, so there’s no need for me to do the same here.

As for Insieme, the memo informs us that Cisco has made an investment of $100 million in the “early-stage company focused on research and development in the datacenter market.” It also notes that Insieme was founded by Mario Mazzola, Luca Cafiero, and Prem Jain in February 2012, and that “Cisco has the right to purchase the remaining interests of Insieme, with a potential payout range of up to $750 million that will be based primarily on the sales and profitability of Insieme products through Cisco.”

Cisco emphasizes that Insieme’s product-development efforts are “complementary” to its current and planned internal efforts, and it notes that further details regarding Insieme will be disclosed in “Cisco’s upcoming 10-Q filing in May.”

Mystery No More

But we don’t have to wait until then to discern how Cisco will position itself in relation to SDN and programmable networks. If we were in need of additional clues as to how Cisco will play its hand, the memo contains more than enough information from which to deduce the company’s strategy.

As far as Cisco is concerned, there isn’t actually anything new to see in SDN. This is where the marketing battle over words and meanings will ensue, because Cisco’s definition of SDN will bear an uncanny resemblance to what it already does today.

In the memo, Padmasree Warrior, Cisco CTO and co-leader of engineering, makes the following statement: “Cisco believes SDN is part of our vision of the intelligent network that is more open, programmable, and application aware—a vision in which the network is transformed into a more effective business enabler.”

Cisco’s SDN

It’s an ambiguous and innocuous opening salvo, and it could mean almost anything. As the memo proceeds, however, Cisco increasingly qualifies what it means by the term SDN.  It also tells us how Insieme fits into the picture.

Here’s what I see as the memo’s money shot:

“Because SDN is still in its embryonic stage, a consensus has yet to be reached on its exact definition. Some equate SDN with OpenFlow or decoupling of control and data planes. Cisco’s view transcends this definition.”

If you want the gist of the memo in a nutshell, it’s all there. Cisco will (and does) contend that the “decoupling of control and data planes” — in other words, server-based software deciding how packets should be routed across networks — does not define SDN.

Don’t Change

This should not come as a surprise. It’s in Cisco’s interest — and, the company will argue, its customers’ interests as well — for it to resist the decoupling of the control and data planes. You won’t get ridiculous hyperbole from me, so I won’t say that such a decoupling represents an existential threat to Cisco. That would be exaggeration for effect, and I don’t play that game. So let me put it another way: It is a business problem that Cisco would rather not have to address.

Could Cisco deal with that problem? Probably, given the resources at its disposal. But it would be a hassle and a headache, and it would require Cisco to change into something different from what it is today. If you’re Cisco, not having to deal with the problem seems a better option.

Later in the Cisco memo, the company tips its hand further. Quoting directly:

While SDN concepts like network virtualization may sound new, Cisco has played a leadership role in this market for many years leveraging its build, buy, partner strategy.  For example, Cisco’s Nexus 1000V series switches—which provide sophisticated NX-OS networking capabilities in virtualized environment down to the virtual machine level—are built upon a controller/agent architecture, a fundamental building block of SDN solutions. With more than 5,000 customers today, Cisco has been shipping this technology for a long time.

“SDN plays into at least two of Cisco’s top five priorities—core routing/switching and data center/virtualization/cloud,” says Warrior.

Cisco has the opportunity to shape and define the SDN market because it is still perceived as an emerging technology, Warrior says. In fact, Cisco innovation will be much deeper than just SDN.

Cisco is operating from established positions of strength, which include the scale of its operating systems, superior ASICs, unique embedded intelligence, experienced engineering expertise, and an expansive installed base—most of which has no interest in completely replacing what it has already invested in so heavily.

Pouring the Grappa

So, Cisco’s future SDN, including whatever Insieme eventually delivers to market, will look a lot like the “SDN” that Cisco delivers today in the Nexus 1000V series switches and the NX-OS. When one considers that some engineers now on the Insieme team worked on the Nexus 1000V, and that Insieme is licensed to use the NX-OS, it does not take a particularly athletic leap of logic to conclude that Insieme will be building a Nexus-like switch, though perhaps one on steroids.

Insieme, as I’ve written before, will represent an evolution for Cisco, not a revolution. It will be fortified switching wine in an SDN bottle. (Mario Mazzola is fond of giving Italian names to his spin-in companies. He should have called this one “Grappa.”)

Commenting on Cisco’s SDN memo and the company’s decision to tap spin-in venture Insieme as a vehicle in the space, Om Malik interpreted it as “a tactical admission that it (Cisco) has become so big, so bureaucratic and so broken that it cannot count on internal teams to build any ground breaking products.”

Bigger This Time

That might be an accurate assessment, but it’s also possible to see Insieme as business as usual at Cisco. Clearly Cisco never retired its spin-in move, as I once thought it did, but merely put it on a prolonged sabbatical, holding it in reserve for when it would be needed again. Malik himself notes that Cisco has gone to the spin-in well before, with this particular trio of all-star engineers now involved in their third such venture.

For good or ill, maybe this is how Cisco gets difficult things done in its dotage. It calls together a bunch of proven quantities and old engineering hands and has them build a bigger datacenter switch than they built the last time.

Is that SDN? It’s Cisco’s SDN. The company’s customers ultimately will decide whether it’s theirs, too.

Departures from Avaya’s Mahogany Row Thicken IPO Plot

My plan was to continue writing posts about software defined networking (SDN). And why not?

SDN is controversial (at least in some quarters), innovative, intriguing, and potentially disruptive to network-infrastructure economics and to the industry’s status quo. What’s more, the Open Networking Summit (ONS) took place this week in Santa Clara, California, serving up a veritable geyser of news, commentary, and vigorous debate.

But before I dive back into the overflowing SDN pool, I feel compelled to revisit Avaya. Ahh, yes, Avaya. Whenever I think I’m finished writing about that company, somebody or something pulls me back in.

Executive Tumult

I have written about Avaya’s long-pending IPO, which might not happen at all, and about the challenges the company faces as it navigates shifting technological seas and changing industry dynamics. Avaya’s heavy debt load, its uncertain growth prospects, its seemingly shattered strategic compass, and its occasionally complicated relationship with its channel partners all militate against a successful IPO. Some believe the company might be forced into selling itself, in whole or in part, if not into possible bankruptcy.

I will not make a prediction here, but I have some news to report that suggests that something is afoot (executives, mainly) on Avaya’s mahogany row.  Sources with knowledge of the situation report a sequence of executive departures at the company, many of which can be and have been confirmed.

On April 12, for example, Avaya disclosed in a regulatory filing with the SEC that “Mohamad S. Ali will step down as Senior Vice President and President, Avaya Client Services, to pursue other opportunities.” Ali’s departure was effective April 13.  Sources also inform me that a vice president who worked for Ali also left Avaya recently. Sure enough, if you check the LinkedIn profile of Martin Ingram, you will find that he left his role as vice president of global services this month after spending more than six years with the company. He has found employment as SVP and CIO at Arise Virtual Solutions Inc.

As they say in infomercials, that’s not all.

Change Only Constant

Sources say Alan Baratz, who came to Avaya from Cisco Systems nearly four years ago, has left the company. Baratz, formerly SVP and president of Avaya’s Global Communications Solutions, had taken the role of SVP for  corporate development and strategy amid another in a long line of Avaya executive shuffles that had channel partners concerned about the stability of the company’s executive team.

Sources also report that Dan Berg, Avaya’s VP for R&D, who served as Skype’s CTO from January 2009 until joining Avaya in February 2011, will leave the company at the end of this month.

Furthermore, sources say that David Downing, VP of worldwide technical operations, left the company this week. Downing was said to have reported to Joel Hackney, Avaya’s SVP for global sales and marketing and the president of field operations.

On the other side of the pond, it was reported yesterday in TechTarget’s MicroScope that Andrew Shepperd, Avaya’s managing director for the UK, left after just eight months on the job. Shepperd’s departure was preceded by other executive leave-takings earlier this year.

Vanishing IPO?

So, what does all this tumult mean, if anything? It’s possible that all these executives, perhaps like those before them, simply decided individually and separately that it was time for a change. Maybe this cluster of departures and defections is random. That’s one interpretation.

Another interpretation is that these departures are related to the dimming prospects for an IPO this year or next year. With no remunerative payoff above and beyond salary and bonuses on the horizon, these executives, or at least some of them, might have decided that the time was right to seek greener pastures. The company is facing a range of daunting challenges, some beyond its immediate control, and it wouldn’t be surprising to find that many executives have chosen to leave.

Fortunately, we won’t have to wait much longer for clarity from Avaya on where it is going and how it will get there. Sources tell me that Kevin Kennedy, president and CEO, has called an “all-hands meeting” on May 18.

For you SDN aficionados, fret not. We will now return to regularly scheduled programming.

LineRate’s L4-7 Pitch Tailored to Cloud

I’ve written previously about the growing separation between how large cloud service providers see their networks and how enterprises perceive theirs. The chasm seems to get wider by the day, with the major cloud shops adopting innovative approaches to reduce network-related costs and to increase service agility, while their enterprise brethren seem to be  assuming the role of conservative traditionalists — not that there’s anything inherently or necessarily wrong with that.

The truth is, the characteristics and requirements of those networks and the applications that ride on them have diverged, though ultimately a cloud-driven reconvergence is destined to occur.  For now, though, the cloudy service providers are going one way, and the enterprises — and most definitely the networking professionals within them — are standing firm on familiar ground.

It’s no surprise, then, to see LineRate Systems, which is bringing a software-on-commodity-box approach to L4-7 network services, target big cloud shops with its new all-software LineRate Proxy.

Targeting Cloud Shops

LineRate says its eponymous Proxy delivers a broad range of full-proxy Layer 4-7 network services, including load balancing, content switching, content filtering, SSL termination and origination, ACL/IP filtering, TCP optimization, DDoS blocking, application-performance visibility, server-health monitoring, and an IPv4/v6 translation gateway. The product has snared a customer — the online photo- and video-sharing service Photobucket — willing to sing its praises, and the company apparently has two other customers onboard.

As a hook to get those customers and others to adopt its product, LineRate offers pay-for-capacity subscription licensing and a performance guarantee that it says eliminates upfront capital expenditures and does away with the risks associated with capacity planning and the costs of over-provisioning. It’s a great way to overcome, or at least mitigate, the new-tech jitters that prospective customers might experience when approached by a startup.

I’ll touch on the company’s “secret sauce” shortly, but let’s first explain how LineRate got to where it is now. As CEO Steve Georgis explained in an interview late last week, LineRate has been around since 2008. It is a VC-backed company, based in Boulder, Colorado, which grew from research conducted at the University of Colorado by John Giacomoni, now LineRate’s chief technology officer (CTO), and by Manish Vachharajani, LineRate’s chief software architect.

Replacing L4-7 Hardware Appliances 

As reported by the Boulder County Business Report, LineRate closed a $4.75 million Series A round in April 2011, in which Boulder Ventures was the lead investor. Including seed investments, LineRate has raised about $5.4 million in aggregate, and it is reportedly raising a Series B round.

LineRate calls what it does “software defined network services” (SDNS) and company CEO Georgis says the overall SDN market comprises three layers: the Layer 2-3 network fabric, the Layer 4-7 network services, and the applications and web services that run above everything else. LineRate, obviously, plays in the middle, a neighborhood it shares with Embrane, among others.

LineRate contends that software is the new data path. As such, its raison d’être is to eliminate the need for specialized Layer 4-7 hardware appliances by replacing them with software, which it provides, running on industry-standard hardware, which can be and are provided by ODMs and OEMs alike.

LineRate’s Secret Sauce

The company’s software, and its aforementioned secret sauce, is called the LineRate Operating System (LROS). As mentioned above, it was developed from research work that Giacomoni and Vachharajani completed in high-performance computing (HPC), where their focus was on optimizing resource utilization of off-the-shelf hardware.

Based on FreeBSD but augmented with LineRate’s own TCP stack, LROS has been optimized to squeeze maximum performance from the x86 architecture. As a result, Georgis says, LROS can deliver 5-10x the network performance of a general-purpose operating system, such as Linux or BSD. LineRate claims its software delivers sufficiently impressive performance — 20 to 40 Gbps network processing on a commodity x86 server, with what the company describes as “high session scalability” — to obviate the need for specialized L4-7 hardware appliances.

This sort of story is one that service providers are likely to find intriguing. We have seen variations on this theme at the big cloud shops, first with virtualized servers, then with switches and routers, and now — if LineRate has its way — with L4-7 appliances.

LineRate says it can back up its bluster with the ability to support hundreds of thousands of full-proxy L7 connections per second, amounting to two million concurrent active flows. As such, LineRate claims LROS’s support for scale-out high availability and its inherent multi-tenancy make it well qualified for the needs of cloud-service providers.  The LineRate Proxy uses a REST API-based architecture, which the company says allows it to integrate with any cloud orchestration or data-center management framework.
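
LineRate has not published its API specifics here, so the following is an invented sketch of what REST-driven integration of an L4-7 service generally looks like from an orchestration tool. Every endpoint, field, and credential below is a placeholder of my own devising, not LineRate’s actual interface.

    # Invented illustration: an orchestration tool configuring a software
    # load balancer over REST. Host, paths, fields, and credentials are
    # placeholders; LineRate's real API may differ entirely.
    import requests

    API = "https://proxy-mgmt.example.com/api/v1"   # hypothetical endpoint
    AUTH = ("admin", "placeholder-password")

    # Define a virtual server (VIP) and its pool of back-end web servers.
    resp = requests.post(API + "/virtual-servers", auth=AUTH, json={
        "name": "web-vip",
        "address": "203.0.113.10",
        "port": 80,
        "pool": ["10.0.0.11:8080", "10.0.0.12:8080"],
    })
    resp.raise_for_status()

The appeal of such an interface is that the call can come from a cloud-orchestration framework as easily as from an administrator, which is precisely the integration story LineRate is telling.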

Wondering About Service Reach?

At Photobucket.com, which has 23 million users who upload about four million photos and videos per day, the LineRate Proxy has been employed as an L7 HTTP load balancer and intelligent-content switch in a 10-Gbps network. The LineRate software runs on a pair of low-cost, high-availability x86 servers, doing away with the need to do a forklift upgrade on a legacy hardware solution that Georgis said included products from “a market-leading load-balancing vendor and a vendor that was once a market leader in the space.”

LineRate claims its scalable subscription model also paid off for Photobucket, by eliminating the need for long-term capacity planning and up-front capital expenditures. It says Photobucket benefits from its “guaranteed performance,” and that on-demand scaling has eliminated risks associated with under- or over-provisioning. On the whole, LineRate says its solution offered an entry cost 70 percent lower than that of a competing hardware-appliance purchase.

When the company first emerged, the founders indicated that load balancing would be the first L4-7 network service that it would target. It will be interesting to see whether its other early-adopter customers also are using the LineRate Proxy for load balancing. Will the product prove more specialized than the L4-7 Ginsu knife the company is positioning?

It’s too early to say. The answer will be provided by future deployments.

The estimable Ivan Pepelnjak offers his perspective, including astute commentary on how and where the LineRate Proxy is likely to find favor.

Not Just a Marketing Overlay

Ivan pokes gentle fun at LineRate’s espousal of SDNS, and his wariness is understandable. Even the least likely of networking vendors seem to be cloaking themselves in SDN garb these days, both to draw the fickle attention of trend-chasing venture capitalists and to catch the preoccupied eyes of the service providers that actually employ SDN technologies.

Nonetheless, there are aspects to what LineRate does that undeniably have a lot in common with what I will call an SDN ethos (sorry to be so effete). One of the key value propositions that LineRate promotes — in addition to its comparatively low cost of entry, its service-based pricing, and its performance guarantee — is the simple scale-out approach it offers to service providers.

As Ivan points out, “ . . . whenever you need more bandwidth, you can take another server from your compute pool and repurpose it as a networking appliance.” That’s definitely a page from the SDN playbook that the big cloud-service providers, such as those who run the Open Networking Foundation (ONF), are following. Ideally, they’d like to use virtualization and SDN to run everything on commodity boxes, perhaps sourced directly from ODMs, and then reallocate hardware dynamically as circumstances dictate.

In a comment on Ivan’s post, Brad Hedlund, formerly of Cisco and now of Dell, offers another potential SDN connection for the LineRate Proxy. Hedlund writes that it “would be really cool if they ran the Open vSwitch on the southbound interfaces, and partnered with Nicira and/or Big Switch, so that the appliance could be used as a gateway in overlay-based clouds such as, um, Rackspace.”

He might have something there. So, maybe, in the final analysis, the SDNS terminology is more than a marketing overlay.

Debating SDN, OpenFlow, and Cisco as a Software Company

Greg Ferro writes exceptionally well, is technologically knowledgeable, provides incisive commentary, and invariably makes cogent arguments over at EtherealMind.  Having met him, I can also report that he’s a great guy. So, it is with some surprise that I find myself responding critically to his latest blog post on OpenFlow and SDN.

Let’s start with that particular conjunction of terms. Despite occasional suggestions to the contrary, SDN and OpenFlow are not inseparable or interchangeable. OpenFlow is a protocol, a mechanism that allows a server, known in SDN parlance as a controller, to interact with and program flow tables (for packet forwarding) on switches. It facilitates the separation of the control plane from the data plane in some SDN networks.
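
To make that concrete, here is a minimal sketch of what “programming flow tables” looks like in practice, written against the open-source Ryu controller framework (one of several OpenFlow controllers, and an illustration rather than anyone’s product code) and assuming an OpenFlow 1.0 switch such as Open vSwitch.

    # Minimal Ryu application: when a switch connects, install one wildcard
    # flow entry that floods every packet. The server-based controller, not
    # the switch, decides forwarding behavior.
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls

    class FloodAll(app_manager.RyuApp):
        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def on_switch_connect(self, ev):
            dp = ev.msg.datapath          # the switch that just connected
            ofp = dp.ofproto              # OpenFlow protocol constants
            parser = dp.ofproto_parser    # OpenFlow message constructors

            match = parser.OFPMatch()     # wildcard: match every packet
            actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
            mod = parser.OFPFlowMod(
                datapath=dp, match=match, cookie=0, command=ofp.OFPFC_ADD,
                idle_timeout=0, hard_timeout=0,
                priority=ofp.OFP_DEFAULT_PRIORITY, actions=actions)
            dp.send_msg(mod)              # write the entry to the flow table

Run on a controller host, that one class turns any connected switch into a simple hub; hardly useful in itself, but it shows where the forwarding decision now lives.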

But OpenFlow is not SDN, which can be achieved with or without OpenFlow.  In fact, Nicira Networks recently announced two SDN customer deployments of its Network Virtualization Platform (NVP) — at DreamHost and at Rackspace, respectively — and you won’t find mention of OpenFlow in either press release, though OpenStack and its Quantum networking project receive prominent billing. (I’ll be writing more about the Nicira deployments soon.)

A Protocol in the Big Picture 

My point is not to diminish or disparage OpenFlow, which I think can and will be used gainfully in a number of SDN deployments. My point is that we have to be clear that the bigger picture of SDN is not interchangeable with the lower-level functionality of OpenFlow.

In that respect, Ferro is absolutely correct when he says that software-defined networking, and specifically SDN controller and application software, are “where the money is.” He conflates it with OpenFlow — which may or may not be involved, as we already have established — but his larger point is valid.  SDN, at the controller and above, is where all the big changes to the networking model, and to the industry itself, will occur.

Ferro also likely is correct in his assertion that OpenFlow, in and of itself, will not enable “a choice of using low cost network equipment instead of the expensive networking equipment that we use today.” In the near term, at least, I don’t see major prospects for change on that front as long as backward compatibility, interoperability with a bulging bag of networking protocols, and the agendas of the networking old guard are at play.

Cisco as Software Company

However, I think Ferro is wrong when he says that the market-leading vendors in switching and routing, including Cisco and Juniper, are software companies. Before you jump down my throat, presuming that’s what you intend to do, allow me to explain.

As Ferro says, Cisco and Juniper, among others, have placed increasing emphasis on the software features and functionality of their products. I have no objection there. But Ferro pushes his argument too far and suggests that the “networking business today is mostly a software business.”  It’s definitely heading in that direction, but Cisco, for one, isn’t there yet and probably won’t be for some time.  The key word, by the way, is “business.”

Cisco is developing more software these days, and it is placing more emphasis on software features and functionality, but what it overwhelmingly markets and sells to its customers are switches, routers, and other hardware appliances. Yes, those devices contain software, but Cisco sells them as hardware boxes, with box-oriented pricing and box-oriented channel programs, just as it has always done. Nitpickers will note that Cisco also has collaboration and video software, which it actually sells like software, but that remains an exception to the rule.

Talks Like a Hardware Company, Walks Like a Hardware Company

For the most part, in its interactions with its customers and the marketplace in general, Cisco still thinks and acts like a hardware vendor, software proliferation notwithstanding. It might have more software than ever in its products, but Cisco is in the hardware business.

In that respect, Cisco faces the same fundamental challenge that server vendors such as HP, Dell, and — yes — Cisco confront as they address a market that will be radically transformed by the rise of cloud services and ODM-hardware-buying cloud service providers. Can it think, figuratively and literally, outside the box? Just because Cisco develops more software than it did before doesn’t mean the answer is yes, nor does it signify that Cisco has transformed itself into a software vendor.

Let’s look, for example, at Cisco’s approach to SDN. Does anybody really believe that Cisco, with its ongoing attachment to ASIC-based hardware differentiation, will move toward a software-based delivery model that places the primary value on server-based controller software rather than on switches and routers? It’s just not going to happen, because  it’s not what Cisco does or how it operates.

Missing the Signs 

And that brings us to my next objection.  In arguing that Cisco and others have followed the market and provided the software their customers want, Ferro writes the following:

“Billion dollar companies don’t usually miss the obvious and have moved to enhance their software to provide customer value.”

Where to begin? Well, billion-dollar companies frequently have missed the obvious and gotten it horribly wrong, often when at least some individuals within the companies in question knew that their employer was getting it horribly wrong.  That’s partly because past and present successes can sow the seeds of future failure. As Clayton M. Christensen explains in his classic book The Innovator’s Dilemma, industry leaders can have their vision blinkered by past successes, which prevent them from detecting disruptive innovations. In other cases, former market leaders get complacent or fail to acknowledge the seriousness of a competitive threat until it is too late.

The list of billion-dollar technology companies that have missed the obvious and failed spectacularly, sometimes disappearing into oblivion, is too long to enumerate here, but some  names spring readily to mind. Right at the top (or bottom) of our list of industry ignominy, we find Nortel Networks. Once a company valued at nearly $400 billion, Nortel exists today only in thoroughly digested pieces that were masticated by other companies.

Is Cisco Decline Inevitable?

Today, we see a similarly disconcerting situation unfolding at Research In Motion (RIM), where many within the company saw the threat posed by Apple and by the emerging BYOD phenomenon but failed to do anything about it. Going further back into the annals of computing history, we can adduce examples such as Novell and Digital Equipment Corporation, as well as the raft of other minicomputer vendors that perished from the planet after the rise of the PC and client-server computing. Some employees within those companies might even have foreseen their firms’ dark fates, but the organizations in which they toiled were unable to rescue themselves.

They were all huge successes, billion-dollar companies, but, in the face of radical shifts in industry and market dynamics, they couldn’t change who and what they were.  The industry graveyard is full of the carcasses of companies that were once enormously successful.

Am I saying this is what will happen to Cisco in an era of software-defined networking? No, I’m not prepared to make that bet. Cisco should be able to adapt and adjust better than the aforementioned companies were able to do, but it’s not a given. Just because Cisco is dominant in the networking industry today doesn’t mean that it will be dominant forever. As the old investment disclaimer goes, past performance does not guarantee future results. What’s more, Cisco has shown a fallibility of late that was not nearly as apparent in its boom years more than a decade ago.

Early Days, Promising Future

Finally, I’m not sure that Ferro is correct when he says the Open Networking Foundation’s (ONF) board members and its biggest service providers, including Google, will achieve CapEx but not OpEx savings with SDN. We really don’t know whether these companies are deriving OpEx savings, because they’re keeping what they do with their operations and infrastructure highly confidential. Suffice it to say, they see compelling reasons to move away from buying their networking gear from the industry’s leading vendors, and they see similarly compelling reasons to embrace SDN.

Ferro ends his piece with two statements, the first of which I agree with wholeheartedly:

“That is the future of Software Defined Networking – better, dynamic, flexible and business focussed networking. But probably not much cheaper in the long run.”

As for that last statement, I believe there is insufficient evidence on which to render a verdict. As we’ve noted before, these are early days for SDN.

Hardware Elephant in the HP Cloud

Taking another run at cloud computing, HP made news today with its strategy for the “Converged Cloud,” which focuses on hybrid cloud environments and provides a common architecture that spans existing data centers as well as private and public clouds.

In finally diving into infrastructure as a service (IaaS), with a public beta of HP Public Infrastructure as a Service slated for May 10, HP will go up against current IaaS market leader Amazon Web Services.

HP will tap OpenStack and hypervisor neutrality as it joins the battle. Not surprisingly, it also will leverage its own hardware portfolio for compute, storage, and networking — HP Converged Infrastructure, which it already has promoted for enterprise data centers — as well as a blend of software and services that is meant to provide bonding agents to keep customers in the HP fold regardless of where and how they want to run their applications.

Trying to Set the Cloud Agenda

In addition to HP Public Infrastructure as a Service — providing on-demand compute instances or virtual machines, online storage capacity, and cached content delivery — HP Cloud Services also will unveil a private beta of a relational database service for MySQL and a block storage service that supports movement of data from one compute instance to another.

While HP has chosen to go up against AWS in IaaS — though it apparently is targeting a different constituency from the one served by Amazon — perhaps the bigger story is that HP also will compete with other service providers, including other OpenStack purveyors.

There’s some risk in that decision, no question, but perhaps not as much as one might think. The long-term trend, already established at the largest cloud service providers on the planet, is to move away from branded, vanity hardware in favor of no-frills boxes from original design manufacturers (ODMs).  This will not only affect servers, but also storage and networking hardware, the latter of which has seen the rise of merchant silicon. HP can read the writing on the data-center wall, and it knows that it must attempt to set the cloud agenda, or cede the floor and watch its hardware sales atrophy.

Software and Services as Hooks

Hybrid clouds are HP’s best bet, though far from a sure thing. Indeed, one can interpret  HP’s Converged Cloud as a bulwark against what it would perceive as a premature decline in its hardware business.

Simply packaging and reselling OpenStack and a hypervisor of the customer’s choice wouldn’t achieve HP’s “sticky” business objectives, so it is tapping its software and services for the hooks and proprietary value that will keep customers from straying.

For managing hybrid environments, HP has its new Cloud Maps, which provides a catalogue of prepackaged application templates to speed deployment of enterprise cloud-services applications.

To test the applications, the company offers HP Service Virtualization 2.0, which enables enterprise customers to test quality and performance of cloud or mobile applications without interfering with production systems. Meanwhile, HP Virtual Application Networks — which taps HP’s Intelligent Management Center (IMC) and the IMC Virtual Application Networks (VAN) Manager Module — also makes its debut. It is designed to eliminate network-related cloud-services bottlenecks by speeding application deployment, automating management, and ensuring service levels for virtual and cloud applications on HP’s FlexNetwork architecture.

Maintaining and Growing

HP also will launch two new networking services: HP Virtual Network Protection Service, which leverages best practices and is intended to set a baseline for security of network virtualization; and HP Network Cloud Optimization Service, which is intended to help customers enhance their networks for delivery of cloud services.

For enterprises that don’t want to manage their clouds, the company offers HP Enterprise Cloud Services, as well as other services to get enterprises up to speed on how the cloud can best be harnessed.

Whether the software and services will add sufficient stickiness to HP’s hardware business remains to be seen, but there’s no question that HP is looking to maintain existing revenue streams while establishing new ones.

Direct from ODMs: The Hardware Complement to SDN

Subsequent to my return from Network Field Day 3, I read an interesting article published by Wired that dealt with the Internet giants’ shift toward buying networking gear from original design manufacturers (ODMs) rather than from brand-name OEMs such as Cisco, HP Networking, Juniper, and Dell’s Force10 Networks.

The development isn’t new — Andrew Schmitt, now an analyst at Infonetics, wrote about Google designing its own 10-GbE switches a few years ago — but the story confirmed that the trend is gaining momentum and drawing a crowd, which includes brokers and custom suppliers as well as increasing numbers of buyers.

In the Wired article, Google, Microsoft, Amazon, and Facebook were explicitly cited as web giants buying their switches directly from ODMs based in Taiwan and China. These same buyers previously procured their servers directly from ODMs, circumventing brand-name server vendors such as HP and Dell.  What they’re now doing with networking hardware, then, is a variation on an established theme.

The ONF Connection

Just as with servers, the web titans have their reasons for going directly to ODMs for their networking hardware. Sometimes they want a simpler switch than the brand-name networking vendors offer, and sometimes they want certain functionality that networking vendors do not provide in their commercial products. Most often, though, they’re looking for cheap commodity switches based on merchant silicon, which has become more than capable of handling the requirements the big service providers have in mind.

Software is part of the picture, too, but the Wired story didn’t touch on it. Look at the names of the Internet companies that have gone shopping for ODM switches: Google, Microsoft, Facebook, and Amazon.

What do those companies have in common besides their status as Internet giants and their purchases of copious amounts of networking gear? Yes, it’s true that they’re also cloud service providers. But there’s something else, too.

With the exception of Amazon, the other three are board members in good standing of the Open Networking Foundation (ONF). What’s more,  even though Amazon is not an ONF board member (or even a member), it shares the ONF’s philosophical outlook in relation to making networking infrastructure more flexible and responsive, less complex and costly, and generally getting it out of the way of critical data-center processes.

Pica8 and Cumulus

So, yes, software-defined networking (SDN) is the software complement to cloud-service providers’ direct procurement of networking hardware from ODMs.  In the ONF’s conception of SDN, the server-based controller maps application-driven traffic flows to switches running OpenFlow or some other mechanism that provides interaction between the controller and the switch. Therefore, switches for SDN environments don’t need to be as smart as conventional “vertically integrated” switches that combine packet forwarding and the control plane in the same box.

This isn’t just guesswork on my part. Two companies are cited in the Wired article as “brokers” and “arms dealers” between switch buyers and ODM suppliers. Pica8 is one, and Cumulus Networks is the other.

If you visit the Pica8 website, you’ll see that the company’s goal is “to commoditize the network industry and to make the network platforms easy to program, robust to operate, and low-cost to procure.” The company says it is “committed to providing high-quality open software with commoditized switches to break the current performance/price barrier of the network industry.” The company’s latest switch, the Pronto 3920, uses Broadcom’s Trident+ chipset, which Pica8 says can be found in other ToR switches, including the Cisco Nexus 3064, Force10 S4810, IBM G8264, Arista 7050S, and Juniper QFX3500.

That “high-quality open software” to which Pica8 refers? It features XORP open-source routing code, support for Open vSwitch and OpenFlow, and Linux. Pica8 also is a relatively longstanding member of ONF.

Hardware and Software Pedigrees

Cumulus Networks is the other switch arms dealer mentioned in the Wired article. There hasn’t been much public disclosure about Cumulus, and there isn’t much to see on the company’s website. From background information on the professional pasts of the company’s six principals, though, a picture emerges of a company that would be capable of putting together bespoke switch offerings, sourced directly from ODMs, much like those Pica8 delivers.

The co-founders of Cumulus are J.R. Rivers, quoted extensively in the Wired article, and Nolan Leake. A perusal of their LinkedIn profiles reveals that both describe Cumulus as “satisfying the networking needs of large Internet service clusters with high-performance, cost-effective networking equipment.”

Both men also worked at Cisco spin-in venture Nuova Systems, where Rivers served as vice president of systems architecture and Leake served in the “Office of the CTO.” Rivers has a hardware heritage, whereas Leake has a software background, beginning his career building a Java IDE and working in senior positions at VMware and 3Leaf Networks before joining Nuova.

Some of you might recall that 3Leaf’s assets were nearly acquired by Huawei, before the Chinese networking company withdrew its offer after meeting with strenuous objections from the Committee on Foreign Investment in the United States (CFIUS). It was just the latest setback for Huawei in its recurring and unsuccessful attempts to acquire American assets. 3Com, anyone?

For the record, Leake’s LinkedIn profile shows that his work at 3Leaf entailed leading “the development of a distributed virtual machine monitor that leveraged a ccNUMA ASIC to run multiple large (many-core) single system image OSes on a Infiniband-connected cluster of commodity x86 nodes.”

For Companies Not Named Google

Also at Cumulus is Shrijeet Mukherjee, who serves as the startup company’s vice president of software engineering. He was at Nuova, too, and worked at Cisco right up until early this year. At Cisco, Mukherjee focused on “virtualization-acceleration technologies, low-latency Ethernet solutions, Fibre Channel over Ethernet (FCoE), virtual switching, and data center networking technologies.” He boasts of having led the team that delivered the Cisco Virtualized Interface Card (vNIC) for the UCS server platform.

Another Nuova alumnus at Cumulus is Scott Feldman, who was employed at Cisco until May of last year. Among other projects, he served in a leading role on development of “Linux/ESX drivers for Cisco’s UCS vNIC.” (Do all these former Nuova guys at Cumulus realize that Cisco reportedly is offering big-bucks inducements to those who join its latest spin-in venture, Insieme?)

Before moving to Nuova and then to Cisco, J.R. Rivers was involved with Google’s in-house switch design. In the Wired article, Rivers explains the rationale behind Google’s switch design and the company’s evolving relationship with ODMs. Google originally bought switches designed by the ODMs, but now it designs its own switches and has the ODMs manufacture them to its specifications, similar to how Apple designs its iPads and iPhones, then contracts with Foxconn for assembly.

Rivers notes, not without reason, that Google is an unusual company. It can easily design its own switches, but other service providers possess neither the engineering expertise nor the desire to pursue that option. Nonetheless, they still might want the cost savings that accrue from buying bare-bones switches directly from an ODM. This is the market Cumulus wishes to serve.

Enterprise/Cloud-Service Provider Split

Quoting Rivers from the Wired story:

“We’ve been working for the last year on opening up a supply chain for traditional ODMs who want to sell the hardware on the open market for whoever wants to buy. For the buyers, there can be some very meaningful cost savings. Companies like Cisco and Force10 are just buying from these same ODMs and marking things up. Now, you can go directly to the people who manufacture it.”

It has appeal, but only for large service providers, and perhaps also for very large companies that run prodigious server farms, such as some financial-services concerns. There’s no imminent danger of irrelevance for Cisco, Juniper, HP, or Dell, who still have the vast enterprise market and even many service providers to serve.

But this is a trend worth watching, illustrating the growing chasm between the DIY hardware and software mentality of the biggest cloud shops and the more conventional approach to networking taken by enterprises.

Big Switch’s Open Invocation

In my last post, which focused on the nascent market and fluid ecosystem for software defined networking (SDN), I commented on the early jockeying for position in the wide-open controller race.

These still are early days for SDN, especially in the enterprise, where the technology’s footprint is negligible and where networking professionals are inclined to view it as a solution in search of a problem. As such, emergent vendors are trying to get a fast start, hoping that it might be extended into an insurmountable lead in an expanding market. That’s clearly the thinking behind the “Open SDN” strategy at Big Switch Networks.

Big Switch’s conundrum is easy to understand. It seemingly wants to become the Red Hat of SDN, but it first must create a meaningful market for its technology. If all goes according to plan, Big Switch would sell a “premium” version of its Floodlight controller, and it also could provide applications and services that run on it.

Help Wanted

But Big Switch can’t do it alone. It needs other vendors and the broader SDN community to buy into its vision and support the cause. For its controller to succeed, especially among enterprise networking professionals who already tend to be skeptical and even scornful of OpenFlow-based SDN, it will need to enlist third parties to develop and deliver compelling applications and services.

Hence, its “Open SDN” blueprint, which it has trademarked, and which rests on three pillars (networking companies love their pillars):

1) Open Standards, which connotes support for established networking-industry standards (there are plenty from which to choose) as well as for new ones, such as OpenFlow. The desired outcome is easier integration and interoperability between and among products in the SDN ecosystem.

2) Open APIs, which are intended to facilitate the creation of a vibrant ecosystem of infrastructure, network services, and orchestration applications. (A minimal sketch of what programming against a controller API looks like follows this list.)

3) Open Source, which follows the successful community templates formed around Linux, MySQL, and Hadoop, and which is seen as an increasingly important factor as networking becomes more software oriented.
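On that second pillar, it helps to see what “Open APIs” means in practice. Below is a minimal sketch, in Python, of a client asking a Floodlight controller for the OpenFlow switches it manages via the controller’s REST interface. The endpoint path and default port are my assumptions based on Floodlight’s documented defaults, so verify both against the project’s documentation before relying on them.

```python
# Minimal sketch: listing the OpenFlow switches known to a Floodlight
# controller through its REST API. The port (8080) and endpoint path
# are assumptions drawn from Floodlight's defaults; verify against docs.
import json
import urllib.request

CONTROLLER = "http://localhost:8080"  # assumed default Floodlight address

def list_switches(base_url: str = CONTROLLER):
    """Return the controller's view of its connected switches."""
    url = base_url + "/wm/core/controller/switches/json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    for switch in list_switches():
        # Each entry is a JSON object describing one switch; the DPID
        # (datapath identifier) is the switch's unique OpenFlow handle.
        print(switch.get("dpid", switch))
```

The point is less the specific call than the model: applications manipulate the network through a controller’s API rather than through per-box CLIs, which is precisely the ecosystem Big Switch hopes third parties will build on.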

Open Invocation

Some people equate “open” with virtuous, as if a stark Manichean melodrama is unfolding between proprietary black-hat vendors and the good guys in white hats who fly the open-source flag. The truth is, each and every vendor is in business to make money. These are not non-profit organizations with altruistic mandates and motives. Vendors might differ in how they make their money, but not in their common desire to make it.

As a vendor of technology that is disruptive to the networking status quo, Big Switch has little to lose (and potentially much to gain) by playing the open-source card. If it can cultivate a community of application vendors around its Floodlight controller and leverage what it hopes will be a growing pool of OpenFlow-compatible switches, Big Switch will have a fighting chance of making the networking cut against established and neophyte players alike.

Enterprise Resistance

But time, as always, is a critical factor. Big Switch must establish and maintain market momentum, providing evidence of customer wins as early and as often as possible. It’s about inertia and perception, which tend to feed off one another. The company that makes perceptible progress will be well placed to make further perceptible progress, but the company that is seen to stumble shortly after leaving the gate might never recover.

Given the company’s enterprise, private-cloud orientation, Big Switch’s “Open SDN” gambit is probably the right call. It’s another matter entirely as to whether that strategy will be sufficient to overcome the SDN doubts of enterprise networking professionals.

Still Early Days in SDN Ecosystem

Jason Edelman has provided a helpful overview of the software-defined networking (SDN) ecosystem and the vendors currently active within it. Like any form chart, though, it’s a snapshot in time, and therefore subject to change, as I’m sure Edelman would concede.

Still, what Edelman has delivered is a useful contextual framework to understand where many vendors stand today, where “stealth” vendors might attempt to make their marks shortly, and where and how the overall space might evolve.

Edelman presents the somewhat-known entities — Nicira, Big Switch, NEC, and Embrane (L4-7) at the applications/services layer — and he also addresses vendors providing controllers, where no one platform has gained an appreciable commercial advantage because the market remains nascent. He also covers the “switch infrastructure” vendors, which include HP Networking, Netgear, IBM, Pica8, NEC, Arista, Juniper, and others. (In a value-based analysis of the SDN market, “switch infrastructure” is the least interesting layer, but it is essential to have an abundance of interoperable hardware on the market.)

Cards Still to be Played

The real battle, from which it might take considerable time for clear winners to emerge, will occur at the two upper layers, where controller vendors will be looking to win the patronage of purveyors of applications and services. At the moment, the picture is fuzzy. It remains possible that an eventual winner of the inevitable controller-market shakeout has yet to enter the frame.

In that regard, look for established networking players and new entrants to make some noise in the year ahead. Edelman has listed many of them, and I’ve heard that a few more are lurking in the shadows. Names that are likely to be in the news soon include Plexxi, LineRate Systems (another L4-7 player, it seems), and Ericsson (with its OpenFlow/MPLS effort).

These are, as the saying goes, early days.

Report from Network Field Day 3: Infineta’s “Big Traffic” WAN Optimization

Last week, I had the privilege of serving as a delegate at Network Field Day 3 (NFD3), part of Tech Field Day. It actually spanned two days, last Thursday and Friday, and it truly was a memorable and rewarding experience.

I learned a great deal from the vendor presentations (from SolarWinds, NEC, Arista, Infineta on Thursday; from Cisco and Spirent on Friday), and I learned just as much from discussions with my co-delegates, whom I invite you to get to know on Twitter and on their blogs.

The other delegates were great people, with sharp minds and exceptional technical aptitude. They were funny, too. As I said above, I was honored and privileged to spend time in their company.

Targeting “Big Traffic” 

In this post, I will cover our visit with Infineta Systems. Other posts, either directly about NFD3 or indirectly about the information I gleaned from the NFD3 presentations, will follow at later dates as circumstances and time permit.

Infineta contends that WAN optimization comprises two distinct markets: WAN optimization for branch traffic, and WAN optimization for what Infineta terms “big traffic.” Each has different characteristics. WAN optimization for branch traffic is typified by relatively low bandwidth traversing relatively long distances, whereas WAN optimization for “big traffic” is marked by high bandwidth traversing various distances. Given those characteristics, Infineta asserts, the two types of WAN optimization require different system architectures.

Moreover, the two distinct types of WAN optimization also feature different categories of application traffic. WAN optimization for branch traffic is characterized by user-to-machine traffic, which involves a human directly interacting with a device and an application. Conversely, WAN optimization for big traffic, usually data-center to data-center in orientation, features machine-to-machine traffic.

Because different types of buyers are involved, the sales processes for the two types of WAN optimization differ, too.

Applications and Use Cases

Infineta has chosen to go big-game hunting in the WAN-optimization market. It’s chasing “big traffic” with its Data Mobility Switch (DMS), equipped with 10 Gbps of processing capacity and a reputed ROI payback of less than a year.

Deployment of DMS is best suited to application environments that are bandwidth intensive, latency sensitive, and protocol inefficient. Applications that map to those characteristics include high-speed replication, large-scale data backup and archiving, huge file transfers, and the scale-out of growing application traffic. That means deployment typically occurs between two or more data centers that can be hundreds or even thousands of miles apart, employing OC-3 to OC-192 WAN connections.

In Infineta’s presentation to us, the company featured use cases that covered virtual machine disk (VMDK) and database protection as well as high-speed data replication. In each instance, Infineta claimed compelling results in overall performance improvement, throughput, and WAN-traffic reduction.

Dedupe “Crown Jewels”

So, you might be wondering, how does Infineta attain those results? During a demonstration of DMS in action, Infineta took us through the technology in considerable detail. Infineta says its deduplication technologies are its “crown jewels,” and it has filed for and received a mathematically daunting patent to defend them.

At this point, I need to make a brief detour to explain that Infineta’s DMS is a hardware-based product that uses field-programmable gate arrays (FPGAs), whereas Infineta’s primary competitors use software that runs on off-the-shelf PC systems. Infineta decided against a software-based approach — replete with large dictionaries and conventional deduplication algorithms — because it ascertained that the operational overhead and latency implicit in that approach inhibited the performance and scalability its customers required for their data-center applications.

To minimize latency, then, Infineta’s DMS was built with FPGA hardware designed around a multi-gigabit switch fabric. The DMS is the souped-up vehicle that harnesses the power of the company’s approach to deduplication, which is intended to address traditional deduplication bottlenecks relating to disk I/O bandwidth, CPU, memory, and synchronization.

Infineta says its approach to deduplication is typified by an overriding focus on minimizing sequentiality and synchronization, buttressed and served by massive parallelism, computational simplicity, and fixed-size dictionary records.
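To make that description concrete, here is a toy sketch of fixed-record deduplication in Python. It is my own illustration of the general technique, emphatically not Infineta’s patented, FPGA-accelerated method: carve the stream into fixed-size blocks, fingerprint each block, and send a short reference in place of the payload whenever a fingerprint repeats.

```python
# Toy sketch of fixed-record deduplication (an illustration of the
# general technique, not Infineta's patented method): fixed-size blocks,
# one fingerprint per block, references for repeats. Real systems add
# parallel hashing, dictionary eviction, and a receive-side block store.
import hashlib

BLOCK_SIZE = 4096  # assumed record size; fixed sizes keep lookups simple

def dedupe(data: bytes):
    """Yield ('ref', digest) for repeated blocks, ('raw', block) otherwise."""
    seen = set()
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).digest()
        if digest in seen:
            yield ("ref", digest)   # duplicate block: send the fingerprint
        else:
            seen.add(digest)
            yield ("raw", block)    # first sighting: send the full payload

if __name__ == "__main__":
    payload = b"A" * 8192 + b"B" * 4096 + b"A" * 4096
    print([kind for kind, _ in dedupe(payload)])
    # ['raw', 'ref', 'raw', 'ref'] -- half the blocks travel as references
```

Note how the fixed record size keeps every lookup a single hash-table probe, with no sliding windows or sequential scans; that is the sort of computational simplicity that lends itself to the massive parallelism Infineta describes.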

Patent versus Patented Obtuseness

The company’s founder, Dr. K.V.S. (Ram) Ramarao, then explained Infineta’s deduplication patent. I wish I could convey it to you. I did everything in my limited power to grasp its intricacies and nuances — I’m sure everybody in the room could hear my rickety, wooden mental gears turning and smell the wood burning — but my brain blew a fuse and I lost the plot. Have no fear, though: Derick Winkworth, the notorious @cloudtoad on Twitter, likely will address Infineta’s deduplication patent in a forthcoming post at Packet Pushers. He brings a big brain and an even bigger beard to the subject, and he will succeed where I demonstrated only patented obtuseness.

Suffice it to say, Infineta says the techniques described in its patent result in the capacity to scale linearly in lockstep with additional computing resources, effectively obviating the aforementioned bottlenecks relating to disk I/O bandwidth, CPU, memory, and synchronization. (More information on Infineta’s Velocity Dedupe Engine is available on the company’s website.)

Although its crown jewels might reside in deduplication, Infineta also says DMS delivers the goods in TCP optimization, keeping the pipe full across all active connections.
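A back-of-envelope calculation shows why that matters. On a long-haul, high-bandwidth link, the bandwidth-delay product (the amount of data that must be in flight to keep the pipe full) dwarfs a default TCP window. The numbers below are my own illustrative assumptions, not Infineta’s figures.

```python
# Bandwidth-delay product for a long-haul OC-192 link (illustrative
# numbers, not Infineta's): BDP = bandwidth * round-trip time.
OC192_BPS = 9.953e9  # OC-192 line rate in bits per second (~10 Gbps)
RTT_SEC = 0.070      # assumed ~70 ms coast-to-coast round trip

bdp_bytes = OC192_BPS * RTT_SEC / 8
print(f"In-flight data needed to fill the pipe: {bdp_bytes / 1e6:.0f} MB")
# ~87 MB per connection, versus a 64 KB maximum window for TCP without
# window scaling; hence the need for fill-the-pipe techniques.
```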

Not coincidentally, Infineta claims to outstrip its competitors significantly in areas such as throughput, latency, power, space, and “dollar-per-Mbps” delivered. I’m sure those competitors will take issue with Infineta’s claims. As always, the ultimate arbiters are the customers that constitute the demand side of the marketplace.

Fast-Growing Market

Infineta definitely has customers — NaviSite, now part of Time Warner, among them — and if the exuberance and passion of its product managers and technologists are reliable indicators, the company will more than hold its own competitively as it addresses a growing market for WAN optimization between data centers.

Disclosure: As a delegate, my travel and accommodations were covered by Gestalt IT, which is remunerated by vendors for presentation slots at Network Field Day. Consequently, my travel costs (for airfare, for hotel accommodations, and for meals) were covered indirectly by the vendors, but no other recompense, except for the occasional tchotchke, was accepted by me from the vendors involved. I was not paid for my time, nor was I paid to write about the presentations I witnessed.