Category Archives: Network Field Day

Further Progress of Infineta

When I attended Network Field Day 3 (NFD3) in the Bay Area back in late March, the other delegates and I had the pleasure of receiving a presentation on Infineta Systems’ Data Mobility Switch (DMS), a WAN-optimization system built with merchant silicon and designed to serve as a high-performance data-center interconnect for applications such as multi-gigabit Business Continuity/Disaster Recovery (BCDR), cross-site virtualization, and other variations on what Infineta calls “Big Traffic,” a fast-moving sibling of Big Data.

Waiting on Part II

I wrote about Infineta and its DMS, as did some of the other delegates, including cardigan-clad fashionista Tony Bourke  and avowed Networking Nerd Tom Hollingsworth. Meanwhile, formerly hirsute Derick Winkworth, who goes by the handle of Cloud Toad, began a detailed two-part serialization on Infineta and its technology, but he seems to be taking longer to deliver the sequel than it took Francis Ford Coppola to bring us The Godfather: Part II.

Suffice it to say, Infineta got our attention with its market focus (data-center interconnect rather than branch acceleration) and its compelling technological approach to solving the problem.  I thought Winkworth made an astute point in noting that Infineta’s targeting of data-center interconnect means that the performance and results of its DMS can be assessed purely on the basis of statistical results rather than on human perceptions of application responsiveness.

Name that Tune 

Last week, Infineta’s Haseeb Budhani, the company’s chief product officer, gave me an update that coincided with the company’s announcement of FlowTune, a software QoS feature set for the DMS that is intended to deliver the performance guarantees required for applications such as high-speed replication and data backup.

Budhani used a medical analogy to explain why FlowTune is more effective than traditional solutions. FlowTune, he said, takes a preventive approach to network congestion occasioned by contentious application flows, treating the cause of the problem instead of responding to the symptoms. So, whereas conventional approaches rely on packet drops to facilitate congestion recovery, FlowTune dynamically manages application-transmission rates through a multi-flow mechanism that allocates bandwidth credits according to QoS priorities that specify minimum and maximum performance thresholds. As a result, Budhani says, the WAN is fully utilized.
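To make the idea concrete, here is a minimal sketch of credit-based bandwidth allocation with per-flow minimum and maximum thresholds. This is my own illustration of the general technique, not Infineta's implementation; the flow names, priorities, and rates are invented for the example.

```python
def allocate_credits(link_capacity, flows):
    """Grant each flow its guaranteed minimum, then hand out the
    leftover bandwidth in priority order, capped at each flow's max."""
    # Start every flow at its guaranteed minimum rate (Mbps).
    grants = {f["name"]: f["min"] for f in flows}
    remaining = link_capacity - sum(grants.values())
    # Distribute the remainder by priority (lower number = higher priority).
    for f in sorted(flows, key=lambda f: f["priority"]):
        headroom = f["max"] - grants[f["name"]]
        extra = min(headroom, remaining)
        grants[f["name"]] += extra
        remaining -= extra
    return grants

# Hypothetical flows contending for a 10 Gbps (10,000 Mbps) WAN link.
flows = [
    {"name": "replication", "priority": 1, "min": 2000, "max": 8000},
    {"name": "backup",      "priority": 2, "min": 1000, "max": 6000},
    {"name": "bulk-copy",   "priority": 3, "min": 500,  "max": 4000},
]
print(allocate_credits(10_000, flows))
# -> {'replication': 8000, 'backup': 1500, 'bulk-copy': 500}
```

Note that the grants sum to the full link capacity: no flow starves below its minimum, and nothing is left idle, which is the "fully utilized WAN" outcome Budhani describes, achieved without waiting for packet drops to signal congestion.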

Storage Giants

Last week, Infineta and NetApp jointly announced that the former has joined the NetApp Alliance Partner Program. In a blog post, Budhani says Infineta’s relationships with storage-market leaders EMC and NetApp validate his company’s unique capability to deliver “the scale needed by their customers to accelerate traffic running at multi-Gigabit speeds at any distance.”

A software update, FlowTune is available to all Infineta customers. Budhani says it’s already being  used by Time Warner.


Direct from ODMs: The Hardware Complement to SDN

Subsequent to my return from Network Field Day 3, I read an interesting article published by Wired that dealt with the Internet giants’ shift toward buying networking gear from original design manufacturers (ODMs) rather than from brand-name OEMs such as Cisco, HP Networking, Juniper, and Dell’s Force10 Networks.

The development isn’t new — Andrew Schmitt, now an analyst at Infonetics, wrote about Google designing its own 10-GbE switches a few years ago — but the story confirmed that the trend is gaining momentum and drawing a crowd, which includes brokers and custom suppliers as well as increasing numbers of buyers.

In the Wired article, Google, Microsoft, Amazon, and Facebook were explicitly cited as web giants buying their switches directly from ODMs based in Taiwan and China. These same buyers previously procured their servers directly from ODMs, circumventing brand-name server vendors such as HP and Dell.  What they’re now doing with networking hardware, then, is a variation on an established theme.

The ONF Connection

Just as with servers, the web titans have their reasons for going directly to ODMs for their networking hardware. Sometimes they want a simpler switch than the brand-name networking vendors offer, and sometimes they want certain functionality that networking vendors do not provide in their commercial products. Most often, though, they’re looking for cheap commodity switches based on merchant silicon, which has become more than capable of handling the requirements the big service providers have in mind.

Software is part of the picture, too, but the Wired story didn’t touch on it. Look at the names of the Internet companies that have gone shopping for ODM switches: Google, Microsoft, Facebook, and Amazon.

What do those companies have in common besides their status as Internet giants and their purchases of copious amounts of networking gear? Yes, it’s true that they’re also cloud service providers. But there’s something else, too.

With the exception of Amazon, the other three are board members in good standing of the Open Networking Foundation (ONF). What’s more,  even though Amazon is not an ONF board member (or even a member), it shares the ONF’s philosophical outlook in relation to making networking infrastructure more flexible and responsive, less complex and costly, and generally getting it out of the way of critical data-center processes.

Pica8 and Cumulus

So, yes, software-defined networking (SDN) is the software complement to cloud-service providers’ direct procurement of networking hardware from ODMs.  In the ONF’s conception of SDN, the server-based controller maps application-driven traffic flows to switches running OpenFlow or some other mechanism that provides interaction between the controller and the switch. Therefore, switches for SDN environments don’t need to be as smart as conventional “vertically integrated” switches that combine packet forwarding and the control plane in the same box.
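The control-plane/forwarding split described above can be sketched in a few lines. This is a toy illustration of the concept, not OpenFlow itself: the class names and fields are invented, and a real switch would match on many header fields rather than a single destination address.

```python
class Switch:
    """A 'dumb' forwarding element: it only matches packets against
    rules installed by an external controller."""
    def __init__(self):
        self.flow_table = {}  # destination -> output port

    def install_rule(self, dst, out_port):
        self.flow_table[dst] = out_port

    def forward(self, packet):
        # A table miss would normally be punted to the controller
        # (the equivalent of an OpenFlow packet-in).
        return self.flow_table.get(packet["dst"], "controller")

class Controller:
    """Centralized control plane: computes paths and pushes rules down."""
    def __init__(self, switch):
        self.switch = switch

    def handle_new_flow(self, dst, out_port):
        self.switch.install_rule(dst, out_port)

sw = Switch()
ctl = Controller(sw)
ctl.handle_new_flow("10.0.0.2", out_port=1)
print(sw.forward({"dst": "10.0.0.2"}))  # rule hit: forwarded out port 1
print(sw.forward({"dst": "10.0.0.9"}))  # miss: punted to the controller
```

The point is that all the intelligence lives in the controller; the switch itself reduces to a fast lookup table, which is exactly the sort of box an ODM can build cheaply on merchant silicon.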

This isn’t just guesswork on my part. Two companies are cited in the Wired article as “brokers” and “arms dealers” between switch buyers and ODM suppliers. Pica8 is one, and Cumulus Networks is the other.

If you visit the Pica8 website, you’ll see that the company’s goal is “to commoditize the network industry and to make the network platforms easy to program, robust to operate, and low-cost to procure.” The company says it is “committed to providing high-quality open software with commoditized switches to break the current performance/price barrier of the network industry.” The company’s latest switch, the Pronto 3920, uses Broadcom’s Trident+ chipset, which Pica8 says can be found in other ToR switches, including the Cisco Nexus 3064, Force10 S4810, IBM G8264, Arista 7050S, and Juniper QFX3500.

That “high-quality open software” to which Pica8 refers? It features XORP open-source routing code, support for Open vSwitch and OpenFlow, and Linux. Pica8 also is a relatively longstanding member of ONF.

Hardware and Software Pedigrees

Cumulus Networks is the other switch arms dealer mentioned in the Wired article. There hasn’t been much public disclosure about Cumulus, and there isn’t much to see on the company’s website. From background information on the professional pasts of the company’s six principals, though, a picture emerges of a company that would be capable of putting together bespoke switch offerings, sourced directly from ODMs, much like those Pica8 delivers.

The co-founders of Cumulus are J.R. Rivers, quoted extensively in the Wired article, and Nolan Leake. A perusal of their LinkedIn profiles reveals that both describe Cumulus as “satisfying the networking needs of large Internet service clusters with high-performance, cost-effective networking equipment.”

Both men also worked at Cisco spin-in venture Nuova Systems, where Rivers served as vice president of systems architecture and Leake served in the “Office of the CTO.” Rivers has a hardware heritage, whereas Leake has a software background, having begun his career building a Java IDE and held senior positions at VMware and 3Leaf Networks before joining Nuova.

Some of you might recall that 3Leaf’s assets were nearly acquired by Huawei, before the Chinese networking company withdrew its offer after meeting with strenuous objections from the Committee on Foreign Investment in the United States (CFIUS). It was just the latest setback for Huawei in its recurring and unsuccessful attempts to acquire American assets. 3Com, anyone?

For the record, Leake’s LinkedIn profile shows that his work at 3Leaf entailed leading “the development of a distributed virtual machine monitor that leveraged a ccNUMA ASIC to run multiple large (many-core) single system image OSes on a Infiniband-connected cluster of commodity x86 nodes.”

For Companies Not Named Google

Also at Cumulus is Shrijeet Mukherjee, who serves as the startup company’s vice president of software engineering. He was at Nuova, too, and worked at Cisco right up until early this year. At Cisco, Mukherjee focused on “virtualization-acceleration technologies, low-latency Ethernet solutions, Fibre Channel over Ethernet (FCoE), virtual switching, and data center networking technologies.” He boasts of having led the team that delivered the Cisco Virtualized Interface Card (vNIC) for the UCS server platform.

Another Nuova alumnus at Cumulus is Scott Feldman, who was employed at Cisco until May of last year. Among other projects, he served in a leading role on development of “Linux/ESX drivers for Cisco’s UCS vNIC.” (Do all these former Nuova guys at Cumulus realize that Cisco reportedly is offering big-bucks inducements to those who join its latest spin-in venture, Insieme?)

Before moving to Nuova and then to Cisco, J.R. Rivers was involved with Google’s in-house switch design. In the Wired article, Rivers explains the rationale behind Google’s switch design and the company’s evolving relationship with ODMs. Google originally bought switches designed by the ODMs, but now it designs its own switches and has the ODMs manufacture them to its specifications, much as Apple designs its iPads and iPhones and then contracts with Foxconn for assembly.

Rivers notes, not without reason, that Google is an unusual company. It can easily design its own switches, but other service providers possess neither the engineering expertise nor the desire to pursue that option. Nonetheless, they still might want the cost savings that accrue from buying bare-bones switches directly from an ODM. This is the market Cumulus wishes to serve.

Enterprise/Cloud-Service Provider Split

Quoting Rivers from the Wired story:

“We’ve been working for the last year on opening up a supply chain for traditional ODMs who want to sell the hardware on the open market for whoever wants to buy. For the buyers, there can be some very meaningful cost savings. Companies like Cisco and Force10 are just buying from these same ODMs and marking things up. Now, you can go directly to the people who manufacture it.”

It has appeal, but only for large service providers, and perhaps also for very large companies that run prodigious server farms, such as some financial-services concerns. There’s no imminent danger of irrelevance for Cisco, Juniper, HP, or Dell, who still have the vast enterprise market and even many service providers to serve.

But this is a trend worth watching, illustrating the growing chasm between the DIY hardware and software mentality of the biggest cloud shops and the more conventional approach to networking taken by enterprises.

Report from Network Field Day 3: Infineta’s “Big Traffic” WAN Optimization

Last week, I had the privilege of serving as a delegate at Network Field Day 3 (NFD3), part of Tech Field Day. It actually spanned two days, last Thursday and Friday, and it truly was a memorable and rewarding experience.

I learned a great deal from the vendor presentations (from SolarWinds, NEC, Arista, Infineta on Thursday; from Cisco and Spirent on Friday), and I learned just as much from discussions with my co-delegates, whom I invite you to get to know on Twitter and on their blogs.

The other delegates were great people, with sharp minds and exceptional technical aptitude. They were funny, too. As I said above, I was honored and privileged to spend time in their company.

Targeting “Big Traffic” 

In this post, I will cover our visit with Infineta Systems. Other posts, either directly about NFD3 or indirectly about the information I gleaned from the NFD3 presentations, will follow at later dates as circumstances and time permit.

Infineta contends that WAN optimization comprises two distinct markets: WAN optimization for branch traffic, and WAN optimization for what Infineta terms “big traffic.” Each has different characteristics. WAN optimization for branch traffic is typified by relatively low bandwidth over relatively long distances, whereas WAN optimization for “big traffic” is marked by high bandwidth and traversal of various distances. Given their characteristics, Infineta asserts, the two types of WAN optimization require different system architectures.

Moreover, the two distinct types of WAN optimization also feature different categories of application traffic. WAN optimization for branch traffic is characterized by user-to-machine traffic, which involves a human directly interacting with a device and an application. Conversely, WAN optimization for big traffic, usually data-center to data-center in orientation, features machine-to-machine traffic.

Because different types of buyers are involved, the sales processes for the two types of WAN optimization are different, too.

Applications and Use Cases

Infineta has chosen to go big-game hunting in the WAN-optimization market. It’s chasing Big Traffic with its Data Mobility Switch (DMS), equipped with 10 Gbps of processing capacity and a reputed ROI payback of less than a year.

Deployment of DMS is best suited for application environments that are bandwidth intensive, latency sensitive, and protocol inefficient. Applications that map to those characteristics include high-speed replication, large-scale data backup and archiving, huge file transfers, and the scale-out of growing application traffic. That means deployment typically occurs between two or more data centers that can be hundreds or even thousands of miles apart, employing OC-3 to OC-192 WAN connections.

In Infineta’s presentation to us, the company featured use cases that covered virtual machine disk (VMDK) and database protection as well as high-speed data replication. In each instance, Infineta claimed compelling results in overall performance improvement, throughput, and WAN-traffic reduction.

Dedupe “Crown Jewels”

So, you might be wondering, how does Infineta attain those results? During a demonstration of DMS in action, Infineta took us through the technology in considerable detail. Infineta says its deduplication technologies are its “crown jewels,” and it has filed and received a mathematically daunting patent to defend them.

At this point, I need to make a brief detour to explain that Infineta’s DMS is a hardware-based product that uses field programmable gate arrays (FPGAs), whereas Infineta’s primary competitors use software that runs on off-the-shelf PC systems. Infineta decided against a software-based approach — replete with large dictionaries and conventional deduplication algorithms — because it ascertained that the operational overhead and latency implicit in that approach inhibited the performance and scalability its customers required for their data-center applications.

To minimize latency, then, Infineta’s DMS was built with FPGA hardware designed around a multi-Gigabit switch fabric. The DMS is the souped-up vehicle that harnesses the power of the company’s approach to deduplication, which is intended to address traditional deduplication bottlenecks relating to disk I/O bandwidth, CPU, memory, and synchronization.

Infineta says its approach to deduplication is typified by an overriding focus on minimizing sequentiality and synchronization, buttressed and served by massive parallelism, computational simplicity, and fixed-size dictionary records.
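To illustrate just the fixed-size-record idea (and emphatically not Infineta's patented method), here is a generic sketch of fixed-block deduplication: data is split into fixed-size blocks, each block is fingerprinted, and blocks the far end has already seen are replaced by short references. Block size, the dictionary structure, and the truncated digest length are all invented for the example.

```python
import hashlib

BLOCK_SIZE = 64  # fixed-size records keep dictionary lookups simple and parallelizable

def deduplicate(data, dictionary):
    """Split data into fixed-size blocks; emit a short hash reference
    for blocks already in the dictionary, raw bytes otherwise."""
    out = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).digest()[:8]  # truncated fingerprint
        if digest in dictionary:
            out.append(("ref", digest))        # seen before: send tiny reference
        else:
            dictionary[digest] = block         # new block: cache and send raw
            out.append(("raw", block))
    return out

dictionary = {}
stream = b"A" * 64 + b"B" * 64 + b"A" * 64  # third block repeats the first
encoded = deduplicate(stream, dictionary)
print([kind for kind, _ in encoded])  # -> ['raw', 'raw', 'ref']
```

Because each block's fate depends only on its own fingerprint, blocks can be hashed and looked up independently, which is where the massive parallelism and minimized sequentiality come in; the engineering challenge is doing this at multi-gigabit line rate, which is what pushes the design into FPGAs.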

Patent versus Patented Obtuseness

The company’s founder, Dr. K.V.S. (Ram) Ramarao, then explained Infineta’s deduplication patent. I wish I could convey it to you. I did everything in my limited power to grasp its intricacies and nuances — I’m sure everybody in the room could hear my rickety, wooden mental gears turning and smell the wood burning — but my brain blew a fuse and I lost the plot. Have no fear, though: Derick Winkworth, the notorious @cloudtoad on Twitter, likely will address Infineta’s deduplication patent in a forthcoming post at Packet Pushers. He brings a big brain and an even bigger beard to the subject, and he will succeed where I demonstrated only patented obtuseness.

Suffice it to say, Infineta says the techniques described in its patent result in the capacity to scale linearly in lockstep with additional computing resources, effectively obviating the aforementioned bottlenecks relating to disk I/O bandwidth, CPU, memory, and synchronization. (More information on Infineta’s Velocity Dedupe Engine is available on the company’s website.)

Although its crown jewels might reside in deduplication, Infineta also says DMS delivers the goods in TCP optimization, keeping the pipe full across all active connections.

Not coincidentally, Infineta claims to outperform its competitors significantly in areas such as throughput, latency, power, space, and “dollar-per-Mbps” delivered. I’m sure those competitors will take issue with Infineta’s claims. As always, the ultimate arbiters are the customers that constitute the demand side of the marketplace.

Fast-Growing Market

Infineta definitely has customers — NaviSite, now part of Time Warner, among them — and if the exuberance and passion of its product managers and technologists are reliable indicators, the company will more than hold its own competitively as it addresses a growing market for WAN optimization between data centers.

Disclosure: As a delegate, my travel and accommodations were covered by Gestalt IT, which is remunerated by vendors for presentation slots at Network Field Day. Consequently, my travel costs (for airfare, for hotel accommodations, and for meals) were covered indirectly by the vendors, but no other recompense, except for the occasional tchotchke, was accepted by me from the vendors involved. I was not paid for my time, nor was I paid to write about the presentations I witnessed.