Category Archives: Riverbed Technology

Understanding Cisco’s Relationship to SDN Market

Analysts and observers have variously applauded or denounced Cisco for its Cisco ONE network-programmability pronouncements last week. Some pilloried the company for being tentative in its approach to SDN, contrasting the industry giant’s perceived reticence with its aggressive pursuit of previous emerging technology markets such as IP PBX, videoconferencing, and converged infrastructure (servers).

Conversely, others have lauded Cisco’s approach to SDN as far more aggressive than its lackluster reply to challenges in market segments such as application-delivery controllers (ADCs) and WAN optimization, where F5 and Riverbed, respectively, demonstrated how a tightly focused strategy and expertise above the network layer could pay off against Cisco.

Different This Time

But I think both camps have missed a very important point about Cisco’s relationship to the emerging SDN market. Analogies and comparisons should be handled with care. Close inspection reveals that SDN and the applications it enables represent a completely different proposition from the markets mentioned above.

Let’s break this down by examining Cisco’s aggressive pursuit of IP-based voice and video. It’s not a mystery as to why Cisco chose to charge headlong into those markets. They were opportunities for Cisco to pursue its classic market adjacencies in application-related extensions to its hegemony in routing and switching. Cisco also saw video as synergistic with its core network-infrastructure business because it generated bandwidth-intensive traffic that filled up existing pipes and required new, bigger ones.

Meanwhile, Cisco’s move into UCS servers was driven by strategic considerations. Cisco wanted the extra revenue servers provided, but it also wanted to preemptively seize the advantage over its former server partners (HP, Dell, IBM) before they decided to take the fight to Cisco. What’s more, all the aforementioned vendors confronted the challenge of continuing to grow their businesses and public-market stock prices in markets that were maturing and slowing.

Cisco’s reticence to charge into WAN optimization and ADCs also is explicable. Strategically, at the highest echelons within Cisco, the company viewed these markets as attractive, but not as essential extensions to its core business. The difficulty was not only that Cisco didn’t possess the DNA or the acumen to play in higher-layer network services — though that was definitely a problem — but also that Cisco did not perceive those markets as conferring sufficiently compelling rewards or strategic advantages to warrant the focus and resources necessary for market domination. Hence, we have F5 Networks and its ADC market leadership, though certainly F5’s razor-sharp focus and sustained execution factored heavily into the result.

To Be Continued

Now, let’s look at SDN. For Cisco, what sort of market does it represent? Is it an opportunity to extend its IP-based hegemony, like voice, video, and servers? No, not at all. Is it an adjunct market, such as ADCs and WAN optimization, that would be nice to own but isn’t seen as strategically critical or sufficiently large to move the networking giant’s stock-price needle? No, that’s not it, either.

So, what is SDN’s market relationship to Cisco?

Simply put, it is a potential existential threat, which makes it unlike IP PBXes, videoconferencing, compute hardware, ADCs, and WAN optimization. SDN is a different sort of beast, for reasons that have been covered here and elsewhere many times.  Therefore, it necessitates a different sort of response — carefully calculated, precisely measured, and thoroughly plotted. For Cisco, the ONF-sanctioned approach to SDN is not an opportunity that the networking giant can seize,  but an incipient threat to the lifeblood of its business that it must blunt and contain — and, whatever else, keep out of its enterprise redoubt.

Did Cisco achieve its objective? That’s for a subsequent post.


Report from Network Field Day 3: Infineta’s “Big Traffic” WAN Optimization

Last week, I had the privilege of serving as a delegate at Network Field Day 3 (NFD3), part of Tech Field Day. The event spanned two days, last Thursday and Friday, and it truly was a memorable and rewarding experience.

I learned a great deal from the vendor presentations (from SolarWinds, NEC, Arista, Infineta on Thursday; from Cisco and Spirent on Friday), and I learned just as much from discussions with my co-delegates, whom I invite you to get to know on Twitter and on their blogs.

The other delegates were great people, with sharp minds and exceptional technical aptitude. They were funny, too. As I said above, I was honored and privileged to spend time in their company.

Targeting “Big Traffic” 

In this post, I will cover our visit with Infineta Systems. Other posts, either directly about NFD3 or indirectly about the information I gleaned from the NFD3 presentations, will follow at later dates as circumstances and time permit.

Infineta contends that WAN optimization comprises two distinct markets: WAN optimization for branch traffic, and WAN optimization for what Infineta terms “big traffic.” Each has different characteristics. WAN optimization for branch traffic is typified by relatively low bandwidth traversing relatively long distances, whereas WAN optimization for “big traffic” is marked by high bandwidth traversing a variety of distances. Given those characteristics, Infineta asserts, the two types of WAN optimization require different system architectures.

Moreover, the two distinct types of WAN optimization also feature different categories of application traffic. WAN optimization for branch traffic is characterized by user-to-machine traffic, which involves a human directly interacting with a device and an application. Conversely, WAN optimization for big traffic, usually data-center to data-center in orientation, features machine-to-machine traffic.

Because different types of buyers are involved, the sales processes for the two types of WAN optimization differ, too.

Applications and Use Cases

Infineta has chosen to go big-game hunting in the WAN-optimization market. It’s chasing “big traffic” with its Data Mobility Switch (DMS), equipped with 10 Gbps of processing capacity and a reputed ROI payback of less than a year.

Deployment of DMS is best suited for application environments that are bandwidth intensive, latency sensitive, and protocol inefficient. Applications that map to those characteristics include high-speed replication, large-scale data backup and archiving, huge file transfers, and the scale-out of growing application traffic. That means deployment typically occurs between two or more data centers that can be hundreds or even thousands of miles apart, employing OC-3 to OC-192 WAN connections.

In Infineta’s presentation to us, the company featured use cases that covered virtual machine disk (VMDK) and database protection as well as high-speed data replication. In each instance, Infineta claimed compelling results in overall performance improvement, throughput, and WAN-traffic reduction.

Dedupe “Crown Jewels”

So, you might be wondering, how does Infineta attain those results? During a demonstration of DMS in action, Infineta took us through the technology in considerable detail. Infineta says its deduplication technologies are its “crown jewels,” and it has filed for and received a mathematically daunting patent to defend them.

At this point, I need to make a brief detour to explain that Infineta’s DMS is a hardware-based product that uses field-programmable gate arrays (FPGAs), whereas Infineta’s primary competitors use software that runs on off-the-shelf PC systems. Infineta decided against a software-based approach — replete with large dictionaries and conventional deduplication algorithms — because it ascertained that the operational overhead and latency implicit in that approach inhibited the performance and scalability its customers required for their data-center applications.

To minimize latency, then, Infineta’s DMS was built with FPGA hardware designed around a multi-gigabit switch fabric. The DMS is the souped-up vehicle that harnesses the power of the company’s approach to deduplication, which is intended to address traditional deduplication bottlenecks relating to disk I/O bandwidth, CPU, memory, and synchronization.

Infineta says its approach to deduplication is typified by an overriding focus on minimizing sequentiality and synchronization, buttressed and served by massive parallelism, computational simplicity, and fixed-size dictionary records.
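Infineta’s patented pipeline is well beyond my ability to reproduce here, but the general idea of fixed-size-record deduplication can be sketched briefly. The Python fragment below is a minimal, illustrative sketch under my own assumptions (record size, hash choice, in-memory dictionary); it is not a description of the DMS implementation.

```python
import hashlib

RECORD_SIZE = 8 * 1024  # illustrative fixed record size, not Infineta's actual value


def dedupe_stream(data: bytes, dictionary: dict) -> list:
    """Split a byte stream into fixed-size records and replace records the
    far side already holds with short fingerprint references."""
    tokens = []
    for offset in range(0, len(data), RECORD_SIZE):
        record = data[offset:offset + RECORD_SIZE]
        digest = hashlib.sha256(record).digest()
        if digest in dictionary:
            tokens.append(("ref", digest))      # already synchronized: send the fingerprint only
        else:
            dictionary[digest] = record         # first sighting: remember it and send raw bytes
            tokens.append(("raw", record))
    return tokens
```

Because every record and every dictionary entry is the same size, lookups can be sharded across many parallel engines without variable-length bookkeeping, which is consistent with the emphasis on massive parallelism and fixed-size dictionary records in Infineta’s description.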

Patent versus Patented Obtuseness

The company’s founder, Dr. K.V.S. (Ram) Ramarao, then explained Infineta’s deduplication patent. I wish I could convey it to you. I did everything in my limited power to grasp its intricacies and nuances — I’m sure everybody in the room could hear my rickety, wooden mental gears turning and smell the wood burning — but my brain blew a fuse and I lost the plot. Have no fear, though: Derick Winkworth, the notorious @cloudtoad on Twitter, likely will be addressing Infineta’s deduplication patent in a forthcoming post at Packet Pushers. He brings a big brain and an even bigger beard to the subject, and he will succeed where I demonstrated only patented obtuseness.

Suffice it to say, Infineta says the techniques described in its patent result in the capacity to scale linearly in lockstep with additional computing resources, effectively obviating the aforementioned bottlenecks relating to disk I/O bandwidth, CPU, memory, and synchronization. (More information on Infineta’s Velocity Dedupe Engine is available on the company’s website.)

Although its crown jewels might reside in deduplication, Infineta also says DMS delivers the goods in TCP optimization, keeping the pipe full across all active connections.

Not coincidentally, Infineta claims a significant edge over its competitors in areas such as throughput, latency, power, space, and “dollars per Mbps” delivered. I’m sure those competitors will take issue with Infineta’s claims. As always, the ultimate arbiters are the customers that constitute the demand side of the marketplace.

Fast-Growing Market

Infineta definitely has customers — NaviSite, now part of Time Warner, among them — and if the exuberance and passion of its product managers and technologists are reliable indicators, the company will more than hold its own competitively as it addresses a growing market for WAN optimization between data centers.

Disclosure: As a delegate, my travel and accommodations were covered by Gestalt IT, which is remunerated by vendors for presentation slots at Network Field Day. Consequently, my travel costs (for airfare, for hotel accommodations, and for meals) were covered indirectly by the vendors, but no other recompense, except for the occasional tchotchke, was accepted by me from the vendors involved. I was not paid for my time, nor was I paid to write about the presentations I witnessed. 

Embrane Emerges from Stealth, Brings Heleos to Light

I had planned to write about something else today — and I still might get around to it — but then Embrane came out of stealth mode. I feel compelled to comment, partly because I have written about the company previously, but also because what Embrane is doing deserves notice.

Embrane’s Heleos

With regard to that previous post, which dealt with Dell acquisition candidates in Layer 4-7 network services, I am now persuaded that Dell is more likely to pull the trigger on a deal for an A10 Networks, let’s say, than it is to take a more forward-looking leap at venture-funded Embrane. That’s because I now know about Embrane’s technology, product positioning, and strategic direction, and also because I strongly suspect that Dell is looking for a purchase that will provide more immediate payback within its installed base and current strategic orientation.

Still, let’s put Dell aside for now and focus exclusively on Embrane.

The company’s founders, former Andiamo-Cisco lads Dante Malagrinò and Marco Di Benedetto, have taken their company out of the shadows and into the light with their announcement of Heleos, which Embrane calls “the industry’s first distributed software platform for virtualizing layer 4-7 network services.” What that means, according to Embrane, is that cloud service providers (CSPs) and enterprises can use Heleos to build more agile networks to deliver cloud-based infrastructure as a service (IaaS). I can perhaps see the qualified utility of Heleos for the former, but I think the applicability and value for the latter constituency is more tenuous.

Three Wise Men

But I am getting ahead of myself, putting the proverbial cart before the horse. So let’s take a step back and consult some learned minds (including an “ethereal” one) on what Heleos is, how it works, what it does, and where and how it might confer value.

Since the Embrane announcement hit the newswires, I have read expositions on the company and its new product from The 451 Group’s Eric Hanselman, from rock-climbing Ivan Pepelnjak (technical director at NIL Data Communications), and from EtherealMind’s Greg Ferro. Each has provided valuable insight and analysis. If you’re interested in learning about Embrane and Heleos, I encourage you to read what they’ve written on the subject. (Only one of Hanselman’s two 451 Group pieces is available publicly online at no charge.)

Pepelnjak provides an exemplary technical description and overview of Heleos. He sets out the problem it’s trying to solve, considers the pros and cons of the alternative solutions (hardware appliances and virtual appliances), expertly explores Embrane’s architecture, examines use cases, and concludes with a tidy summary. He ultimately takes a positive view of Heleos, depicting Embrane’s architecture as “one of the best proposed solutions” he’s seen hitherto for scalable virtual appliances in public and private cloud environments.

Limited Upside

Ferro reaches a different conclusion, but not before setting the context and providing a compelling description of what Embrane does. After considering Heleos, Ferro ascertains that its management of IP flows equates to “flow balancing as a form of load balancing.” From all that I’ve read and heard, it seems an apt classification. He also notes that Embrane, while using flow management, is not an “OpenFlow/SDN business.” Although I see conceptual similarities between what Embrane is doing and what OpenFlow does, I agree with Ferro, if only because, as I understand it, OpenFlow reaches no higher than the network layer. I suppose the same is true for SDN, but this is where ambiguity enters the frame.
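To make the “flow balancing” classification a little more concrete, here is a minimal sketch of flow-based distribution in Python. It simply hashes a flow’s 5-tuple so that every packet of that flow lands on the same instance; the instance names and hash choice are my own illustrative assumptions, not a description of Heleos internals.

```python
import hashlib

INSTANCES = ["instance-1", "instance-2", "instance-3"]  # hypothetical service instances


def pick_instance(src_ip: str, src_port: int, dst_ip: str, dst_port: int, proto: int) -> str:
    """Hash the 5-tuple so all packets of a given flow map to the same instance."""
    key = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{proto}".encode()
    index = int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % len(INSTANCES)
    return INSTANCES[index]
```

For example, pick_instance("10.1.1.5", 49152, "192.0.2.10", 443, 6) always returns the same instance for that connection, which is what allows per-flow state to stay local to the instance handling it.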

Even as I wrote this piece, there was a kerfuffle on Twitter as to whether or to what extent Embrane’s Heleos can be categorized as the latest manifestation of SDN. (Hours later, at post time, this vigorous exchange of views continues.)

That’s an interesting debate — and I’m sure it will continue — but I’m most intrigued by the business and market implications of what Embrane has delivered. On that score, Ferro sees Embrane’s platform play as having limited upside, restricted to large cloud-service providers with commensurately large data centers. He concludes there’s not much here for enterprises, a view with which I concur.

Competitive Considerations

Hanselman covers some of the same ground that Ferro and Pepelnjak traverse, but he also expends some effort examining the competitive landscape that Embrane is entering. Because Embrane is delivering a virtualization platform for network services, it will be up against Layer 4-7 stalwarts such as F5 Networks, A10 Networks, Riverbed/Zeus, Radware, Brocade, Citrix, and Cisco, among others. F5, the market leader, already recognizes and is acting upon some of the market and technology drivers that doubtless inspired the team that brought Heleos to fruition.

With that in mind, I wish to consider Embrane’s business prospects.

Embrane closed a Series B round of $18 million in August. It was led by New Enterprise Associates and included the involvement of Lightspeed Venture Partners and North Bridge Venture Partners, both of which participated in a $9-million Series A round in March 2010.

To determine whether Embrane is a good horse to back (hmm, what’s with the horse metaphors today?), one has to consider the applicability of its technology to its addressable market — very large cloud-service providers — and then also project its likelihood of providing a solution that is preferable and superior to alternative approaches and competitors.

Counting the Caveats

While I tend to agree with those who believe Embrane will find favor with at least some large cloud-service providers, I wonder how much favor there is to find. There are three compelling caveats to Embrane’s commercial success:

  1. L4-7 network services, while vitally important to cloud service providers and large enterprises, represent a much smaller market than L2-L3 networking, virtualized or otherwise. As a benchmark, Dell’Oro reported earlier this year that the L2-3 Ethernet switch market would be worth approximately $25 billion in 2015, with the L4-7 application delivery controller (ADC) market expected to reach more than $1.5 billion, though the virtual-appliance segment is expected to show the most growth in that space. Some will say, accurately, that L4-7 network services are growing faster than L2-3 networking. Even so, the gap in size remains notable, which is why SDN and OpenFlow have been drawing so much attention in an increasingly virtualized and “cloudified” world.
  2. Embrane’s focus on large-scale cloud service providers, and not on enterprises (despite what’s stated in the press release), while rational and perfectly understandable, further circumscribes its addressable market.
  3. F5 Networks is a tough competitor, more agile and focused than a Cisco Systems, and will not easily concede customers or market share to a newcomer. Embrane might have to pick up scraps that fall to the floor rather than feasting at the head table. At this point, I don’t think F5 is concerned about Embrane, though that could change if Embrane can use NaviSite — its first customer, now owned by TimeWarner Cable — as a reference account and validator for further business among cloud service providers.

Notwithstanding those reservations, I look forward to seeing more of Embrane as we head into 2012. The company has brought a creative approach and an innovative platform architecture to market, a higher-layer counterpart and analog to what’s happening further down the stack with SDN and OpenFlow.

ONF Board Members Call OpenFlow Tune

The concept of software-defined networking (SDN) has generated considerable interest during the last several months.  Although SDNs can be realized in more than one way, the OpenFlow protocol seems to have drawn a critical mass of prospective customers (mainly cloud-service providers with vast data centers) and solicitous vendors.

If you aren’t up to speed with the basics of software-defined networking and OpenFlow, I suggest you visit the Open Networking Foundation (ONF) and OpenFlow websites to familiarize yourself with the underlying ideas. Others have written some excellent articles on the technology, its perceived value, and its potential implications.

In a recent piece he wrote originally for GigaOm, Kyle Forster of Big Switch Networks offers this concise definition:

Concisely Defined

“At its most basic level, OpenFlow is a protocol for server software (a “controller”) to send instructions to OpenFlow-enabled switches, where these instructions give direct control over how those switches forward traffic through the network.

I think of OpenFlow like an x86 instruction set for the network – it’s low-level, but it’s very powerful. Continuing that analogy, if you read the x86 instruction set for the first time, you might walk away thinking it could be useful if you need to build a fancy calculator, but using it to build Linux, Apache, Microsoft Word or World of Warcraft wouldn’t exactly be obvious. Ditto for OpenFlow. It isn’t the protocol that is interesting by itself, but rather all of the layers of software that are starting to emerge on top of it, similar to the emergence of operating systems, development environments, middleware and applications on top of x86.”
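As a concrete illustration of the controller-to-switch relationship Forster describes, here is a minimal sketch using the open-source Ryu controller framework (my choice for illustration, not something mentioned in his piece). It pushes a single flow entry to each switch that connects, steering web traffic out a specific port; the match fields and output port are arbitrary assumptions.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class WebSteering(app_manager.RyuApp):
    """Install one flow entry on every switch that connects: send TCP/80 out port 2."""
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=6, tcp_dst=80)  # IPv4, TCP, port 80
        actions = [parser.OFPActionOutput(2)]  # port 2 is an arbitrary example
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))
```

Run under ryu-manager against an OpenFlow 1.3 switch, the application installs the rule once per switch; the “layers of software” Forster mentions are everything that gets built above this primitive.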

Increased Network Functionality, Lower Network Operating Costs

The Open Networking Foundation’s charter summarizes its objectives and the value proposition that advocates of SDN and OpenFlow believe they can deliver:

 “The Open Networking Foundation is a nonprofit organization dedicated to promoting a new approach to networking called Software-Defined Networking (SDN). SDN allows owners and operators of networks to control and manage their networks to best serve their users’ needs. ONF’s first priority is to develop and use the OpenFlow protocol. Through simplified hardware and network management, OpenFlow seeks to increase network functionality while lowering the cost associated with operating networks.”

That last part is the key to understanding the composition of ONF’s board of directors, which includes Deutsche Telekom, Facebook, Google, Microsoft, Verizon, and Yahoo. All of these companies are major cloud-service providers with multiple, sizable data centers. (Yes, Microsoft also is a cloud-technology purveyor, but what it has in common with the other board members is its status as a cloud-service provider that owns and runs data centers.)

Underneath the board of directors are member companies. Most of these are vendors seeking to serve the needs of the ONF board members and similar cloud-service providers that share their business objective: boosting network functionality while reducing the costs associated with network operations.

Who’s Who of Networking

Among the vendor members are a veritable who’s who of the networking industry: Cisco, HP, Juniper, Brocade, Dell/Force10, IBM, Huawei, Nokia Siemens Networks, Riverbed, Extreme, and others. Also members, not surprisingly, are virtualization vendors such as VMware and Citrix, as well as the aforementioned Microsoft. There’s a smattering of SDN/OpenFlow startups, too, such as Big Switch Networks and Nicira Networks.

Of course, membership does not necessarily entail avid participation. Some vendors, including Cisco, likely would not be thrilled at any near-term prospect of OpenFlow’s widespread market adoption. Cisco would be pleased to see the networking status quo persist for as long as possible, and its involvement in ONF probably is more that of vigilant observer than of fervent proponent. In fact, many vendors are taking a wait-and-see approach to OpenFlow. Some members, including Force10, are bearish and have suggested that the protocol is a long way from delivering the maturity and scalability that would satisfy enterprise customers.

Vendors Not In Charge

Still, the board members are steering the ONF ship, not the vendors. Regardless of when OpenFlow or something like it comes of age, the rise of software-defined networking seems inevitable. Servers and storage gear have been virtualized and have become more application-driven, but networks haven’t changed much in the last several years. They’re faster, yes, but they’re still provisioned in the traditional manner, configured rather than programmed. That takes time, consumes resources, and costs money.

Major cloud-service providers, such as those on the ONF board, want network infrastructure to become more elastic, flexible, and dynamic. Vendors will have to respond accordingly, whether with OpenFlow or with some other approach that delivers similar operational outcomes and business benefits.

I’ll be following these developments closely, watching to see how the business concerns of the cloud providers and the business interests of the networking-vendor community ultimately reconcile.

Riverbed’s World-Spanning Acquisitions

Despite some time-constrained, desultory, and ultimately fruitless investigations by your intrepid correspondent, I was unable to determine whether Riverbed Technology’s just-announced acquisitions of virtual application delivery controller (vADC) specialist Zeus Technology and Web-content optimization vendor Aptimize Limited were conditioned by its having most of its available cash outside the USA.

Looks Good on Paper

Don’t misunderstand. I’m not saying these were bad acquisitions. In fact, on paper, these buys look relatively good. Much depends, as it always does, on execution — on how well Riverbed integrates, assimilates, and monetizes its new properties — but strategically there’s not much to dislike about these moves.

Still, it’s interesting that Riverbed bought two companies half a world apart from one another, and another half world away from Riverbed itself. The company’s executives must have racked up prodigious air miles during their due diligence.

There’s nothing wrong with that, of course. The airline industry could use the support. More to the point, these acquisitions could come together to fulfill a strategic vision that will see Riverbed deliver integrated WAN optimization, Web-app optimization, and application traffic management for virtualized and cloud customers worldwide. Riverbed calls the concept “asymmetric optimization” — just one box is required, sitting in a data center — and it believes the approach can become more than a lucrative niche.

The bigger of the two acquisitions involved UK-based Zeus Technology. Riverbed will pay $110 million upfront for Zeus, and perhaps another $30 million in performance-based bonuses. Zeus, which took a long and winding road toward its ultimate raison d’être in application delivery and load balancing, is highly regarded by knowledgeable market watchers and a growing stable of customers, Rackspace among them.

Zeus Takes Riverbed Into New Battle

Zeus’ virtual traffic manager, which runs on all the major hypervisors, has been installed by about 15,000 customers worldwide. The company apparently generated $15 million in revenue this year, and Riverbed, perhaps underpromising so that it can overdeliver, projects that Zeus’ offerings will account for about $20 million in revenue during their first year under Riverbed’s expanding corporate tent.

In my view, though, the Zeus acquisition isn’t only the bigger of the two, but it’s also more fraught with risk. Yes, it makes sense, presuming the aforementioned plan comes together without a hitch, but it also takes Riverbed into intensive competition with ADC kingpin F5 Networks.

Until now, Riverbed has stayed clear of F5’s core market, which it leads with considerable aplomb. Now Riverbed, already fighting a tough battle against a number of WAN-optimization players, must go up against a strong leader in the ADC space. How well can it fight on two major fronts simultaneously? Part of the answer, I suspect, hinges on how quickly customers begin to perceive the two fronts (or is it three?) as one. If or when that happens, Riverbed’s three-pronged value proposition — WAN optimization, Web-app optimization, and application delivery — will give it the edge it craves.

Mind you, F5 won’t be standing still. It will be interesting to see how this battle plays out in customer accounts.

Kiwis on the Move

The other Riverbed acquisition, of New Zealand-based Aptimize, looks a safer bet. According to reports, Riverbed paid less than $20 million for Aptimize, but the acquired company’s backers could collect more than $30 million if an “earn-out clause” in the deal is fulfilled.

Aptimize’s Website Accelerator, according to Riverbed, “reorders, merges and resizes content, essentially transforming it in real time . . . to deliver the application up to four times faster.” It’s closer conceptually and practically to what Riverbed does today than is the Zeus technology, making it easier to integrate, package, and sell to the company’s existing customers.
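As a rough illustration of the kind of real-time transformation Riverbed describes (and emphatically not Aptimize’s actual engine), here is a minimal Python sketch that merges several locally hosted script references on a page into a single bundle to cut round trips; the file layout and regular expression are simplifying assumptions.

```python
import re
from pathlib import Path


def bundle_scripts(html: str, doc_root: str, bundle_name: str = "bundle.js") -> str:
    """Collapse multiple local <script src="..."> tags into one bundled request."""
    srcs = re.findall(r'<script\s+src="(/[^"]+\.js)"\s*></script>', html)
    if len(srcs) < 2:
        return html  # nothing worth merging
    merged = "\n;\n".join(Path(doc_root + src).read_text() for src in srcs)
    Path(doc_root, bundle_name).write_text(merged)          # publish the single bundle
    stripped = re.sub(r'<script\s+src="/[^"]+\.js"\s*></script>\s*', "", html)
    return stripped.replace("</body>", f'<script src="/{bundle_name}"></script></body>')
```

Real Web-content optimizers do far more (image resizing, CSS merging, cache-header rewriting, and so on), but the sketch conveys why the technique dovetails with Riverbed’s existing acceleration story.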

While the Zeus team will remain in England, the Aptimize team, comprising co-founder Edward Robinson and ten engineers, will relocate from New Zealand to the Bay Area.

As an aside, though not to investors, Riverbed stock plunged vertiginously on the markets today after the high-flying company, whose shares had soared during the past year, missed its revenue number, disappointing punters and analysts who tend to be unforgiving about such things.

Handicapping Dell Networking Acquisition Candidates

There’s a strong possibility that Dell will make a networking acquisition in the near future. In the spirit of fun, I thought it would be mildly entertaining, and perhaps edifying — though I don’t want to push it — to handicap the field of potential candidates, providing morning-line odds for each vendor.

Brocade 5-2

I addressed the Dell-Brocade scenario in a previous post.

Even though there are reasons Dell might not pursue Brocade, the company is a logical candidate and should be considered the favorite. As any gambler can tell you, however, favorites don’t always win, and there’s a chance Dell will look elsewhere in the field for its networking play.

Juniper Networks 7-1

Dell resells Juniper’s enterprise switches and security boxes under its own PowerConnect brand, but a lot of what Juniper offers, particularly routers to carriers and service providers, isn’t a Dell priority.  What’s more, Juniper would prefer to remain independent, has other major partnerships (especially with IBM), and believes it is well placed to take share from Cisco at carriers and service providers as virtualization proliferates and cloud computing takes hold.

Last, but probably not least, Juniper’s market capitalization, at more than $16 billion, makes it prohibitively expensive. Dell’s cash hoard amounts to more than $14 billion, but I doubt it wants to break the bank  on a single transaction.

Aruba Networks 10-1

Dell sells Aruba’s wireless networking solutions under the Dell PowerConnect W-Series. Aruba is seen to benefit from continued growth in enterprise wireless networking. Still, Dell is probably happy to leave the relationship as it stands.

Enterasys 12-1

The two companies were active partners several years back, but not much is happening today. Not likely.

Arista Networks 7-1

Michael Dell is enthusiastic about the prospects for 10GbE and cloud computing. Arista probably isn’t willing to sell, but my guess is that Dell — seeing Arista’s gains against Cisco in financial services, with more possibly to come in other verticals — would be interested.

That said, Arista seems destined for an IPO. The company’s CEO Jayshree Ullal has said she is asked often by customers about Arista’s exit strategy, and she replies that the company’s plan is to remain independent.

Extreme Networks 6-1

Extreme and Dell have an existing partnership, with the former’s switches supporting Dell’s EqualLogic iSCSI SAN arrays. Extreme also has the 10GbE switching of which Michael Dell is so enamored.

Extreme isn’t an industry leader, and it’s still struggling for traction in a competitive marketplace, but it’s active in many verticals where Dell is strong — including healthcare — and Dell might feel it could do relatively well with such a cost-effective purchase. (Extreme’s market capitalization is $314 million.) It could be a good way for Dell to make a modest entry into networking, though it would create complications with existing partners.

Force10 Networks 7-2

Dell partners with Force10 for Layer 3 backbone switches and for Layer 2 aggregation switches. Customers that have deployed Dell/Force10 networks include eHarmony, Salesforce.com, Yahoo, and F5 Networks.

Again, Michael Dell has expressed an interest in 10GbE, and Force10 fits the bill. The company has struggled to break out of its relatively narrow HPC niche, placing increasing emphasis on its horizontal enterprise and data-center capabilities. Dell and Force10 have a history together and have deployed networks in real-world accounts. That could set the stage for a deepening of the relationship, presuming Force10 is realistic about its market valuation.

F5 Networks 8-1

Dell is the largest reseller of F5 products, and the relationship clearly is working for both companies. Dell resells not only F5’s flagship BIG-IP application-traffic controller, but also the company’s ARX file-virtualization appliance.

Dell and F5 have a great partnership, but I think Dell believes F5 isn’t going anywhere — it will likely remain independent, despite the perennial rumors that it could be acquired — and will agree to leave well enough alone.

Riverbed Technology 8-1

Riverbed and Dell are partners, with Riverbed’s Steelhead WAN-optimization appliances and Dell EqualLogic PS Series iSCSI SAN arrays deployed together in disaster-recovery and centralized data-backup applications.

The relationship works, Dell has other near-term priorities, and an acquisition of Riverbed would be relatively pricy and still leave Dell with networking gaps.

Any Others? 

It’s possible Dell will look elsewhere, perhaps at an emerging niche player, so I’ll leave the field open for late entrants. If you think any should be included, let me know.

How Cisco Arrived at the Crossroads

As reports of Cisco’s impending layoffs intensified and spread, I started thinking about how the networking giant got into its current predicament and whether it can escape from it.

One major problem for the company is that the challenges it faces aren’t entirely attributable to its own mistakes. If Cisco’s own bumbling was wholly responsible for the company’s middle-life crisis, one might think it could stop engaging in self-harm, right the ship, and chart a course to renewed prosperity.

Internal Missteps Exacerbated by External Factors

But, even though Cisco has contributed significantly to its own decline — with a byzantine bureaucratic management structure replete with a multitude of executive councils, half-baked forays into consumer markets about which it knew next to nothing, imperial overstretch into too many markets with too many diluted products, and the loss of far too many talented leaders — external factors also played a meaningful role in bringing the company to this crossroads.

Those external factors comprise market dynamics and increasingly effective incursions by competitors into Cisco’s core business of switching and routing, not just in the telco space but increasingly — and more significantly — in enterprise markets, where Cisco heretofore has maintained hegemonic dominance.

If we look into the recent past, we can see that Cisco saw one threat coming well before it actually arrived. Before cloud computing crashed the networking party and threatened to rearrange data-center infrastructure worldwide, Cisco faced the threat of network-gear commoditization from a number of vendors, including the “China-out” 3Com, which had completely remade itself into a Chinese company with an American name through its now-defunct H3C joint venture with Huawei.

Now, of course, 3Com is part of HP Networking, and a big draw for HP when it acquired 3Com was represented by the cost-effective products and low-priced engineering talent that H3C offered. HP reasoned that if Cisco wanted to come after its server market with Unified Computing System (UCS), HP would fight back by attacking the relatively robust margins in Cisco’s bread-and-butter business with aggressively priced networking gear.

Cisco Prescience

HP’s strategy, especially in a baleful macroeconomic world where cost-cutting in enterprises and governments is now an imperative rather than a prerogative, is beginning to bear fruit, as recent market-share gains attest.

Meanwhile, Cisco knew that Huawei, gradually eating into its telecommunications market share in markets outside North America, would eventually seek future growth in the enterprise. It was inevitable, and Cisco had to prepare for the same low-priced, value-based onslaught that Huawei waged so successfully against it in overseas carrier accounts. In the enterprise, Huawei would follow the same telco script, focusing first on overseas markets — in its home market, China, as well as in Asia, the Middle East, Europe, and South America — before making its push into a less-receptive North American market.

That is happening now, as I write this post, but Cisco had the prescience to see it on the horizon years before it actually occurred.

Explaining Drive for Diversification

What do you think that hit-and-miss diversification strategy — into consumer markets, into home networking, into enterprise collaboration with WebEx, into telepresence, into smart grids, into so much else besides — was all about? Cisco was looking to escape getting hit by the bullet train of network commoditization, aimed straight at its core business.

That Cisco has not excelled in its diversification strategy into new markets and technologies shouldn’t come as a surprise. Well before it made those moves, it had failed in diversification efforts much closer to home, in areas such as WAN optimization, where it had been largely unsuccessful against Riverbed, and in load balancing/application traffic management, where F5 had thoroughly beaten back the giant. The truth is, Cisco has a spotty record in truly adjacent or contiguous markets, so it’s no wonder that it has struggled to dominate markets that are further afield.

Game Gets More Complicated

Still, the salient point is that Cisco went into all those markets because it felt it needed to do so, for revenue growth, for margin support, for account control, for stakeholder benefit.

Now, cloud computing, with all its many implications for networking, is roiling the telco, service-provider, and enterprise markets. It’s not certain that Cisco can respond successfully to cloud-centric threats posed by data-center networking vendors such as Juniper Networks and Arista Networks or by technologies such as software-defined networking (as represented by the OpenFlow protocol).

Cisco was already fighting one battle, against the commoditizing Huaweis and 3Coms of the world, and now another front has opened.

Riverbed CEO Confident of Maintaining Competitive Edge over Cisco

As interviews with CEOs go, the one Network World offers with Riverbed’s Jerry Kennelly makes good reading.

At times, like any CEO pitching to the press, Kennelly shucks, jives, spins, and postures. He’s selling his company, delivering a marketing message, and trying to accomplish his media mission. Like other CEOs, he wouldn’t be doing the interview unless he and his company thought it could serve a practical purpose.

Bright Prospects

The overall message Kennelly delivers is that business is good for Riverbed, that it has a defensible leadership position over Cisco in WAN optimization, and that it foresees robust growth and bright prospects for years to come. Kennelly supports his optimistic outlook with carefully reasoned arguments, pointing to technology and business trends, such as data-center consolidation and virtualization, that play to Riverbed’s strengths.

I think he does particularly well explaining how and why Riverbed has been able to outperform Cisco at the upper reaches of the OSI protocol stack. At one point, he mentions that Riverbed is not alone in that regard. He notes that just as F5 Networks gave Cisco a beating in Layer 4-7 application traffic management, Riverbed did likewise in WAN optimization. It’s an accurate observation, and it makes one wonder about what Riverbed and F5 might be able to accomplish together as technology and economic trends continue to furnish each company with growth opportunities.

Wall Street’s Built-In Protection

But the two companies are unlikely to get together, for reasons Kennelly cites late in the interview. As he says, not only is Riverbed not seeking to be acquired, but it’s also a company that the market has afforded with built-in protection against acquisition.

We’re actually somewhat protected by Wall Street because Wall Street shares the vision of Riverbed and has awarded us a strong earnings multiple on our stock price. It’s one of the top multiples. The type of people who would acquire you are the larger, slow growth companies, big technical companies. We’re a high-growth, high-multiple company. They’re all lower growth, low-multiple companies. It’s actually dilutive for them to try to do an acquisition of us. So we have some protection on that front of things. We desire to be a standalone, independent company for a long time, and I think we’re best served by that.

Indeed, Riverbed has a market capitalization of approximately $2.5 billion. Its board would insist on a rich acquisition premium, so any buyer probably would have to part with a minimum of $4 billion to complete the deal. Some big, slower-growth, lower-multiple companies — and I can think of one or two — might be willing to consider such an arrangement, but growing, high-multiple F5, with its market capitalization of $6.8 billion, probably wouldn’t attempt to digest a meal that rich.

No Need for Private-Cloud Garnish

Where Kennelly slips in the interview, and it isn’t a fatal indiscretion by any means, is when he tries to invoke the ambiguous private cloud where it doesn’t belong. He explains, correctly, that Riverbed’s growth is being driven by data-center consolidation. Then, however, he suggests that data-center consolidation is a proxy for the private cloud, which, in Kennelly’s reasoning, leads ineluctably to the public cloud. Here’s the relevant excerpt:

Again, these data center consolidations are a proxy for private cloud computing and then public cloud computing. The genie’s out of the bottle on that. It’s not going back. The ability to connect at the application layer across networks is going to be a permanent requirement of everyone.

The interviewer rightly questions this doubtful syllogism, and Kennelly immediately retreats, waving the whole thing off with the following comment:

Either way, we get their business, whether they do it in the cloud format or in a very traditional corporate data center format. But yeah, I take your point. I’m not trying to push the cloud by the way.

Okay, maybe he wasn’t trying to push the cloud, but it definitely seemed that he was willing to take it for a marketing spin. From the tone and substance of the interview, it appears Riverbed’s marketing mavens must have pressed their CEO to cite the buzzy private cloud at every conceivable opportunity. His heart clearly wasn’t in the puffery, as the remark quoted above demonstrates.

Riverbed doesn’t need to gild the lily. It’s doing well enough without having to resort to buzzword legerdemain.

Implications of HP’s 3Com Buy for Other Networking Players

As I mentioned yesterday, HP didn’t get revolutionary, game-changing products and technologies from its $2.7-billion acquisition of 3Com, a company that has gone through more reinventions and market repositionings than Madonna.

In 3Com’s long and eventful history, it has gone from providing the original Ethernet adapters and hubs for enterprises and small businesses, to an acquisition of Chipcom for its chassis-based hubs and switches, to deserting the enterprise market entirely — even directing its jilted corporate customers into the outstretched arms of Extreme Networks.

Subsequently, after a dalliance with consumer markets, 3Com focused on the SMB space before coming back to enterprise markets in its H3C joint venture with Huawei.

That joint venture is now deceased, with 3Com having bought out Huawei’s interest. It now competes against its former partner for the patronage of customers in China and elsewhere. (This is an important point that some people have gotten entirely wrong. 3Com and Huawei no longer are partners in H3C. The loss of Huawei-related business in China represented a serious drag on H3C revenue and necessitated the “China Out” strategy that 3Com pursued.)

Nevertheless, 3Com was reborn on the foundation of cost-effective Chinese engineering, which I believe was a big draw for HP.

Putting all that aside, what does HP’s buy of 3Com mean for smaller vendors in the marketplace, those left out of this latest installment of industry consolidation?

Let’s start with Juniper, one of the bigger independent networking vendors still on the board. As long as it continues to build on its data-center strategy, and to strengthen its partnerships with IBM and Dell, it should survive HP’s onslaught.

Recently, Juniper underwent a rebranding and repositioning of its own, albeit not as dramatic or radical as some of 3Com’s transformations. Juniper’s overriding message is that it presents a flexible, intelligent, and open alternative to the closed, proprietary systems offered by data-center behemoths Cisco and HP.

To get that message across, Juniper has introduced open, programmable capabilities in its flagship JUNOS software. It also announced new JUNOS chips and systems, including the JUNOS One line of processors and JUNOS Trio chipset with “3D Scaling,” a technology that provides dynamic support for additional subscribers, services, and bandwidth.

Juniper also unveiled new JUNOS-based cloud-networking and security products, including enhancements to Juniper’s SRX Services Gateway as well as modules, implementation guides, and best practices for building a “Cloud Ready Data Center.”

You can see what Juniper is attempting to do.

As much as its server-vendor partners, especially IBM, would like networking hardware to be interchangeable, standards-based commodities managed by an intelligent layer of data-center orchestration software, Juniper is seeking to make itself indispensable by providing its own layer of software intelligence riding atop the network fabric. If it can sell IBM and Dell on the necessity and value of that software, and it can develop and expose interfaces to complementary software its partners are promoting, all should be well and no nasty divorces will ensue.

To survive and perhaps to prosper, Juniper has to execute on its plan and maintain its partnerships.

Now let’s consider Brocade. Reports indicated that HP considered Brocade as an alternative to 3Com. Obviously, HP chose the latter, and I think the decision turned on the lower cost of goods and margin flexibility that 3Com’s enterprise-switching products offered relative to Brocade’s Foundry enterprise-networking gear.

There have been rumors that Dell might buy Brocade, but I think you can discount, if not dismiss, such speculation. Dell is content, for now, to stay with its partnering approach in filling out its data-center strategy. It seems to be mimicking IBM, following a similar plan and establishing similar technology alliances and partnerships. Dell has priorities other than big-ticket computer-networking acquisitions, and I can see it buying storage- and virtualization-software companies well before it gives consideration to a networking buy.

So, despite its best efforts to flog itself, Brocade appears orphaned.

The same story applies to Extreme Networks, which is left without a bigger corporate home to move into. Like Juniper, Extreme seems to have had a good indication where the industry — and perhaps HP — was heading, because it recently restructured and retrenched to significantly reduce its operating expenditures.

Extreme will suffer from the broader consolidation in the industry. Its first priority is to defend its installed base from competitive incursions.

What about WAN-optimization vendor Riverbed and application-delivery-networking (ADN) leader F5 Networks?

F5 probably isn’t for sale — it has been dogged by takeover rumors for years — but neither 3Com nor HP competes meaningfully on F5’s specialized turf. This deal means nothing to F5, which probably will maintain its long-running partnership with HP. If you liked F5 before this deal was announced, you have no reason to dislike the company today.

The story is similar, though not identical, for Riverbed, whose WAN-optimization products also have no direct competitor in the ProCurve or 3Com product portfolio.

So, there you have it.

HP’s acquisition of 3Com is only slightly damaging to Juniper, whose fate will turn on the success of its strategic direction with JUNOS and its partnerships with IBM and Dell.

For different reasons, the deal will have significant negative implications for Brocade and Extreme Networks. Finally, the deal is neutral for, and really doesn’t affect, F5 and Riverbed.

Some of you might be wondering about how this deal affects Cisco. I don’t think it really adds anything lethal to HP’s product portfolio, especially in relation to data-center convergence, but the lower-cost networking products likely to flow from 3Com’s Chinese engineering operations will put price pressure on Cisco’s margins.

At the end of the day, though, Cisco — which is pursuing a large number of “market adjacencies” and is suffering from attenuated focus in its legacy markets — might well become its own worst enemy over the long haul.

Blue Coat Shifts Development to India in Significant Restructuring

A market leader in WAN optimization, where its primary adversaries are Riverbed and Cisco, Blue Coat Systems dropped a three-pronged announcement today.

One thread in the announcement was Blue Coat’s confirmation that net revenue for its second fiscal quarter, which ended October 31, tracked toward the high end of its previous guidance of $116 to $121 million.

Along with that news, however, it also announced a sweeping restructuring that will see it shed about 10 percent of its global workforce. Connected to that restructuring was Blue Coat’s announcement that it has acquired Indian firm S7 Software, which has approximately 50 engineers involved in software development, code migration, and network security. Blue Coat is paying $5.25 million for S7, which it describes as a “top engineering company in India.”

Expected to close in the third quarter of Blue Coat’s 2010 fiscal year, the S7 acquisition is the fulcrum for the company’s extensive restructuring.

According to a corporate fact sheet Blue Coat published earlier this year, it had about 1,450 employees, which means approximately 145 will be laid off.

As Blue Coat says in a press release:

The restructuring plan will shift a number of engineering positions from Sunnyvale, Calif., and Austin, Texas, to its other locations, such as the site it plans to acquire in Bangalore, India, through the S7 Software acquisition. It also involves the closure of three small facilities in Riga, Latvia; South Plainfield, New Jersey; and Zoetermeer, The Netherlands. Blue Coat is also reorganizing other functional areas, including sales and marketing, general and administrative, and support, to gain greater efficiencies.

This is not a modest reallocation of resources; it’s a substantive overhaul. In the near term, it means the transfer of some Blue Coat jobs from North America and Europe to India, but in the long term it also suggests that Blue Coat will attempt to do as much of its engineering as possible in lower-cost India.

Regarding future R&D, Blue Coat said the following:

The Company’s research and development work will now be undertaken at four sites that include: Sunnyvale, Calif.; Draper, Utah; Waterloo, Canada; and the new center in Bangalore, India. Each development center will be vertically integrated, so that each can assume full responsibility for the entire development process for each new or enhanced product or technology. Previously, projects crossed multiple design centers, resulting in a higher cost structure and slower time-to-market.

Depending on how well the S7 venture proceeds, it isn’t difficult to envision Blue Coat researching and developing new products in India, with “enhanced” products or technologies pursued at the North American sites.