Category Archives: HP Networking

Some Thoughts on VMware’s Strategic Acquisition of Nicira

If you were a regular or occasional reader of Nicira Networks CTO Martin Casado’s blog, Network Heresy, you’ll know that his penultimate post dealt with network virtualization, a topic of obvious interest to him and his company. He had written about network virtualization many times, and though Casado would not describe the posts as such, they must have looked like compelling sales pitches to the strategic thinkers at VMware.

Yesterday, as probably everyone reading this post knows, VMware announced its acquisition of Nicira for $1.26 billion. VMware will pay $1.05 billion in cash and $210 million in unvested equity awards. The ubiquitous Frank Quattrone and his Qatalyst Partners, which reportedly had been hired previously to shop Brocade Communications, served as Nicira's adviser.

Strategic Buy

VMware should have surprised no one when it emphasized that its acquisition of Nicira was a strategic move, likely to pay off in years to come, rather than one that will produce appreciable near-term revenue. As Reuters and the New York Times noted, VMware's buy price for Nicira was 25 times the amount ($50 million) invested in the company by its financial backers, which include venture-capital firms Andreessen Horowitz, Lightspeed, and NEA. Diane Greene, co-founder and former CEO of VMware — replaced four years ago by Paul Maritz — had an "angel" stake in Nicira, as did Andy Rachleff, a former general partner at Benchmark Capital.

Despite its acquisition of Nicira, VMware says it's not "at war" with Cisco. Technically, that's correct. VMware and its parent company, EMC, will continue to do business with Cisco as they add meat to the bones of their data-center virtualization strategy. But the die was cast, and Cisco should have known it. There were intimations previously that the relationship between Cisco and EMC had been infected by mutual suspicion, and VMware's acquisition of Nicira adds to the fear and loathing. Will Cisco, as rumored, move into storage? How will Insieme, helmed by Cisco's aging switching gods, deliver a rebuttal to VMware's networking aspirations? It won't be too long before the answers trickle out.

Still, for now, Cisco, EMC, and VMware will protest that it’s business as usual. In some ways, that will be true, but it will also be a type of strategic misdirection. The relationship between EMC and Cisco will not be the same as it was before yesterday’s news hit the wires. When these partners get together for meetings, candor could be conspicuous by its absence.

Acquisitive Roads Not Traveled

Some have posited that Cisco might have acquired Nicira if VMware had not beaten it to the punch. I don't know about that. Perhaps Cisco might have bought Nicira if the asking price were low, enabling Cisco to effectively kill the startup and be done with it. But Cisco would not have paid $1.26 billion for a company whose approach to networking directly contradicts Cisco's hardware-based business model and market dominance. One typically doesn't pay that much to spike a company, though I suppose if the prospective buyer were concerned enough about a strategic technology shift and a major market inflection, it might do so. In this case, though, I suspect Cisco was blindsided by VMware. It just didn't see this coming — at least not now, not at such an early stage of Nicira's development.

Similarly, I didn’t see Microsoft or Citrix as buyers of Nicira. Microsoft is distracted by its cloud-service provider aspirations, and the $1.26 billion would have been too rich for Citrix.

IBM's Moves and Cisco's Overseas Cash Hoard

One company I had envisioned as a potential (though less likely) acquirer of Nicira was IBM, which already has a vSwitch. IBM might now settle for the SDN-controller technology available from Big Switch Networks. The two have been working together on IBM's Open Data Center Interoperable Network (ODIN), and Big Switch's technology fits well with IBM's PureSystems and its top-down model of having application workloads command and control virtualized infrastructure. As the second network-virtualization domino to fall, Big Switch likely will go for a lower price than did Nicira.

On Twitter, Dell's Brad Hedlund asked whether Cisco would use its vast cash hoard to strike back with a bold acquisition of its own. Cisco has two problems here. First, I don't see an acquisition that would effectively blunt VMware's move. Second, about 90 percent of Cisco's cash (more than $42 billion) is offshore, and CEO John Chambers doesn't want to take a tax hit on its repatriation. He had been hoping for a "tax holiday" from the U.S. government, but that's not going to happen in the middle of an election campaign, during a macroeconomic slump in which plenty of working Americans are struggling to make ends meet. That means a significant U.S.-based acquisition likely is off the table, unless the target company is very small or is willing to take Cisco stock instead of cash.

Cisco’s Innovator’s Dilemma

Oh, and there's a third problem for Cisco, mentioned earlier in this prolix post. Cisco doesn't want to embrace this SDN stuff. Cisco would rather resist it. The Cisco ONE announcement really was about Cisco's take on network programmability, not about SDN-type virtualization in which overlay networks run atop an underlying physical network.

Cisco is caught in a classic innovator’s dilemma, held captive by the success it has enjoyed selling prodigious amounts of networking gear to its customers, and I don’t think it can extricate itself. It’s built a huge and massively successful business selling a hardware-based value proposition predicated on switches and routers. It has software, but it’s not really a software company.

For Cisco, the customer value, the proprietary hooks, are in its boxes. Its whole business model — which, again, has been tremendously successful — is predicated on that premise, and the entire company is organized around it. Cisco eventually will have to reinvent itself, as IBM did after it failed to adapt to client-server computing, but the day of reckoning hasn't arrived.

On the Defensive

Expect Cisco to continue to talk about the northbound interface (which can provide intelligence from the switch) and about network programmability, but don’t expect networking’s big leopard to change its spots. Cisco will try to portray the situation differently, but it’s defending rather than attacking, trying to hold off the software-based marauders of infrastructure virtualization as long as possible. The doomsday clock on when they’ll arrive in Cisco data centers just moved up a few ticks with VMware’s acquisition of Nicira.

What about the other networking players? Sadly, HP hasn't figured out what to do about SDN, even though OpenFlow is available on its former ProCurve switches. HP has a toe dipped in the SDN pool, but it doesn't seem willing to take the initiative. Juniper, which previously displayed ingenuity in bringing forward QFabric, is scrambling for an answer. Brocade is pragmatically embracing hybrid control planes to maintain account presence and margins in the near- to intermediate-term.

Arista Networks, for its part, might be better positioned to compete on networking’s new playing field. Arista Networks’ CEO Jayshree Ullal had the following to say about yesterday’s news:

“It’s exciting to see the return of innovative networking companies and the appreciation for great talent/technology. Software Defined Networking (SDN) is indeed disrupting legacy vendors. As a key partner of VMware and co-innovator in VXLANs, we welcome the interoperability of Nicira and VMWare controllers with Arista EOS.”

Arista’s Options

What’s interesting here is that Arista, which invariably presents its Extensible OS (EOS) as “controller friendly,” earlier this year demonstrated interoperability with controllers from VMware, Big Switch Networks, and Nebula, which has built a cloud controller for OpenStack.

One of Nebula's investors is Andy Bechtolsheim, whom knowledgeable observers will recognize as the chief development officer (CDO) of, and major investor in, Arista Networks. It is possible that Bechtolsheim sees a potential fit between the two companies — one building a cloud controller and one delivering cloud networking. To add fuel to this particular fire, which may or may not emit smoke, note that the Nebula cloud controller already features Arista technology, and that Nebula is hiring a senior network engineer, who ideally would have "experience with cloud infrastructure (OpenStack, AWS, etc. . . .) and familiarity with OpenFlow and Open vSwitch."

Open or Closed?

Speaking of Open vSwitch, Matt Palmer at SDN Central will feel some vindication now that VMware has purchased a company whose engineering team has made significant contributions to the OVS code. Palmer doubtless will cast a wary eye on VMware's intentions toward OVS, but both Steve Herrod, VMware's CTO, and Martin Casado, Nicira's CTO, have provided written assurances that their companies, now combining, will not retreat from commitments to OVS, OpenFlow, and Quantum, the OpenStack networking project.

Meanwhile, GigaOm’s Derrick Harris thinks it would be bad business for VMware to jilt the open-source community, particularly in relation to hypervisors, which “have to be treated as the workers that merely carry out the management layer’s commands. If all they’re there to do is create virtual machines that are part of a resource pool, the hypervisor shouldn’t really matter.”

This seems about right. In this brave new world of virtualized infrastructure, the ultimate value will reside in an intelligent management layer.

PS: I wrote this post under a slight fever and a throbbing headache, so I would not be surprised to discover belatedly that it contains at least a couple typographical errors. Please accept my apologies in advance.


Infrastructure Virtualization Versus Converged Infrastructure

While writing about software-defined networking (SDN) and what it makes possible, I have been thinking about how its essential premise, and the premise behind infrastructure virtualization, conflicts with visions of converged infrastructure promulgated by the leading systems vendors in the information-technology (IT) industry.

According to the Wikipedia definition, converged infrastructure encompasses servers, storage, networking gear, and software for IT infrastructure management, automation, and orchestration. Accordingly, converged infrastructure leverages pooled IT resources to facilitate automated resource provisioning in support of dynamic application workloads.

Hardware Pedigrees in Software World

Leading vendors, most with more hardware than software pedigrees, have sought to offer proprietary converged-infrastructure offerings that closely integrate the hardware elements with software-based management attributes. In this regard, we can cite vendors such as Cisco (with a storage assist from EMC or NetApp), Hewlett-Packard, Dell, Hitachi Data Systems, Oracle (though networking remains an open question there), and, perhaps to a lesser extent, IBM.

Now, let's think about SDN and where it ultimately leads. Cisco would like us to believe that SDN, if it leads anywhere, will eventually take us to network programmability, with a heavy emphasis on the significance of a northbound API (or APIs). Cisco says that the means — in this case, SDN — are not as important as the desired ends, network programmability, and many of Cisco's enterprise customers will doubtless agree.

SDN End Games

Another SDN outcome is network virtualization, which admittedly can also be achieved through other means. But an interesting aspect of SDN’s approach to network virtualization, with its decoupling of the network’s control and data planes, is that it results in the abstracting of software-based network intelligence from the underlying hardware-based network brawn. It’s a software paradigm taken to a logical extreme, with server-based software running at the network edge controlling an abstracted pool of no-frills networking hardware.
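
To make that paradigm a bit more concrete, here is a minimal, illustrative Python sketch of the overlay idea — VXLAN-style encapsulation, in which a tenant's Ethernet frame is wrapped in a header carrying a 24-bit virtual network identifier (VNI) and shipped across the physical underlay as ordinary UDP/IP traffic. The header layout follows the published VXLAN format; the sample frame, names, and VNI are invented for illustration, not anyone's production code.

```python
import struct

VXLAN_FLAGS = 0x08  # "valid VNI" flag bit in the 8-byte VXLAN header

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Wrap a tenant Ethernet frame in a VXLAN header.

    Layout: flags (1 byte), 3 reserved bytes, 24-bit VNI, 1 reserved
    byte. The result then rides inside an ordinary UDP/IP packet across
    the physical underlay, which never sees the tenant's addresses.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    header = struct.pack("!B3xI", VXLAN_FLAGS, vni << 8)
    return header + inner_frame

def vxlan_decapsulate(packet: bytes) -> tuple[int, bytes]:
    """Strip the VXLAN header, returning (vni, inner_frame)."""
    _flags, word = struct.unpack("!B3xI", packet[:8])
    return word >> 8, packet[8:]

# Illustrative tenant frame (dst MAC + src MAC + payload) entering
# the virtual network at the server edge.
frame = bytes.fromhex("00163e000001") + bytes.fromhex("00163e000002") + b"payload"
wire = vxlan_encapsulate(frame, vni=5001)
assert vxlan_decapsulate(wire) == (5001, frame)
```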

Indeed, this is one end game for SDN, first playing out in the data centers of the major cloud service providers that guide the affairs of the Open Networking Foundation (ONF), and then — at some indeterminate future point too difficult to forecast without a Ouija board and a bottle of scotch — also at large enterprises worldwide.

Let’s elaborate further. SDN facilitates network virtualization, which in turn is harnessed and orchestrated by cloud-management software, which also manages virtualized compute and storage infrastructure. As we’ve seen already in the compute world of servers, it’s getting increasingly difficult for a vanity hardware vendor to earn a buck in a virtualized world. Many service providers have found that they can get boxes that satisfy their needs, at lower prices, directly from ODMs that often build servers for name-brand OEMs.  Storage is being virtualized, too.

Network’s Turn

And now it is the network’s turn.

In such a world, how much longer will it make sense for customers to achieve converged infrastructure from single-source vendors that equip their hardware with proprietary fripperies and hooks to facilitate lock-in? Again, we can see this trend playing out at large service providers. Some have begun buying their networking hardware off the rack from ODMs, saving not only on capital expenditures (certainly the case for servers), but also on operating expenses relating to the ongoing management of network infrastructure. It's true that they're trading one sort of complexity for another, pushing it up the stack and into software rather than operational hardware, but it's a trade-off they're clearly willing to make, probably because they have the resources and skill sets to make it work (and pay off).

Obviously that is not a recipe for everybody, certainly not for most enterprises today. But times are changing, and it isn’t inconceivable to foresee a day when the enterprise will be able to avail itself of third-party private-cloud software and management tools that will allow it to exploit a similar model of virtualized infrastructure.

Prescience Pays Off

In the big picture, as far as the established networking vendors are concerned, the ONF’s conception of SDN is about more than just OpenFlow, and even about more than network programmability. It’s about how SDN supports a model of network virtualization, in service to infrastructure virtualization, that significantly enfeebles hardware-based business models. Some of these hardware-oriented vendors will not successfully pivot to a model of virtualized infrastructure and software primacy.

On the other hand, some vendors have had the prescience to see this trend approaching on the horizon; they understand its inevitability, and they have positioned themselves better than others to survive, and perhaps even thrive, after the eventual market transition.

We’ll look at one of those vendors in a subsequent post.

Juniper Steers QFabric Toward Midmarket

In taking its QFabric to mid-sized data centers, Juniper Networks has made the right decision. In my discussions with networking cognoscenti at customer organizations large and small, Juniper’s QFabric technology often engenders praise and respect. It also was perceived as beyond the reach, architecturally and financially, of many shops.

Now Juniper is attempting to get to those mid-market admirers that previously saw QFabric as above their station.

Quest for Growth

To be sure, Juniper targeted the original QFabric, the QFX3000-G, at large enterprises and high-end service providers, addressing applications such as high-performance computing (HPC), high-frequency trading in financial services, and cloud services. In a blog post discussing the downsized QFabric QFX3000-M, R.K. Anand, EVP and general manager of Juniper's Data Center Business Unit, writes, ". . . the beauty of the 'M' configuration is that it's ideal for satellite data centers, new 10GbE pods and space-constrained data center environments."

Juniper is addressing a gap here, and it’s a wise move. Still, some wonder whether it has come too late. It’s a fair question.

In pursuing the midmarket, Juniper is ratcheting up its competitive profile against the likes of Cisco Systems and HP, which also have been targeting the midmarket for growth, a commodity in short supply in the enterprise-networking space these days.

Analysts are concerned about maturation and slow growth in the networking market, as well as increasing competition and "challenging" — that's an analyst-speak euphemism for crappy — macroeconomic conditions.

Belated . . . Or Just Too Late

At its annual shindig for analysts, Juniper did little to allay those concerns, though the company understandably put an optimistic spin on its product strategy, competitive positioning, and ability to execute. Needham & Company analyst Alex Henderson summarized proceedings as follows:

“Despite an upbeat tone to Juniper’s strategy positioning and its new product development story, management reset its long term revenue and margin targets to a lower level. Juniper lowered its revenue growth targets to 9-12% from a much older growth target of 20% plus. In addition, management lowered gross margin target to 63-66% from the prior target of 65-67%.”

Like its competitors, Juniper is eager to find growth markets, preferably those that will support robust margins. A smaller QFabric won’t necessarily provide a panacea for Juniper’s market dilemma, but it certainly won’t hurt.

It also gives Juniper's channel partners reason to call on customers that might have been off their radar previously. As Dhritiman Dasgupta, senior director of Enterprise System and Routing at Juniper, told The VAR Guy, the channel is calling the new QFX3000-M "their version" of the product.

We’ll have to see whether Juniper’s QFabric for mid-sized data centers qualifies as a belated arrival or as a move that simply came too late.

Putting an ONF Conspiracy Theory to Rest

We know that the Open Networking Foundation (ONF) is controlled by the six major service providers that constitute its board of directors.

It is no secret that the ONF is built this way by design. The board members wanted to make sure that they got what they wanted from the ONF’s deliberations, and they felt that existing standards bodies, such as the IETF and IEEE, were gerrymandered and dominated by vendors with self-serving agendas.

The ONF was devised with a different purpose in mind — not to serve the interests of the vendors, but to further the interests of the service-provider community, especially the service providers who sit on the ONF’s board of directors. In their view, conventional networking was a drag on their innovation and business agility, obstructing progress elsewhere in their data centers and IT operations. Whereas compute and storage resources had been virtualized and orchestrated, networking remained a relatively costly and unwieldy fiefdom ruled by “masters of complexity” rummaging manually through an ever-expanding bag of ad-hoc protocols.

Organizing for Clout

Not getting what they desired from their networking vendors, the service providers decided to seize the initiative. Acting on its own, Google already had done just that, designing and deploying DIY networking gear.

The study of political elites tells us that an organized minority comprising powerful interests can impose its will on a disorganized majority.  In the past, as individual companies, the ONF board members had been unable to counter the agendas of the networking vendors. Together, they hoped to effect the change they desired.

So, we have the ONF, and it's unlike the IETF and the IEEE in more ways than one. While not a standards body — the ONF describes itself as a "non-profit consortium dedicated to the transformation of networking through the development and standardization of a unique architecture called Software-Defined Networking (SDN)" — there's no question that the ONF wants to ensure that it defines and delivers SDN according to its own rules. And at its own pace, too, not tied to the product-release schedules of networking vendors.

In certain respects, the ONF is all about a consortium of customers taking control and dictating what it wants from the vendor community, which, in this case, should be understood to comprise not only OEM networking vendors, but also ODMs, SDN startups, and purveyors of merchant silicon.

Vehicle of Insurrection?

Just to ensure that its leadership could not be subverted, though, the ONF stipulated that vendors would not be permitted to serve on its board of directors. That means that representatives of Cisco, Juniper, and HP Networking, for example, will never be able to serve on the ONF board.

At least within their self-determined jurisdiction, the ONF’s board members call all the shots. Or do they?

Commenting on my earlier post regarding Cisco’s SDN counterstrategy, a reader, who wished to remain anonymous (Anon4This1), wrote the following:

Regarding this point: “Ultimately, [Cisco] does not control the ONF.”

That was one of the key reasons for the creation of the ONF. That is, there was a sense that existing standards bodies were under the collective thumb of large vendors. ONF was created such that only the ONF board can vote on binding decisions, and no vendors are allowed on the board. Done, right? Ah, well, not so fast. The ONF also has a Technical Advisory Group (TAG). For most decisions, the board actually acts on the recommendations of the TAG. The TAG does not have the same membership restrictions that apply to the ONF board. Indeed, the current chairman of the TAG is none other than influential Cisco honcho, Dave Ward. So if the ONF board listens to the TAG, and the TAG listens to its chairman… Who has more control over the ONF than anyone? https://www.opennetworking.org/about/tag

Board’s Iron Grip

If you follow the link provided by my anonymous commenter, you will find an extensive overview of the ONF’s Technical Advisory Group (TAG). Could the TAG, as constituted, be the tail that wags the ONF dog?

My analysis leads me to a different conclusion. As I see it, the TAG serves at the pleasure of the ONF board of directors, individually and collectively. Nobody serves on the TAG without the express consent of the board of directors. Moreover, "TAG term appointments are annual and the chair position rotates quarterly." Although Cisco's Dave Ward serves as the current chair, his term will expire and somebody else will succeed him.

What about the suggestion that the "board actually acts on recommendations of the TAG," as my commenter asserts? In many instances, that might be true, but the form and substance of the language on the TAG webpage articulates clearly that the TAG is, as its acronym denotes, an advisory body that reports to (and "responds to requests from") the ONF board of directors. The TAG offers technical guidance and recommendations, but the board makes the ultimate decisions. If the board doesn't like what it's getting from TAG members, annual appointments presumably can be allowed to expire and new members can succeed those who leave.

Currently, two networking-gear OEMs are represented on the ONF's TAG. Cisco is represented by the aforementioned David Ward, and HP is represented by Jean Tourrilhes, an HP Labs researcher in Networking and Communication who has worked with OpenFlow since 2008. These gentlemen seem to be on the TAG because those who run the ONF believe they can make meaningful contributions to the development of SDN.

No Coup

It's instructive to note the company affiliations of the other six members serving on the TAG. We find, for instance, Nicira CTO Martin Casado, as well as Verizon's Dave McDysan, Google's Amin Vahdat, Microsoft's Albert Greenberg, Broadcom's Puneet Agarwal, and Stanford's Nick McKeown, who also is known as a Nicira co-founder and serves on that company's board of directors.

If any company has pull on the ONF's TAG, then, it would seem to be Nicira Networks, not Cisco Systems. After all, Nicira has two of its corporate directors serving on the ONF's TAG. Again, though, both gentlemen from Nicira are highly regarded SDN proponents who played critical roles in the advent and development of OpenFlow.

And that’s my point. If you look at who serves on the ONF’s TAG, you can clearly see why they’re in those roles and you can understand why the ONF board members would desire their contributions.

The TAG as a vehicle for an internal coup d’etat at the ONF? That’s one conspiracy theory that I’m definitely not buying.

HP’s Latest Cuts: Will It Be Any Different This Time?

If you were to interpret this list of acquisitions by Hewlett-Packard as a past-performance chart, and focused particularly on recent transactions running from the summer of 2008 through to the present, you might reasonably conclude that HP has spent its money unwisely.

That’s particularly true if you correlate the list of transactions with the financial results that followed. Admittedly, some acquisitions have performed better than others, but arguably the worst frights in this house of M&A horrors have been delivered by the most costly buys.

M&A House of Horrors

As Exhibit A, I cite the acquisition of EDS, which cost HP nearly $14 billion. As a series of subsequent staff cuts and reorganizations illustrate, the acquisition has not gone according to plan. At least one report suggested that HP, which just announced that it will shed about 27,000 employees during the next two years, will make about half its forthcoming personnel cuts in HP Enterprise Services, constituted by what was formerly known as EDS. Rather than building on EDS, HP seems to be shrinking the asset systematically.

The 2011 acquisition of Autonomy, which cost HP nearly $11 billion, seems destined for ignominy, too. HP described its latest financial results from Autonomy as “disappointing,” and though HP says it still has high hopes for the company’s software and the revenue it might derive from it, many senior executives at Autonomy and a large number of its software developers already have decamped. There’s a reasonable likelihood that HP is putting lipstick on a slovenly pig when it tries to put the best face on its prodigious investment in Autonomy.

Taken together, HP wagered a nominal $25 billion on EDS and Autonomy. In reality, it has spent more than that when one considers the additional operational expenses involved in integrating and (mis)managing those assets.

Still Haven’t Found What They’re Looking For

Then there was the Palm acquisition, which involved HP shelling out $1.2 billion to Bono and friends. By the time the sorry Palm saga ended, nobody at HP was covered in glory. It was an unmitigated disaster, marked by strategic reversals and tactical blunders.

I also would argue that HP has not gotten full value from its 3Com purchase. HP bought 3Com for about $2.7 billion, and many expected the acquisition to help HP become a viable threat to Cisco in enterprise networking. Initially, HP made some market-share gains with 3Com in the fold, but those advances have stalled, as Cisco CEO John Chambers recently chortled.

It is baffling to many, your humble scribe included, that HP has not properly consolidated its networking assets — HP ProCurve, 3Com outside China, and H3C in China. Even to this day, the three groups do not work together as closely as they should. H3C in China apparently regards itself as an autonomous subsidiary rather than an integrated part of HP’s networking business.

Meanwhile, HP runs two networking operating systems (NOS) across its gear. HP justifies its dual-NOS strategy by asserting that it doesn't want to alienate its installed base of customers, but there should be a way to manage a transition toward a unified code base. There should also be a way for all the gear to be managed by the same software. In sum, there should be a way for HP to get better results from its investments in networking technologies.

Too Many Missteps

As for some of HP’s other acquisitions during the last few years, it overpaid for 3PAR in a game of strategic-bidding chicken against Dell, though it seems to have wrung some value from its relatively modest purchase of LeftHand Networks. The jury is still out on HP’s $1.5-billion acquisition of ArcSight and its security-related technologies.

One could argue that the rationales behind the acquisitions of at least some of those companies weren’t terrible, but that the execution — the integration and assimilation — is where HP comes up short. The result, however, is the same: HP has gotten poor returns on its recent M&A investments, especially those represented by the largest transactions.

The point of this post is that we have to put the latest announcement about significant employee cuts at HP into a larger context of HP’s ongoing strategic missteps. Nobody said life is fair, but nonetheless it seems clear that HP employees are paying for the sins of their corporate chieftains in the executive suites and in the company’s notoriously fractious boardroom.

Until HP decides what it wants to be when it grows up, the problems are sure to continue. This latest in a long line of employee culling will not magically restore HP’s fortunes, though the bleating of sheep-like analysts might lead you to think otherwise. (Most market analysts, and the public markets that respond to them, embrace personnel cuts at companies they cover, nominally because the staff reductions result in near-term cost savings. However, companies with bad strategies can slash their way to diminutive irrelevance.)

Different This Time? 

Two analysts refused to read from the knee-jerk script that says these latest cuts necessarily position HP for better times ahead. Baird & Co. analyst Jason Noland was troubled by the drawn-out timeframe for the latest job cuts, which he described as "disheartening" and suggested would put a "cloud over morale." Noland showed a respect for history and a good memory, saying that it is uncertain whether these layoffs would bolster the company's fortunes any more than previous sackings had done.

Quoting from a story first published by the San Jose Mercury News:

In June 2010, HP announced it was cutting about 9,000 positions “over a multiyear period to reinvest for future growth.” Two years earlier, it disclosed a “restructuring program” to eliminate 24,600 employees over three years. And in 2005, it said it was cutting 14,500 workers over the next year and a half.

Rot Must Stop

If you are good with sums, you'll find that HP announced more than 48,000 job cuts from 2005 through 2010 (9,000 + 24,600 + 14,500 = 48,100). And now another 27,000 over the next two years. But this time, we are told, it will be different.

Noland isn’t the only analyst unmoved by that argument. Deutsche Bank analysts countered that past layoffs “have done little to improve HP’s competitive position or reduce its reliance on declining or troubled businesses.” To HP’s assertion that cost savings from these cuts would be invested in growth initiatives such as cloud computing, security technology, and data analytics, Deutsche’s analysts retorted that HP “has been restructuring for the past decade.”

Unfortunately, it hasn’t only been restructuring. HP also has been an acquisitive spendthrift, investing and operating like a drunken, peyote-slathered sailor.  The situation must change. The people who run HP need to formulate and execute a coherent strategy this time so that other stakeholders, including those who still work for the company, don’t have to pay for their sins.

Last Week’s Leavings: Avaya and HP

Update on Avaya

Pursuant to a post I wrote earlier last week on Avaya's latest quarterly financial results and its continuing travails, I'm increasingly pessimistic about the company's prospects of delivering a happy ending (as in a successful exit) for its principal private-equity stakeholders. There's no growth profile, cost containment has yet to yield profitability, and the long-term debt overhang remains ominous. The company could sell its networking business, but that would only buy a modest amount of latitude.

At a company all-hands meeting last week, which I mentioned in the aforementioned post, Avaya CEO Kevin Kennedy spoke but didn’t say anything momentous, according to our sources. Those sources described the session as “disappointing,” in that little was disclosed about the company’s plans to right the ship. Kennedy also didn’t talk much about the long-delayed IPO, though he did say its timing would be determined by the company’s sponsors — which is true, but doesn’t tell us anything.

Kennedy apparently did say that the employee headcount at the company is likely to be reduced through layoffs, attrition, and “restructuring,” the last of which typically results in layoffs. He also reportedly said Avaya had too many locations, which suggests that geographic consolidation is in the cards.

HP: Layoffs will continue until morale improves

Speaking of cuts, reports that HP might be shedding a whopping eight percent of its staff are troubling. Remember, HP is a company that was headed by Mark Hurd, a CEO notorious for his operational austerity. Hurd wielded the sharp budgetary implements so exuberantly that he must have brought tears to the eyes of Chainsaw Al Dunlap, former CEO of Sunbeam, who, like Hurd, was ousted under dubious circumstances.

During Hurd’s reign at HP, spending on R&D was slashed aggressively, and it was somewhat jokingly suggested that the tightfisted CEO might insist that his employees power their offices by riding electric stationary bikes.  After the Hurd years, and the desultory and fleeting rule of Leo Apotheker, HP now appears to be getting another whopping dollop of restructuring. The groups affected will be hit hard, and one wonders how morale throughout the company will be affected. We might learn more about the extent and nature of the cuts later today.

Cisco’s Storage Trap

Recent commentary from Barclays Capital analyst Jeff Kvaal has me wondering whether Cisco might push into the storage market. In turn, I've begun to think about a strategic drift at Cisco that has been apparent for the last few years.

But let’s discuss Cisco and storage first, then consider the matter within a broader context.

Risks, Rewards, and Precedents

Obviously a move into storage would involve significant risks as well as potential rewards. Cisco would have to think carefully, as it presumably has done, about the likely consequences and implications of such a move. The stakes are high, and other parties — current competitors and partners alike — would not sit idly on their hands.

Then again, Cisco has been down this road before, when it chose to start selling servers rather than relying on boxes from partners, such as HP and Dell. Today, of course, Cisco partners with EMC and NetApp for storage gear. Citing the precedent of Cisco's server incursion, one could make the case that Cisco might be tempted to call the same play.

After all, we're entering a period of converged and virtualized infrastructure in the data center, where private and public clouds overlap and merge. In such a world, customers might wish to get well-integrated compute, networking, and storage infrastructure from a single vendor. That's a premise already accepted at HP and Dell. Meanwhile, it seems increasingly likely that data-center infrastructure is coming together, in one way or another, in service of application workloads.

Limits to Growth?

Cisco also has a growth problem. Despite attempts at strategic diversification, including failed ventures in consumer markets (Flip, anyone?), Cisco still hasn’t found a top-line driver that can help it expand the business while supporting its traditional margins. Cisco has pounded the table perennially for videoconferencing and telepresence, but it’s not clear that Cisco will see as much benefit from the proliferation of video collaboration as once was assumed.

To complicate matters, storm clouds are appearing on the horizon, with Cisco's core businesses of switching and routing threatened by the interrelated developments of service-provider alienation and software-defined networking (SDN). Cisco's revenues aren't about to fall off a cliff by any means, but neither are they on the cusp of a second-wind surge.

Such uncertain prospects must concern Cisco’s board of directors, its CEO John Chambers, and its institutional investors.

Suspicious Minds

In storage, Cisco currently has marriages of mutual convenience with EMC (Vblocks and the sometimes-strained VCE joint venture) and with NetApp (the FlexPod reference architecture). The lyrics of Mark James' song Suspicious Minds are evocative of what's transpiring between Cisco and these storage vendors. The problem is not only that Cisco is bigamous, but that the networking giant might have another arrangement in mind that leaves both partners jilted.

Neither EMC nor NetApp is oblivious to the danger, and each has taken care to reduce its strategic reliance on Cisco. Conversely, Cisco would be exposed to substantial risks if it were to abandon its existing partnerships in favor of a go-it-alone approach to storage.

I think that’s particularly true in the case of EMC, which is the majority owner of server-virtualization market leader VMware as well as a storage vendor. The corporate tandem of VMware and EMC carries considerable enterprise clout, and Cisco is likely to be understandably reluctant to see the duo become its adversaries.

Caught in a Trap

Still, Cisco has boxed itself into a strategic corner. It needs growth, it hasn’t been able to find it from diversification away from the data center, and it could easily see the potential of broadening its reach from networking and servers to storage. A few years ago, the logical choice might have been for Cisco to acquire EMC. Cisco had the market capitalization and the onshore cash to pull it off five years ago, perhaps even three years ago.

Since then, though, the companies' market fortunes have diverged. EMC now has a market capitalization of about $54 billion, while Cisco's is slightly more than $90 billion. Even if Cisco could find a way of repatriating its offshore cash hoard without taking a stiff hit from the U.S. taxman, it wouldn't have the cash to pull off an acquisition of EMC, whose shareholders doubtless would be disinclined to accept Cisco stock as part of a proposed transaction.

Therefore, even if it wanted to do so, Cisco cannot acquire EMC. It might have been a good move at one time, but it isn’t practical now.

Losing Control

Even NetApp, with a market capitalization of more than $12.1 billion, would rate as the biggest purchase by far in Cisco’s storied history of acquisitions. Cisco could pull it off, but then it would have to try to further counter and commoditize VMware’s virtualization and cloud-management presence through a fervent embrace of something like OpenStack or a potential acquisition of Citrix. I don’t know whether Cisco is ready for either option.

Actually, I don’t see an easy exit from this dilemma for Cisco. It’s mired in somewhat beneficial but inherently limiting and mutually distrustful relationships with two major storage players. It would probably like to own storage just as it owns servers, so that it might offer a full-fledged converged infrastructure stack, but it has let the data-center grass grow under its feet. Just as it missed a beat and failed to harness virtualization and cloud as well as it might have done, it has stumbled similarly on storage.

The status quo is likely to prevail until something breaks. As we all know, however, making no decision effectively is a decision, and it carries consequences. Increasingly, and to an extent that is unprecedented, Cisco is losing control of its strategic destiny.

Why Google Isn’t A Networking Vendor

Invariably trenchant and always worth reading, Ivan Pepelnjak today explores what he believes Google is doing with OpenFlow. As it turns out, Pepelnjak posits that Google is doing more with other technologies than it is with OpenFlow, seemingly building a modern routing platform and a traffic-engineering application deserving universal envy and admiration.

In assessing what Google is doing, Pepelnjak would seem to get it right, as he usually does, but I would like to offer modest commentary on a couple minor points. Let’s start with his assessment of how Google is using OpenFlow:

“Google is using OpenFlow between controller and adjacent chassis switches because (like every other vendor) they need a protocol between the control plane and forwarding planes, and they decided to use an already-documented one instead of inventing their own (the extra OpenFlow hype could also persuade hardware vendors to implement more OpenFlow capabilities in their next-generation chipsets).”

OpenFlow: Just A Piece of the Puzzle

First off, Pepelnjak is essentially right. I'm not going to quarrel with his central point, which is that Google adopted OpenFlow as a communication protocol between (and that separates) the control plane and the forwarding plane. That's OpenFlow's purpose, its raison d'être, so it's not surprising that Google would use it that way. As Chris Rock might say, that's what OpenFlow is supposed to do.

Larger claims made on behalf of OpenFlow are not its fault. Subsequently, Pepelnjak states that OpenFlow is but a small piece of the networking puzzle at Google, and he’s right there, too. I don’t think it’s possible for OpenFlow to be a bigger piece. As a protocol between the control and forwarding planes, OpenFlow is what it is.
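
To spell out just how small a piece that is, here is a toy sketch — hypothetical names, no vendor's actual code — of the division of labor OpenFlow codifies. The forwarding plane reduces to matching packets against a table of entries installed by an external control plane:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Match:
    """A simplified flow match; None means wildcard."""
    in_port: int | None = None
    eth_dst: str | None = None

    def hits(self, pkt: dict) -> bool:
        checks = (("in_port", self.in_port), ("eth_dst", self.eth_dst))
        return all(want is None or pkt.get(key) == want for key, want in checks)

@dataclass
class FlowEntry:
    priority: int
    match: Match
    actions: list[str]  # e.g., ["output:3"] or ["drop"]

class ForwardingPlane:
    """All a 'dumb' switch does in this model: match packets against
    controller-installed entries and apply the winning entry's actions."""

    def __init__(self) -> None:
        self.table: list[FlowEntry] = []

    def install(self, entry: FlowEntry) -> None:
        # In real life this would arrive over the wire as an OpenFlow flow-mod.
        self.table.append(entry)
        self.table.sort(key=lambda e: e.priority, reverse=True)

    def forward(self, pkt: dict) -> list[str]:
        for entry in self.table:
            if entry.match.hits(pkt):
                return entry.actions
        return ["punt-to-controller"]  # table miss: ask the control plane

switch = ForwardingPlane()
switch.install(FlowEntry(100, Match(eth_dst="aa:bb:cc:dd:ee:ff"), ["output:3"]))
print(switch.forward({"in_port": 1, "eth_dst": "aa:bb:cc:dd:ee:ff"}))  # ['output:3']
```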

Beyond that, though, Pepelnjak refers to Google as a “vendor,” which I find odd.

Not a Networking Vendor

In many ways, Google is a vendor. It’s a cloud vendor, it’s an advertising vendor, it’s a SaaS vendor, and so on. But, in this particular context, Pepelnjak seems to be classifying Google as a networking vendor. That would be an incorrect designation, and here’s why: Vendors sell things, they vend. Google doesn’t sell the homegrown networking hardware and software that it implements internally. It’s doing it only for itself, not as a business proposition that would involve it proffering the technology to customers. As such, it should not be tossed into the same networking-vendor bucket as a Cisco, a Juniper, or an HP.

In fact, Google is going the roll-your-own route with its network infrastructure precisely because it couldn’t get what it wanted from networking vendors. In that respect, it is the anti-vendor. Google and the other gargantuan cloud-service providers who steer the Open Networking Foundation (ONF) promulgated software-defined networking (SDN) and espoused OpenFlow because they wanted network infrastructure to be different from the conventional approaches advanced by networking vendors and the traditional networking industry.

Whatever else one might think of the ONF, it’s difficult not to conclude that it represents an instance of customers (in this case, cloud-service providers) attempting to wrest strategic control from vendors to set a technological agenda. Google, a networking vendor? Only if one misunderstands the origins and purpose of ONF.

Creating a Market

Nonetheless, Google might have a hidden agenda here, and Pepelnjak touches on it when he writes parenthetically that “the extra OpenFlow hype could also persuade hardware vendors to implement more OpenFlow capabilities in their next-generation chipsets.”

Well, yes. Just because Google has chosen to roll its own and doesn’t like what the networking industry is selling today, it doesn’t necessarily mean that it has closed the door to buying from vendors in the future, presuming said vendors jump on the ONF bandwagon and start developing the sorts of products Google wants. Google doesn’t want to disclose the particulars of its network infrastructure, which it views as a source of competitive advantage and differentiation, but it is not averse to hyping OpenFlow in a bid to spur the supply side of the market to get with the SDN program.

Later in his post, Pepelnjak notes that Google used “standard protocols (BGP and IS-IS) like everyone else and their traffic engineering implementation (and probably the northbound API) is proprietary. How is that different (from the openness perspective) from networks built from Juniper’s or Cisco’s gear?”

Critical Distinction

Again, my point is that Google is not a vendor. It is a customer building network technologies for its own use. By the very nature of that implicit (non)-transaction, the technologies in question will be proprietary. They're not going anywhere other than Google's data-center network. Google owns them, and it is in full control of defining them and releasing them on a schedule that suits Google's internal objectives.

It’s rather different for vendors, who profit — if they’re doing it right — from the commercial sale of products and technologies to customers. There might be value in proprietary products and technologies in that context, but customers need to ensure that the proprietary value outweighs the proprietary risks, typically represented by vendor lock-in and upgrade cycles dictated by the vendor’s product-release schedule.

Google is not a vendor, and neither are the other companies driving the agenda of the ONF. I think it’s critical to make that distinction in the context of SDN and, to a lesser extent, OpenFlow.

Time for HP to Show Its SDN Hand

Although HP has demonstrated mounting support for OpenFlow, it has yet to formulate what I would call a full-fledged strategy for software defined networking (SDN). Yes, HP offers OpenFlow-capable switches, but there's more to SDN than OpenFlow. Indeed, there's definitely more to SDN than the packet-shunting hardware at the bottom of the value chain.

The word “software” is prominent in the SDN acronym for a reason, and HP hasn’t told us much about its plans in that area. I am not able to attend this week’s Interop in Las Vegas, but I am hoping HP takes the opportunity this week to disclose a meaningful SDN strategy.

HP could start by telling us what it plans to do on the controller front. Does its strategy involve taking a wait-and-see attitude, working with the likes of Big Switch Networks? Does HP have a controller of its own in the works? As an august publication once trumpeted in a long-ago advertising campaign, inquiring minds want to know.

Above the controller, how does HP see the ecosystem developing? Does it plan to provide applications, management, orchestration? I think we have a reasonably good idea where Cisco is going with its SDN strategy — though Cisco would rather talk about network programmability (more on which later) — but HP has yet to play its hand.

HP is in Las Vegas this week. It’s as good a time as any to put its SDN cards on the table.

At Dell, Networking’s Role Secondary but Integral

Dell made a networking announcement last week, and, for the most part, reaction was muted. That's partly because Dell's networking narrative is evolving and in transition, and partly because the announcements related to incremental, though notable, progression.

To be fair, Dell’s networking narrative is part of a larger story the company is telling in the data center. Networking is integral to that story, but it’s not the centerpiece and never will be. Dell is working from the blueprint of its Virtual Network Architecture (VNA), so its purchase and stewardship of Force10 is framed within a bigger picture that involves not just converged infrastructure, but also workload-driven orchestration of virtualized environments.

Integration and Assimilation

Some good news for Dell is that its integration and assimilation of Force10 Networks seems to have gone well and is now complete. Dell's OpenManage Networking Manager (OMNM) 5.0 offers a new look and support for the full line of Dell networking products, including the Force10 portfolio. What's more, in its Dell Force10 MXL blade interconnect, a 40Gb Ethernet switch for the M1000e blade chassis, Dell delivers an apt metaphor as well as a blade-server switch.

In that sense, it's helpful to recall that Dell's acquisition of Force10 was motivated by a desire to integrate networking into an automated, orchestrated data center in which it already offered compute and storage. Dell concluded that it needed to own networking technology just as it owned server and storage technology. It further deduced that it needed a comprehensive networking portfolio, extending across SAN and LAN environments. Just as it moved previously to shake its dependence on storage partners, it would do likewise in networking.

Dell sees networking as an integral enabling technology, but not as an end in itself. Dell believes it can be more flexible than HP and IBM in certain enterprise demographics, and it believes it can outflank Cisco by being less "network centric" and more open to developments such as software defined networking (SDN). Force10, which was thought to be between a rock and a hard place just before being acquired, understands and accepts its role in the Dell universe.

Fitting Into VNA

The key to understanding Dell's data-center strategy is Virtual Network Architecture (VNA). The announcement of the new blade-server switch fits into that plan. Dell says VNA's purpose is to virtualize, automate, and orchestrate network services so that they can adapt readily to application and business requirements. Core elements of VNA include the following:

  • High-performance switching systems for the campus and the data center
  • Virtualized Layer 4-7 services
  • Comprehensive automation & orchestration software
  • Open workload/hypervisor interfaces

So, what does it all mean? It means Dell is taking an approach that it believes will be differentiated and add considerable value in customers' and prospective customers' data centers. On the networking front, Dell believes it has espoused a strategy that encompasses and envelops the rise of SDN while also taking an accommodating approach to the networking gear already present in customer accounts.

Workload-Oriented Approach

In an article at The VAR Guy, Nathan Eddy quotes Dario Zamarian, VP and GM of Dell Networking, as follows:

“We are taking a workload-oriented approach — as in, ‘What does each require first?’ as opposed to starting with the network first [and] then trying to fit the application to it. In other words, networking is the enabler. The ultimate goal of VNA is to make networking as simple to set up, automate, operate, and manage as servers. VNA is doing for networking what VMware did for servers.”

Well, that’s the plan. In theory, in a slide show, all the pieces are there, but Dell has to execute and deliver on the vision. One can identify holes in the structure, places where Dell will need to buy, partner, or build to close the gaps. It’s clearly doing that, though, as the Force10 acquisition and others recently attest.

Taking Force10's technology forward in alignment with its plans, Dell not only announced a 40GbE-enabled blade-server switch. It also introduced fabric- and network-management tools to simplify operations in the data center and the campus, and it announced data-center enhancements (stacking technology, L2 multipathing, data-center bridging, automated workload mobility through auto-provisioning of VLANs) to Force10's FTOS for its S4810 10/40G switching platform.

Encompassing SDN

On the SDN front, Dell announced interoperability with Big Switch Networks’ Open SDN architecture and its OpenFlow-based Floodlight controller. That interoperability will be showcased next week in joint demonstrations at Interop, with the application emphasis on cloud multi-tenancy.

Regardless of where Dell goes with SDN, and regardless of how quickly (or slowly) SDN makes encroachments into the enterprise, Dell’s VNA model accounts for it and much else besides. Dell believes it can win in workload and network orchestration, with its Advanced Infrastructure Manager (AIM) providing virtual-network programming interfaces and doubtless with some forthcoming orchestration technologies it has yet to introduce (or buy).

Dell’s VNA seems a viable plan. But can the company continue to execute on it? Dell would have more focus and resources to do so if it jettisoned its woebegone consumer business, but that divestiture doesn’t seem to be in the cards.

Hardware Elephant in the HP Cloud

Taking another run at cloud computing, HP made news today with its strategy for the “Converged Cloud,” which focuses on hybrid cloud environments and provides a common architecture that spans existing data centers as well as private and public clouds.

In finally diving into infrastructure as a service (IaaS), with a public beta of HP Public Infrastructure as a Service slated for May 10, HP will go up against current IaaS market leader Amazon Web Services.

HP will tap OpenStack and hypervisor neutrality as it joins the battle. Not surprisingly, it also will leverage its own hardware portfolio for compute, storage, and networking — HP Converged Infrastructure, which it already has promoted for enterprise data centers — as well as a blend of software and services that is meant to provide bonding agents to keep customers in the HP fold regardless of where and how they want to run their applications.

Trying to Set the Cloud Agenda

In addition to HP Public Infrastructure as a Service — providing on-demand compute instances or virtual machines, online storage capacity, and cached content delivery — HP Cloud Services also will unveil a private beta of a relational database service for MySQL and a block storage service that supports movement of data from one compute instance to another.

While HP has chosen to go up against AWS in IaaS — though it apparently is targeting a different constituency from the one served by Amazon — perhaps a bigger story is that HP also will compete with other service providers, including other OpenStack purveyors.

There’s some risk in that decision, no question, but perhaps not as much as one might think. The long-term trend, already established at the largest cloud service providers on the planet, is to move away from branded, vanity hardware in favor of no-frills boxes from original design manufacturers (ODMs).  This will not only affect servers, but also storage and networking hardware, the latter of which has seen the rise of merchant silicon. HP can read the writing on the data-center wall, and it knows that it must attempt to set the cloud agenda, or cede the floor and watch its hardware sales atrophy.

Software and Services as Hooks

Hybrid clouds are HP's best bet, though far from a sure thing. Indeed, one can interpret HP's Converged Cloud as a bulwark against what it would perceive as a premature decline in its hardware business.

Simply packaging and reselling OpenStack and a hypervisor of the customer’s choice wouldn’t achieve HP’s “sticky” business objectives, so it is tapping its software and services for the hooks and proprietary value that will keep customers from straying.

For managing hybrid environments, HP has its new Cloud Maps, which provides a catalogue of prepackaged application templates to speed deployment of enterprise cloud-services applications.

To test the applications, the company offers HP Service Virtualization 2.0, which enables enterprise customers to test quality and performance of cloud or mobile applications without interfering with production systems. Meanwhile, HP Virtual Application Networks — which taps HP’s Intelligent Management Center (IMC) and the IMC Virtual Application Networks (VAN) Manager Module — also makes its debut. It is designed to eliminate network-related cloud-services bottlenecks by speeding application deployment, automating management, and ensuring service levels for virtual and cloud applications on HP’s FlexNetwork architecture.

Maintaining and Growing

HP also will launch two new networking services: HP Virtual Network Protection Service, which leverages best practices and is intended to set a baseline for security of network virtualization; and HP Network Cloud Optimization Service, which is intended to help customers enhance their networks for delivery of cloud services.

For enterprises that don't want to manage their clouds, the company offers HP Enterprise Cloud Services as well as other services to get enterprises up to speed on how cloud can best be harnessed.

Whether the software and services will add sufficient stickiness to HP’s hardware business remains to be seen, but there’s no question that HP is looking to maintain existing revenue streams while establishing new ones.

Direct from ODMs: The Hardware Complement to SDN

Subsequent to my return from Network Field Day 3, I read an interesting article published by Wired that dealt with the Internet giants’ shift toward buying networking gear from original design manufacturers (ODMs) rather than from brand-name OEMs such as Cisco, HP Networking, Juniper, and Dell’s Force10 Networks.

The development isn’t new — Andrew Schmitt, now an analyst at Infonetics, wrote about Google designing its own 10-GbE switches a few years ago — but the story confirmed that the trend is gaining momentum and drawing a crowd, which includes brokers and custom suppliers as well as increasing numbers of buyers.

In the Wired article, Google, Microsoft, Amazon, and Facebook were explicitly cited as web giants buying their switches directly from ODMs based in Taiwan and China. These same buyers previously procured their servers directly from ODMs, circumventing brand-name server vendors such as HP and Dell.  What they’re now doing with networking hardware, then, is a variation on an established theme.

The ONF Connection

Just as with servers, the web titans have their reasons for going directly to ODMs for their networking hardware. Sometimes they want a simpler switch than the brand-name networking vendors offer, and sometimes they want certain functionality that networking vendors do not provide in their commercial products. Most often, though, they’re looking for cheap commodity switches based on merchant silicon, which has become more than capable of handling the requirements the big service providers have in mind.

Software is part of the picture, too, but the Wired story didn’t touch on it. Look at the names of the Internet companies that have gone shopping for ODM switches: Google, Microsoft, Facebook, and Amazon.

What do those companies have in common besides their status as Internet giants and their purchases of copious amounts of networking gear? Yes, it’s true that they’re also cloud service providers. But there’s something else, too.

With the exception of Amazon, the other three are board members in good standing of the Open Networking Foundation (ONF). What’s more, even though Amazon is not an ONF board member (or even a member), it shares the ONF’s philosophical outlook, which favors making networking infrastructure more flexible and responsive, less complex and costly, and generally getting it out of the way of critical data-center processes.

Pica8 and Cumulus

So, yes, software-defined networking (SDN) is the software complement to cloud-service providers’ direct procurement of networking hardware from ODMs.  In the ONF’s conception of SDN, the server-based controller maps application-driven traffic flows to switches running OpenFlow or some other mechanism that provides interaction between the controller and the switch. Therefore, switches for SDN environments don’t need to be as smart as conventional “vertically integrated” switches that combine packet forwarding and the control plane in the same box.
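To make that division of labor concrete, here is a minimal controller-side sketch using the open-source Ryu OpenFlow framework, which is my own illustration rather than anything shipped by the vendors discussed here. The Python app runs on a server; when a switch connects, the app pushes a forwarding rule down to it, so the switch itself only has to match packets and apply actions.

```python
# A sketch of the ONF model: a server-based controller programs a
# "dumb" OpenFlow switch. Built on the open-source Ryu framework;
# the ports and rule are illustrative placeholders.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class PortForwarder(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath  # the switch that just connected
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Match anything arriving on port 1 and forward it out port 2.
        match = parser.OFPMatch(in_port=1)
        actions = [parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]

        # The flow table lives in the switch; the intelligence lives here.
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))
```

Launched with ryu-manager, the app programs any OpenFlow 1.3 switch that connects to it, which is exactly why the switch hardware can be cheap and simple.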

This isn’t just guesswork on my part. Two companies are cited in the Wired article as “brokers” and “arms dealers” between switch buyers and ODM suppliers. Pica8 is one, and Cumulus Networks is the other.

If you visit the Pica8 website, you’ll see that the company’s goal is “to commoditize the network industry and to make the network platforms easy to program, robust to operate, and low-cost to procure.” The company says it is “committed to providing high-quality open software with commoditized switches to break the current performance/price barrier of the network industry.” The company’s latest switch, the Pronto 3920, uses Broadcom’s Trident+ chipset, which Pica8 says can be found in other ToR switches, including the Cisco Nexus 3064, Force10 S4810, IBM G8264, Arista 7050S, and Juniper QFX3500.

That “high-quality open software” to which Pica8 refers? It features XORP open-source routing code, support for Open vSwitch and OpenFlow, and Linux. Pica8 also is a relatively longstanding member of ONF.
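That software combination is what lets a commodity, merchant-silicon box slot into the SDN model sketched above. On a Linux switch running Open vSwitch, delegating forwarding decisions to an external controller takes little more than one command; below is a small Python sketch that wraps the standard ovs-vsctl tool, with a placeholder bridge name and controller address.

```python
# A sketch of pointing an Open vSwitch bridge at an external OpenFlow
# controller: the basic move that turns a commodity Linux switch into
# the simple forwarding element in the SDN model described above.
# The bridge name and controller address are placeholders.
import subprocess


def point_bridge_at_controller(bridge="br0", controller="tcp:192.0.2.10:6633"):
    # Create the bridge if it doesn't already exist.
    subprocess.check_call(["ovs-vsctl", "--may-exist", "add-br", bridge])
    # Hand the bridge's forwarding decisions to the remote controller.
    subprocess.check_call(["ovs-vsctl", "set-controller", bridge, controller])


if __name__ == "__main__":
    point_bridge_at_controller()
```

Once a controller, such as the Ryu sketch above, is listening at that address, the switch’s flow table is populated remotely.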

Hardware and Software Pedigrees

Cumulus Networks is the other switch arms dealer mentioned in the Wired article. There hasn’t been much public disclosure about Cumulus, and there isn’t much to see on the company’s website. From background information on the professional pasts of the company’s six principals, though, a picture emerges of a company that would be capable of putting together bespoke switch offerings, sourced directly from ODMs, much like those Pica8 delivers.

The co-founders of Cumulus are J.R. Rivers, quoted extensively in the Wired article, and Nolan Leake. A perusal of their LinkedIn profiles reveals that both describe Cumulus as “satisfying the networking needs of large Internet service clusters with high-performance, cost-effective networking equipment.”

Both men also worked at Cisco spin-in venture Nuova Systems, where Rivers served as vice president of systems architecture and Leake served in the “Office of the CTO.” Rivers has a hardware heritage, whereas Leake has a software background; he began his career building a Java IDE and held senior positions at VMware and 3Leaf Networks before joining Nuova.

Some of you might recall that 3Leaf’s assets were nearly acquired by Huawei, before the Chinese networking company withdrew its offer after meeting with strenuous objections from the Committee on Foreign Investment in the United States (CFIUS). It was just the latest setback for Huawei in its recurring and unsuccessful attempts to acquire American assets. 3Com, anyone?

For the record, Leake’s LinkedIn profile shows that his work at 3Leaf entailed leading “the development of a distributed virtual machine monitor that leveraged a ccNUMA ASIC to run multiple large (many-core) single system image OSes on an InfiniBand-connected cluster of commodity x86 nodes.”

For Companies Not Named Google

Also at Cumulus is Shrijeet Mukherjee, who serves as the startup company’s vice president of software engineering. He was at Nuova, too, and worked at Cisco right up until early this year. At Cisco, Mukherjee focused on “virtualization-acceleration technologies, low-latency Ethernet solutions, Fibre Channel over Ethernet (FCoE), virtual switching, and data center networking technologies.” He boasts of having led the team that delivered the Cisco Virtualized Interface Card (vNIC) for the UCS server platform.

Another Nuova alumnus at Cumulus is Scott Feldman, who was employed at Cisco until May of last year. Among other projects, he served in a leading role on development of “Linux/ESX drivers for Cisco’s UCS vNIC.” (Do all these former Nuova guys at Cumulus realize that Cisco reportedly is offering big-bucks inducements to those who join its latest spin-in venture, Insieme?)

Before moving to Nuova and then to Cisco, J.R. Rivers was involved with Google’s in-house switch design. In the Wired article, Rivers explains the rationale behind Google’s switch design and the company’s evolving relationship with ODMs. Google originally bought switches designed by the ODMs, but now it designs its own switches and has the ODMs manufacture them to its specifications, much as Apple designs its iPads and iPhones and then contracts with Foxconn for assembly.

Rivers notes, not without reason, that Google is an unusual company. It can easily design its own switches, but other service providers possess neither the engineering expertise nor the desire to pursue that option. Nonetheless, they still might want the cost savings that accrue from buying bare-bones switches directly from an ODM. This is the market Cumulus wishes to serve.

Enterprise/Cloud-Service Provider Split

Quoting Rivers from the Wired story:

“We’ve been working for the last year on opening up a supply chain for traditional ODMs who want to sell the hardware on the open market for whoever wants to buy. For the buyers, there can be some very meaningful cost savings. Companies like Cisco and Force10 are just buying from these same ODMs and marking things up. Now, you can go directly to the people who manufacture it.”

It has appeal, but only for large service providers, and perhaps also for very large companies that run prodigious server farms, such as some financial-services concerns. There’s no imminent danger of irrelevance for Cisco, Juniper, HP, or Dell, who still have the vast enterprise market and even many service providers to serve.

But this is a trend worth watching, illustrating the growing chasm between the DIY hardware and software mentality of the biggest cloud shops and the more conventional approach to networking taken by enterprises.