Category Archives: Storage

Dell Makes Enterprise Moves, Confronts Dilemma

Dell reported its third-quarter earnings yesterday, and reactions to the news generally made for grim reading. The company cannot help but know that it faces a serious dilemma: It must continue an aggressive shift into enterprise solutions while propping up a punch-drunk personal-computer business that is staggered, bloody, and all but beaten.

The word “dilemma” is particularly appropriate in this context. The definition of dilemma is “a situation in which a difficult choice has to be made between two or more alternatives, especially equally undesirable ones.” 

Hard Choices

Dell seems too attached to the PC to give it up, but in the unlikely event that Dell chose to kick the commoditized box to the curb, it would surrender a large, though diminishing, pool of low-margin revenue. The market would react adversely, particularly if Dell were not able to accelerate growth in other areas.

While Dell is growing its revenue in servers and networking, especially the latter, those numbers aren’t rising fast enough to compensate for erosion in what Dell calls “mobility” and “desktop.” What’s more, Dell’s storage business has gone into a funk, with “Dell-owned IP storage revenue” down 3% on a year-to-year basis.

Increased Enterprise Focus

To its credit, Dell seems to recognize that it needs to pull out all the stops. It continues to make acquisitions, most of them related to software, designed to bolster its enterprise-solutions profile. Today, in fact, it announced the acquisition of Gale Technologies, and it also announced that Dario Zamarian, a former Cisco executive who has been serving as VP and GM of Dell Networking, has become vice president and general manager of the newly formed Dell Enterprise Systems & Solutions, “focused on the delivery of converged and enterprise workload topologies and solutions.” Zamarian will report to former HP executive Marius Haas, president of Dell Enterprise Solutions Group.

Zamarian’s former role as VP and GM of Dell Networking will be assumed by Tom Burns, who comes directly from Alcatel-Lucent, where he served as president of that company’s Enterprise Products Group, which included voice, unified communications, networking, and security solutions.

Dell has the cash to make other acquisitions to strengthen its hand in private and hybrid clouds, and we should expect it to do so.  The company would have more cash to make those moves if it were to divest its PC business, but Dell doesn’t seem willing to bite that bullet. 

That would be a difficult move to make — wiping out substantial revenue while eliminating a business that remains a vestigial part of Dell’s identity — but half measures aren’t in Dell’s long-term interests. It needs to be all-in on the enterprise, and I think it also needs to adopt a software mindset. As long as the PC business is around, I suspect Dell won’t be able to fully and properly make that transition.

Inevitability of Virtualized Infrastructure

As a previous post, Infrastructure Virtualization Versus Converged Infrastructure, attests, I strongly believe that virtualization is leading us to a future in which underlying hardware becomes largely undifferentiated and interchangeable. Applications and orchestration will reside in software riding atop the virtualization layer, which effectively will function as an abstraction buffer above hardware infrastructure. The latter will eventually include hardware for compute, networking, and storage.

Vendors that ride hardware-based business models will have trouble adapting to this new reality. Many of these companies have hordes of software developers and software engineers, but they inextricably intertwine their software and hardware as a matter of business practice, selling the latter as proprietary boxes that often cannot interoperate with, or be swapped out for, competing hardware. It’s classic hardware-based vendor lock-in, and it’s been with us for many years. This applies to vendors that sell all three main types of hardware infrastructure, and to those that sell them tied together as converged infrastructure.

Loosening a Tenacious Grip

Proprietary data-center hardware would appear to be running on borrowed time, though it will not disappear overnight. Its grip will be especially tenacious in the enterprise, though the pull of the cloud eventually will weaken its hold. Proprietary compute infrastructure will be the first to succumb, but networking and storage will fall, too. The economic and operational logic powering the transition is inexorable, so it’s a question of when, not whether, it will happen.

While CapEx cost savings are an obvious benefit, operational flexibility (shifting workloads with agility and less effort) and OpEx savings also are factors. Infrastructure hardware will be cheaper, as well as easier and less costly to run. Pools of industry-standard hardware will be reallocated on demand to serve the needs of application workloads. Data-center customers no longer will be constrained by the hardware-release schedules of their previous vendors of choice. Customers also will be able to take advantage of the latest industry-standard chipsets, which will power hardware with improved energy efficiency and better cooling characteristics.

In servers, and now in storage, Facebook’s Open Compute Project (OCP) has sought to expedite the move to off-the-shelf hardware. Last week at OSCON, Frank Frankovsky, a vice president at Facebook and the chairman and president of the OCP, rallied the open-source troops by arguing that proprietary x86 systems are “gratuitously differentiated.” He called for all hardware-design specifications to be open.

OCP as Competitive Cudgel

That would benefit Facebook, which launched OCP as a vehicle to help it lower data-center CapEx and OpEx, boost operational flexibility, and — last but not least — mitigate a competitive advantage held by Google, which had a massive head start in rationalizing and fine-tuning its data centers and IT infrastructure. In fact, Google cloaks its IT operations in extreme secrecy, believing that its practices and technologies deliver substantial competitive advantage over its main rivals, including Facebook. The latter must agree, because the animating idea behind Open Compute is to create a market, demand and supply, for commodity server hardware that will reduce or eliminate Google’s edge.

Some have wondered why Google hasn’t joined OCP, but the answer should be obvious. Google believes it has cracked the infrastructure code, and it is therefore disinclined to share its insights and best practices with its competitors. Google isn’t a fan of proprietary vanity hardware — it’s been designing its own gear, then going to server and network ODMs, for some time now — but Google feels it has nothing to gain, and much to lose, from opening its kimono to the OCP crowd.

With networking, though, Google felt it needed a little help from its friends — as well as from its enemies. That explains why it allied with Facebook and other cloud-service providers in the Open Networking Foundation (ONF), which I have written about here on many occasions. The goal of the ONF, as with OCP, is to slip the proprietary shackles of hardware vendors, whose gear functions as an impediment to operational agility as well as a cost that could be reduced through SDN-style network virtualization. Google’s communitarian approach to addressing the network-virtualization riddle suggests that it believes it cannot achieve the desired outcome on its own.

Cracking the Nut

Whereas compute hardware was well on its way to standardization, networking hardware, until the ONF, was akin to a vertically integrated mainframe system, replete with a proliferating number of both proprietary and industry-standard protocols. Networking is a bigger, and tougher, nut to crack.

But crack it will, first at the big cloud-service providers, then, as the cloud gains momentum, at enterprises.

PS: I will post something tomorrow about VMware’s just-announced acquisition of Nicira, which is big news no matter how you slice it.  I wrote the above post before I learned of the acquisition.

Further Progress of Infineta

When I attended Network Field Day 3 (NFD3) in the Bay Area back in late March, the other delegates and I had the pleasure of receiving a presentation on Infineta Systems’ Data Mobility Switch (DMS), a WAN-optimization system built with merchant silicon and designed to serve as a high-performance data-center interconnect for applications such as multi-gigabit Business Continuity/Disaster Recovery (BCDR), cross-site virtualization, and other variations on what Infineta calls “Big Traffic,” a fast-moving sibling of Big Data.

Waiting on Part II

I wrote about Infineta and its DMS, as did some of the other delegates, including cardigan-clad fashionista Tony Bourke  and avowed Networking Nerd Tom Hollingsworth. Meanwhile, formerly hirsute Derick Winkworth, who goes by the handle of Cloud Toad, began a detailed two-part serialization on Infineta and its technology, but he seems to be taking longer to deliver the sequel than it took Francis Ford Coppola to bring us The Godfather: Part II.

Suffice it to say, Infineta got our attention with its market focus (data-center interconnect rather than branch acceleration) and its compelling technological approach to solving the problem.  I thought Winkworth made an astute point in noting that Infineta’s targeting of data-center interconnect means that the performance and results of its DMS can be assessed purely on the basis of statistical results rather than on human perceptions of application responsiveness.

Name that Tune 

Last week, Infineta’s Haseeb Budhani, the company’s chief product officer, gave me an update that coincided with the company’s announcement of FlowTune, a software QoS feature set for the DMS that is intended to deliver the performance guarantees required for applications such as high-speed replication and data backup.

Budhani used a medical analogy to explain why FlowTune is more effective than traditional solutions. FlowTune, he said, takes a preventive approach to network congestion occasioned by contentious application flows, treating the cause of the problem instead of responding to the symptoms.  So, whereas conventional approaches rely on packet drops to facilitate congestion recovery, FlowTune dynamically manages application-transmission rates through a multi-flow mechanism that allocates bandwidth credits according to QoS priorities that specify minimum and maximum performance thresholds.   As a result, Budhani says, the WAN is fully utilized.
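
Infineta has not published FlowTune’s internals, so the following is nothing more than a toy sketch of how credit-based allocation of this general sort can work: every flow is guaranteed its minimum rate, and leftover WAN capacity is parceled out by priority weight up to each flow’s maximum, so the link stays full without waiting for drops to signal congestion. The flow names, weights, rates, and field names are all hypothetical, invented purely for illustration.

def allocate_credits(wan_capacity_mbps, flows):
    """Toy credit allocator (illustrative only, not FlowTune itself).
    flows: list of dicts with hypothetical fields 'name', 'min', 'max', 'weight'."""
    alloc = {f['name']: f['min'] for f in flows}              # honor minimum guarantees first
    spare = wan_capacity_mbps - sum(alloc.values())
    open_flows = [f for f in flows if alloc[f['name']] < f['max']]
    while spare > 1e-6 and open_flows:
        total_weight = sum(f['weight'] for f in open_flows)
        for f in list(open_flows):
            grant = min(spare * f['weight'] / total_weight,   # weighted share of spare capacity
                        f['max'] - alloc[f['name']])          # never exceed the flow's maximum
            alloc[f['name']] += grant
            if alloc[f['name']] >= f['max']:
                open_flows.remove(f)
        spare = wan_capacity_mbps - sum(alloc.values())       # anything left over gets redistributed
    return alloc

# Hypothetical example: replication outweighs backup on a 10 Gbps link.
flows = [{'name': 'replication', 'min': 2000, 'max': 8000, 'weight': 3},
         {'name': 'backup',      'min': 1000, 'max': 6000, 'weight': 1}]
print(allocate_credits(10000, flows))   # {'replication': 7250.0, 'backup': 2750.0}

The point of the sketch is simply that transmission rates are assigned up front rather than discovered through packet loss, which is the contrast Budhani was drawing.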

Storage Giants

Last week, Infineta and NetApp jointly announced that the former has joined the NetApp Alliance Partner Program. In a blog post, Budhani says Infineta’s relationships with storage-market leaders EMC and NetApp validate his company’s unique capability to deliver “the scale needed by their customers to accelerate traffic running at multi-Gigabit speeds at any distance.”

A software update, FlowTune is available to all Infineta customers. Budhani says it’s already being  used by Time Warner.

Dell’s Steady Progression in Converged Infrastructure

With its second annual Dell Storage Forum in Boston providing the backdrop, Dell made a converged-infrastructure announcement this week.  (The company briefed me under embargo late last week.)

The press release is available on the company’s website, but I’d like to draw attention to a few aspects of the announcement that I consider noteworthy.

First off, Dell now is positioned to offer its customers a full complement of converged infrastructure, spanning server, storage, and networking hardware, as well as management software. For customers seeking a single-vendor, one-throat-to-choke solution, this puts Dell on par with IBM and HP, while Cisco still must partner with EMC or with NetApp for its storage technology.

Bringing the Storage

Until this announcement, Dell was lacking the storage ingredients. Now, with what Dell is calling the Dell Converged Blade Data Center solution, the company is adding its EqualLogic iSCSI Blade Arrays to Dell PowerEdge blade servers and Dell Force10 MXL blade switching. Dell says this package gives customers an entire data center within a single blade enclosure, streamlining operations and management, and thereby saving money.

Dell’s other converged-infrastructure offering is the Dell vStart 1000. For this iteration of vStart, Dell is including, for the first time, its Compellent storage and Force10 networking gear in one integrated rack for private-cloud environments.

The vStart 1000 comes in two configurations: the vStart 1000m and the vStart 1000v. The packages are nearly identical — PowerEdge M620 servers, PowerEdge R620 management servers, Dell Compellent Series 40 storage, and Dell Force10 S4810 ToR Networking, plus Brocade 5100 ToR Fibre-Channel Switches — but the vStart 1000m comes with Windows Server 2008 R2 Datacenter (with the Hyper-V hypervisor), whereas the vStart 1000v features trial editions of VMware vCenter and VMware vSphere (with the ESXi hypervisor).

As an aside, it’s worth mentioning that Dell’s inclusion of Brocade’s Fibre-Channel switches confirms that Dell is keeping that partnership alive to satisfy customers’ FC requirements.

Full Value from Acquisitions

In summary, then, Dell is delivering converged infrastructure with both of its in-house storage options, demonstrating that it has fully integrated its major hardware acquisitions into the mix. It’s covering as much converged ground as it can with this announcement.

Nonetheless, it’s fair to ask where Dell will find customers for its converged offerings. During my briefing with Dell, I was told that mid-market was the real sweet spot, though Dell also sees departmental opportunities in large enterprises.

The mid-market, though, is a smart choice, not only because the various technology pieces, individually and collectively, seem well suited to the purpose, but also because Dell, given its roots and lineage, is a natural player in that space. Dell has a strong mandate to contest the mid-market, where it can hold its own against any of its larger converged-infrastructure rivals.

Mid-Market Sweet Spot

What’s more, the mid-market — unlike cloud-service providers today and some large enterprises in the not-too-distant future — is unlikely to have the inclination, resources, and skills to pursue a DIY, software-driven, DevOps-oriented variant of converged infrastructure that might involve bare-bones hardware from Asian ODMs. At the end of the day, converged infrastructure is sold as packaged hardware, and paying customers will need to perceive and realize value from buying the boxes.

The mid-market would seem more than receptive to the value proposition that Dell is selling, which is that its converged infrastructure will reduce the complexity of IT management and deliver operational cost savings.

This finally leads us to a discussion of Dell’s take on converged infrastructure. As noted in an eChannelLine article, Dell’s notion of converged infrastructure encompasses operations management, services management, and applications management. As Dell continues down the acquisition trail, we should expect the company to place greater emphasis on software-based intelligence in those areas.

That, too, would be a smart move. The battle never ends, but Dell — despite its struggles in the PC market — is now more than punching its own weight in converged infrastructure.

Cisco’s Storage Trap

Recent commentary from Barclays Capital analyst Jeff Kvaal has me wondering whether  Cisco might push into the storage market. In turn, I’ve begun to think about a strategic drift at Cisco that has been apparent for the last few years.

But let’s discuss Cisco and storage first, then consider the matter within a broader context.

Risks, Rewards, and Precedents

Obviously a move into storage would involve significant risks as well as potential rewards. Cisco would have to think carefully, as it presumably has done, about the likely consequences and implications of such a move. The stakes are high, and other parties — current competitors and partners alike — would not sit idly on their hands.

Then again, Cisco has been down this road before, when it chose to start selling servers rather than relying on boxes from partners, such as HP and Dell. Today, of course, Cisco partners with EMC and NetApp for storage gear. Citing the precedent of Cisco’s server incursion, one could make the case that Cisco might be tempted to call the same play.

After all, we’re entering a period of converged and virtualized infrastructure in the data center, where private and public clouds overlap and merge. In such a world, customers might wish to get well-integrated compute, networking, and storage infrastructure from a single vendor. That’s a premise already accepted at HP and Dell. Meanwhile, it seems increasingly likely that data-center infrastructure is coming together, in one way or another, in service of application workloads.

Limits to Growth?

Cisco also has a growth problem. Despite attempts at strategic diversification, including failed ventures in consumer markets (Flip, anyone?), Cisco still hasn’t found a top-line driver that can help it expand the business while supporting its traditional margins. Cisco has pounded the table perennially for videoconferencing and telepresence, but it’s not clear that Cisco will see as much benefit from the proliferation of video collaboration as once was assumed.

To complicate matters, storm clouds are appearing on the horizon, with Cisco’s core businesses of switching and routing threatened by the interrelated developments of service-provider alienation and software-defined networking (SDN). Cisco’s revenues aren’t about to fall off a cliff by any means, but nor are they on the cusp of a second-wind surge.

Such uncertain prospects must concern Cisco’s board of directors, its CEO John Chambers, and its institutional investors.

Suspicious Minds

In storage, Cisco currently has marriages of mutual convenience with EMC (VBlocks and the sometimes-strained VCE joint venture) and with NetApp (the FlexPod reference architecture).  The lyrics of Mark James’ song Suspicious Minds are evocative of what’s transpiring between Cisco and these storage vendors. The problem is not only that Cisco is bigamous, but that the networking giant might have another arrangement in mind that leaves both partners jilted.

Neither EMC nor NetApp is oblivious to the danger, and each has taken care to reduce its strategic reliance on Cisco. Conversely, Cisco would be exposed to substantial risks if it were to abandon its existing partnerships in favor of a go-it-alone approach to storage.

I think that’s particularly true in the case of EMC, which is the majority owner of server-virtualization market leader VMware as well as a storage vendor. The corporate tandem of VMware and EMC carries considerable enterprise clout, and Cisco is likely to be understandably reluctant to see the duo become its adversaries.

Caught in a Trap

Still, Cisco has boxed itself into a strategic corner. It needs growth, it hasn’t been able to find it from diversification away from the data center, and it could easily see the potential of broadening its reach from networking and servers to storage. A few years ago, the logical choice might have been for Cisco to acquire EMC. Cisco had the market capitalization and the onshore cash to pull it off five years ago, perhaps even three years ago.

Since then, though, the companies’ market fortunes have diverged. EMC now has a market capitalization of about $54 billion, while Cisco’s is slightly more than $90 billion. Even if Cisco could find a way of repatriating its offshore cash hoard without taking a stiff hit from the U.S. taxman, it wouldn’t have the cash to pull off an acquisition of EMC, whose shareholders doubtless would be disinclined to accept Cisco stock as part of a proposed transaction.

Therefore, even if it wanted to do so, Cisco cannot acquire EMC. It might have been a good move at one time, but it isn’t practical now.

Losing Control

Even NetApp, with a market capitalization of more than $12.1 billion, would rate as the biggest purchase by far in Cisco’s storied history of acquisitions. Cisco could pull it off, but then it would have to try to further counter and commoditize VMware’s virtualization and cloud-management presence through a fervent embrace of something like OpenStack or a potential acquisition of Citrix. I don’t know whether Cisco is ready for either option.

Actually, I don’t see an easy exit from this dilemma for Cisco. It’s mired in somewhat beneficial but inherently limiting and mutually distrustful relationships with two major storage players. It would probably like to own storage just as it owns servers, so that it might offer a full-fledged converged infrastructure stack, but it has let the data-center grass grow under its feet. Just as it missed a beat and failed to harness virtualization and cloud as well as it might have done, it has stumbled similarly on storage.

The status quo is likely to prevail until something breaks. As we all know, however, making no decision effectively is a decision, and it carries consequences. Increasingly, and to an extent that is unprecedented, Cisco is losing control of its strategic destiny.

Further Thoughts on Cisco’s Latest Spin-In Venture

This is a follow-up post to my last missive regarding Cisco’s latest reported spin-in venture, Insieme (not Insiemi, apparently). As you will recall, we had heard for some time that Cisco’s masters of the spin-in venture were getting back in the saddle for at least one more stretch run.

The question had become not whether they’d come back, but what they would put on the playlist for their reunion. Now, as indicated in an article in the New York Times, the widely held assumption is that Insieme will provide Cisco’s answer to software-defined networking (SDN).

But, as we know, SDN means different things to different vendors. Given the composition and capabilities of the team at Insieme, I wouldn’t expect this group to recreate the sort of logically centralized control plane and server-based programmable networking that the likes of Nicira and Big Switch Networks have championed.

ASICs in the Mix 

After all, the central protagonists at Insieme — Mario Mazzola, Luca Cafiero, Prem Jain — are hardware engineers. Throughout their long, storied, and illustrious careers, they have built switches. There is no reason to think they will be cast against type in this particular venture. A variation on what they’ve done in their previous spin-in ventures for Cisco — Andiamo, which was responsible for Cisco’s storage-area networking (SAN) switches, and Nuova, which provided Cisco with its Nexus data-center switches — is probably what they’ll do this time, too.

Admittedly, there is some software talent on the Insieme roster. Network World’s Jim Duffy reported that Ronak Desai, the architect of Cisco’s NX-OS FabricPath and Virtual Device Context software, and of the MDS SAN switch operating system, is on the team. Michael Smith, a distinguished engineer who worked on Cisco’s Nexus 1000v virtual switch, also might be part of the Insieme squad.

Still, John Chambers recently reiterated Cisco’s unswerving commitment to the proprietary switching ASIC, which Cisco sees as a point of differentiation against Arista Networks and others. Chambers’ words suggest that Cisco isn’t about to get the newfangled SDN religion. In fact, if anything, they suggest that Cisco is still working from its well-thumbed playbook of ASIC-based switches in a network-centric world.

Moreover, with Tom Edsall, the lead ASIC architect on the Nexus and MDS switching lines, reportedly on board with Insieme, we can probably safely deduce that the ASIC will be front and center in whatever the spin-in effort delivers. So, if it’s an SDN architecture Insieme has been mandated to deliver, it will be one with a distributed control plane and absolutely no role for dumb, off-the-rack switches.

Two Possible Scenarios

With regard to the increasingly contested definition of SDN — look no further than the marketing messages of certain vendors or to the software-driven networking hijinks now occurring in the IETF — there’s also the possibility that what the Insieme pack are doing could be only incidentally connected to what many consider SDN.

With that in mind, I want to turn to some intriguing speculation that William Koss, now at Plexxi, has provided on what he believes Cisco’s latest spin-in venture might be building. In a post on his blog, Koss reviews Cisco’s switching history, much of it involving the three musketeers now reuniting at Insieme. He then explains why Cisco does spin-in ventures before offering his assessment of what Insieme might be trying to accomplish.

He offers two possible paths Insieme might take. The first path would involve Cisco attending to what Koss terms “unfinished business” (including Brocade) in the storage space. In this scenario, the Insieme team would build a successor switch to the Nexus line with storage-networking hooks. This switch would be intended as a crushing reply to Xsigo’s I/O Director, while simultaneously representing an attempt to limit further market encroachments by Arista Networks, currently well entrenched in low-latency application environments, and also to inoculate against potential traction from SDN startups such as Nicira and Big Switch.

As for the second option, he envisions something proceeding along an “SDN OpenFlow strategy path.” In this scenario, Koss foresees a new platform that functions as a “Nexus OS-to-OpenFlow arbitration box,” which he describes as analogous to a session border controller (SBC) between the two networks. This would give Cisco’s installed base access to SDN-like capabilities while keeping those customers wrapped inside Cisco’s proprietary cocoon.

Surprise Not Likely

In my view, both paths described by Koss are plausible scenarios for Insieme.  My gut feeling is that the first is more likely. The second option is more software intensive, and it would seem to feature less of the ASIC and storage-networking expertise possessed by known members of the Insieme team. Perhaps Mario, Luca, and Prem will blaze an entirely different path and surprise us all, but Koss might be on the right track with his speculative musings.

As always, we shall see.

Cheriton Sees Opportunity in Infrastructure

When I wrote my first post on this blog, way back in 2006, I assumed that technology infrastructure largely was a spent force. I expected incremental enhancements, gradual advances, but I didn’t anticipate another major boom or a significant disruption of the established order in what once had been a vibrant technology space.

While the technology industry as a whole can suffer from blinkered, willful optimism, perhaps I was afflicted by a different condition entirely. I might have been too pessimistic, too gloomy, dispirited by the technology downturn of the early 2000s and the lack of a meaningful, sustained recovery in the years that immediately followed.

By the way, when I refer to technology, I’m not talking about social networking such as Facebook. I understand that there’s a lot of technology behind the scenes at Facebook, but the customer-facing “social” phenomenon leaves me cold. I never did see the point of Facebook from a user’s perspective, though I understood how it could serve as an unprecedented data-mining machine for advertisers.

Opportunity Renewed

Fortunately, though, I was wrong about the decline and fall of infrastructure. It took a while, but a new era of infrastructure has arisen, based on virtualization, orchestration, and automation. Technological capabilities that we could only dream about more than a decade ago are now within reach. In the networking realm, software-defined networking (SDN) is enabling comparatively outmoded network infrastructure to catch up with compute and, to a lesser degree, storage infrastructure as the promise of an application-driven, programmable data center comes into clearer view.

Suddenly, at long last, there’s new opportunity in infrastructure.

You don’t have to take my word for it, either. There are people who’ve designed and developed industry-leading technologies who espouse the same opinion. Some of these people are billionaires, and they’ve backed their convictions with substantial sums of money, investing in technologies and companies with clear mandates to remake IT infrastructure.

Outrageously Wealthy Canuck

One of those people is David Cheriton, a billionaire who wears many hats. He is Professor of Computer Science and Electrical Engineering at Stanford University, where he researches networking and distributed systems, and he also serves as a co-founder and chief scientist at Arista Networks. He’s also an investor in startup companies. Back in 1998, one early-stage company in which he invested, along with Arista co-founder Andy Bechtolsheim, was Google.  The duo made a similar early investment in VMware, so they’ve done okay.

Born in Vancouver, raised in Edmonton, Alberta, and ranked 37th on a Wikipedia list of “richest Canadians”** — Forbes ranks him 21st among outrageously wealthy Canucks  — Cheriton recently spoke about innovation and entrepreneurship at a Churchill Club event in Silicon Valley. The event was co-hosted and organized by the Hua Yuan Science and Technology Association and also featured Ken Xie, who founded NetScreen (acquired by Juniper Networks in 2004) and is now president and CEO of unified-threat-management/firewall vendor Fortinet, a company he also founded.

In addition to his apparent knack as an investor, Cheriton has considerable firsthand experience as an entrepreneur and an innovator. Before he and Bechtolsheim combined forces at Arista Networks,  they founded Granite Systems, a Gigabit-Ethernet switching concern that was acquired by Cisco in 1996 for about $220 million in stock, back when shares of Cisco were continuously on the rise.  Subsequently, after the Google investment, Bechtolsheim and Cheriton combined forces again to found Kealia, which specialized in server technology based on AMD’s Opteron microprocessor.  That company was acquired by Sun Microsystems in 2004, providing technology included in the Sun Fire X4500 storage product.

Room for Improvement

In 2005, Cheriton and Bechtolsheim followed up with Arista, then called Arastra, and its 10-GbE switching technology, which brings us to the approximate present and back to something Cheriton said at the Churchill Club event late last month. Noting that people tend to become preoccupied with the latest developments in social networking and mobility, Cheriton expressed his enthusiasm for infrastructure, as an investment vehicle as well as an area in which he has an abiding technical interest. As quoted in a BusinessWeek article, Cheriton said: “I think there is an opportunity to go back and say, ‘Gee, I think there’s lot of room for improvement in the infrastructure.’ ”

Reinforcing that point, he noted that technology infrastructure today is predicated on ideas that are about 30 years old. The network was the place to start the infrastructure refurbishment, Cheriton believed, and Arista Networks grew from that conviction.

But Cheriton hasn’t stopped there. He also founded a company called Optumsoft, about which not much is known. On its website, Optumsoft is described as an early-stage startup company “taking distributed computing and distributed software development mainstream.” Quoting from the website:

Recent advancements in multi-core computing systems, coupled with the ever increasing functional and performance requirements of software has created an exciting market opportunity for addressing the programmatic and architectural issues involved in modern software development. Optumsoft is addressing this growing market with a novel technology approach that is transparent, scalable, and portable, resulting in significant improvement to the development and maintenance of distributed/parallel structured software systems. Early production usage by commercial clients has validated the technology and value proposition.

Last fall, an anonymous source suggested on Quora that what Optumsoft was building related to “how to structure object-oriented RPC in a way that makes it easy to build robust systems.  The technology behind Arista’s EOS is based on some of these ideas, as was software structure at a previous startup, Kealia.  The technology includes an IDL and a C++ runtime, similar to what you’d get using CORBA.”
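
For readers who never touched CORBA, the pattern the commenter describes is easy to sketch in generic terms. The example below is not Optumsoft’s technology, Arista’s EOS, or anything derived from them; it is just a minimal, hypothetical illustration of object-oriented RPC behind a generated stub, in which the caller invokes ordinary-looking methods while a runtime handles marshaling and transport. Every name in it is invented.

class OrderServiceProxy:
    """Hypothetical client stub of the kind an IDL compiler might generate."""
    def __init__(self, transport):
        self._transport = transport   # connection object supplied by the (imagined) runtime

    def place_order(self, item_id, quantity):
        # Marshal the call, ship it to the remote object, return the unmarshaled reply.
        return self._transport.invoke('OrderService.place_order',
                                      {'item_id': item_id, 'quantity': quantity})

class InProcessTransport:
    """Stand-in transport that dispatches to a local object, so the example actually runs."""
    def __init__(self, servant):
        self._servant = servant

    def invoke(self, method, args):
        _, operation = method.split('.')
        return getattr(self._servant, operation)(**args)

class OrderServiceImpl:
    def place_order(self, item_id, quantity):
        return 'ordered %d x %s' % (quantity, item_id)

proxy = OrderServiceProxy(InProcessTransport(OrderServiceImpl()))
print(proxy.place_order('sku-42', 3))   # looks like a local call; a real runtime would remote it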

Nebula and Tintri

On the investment side, Cheriton and Bechtolsheim have put money into Nebula, which has venture-capital backing from Kleiner Perkins Caufield & Byers and Highland Capital Partners. Built on OpenStack, the Nebula Enterprise Cloud Appliance is designed to provision and configure flexible, scalable cloud-computing infrastructure. Although it doesn’t say so on the Nebula website, previous reports indicated that Arista’s networking technology is included in the Nebula appliance.

According to the BusinessWeek article,  Cheriton also has a stake in Tintri, co-founded by Kieran Harty and Mark Gritter. Harty was EVP of R&D at VMware for seven years, and Gritter was one of the first of Cheriton’s employees at Kealia. They’ve assembled a PhD-laden engineering team that has developed a virtual-machine-aware storage appliance designed for virtualized environments, which the company says have been underserved by older storage technology that apparently contributes to “VM stall.”

Another early-stage investment that Cheriton made was in Aster Data Systems, a purveyor of a massively parallel DBMS that runs on clustered commodity servers. Already a minority owner of Aster, Teradata bought the 89% of the company it didn’t own for $263 million last year.

Cheriton has made bets on infrastructure, and he’ll likely make others. It’s an encouraging sign for those of us who gravitate to that part of the industry.

(**No, I am not on the list, but thanks for asking.)

HP’s Project Voyager Alights on Server Value

Hewlett-Packard earlier this week announced the HP ProLiant Generation 8 (Gen8) line of servers, based on the HP ProActive Insight architecture. The technology behind the architecture and the servers results from Project Voyager, a two-year initiative to redefine data-center economics by automating every aspect of the server lifecycle.

You can read the HP press release on the announcement, which covers all the basics, and you also can peruse coverage at a number of different media outposts online.

Voyager Follows Moonshot and Odyssey

The Project Voyager-related announcement follows Project Moonshot and Project Odyssey announcements last fall. Moonshot, you might recall, related to low-energy computing infrastructure for web-scale deployments, whereas Odyssey was all about unifying mission-critical computing — encompassing Unix and x86-based Windows and Linux servers — in one system.

A $300-million, two-year program that yielded more than 900 patents, Project Voyager’s fruits, as represented by the ProActive Insight architecture, will span the entire HP Converged Infrastructure.

Intelligence and automation are the buzzwords behind HP’s latest server push. By enabling servers to “virtually take care of themselves,” HP is looking to reduce data-center complexity and cost, while increasing system uptime and boosting compute-related innovation. In support of the announcement, HP culled assorted facts and figures to assert that savings from the new servers can be significant across various enterprise deployment scenarios.

Taking Care of Business

In taking care of its customers, of course, HP is taking care of itself. HP says it tested the ProLiant servers in more than 100 real-world data centers, and that they include more than 150 client-inspired design innovations. That process was smart, and so were the results, which not only speak to real needs of customers, but also address areas that are beyond the purview of Intel (or AMD).

The HP launch eschewed emphasis on system boards, processors, and “feeds and speeds.” While some observers wondered whether that decision was taken because Intel had yet to launch its latest Xeon chips, the truth is that HP is wise to redirect the value focus away from chip performance and toward overall system and data-center capabilities.

Quest for Sustainable Value, Advantage 

Processor performance, including speeds and feeds, is the value-added purview of Intel, not of HP. All system vendors ultimately get the same chips from Intel (or AMD). They really can’t differentiate on the processor, because the processor isn’t theirs. Any gains they get from being first to market with a new Intel processor architecture will be evanescent.

They can, however, differentiate more sustainably around and above the processor, which is what HP has done here. Certainly, a lot of value-laden differentiation has been created, as the 900 patent filings attest. In areas such as management, conservation, and automation, HP has found opportunity not only to innovate, but also to make a compelling argument that its servers bring unique benefits into customer data centers.

With margin pressure unlikely to abate in server hardware, HP needed to make the sort of commitment and substantial investment that Project Voyager represented.

Questions About Competition, Patents

From a competitive standpoint, however, two questions arise. First, how easy (or hard) will it be for HP’s system rivals to counter what HP has done, thereby mitigating HP’s edge? Second, what sort of strategy, if any, does HP have in store for its Voyager-related patent portfolio? Come to think of it, those questions — and the answers to them — might be related.

As a final aside, the gentle folks at The Register inform us that HP’s new series of servers is called the ProLiant Gen8 rather than ProLiant G8 — the immediate predecessors are called ProLiant G7 (for Generation 7) — because the sound “gee-ate” is uncomfortably similar to a slang term for “penis” in Mandarin.

Presuming that to be true, one can understand why HP made the change.

Brocade Engages Qatalyst Again, Hopes for Different Result

The networking industry’s version of Groundhog Day resurfaced late last week when the Wall Street Journal published an article in which “people familiar with the matter” indicated that Brocade Communications Systems was up for sale — again.

Just like last time, investment-banking firm Qatalyst Partners, headed by the indefatigable Frank Quattrone, appears to have been retained as Brocade’s agent. Quattrone and company failed to find a buyer for Brocade last time, and many suspect the same fate will befall the principals this time around.

Changed Circumstances

A few things, however, are different from the last time Brocade was put on the block and Qatalyst beat Silicon Valley’s bushes seeking prospective buyers. For one thing, Brocade is worth less now than it was back then. The company’s shares are worth roughly half as much as they were worth during fevered speculation about its possible acquisition back in the early fall of 2009. With a current market capitalization of about $2.15 billion, Brocade would be easier for a buyer to digest these days.

That said, the business case for Brocade acquisition doesn’t seem as compelling now as it was then. The core of its commercial existence, still its Fibre Channel product portfolio, is well on its way to becoming a slow-growth legacy business. What’s worse, it has not become a major player in Ethernet switching subsequent to its $3 billion purchase of Foundry Networks in 2008. Running the numbers, prospective buyers would be disinclined to pay much of a premium for Brocade today unless they held considerable faith in the company’s cloud-networking vision and strategy, which isn’t at all bad but isn’t assured to succeed.

Unfortunately, another change is that fewer prospective buyers would seem to be in the market for Brocade these days. Back in 2009, Dell, HP, Oracle, and IBM all were mentioned as possible acquirers of the company. One would be hard pressed to devise a plausible argument for any of those vendors to make a play for Brocade now.

Dell is busily and happily assimilating and integrating Force10 Networks; HP is still trying to get its networking house in order and doesn’t need the headaches and overlaps an acquisition of Brocade would entail; IBM is content to stand pat for now with its BLADE Network Technologies acquisition; and, as for Oracle, Larry Ellison was adamant that he wanted no part of Brocade. Admittedly, Ellison is known for his shrewdness and occasional reverses, but he sure seemed convincing regarding Oracle’s position on Brocade.

Sorting Out the Remaining Candidates

So, that leaves, well, who exactly? Some believe Cisco might buy up Brocade as a consolidation play, but that seems only a remote possibility. Others see Juniper Networks similarly making a consolidation play for Brocade. It could happen, I suppose, but I don’t think Juniper needs a distraction of that scale just as it is reaching several strategic crossroads (delivery of product roadmap, changing industry dynamics, technological shifts in its telco and service-provider markets). No, that just wouldn’t seem a prudent move, with the risks significantly outweighing the potential rewards.

Some say that private-equity players, some still flush with copious cash in their coffers, might buy Brocade. They have the means and the opportunity, but is the motive sufficient? It all comes back to believing that Brocade is on a strategic path that will make it more valuable in the future than it is today. In that regard, the company’s recent past performance, from a valuation standpoint, is not encouraging.

A far-out possibility, one that I would classify as remotely unlikely, envisions EMC buying Brocade. That would signal an abrupt end to the Cisco-EMC partnership, and I don’t see a divorce, were it to transpire, occurring quite so suddenly or irrevocably.

I do, however, see one dark-horse vendor that could make a play for Brocade, and might already have done so.

Could it Be . . . Hitachi?

That vendor? It’s Hitachi Data Systems. Yes, you’re probably wondering whether I’ve partaken of some pre-Halloween magic mushrooms, but I’ve made at least a half-way credible case for a Hitachi acquisition of Brocade previously. With its well-hidden Unified Compute Platform (UCP), Hitachi has aspirations to compete against Cisco, HP, Dell and others in converged data-center infrastructure. Hitachi owns 60 percent of a networking joint venture, with NEC as the junior partner, called Alaxala. If you go to the Alaxala website, you’ll see the joint venture’s current networking portfolio, which is bereft of Fibre Channel switches.

The question is, does Hitachi want them? Today, as indicated on the Hitachi website, the company partners with Brocade, Cisco, Emulex (adapters), and QLogic (adapters) for Fibre Channel networking and with Brocade and QLogic (adapters) for iSCSI networking.

The last time Brocade was said to be on the market, the anticlimactic outcome left figurative egg on the faces of Brocade directors and on those of the investment bankers at Qatalyst, which has achieved a relatively good batting average as a sales agent. Let’s assume — and, believe me, it’s a safe assumption — that media leaks about potential acquisitions typically are carefully contrived occurrences, done either to make a market or to expand a market in which there’s a single bidder that has declared intent and made an offer. In the latter case, the leak is made to solicit a competitive bid and drive up value.

Hold the Egg this Time

I’m not sure what transpired the first time Qatalyst was contracted to find a buyer for Brocade. The only sure inference is that the result (or lack thereof) was not part of the plan. Giving both parties the benefit of the doubt, one would think lessons were learned and they would not want to perform a reprise of the previous script. So, while perhaps last time there wasn’t a bidder or the bidder withdrew its offer after the media leak was made, I think there’s a prospective buyer firmly at the table this time. I also think Brocade wants to see whether a better offer can be had.

My educated guess, with the usual riders and qualifications in effect,* is that perhaps Hitachi or a private-equity concern (Silver Lake, maybe) is at the table. With the leak, Brocade and Qatalyst are playing for time and leverage.

We’ll see, perhaps sooner rather than later.

* I could, alas, be wrong.

Further Intimations of Cisco-EMC Tensions

At the risk of further ad-hominem attacks, I will note again that all might not be well with the relationship between Cisco and EMC, particularly within the context of their VCE joint venture.

I suggested previously that Cisco and EMC might be heading for a not-so-amicable divorce, and I still feel that the organizational and technological auguries point in that direction. The signs at VCE — which provides converged infrastructure comprising Cisco servers and switches, EMC storage, and VMware virtualization — have been inauspicious lately, with layoffs, significant restructuring, and Cisco’s increasingly ardent converged-infrastructure partnership with EMC competitor NetApp adding murk to the mix.

Capellas Loses CEO Title

Now, there’s more to consider. A few weeks ago, as reported by The Register, Michael Capellas was delisted as VCE’s CEO on the company’s website. Capellas is a Cisco board member who was strongly backed by John Chambers for the CEO position at VCE.  The official story from VCE is that nothing has changed at VCE, that Capellas’ role remains the same even though he’s lost the CEO designation and now shares the responsibility of running the company with Frank Hauck, a longtime EMC executive who was appointed VCE president earlier this year.

Perhaps VCE’s official spin on the mahogany-row shuffle is true, but skepticism seems warranted.

In the same piece at The Register that updates us on Capellas’ current status at VCE, we also learn that a source formerly employed by the joint venture says “the Cisco originator of the Vblock concept  is no longer at VCE and neither is the Cisco staffer who ran VCE’s service provider and channel sales operation.”

Mere coincidence, one might contend, and I’m inclined to take that possibility under advisement.

EMC in Server Business?

There’s one other piece of evidence to consider, though. As reported by The Register (yes, again), EMC seems to have moved, via its storage arrays, into the server business. That, as you might expect, could have implications for EMC’s relationship with Cisco and its Unified Computing System (UCS) servers.

Here’s a particularly salient excerpt from The Register article, written by Chris Mellor:

“If you have a VMAX, with flash-enhanced engines, able to run application software, then you wouldn’t need UCS servers to do that job. Were EMC to do a deal with a network supplier, then you wouldn’t need Cisco network switches to hook the application server/array complex up to accessing clients either, and we might have a VMAXblock as well as a Vblock.”

For its part, EMC is ambiguous on whether it’s actually entering the server space. On his blog, EMC staffer Mark Twomey has enjoyed some mischievous fun with the proposition, concluding that EMC’s moves put it in the compute and systems business and “maybe” in the server business.

Such fine distinctions might be lost on server vendors such as HP, Dell, and IBM.

Follow the Money

Let’s remember that EMC is the overwhelming majority shareholder — and, thus, owner — of VMware. As such, the virtualization leader will not do anything to hurt the business prospects of its de facto parent. More to the point, VMware remains in the strategic service of EMC, furthering its big-picture agenda while advancing its own interests.

That combination isn’t just a competitive threat to the likes of HP, IBM, and Dell. Increasingly — indirectly or otherwise — Cisco seems to be in EMC-VMware gunsights, too.

ONF Board Members Call OpenFlow Tune

The concept of software-defined networking (SDN) has generated considerable interest during the last several months.  Although SDNs can be realized in more than one way, the OpenFlow protocol seems to have drawn a critical mass of prospective customers (mainly cloud-service providers with vast data centers) and solicitous vendors.

If you aren’t up to speed with the basics of software-defined networking and OpenFlow, I suggest you visit the Open Networking Foundation (ONF) and OpenFlow websites to familiarize yourself with the underlying ideas. Others have written some excellent articles on the technology, its perceived value, and its potential implications.

In a recent piece he wrote originally for GigaOm, Kyle Forster of Big Switch Networks offers this concise definition:

Concisely Defined

“At its most basic level, OpenFlow is a protocol for server software (a “controller”) to send instructions to OpenFlow-enabled switches, where these instructions give direct control over how those switches forward traffic through the network.

I think of OpenFlow like an x86 instruction set for the network – it’s low-level, but it’s very powerful. Continuing that analogy, if you read the x86 instruction set for the first time, you might walk away thinking it could be useful if you need to build a fancy calculator, but using it to build Linux, Apache, Microsoft Word or World of Warcraft wouldn’t exactly be obvious. Ditto for OpenFlow. It isn’t the protocol that is interesting by itself, but rather all of the layers of software that are starting to emerge on top of it, similar to the emergence of operating systems, development environments, middleware and applications on top of x86.”
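
To make Forster’s description a little more concrete, here is a purely conceptual sketch, written as Python rather than the actual binary wire format, of the kind of instruction a controller pushes down: a rule that matches certain packets and tells the switch what to do with them. The field names are simplified for illustration and do not correspond exactly to any particular OpenFlow version.

# Conceptual only: a controller-installed rule pairs a match on packet headers
# with a list of actions, plus a priority for resolving overlapping rules.
flow_rule = {
    'priority': 100,
    'match':    {'in_port': 1, 'eth_type': 0x0800, 'ipv4_dst': '10.0.0.5'},
    'actions':  [('output', 7)],   # forward matching packets out port 7
}

def matches(rule, packet):
    """Toy matcher: a packet (a dict of header fields) matches if every match field agrees."""
    return all(packet.get(field) == value for field, value in rule['match'].items())

packet = {'in_port': 1, 'eth_type': 0x0800, 'ipv4_dst': '10.0.0.5'}
if matches(flow_rule, packet):
    print(flow_rule['actions'])    # [('output', 7)]

The switch holds a table of such rules and simply forwards according to them; the decision-making lives in the controller software that installs the rules, which is where the layered software Forster mentions comes in.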

Increased Network Functionality, Lower Network Operating Costs

The Open Networking Foundation’s charter summarizes its objectives and the value proposition that advocates of SDN and OpenFlow believe they can deliver:

 “The Open Networking Foundation is a nonprofit organization dedicated to promoting a new approach to networking called Software-Defined Networking (SDN). SDN allows owners and operators of networks to control and manage their networks to best serve their users’ needs. ONF’s first priority is to develop and use the OpenFlow protocol. Through simplified hardware and network management, OpenFlow seeks to increase network functionality while lowering the cost associated with operating networks.”

That last part is the key to understanding the composition of ONF’s board of directors, which includes Deutsche Telekom, Facebook, Google, Microsoft, Verizon, and Yahoo. All of these companies are major cloud-service providers with multiple, sizable data centers. (Yes, Microsoft also is a cloud-technology purveyor, but what it has in common with the other board members is its status as a cloud-service provider that owns and runs data centers.)

Underneath the board of directors are member companies. Most of these are vendors seeking to serve the needs of the ONF board members and similar cloud-service providers that share their business objective: boosting network functionality while reducing the costs associated with network operations.

Who’s Who of Networking

Among the vendor members are a veritable who’s who of the networking industry: Cisco, HP, Juniper, Brocade, Dell/Force10, IBM, Huawei, Nokia Siemens Networks, Riverbed, Extreme, and others. Also members, not surprisingly, are virtualization vendors such as VMware and Citrix, as well as the aforementioned Microsoft. There’s a smattering of SDN/OpenFlow startups, too, such as Big Switch Networks and Nicira Networks.

Of course, membership does not necessarily entail avid participation. Some vendors, including Cisco, likely would not be thrilled at any near-term prospect of OpenFlow’s widespread market adoption. Cisco would be pleased to see the networking status quo persist for as long as possible, and its involvement in ONF probably is more that of vigilant observer than of fervent proponent. In fact, many vendors are taking a wait-and-see approach to OpenFlow. Some members, including Force10, are bearish and have suggested that the protocol is a long way from delivering the maturity and scalability that would satisfy enterprise customers.

Vendors Not In Charge

Still, the board members are steering the ONF ship, not the vendors. Regardless of when OpenFlow or something like it comes of age, the rise of software-defined networking seems inevitable. Servers and storage gear have been virtualized and have become more application-driven, but networks haven’t changed much in the last several years. They’re faster, yes, but they’re still provisioned in the traditional manner, configured rather than programmed. That takes time, consumes resources, and costs money.

Major cloud-service providers, such as those on the ONF board, want network infrastructure to become more elastic, flexible, and dynamic. Vendors will have to respond accordingly, whether with OpenFlow or with some other approach that delivers similar operational outcomes and business benefits.

I’ll be following these developments closely, watching to see how the business concerns of the cloud providers and the business interests of the networking-vendor community ultimately reconcile.

Dell Might Announce Networking Acquisition Next Week

As those of you who regularly visit this dusty outpost of the blogosphere will know, I recently took a shot at handicapping which networking company Dell might acquire. I assembled a field of nine entries, considered the likelihood that Dell would pursue a transaction with each of them, and assigned odds to each scenario.

Before writing that post, I had read and heard mounting speculation about the increasing likelihood of Dell buying its way into networking to consummate and round out integrated data-center solutions (servers, storage, networking) and to compete more effectively against competitors HP and Cisco.

The drumbeat for a networking acquisition by Dell has only gotten louder and more insistent since then. Now the word on the street — and in the pubs, in the cafes, on the patios, at the gyms, and on the fairways — is that Dell might announce its networking buy as early as next week.

Furthermore, multiple sources, spanning the gamut of reliability, tell me that the company Dell will buy is the one I listed as the 5-2 favorite in my mildly diverting handicapping exercise.  (The candidate I listed at 7-2 was alleged to have been in the running, too.)

Nothing is a given, of course, until the announcement goes out over the wires, but the word is that Dell has made its choice, going with the favorite, and will tell the world all about it imminently.

Maybe I should shorten the odds accordingly.