Category Archives: Dell

Dell’s Bid for Data-Center Distinction

Since Dell’s acquisition of Force10 Networks, many of us have wondered how Dell’s networking business, under the leadership of former Cisco Systems executive Dario Zamarian, would chart a course of distinction in data-center networking.

While Zamarian has talked about adding Layer 4-7 network services, presumably through acquisition, what about the bigger picture? We’ve pondered that question, and some have asked it, including one gentleman who posed the query on the blog of Brad Hedlund, another former Ciscoite now at Dell.

Data Center’s Big Picture

The question surfaced in a string of comments that followed Hedlund’s perceptive analysis of Embrane’s recent Heleos unveiling. Specifically, the commenter asked Hedlund to elucidate Dell’s strategic vision in data-center networking. He wanted Hedlund to provide an exposition on how Dell intended to differentiate itself from the likes of Cisco’s UCS/Nexus, Juniper’s QFabric, and Brocade’s VCS.

I quote Hedlund’s response:

 “This may not be the answer you are looking for right now, but .. Consider for a moment that the examples you cite; Cisco UCS/Nexus; Juniper QFabric; Brocade VCS — all are either network only or network centric strategies. Think about that for a second. Take your network hat off for just a minute and consider the data center as a whole. Is the network at the center of the data center universe? Or is network the piece that facilitates the convergence of compute and storage? Is the physical data center network trending toward a feature/performance discussion, or price/performance?

Yes, Dell now has a Tier 1 data center network offering with Force10. And with Force10, Dell can (and will) win in network only conversations. Now consider for a moment what Dell represents as a whole .. a total IT solutions provider of Compute, Storage, Network, Services, and Software. And now consider Dell’s heritage of providing solutions that are open, capable, and affordable.”

Compare and Contrast

It’s a fair enough answer. By reframing the relevant context to encompass the data center in its entirety, rather than just the network infrastructure, Dell can offer an expansive, value-based, one-stop narrative that its rivals (at least those cited by the questioner) cannot match on their own.

Let’s consider Cisco. For all its work with EMC/VMware and NetApp on Vblocks and FlexPods, respectively, Cisco does not provide its own storage technologies for converged infrastructure. Juniper and Brocade are pure networking vendors, dependent on partners for storage, compute, and complementary software and services.

HP, though not cited by the commenter in his question, is one Dell rival that can offer the same pitch. Like Dell, HP offers data-center compute, storage, networking, software, and services. It’s true, though, that HP also resells networking gear, notably Brocade’s Fibre Channel storage-networking switches. The same, of course, applies to Dell, which also continues to resell Brocade’s Fibre Channel switches and maintains — at least for now — a nominal relationship with Juniper.

IBM also warrants mention. Its home-grown networking portfolio is restricted to the range of products it obtained through its acquisition of Blade Network Technologies last year. Like HP, but to a greater degree, IBM resells and OEMs networking gear from other vendors, including Brocade and Juniper. It OEMs some of its storage portfolio from NetApp, but it also has a growing stable of orchestration and management software, and it definitely has a prodigious services army.

Full-Course Fare 

Caveats aside, Dell can tell a reasonably credible story about its ability to address the full range of data-center requirements. Dell’s success with that strategy will depend not only on its sales execution, but also on its capacity to continually deliver high-quality solutions across the gamut of compute, storage, networking, software, and services. Offering a moderately tasty data-center repast won’t be good enough. If Dell wants customers to patronize it and return for more, it must deliver a savory menu spanning every course of the meal.

To his credit, Hedlund acknowledges that Dell must be “capable.” He also notes that Dell must be open and affordable. To be sure, Dell doesn’t have the data-center brand equity to extract the proprietary entitlements derived from vendor lock-in, certainly not in the networking sphere, where even Cisco is finding that game to be harder work these days.

Dell, HP, and IBM each might be able to craft a single-vendor narrative that spans the entire data center, but those pitches are only as credible as the solutions the vendors deliver. For many customers, a multivendor infrastructure, especially in a truly interoperable standards-based world, might be preferable to a soup-to-nuts solution from a single vendor. That’s particularly true if the single-vendor alternative has glaring deficiencies and weaknesses, or if it comes with perpetual proprietary overhead and constraints.

Still Early

I think the real differentiation isn’t so much in whether data-center solutions are delivered by a single vendor or by multiple vendors. I suspect the meaningful differentiation will be delivered in how those environments are further virtualized, automated, orchestrated, and managed as coherent unified entities.

Dell has bought itself a seat at the table where that high-stakes game will unfold. But it isn’t alone, and the big cards have yet to be played.

Embrane Emerges from Stealth, Brings Heleos to Light

I had planned to write about something else today — and I still might get around to it — but then Embrane came out of stealth mode. I feel compelled to comment, partly because I have written about the company previously, but also because what Embrane is doing deserves notice.

Embrane’s Heleos

With regard to the aforementioned post, which dealt with Dell acquisition candidates in Layer 4-7 network services, I am now persuaded that Dell is more likely to pull the trigger on a deal for an A10 Networks, let’s say, than it is to take a more forward-looking leap at venture-funded Embrane. That’s because I now know about Embrane’s technology, product positioning, and strategic direction, and also because I strongly suspect that Dell is looking for a purchase that will provide more immediate payback within its installed base and current strategic orientation.

Still, let’s put Dell aside for now and focus exclusively on Embrane.

The company’s founders, former Andiamo-Cisco lads Dante Malagrinò and Marco Di Benedetto, have taken their company out of the shadows and into the light with their announcement of Heleos, which Embrane calls “the industry’s first distributed software platform for virtualizing layer 4-7 network services.” What that means, according to Embrane, is that cloud service providers (CSPs) and enterprises can use Heleos to build more agile networks to deliver cloud-based infrastructure as a service (IaaS). I can perhaps see the qualified utility of Heleos for the former, but I think the applicability and value for the latter constituency are more tenuous.

Three Wise Men

But I am getting ahead of myself, putting the proverbial cart before the horse. So let’s take a step back and consult some learned minds (including an “ethereal” one) on what Heleos is, how it works, what it does, and where and how it might confer value.

Since the Embrane announcement hit the newswires, I have read expositions on the company and its new product from The 451 Group’s Eric Hanselman, from rock-climbing Ivan Pepelnjak (technical director at NIL Data Communications), and from EtherealMind’s Greg Ferro. Each has provided valuable insight and analysis. If you’re interested in learning about Embrane and Heleos, I encourage you to read what they’ve written on the subject. (Only one of Hanselman’s two pieces for The 451 Group is available publicly online at no charge.)

Pepelnjak provides an exemplary technical description and overview of Heleos. He sets out the problem it’s trying to solve, considers the pros and cons of the alternative solutions (hardware appliances and virtual appliances), expertly explores Embrane’s architecture, examines use cases, and concludes with a tidy summary. He ultimately takes a positive view of Heleos, depicting Embrane’s architecture as “one of the best proposed solutions” he’s seen hitherto for scalable virtual appliances in public and private cloud environments.

Limited Upside

Ferro reaches a different conclusion, but not before setting the context and providing a compelling description of what Embrane does. After considering Heleos, Ferro ascertains that its management of IP flows equates to “flow balancing as a form of load balancing.” From all that I’ve read and heard, it seems an apt classification. He also notes that Embrane, while using flow management, is not an “OpenFlow/SDN business.” Although I see conceptual similarities between what Embrane is doing and what OpenFlow does, I agree with Ferro, if only because, as I understand it, OpenFlow reaches no higher than the network layer. I suppose the same is true for SDN, but this is where ambiguity enters the frame.
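
To make the flow-balancing idea more concrete, here is a minimal, hypothetical sketch (not Embrane’s implementation, just an illustration of the concept) of how a flow-level balancer might pin each IP flow, identified by its 5-tuple, to a backend so that every packet in the flow takes the same path. The backend names are invented for the example.

```python
import hashlib
from typing import List, Tuple

# A flow is identified by its 5-tuple: src IP, dst IP, src port, dst port, protocol.
Flow = Tuple[str, str, int, int, str]

def pick_backend(flow: Flow, backends: List[str]) -> str:
    """Hash the 5-tuple so every packet of a given flow maps to the same backend."""
    key = "|".join(str(field) for field in flow).encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return backends[digest % len(backends)]

# Illustrative backends behind a virtual service; the names are hypothetical.
backends = ["vsrv-web-1", "vsrv-web-2"]
flow = ("10.0.0.5", "192.0.2.10", 49152, 443, "tcp")
print(pick_backend(flow, backends))  # same answer every time this flow is seen
```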

Even as I wrote this piece, there was a kerfuffle on Twitter as to whether or to what extent Embrane’s Heleos can be categorized as the latest manifestation of SDN. (Hours later, at post time, this vigorous exchange of views continues.)

That’s an interesting debate — and I’m sure it will continue — but I’m most intrigued by the business and market implications of what Embrane has delivered. On that score, Ferro sees Embrane’s platform play as having limited upside, restricted to large cloud-service providers with commensurately large data centers. He concludes there’s not much here for enterprises, a view with which I concur.

Competitive Considerations

Hanselman covers some of the same ground that Ferro and Pepelnjak traverse, but he also expends some effort examining the competitive landscape that Embrane is entering. Because Embrane is delivering a virtualization platform for network services, it will be up against Layer 4-7 stalwarts such as F5 Networks, A10 Networks, Riverbed/Zeus, Radware, Brocade, Citrix, and Cisco, among others. F5, the market leader, already recognizes and is acting upon some of the market and technology drivers that doubtless inspired the team that brought Heleos to fruition.

With that in mind, I wish to consider Embrane’s business prospects.

Embrane closed a Series B round of $18 million in August. It was led by New Enterprise Associates and included the involvement of Lightspeed Venture Partners and North Bridge Venture Partners, both of which participated in a $9-million Series A round in March 2010.

To determine whether Embrane is a good horse to back (hmm, what’s with the horse metaphors today?), one has to consider the applicability of its technology to its addressable market — very large cloud-service providers — and then also project its likelihood of providing a solution that is preferable and superior to alternative approaches and competitors.

Counting the Caveats

While I tend to agree with those who believe Embrane will find favor with at least some large cloud-service providers, I wonder how much favor there is to find. There are three compelling caveats to Embrane’s commercial success:

  1. L4-7 network services, while vitally important to cloud service providers and large enterprises, represent a much smaller market than L2-L3 networking, virtualized or otherwise. Just as a benchmark, Dell’Oro reported earlier this year that the L2-3 Ethernet Switch market would be worth approximately $25 billion in 2015, with the L4-7 application delivery controller (ADC) market expected to reach more than $1.5 billion, though the virtual-appliance segment is expected to show the most growth in that space. Some will say, accurately, that L4-7 network services are growing faster than L2-3 networking. Even so, the gap in size remains notable, which is why SDN and OpenFlow have been drawing so much attention in an increasingly virtualized and “cloudified” world.
  2. Embrane’s focus on large-scale cloud service providers, and not on enterprises (despite what’s stated in the press release), while rational and perfectly understandable, further circumscribes its addressable market.
  3. F5 Networks is a tough competitor, more agile and focused than a Cisco Systems, and will not easily concede customers or market share to a newcomer. Embrane might have to pick up scraps that fall to the floor rather than feasting at the head table. At this point, I don’t think F5 is concerned about Embrane, though that could change if Embrane can use NaviSite — its first customer, now owned by TimeWarner Cable — as a reference account and validator for further business among cloud service providers.

Notwithstanding those reservations, I look forward to seeing more of Embrane as we head into 2012. The company has brought a creative approach and an innovative platform architecture to market, a higher-layer counterpart and analog to what’s happening further down the stack with SDN and OpenFlow.

BMC Still Likelier to Buy than to be Bought

After reading a recent Network Computing piece on BMC Software, it struck me that the management-software purveyor finds itself in a Darwinian dilemma: acquire or be acquired.

If it chooses to acquire, something to which it has not been averse previously, BMC might wish to make a play in enterprise mobility management (EMM) or mobile device management (MDM). As the article at Network Computing explains, that is a current area of need for BMC.  There’s no shortage of fish in that pond, and BMC is likely to find one at the right price.

Conversely, BMC might decide that it can’t compete in the long run with much bigger systems-management rivals such as IBM, HP, Microsoft, and Oracle. Even as BMC continues its transition toward defining itself as a multiplatform, hardware-neutral cloud-management vendor, it might conclude that the odds and resources stacked against it are too great to overcome.

Dell Could Come Knocking

That, though, is by no means inevitable. The company has been independent for a long time — about 31 years, if we’re counting — and it has been subject to almost as many takeover rumors in the last few years as has F5 Networks. Still, like F5, it remains an independent company, and it might continue to do so indefinitely.

Nonetheless, if BMC finally chose to entertain a buyer, Dell might be at the front of the queue. Yes, we know that Dell is shopping for other goods — Dario Zamarian, Dell’s networking GM and SVP, has suggested that a purchase in L4-L7 network services might be forthcoming — and BMC’s price tag might be a bit steep (its market capitalization is about $6 billion).

Then again, Dell sees itself as an up-and-coming player in converged data-center infrastructure, and BMC offers management-software capabilities that Dell might need if it is to weave a compelling cloud-management narrative.

Intangibles and Existing Partnership

As for intangibles, Dell and BMC are very familiar with one another. The companies have partnered since 2002, working to accelerate IT deployment and configuration in a growing number of data centers. Dell has been a BMC customer for many years, too. Last and least, they’re both Texas-based companies.

The current arrangement between the two companies involves integration of Dell’s Advanced Infrastructure Manager (AIM) with BMC’s Atrium Orchestrator. It also encompasses BMC Asset Management as well as integration between BMC Server Automation (part of the BMC BladeLogic Automation Suite) and the Dell Lifecycle Controller.

If Dell were to acquire BMC, it obviously would want to squeeze more from the marriage. One possible scenario would involve Dell recreating and expanding upon the sort of engagement BMC has with Cisco pertaining to the latter’s Unified Computing System (UCS).

Congruent Messages

In this case, though, BMC’s software would be wedded to Dell’s evolving Virtual Integrated System (VIS). A lot of the marketing language Dell uses on its website is uncannily similar to the sort of pitch BMC makes for its cloud-management software. Both companies talk about automating and simplifying data-center environments, they both emphasize management of physical and virtual infrastructure, and they both stress the openness of their respective architectures, especially the ability to manage multiplatform (and multivendor) hardware and software.

In selling itself to Dell, though, BMC would be walking away from its relationship with Cisco, and its partnerships with some others, too. What’s more, Dell would assume ownership of some parts of the BMC business, such as mainframe-management software, that might not seem a great fit, at least at first glance.  Still, a Dell-BMC combination seems more plausible than fanciful.

If I were to wager on whether BMC will buy or be bought, though, it’s probably easier to imagine it buying an EMM or MDM vendor than to envision it getting scooped up at a potentially considerable premium by Dell (or another vendor). Even so, either outcome is within the realm of rational deduction.

Rackspace’s Bridge Between Clouds

OpenStack has generated plenty of sound and fury during the past several months, and, with sincere apologies to William Shakespeare, there’s evidence to suggest the frenzied activity actually signifies something.

Precisely what it signifies, and how important OpenStack might become, is open to debate, of which there has been no shortage. OpenStack is generally depicted as an open-source cloud operating system, but that might be a generous interpretation. On the OpenStack website, the following definition is offered:

“OpenStack is a global collaboration of developers and cloud computing technologists producing the ubiquitous open source cloud computing platform for public and private clouds. The project aims to deliver solutions for all types of clouds by being simple to implement, massively scalable, and feature rich. The technology consists of a series of interrelated projects delivering various components for a cloud infrastructure solution.”

Just for fun and giggles (yes, that phrase has been modified so as not to offend reader sensibilities), let’s parse that passage, shall we? In the words of the OpenStackers themselves, their project is an open-source cloud-computing platform for public and private clouds, and it is reputedly simple to implement, massively scalable, and feature rich. Notably, it consists of a “series of interrelated projects delivering various components for a cloud infrastructure solution.”

Simple for Some

Given that description, especially the latter reference to interrelated projects spawning various components, one might wonder exactly how “simple” OpenStack is to implement and by whom. That’s a question others have raised, including David Linthicum in a recent piece at InfoWorld. In that article, Linthicum notes that undeniable vendor enthusiasm and burgeoning market momentum accrue to OpenStack — the community now has 138 member companies (and counting), including big-name players such as HP, Dell, Intel, Rackspace, Cisco, Citrix, Brocade, and others — but he also offers the following caveat:

“So should you consider OpenStack as your cloud computing solution? Not on its own. Like many open source projects, it takes a savvy software vendor, or perhaps cloud provider, to create a workable product based on OpenStack. The good news is that many providers are indeed using OpenStack as a foundation for their products, and most of those are working, or will work, just fine.”

Creating Value-Added Services

Meanwhile, taking issue with a recent InfoWorld commentary by Savio Rodrigues — who argued that OpenStack will falter while its open-source counterpart Eucalyptus will thrive — James Urquhart, formerly of Cisco and now VP of product strategy at enStratus, made this observation:

“OpenStack is not a cloud service, per se, but infrastructure automation tuned to cloud-model services, like IaaS, PaaS and SaaS. Install OpenStack, and you don’t get a system that can instantly bill customers, provide a service catalog, etc. That takes additional software.

What OpenStack represents is the commodity element of cloud services: the VM, object, server image and networking management components. Yeah, there is a dashboard to interact with those commodity elements, but it is not a value-add capability in-and-of itself.

What HP, Dell, Cisco, Citrix, Piston, Nebula and others are doing with OpenStack is creating value-add services on top (or within) the commodity automation. Some focus more on “being commodity”, others focus more on value-add, but they are all building on top of the core OpenStack projects.”
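
Urquhart’s point about “commodity automation” is easier to see from an operator’s seat. The sketch below is a minimal, hypothetical example using the present-day openstacksdk client; the cloud name, image, flavor, and network are placeholders, and the billing, catalog, and other value-add layers he mentions would sit above calls like these.

```python
import openstack

# Connect using credentials defined in clouds.yaml (the cloud name is illustrative).
conn = openstack.connect(cloud="example-private-cloud")

# The commodity building blocks: images, flavors, networks, and server instances.
image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# Boot a VM; everything above this (billing, catalogs, tenant portals) is value-add.
server = conn.compute.create_server(
    name="demo-instance",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)  # e.g. ACTIVE once the instance has been scheduled and booted
```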

New Revenue Stream for Rackspace

All of which brings us, in an admittedly roundabout fashion, to Rackspace’s recent announcement of its Rackspace Cloud Private Edition, a packaged version of OpenStack components that can be used by enterprises for private-cloud deployments. This move makes sense for Rackspace on a couple of levels.

First off, it opens up a new revenue stream for the company. While Rackspace won’t try to make money on the OpenStack software or the reference designs (which for now place a strong emphasis on Dell servers and Cisco networking gear, though bare-bones Open Compute servers are likely to be embraced before long), it will provide value-added, revenue-generating managed services to customers of Rackspace Cloud Private Edition. These managed services will comprise installation of OpenStack updates, analysis of system issues, and assistance with specific questions relating to systems engineering. Some security-related services also will be offered. While the reference architecture and the software are available now, Rackspace’s managed services won’t be available until January.

Building a Bridge

The launch of Rackspace Cloud Private Edition is a diversification initiative for Rackspace, which hitherto has made its money by hosting and managing applications and computing services for others in its own data centers. The OpenStack bundle takes it into the realm of providing managed services in its customers’ data centers.

As mentioned above, this represents a new revenue stream for Rackspace, but it also provides a technological bridge that will allow customers who aren’t ready for multi-tenant public cloud services today to make an easy transition to Rackspace’s data centers at some future date. It’s a smart move, preventing prospective customers from moving to another platform for private cloud deployment, ensuring in the process that said customers don’t enter the orbit of another vendor’s long-term gravitational pull.

The business logic coheres. For each customer engagement, Rackspace gets a payoff today, and potentially a larger one at a later date.

Assessing Dell’s Layer 4-7 Options

As it continues to integrate and assimilate its acquisition of Force10 Networks, Dell is thinking about its next networking move.

Based on what has been said recently by Dario Zamarian, Dell’s GM and SVP of networking, the company definitely will be making that move soon. In an article covering Dell’s transition from box pusher to data-center and cloud contender, Zamarian told Fritz Nelson of InformationWeek that “Dell needs to offer Layer 4 and Layer 7 network services, citing security, load balancing, and overall orchestration as its areas of emphasis.”

Zamarian didn’t say whether the move into Layer 4-7 network services would occur through acquisition, internal development, or partnership. However, as I invoke deductive reasoning that would make Sherlock Holmes green with envy (or not), I think it’s safe to conclude an acquisition is the most likely route.

F5 Connection

Why? Well, Dell already has partnerships that cover Layer 4-7 services. F5 Networks, the leader in application-delivery controllers (ADCs), is a significant Dell partner in the Layer 4-7 sphere. Dell and F5 have partnered for 10 years, and Dell bills itself as the largest reseller of F5 solutions. If you consider what Zamarian described as Dell’s next networking priority, F5 certainly fits the bill.

There’s one problem. F5 probably isn’t selling at any price Dell would be willing to pay.  As of today, F5 has a market capitalization of more than $8.5 billion. Dell has the cash, about $16 billion and counting, to buy F5 at a premium, but it’s unlikely Dell would be willing to fork over more than $11 billion — which, presuming mutual interest, might be F5’s absolute minimum asking price — to close the deal. Besides, observers have been thinking F5 would be acquired since before the Internet bubble of 2000 burst. It’s not likely to happen this time either.

Dell could see whether one of its other partners, Citrix, is willing to sell its NetScaler business. I’m not sure that’s likely to happen, though. I definitely can’t envision Dell buying Citrix outright. Citrix’s market cap, at more than $13.7 billion, is too high, and there are pieces of the business Dell probably wouldn’t want to own.

Shopping Not Far From Home?

Who else is in the mix? Radware is an F5 competitor that Dell might consider, but I don’t see that happening. Dell’s networking group is based in the Bay Area, and I think they’ll be looking for something closer to home, easier to integrate.

That brings us to F5 rival A10 Networks. Force10 Networks, which Dell now owns, had a partnership with A10, and there’s a possibility Dell might inherit and expand upon that relationship.

Then again, maybe not. Generally, A10 is seen as a purveyor of cost-effective ADCs. It is not typically perceived as an innovator and trailblazer, and it isn’t thought to have the best solutions for complex enterprise or data-center environments, exactly the areas where Dell wants to press its advantage. It’s also worth bearing in mind that A10 has been involved in exchanges of not-so-friendly litigious fire — yes, lawsuits volleyed back and forth furiously — with F5 and others.

All in all, A10 doesn’t seem a perfect fit for Dell’s needs, though the price might be right.

Something Programmable 

Another candidate, one that’s quite intriguing in many respects, is Embrane. The company is bringing programmable network services, delivered on commodity x86 servers, to the upper layers of the stack, addressing many of the areas in which Zamarian expressed interest. Embrane is focusing on virtualized data centers where Dell wants to be a player, but initially its appeal will be with service providers rather than with enterprises.

In an article written by Stacey Higginbotham and published at GigaOM this summer, Embrane CEO Dante Malagrinò explained that his company’s technology would enable hosting companies to provide virtualized services at Layers 4 through 7, including load balancing, firewalls, and virtual private networking (VPN), among others.

Some of you might see similarities between what Embrane is offering and OpenFlow-enabled software-defined networking (SDN). Indeed, there are similarities, but, as Embrane points out, OpenFlow promises network virtualization and programmability at Layers 2 and 3 of the stack, not at Layers 4 through 7.

Higher-Layer Complement to OpenFlow

Dell, as we know, has talked extensively about the potential of OpenFlow to deliver operational cost savings and innovative services to data centers at service providers and enterprises. One could see what Embrane does as a higher-layer complement to OpenFlow’s network programmability. Both technologies take intelligence away from specialized networking gear and place it at the edge of the network, running in software on industry-standard hardware.
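
As a rough illustration of that division of labor, here is a conceptual sketch (not any vendor’s API) of the kind of match/action flow table an OpenFlow-style controller programs into a switch. The match fields are Layer 2-3 attributes (OpenFlow 1.0 also allows transport-port matching), and the actions are forwarding decisions; services such as load balancing or firewalling, Embrane’s territory, would be built above entries like these.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class FlowEntry:
    match: Dict[str, object]   # L2/L3 (and transport-port) fields to match on
    actions: List[str]         # forwarding decisions: output to a port, drop, etc.
    priority: int = 0

@dataclass
class FlowTable:
    entries: List[FlowEntry] = field(default_factory=list)

    def lookup(self, packet: Dict[str, object]) -> Optional[FlowEntry]:
        """Return the highest-priority entry whose match fields all agree with the packet."""
        candidates = [
            e for e in self.entries
            if all(packet.get(k) == v for k, v in e.match.items())
        ]
        return max(candidates, key=lambda e: e.priority, default=None)

# A controller running on a commodity server would push entries like these to the switch.
table = FlowTable([
    FlowEntry(match={"eth_type": 0x0800, "ipv4_dst": "10.0.1.20", "tcp_dst": 80},
              actions=["output:3"], priority=10),
    FlowEntry(match={}, actions=["drop"], priority=0),  # table-miss entry
])

pkt = {"eth_type": 0x0800, "ipv4_src": "10.0.0.7", "ipv4_dst": "10.0.1.20", "tcp_dst": 80}
entry = table.lookup(pkt)
print(entry.actions if entry else "no match")  # ['output:3']
```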

Interestingly, there aren’t many degrees of separation between the principals at Embrane and Dell’s Zamarian. It doesn’t take much sleuthing to learn that Zamarian knows both Malagrinò and Marco Di Benedetto, Embrane’s CTO. They worked together at Cisco Systems. Moreover, Zamarian and Malagrinò both studied at the Politecnico di Torino, though a decade or so apart.  Zamarian also has connections to Embrane board members.

Play an Old Game, Or Define a New One

In and of themselves, those connections don’t mean anything. Dell would have to see value in what Embrane offers, and Embrane and its backers would have to want to sell. The company announced in August that it had closed an $18-million Series B financing round, led by New Enterprise Associates (NEA). Lightspeed Venture Partners and North Bridge Venture Partners also took part in the round, which followed their initial investments in the company’s $9-million Series A funding.

Embrane’s product has been in beta, but the company planned a commercial launch before the end of this year. Its blog has been quiet since August.

I would be surprised to see Dell acquire F5, and I don’t think Citrix will part with NetScaler. If Dell is thinking about plugging L4-7 holes cost-effectively, it might opt for an acquisition of A10, but, if it’s thinking more ambitiously — if it really is transforming itself into a solutions provider for cloud providers and data centers — then it might reach for something with the potential to establish a new game rather than play at an old one.

Like OpenFlow, Open Compute Signals Shift in Industry Power

I’ve written quite a bit recently about OpenFlow and the Open Networking Foundation (ONF). For a change of pace, I will focus today on the Open Compute Project.

In many ways, even though OpenFlow deals with networking infrastructure and Open Compute deals with computing infrastructure, they are analogous movements, springing from the same fundamental set of industry dynamics.

Open Compute was introduced formally to the world in April. Its ostensible goal was “to develop servers and data centers following the model traditionally associated with open-source software projects.”  That’s true insofar as it goes, but it’s only part of the story. The stated goal actually is a means to an end, which is to devise an operational template that allows cloud behemoths such as Facebook to save lots of money on computing infrastructure. It’s all about commoditizing and optimizing the operational efficiency of the hardware encompassed within many of the largest cloud data centers that don’t belong to Google.

Speaking of Google, it is not involved with Open Compute. That’s primarily because Google had been taking a DIY approach to its data centers since long before Facebook began working on the blueprint for the Open Compute Project.

Google as DIY Trailblazer

For Google, its ability to develop and deliver its own data-center technologies — spanning computing, networking and storage infrastructure — became a source of competitive advantage. By using off-the-shelf hardware components, Google was able to provide itself with cost- and energy-efficient data-center infrastructure that did exactly what it needed to do — and no more. Moreover, Google no longer had to pay a premium to technology vendors that offered products that weren’t ideally suited to its requirements and that offered extraneous “higher-value” (pricier) features and functionality.

Observing how Google had used its scale and its ample resources to fashion its cost-saving infrastructure, Facebook considered how it might follow suit. The goal at Facebook was to save money, of course, but also to mitigate or perhaps eliminate the infrastructure-based competitive advantage Google had developed. Facebook realized that it could never compete with Google at scale in the infrastructure cost-saving game, so it sought to enlist others in the cause.

And so the Open Compute Project was born. The aim is to have a community of shared interest deliver cost-saving open-hardware innovations that can help Facebook scale its infrastructure at an operational efficiency approximating Google’s. If others besides Facebook benefit, so be it. That’s not a concern.

Collateral Damage

As Facebook seeks to boost its advertising revenue, it is effectively competing with Google. The search giant still derives nearly 97 percent of its revenue from advertising, and its Google+ is intended to distract if not derail Facebook’s core business, just as Google Apps is meant to keep Microsoft focused on protecting one of its crown jewels rather than on allocating more corporate resources to search and search advertising.

There’s nothing particularly striking about that. Cloud service providers are expected to compete against each other by developing new revenue-generating services and by achieving new cost-saving operational efficiencies. In that context, the Open Compute Project can be seen, at least in one respect, as Facebook’s open-source bid to level the infrastructure playing field and undercut, as previously noted, what has been a Google competitive advantage.

But there’s another dynamic at play. As the leading cloud providers with their vast data centers increasingly seek to develop their own hardware infrastructure — or to create an open-source model that facilitates its delivery — we will witness some significant collateral damage. Those taking the hit, as is becoming apparent, will be the hardware systems vendors, including HP, IBM, Oracle (Sun), Dell, and even Cisco. That’s only on the computing side of the house, of course. In networking, as software-defined networking (SDN) and OpenFlow find ready embrace among the large cloud shops, Cisco and others will be subject to the loss of revenue and profit margin, though how much and how soon remain to be seen.

Who’s Steering the OCP Ship?

So, who, aside from Facebook, will set the strategic agenda of Open Compute? To answer that question, we need only consult the identities of those named to the Open Compute Project Foundation’s board of directors:

  • Chairman/President – Frank Frankovsky, Director, Technical Operations at Facebook
  • Jason Waxman, General Manager, High Density Computing, Data Center Group, Intel
  • Mark Roenigk, Chief Operating Officer, Rackspace Hosting
  • Andy Bechtolsheim, Industry Guru
  • Don Duet, Managing Director, Goldman Sachs

It’s no shocker that Facebook retains the chairman’s role. Facebook didn’t launch this initiative to have somebody else steer the ship.

Similarly, it’s not a surprise that Intel is involved. Intel benefits regardless of whether cloud shops build their own systems, buy them from HP or Dell, or even get them from a Taiwanese or Chinese ODM.

As for the Rackspace representation, that makes sense, too. Rackspace already has OpenStack, open-source software for private and public clouds, and the Open Compute approach provides a logical hardware complement to that effort.

After that, though, the board membership of the Open Compute Project Foundation gets rather interesting.

Examining Bechtolsheim’s Involvement

First, there’s the intriguing presence of Andy Bechtolsheim. Those who follow the networking industry will know that Andy Bechtolsheim is more than an “industry guru,” whatever that means. Among his many roles, Bechtolsheim serves as the chief development officer and co-founder of Arista Networks, a growing rival to Cisco in low-latency data-center switching, especially at cloud-scale web shops and financial-services companies. It bears repeating that Open Compute’s mandate does not extend to network infrastructure, which is the preserve of the analogous OpenFlow.

Bechtolsheim’s history is replete with successes, as a technologist and as an investor. He was one of the earliest investors in Google, which makes his involvement in Open Compute deliciously ironic.

More recently, he disclosed a seed-stage investment in Nebula, which, as Derrick Harris at GigaOM wrote this summer, has “developed a hardware appliance pre-loaded with customized OpenStack software and Arista networking tools, designed to manage racks of commodity servers as a private cloud.” The reference architectures for the commodity servers comprise Dell’s PowerEdge C Micro Servers and servers that adhere to Open Compute specifications.

We know, then, why Bechtolsheim is on the board. He’s a high-profile presence that I’m sure Open Compute was only too happy to welcome with open arms (pardon the pun), and he also has business interests that would benefit from a furtherance of Open Compute’s agenda. Not to put too fine a point on it, but there’s an Arista and a Nebula dimension to Bechtolsheim’s board role at the Open Compute Project Foundation.

OpenStack Angle for Rackspace, Dell

Interestingly, the board presence of both Bechtolsheim and Rackspace’s Mark Roenigk emphasizes OpenStack considerations, as does Dell’s involvement with Open Compute. Dell doesn’t have a board seat — at least not according to the Open Compute website — but it seems to think it can build a business for solutions based on Open Compute and OpenStack among second-tier purveyors of public-cloud services and among those pursuing large private or hybrid clouds. Both will become key strategic markets for Dell as its SMB installed base migrates applications and spending to the cloud.

Dell notably lost a chunk of server business when Facebook chose to go the DIY route, in conjunction with Taiwanese ODM Quanta Computer, for servers in its data center in Prineville, Oregon. Through its involvement in Open Compute, Dell might be trying to regain lost ground at Facebook, but I suspect that ship has sailed. Instead, Dell probably is attempting to ensure that it prevents or mitigates potential market erosion among smaller service providers and enterprise customers.

What Goldman Sachs Wants

The other intriguing presence on the Open Compute Project Foundation board is Don Duet from Goldman Sachs. Here’s what Duet had to say about his firm’s involvement with Open Compute:

“We build a lot of our own technology, but we are not at the hyperscale of Google or Facebook. We are a mid-scale company with a large global footprint. The work done by the OCP has the potential to lower the TCO [total cost of ownership] and we are extremely interested in that.”

Indeed, that perspective probably worries major server vendors more than anything else about Open Compute. Once Goldman Sachs goes this route, other financial-services firms will be inclined to follow, and nobody knows where the market attrition will end, presuming it ends at all.

Like Facebook, Goldman Sachs saw what Google was doing with its home-brewed, scale-out data-center infrastructure, and wondered how it might achieve similar business benefits. That has to be disconcerting news for major server vendors.

Welcome to the Future

The big takeaway for me, as I absorb these developments, is how the power axis of the industry is shifting. The big systems vendors used to set the agenda, promoting and pushing their products and influencing the influencers so that enterprise buyers kept their growth rates on the uptick. Now, though, a combination of factors — widespread data-center virtualization, the rise of cloud computing, a persistent and protracted global economic downturn (which has placed unprecedented emphasis on IT cost containment) — is reshaping the IT universe.

Welcome to the future. Some might like it more than others, but there’s no going back.

Brocade Engages Qatalyst Again, Hopes for Different Result

The networking industry’s version of Groundhog Day resurfaced late last week when the Wall Street Journal published an article in which “people familiar with the matter” indicated that Brocade Communications Systems was up for sale — again.

Just like last time, investment-banking firm Qatalyst Partners, headed by the indefatigable Frank Quattrone, appears to have been retained as Brocade’s agent. Quattrone and company failed to find a buyer for Brocade last time, and many suspect the same fate will befall the principals this time around.

Changed Circumstances

A few things, however, are different from the last time Brocade was put on the block and Qatalyst beat Silicon Valley’s bushes seeking prospective buyers. For one thing, Brocade is worth less now than it was back then. The company’s shares are worth roughly half as much as they were worth during fevered speculation about its possible acquisition back in the early fall of 2009. With a current market capitalization of about $2.15 billion, Brocade would be easier for a buyer to digest these days.

That said, the business case for a Brocade acquisition doesn’t seem as compelling now as it was then. The core of its commercial existence, still its Fibre Channel product portfolio, is well on its way to becoming a slow-growth legacy business. What’s worse, it has not become a major player in Ethernet switching subsequent to its $3 billion purchase of Foundry Networks in 2008. Running the numbers, prospective buyers would be disinclined to pay much of a premium for Brocade today unless they held considerable faith in the company’s cloud-networking vision and strategy, which isn’t at all bad but isn’t assured to succeed.

Unfortunately, another change is that fewer prospective buyers would seem to be in the market for Brocade these days. Back in 2009, Dell, HP, Oracle, and IBM all were mentioned as possible acquirers of the company. One would be hard pressed to devise a plausible argument for any of those vendors to make a play for Brocade now.

Dell is busily and happily assimilating and integrating Force10 Networks; HP is still trying to get its networking house in order and doesn’t need the headaches and overlaps an acquisition of Brocade would entail; IBM is content to stand pat for now with its BLADE Network Technologies acquisition; and, as for Oracle, Larry Ellison was adamant that he wanted no part of Brocade. Admittedly, Ellison is known for his shrewdness and occasional reverses, but he sure seemed convincing regarding Oracle’s position on Brocade.

Sorting Out the Remaining Candidates

So, that leaves, well, who exactly? Some believe Cisco might buy up Brocade as a consolidation play, but that seems only a remote possibility. Others see Juniper Networks similarly making a consolidation play for Brocade. It could happen, I suppose, but I don’t think Juniper needs a distraction of that scale just as it is reaching several strategic crossroads (delivery of product roadmap, changing industry dynamics, technological shifts in its telco and service-provider markets). No, that just wouldn’t seem a prudent move, with the risks significantly outweighing the potential rewards.

Some say that private-equity players, some still flush with copious cash in their coffers, might buy Brocade. They have the means and the opportunity, but is the motive sufficient? It all comes back to believing that Brocade is on a strategic path that will make it more valuable in the future than it is today. In that regard, the company’s recent past performance, from a valuation standpoint, is not encouraging.

A far-out possibility, one that I would classify as remotely unlikely, envisions EMC buying Brocade. That would signal an abrupt end to the Cisco-EMC partnership, and I don’t see a divorce, were it to transpire, occurring quite so suddenly or irrevocably.

I do, however, see one dark-horse vendor that could make a play for Brocade, and might already have done so.

Could it Be . . . Hitachi?

That vendor? It’s Hitachi Data Systems. Yes, you’re probably wondering whether I’ve partaken of some pre-Halloween magic mushrooms, but I’ve made at least a half-way credible case for a Hitachi acquisition of Brocade previously. With its well-hidden Unified Compute Platform (UCP), Hitachi has aspirations to compete against Cisco, HP, Dell and others in converged data-center infrastructure. Hitachi owns 60 percent of a networking joint venture, with NEC as the junior partner, called Alaxala. If you go to the Alaxala website, you’ll see the joint venture’s current networking portfolio, which is bereft of Fibre Channel switches.

The question is, does Hitachi want them? Today, as indicated on the Hitachi website, the company partners with Brocade, Cisco, Emulex (adapters), and QLogic (adapters) for Fibre Channel networking and with Brocade and QLogic (adapters) for iSCSI networking.

The last time Brocade was said to be on the market, the anticlimactic outcome left figurative egg on the faces of Brocade directors and on those of the investment bankers at Qatalyst, which has achieved a relatively good batting average as a sales agent. Let’s assume — and, believe me, it’s a safe assumption — that media leaks about potential acquisitions typically are carefully contrived occurrences, done either to make a market or to expand a market in which there’s a single bidder that has declared intent and made an offer. In the latter case, the leak is made to solicit a competitive bid and drive up value.

Hold the Egg this Time

I’m not sure what transpired the first time Qatalyst was contracted to find a buyer for Brocade. The only sure inference is that the result (or lack thereof) was not part of the plan. Giving both parties the benefit of the doubt, one would think lessons were learned and they would not want to perform a reprise of the previous script. So, while perhaps last time there wasn’t a bidder or the bidder withdrew its offer after the media leak was made, I think there’s a prospective buyer firmly at the table this time. I also think Brocade wants to see whether a better offer can be had.

My educated guess, with the usual riders and qualifications in effect,* is that perhaps Hitachi or a private-equity concern (Silver Lake, maybe) is at the table. With the leak, Brocade and Qatalyst are playing for time and leverage.

We’ll see, perhaps sooner rather than later.

* I could, alas, be wrong.

Platform CEO Discusses IBM Deal, Says Partnerships Unaffected

In an email message addressed to me and a number of other recipients over the weekend, Platform Computing CEO Songnian Zhou referred to a blog post he wrote — he jokingly called it the “world’s most long-winded” — that explains why and how his company’s acquisition by IBM unfolded.

The post covers Platform’s 19-year chronology as well as the big-picture evolution of distributed computing. It’s a good read, well worth checking out. Zhou knows his subject matter well, writes in a refreshingly jargon- and hype-free style, and he covers a lot of ground. What’s more, the post isn’t nearly as prolix as he makes it out to be.

Breaking It Down

As for how the acquisition came about, Zhou explains that it was driven by market dynamics and technological advances:

“The foundation of this acquisition is the ever expanding technical computing market going mainstream. IDC has been tracking this technical computing systems market segment at $14B, or 20% of the overall systems market. It is growing at 8%/year, or twice the growth rate of servers overall. Both IDC and users also point out that the biggest bottleneck to wider adoption is the complexity of clusters and grids, and thus the escalating needs for middleware and management software to hide all the moving parts and just deliver IT as a service. You see, it’s well worth paying a little for management software to get the most out of your hardware. Platform has a single mission: to rapidly deliver effective distributed computing management software to the enterprise. On our own, especially in the early days when going was tough, we have been doing a pretty good job for some enterprises in some parts of the world. But, we are only 536 heroes. Combined with IBM, we can get to all the enterprises worldwide. We have helped our customers to run their businesses better, faster, cheaper. After 19 years, IBM convinced us that there can also be a “better, faster, cheaper” way to help more customers and to grow our business. As they say, it’s all about leverage and scale.”

In a previous post I wrote about the acquisition, I wondered, as have others, about how IBM’s ownership of Platform and its technology might affect the latter’s ability to support heterogeneous systems encompassing servers from IBM’s competitors. Zhou suggests that won’t be a problem, that a post-acquisition Platform “will work even harder to add value to our partners, including IBM’s competitors.”

That said, even assuming that IBM takes a systems-centric view with Platform, continuing to allow the acquired company to support heterogeneous environments, one has to wonder whether Dell, HP and others will be as receptive to Platform as they were before. It’s a fair question, and those vendors, as well as Platform’s installed base of customers, ultimately will provide the answer.

ONF Deadly Serious About OpenFlow-Based SDNs

Yes, I’m back for further cogitation on software-defined networking (SDN) and OpenFlow.

As I wrote in my last post, relating to Cisco’s recent support for OpenFlow, I wasn’t able to attend the Open Networking Summit held last week at Stanford University.  I have, however, been reading coverage of the conference, and I am now convinced of a few fundamental SDN market realities.

Let’s start with who’s steering this particular SDN ship. The Open Networking Foundation (ONF) has been the driving force behind OpenFlow-based SDN. As I’ve written before, perhaps to the point of mind-numbing redundancy, the ONF is controlled not by networking vendors, but by the behemoths of the cloud service-provider community.

Control and the Power 

Networking vendors can be (and are) ONF members, but one needs to appreciate their place in the foundation’s hierarchy.  They are second-class citizens, and they are not setting the agenda. One more time, I will list the “founding and board members” of the ONF: Deutsche Telekom, Verizon, Google, Facebook, Microsoft, and Yahoo. Microsoft is there by dint of its status as a cloud service provider, not because it is a technology vendor.

Any doubts about where control and power reside within the ONF were put to definitive rest in a recap of the third day of the Open Networking Summit provided by Dell’s Art Fewell on the NetworkWorld website:

“ . . . . Open Networking Foundation (ONF) Director Dan Pitt gave an excellent presentation that demonstrated that the ONF put a lot of thought into how they designed and structured the organization to incorporate lessons learned from older standards bodies, software communities and from the devops and open source movements. He noted that the ONF’s charter would not allow technology vendors to serve on the board of directors, but rather it should be governed by the network operators who have to live with the results. Working group chairs are assigned by the board, and a system of checks and balances has been put into place to try to prevent the problems that some standards organizations have become notorious for.”

It’s All About the Money

The message is clear. The network operators know what they want from SDN and OpenFlow, and they believe they know how to get it. What’s more, they don’t want the networking vendors compromising, subverting, or undermining the result.* (*Not that they’d do that sort of thing, of course.)

What, then, is the overriding objective these big network operators have in mind? Well, it’s to save money, as I explained in my previous post. SDN, and especially SDN enabled by an industry-standard protocol such as OpenFlow, is perceived by the major service providers as a means of substantially reducing network-related capital and, more to the point, operating expenditures. Service-provider executives, especially the mahogany-row bean counters, get excited about that sort of thing.

As Stacey Higginbotham notes, recounting an Open Networking Summit address given by a representative of Verizon:

“Stuart Elby, VP and network architecture & technology chief technologist for Verizon Digital Media Services, laid out how the promise of software-defined networking could make the company’s cost curve match its revenue by cutting down on the need for expensive gear that is costly to buy and even more costly to operate. In a conversation before his presentation, Elby explained how Verizon’s network can view every single packet on the network, but how keeping track of those packets is both a big data problem and expensive from a network management perspective.”

Verizon’s Compelling Chart

Verizon is not alone. Every one of the founding players in ONF sees the same business value in OpenFlow-enabled SDN. In the eyes of the ONF’s most powerful players, conventional network infrastructure is holding back substantial business benefits. It’s not personal, but it is business. And it is how and why major tectonic shifts in this industry come about.

Along those lines, Elby presented a visually powerful illustration that makes clear just how big an issue network-related costs are for Verizon. The chart is reproduced in Higginbotham’s article at GigaOM and in Fewell’s piece at NetworkWorld. If you haven’t seen it, I suggest you take a look. It really is worth a thousand words, but I’ll summarize as follows: Verizon’s network operating costs soon will surpass its revenues, resulting in what Verizon quaintly calls a “non-sustainable business case.” Therefore, there is an urgent need for a solution that lowers network-equipment expenditures, through utilization of off-the-shelf hardware, and enables a business case that better aligns operating costs with revenues. Verizon sees SDN and OpenFlow as the ticket to “inexpensive feature insertion for new services and revenue uplift.”

Nor is Verizon unique in this regard. It’s safe to say the others on the ONF board are dealing with variations of the same problem and are seeking similar solutions.

Google Goes Further

Google, for one, isn’t stopping at switches. As Higginbotham explored in an earlier post at GigaOM last week, Google is a fervent proponent of Quagga and the Open Source Routing project. The search giant’s goals are practical, namely “cheaper, highly programmable routers it can use in its (core) network.” Called the Open LSR, Google’s router, as Higginbotham writes, is “an open-source router that consists of a switch made with merchant silicon and running Open vSwitch that talks to a server that has an OpenFlow-based controller and uses Quagga to generate the routing tables and forwarding information.”

As if the theme needs further belaboring, it’s all about taking cost out of network infrastructure. Google is working with others in the service-provider community to make its low-cost routing dream a reality.

It is clear, then, that the largest service providers, and perhaps many smaller ones besides, want to gain more control over their networks and over the costs associated with them. They have constructed the Open Networking Foundation with a clear purpose in mind, they see SDN and OpenFlow as solutions to a clearly articulated business problem, and they seem determined to see it through to fruition.

What About the Enterprise?

What remains to be seen is how willing enterprises will be to go along for the SDN ride. This is a point that was hammered home by Peter Christy of the Internet Research Group, who, as reported by Fewell, told the audience at the Open Networking Summit that SDN and OpenFlow are likely to face significant challenges in cracking the enterprise market. Christy’s points were valid. His most salient observations were that there have been few OpenFlow “killer apps,” and that enterprises do not favor “reproducing the same thing with new technology,” especially if that technology is new and complicated.

He’s right. But we have to remember that the ONF is captained by service providers, and they are not leading their particular SDN charge because they are motivated by altruistic concern for enterprise networks and their stewards. No, for now at least, the ONF’s conception of SDNs will be applicable to the demographic represented by the composition of the ONF board. Enterprises will have to wait, it seems, and that’s probably good news for the established order of networking vendors, especially for Cisco Systems.

Assessing Market Implications

Still, I have to wonder. Christy is correct to note that the enterprise accounts for the “biggest part of the networking market.” Nonetheless, times are changing. As more applications move to the cloud, and to cloud service providers, SDN and presumably OpenFlow are likely to increasingly affect the top and bottom lines of networking vendors.

Those companies — Cisco, Juniper, and all the rest — have to keep a wary eye on SDN developments. Even if networking vendors eventually lose a chunk of business at network service providers, they’ll still have the enterprise, presuming they can position themselves correctly and anticipate change rather than react belatedly to it.

There’s a lot at stake as this story plays out in the months and years ahead.

Update on IBM’s Acquisition of Platform Computing

Despite my best efforts, I have been unable to obtain specific details relating to the price that IBM paid to acquire high-performance computing (HPC) workload-management pioneer Platform Computing. If anything further surfaces on that front, I’ll let you know.

In the meantime, others have made some good observations regarding the logic behind the acquisition and the potential ramifications of the move. Dan Kusnetzky, who has longstanding familiarity with Platform in both vendor and analyst capacities, provides a succinct explanation of what Platform does and then offers the following verdict:

“I believe IBM will be able to take this technology, integrate it into its “Smarter Computing” marketing programs and introduce many organizations to the benefits of harnessing together the power of a large number of systems to tackle very large and complex workloads.

This is a good match.”

Meanwhile, Curt Monash recounts details of a briefing he had with Platform in August. He suspects that IBM acquired Platform for its MapReduce offering, but, as Kusnetzky suggests, I think IBM also sees considerable untapped potential in Platform’s traditional HPC-oriented technical markets, where the company already has an impressive roster of blue-chip customers that have achieved compelling cost savings and time-to-market improvements with its cluster-management and load-sharing software.

There’s a lot of bluster about the cloud in relation to this acquisition, and that undoubtedly is a facet IBM will try to exploit in the future, but today Platform still does a robust business with its flagship software in scientific and technical computing. 

Platform apparently told Monash that it had “close to $100 million in revenue” and about 500 employees. The employee count seems about right, but I suspect the revenue number is exaggerated. According to a CBC news item on the acquisition, market-research firm Branham Group Inc. estimated that Platform generated revenue of about $71.6 million in its 2010 fiscal year. Presuming the Branham figure to be correct, and allowing for modest year-over-year growth, Platform’s fiscal 2011 revenue would more plausibly fall in the range of $75 million to $80 million.

Finally, Ian Lumb, formerly an employee at Platform (as was your humble scribe), considers the potential implications of the acquisition for Platform’s long-heralded capacity to manage heterogeneous systems and workloads for its customers. This is a point that many analysts missed, and Lumb does an excellent job framing the dilemma IBM faces. Ostensibly, as Lumb notes, it will be business as usual for Platform and its support of heterogeneous systems, including those of IBM competitors such as Dell and HP.

But IBM faces a conundrum. Even if it were to choose to continue to support Platform’s heterogeneous-systems approach in deference to customer demand, the practicalities of doing so would prove daunting. Lumb explains why:

“To deliver a value-rich solution in the HPC context, Platform has to work (extremely) closely with the ‘system vendor’. In many cases, this closeness requires that Intellectual Property (IP) of a technical and/or business nature be communicated – often well before solutions are introduced to the marketplace and made available for purchase. Thus Platform’s new status as an IBM entity, has the potential to seriously complicate matters regarding risk, trust, etc., relating to the exchange of IP.

Although it’s been stated elsewhere that IBM will allow Platform measures of post-acquisition independence, I doubt that this’ll provide sufficient comfort for matters relating to IP. While NDAs specific to the new (and independent) Platform business unit within IBM may offer some measure of additional comfort, I believe that technically oriented approaches offer the greatest promise for mitigating concerns relating to risk, trust, etc., in the exchange of IP.”

It will be interesting to see how IBM addresses that challenge. Platform’s competitors, as Lumb writes, already are attempting to capitalize on the issue. 

Can Dell Think Outside the Box?

Michael Dell has derived great pleasure from HP’s apparent decision to spin off its PC business. As he has been telling the Financial Times and others recently, Dell (the company) believes having a PC business will be a critical differentiator as it pulls together and offers complete IT solutions to enterprise, service-provider, and SMB customers.

Hardware Edge?

Here’s what Dell had to say to the Financial Times about his company’s hardware-based differentiation:

 “We are very distinct from some of our competitors. We believe the devices and the hardware still matter as part of the complete, end-to-end solution . . . . Think about the scale economies in our business. As a company spins off its PC business, it goes from one of the top buyers in the world of disk drives and processors and memory chips to not being one of the top five. And that raises the cost of making servers and storage products. Ultimately we believe that presents an enormous opportunity for us and you can be sure we are going to seize it.”

Well, perhaps. I don’t know the intimate details of Dell’s PC economies of scale or its server-business costs, nor do I know what HP’s server-business costs will be when (and if) it eventually spins off its PC business. What I do know, however, is that IBM doesn’t seem to have difficulty competing and selling servers as integral parts of its solutions portfolio; nor does Cisco seem severely handicapped as it grows its server business without a PC product line.

Consequences of Infatuation

I suspect there’s more to Dell’s attachment to PCs than pragmatic dollars-and-cents business logic. I think Michael Dell likes PCs, that he understands them and their business more than he understands the software or services market. If I am right in those assumptions, they don’t suggest that Dell necessarily is wrong to stay in the PC business or that it will fail in selling software and services.

Still, it’s a company mindset that could inhibit Dell’s transition to a world driven increasingly by the growing commercial influence of cloud-service providers, the consumerizaton of IT, the proliferation of mobile devices, and the value inherent in software that provides automation and intelligent management of “dumb” industry-standard hardware boxes.

To be clear, I am not arguing that the “PC is dead.” Obviously, the PC is not dead, nor is it on life support.

In citing market research suggesting that two billion of them will be in use by 2014, Michael Dell is right to argue that there’s still strong demand for PCs worldwide. While tablets are great devices for the consumption of content and media, they are not ideal devices for creating content, such as writing anything longer than a brief email message, crafting a presentation, or working on a spreadsheet. Although many buyers of tablets may not create or supply content, and therefore have no need for a keyboard-equipped PC, I tend to think there still is and will be a substantial market for devices that do more than facilitate the passive consumption of information and entertainment.

End . . . or Means to an End?

Notwithstanding the PC market’s relative health, the salient question here is whether HP or Dell can make any money from the business of purveying them. HP decided it wanted the PC’s wafer-thin margins off its books as it drives a faster transition to software and services, whereas Dell has decided that it can live with the low margins and the revenue infusion that accompanies them. In rationalizing that decision, Michael Dell has said that “software is great, but you have to run it on something.”

There’s no disputing that fact, obviously, but I do wonder whether Dell is philosophically disposed to think outside the box, figuratively and literally. Put another way, does Dell see hardware as a container or receptacle of primary value, or does it see it as a necessary, relatively low-value conduit through which higher-value software-based services will increasingly flow?

I could be wrong, but Michael Dell still seems to see the world through the prism of the box, whether it be a server or a PC.

For me, Dell’s decision to maintain his company’s presence in PCs is beside the point. What’s important is whether he understands where the greatest business value will reside in the years to come, and whether he and his company can remain focused enough to conceive and execute a strategy that will enable them to satisfy evolving customer requirements.

Clarity on HP’s PC Business

Hewlett-Packard continues to contemplate how it should divest its Personal Systems Group (PSG), a $40-billion business dedicated overwhelmingly to sales of personal computers. Although HP hasn’t communicated as effectively as it should have, current indications are that the company will spin off its PC business as a standalone entity rather than sell it to a third party.

That said, the situation remains fluid. HP might yet choose to sell the business, even though Todd Bradley, PSG chieftain, seems adamant that it should be a separate company that he should lead. HP hasn’t been consistent or predictable lately on mobile hardware or PCs, though, so nothing is carved in stone.

Not a PC Manufacturer

No matter what it decides to do, the media should be clearer on exactly what HP will be spinning off or selling. I’ve seen it misreported repeatedly that HP will be selling or spinning off its “PC manufacturing arm” or its “PC manufacturing business.”

That’s wrong. As knowledgeable observers know, HP doesn’t manufacture PCs. Increasingly, it doesn’t even design them in any meaningful way, which is more than partly why HP finds itself in the current dilemma of deciding whether to spin off or sell a wafer-thin-margin business.

HP’s PSG business brands, markets, and sells PCs. But — and this is important to note — it doesn’t manufacture them. The manufacturing of the PCs is done by original design manufacturers (ODMs), most of which originated in Taiwan but now have operations in China and many others countries. These ODMs increasingly provide a lot more than contract manufacturing. They also provide design services that are increasingly sophisticated.

Brand is the Value

A dirty little secret your favorite PC vendor (Apple excluded) doesn’t want you to know is that it doesn’t really do much PC innovation these days. The PC-creation process today operates more along these lines: the brand-name PC vendor visits ODMs in Taiwan, which demonstrate a range of their latest personal-computing prototypes; the vendor chooses some designs and perhaps suggests some modifications; the products are then put through the manufacturing process and ultimately reach market under the vendor’s brand.

That’s roughly how it works. HP doesn’t manufacture PCs. It does scant PC design and innovation, too. If you think carefully about the value that is delivered in the PC-creation process, HP provides its brand, its marketing, and its sales channels. Its value — and hence its margins — are dependent on the premiums its brand can bestow and the volumes its channel can deliver . Essentially, an HP PC is no different from any other PC designed and manufactured by ODMs that provide PCs for the entire industry.

HP and others allowed ODMs to assume a greater share of PC value creation — far beyond simple manufacturing — because they were trying to cut costs. You might recall that cost cutting was a prominent feature of the lean-and-mean Mark Hurd regime at HP. As a result, innovation suffered, and not just in PCs.

Inevitable Outcome

In that context, it’s important to note that HP’s divestment of its low-margin PC business, regardless of whether it’s sold outright or spun off as a standalone entity, has been a long time coming.

Considering the history and the decisions that were made, one could even say it was inevitable.