Category Archives: PCs

Dell Makes Enterprise Moves, Confronts Dilemma

Dell reported its third-quarter earnings yesterday, and reactions to the news generally made for grim reading. The company cannot help but know that it faces a serious dilemma: It must continue an aggressive shift into enterprise solutions while propping up a punch-drunk personal-computer business that is staggered, bloody, and all but beaten.

The word “dilemma” is particularly appropriate in this context. The definition of dilemma is “a situation in which a difficult choice has to be made between two or more alternatives, especially equally undesirable ones.” 

Hard Choices

Dell seems too attached to the PC to give it up, but in the unlikely event that Dell chose to kick the commoditized box to the curb, it would surrender a large, though diminishing, pool of low-margin revenue. The market would react adversely, particularly if Dell were not able to accelerate growth in other areas.

While Dell is growing its revenue in servers and networking, especially the latter, those numbers aren’t rising fast enough to compensate for erosion in what Dell calls “mobility” and “desktop.” What’s more, Dell’s storage business has gone into a funk, with “Dell-owned IP storage revenue” down 3% on a year-to-year basis.

Increased Enterprise Focus

To its credit, Dell seems to recognize that it needs to pull out all the stops. It continues to make acquisitions, most of them related to software, designed to bolster its enterprise-solutions profile. Today, in fact, it announced the acquisition of Gale Technologies, and it also announced that Dario Zamarian, a former Cisco executive who has been serving as VP and GM of Dell Networking, has become vice president and general manager of the newly formed Dell Enterprise Systems & Solutions, “focused on the delivery of converged and enterprise workload topologies and solutions.” Zamarian will report to former HP executive Marius Haas, president of Dell Enterprise Solutions Group.

Zamarian’s former role as VP and GM of Dell Networking will be assumed by Tom Burns, who comes directly from Alcatel-Lucent, where he served as president of that company’s Enterprise Products Group, which included voice, unified communications, networking, and security solutions.

Dell has the cash to make other acquisitions to strengthen its hand in private and hybrid clouds, and we should expect it to do so.  The company would have more cash to make those moves if it were to divest its PC business, but Dell doesn’t seem willing to bite that bullet. 

That would be a difficult move to make — wiping out substantial revenue while eliminating a business that remains a vestigial piece of Dell’s identity — but half measures aren’t in Dell’s long-term interests. It needs to be all-in on the enterprise, and I think it also needs to adopt a software mindset. As long as the PC business is around, I suspect Dell won’t be able to fully and properly make that transition.

Dell’s Steady Progression in Converged Infrastructure

With its second annual Dell Storage Forum in Boston providing the backdrop, Dell made a converged-infrastructure announcement this week.  (The company briefed me under embargo late last week.)

The press release is available on the company’s website, but I’d like to draw attention to a few aspects of the announcement that I consider noteworthy.

First off, Dell now is positioned to offer its customers a full complement of converged infrastructure, spanning server, storage, and networking hardware, as well as management software. For customers seeking a single-vendor, one-throat-to-choke solution, this puts Dell at parity with IBM and HP, while Cisco still must partner with EMC or with NetApp for its storage technology.

Bringing the Storage

Until this announcement, Dell was lacking the storage ingredients. Now, with what Dell is calling the Dell Converged Blade Data Center solution, the company is adding its EqualLogic iSCSI Blade Arrays to Dell PowerEdge blade servers and Dell Force10 MXL blade switching. Dell says this package gives customers an entire data center within a single blade enclosure, streamlining operations and management, and thereby saving money.

Dell’s other converged-infrastructure offering is the Dell vStart 1000. For this iteration of vStart, Dell is including, for the first time, its Compellent storage and Force10 networking gear in one integrated rack for private-cloud environments.

The vStart 1000 comes in two configurations: the vStart 1000m and the vStart 1000v. The packages are nearly identical — PowerEdge M620 servers, PowerEdge R620 management servers, Dell Compellent Series 40 storage, Dell Force10 S4810 ToR networking, plus Brocade 5100 ToR Fibre-Channel switches — but the vStart 1000m comes with Windows Server 2008 R2 Datacenter (with the Hyper-V hypervisor), whereas the vStart 1000v features trial editions of VMware vCenter and VMware vSphere (with the ESXi hypervisor).
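
To make the overlap between the two SKUs explicit, here is a minimal sketch in Python (purely illustrative on my part; the component names come from Dell's announcement, while the data layout and the diff helper are my own). It captures the essential point: the hardware bill of materials is the same, and only the software stack differs.

```python
# Illustrative sketch only: component names are taken from Dell's vStart 1000
# announcement; the dictionary layout and the diff helper are my own invention.

SHARED_HARDWARE = [
    "PowerEdge M620 servers",
    "PowerEdge R620 management servers",
    "Compellent Series 40 storage",
    "Force10 S4810 ToR networking",
    "Brocade 5100 ToR Fibre-Channel switches",
]

VSTART_1000 = {
    "vStart 1000m": SHARED_HARDWARE + ["Windows Server 2008 R2 Datacenter (Hyper-V)"],
    "vStart 1000v": SHARED_HARDWARE + ["VMware vCenter and vSphere trial editions (ESXi)"],
}

def diff(configs):
    """Return components common to every configuration and those unique to each."""
    sets = {name: set(parts) for name, parts in configs.items()}
    common = set.intersection(*sets.values())
    unique = {name: parts - common for name, parts in sets.items()}
    return common, unique

if __name__ == "__main__":
    common, unique = diff(VSTART_1000)
    print("Shared across both SKUs:", sorted(common))
    for name, parts in unique.items():
        print(f"{name} only:", sorted(parts))
```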

As an aside, it’s worth mentioning that Dell’s inclusion of Brocade’s Fibre-Channel switches confirms that Dell is keeping that partnership alive to satisfy customers’ FC requirements.

Full Value from Acquisitions

In summary, then, Dell is delivering converged infrastructure with both its in-house storage options, demonstrating that it has fully integrated its major hardware acquisitions into the mix. It’s covering as much converged ground as it can with this announcement.

Nonetheless, it’s fair to ask where Dell will find customers for its converged offerings. During my briefing with Dell, I was told that mid-market was the real sweet spot, though Dell also sees departmental opportunities in large enterprises.

The mid-market, though, is a smart choice, not only because the various technology pieces, individually and collectively, seem well suited to the purpose, but also because Dell, given its roots and lineage, is a natural player in that space. Dell has a strong mandate to contest the mid-market, where it can hold its own against any of its larger converged-infrastructure rivals.

Mid-Market Sweet Spot

What’s more, mid-market customers — unlike cloud-service providers today and some large enterprises in the not-too-distant future — are unlikely to have the inclination, resources, and skills to pursue a DIY, software-driven, DevOps-oriented variant of converged infrastructure that might involve bare-bones hardware from Asian ODMs. At the end of the day, converged infrastructure is sold as packaged hardware, and paying customers will need to perceive and realize value from buying the boxes.

The mid-market would seem more than receptive to the value proposition that Dell is selling, which is that its converged infrastructure will reduce the complexity of IT management and deliver operational cost savings.

This finally leads us to a discussion of Dell’s take on converged infrastructure. As noted in an eChannelLine article, Dell’s notion of converged infrastructure encompasses operations management, services management, and applications management. As Dell continues down the acquisition trail, we should expect the company to place greater emphasis on software-based intelligence in those areas.

That, too, would be a smart move. The battle never ends, but Dell — despite its struggles in the PC market — is now punching above its weight in converged infrastructure.

Fear Compels HP and Dell to Stick with PCs

For better or worse, Hewlett-Packard remains committed to the personal-computer business, neither selling off nor spinning off that unit in accordance with the wishes of its former CEO. At the same time, Dell is claiming that it is “not really a PC company,” even though it will continue to sell an abundance of PCs.

Why are these two vendors staying the course in a low-margin business? The popular theory is that participation in the PC business affords supply-chain benefits such as lower costs for components that can be leveraged across servers. There might be some truth to that, but not as much as you might think.

At the outset, let’s be clear about something: Neither HP nor Dell manufactures its own PCs. Manufacture of personal computers has been outsourced to electronics manufacturing services (EMS) companies and original design manufacturers (ODMs).

Growing Role of the ODM

The latter do a lot more than assemble and manufacture PCs. They also provide outsourced R&D and design for OEM PC vendors.  As such, perhaps the greatest amount of added value that a Dell or an HP brings to its PCs is represented by the name on the bezel (the brand) and the sales channels and customer-support services (which also can be outsourced) they provide.

Major PC vendors many years ago decided to transfer manufacturing to third-party companies in Taiwan and China. Subsequently, they also increasingly chose to outsource product design. As a result, ODMs design and manufacture PCs. Typically ODMs will propose various designs to the PC vendors and will then build the models the vendors select. The PC vendor’s role in the design process often comes down to choosing the models they want, sometimes with vendor-specified tweaks for customization and market differentiation.

In short, PC vendors such as HP and Dell don’t really make PCs at all. They rebrand them and sell them, but their involvement in the actual creation of the computers has diminished markedly.

Apple Bucks the Trend 

At this point, you might be asking: What about Apple? Simply put, unlike its PC brethren, Apple always has insisted on controlling and owning a greater proportion of the value-added ingredients of its products.

Unlike Dell and HP, for example, Apple has its own operating system for its computers, tablets, and smartphones. Also unlike Dell and HP, Apple did not assign hardware design to ODMs. In seeking cost savings from outsourced design and manufacture, HP and Dell sacrificed control over and ownership of their portable and desktop PCs. Apple wagered that it could deliver a premium, higher-cost product with a unique look and feel. It won the bet.

A Spurious Claim?

Getting back to HP, does it actually derive economies of scale for its server business from the purchase of PC components in the supply chain? It’s possible, but it seems unlikely. The ODMs with which HP contracts for design and manufacture of its PCs would get a much better deal on component costs than would HP, and it’s now standard practice for those ODMs to buy common components that can be used in the manufacture and assembly of products for all their brand-name OEM customers. It’s not clear to me what proportion of components in HP’s PCs are supplied and integrated by the ODMs, but I suspect the percentage is substantial.

On the whole, then, HP and Dell might be advancing a spurious argument about remaining in the PC business because it confers savings on the purchase of components that can be used in servers.

Diagnosing the Addiction

If so, then, why would HP and Dell remain in the PC game? Well, the answer is right there on the balance sheets of both companies. Despite attempts at diversification, and despite initiatives to transform into the next IBM, each company still has a revenue reliance on — perhaps even an addiction to — PCs.

According to calculations by Sterne Agee analyst Shaw Wu, about 70 to 75 percent of Dell revenue is connected to the sale of PCs. (Dell derived about 43 percent of its revenue directly from PCs in its most recent quarter.) In relative terms, HP’s revenue reliance on PCs is not as great — about 30 percent of direct revenue — but, when one considers the relationship between PCs and related peripherals, including printers, the company’s PC exposure is considerable.

If either company were to exit the PC business, shareholders would react adversely. The departure from the PC business would leave a gaping revenue hole that would not be easy to fill. Yes, relative margins and profitability should improve, but at the cost of much lower channel and revenue profiles. Then there is the question of whether a serious strategic realignment would actually be successful. There’s risk in letting go of a bird in hand for one that’s not sure to be caught in the bush.

ODMs Squeeze Servers, Too

Let’s put aside, at least for this post, the question of whether it’s good strategy for Dell and HP to place so much emphasis on their server businesses. We know that the server business faces high-end disruption from ODMs, which increasingly offer hardware directly to large customers such as cloud service providers, oil-and-gas firms,  and major government agencies. The OEM (or vanity) server vendors still have the vast majority of their enterprise customers as buyers, but it’s fair to wonder about the long-term viability of that market, too.

As ODMs take on more of the R&D and design associated with server-hardware production, they must question just how much value the vanity OEM vendors are bringing to customers. I think the customers and vendors themselves are asking the same questions, because we’re now seeing a concerted effort in the server space by vendors such as Dell and HP to differentiate “above the board” with software and system innovations.

Fear Petrifies

Can HP really become a dominant purveyor of software and services to enterprises and cloud service providers? Can Dell be successful as a major player in the data center? Both companies would like to think that they can achieve those objectives, but it remains to be seen whether they have the courage of their convictions. Would they bet the business on such strategic shifts?

Aye, there’s the rub. Each is holding onto a commoditized, low-margin PC business not because they like being there, but because they’re afraid of being somewhere else.

Can Dell Think Outside the Box?

Michael Dell has derived great pleasure from HP’s apparent decision to spin off its PC business. As he has been telling the Financial Times and others recently, Dell (the company) believes having a PC business will be a critical differentiator as it pulls together and offers complete IT solutions to enterprise, service-provider, and SMB customers.

Hardware Edge?

Here’s what Dell had to say to the Financial Times about his company’s hardware-based differentiation:

 “We are very distinct from some of our competitors. We believe the devices and the hardware still matter as part of the complete, end-to-end solution . . . . Think about the scale economies in our business. As a company spins off its PC business, it goes from one of the top buyers in the world of disk drives and processors and memory chips to not being one of the top five. And that raises the cost of making servers and storage products. Ultimately we believe that presents an enormous opportunity for us and you can be sure we are going to seize it.”

Well, perhaps. I don’t know the intimate details of Dell’s PC economies of scale or its server-business costs, nor do I know what HP’s server-business costs will be when (and if) it eventually spins off its PC business. What I do know, however, is that IBM doesn’t seem to have difficulty competing and selling servers as integral parts of its solutions portfolio; nor does Cisco seem severely handicapped as it grows its server business without a PC product line.

Consequences of Infatuation

I suspect there’s more to Dell’s attachment to PCs than pragmatic dollars-and-cents business logic. I think Michael Dell likes PCs, that he understands them and their business more than he understands the software or services market. If I am right in those assumptions, they don’t suggest that Dell necessarily is wrong to stay in the PC business or that it will fail in selling software and services.

Still, it’s a company mindset that could inhibit Dell’s transition to a world driven increasingly by the growing commercial influence of cloud-service providers, the consumerization of IT, the proliferation of mobile devices, and the value inherent in software that provides automation and intelligent management of “dumb” industry-standard hardware boxes.

To be clear, I am not arguing that the “PC is dead.” Obviously, the PC is not dead, nor is it on life support.

In citing market research suggesting that two billion of them will be sold in 2014, Michael Dell is right to argue that there’s still strong demand for PCs worldwide.  While tablets are great devices for the consumption of content and media, they are not ideal devices for creating content — such as writing anything longer than a brief email message, crafting a presentation, or working on a spreadsheet, among other things.  Although it’s possible many buyers of tablets don’t create or supply content, and therefore have no need for a keyboard-equipped PC, I tend to think there still is and will be a substantial market for devices that do more than facilitate the passive consumption of information and entertainment.

End . . . or Means to an End?

Notwithstanding the PC market’s relative health, the salient question here is whether HP or Dell can make any money from the business of purveying them. HP decided it wanted the PC’s wafer-thin margins off its books as it drives a faster transition to software and services, whereas Dell has decided that it can live with the low margins and the revenue infusion that accompanies them. In rationalizing that decision, Michael Dell has said that “software is great, but you have to run it on something.”

There’s no disputing that fact, obviously, but I do wonder whether Dell is philosophically disposed to think outside the box, figuratively and literally. Put another way, does Dell see hardware as a container or receptacle of primary value, or does it see it as a necessary, relatively low-value conduit through which higher-value software-based services will increasingly flow?

I could be wrong, but Michael Dell still seems to see the world through the prism of the box, whether it be a server or a PC.

For me, Dell’s decision to maintain his company’s presence in PCs is beside the point. What’s important is whether he understands where the greatest business value will reside in the years to come, and whether he and his company can remain focused enough to conceive and execute a strategy that will enable them to satisfy evolving customer requirements.

Intel-Microsoft Mobile Split All Business

In an announcement today, Google and Intel said they would work together to optimize future versions of the  Android operating system for smartphones and other mobile devices powered by Intel chips.

It makes good business sense.

Pursuit of Mobile Growth

Much has been made of alleged strains in the relationship between the progenitors of Wintel — Microsoft’s Windows operating system and Intel’s microprocessors — but business partnerships are not affairs of the heart; they’re always pragmatic and results oriented. In this case, each company is seeking growth and pursuing its respective interests.

I don’t believe there’s any malice between Intel and Microsoft. The two companies will combine on the desktop again in early 2012, when Microsoft’s Windows 8 reaches market on PCs powered by Intel’s chips as well as on systems running the ARM architecture.

Put simply, Intel must pursue growth in mobile markets and data centers. Microsoft must similarly find partners that advance its interests.  Where their interests converge, they’ll work together; where their interests diverge, they’ll go in other directions.

Just Business

In PCs, the Wintel tandem was and remains a powerful industry standard. In mobile devices, Intel is well behind ARM in processors, while Microsoft is well behind Google and Apple in mobile operating systems. It makes sense that Intel would want to align with a mobile industry leader in Google, and that Microsoft would want to do likewise with ARM. A combination of Microsoft and Intel in mobile computing would amount to two also-rans combining to form . . . well, two also-rans in mobile computing.

So, with Intel and Microsoft, as with all alliances in the technology industry, it’s always helpful to remember the words of Don Lucchesi in The Godfather: Part III: “It’s not personal, it’s just business.”

Limits to Consumerization of IT

At GigaOm, Derrick Harris is wondering about the limits of consumerization of IT for enterprise applications. It’s a subject that warrants consideration.

My take on consumerization of IT is that it makes sense, and probably is an unstoppable force, when it comes to the utilization of mobile hardware such as smartphones and tablets (the latter composed almost exclusively of iPads these days).

This is a mutually beneficial arrangement. Employees are happier, not to mention more productive and engaged, when using their own computing and communications devices. Employers benefit because they don’t have to buy and support mobile devices for their staff.  Both groups win.

Everybody Wins

Moreover, mobile device management (MDM) and mobile-security suites, together with various approaches to securing applications and data, mean that the security risks of allowing employees to bring their devices to work have been sharply mitigated. In relation to mobile devices, the organizational rewards of IT consumerization — greater employee productivity, engaged and involved employees, lower capital and operating expenditures — outweigh the security risks, which are being addressed by a growing number of management and security vendors who see a market opportunity in making the practice safer.

In other areas, though, the case in favor of IT consumerization is not as clear. In his piece, Harris questions whether VMware will be successful with a Dropbox-like application codenamed Project Octopus. He concludes that those already using Dropbox will be reluctant to swap it for an enterprise-sanctioned service that provides similar features, functionality, and benefits. He posits that consumers will want to control the applications and services they use, much as they determine which devices they bring to work.

Data and Applications: Different Proposition

However, the circumstances and the situations are different. As noted above, there’s diminishing risk for enterprise IT in allowing employees to bring their devices to work.  Dropbox, and consumer-oriented data-storage services in general, is an entirely different proposition.

Enterprises increasingly have found ways to protect sensitive corporate data residing on and being sent to and from mobile devices, but consumer-oriented products like Dropbox do an end run around secure information-management practices in the enterprise and can leave sensitive corporate information unduly exposed. The enterprise cost-benefit analysis for a third-party service like Dropbox shows risks outweighing potential rewards, and that sets up a dynamic where many corporate IT departments will mandate company-wide adoption of enterprise-class alternatives.

Just as I understand why corporate minders acceded to consumerization of IT in relation to mobile devices, I also fully appreciate why corporate IT will draw the line at certain types of consumer-oriented applications and information services.

Consumerization of IT is a real phenomenon, but it has its limits.

Clarity on HP’s PC Business

Hewlett-Packard continues to contemplate how it should divest its Personal Systems Group (PSG), a $40-billion business dedicated overwhelmingly to sales of personal computers.  Although HP hasn’t communicated as effectively as it should have done, current indications are that the company will spin off its PC business as a standalone entity rather than sell it to a third party.

That said, the situation remains fluid. HP might yet choose to sell the business, even though Todd Bradley, PSG chieftain, seems adamant that it should be a separate company that he should lead. HP hasn’t been consistent or predictable lately on mobile hardware or PCs, though, so nothing is carved in stone.

Not a PC Manufacturer

No matter what it decides to do, the media should be clearer on exactly what HP will be spinning off or selling. I’ve seen it misreported repeatedly that HP will be selling or spinning off its “PC manufacturing arm” or its “PC manufacturing business.”

That’s wrong. As knowledgeable observers know, HP doesn’t manufacture PCs. Increasingly, it doesn’t even design them in any meaningful way, which is more than partly why HP finds itself in the current dilemma of deciding whether to spin off or sell a wafer-thin-margin business.

HP’s PSG business brands, markets, and sells PCs. But — and this is important to note — it doesn’t manufacture them. The manufacturing of the PCs is done by original design manufacturers (ODMs), most of which originated in Taiwan but now have operations in China and many other countries. These ODMs provide a lot more than contract manufacturing; they also deliver increasingly sophisticated design services.

Brand is the Value

A dirty little secret your favorite PC vendor (Apple excluded) doesn’t want you to know is that it doesn’t really do any PC innovation these days. The PC-creation process today operates more along these lines: brand-name PC vendor goes to Taiwan to visit ODMs, which demonstrate a range of their latest personal-computing prototypes, from which the brand-name vendor chooses some designs and perhaps suggests some modifications. Then the products are put through the manufacturing process and ultimately reach market under the vendor’s brand.

That’s roughly how it works. HP doesn’t manufacture PCs, and it does scant PC design and innovation. If you think carefully about the value that is delivered in the PC-creation process, HP provides its brand, its marketing, and its sales channels. Its value, and hence its margins, depend on the premiums its brand can command and the volumes its channel can deliver. Essentially, an HP PC is no different from any other PC designed and manufactured by ODMs that provide PCs for the entire industry.

HP and others allowed ODMs to assume a greater share of PC value creation — far beyond simple manufacturing — because they were trying to cut costs. You might recall that cost cutting was  a prominent feature of the lean-and-mean Mark Hurd regime at HP. As a result, innovation suffered, and not just in PCs.

Inevitable Outcome

In that context, it’s important to note that HP’s divestment of its low-margin PC business, regardless of whether it’s sold outright or spun off as a standalone entity, has been a long time coming.

Considering the history and the decisions that were made, one could even say it was inevitable.

PC Market: Tired, Commoditized — But Not Dead

As Hewlett-Packard prepares to spin off or sell its PC business within the next 12 to 18 months, many have spoken about the “death of the PC.”

Talk of “Death” and “Killing”

Talk of metaphorical “death” and “killing” has been rampant in technology’s new media for the past couple of years. When observers aren’t noting that a product or technology is “dead,” they’re saying that an emergent product of one sort or another will “kill” a current market leader. It’s all exaggeration and melodrama, of course, and it’s not helpful. It lowers the discourse, and it makes the technology industry appear akin to professional wrestling with nerds. Nobody wants to see that.

Truth be told, the PC is not dead. It’s enervated, and its best days are behind it, but it’s still here. It has, however, become a commodity with paper-thin margins, and that’s why HP — more than six years after IBM set the precedent — is bailing on the PC market.

Commoditized markets are no place for thrill seekers or for CEOs of companies that desperately seek bigger profit margins. HP CEO Leo Apotheker, as a longtime software executive, must have viewed HP’s PC business, which still accounts for about 30 percent of the company’s revenues, with utter disdain when he first joined the company.

No Room for Margin

As  I wrote in this forum a while back, PC vendors these days have little room to add value (and hence margin) to the boxes they sell. It was bad enough when they were trying to make a living atop the microprocessors and operating systems of Intel and Microsoft, respectively. Now they also have to factor original design manufacturers (ODMs)  into the shrinking-margin equation.

It’s almost a dirty little secret, but the ODMs do a lot more than just manufacture PCs for the big brands, including HP and Dell. Many ODMs effectively have taken over hardware design and R&D from cost-cutting PC brands. Beyond a name on a bezel, and whatever brand equity that name carries, PC vendors aren’t adding much value to the box that ships.

For further background on how it came to this — and why HP’s exit from the PC market was inevitable — I direct you to my previous post on the subject, written more than a year ago. In that post, I quoted and referenced Stan Shih, Acer’s founder, who said that “U.S. computer brands may disappear over the next 20 years, just like what happened to U.S. television brands.”

Given the news this week, and mounting questions about Dell’s commitment to the low-margin PC business, Shih might want to give that forecast a sharp forward revision.

Is Li-Fi the Next Wi-Fi?

The New Scientist published a networking-related article last week that took me back to my early days in the industry.

The piece in question dealt with Visible Light Communication (VLC), a form of light-based networking in which data is encoded and transmitted by varying the rate at which LEDs flicker on and off, all at intervals imperceptible to the human eye.
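
To make the encoding idea concrete, here is a minimal on-off keying sketch in Python (my own illustration, not an actual VLC or IEEE 802.15.7 implementation): each bit maps to the LED being driven on or off for one fixed time slot, and the receiver recovers the bits by thresholding the light level it measures in each slot.

```python
# Illustrative on-off keying sketch: a '1' bit drives the LED on for one slot,
# a '0' bit drives it off. A real VLC system adds clock recovery, error coding,
# and far more sophisticated modulation than this toy example.

def encode(bits):
    """Map a bit string to per-slot LED drive levels (1.0 = on, 0.0 = off)."""
    return [1.0 if b == "1" else 0.0 for b in bits]

def decode(levels, threshold=0.5):
    """Recover bits by thresholding the light level measured in each slot."""
    return "".join("1" if level > threshold else "0" for level in levels)

if __name__ == "__main__":
    message = "10110010"
    received = encode(message)   # in practice: noisy photodiode samples
    assert decode(received) == message
    print(decode(received))
```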

Also called Li-Fi — yes, indeed, the marketers are involved already — VLC is being positioned for various applications, including those in hospitals, on aircraft, on trading floors, in automotive car-to-car and traffic-control scenarios, on trade-show floors, in military settings,  and perhaps even in movie theaters where VLC-based projection might improve the visual acuity of 3D films. (That last wacky one was just something that spun off the top of my shiny head.)

From FSO to VLC

Where I don’t see VLC playing a big role, certainly not as a replacement for Wi-Fi or its future RF-based successors, is in home networking. VLC’s requirement for line of sight will make it a non-starter for Wi-Fi scenarios where wireless networking must traverse floors, walls, and ceilings. There are other room-based applications for VLC in the home, though, and those might work if device (PC, tablet, mobile phone), display,  and lighting vendors get sufficiently behind the technology.

I feel relatively comfortable pronouncing an opinion on this technology. The idea of using light-based networking has been with us for some time, and I worked extensively with infrared and laser data-transmission technologies back in the early to mid 90s. Those were known as free-space optical (FSO) communications systems, and they fulfilled a range of niche applications, primarily in outdoor point-to-point settings. The vendor for which I worked provided systems for campus deployments at universities, hospitals, museums, military bases, and other environments where relatively high-speed connectivity was required but couldn’t be delivered by trenched fiber.

The technology mostly worked . . . except when it didn’t. Connectivity disruptions typically were caused by what I would term “transient environmental factors,” such as fog, heavy rain or snow, as well as dust and sand particulates. (We had some strange experiences with one or two desert deployments.) From what I can gather, the same parameters generally apply to VLC systems.

Will that be White, Red, or Resonant Cavity?

Then again, the performance of VLC systems goes well beyond what we were able to achieve with FSO in the 90s. Back then, laser-based free-space optics could deliver maximum bandwidth of OC-3 speeds (roughly 155Mbps), whereas the current high-end performance of VLC systems reaches transmission rates of 500Mbps. An article published earlier this year at theEngineer.com provides an overview of VLC performance capabilities:

 “The most basic form of white LEDs are made up of a bluish to ultraviolet LED surrounded by a yellow phosphor, which emits white light when stimulated. On average, these LEDs can achieve data rates of up to 40Mb/sec. Newer forms of LEDs, known as RGBs (red, green and blue), have three separate LEDs that, when lit at the same time, emit a light that is perceived to be white. As these involve no delay in stimulating a phosphor, data rates in RGBs can reach up to 100Mb/sec.

But it doesn’t stop there. Resonant-cavity LEDs (RCLEDs), which are similar to RGB LEDs and are fitted with reflectors for spectral clarity, can now work at even higher frequencies. Last year, Siemens and Berlin’s Heinrich Hertz Institute achieved a data-transfer rate of 500Mb/sec with a white LED, beating their earlier record of 200Mb/sec. As LED technology improves with each year, VLC is coming closer to reality and engineers are now turning their attention to its potential applications.”

I’ve addressed potential applications earlier in this post, but a sage observation is offered in theEngineer.com piece by Oxford University’s Dr. Dominic O’Brien, who sees applications falling into two broad buckets: those that “augment existing infrastructure,” and those in which  visible networking offers a performance or security advantage over conventional alternatives.

Will There Be Light?

Despite the merit and potential of VLC technology, its market is likely to be limited, analogous to the demand that developed for FSO offerings. One factor that has changed, and that could work in VLC’s favor, is RF spectrum scarcity. VLC could potentially help to conserve RF spectrum by providing much-needed bandwidth; but such a scenario would require more alignment and cooperation between government and industry than we’ve seen heretofore. Curb your enthusiasm accordingly.

The lighting and display industries have a vested interest in seeing VLC prosper. Examining the membership roster of the Visible Light Communications Consortium (VLCC), one finds it includes many of Japan’s big names in consumer electronics. Furthermore, in its continuous pursuit of new wireless technologies, Intel has taken at least a passing interest in VLC/Li-Fi.

If the vendor community positions it properly, standards cohere, and the market demands it, perhaps there will be at least some light.

Reviewing Dell’s Acquisition of Force10

Now seems a good time to review Dell’s announcement last week regarding its acquisition of Force10 Networks. We knew a deal was coming, and now that the move finally has been made, we can consider the implications.

It was big news on a couple fronts. First, it showcased Dell’s continued metamorphosis from being a PC vendor and box pusher into becoming a comprehensive provider of enterprise and cloud solutions. At the same time, and in a related vein, it gave Dell the sort of converged infrastructure that allows it to compete more effectively against Cisco, HP, and IBM.

The transaction price of Dell’s Force10 acquisition was not disclosed, but “people familiar with the matter” allege that Dell paid about $700 million to seal the deal. Another person apparently privy to what happened behind the scenes says that Dell considered buying Brocade before opting for Force10. That seems about right.

Rationale for Acquisition

As you’ll recall (or perhaps not), I listed Force10 as the second favorite, at 7-2, in my Dell Networking Derby, my attempt to forecast which networking company Dell would buy. Here’s what I said about the rationale for a Dell acquisition of Force10:

 “Dell partners with Force10 for Layer 3 backbone switches and for Layer 2 aggregation switches. Customers that have deployed Dell/Force10 networks include eHarmony, Salesforce.com, Yahoo, and F5 Networks.

Again, Michael Dell has expressed an interest in 10GbE and Force10 fits the bill. The company has struggled to break out of its relatively narrow HPC niche, placing increasing emphasis on its horizontal enterprise and data-center capabilities. Dell and Force10 have a history together and have deployed networks in real-word accounts. That could set the stage for a deepening of the relationship, presuming Force10 is realistic about its market valuation.”

While not a cheap buy, Force10 went for a lot less than an acquisition of Brocade, at a market capitalization of $2.83 billion, would have entailed. Of course, bigger acquisitions always are harder to integrate and assimilate than smaller ones. Dell has found a targeted acquisition model that seems to work, and a buy the size of Brocade would have been difficult for the company to digest culturally and operationally. In hindsight, which usually gives one a chance to be 100% correct, Dell made a safer play in opting for Force10.

IPO Plans Shelved

Although Force10 operates nominally in 60 countries worldwide, it derived 80 percent of its $200 million in revenue last year from US customers, primarily data-center implementations. Initially, at least, Dell will focus its sales efforts on cross-pollination between its and Force10’s customers in North America. It will expand from there.

Force10 has about 750 employees, most of whom work at its company headquarters in San Jose, California, and at a research facility in Chennai, India. Force10 doesn’t turn Dell into an overnight networking giant; the acquired vendor had just two percent market share in data-center networking during the first half of 2011, according to IDC. Numbers from Dell’Oro suggest that Force10 owned less than one percent of the overall Ethernet switch market.

Once upon a time, Force10 had wanted to fulfill its exit strategy via an IPO. Those plans obviously were not realized. The scuttlebutt on the street is that, prior to being acquired by Dell, Force10 had been slashing prices aggressively to maintain market share against bigger players.

Channel Considerations

Force10 has about 1,400 customers, getting half its revenue from direct sales and the other half from channel sales. Dell doesn’t see an immediate change in the sales mix.

Dell will work to avoid channel conflict, but I foresee an increasing shift toward direct sales, not only with Force10’s data-center networking gear, but also with any converged data-center-in-a-box offerings Dell might assemble.

Converged Infrastructure (AKA Integrated Solution Stack) 

Strategically, Dell and its major rivals are increasingly concerned with the provision of converged infrastructure, otherwise known as an integrated technology stack (servers, storage, networking, associated management and services) for data centers. The ultimate goal is to offer comprehensive automation of tightly integrated data-center infrastructure. These things probably will never run themselves — though one never knows — but there’s customer value (and vendor revenue) in pushing them as far along that continuum as possible.

For some time,  Dell has been on a targeted acquisition trail, assembling all the requisite pieces of the converged-infrastructure puzzle. Key acquisitions included Perot Systems for services, EqualLogic and Compellent for storage, Kace for systems management, and SecureWorks for security capabilities. At the same time, Dell has been constructing data centers worldwide to host cloud applications.

Dell’s converged-infrastructure strategy is called Virtual Network Services Architecture (VNSI), and the company claims Force10’s Open Cloud Networking (OCN) strategy, which stresses automation and virtualization based on open standards, is perfectly aligned with its plans. Dario Zamarian, VP and GM of Dell Networking, said last week that VNSI is predicated on three pillars: “managing from the edge,” where servers and storage are attached to the network; “flattening the network,” which is all the rage these days; and “scaling virtualization.”

For its part, Force10 has been promoting the concept of flatter and more scalable networks comprising its interconnected Z9000 switches in distributed data-center cores.

The Network OS Question

I don’t really see Dell worrying unduly about gaining greater direct involvement in wiring-closet switches. It has its own PowerConnect switches already, and it could probably equip those boxes to run Force10’s FTOS. It seems FTOS, which Dell is positioning as an open networking OS, could play a prominent role in Dell’s competitive positioning against Cisco, HP, Juniper, IBM, and perhaps even Huawei Symantec.

Then again, Dell’s customers might have a say in the matter. At least two big Dell customers, Facebook and Yahoo, are on the board of directors of the Open Networking Foundation (ONF), a nonprofit organization dedicated to promoting software-defined networking (SDN) using the OpenFlow protocol. Dell and Force10 are members of ONF.
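
For readers new to the model the ONF is promoting, the core idea of OpenFlow-style SDN is that a central controller installs match-action rules into switch flow tables, instead of each switch computing its forwarding behavior on its own. The sketch below is my own conceptual illustration in plain Python; it does not use the real OpenFlow wire protocol or any actual controller API, but it conveys the flavor of a flow-table lookup.

```python
# Conceptual sketch of a match-action flow table of the kind an SDN controller
# might program into a switch. Illustrative only: real OpenFlow defines binary
# messages, rule priorities, counters, timeouts, and many more match fields.

FLOW_TABLE = [
    # (match fields, action); earlier rules are treated as higher priority here
    ({"dst_ip": "10.0.0.5", "tcp_dst": 80}, "forward:port3"),
    ({"dst_ip": "10.0.0.5"},                "forward:port2"),
    ({},                                    "send_to_controller"),  # table-miss rule
]

def lookup(packet):
    """Return the action of the first rule whose match fields all agree with the packet."""
    for match, action in FLOW_TABLE:
        if all(packet.get(field) == value for field, value in match.items()):
            return action
    return "drop"

if __name__ == "__main__":
    print(lookup({"dst_ip": "10.0.0.5", "tcp_dst": 80}))   # forward:port3
    print(lookup({"dst_ip": "10.0.0.5", "tcp_dst": 22}))   # forward:port2
    print(lookup({"dst_ip": "10.0.0.9", "tcp_dst": 443}))  # send_to_controller
```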

It’s possible that Dell and Force10 might look to keep those big customers, and pursue others within the ONF’s orbit, by fully embracing OpenFlow. The ONF’s current customer membership is skewed toward high-performance computing and massive cloud environments, both of which seem destined to be aggressive early adopters of SDN and, by extension, the OpenFlow protocol.  (I won’t go into my thoughts on OpenFlow here — I’ve already written a veritable tome in this missive — but I will cover it in a forthcoming post.)

Notwithstanding its membership in the Open Networking Foundation, Force10 is perceived as relatively bearish on OpenFlow. Earlier this year, Arpit Joshipura, Force10’s chief marketing officer, indicated his company would wait for OpenFlow to mature and become more scalable before offering it on its switches. He said “big network users” — presumably including major cloud providers — are more interested in OpenFlow today than are enterprise customers. Then again, the cloud ultimately is one of the destinations where Dell wants to go.

Still, Dell and Force10 might see whether FTOS can fit the bill, at least for now. As Cindy Borovick, research vice president for IDC’s enterprise communications and data center networks, has suggested, Dell could see Force10’s FTOS as something that can be easily customized for a wide range of deployment environments. Dell could adapt FTOS to deliver prepackaged products to customers, which then could further customize the network OS depending on their particular requirements.

It’ll be interesting to see how Dell proceeds with FTOS and with OpenFlow.

Implications for Others

You can be sure that Dell’s acquisition of Force10 will have significant implications for its OEM partners, namely Juniper Networks and Brocade Communications. From what I have heard, not much has developed commercially from Dell’s rebranding of Juniper switches, so any damage to Juniper figures to be relatively modest.

It’s Brocade that appears destined to suffer a more meaningful hit. Sure, Dell will continue to carry and sell its Fiber Channel SAN switches, but it won’t be offering Brocade’s Foundry-derived Ethernet switches, and one would have to think that the relationship, even on the Fiber Channel front, has seen its best days.

As for whether Dell will pursue other networking acquisitions in the near term, I seriously doubt it. Zeus Kerravala advises Dell to buy Extreme Networks, but I don’t see the point. As mentioned earlier, Dell already has its PowerConnect line, and the margins are in the data center, not out in the wiring closets. Besides, as Dario Zamarian has noted, data-center networking is expected to grow at a compound annual growth rate of 21 percent through 2015, much faster than the three-percent growth forecast for the rest of the industry.

The old Dell would have single-mindedly chased the network box volumes, but the new Dell aspires to something grander.

HP’s TouchPad: Ground to Make Up, but Still in Race

After I wrote my last post about the limited commercial horizons of Cisco’s Cius tablet, I was asked to comment on the prospects for HP’s webOS-based TouchPad.

A Tale of Two Tablets

Like Cisco’s Cius, the TouchPad made its market debut this month, a few weeks ahead of its Cisco counterpart. The two tablets also have an enterprise orientation in common. Moreover, like Cisco’s Cius, the TouchPad was greeted with ambivalent early reviews. Actually, I suppose the early reviews for the TouchPad, while not glowing, were warmer than the tepid-to-icy responses occasioned by Cisco’s Cius.

There are other differences between the two tablets. For one, HP’s TouchPad sports its own mobile operating system, whereas Cisco has chosen to ride Google’s Android. There’s nothing wrong with Cisco’s choice, per se, but HP, in buying Palm and its webOS, has a deeper commitment to making its mobile-device strategy work.

As we’ve learned, Cisco is casting the Cius as an entry point — just one more conduit and access device — to its collaboration ecosystem as represented by the likes of WebEx and its Telepresence offerings.

Different Aspirations and Objectives

Put another way, HP clearly sees itself as a player in the tablet wars, while, for Cisco, tablets are incidental, a tactical means to a strategic end, represented by greater adoption of bandwidth-sucking collaboration suites and videoconferencing systems by enterprises worldwide. Consequently, it would come as no surprise to see Cisco bail on the tablet market before the end of this year, but it would come as a genuine shock if HP threw in the towel on webOS (and its associated devices) during the same timeframe.

That won’t happen, of course. HP believes it can carve out a niche for itself as a mobile-device purveyor for enterprise customers. To accomplish that goal, HP will port webOS to PCs and printers as well as to a growing family of tablets and smartphones. It also will license webOS to other vendors of tablets and smartphones — and perhaps to other vendors of PCs, too, presuming such demand materializes. Cisco doesn’t have an OS in the mobile race, so it doesn’t have those sorts of aspirations.

Multiple Devices, Bundling, and Services

Another difference is that HP actually knows how to make money selling client devices with more than a modicum of consumer appeal. That’s still uncharted territory for Cisco. In a period in which “consumerization of IT” is much more than a buzz phrase, it helps that HP has some consumer chops, just as it hurts that Cisco does not. Presuming that HP can generate demand from end users — maybe that’s why it is using the decidedly non-corporate Russell Brand as its TouchPad pitchman — it can then use bundling of webOS-based tablets, smartphones, printers, and PCs to captivate enterprise IT departments.

To top it all off, HP can wrap up the whole package with extensive consulting and integration services.

I’m not saying HP is destined for greatness in the tablet derby — the company will have to persevere and work hard to address perceived weaknesses and to amass application support from the developer community — but I’d wager that HP is better constituted than Cisco to stay the course.

Pondering Intel’s Grand Design for McAfee

Befuddlement and buzz jointly greeted Intel’s announcement today regarding its pending acquisition of security-software vendor McAfee for $7.68 billion in cash.

Intel was not among the vendors I expected to take an acquisitive run at McAfee. It appears I was not alone in that line of thinking, because the widespread reaction to the news today involved equal measures of incredulity and confusion. That was partly because Intel was McAfee’s buyer, of course, but also because Intel had agreed to pay such a rich premium, $48 per McAfee share, 60 percent above McAfee’s closing price of $29.93 on Wednesday.

What was Intel Thinking?

That Intel paid such a price tells us a couple things. First, that Intel really felt it had to make this acquisition; and, second, that Intel probably had competition for the deal. Who that competition might have been is anybody’s guess, but check my earlier posts on potential McAfee acquirers for a list of suspects.

One question that came to many observers’ minds today was a simple one: What the hell was Intel thinking? Put another way, just what does Intel hope to derive from ownership of McAfee that it couldn’t have gotten from a less-expensive partnership with the company?

Many attempting to answer this question have pointed to smartphones and other mobile devices, such as slates and tablets, as the true motivations for Intel’s purchase of McAfee. There’s a certain logic to that line of thinking, to the idea that Intel would want to embed as much of McAfee’s security software as possible into chips that it heretofore has had a difficult time selling to mobile-device vendors, who instead have gravitated to  designs from ARM.

Embedded M2M Applications

In the big picture, that’s part of Intel’s plan, no doubt. But I also think other motivations were at play.  An important market for Intel, for instance, is the machine-to-machine (M2M) space.

That M2M space is where nearly everything that can be assigned an IP address and managed or monitored remotely — from devices attached to the smart grid (smart meters, hardened switches in substations, power-distribution gear) to medical equipment, to building-control systems, to televisions and set-top boxes  — is being connected to a communications network. As Intel’s customers sell systems into those markets, downstream buyers have expressed concerns about potential security vulnerabilities. Intel could help its embedded-systems customers ship more units and generate more revenue for Intel by assuaging the security fears of downstream buyers.

Still, that roadmap, if it exists, will take years to reach fruition. In the meantime, Intel will be left with slideware and a necessarily loose coupling of its microprocessors with McAfee’s security software. As Nathan Brookwood, principal analyst at Insight 64, suggested, Intel could start off by designing its hardware to work better with McAfee software, but it’s likely to take a few years, and new processor product cycles, for McAfee technology to get fully baked into Intel’s chips.

Will Take Time

So, for a while, Intel won’t be able to fully realize the value of McAfee as an asset. What’s more, there are parts of McAfee that probably don’t fit into Intel’s chip-centric view of the world. I’m not sure, for example, what this transaction portends for McAfee’s line of Internet-security products obtained through its acquisition of Secure Computing. Given that McAfee will find its new home inside Intel’s Software and Service division, as Richard Stiennon notes, the prospects for the Secure Computing product line aren’t bright.

I know Intel wouldn’t do this deal just because it flipped a coin or lost a bet, but Intel has a spotty track record, at best, when it comes to M&A activity. Media observers sometimes assume that technology executives are like masters of the universe, omniscient beings with superior intellects and brilliant strategic designs. That’s rarely true, though. Usually, they’re just better-paid, reasonably intelligent human beings, doing their best, with limited information and through hazy visibility, to make the right business decisions. They make mistakes, sometimes big ones.

M&A Road Full of Potholes

Don’t take it from me; consult the business-school professors. A Wharton course on mergers and acquisitions spotlights this quote from Robert W. Holthausen, Nomura Securities Company Professor, Professor of Accounting and Finance and Management:

“Various studies have shown that mergers have failure rates of more than 50 percent. One recent study found that 83 percent of all mergers fail to create value and half actually destroy value. This is an abysmal record. What is particularly amazing is that in polling the boards of the companies involved in those same mergers, over 80 percent of the board members thought their acquisitions had created value.”

I suppose what I’m trying to say is that just because Intel thinks it has a plan for McAfee, that doesn’t mean the plan is a good one or, even presuming it is a good plan, that it will be executed successfully. There are many potholes and unwanted detours along M&A road.