Category Archives: AMD

HP Launches Its Moonshot Amid Changing Industry Dynamics

As I read about HP’s new Project Moonshot, which was covered extensively by the trade press, I wondered about the vendor’s strategic end game. Where is it going with this technology initiative, and does it have a realistic likelihood of meeting its objectives?

Those questions led me to consider how drastically the complexion of the IT industry has changed as cloud computing takes hold. Everything is in flux, advancing toward an ultimate galactic configuration that, in many respects, will be far different from what we’ve known previously.

What’s the Destination?

It seems to me that Project Moonshot, with its emphasis on a power-sipping and space-saving server architecture for web-scale processing, represents an effort by HP to re-establish a reputation for innovation and thought leadership in a burgeoning new market. But what, exactly, is the market HP has in mind?

Contrary to some of what I’ve seen written on the subject, HP doesn’t really have a serious chance of using this technology to wrest meaningful patronage from the behemoths of the cloud service-provider world. Google won’t be queuing up for these ARM-based, Calxeda-designed, HP-branded “micro servers.” Nor will Facebook or Microsoft. Amazon or Yahoo probably won’t be in the market for them, either.

The biggest of the big cloud providers are heading in a different direction, as evidenced by their aggressive patronage of open-source hardware initiatives that, when one really thinks about it, are designed to reduce their dependence on traditional vendors of server, storage, and networking hardware. They’re breaking that dependence — in some ways, they see it as taking back their data centers — for a variety of reasons, but their behavior is invariably motivated by their desire to significantly reduce operating expenditures on data-center infrastructure while freeing themselves to innovate on the fly.

When Customers Become Competitors

We’ve reached an inflection point where the largest cloud players — the Googles, the Facebooks, the Amazons, some of the major carriers who have given thought to such matters — have figured out that they can build their own hardware infrastructure, or order it off the shelf from ODMs, and get it to do everything they need it to do (they have relatively few revenue-generating applications to consider) at a lower operating cost than if they kept buying relatively feature-laden, more-expensive gear from hardware vendors.

As one might imagine, this represents a major business concern for the likes of HP, as well as for Cisco and others who’ve built a considerable business selling hardware at sustainable margins to customers in those markets. An added concern is that enterprise customers, starting with many SMBs, have begun transitioning their application workloads to cloud-service providers. The vendor problem, then, is not only that the cloud market is growing, but also that segments of the enterprise market are at risk.

Attempt to Reset Technology Agenda

The vendors recognize the problem, and they’re doing what they can to adapt to changing circumstances. If the biggest web-scale cloud providers are moving away from reliance on them, then hardware vendors must find buyers elsewhere. Scores of cloud service providers are not as big, or as specialized, or as resourceful as Google, Facebook, or Microsoft. Those companies might be considering the paths their bigger brethren have forged, with initiatives such as the Open Compute Project and OpenFlow (for computing and networking infrastructure, respectively), but perhaps they’re not entirely sold on those models or don’t think they’re quite right for their requirements just yet.

This represents an opportunity for vendors such as HP to reset the technology agenda, at least for these sorts of customers. Hence, Project Moonshot, which, while clearly ambitious, remains a work in progress consisting of the Redstone Server Development Platform, an HP Discovery Lab (the first one is in Houston), and HP Pathfinder, a program designed to create open standards and third-party technology support for the overall effort.

I’m not sure I understand who will buy the initial batch of HP’s “extreme low-power servers” based on Calxeda’s EnergyCore ARM server-on-a-chip processors. As I said before, and as an article at Ars Technica explains, those buyers are unlikely to be the masters of the cloud universe, for both technological and business reasons. For now, buyers might not even come from the constituency of smaller cloud providers.

Friends Become Foes, Foes Become Friends (Sort Of)

But HP is positioning itself for that market and to be involved in buying decisions relating to energy-efficient system architectures. Its Project Moonshot also will embrace energy-efficient microprocessors from Intel and AMD.

Incidentally, what’s most interesting here is not that HP adopted an ARM-based chip architecture before opting for an Intel server chipset — though that does warrant notice — but that Project Moonshot has been devised not so much to compete against other server vendors as to provide a rejoinder to an open-computing model advanced by Facebook and others.

Just a short time ago, industry dynamics were relatively easy to discern. Hardware and system vendors competed against one another for the patronage of service providers and enterprises. Now, as cloud computing grows and its business model gains ascendance, hardware vendors also find themselves competing against a new threat represented by mammoth cloud service providers and their cost-saving DIY ethos.


OVA Members Hope to Close Ground

I discussed the fast-growing Open Virtualization Alliance (OVA) in a recent post about its primary objective, which is to commoditize VMware’s daunting market advantage. In catching up on my reading, I came across an excellent piece by InformationWeek’s Charles Babcock that puts the emergence of OVA into historical perspective.

As Babcock writes, the KVM-centric OVA might not have come into existence at all if an earlier alliance supporting another open-source hypervisor hadn’t foundered first. Quoting Babcock regarding OVA’s vanguard members:

Hewlett-Packard, IBM, Intel, AMD, Red Hat, SUSE, BMC, and CA Technologies are examples of the muscle supporting the alliance. As a matter of fact, the first five used to be big backers of the open source Xen hypervisor and Xen development project. Throw in the fact Novell was an early backer of Xen as the owner of SUSE, and you have six of the same suspects. What happened to support for Xen? For one, the company behind the project, XenSource, got acquired by Citrix. That took Xen out of the strictly open source camp and moved it several steps closer to the Microsoft camp, since Citrix and Microsoft have been close partners for over 20 years.

Xen is still open source code, but its backers found reasons (faster than you can say vMotion) to move on. The Open Virtualization Alliance still shares one thing in common with the Xen open source project. Both groups wish to slow VMware’s rapid advance.

Wary Eyes

Indeed, that is the goal. Most of the industry, with the notable exception of VMware’s parent EMC, is casting a wary eye at the virtualization juggernaut, wondering how far and wide its ambitions will extend and how they will impact the market.

As Babcock points out, however, by moving in mid race from one hypervisor horse (Xen) to another (KVM), the big backers of open-source virtualization might have surrendered insurmountable ground to VMware, and perhaps even to Microsoft. Much will depend on whether VMware abuses its market dominance, and whether Microsoft is successful with its mid-market virtualization push into its still-considerable Windows installed base.

Long Way to Go

Last but perhaps not least, KVM and the Open Virtualization Alliance (OVA) will have a say in the outcome. If OVA members wish to succeed, they’ll not only have to work exceptionally hard, but they’ll also have to work closely together.

Coming from behind is never easy, and, as Babcock contends, just trying to ride Linux’s coattails will not be enough. KVM will have to continue to define its own value proposition, and it will need all the marketing and technological support its marquee backers can deliver. One area of particular importance is operations management in the data center.

KVM’s market share, as reported by Gartner earlier this year, was less than one percent in server virtualization. It has a long way to go before it causes VMware’s executives any sleepless nights. That it wasn’t the first choice of its proponents, and that it has lost so much time and ground, doesn’t help the cause.

Will Microsoft Get Into the PC Hardware Business?

In his commentary last Friday, John C. Dvorak wondered whether Microsoft might be about to enter the PC hardware market in the United States.

Dvorak builds a superficially plausible case, citing a Microsoft-branded PC available in India and the Redmond company’s growing range of hardware products, now extending from PC peripherals, such as mice and keyboards, to Xbox 360 consoles and the Zune media player.

He also argues that Microsoft might be getting frustrated with its existing hardware OEM partners, including Dell and HP. Those vendors sell Linux-based systems as well as Windows Vista offerings. Moreover, Dvorak contends that the hardware vendors have been primarily to blame for many of the mechanical glitches that have prevented Vista from being an unalloyed success in the marketplace.

Dvorak isn’t completely adrift. I’m sure some of the big brains on the Microsoft campus have carefully pondered the pros and cons of following Apple’s lead and getting into the business of selling the entire PC experience.

Still, from a Microsoft perspective, the costs of such a strategy would seem to outweigh the benefits. First, selling PC hardware isn’t exactly a high-margin business. In fact, it’s getting harder to make money at it all the time, as a piece in this weekend’s Wall Street Journal attests.

What’s more, Microsoft doesn’t have the hardware brand that Apple possesses. Apple is renowned for its elegant, stylish hardware. Microsoft is known for serviceable mice and keyboards, a box-office bomb in the ungainly form of Zune, and Xbox and Xbox 360 consoles that have been riddled with design and manufacturing faults. If, as Dvorak suggests, the Xbox has been a trial run toward a Microsoft-branded PC, the experience has provided at least as much reason for prudent pause as for an enthusiastic leap into a new frontier.

There’s also the inherent risk that Microsoft, in choosing to produce its own PCs, would push its existing business partners firmly into the arms of Linux distributors. Despite Apple’s recent market-share gains, Microsoft still owns the vast majority of the client operating-system marketplace. In choosing to make its own PCs, Microsoft would likely lose more market share than it would gain, making low-cost Linux PCs more attractive to entry-level buyers while failing to take share from Apple’s elegant products.

Besides, at the end of the day, Microsoft should know it must focus on where it can deliver the best returns for its shareholders. Is that really in the consumer market, selling low-margin PCs into an operating-system space it already dominates? Isn’t it obvious to nearly everybody by now that Microsoft is better at serving businesses and enterprises than at serving capricious consumers?

Microsoft’s focus ought to be on expanding its footprint in enterprise software rather than on trying to beat Apple and its own current hardware partners in the PC market. The risks of such an endeavor clearly outweigh any likely rewards.

Intel Closing Cambridge Research Lab

InfoWorld reports that Intel will close its research laboratory in Cambridge, England, before the end of this year.

It’s yet another sign that Intel’s focus and resources are being placed on practical, tactical, near-term objectives, such as revenue realization and profitability, rather than on long-term strategic research initiatives.

The Cambridge lab is one of four that Intel runs in collaboration with universities to work on long-term projects, and the only such facility outside the U.S. Researchers in Cambridge worked on wireless and optical-networking projects and on technology related to distributed applications.

Intel’s three other university labs are at the University of California at Berkeley, Carnegie Mellon University in Pittsburgh, and the University of Washington in Seattle. It is not known whether any of those research facilities are in danger of being shut down as part of Intel’s cost-cutting and restructuring zeal.

Given Intel’s struggles in recent years and its suddenly heated competitive battle against AMD in the microprocessor market, it is understandable that the company would want to get its business priorities in order.

Still, Intel has to be careful not to go too far in cutting back on forward-looking research initiatives. Some of those projects might have the potential to give Intel technologies that would help it regain a competitive edge against AMD.

AMD Completes, Further Explains ATI Acquisition

AMD announced today that it has completed its $5.4-billion acquisition of Canada’s ATI Technologies.

In welcoming ATI formally into the AMD fold, the chipmaker provided further insight into how ATI’s chipsets and graphics processors will be integrated into AMD’s microprocessors.

AMD announced a project code-named Fusion that will merge a PC processor with an integrated graphics processor core by 2008 or 2009. AMD believes, not unreasonably, that graphics processors will become more critical to PCs as games and multimedia applications become more advanced.

Although I understand the rationale for AMD’s acquisition of ATI, I still think the price it paid was steep, necessitating a loan of $2.5 billion in cash from Morgan Stanley Senior Lending.

Intel Prepares to Slash at Least 10,000 from Payroll; Analysts Want More

Apparently having concluded an internal efficiency review launched in April, Intel CEO Paul Otellini and his executive team are reputed to be on the cusp of announcing cuts of at least 10,000 employees, according to reports at CNET News.com and in the Wall Street Journal (subscription required).

The layoffs, which reports say will come Tuesday after the close of market trading, are not unexpected.

Intel announced in July that it would dump 1,000 managers, and it has sold two communications-chip businesses in recent months. As it has lost ground to AMD, particularly in the higher-margin server space, Intel increasingly has been pushing for greater efficiency, focus, and management accountability.

One might think a payroll reduction of 10,000 employees — about 10 percent of Intel’s workforce — would be more than enough to satisfy the ghouls on Wall Street, who typically love this sort of thing because of what it does (over time) for the bottom line, but apparently the analysts want more blood running through the cubicle corridors of Intel.

Said David Wu of Global Crown Capital:

It would be seen as lame if Intel does less than 10,000.

Added Doug Freedman, an analyst at American Technology Research:

Ten thousand would be at the low end of everyone’s expectations.

Meanwhile, Mark Edelstone of Morgan Stanley, quoted in the piece that appears in the Wall Street Journal, said staff reductions at Intel could range as high as 15,000 to 20,000, though he contends those cuts would include future sales of business units (how many more can Intel sell?). Edelstone asserts that headcount at Intel “just ballooned, almost out of control” in 2004, and that it needs to be rolled back significantly.

The impending job cuts are likely to take an inordinately large chunk out of Intel’s marketing staff. After doing studies comparing its staffing levels to those of its competitors, primarily AMD, Intel concluded that its ratio of marketing personnel to salespeople was too high, according to sources cited in the CNET report.

That’s all well and good, but cutting staff will not cure Intel’s competitive torpor. Oh, it will help lower operating costs, no question, but it cannot and will not, in and of itself, address the relative lack of aggressiveness, creativity, and innovation that has afflicted Intel in recent years.

I’m no bean counter, nor do I play one on television, but perhaps Intel needs better C-level executives at least as much as it needs fewer employees.

Dell Ends Week of Woe, but What Has It Learned?

Dell Inc. must be pleased that this week has come to an end. As the Associated Press reports, Dell’s woeful week included a record-setting recall of notebook batteries, the disclosure of a federal accounting probe, and a 51-percent decline in second-quarter profit.

There also was a lingering controversy over the poorly managed release of a laptop model in China that was powered by a microprocessor other than the one listed in Dell’s product literature.

In a bid to show it is getting with the program, at least belatedly, Dell announced that it would increase its use of microprocessors from AMD.

Until earlier this year, Dell had been exclusively an Intel shop. Its first tentative step toward AMD came when it released a four-way, Opteron-based server for high-performance technical-computing applications. Now it will embrace AMD microprocessors across the board, using them to power a range of desktop and laptop personal computers as well as two-way servers.

Dell will start selling its consumer-oriented Dimension desktops with AMD processors next month. Other models of PCs, including notebooks, will be powered by AMD chips before the end of the year. Opteron-based servers also are expected to reach market by year’s end.

According to a report on CNET’s News.com site yesterday, citing a Bank of America market analyst, Dell has ordered between 1 million and 1.2 million AMD-powered desktop computers and about 800,000 notebooks.

Calculations by the Bank of America analyst suggest that AMD will capture about 15 percent to 16 percent of Dell’s desktop business and approximately 18 percent to 19 percent of its notebook business. That’s a lot of business, and you can be sure Intel, already in the midst of a restructuring effort, won’t be happy about it.
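Taken together with the reported order volumes, those percentages imply a back-of-envelope estimate of Dell’s overall shipment base: roughly 6.25 to 8 million desktops and about 4.2 to 4.4 million notebooks over whatever period the analyst had in mind (the report doesn’t specify the time frame). The short Python sketch below simply divides the reported AMD order volumes by the estimated share percentages; the variable names and the pairing of low and high figures are mine, purely for illustration.

# Back-of-envelope: implied Dell shipment base, derived from the reported
# AMD order volumes and the Bank of America analyst's share estimates.
desktop_orders = (1_000_000, 1_200_000)    # AMD-powered desktops reportedly ordered
desktop_share = (0.15, 0.16)               # estimated share of Dell's desktop business
notebook_orders = (800_000, 800_000)       # AMD-powered notebooks reportedly ordered
notebook_share = (0.18, 0.19)              # estimated share of Dell's notebook business

# Implied total shipments = AMD units / AMD share
desktop_low = desktop_orders[0] / desktop_share[1]      # ~6.25 million
desktop_high = desktop_orders[1] / desktop_share[0]     # ~8.0 million
notebook_low = notebook_orders[0] / notebook_share[1]   # ~4.2 million
notebook_high = notebook_orders[1] / notebook_share[0]  # ~4.4 million

print(f"Implied desktop base: {desktop_low:,.0f} to {desktop_high:,.0f} units")
print(f"Implied notebook base: {notebook_low:,.0f} to {notebook_high:,.0f} units")

However the analyst sliced the numbers, the point stands: this is a material chunk of Dell’s volume, and every unit of it comes out of Intel’s column.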

At least the AMD news demonstrates that Dell is trying to climb out of the gaping hole it dug for itself. Make no mistake, it is a deep hole, and Dell didn’t dig it overnight. It took a lot of misguided shoveling to produce a crater this vast, and Dell now faces a brand crisis that could hobble its image with consumers and enterprise buyers for a long time to come.

There’s no quick fix to the problems Dell faces. None of these debacles that burst into the headlines this week was a result of chance or of fate. Each one resulted from Dell’s institutionalized arrogance, neglect, and monomaniacal preoccupation with internal processes over all else.

Shifting its mix of microprocessors more toward AMD responds to a symptom of Dell’s chronic problems, but what about diagnosing and treating the disease?

Before it can get well, Dell must admit it has a problem. The problem is one of corporate dogma and ideology. Dell’s success planted the seeds of its decline. It began to view its business-process formula — comprising direct sales, build-to-order PCs, and a low-cost supply chain — as an end rather than a means to an end.

It got religious about its business model — and business and religion don’t mix, unless you are a televangelist. To paraphrase the late Warren Zevon, what worked for you before might not work for you now.

At Dell, there now should be no sacred cows. Pragmatic executives, prepared to look at everything Dell does with critical detachment, are required. Can the current executive cast at Dell do that job? Are they capable of asking the tough questions and coming up with the right answers, even ones that are anathema to Dell’s previous practices?

That’s the challenge, and it will be fascinating to see whether Dell can remake itself before its decline becomes irreversible.