Monthly Archives: February 2012

Fear Compels HP and Dell to Stick with PCs

For better or worse, Hewlett-Packard remains committed to the personal-computer business, neither selling off nor spinning off that unit in accordance with the wishes of its former CEO. At the same time, Dell is claiming that it is “not really a PC company,” even though it will continue to sell an abundance of PCs.

Why are these two vendors staying the course in a low-margin business? The popular theory is that participation in the PC business affords supply-chain benefits such as lower costs for components that can be leveraged across servers. There might be some truth to that, but not as much as you might think.

At the outset, let’s be clear about something: Neither HP nor Dell manufactures its own PCs. Manufacture of personal computers has been outsourced to electronics manufacturing services (EMS) companies and original design manufacturers (ODMs).

Growing Role of the ODM

The latter do a lot more than assemble and manufacture PCs. They also provide outsourced R&D and design for OEM PC vendors.  As such, perhaps the greatest amount of added value that a Dell or an HP brings to its PCs is represented by the name on the bezel (the brand) and the sales channels and customer-support services (which also can be outsourced) they provide.

Major PC vendors many years ago decided to transfer manufacturing to third-party companies in Taiwan and China. Subsequently, they also increasingly chose to outsource product design. As a result, ODMs design and manufacture PCs. Typically ODMs will propose various designs to the PC vendors and will then build the models the vendors select. The PC vendor’s role in the design process often comes down to choosing the models they want, sometimes with vendor-specified tweaks for customization and market differentiation.

In short, PC vendors such as HP and Dell don’t really make PCs at all. They rebrand them and sell them, but their involvement in the actual creation of the computers has diminished markedly.

Apple Bucks the Trend 

At this point, you might be asking: What about Apple? Simply put, unlike its PC brethren, Apple always has insisted on controlling and owning a greater proportion of the value-added ingredients of its products.

Unlike Dell and HP, for example, Apple has its own operating system for its computers, tablets, and smartphones. Also unlike Dell and HP, Apple did not assign hardware design to ODMs. In seeking cost savings from outsourced design and manufacture, HP and Dell sacrificed control over and ownership of their portable and desktop PCs. Apple wagered that it could deliver a premium, higher-cost product with a unique look and feel. It won the bet.

A Spurious Claim?

Getting back to HP, does it actually derive economies of scale for its server business from the purchase of PC components in the supply chain? It’s possible, but it seems unlikely. The ODMs with which HP contracts for design and manufacture of its PCs would get a much better deal on component costs than would HP, and it’s now standard practice for those ODMs to buy common components that can be used in the manufacture and assembly of products for all their brand-name OEM customers. It’s not clear to me what proportion of components in HP’s PCs are supplied and integrated by the ODMs, but I suspect the percentage is substantial.

On the whole, then, HP and Dell might be advancing a spurious argument about remaining in the PC business because it confers savings on the purchase of components that can be used in servers.

Diagnosing the Addiction

If so, then, why would HP and Dell remain in the PC game? Well, the answer is right there on the balance sheets of both companies. Despite attempts at diversification, and despite initiatives to transform into the next IBM, each company still has a revenue reliance on — perhaps even an addiction to — PCs.

According to calculations by Sterne Agee analyst Shaw Wu, about 70 to 75 percent of Dell revenue is connected to the sale of PCs. (Dell derived about 43 percent of its revenue directly from PCs in its most recent quarter.) In relative terms, HP’s revenue reliance on PCs is not as great — about 30 percent of direct revenue — but, when one considers the relationship between PCs and related peripherals, including printers, the company’s PC exposure is considerable.

If either company were to exit the PC business, shareholders would react adversely. The departure from the PC business would leave a gaping revenue hole that would not be easy to fill. Yes, relative margins and profitability should improve, but at the cost of much lower channel and revenue profiles. Then there is the question of whether a serious strategic realignment would actually be successful. There’s risk in letting go of a bird in hand for one that’s not sure to be caught in the bush.

ODMs Squeeze Servers, Too

Let’s put aside, at least for this post, the question of whether it’s good strategy for Dell and HP to place so much emphasis on their server businesses. We know that the server business faces high-end disruption from ODMs, which increasingly offer hardware directly to large customers such as cloud service providers, oil-and-gas firms,  and major government agencies. The OEM (or vanity) server vendors still have the vast majority of their enterprise customers as buyers, but it’s fair to wonder about the long-term viability of that market, too.

As ODMs take on more of the R&D and design associated with server-hardware production, they must question just how much value the vanity OEM vendors are bringing to customers. I think the customers and vendors themselves are asking the same questions, because we’re now seeing a concerted effort in the server space by vendors such as Dell and HP to differentiate “above the board” with software and system innovations.

Fear Petrifies

Can HP really become a dominant purveyor of software and services to enterprises and cloud service providers? Can Dell be successful as a major player in the data center? Both companies would like to think that they can achieve those objectives, but it remains to be seen whether they have the courage of their convictions. Would they bet the business on such strategic shifts?

Aye, there’s the rub. Each is holding onto a commoditized, low-margin PC business not because they like being there, but because they’re afraid of being somewhere else.

Networking Vendors Tilt at ONF Windmill

Closely following the latest developments and continuing progress of software-defined networking (SDN), I am reminded of what somebody who shall remain nameless said not long ago about why he chose to leave Cisco to pursue his career elsewhere.

He basically said that Cisco, as a huge networking company, is having trouble reconciling itself to the reality that the growing force known as cloud computing is not “network centric.” His words stuck with me, and I’ve been giving them a lot of thought since then.

All Computing Now

His opinion was validated earlier this week at a NetEvents symposium in Garmisch, Germany, where Dan Pitt, executive director of the Open Networking Foundation (ONF), made some statements about SDN that, while entirely consistent with what we’ve heard before from that community’s most fervent proponents, also seemed surprisingly provocative. Quoting Pitt, from a blog post published at ZDNet UK:

“In future, networking will become just an integral part of computing, using same tools as the rest of computing. Enterprises will get out of managing plumbing, operators will become software companies, IT will add more business value, and there will be more network startups from Generation Y.”

Pitt was asked what impact this architectural shift would have on network performance. He said that a 30,000-user campus could be supported by a four-year-old Dell PC.

Redefining Architecture, Redefining Value

Naturally, networking vendors can’t be elated at that prospect. Under the SDN master plan, the intelligence (and hence the value) of switching and routing gets moved to a server, or to a cluster of servers, on the edge of the network. Whether this is done with OpenFlow, Open vSwitch, or some other mechanism between the control plane and the switch doesn’t really matter in the big picture. What matters is that networking architectures will be redefined, and networking value will migrate into (and be subsumed within) a computing paradigm. Not to put too fine a point on it, but networking value will be inherent in applications and control-plane software, not in the dumb, physical hardware that will be relegated to shunting packets on the network.
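
To make that division of labor concrete, here is a minimal Python sketch of the split the SDN model envisions. The class and method names are my own illustrative assumptions, not OpenFlow messages or any vendor’s API: the switch does nothing but consult rules it has been handed, while every decision about which rules to install lives in control software.

class DumbSwitch:
    """Forwards packets purely by consulting an installed rule table."""
    def __init__(self, name):
        self.name = name
        self.rules = {}               # match (destination address) -> output port

    def install_rule(self, dst, out_port):
        self.rules[dst] = out_port    # rule pushed down from the control plane

    def forward(self, packet):
        port = self.rules.get(packet["dst"])
        return port if port is not None else "drop"


class ControlProgram:
    """The topology knowledge and policy, in other words the value, live here in software."""
    def __init__(self, switches):
        self.switches = switches

    def apply_policy(self, policy):
        # policy: {switch_name: {destination: output_port}}, computed however we like
        for name, table in policy.items():
            for dst, port in table.items():
                self.switches[name].install_rule(dst, port)


s1 = DumbSwitch("s1")
controller = ControlProgram({"s1": s1})
controller.apply_policy({"s1": {"10.0.0.2": 2}})
print(s1.forward({"dst": "10.0.0.2"}))   # prints 2
print(s1.forward({"dst": "10.0.0.9"}))   # prints drop

The point of the sketch is simply where the logic sits; a real deployment replaces the in-process method call with a protocol such as OpenFlow.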

At that same NetEvents symposium in Germany, a Computerworld UK story quoted Pitt saying something very similar to, though perhaps less eloquent than, what Berkeley professor and Nicira co-founder Scott Shenker said about network-protocol complexity.

Said Pitt:

“There are lots of networking protocols which make it very labour intensive to manage a network. There are too many “band aids” being used to keep a network working, and these band aids can actually cause many of the problems elsewhere in the network.”

Politics of ONF

I’ve written previously about the political dynamics of the Open Networking Foundation (ONF).

Just to recap, if you look at the composition of the board of directors at the ONF, you’ll know all you need to know about who wields power in that organization. The ONF board members are Google, Yahoo, Verizon, Deutsche Telekom, NTT, and Microsoft. Make no mistake about Microsoft’s presence. It is there as a cloud service provider, not as a vendor of technology products.

The ONF is run by large cloud service providers, and it’s run for large cloud service providers, though it’s conceivable that much of what gets done in the ONF will have applicability and value to cloud shops of smaller size and stature. I suppose it’s also conceivable that some of the ONF’s works will prove valuable at some point to large enterprises, though it should be noted that the enterprise isn’t a constituency that is foremost of mind to the ONF.

Vendors Not Driving

One thing is certain: Networking vendors are not steering the ONF ship. I’ve written that before, and I’ll no doubt write it again. In fact, I’ll quote Dan Pitt to that effect right now:

“No vendors are allowed on the (ONF) board. Only the board can found a working group, approve standards, and appoint chairs of working groups. Vendors can be on the groups but not chair them. So users are in the driving seat.”

And those users — really the largest of the cloud service providers — aren’t about to move over. In fact, the power elite that governs the ONF has a definite vision in mind for the future of networking, a future that — as we’ve already seen — will make networking subservient to applications, programmability, and computing.

Transition on the Horizon

As the SDN vision moves downstream from the largest service providers, such as those who run the show at the ONF, to smaller service providers and then to large enterprises, networking companies will have to transform themselves into software vendors — with software business models.

Can they do that? Some of them probably can, but others — probably including the largest of all — will have a difficult time making the transition, prisoners of their own past success and circumscribed by the classic “innovator’s dilemma.” Cisco, a networking colossus, has built a thriving franchise and dominant market position, replete with a full-fledged business model and an enormous sales machine. It will be hard for Cisco to move away from a formula that has filled its coffers all these years.

Still, move they must, though timing, as it often does, will count for a lot. The SDN wave won’t inundate the marketplace overnight, but, regardless of the underlying protocols and mechanisms that might run alongside or supersede OpenFlow, SDN seems set to eventually win adherents in CFO and CIO offices beyond the realm of the companies represented on the ONF’s board of directors. It will take some time, probably many years, but it’s a movement that will gain followers and momentum as it delivers quantifiable business benefits to those that adopt it.

Enterprise As Last Redoubt

The enterprise will be the last redoubt of conventional networking infrastructure, and it’s not difficult to envision Cisco doing everything in its power to keep it that way for as long as possible. Expect networking’s old guard to strongly resist the siren song of SDN. That’s only natural, even if — in the very long haul — it seems a vain pursuit and, ultimately, a losing battle.

At this point, I just want to emphasize that SDN need not lead to the commoditization of networking. Granted, it might lead to the commoditization of certain types of networking hardware, but there’s still value, much of it proprietary, that software-centric networking vendors can bring to the SDN table. But, as I said earlier, for many vendors that will mean a shift in business model, product focus, and go-to-market strategy.

In that Computerworld piece, some wonder whether networking vendors could prevent the rise of software-defined networking by refusing to play along.

Not Going Away

Again, I can easily imagine the vendors slowing and impeding the ascent of SDN within enterprises, but there’s absolutely no way for them to forestall its adoption at the major service providers represented by the ONF board members. Those players have the capital and the operational resources, to say nothing of the business motivation, to roll their own switches, perhaps with the help of ODMs, and to program their own applications and networks. That train has left the station, and it can’t be recalled by even the largest of networking vendors, who really have no leverage or say in the matter. They can play along and try to find a niche where they can continue to add value, or they can dig in their heels and get circumvented entirely. It’s their choice.

Either way, the tension between the ONF and the traditional networking vendors is palpable. In the IETF, the vendors are casting glances and sometimes aspersions at the ONF, trying to figure out how they can mount a counterattack. The battle will be joined, but the ONF rules its own roost — and it isn’t going away.

Avaya’s Long-Deferred IPO Now Rescheduled

Bloomberg reported last week that Avaya’s long-deferred IPO could materialize in April. That date isn’t carved in stone, however, so don’t assume that it’s a done deal. A lot can happen between now and then.

Avaya’s latest quarterly financial report featured a reduced loss, though revenue growth remains tepid at best. Most of the improvement resulted from cost-cutting measures.

From what I have learned, Avaya continues to pursue assiduous cost reductions. In fact, I was informed last week that Avaya dismissed about 70 call-center employees at its Highlands Ranch and Westminster locations near Denver, Colorado.

Clever Deduction 

The source for that bit of news appears to have been one of Avaya’s former employees, who provided me with said news before his (or her) corporate email account was shut down. He (or she) actually sent the information to me as a comment to an earlier post I wrote about Avaya’s seemingly frozen IPO plans. 

Regular readers of this blog will know that I invite readers to submit confidential information via email. If sources wish to remain anonymous, I am more than happy to oblige.

In this case, what surprised me was that the information arrived in what would have been a public blog comment attributed to an employee (well, former employee) of Avaya. When I saw the email address, I replied to ask whether the individual wished me to withhold the comment so that he or she could remain an anonymous source. My reply bounced, leading me to deduce in Sherlock Holmesian fashion that the source was, in fact, one of the Avaya employees affected by the cuts. A brief online investigation confirmed that the individual in question worked at one of the locations hit by the purge.

IPO Preferred to “Nortel Option”

Even though the individual no longer works at Avaya, I don’t feel comfortable publishing the comment. I don’t want this person to compromise his (or her) career prospects because of something that might have been done hastily and without due consideration of the professional consequences.

As for Avaya, the update on the IPO plans, however tentative, couldn’t come at a better time. Toward the end of last year, Jeff Hawkes offered a detailed analysis and sobering assessment of Avaya’s prospects. Hawkes concluded that the company, encumbered with substantial long-term debt, faced the choice of a public offering or an asset sale, along the lines of the one Nortel executed as part of bankruptcy proceedings. In fact, Hawkes suggested there was a strong possibility that Avaya might file for “Chapter 7 or 11 bankruptcy.”

For now, though, the slow-rolling IPO is back on track. We’ll have to see whether it reaches its destination this spring. 

Attack on Nortel Not an Anomaly

In my last post, I promised to offer a subsequent entry on why public companies are reluctant to publicize breaches of their corporate networks.

I also suggested that such attacks probably are far more common than we realize. What happened to Nortel likely is occurring to a number of other companies right now.

It’s easy to understand why public companies don’t like to disclose that they’ve been the victim of hacking exploits, especially if those attacks result in the theft of intellectual property and trade secrets.

Strong Sell Signals

Because they are public companies, their shares trade on stock markets. Not without reason, shareholders and prospective investors might be inclined to interpret significant breaches of corporate networks as strong sell signals.

After all, loss of intellectual property — source code, proprietary product designs, trade secrets, and strategic plans — damages brand equity. Upon learning that the company in which they hold shares had its intellectual property pilfered, investors might be inclined to deduce that the stolen assets will later manifest themselves as lost revenue, reduced margins, decreased market share, and diminished competitive advantage.

Hacking exploits that result in perceived or real loss of substantial intellectual property represent an investor-relations nightmare.  A public company that discloses a major cyber breach that resulted in the loss of valuable business assets is far more likely to be met with market dismay than with widespread sympathy.

Downplay Losses

So, if public companies are breached, they keep it to themselves. If, however, a company is compelled by circumstances beyond its control to make a public disclosure about being attacked, it will downplay the severity and the risks associated with the matter.

In early 2010, you will recall, Google announced that it had been subjected to a persistent cyber attack that originated in China. It was part of a larger attack, called Operation Aurora, aimed at dozens of other companies.

Some companies acknowledged publicly that they were attacked. Those companies included Adobe Systems, Juniper Networks, and Rackspace. Other companies subjected to the attacks — but which were not as forthcoming about what transpired — reportedly included Yahoo, Symantec, Northrop Grumman, Morgan Stanley, and Dow Chemical.

After the Crown Jewels

At the time of the attacks, Google spun a media narrative that suggested the attacks were designed to spy on human-rights activists by cracking their email accounts. While that might have been a secondary objective of the attacks, the broader pattern of Operation Aurora suggests that the electronic interlopers from China were more interested in obtaining intellectual property and trade secrets than in reading the personal correspondence of human-rights activists.

Indeed, McAfee, which investigated the attacks, reported that the objective of the perpetrators was to gain access to and to potentially modify source-code repositories at the targeted companies. The attackers were after those companies’ “crown jewels.”

The companies that admitted being victims of Operation Aurora all downplayed the extent of the attacks and any possible losses they might have suffered. Perhaps they were telling the truth. We just don’t know.

Transfer of Wealth

Last summer, Dmitri Alperovitch, McAfee’s vice president of threat research, provided the following quote to Reuters:

“Companies and government agencies are getting raped and pillaged every day. They are losing economic advantage and national secrets to unscrupulous competitors. This is the biggest transfer of wealth in terms of intellectual property in history. The scale at which this is occurring is really, really frightening.”

What Alperovitch said might seem melodramatic, but it isn’t. He’s not the only knowledgeable observer who has seen firsthand the electronic pillage and plunder of corporate intellectual property on a vast scale. For the reasons cited earlier in this post, few companies want to put up their hands and acknowledge that they’ve been victimized.

Nortel, in apparently being subjected to a decade-long cyber attack, might have been a special case, but we should not assume that what happened to Nortel is anomalous. For all we know, the largest companies in the technology industry are being violated and plundered as you read this post.

Hackers Didn’t Kill Nortel

For a company that is dead in all meaningful respects, Nortel Networks has an uncanny knack of finding its way into the news. Just as late rapper Tupac Shakur’s posthumous song releases kept him in the public consciousness long after his untimely death, Nortel has its recurring scandals and misadventures to sustain its dark legacy.

Recently, Nortel has surfaced in the headlines for two reasons. First, there was (and is) the ongoing fraud trial of three former Nortel executives: erstwhile CEO Frank Dunn, former CFO Douglas Beatty, and ex-corporate controller Michael Gollogly. That unedifying spectacle is unfolding at a deliberate pace in a Toronto courtroom.

Decade of Hacking

While a lamentable story in its own right, the trial was overshadowed earlier this week by another development. In a story that was published in the Wall Street Journal, a former Nortel computer-security specialist alleged that the one-time telecom titan had been subject to decade-long hacking exploits undertaken by unknown assailants based in China. The objective of the hackers apparently was corporate espionage, specifically related to gaining access to Nortel’s intellectual property and trade secrets. The hacking began in 2000 and persisted well into 2009, according to the former Nortel employee.

After the report was published, speculation arose as to whether, and to what degree, the electronic espionage and implicit theft of intellectual property might have contributed to, or hastened, Nortel’s passing.

Presuming the contents of the Wall Street Journal article to be accurate, there’s no question that persistent hacking of such extraordinary scale and duration could not have done Nortel any good. Depending on what assets were purloined and how they were utilized — and by whom — it is conceivable, as some have asserted, that the exploits might have hastened Nortel’s downfall.

Abundance of Clowns

But there’s a lot we don’t know about the hacking episode, many questions that remain unanswered. Unfortunately, answers to those questions probably are not forthcoming. Vested interests, including those formerly at Nortel, will be reluctant to provide missing details.

That said, I think we have to remember that Nortel was a shambolic three-ring circus with no shortage of clowns at the head of affairs. As I’ve written before, Nortel was its own worst enemy. Its self-harm regimen was legendary and varied.

Just for starters, there was its deranged acquisition strategy, marked by randomness and profligacy. Taking a contrarian position to conventional wisdom, Nortel bought high and sold low (or not at all) on nearly every acquisition it made, notoriously overspending during the Internet boom of the 1990s that turned to bust in 2001.

Bored Directors

The situation was exacerbated by mismanaged assimilation and integration of those poorly conceived acquisitions. If Cisco wrote the networking industry’s how-to guide for acquisitions in the 1990s, Nortel obviously didn’t read it.

Nortel’s inability to squeeze value from its acquisitions was symptomatic of executive mismanagement, delivered by a long line of overpaid executives. And that brings us to the board of directors, which took complacency and passivity to previously unimagined depths of docility and indifference.

In turn, that fecklessness contributed to bookkeeping irregularities and accounting shenanigans that drew the unwanted attention of the Securities and Exchange Commission and the Ontario Securities Commission, and which ultimately resulted in the fraud trial taking place in Toronto.

Death by Misadventures

In no way am I excusing any hacking or alleged intellectual property theft that might have been perpetrated against Nortel. Obviously, such exploits are unacceptable. (I have another post in the works about why public companies are reluctant to expose their victimization in hack attacks, and why we should suspect many technology companies today have been breached, perhaps significantly. But that’s for another day).

My point is that, while hackers and intellectual-property thieves might be guilty of many crimes, it’s a stretch to blame them for Nortel’s downfall. Plenty of companies have been hacked, and continue to be hacked, by foreign interests in pursuit of industrial assets and trade secrets. Those companies, though harmed by such exploits, remain with us.

Nortel was undone overwhelmingly by its own hand, not by the stealthy reach of electronic assassins.

HP’s Project Voyager Alights on Server Value

Hewlett-Packard earlier this week announced the HP ProLiant Generation 8 (Gen8) line of servers, based on the HP ProActive Insight architecture. The technology behind the architecture and the servers results from Project Voyager, a two-year initiative to redefine data-center economics by automating every aspect of the server lifecycle.

You can read the HP press release on the announcement, which covers all the basics, and you also can peruse coverage at a number of different media outposts online.

Voyager Follows Moonshot and Odyssey

The Project Voyager-related announcement follows Project Moonshot and Project Odyssey announcements last fall. Moonshot, you might recall, related to low-energy computing infrastructure for web-scale deployments, whereas Odyssey was all about unifying mission-critical computing — encompassing Unix and x86-based Windows and Linux servers — in one system.

Project Voyager was a $300-million, two-year program that yielded more than 900 patents, and its fruits, as represented by the ProActive Insight architecture, will span the entire HP Converged Infrastructure.

Intelligence and automation are the buzzwords behind HP’s latest server push. By enabling servers to “virtually take care of themselves,” HP is looking to reduce data-center complexity and cost, while increasing system uptime and boosting compute-related innovation. In support of the announcement, HP culled assorted facts and figures to assert that savings from the new servers can be significant across various enterprise deployment scenarios.

Taking Care of Business

In taking care of its customers, of course, HP is taking care of itself. HP says it tested the ProLiant servers in more than 100 real-world data centers, and that they include more than 150 client-inspired design innovations. That process was smart, and so were the results, which not only speak to real needs of customers, but also address areas that are beyond the purview of Intel (or AMD).

The HP launch eschewed emphasis on system boards, processors, and “feeds and speeds.” While some observers wondered whether that decision was taken because Intel had yet to launch its latest Xeon chips, the truth is that HP is wise to redirect the value focus away from chip performance and toward overall system and data-center capabilities.

Quest for Sustainable Value, Advantage 

Processor performance, including speeds and feeds, is the value-added purview of Intel, not of HP. All system vendors ultimately get the same chips from Intel (or AMD). They really can’t differentiate on the processor, because the processor isn’t theirs. Any gains they get from being first to market with a new Intel processor architecture will be evanescent.

They can, however, differentiate more sustainably around and above the processor, which is what HP has done here. Certainly, a lot of value-laden differentiation has been created, as the 900 patent filings attest. In areas such as management, conservation, and automation, HP has found opportunity not only to innovate, but also to make a compelling argument that its servers bring unique benefits into customer data centers.

With margin pressure unlikely to abate in server hardware, HP needed to make the sort of commitment and substantial investment that Project Voyager represented.

Questions About Competition, Patents

From a competitive standpoint, however, two questions arise. First, how easy (or hard) will it be for HP’s system rivals to counter what HP has done, thereby mitigating HP’s edge? Second, what sort of strategy, if any, does HP have in store for its Voyager-related patent portfolio? Come to think of it, those questions — and the answers to them — might be related.

As a final aside, the gentle folks at The Register inform us that HP’s new series of servers is called the ProLiant Gen8 rather than ProLiant G8 — the immediate predecessors are called ProLiant G7 (for Generation 7) — because the sound “gee-ate” is uncomfortably similar to a slang term for “penis” in Mandarin.

Presuming that to be true, one can understand why HP made the change.

SDN Aims to Ditch Bag of Protocols

Throughout much of last year, Scott Shenker delivered a presentation that is considered a seminal touchstone in software-defined networking. The presentation is called “The Future of Networking, and the Past of Protocols,” and an early version of it can be viewed here, though later iterations also are available online.

Shenker is a co-founder and chief scientist at Nicira Networks. He also holds the title of Professor of Electrical Engineering and Computer Science at the University of California, Berkeley. Along with Nick McKeown and Martin Casado — the other co-founders of Nicira — he is widely regarded as a thought leader in the SDN community. (There are SDN intellectual luminaries beyond Nicira’s walls, but it’s notable that these three have combined their talents under one corporate roof.)

Technology is Easy, People Are Hard

In this post, I want to summarize a few salient thoughts featured in Shenker’s presentation.  I will refer to these ideas in subsequent posts (well, at least one, anyway) that explore the commercial potential and the cultural challenges SDN might face in an enterprise world, where — pardon the paraphrasing of the legendary showbiz quote attributed to Donald Wolfit — technology is easy, but people are hard. (Nicira probably knows as much, which is why it is targeting cloud service providers rather than enterprises, at least for now.)

In his presentation, Shenker starts with an academic paradox based on his experiences as a professor at UC Berkeley. He says his colleagues who teach operating systems or databases provide instruction on fundamental principles, such as synchronization and mutual exclusion. Conversely, when he teaches introductory networking, he teaches his students about a “bag of protocols.” There are no real principles in networking, he argues.

Beyond academia, in the realm of the practical and quotidian, Shenker notes that computation and storage have been virtualized and have become flexible and easy to manage, but “not so much with networks,” where protocol complexity reigns.

Masters of Complexity

He then asks why the intellectual foundations of networking are so weak, and wonders how those foundations can be made stronger. Shenker explains that networks were simple and easy to manage initially, starting off with straightforward Ethernet and IP designs. New control requirements resulted in complexity. He says ACLs, VLANs, traffic engineering, middleboxes, and deep packet inspection have complicated what was an elegant architectural design.

Network infrastructure still works, Shenker says, because the networking industry and its professionals are masters of complexity. Unfortunately, the ability to master complexity is a mixed blessing. Complex systems typically are built on weak foundations. The systems are complex because the foundations are weak, and the networking industry has become adept at treating the symptoms rather than curing the disease.

He points out that good user interfaces are not produced by masters of complexity, noting that the ability to master complexity is very different from the ability to extract simplicity.  (He tells an amusing anecdote on this theme hearkening back to his time at Xerox PARC.) Moreover, when one masters complexity, one has to do it for every single problem. When you extract simplicity, the benefits last longer and can be applied more broadly.

Bad Design

Shenker examines how computer programming was simplified over time. He looks at how useful abstractions were defined and evolved to extract simplicity and make programming tasks easier. Abstractions shield users from low-level details.

While abstraction has been at the center of much work in computer science, Shenker explains that this has not been true in networking. He says networking has confined its abstractions to layers that provide data-plane service abstractions, as exemplified by IP’s best-effort delivery and TCP’s reliable byte stream. These abstractions convey ideas about what the network can do for us, but Shenker says they’re terrible interfaces because they violate the principle of modularity. It works, he says, but it’s based on bad system-design decisions.

At the control plane, where useful abstractions do not exist, Shenker says SDN’s goal is to break the bad habit of adding to network complexity. Networking addresses control issues today by defining new protocols (as with routing), by designing new ad hoc mechanisms (as with traffic engineering), or by leaving the problem to manual operator configuration, as is done with access control and middleboxes.

He then looks at how such modular abstractions can be applied to network controls. It’s all about applying abstractions to simplify control tasks relating to forwarding models, distributed state, and detailed configuration.

In SDN, the forwarding model shields the above layers from the particular low-level forwarding design, which could involve possibilities such as a general x86 program, MPLS, or OpenFlow.
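
To illustrate what such a shield looks like in code, here is a small Python sketch, with invented class names rather than any real SDN library: the control layer programs one abstract forwarding interface, and interchangeable back ends translate the same instruction into whatever the underlying mechanism happens to be.

from abc import ABC, abstractmethod

class ForwardingModel(ABC):
    """Abstract forwarding interface the control layer programs against."""
    @abstractmethod
    def set_next_hop(self, device, destination, port):
        ...

class OpenFlowLikeBackend(ForwardingModel):
    def set_next_hop(self, device, destination, port):
        # Would become a flow entry: match on destination, action output:port.
        print(f"{device}: flow entry match(dst={destination}) -> output:{port}")

class LabelSwitchingBackend(ForwardingModel):
    def set_next_hop(self, device, destination, port):
        # Would become a label or interface binding instead.
        print(f"{device}: bind {destination} to interface {port}")

def control_logic(fwd: ForwardingModel):
    # The same control decision, expressed once, runs over either back end.
    fwd.set_next_hop("edge-1", "10.0.0.0/24", 3)

control_logic(OpenFlowLikeBackend())
control_logic(LabelSwitchingBackend())

Nothing above the ForwardingModel interface needs to change when the mechanism underneath does, which is precisely the shielding Shenker describes.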

OpenFlow: Good Enough for Now

On OpenFlow, Shenker later in his presentation offers the following:

“OpenFlow is one possible solution (as a configuration mechanism); it’s clearly not the right solution. I mean, it’s a very good solution for now, but there’s nothing that says this is fundamentally the right answer. Think of Open Flow as x86 instruction set. Is the x86 instruction set correct? Is it the right answer? No, It’s good enough for what we use it for. So why bother changing it? That’s what Open Flow is. It’s the instruction set we happen to use, but let’s not get hung up on it.”

As for state distribution, Shenker said the control program should not have to deal with the vagaries of distributed state. An abstraction should shield the control program from state dissemination/collection. A network operating system can provide that abstraction, delivering a global view of the underlying network.

The control program will operate on this network view, which is essentially a graph, providing input on required device configuration across the network.

Three Basic Interfaces

In Shenker’s view, there are three basic network interfaces tied to SDN abstractions. There’s the forwarding interface, which provides a flexible, abstract forwarding model; there’s the global network view, which shields higher layers from state dissemination/collection; and there is the abstract network view, which shields the control program from details of the physical network.

He points out that these abstractions “are not just academic playthings.” They change where we focus our attention and thus enable much greater functionality from less effort. As a result, there will be no more need to design distributed control protocols. Instead, you would define control programs (applications) over an abstract model.  Writing control programs becomes about what you want to have happen, not how to make it happen.
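
As a toy illustration of declaring the “what” and letting the platform derive the “how,” consider the following self-contained Python sketch. The graph and helper function are assumptions of mine, not Onix, NOX, or any real network operating system: the control program asks for reachability to one switch over a global view of the topology, and per-switch forwarding entries fall out of an ordinary shortest-path computation.

from collections import deque

# Global network view: switch -> {neighbor: output port toward that neighbor}
view = {
    "s1": {"s2": 1, "s3": 2},
    "s2": {"s1": 1, "s4": 2},
    "s3": {"s1": 1, "s4": 2},
    "s4": {"s2": 1, "s3": 2},
}

def compute_entries(view, destination):
    """For every switch, choose the output port on a shortest path to the
    destination switch; this is the 'how' derived from the declared 'what'."""
    parent = {destination: None}
    queue = deque([destination])
    while queue:                      # breadth-first search from the destination
        node = queue.popleft()
        for neighbor in view[node]:
            if neighbor not in parent:
                parent[neighbor] = node
                queue.append(neighbor)
    # Each switch forwards toward its parent, one hop closer to the destination.
    return {sw: view[sw][nxt] for sw, nxt in parent.items() if nxt is not None}

print(compute_entries(view, "s4"))
# {'s2': 2, 's3': 2, 's1': 1}  (s1 could equally use port 2; both paths are shortest)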

Cultural Factors

Bringing the presentation full circle, and reconciling it neatly with its title, Shenker contends that the future of networking lies in cleaner abstractions, not in the ongoing definition of complicated distributed protocols.  As Shenker puts it, “the era of  ‘a new protocol per problem’ is over.”

Networking definitely seems to be heading in the direction Shenker envisages, but I’m not sure the old networking establishment is prepared to bury its “bag of protocols” just yet. As Shenker himself has said, it takes years to internalize and evaluate abstractions. It might take longer for networking professionals — vendors and customers alike — to adjust to software-oriented network programmability.

HP Adds OpenFlow Support to Switches, Sets Stage for SDN Plans

Hewlett-Packard (HP) points to its long history as an OpenFlow proponent, and it’s true that HP has been involved with the protocol almost since its inception. It’s also true that HP continues to be heavily involved with OpenFlow, active in the academic-research community and as a sponsor and member in good standing of Indiana University’s SDN Interoperability Lab.

In that respect, it wasn’t a surprise to see HP announce last week that it is providing OpenFlow support on 16 of its switches, including the HP 3500, 5400 and 8200 series switches. Interestingly, these all come from what was once known as HP ProCurve, not from the 3Com/H3C side of the house. HP says it will extend OpenFlow support to all switches within its FlexNetwork Architecture by the end of the year.

Campus Angle

While the early focus of software-defined networking using OpenFlow has been on data centers and service-provider deployments — as represented by the board members of the Open Networking Foundation (ONF) — HP also sees promise for OpenFlow in enterprise campus applications. That’s an area not many other vendors, established or startups, have stressed.

As of now, HP has not disclosed its plans for controllers or the applications that would inform them. In relation to a controller platform, HP could build, buy, or partner. It could work with more than one controller, depending on its market focus and business objectives. HP’s involvement in the SDN community gives it good visibility into individual controller capabilities, controller-related interoperability challenges (which we know exist), and application development on controller platforms.

Saar Gillai, vice president of HP’s Advanced Technology Group and CTO of HP Networking,  indicated that the company would reveal at least some of its controller and application plans later in the year.

More to Come

When Gillai spoke about OpenFlow last fall, he said the critical factor in OpenFlow’s success will be the SDN applications it supports. HP was, and remains, interested in those applications.

Last fall, Gillai lamented what he viewed as OpenFlow hype, but he foresaw “interesting applications” emerging within the next 12 to 24 months. In enabling a growing number of its switches to support OpenFlow, HP still seems to be working according to that timeline.

Peeling the Nicira Onion

Nicira emerged from pseudo-stealth yesterday, drawing plenty of press coverage in the process. “Network virtualization” is the concise, two-word marketing message the company delivered, on its own and through the analysts and journalists who greeted its long-awaited official arrival on the networking scene.

The company’s website opened for business this week replete with a new look and an abundance of new content. Even so, the content seemed short on hard substance, and those covering the company’s launch interpreted Nicira’s message in a surprisingly varied manner, somewhat like blind men groping different parts of an elephant. (Onion in the title, now an elephant; I’m already mixing flora and fauna metaphors.)

VMware of Networking Ambiguity

Many made the point that Nicira aims to become the “VMware of networking.” Interestingly, Big Switch Networks has aspirations to wear that crown, asserting on its website that “networking needs a VMware.” The theme also has been featured in posts on Network Heresy, Nicira CTO Martin Casado’s blog. He and his colleagues have written alternately that networking both doesn’t and does need a VMware. Confused? That’s okay. Many are in the same boat . . . or onion field, as the case may be.

The point Casado and company were trying to make is that network virtualization, while seemingly overdue and necessary, is not the same as server virtualization. As stated in the first in that series of posts at Network Heresy:

“Virtualized servers are effectively self contained in that they are only very loosely coupled to one another (there are a few exceptions to this rule, but even then, the groupings with direct relationships are small). As a result, the virtualization logic doesn’t need to deal with the complexity of state sharing between many entities.

A virtualized network solution, on the other hand, has to deal with all ports on the network, most of which can be assumed to have a direct relationship (the ability to communicate via some service model). Therefore, the virtual networking logic not only has to deal with N instances of N state (assuming every port wants to talk to every other port), but it has to ensure that state is consistent (or at least safely inconsistent) along all of the elements on the path of a packet. Inconsistent state can result in packet loss (not a huge deal) or much worse, delivery of the packet to the wrong location.”

In Context of SDN Universe

That issue aside, many writers covering the Nicira launch presented information about the company and its overall value proposition consistently. Some articles were more detailed than others. One at MIT’s Technology Review provided good historical background on how Casado first got involved with the challenge of network virtualization and how Nicira was formed to deliver a solution.

Jim Duffy provided a solid piece touching on the company’s origins, its venture-capital investors, and its early adopters and the problems Nicira is solving for them. He also touched on where Nicira appears to fit within the context of the wider SDN universe, which includes established vendors such as Cisco Systems, HP, and Juniper Networks, as well as startups such as Big Switch Networks, Embrane, and Contextream.

In that respect, it’s interesting to note what Embrane co-founder and President Dante Malagrino told Duffy:

 “The introduction of another network virtualization product is further validation that the network is in dire need of increased agility and programmability to support the emergence of a more dynamic data center and the cloud.”

“Traditional networking vendors aren’t delivering this, which is why companies like Nicira and Embrane are so attractive to service providers and enterprises. Embrane’s network services platform can be implemented within the re-architected approach proposed by Nicira, or in traditional network architectures. At the same time, products that address Layer 2-3 and platforms that address Layer 4-7 are not interchangeable and it’s important for the industry to understand the differences as the network catches up to the cloud.”

What’s Nicira Selling?

All of which brings us back to what Nicira actually is delivering to market. The company’s website offers videos, white papers, and product data sheets addressing the Nicira Network Virtualization Platform (NVP) and its Distributed Network Virtualization Infrastructure (DNVI), but I found the most helpful and straightforward explanations, strangely enough, on the Frequently Asked Questions (FAQ) page.

This is an instance of a FAQ page that actually does provide answers to common questions. We learn, for example, that the key components of the Nicira Network Virtualization Platform (NVP) are the following:

– The Controller cluster, a distributed control system

– The Management software, an operations console

– The RESTful API that integrates into a range of Cloud Management Systems (CMS), including a Quantum plug-in for OpenStack.

Those components, which constitute the NVP software suite, are what Nicira sells, albeit in a service-oriented monthly subscription model that scales per virtual network port.
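
To give a sense of what programming against such a RESTful interface looks like, here is a hypothetical Python snippet. The endpoint path, field names, host, and authentication scheme are invented for illustration and are not taken from Nicira’s published API documentation; they simply show the kind of call a cloud management system would make to attach a virtual interface to a logical switch.

import json
import urllib.request

API = "https://nvp-controller.example.com/api"   # placeholder controller address

def create_logical_port(switch_id, vif_id, token):
    """Attach a VM's virtual interface (VIF) to a logical switch (hypothetical API)."""
    payload = json.dumps({
        "logical_switch": switch_id,                        # which virtual network
        "attachment": {"type": "vif", "vif_uuid": vif_id},  # which VM interface
    }).encode("utf-8")
    request = urllib.request.Request(
        f"{API}/logical-ports",
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

# Example (would require a real controller and credentials):
# create_logical_port("ls-tenant-42", "6b1f0c9e-vif", token="secret")

Presumably the Quantum plug-in mentioned above issues calls of this general shape on OpenStack’s behalf, so that tenants never touch the API directly.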

Open vSwitch, Minor Role for OpenFlow 

We then learn that the NVP communicates with the physical network indirectly, through Open vSwitch. Ivan Pepelnjak (I always worry that I’ll misspell his name, but not the Ivan part) provides further insight into how Nicira leverages Open vSwitch. As Nicira notes, the NVP Controller communicates directly with Open vSwitch (OVS), which is deployed in server hypervisors. The server hypervisor then connects to the physical network and end hosts connect to the vswitch. As a result, NVP does not talk directly to the physical network.

As for OpenFlow, its role is relatively minor. As Nicira explains: “OpenFlow is the communications protocol between the controller and OVS instances at the edge of the network. It does not directly communicate with the physical network elements and is thus not subject to scaling challenges of hardware-dependent, hop-by-hop OpenFlow solutions.”

Questions About L4-7 Network Services

Nicira sees its Network Virtualization Platform delivering value in a number of different contexts, including the provision of hardware-independent virtual networks; virtual-machine mobility across subnet boundaries (while maintaining L2 adjacency); edge-enforced, dynamic QoS and security policies (filters, tagging, policy routing, etc.) bound to virtual ports; centralized system-wide visibility & monitoring; address space isolation (L2 & L3); and Layer 4-7 services.

Now that last capability provokes some questions that cannot be answered in the FAQ.

Nicira says its NVP can integrate with third-party Layer 3-7 services, but it also says services can be created by Nicira or its customers.  Notwithstanding Embrane’s perfectly valid contention that its network-services platform can be delivered in conjunction with Nicira’s architectural model, there is a distinct possibility Nicira might have other plans.

This is something that bears watching, not only by Embrane but also by longstanding Layer 4-7 service-delivery vendors such as F5 Networks. At this point, I don’t pretend to know how far or how fast Nicira’s ambitions extend, but I would imagine they’ll be demarcated, at least partly, by the needs and requirements of its customers.

Nicira’s Early Niche

Speaking of which, Nicira has an impressive list of early adopters, including AT&T, eBay, Fidelity Investments, Rackspace, Deutsche Telekom, and Japan’s NTT. You’ll notice a commonality in the customer profiles, even if their application scenarios vary. Basically, these all are public cloud providers, of one sort or another, and they have what are called “web-scale” data centers.

While Nicira and Big Switch Networks both are purveyors of “network virtualization”  and controller platforms — and both proclaim that networking needs a VMware — they’re aiming at different markets. Big Switch is focusing on the enterprise and the private cloud, whereas Nicira is aiming for large public cloud-service providers or big enterprises that provide public-cloud services (such as Fidelity).

Nicira has taken care in selecting its market. An earlier post on Casado’s blog suggests that he and Nicira believe that OpenFlow-based SDNs might be a solution in search of a problem already being addressed satisfactorily within many enterprises. I’m sure the team at Big Switch would argue otherwise.

At the same time, Nicira probably has conceded that it won’t be patronized by Open Networking Foundation (ONF) board members such as Google, Facebook, and Microsoft, each of which is likely to roll its own network-virtualization systems, controller platforms, and SDN applications. These companies not only have the resources to do so, but they also have a business imperative that drives them in that direction. This is especially true for Google, which views its data-center infrastructure as a competitive differentiator.

Telcos Viable Targets

That said, I can see at least a couple ONF board members that might find Nicira’s pitch compelling. In fact, one, Deutsche Telekom, already is on board, at least in part, and perhaps Verizon will come along later. The telcos are more likely than a Google to need assistance with SDN rollouts.

One last note on Nicira before I end this already-prolix post. In the feature article at Technology Review, Casado says it’s difficult for Nicira to impress a layperson with its technology, that “people do struggle to understand it.” That’s undoubtedly true, but Nicira needs to keep trying to refine its message, for its own sake as well as for the sake of prospective customers and other stakeholders.

That said, the company is stocked with impressive minds, on both the business and technology sides of the house, and I’m confident it will get there.

PSA on SDN

While I’m on the subject of software defined networking (SDN), I’d like to take this opportunity to make you aware of an event that’s scheduled to take place toward the end of this month, running concurrently with this year’s edition of the RSA Conference in San Francisco.

Billed interrogatively, “Are Software-Defined Networks Ready for Prime Time?”, the two-day seminar series will unspool at the St. Regis Hotel in San Francisco on February 28 and 29. Representatives from Indiana University, The Chasm Group, and Wiretap Ventures will discuss real-world lessons derived from the former’s SDN production environment.  (Full disclosure: Wiretap Ventures’ Matt Palmer and I were involved in technology-partnership discussions many years ago while working for different employers.)

The seminars are intended for CIOs, CISOs, and CTOs — as well as for security, storage, networking and mobility architects — employed at enterprises, service providers, government agencies, or universities. There will be a technical track that will run on the afternoon of February 28 and again on the morning of February 29, as well as a business track on the afternoon of February 29.

Details on the event can be found here.

Thinking About SDN Controllers

As recent posts on this blog attest, I have been thinking a lot lately about software defined networking (SDN). I’m not alone in that regard, and I’m no doubt behind the curve. Some impressive intellects and astute minds are active in SDN, and I merely do my best to keep up and try to assimilate what I learn from them.

I must admit, I find it a difficult task. Not only is the SDN technical community advancing quickly, propelling the technology forward at a fast pace, but the market has begun to take shape, with new product launches and commercial deployments occurring nearly every week.

Control Issues

Lately, I’ve turned my mind to controllers and their place in the SDN universe. I’ve drawn some conclusions that will be obvious to any habitué of the SDN realm, but that I hope might be useful to others trying, as I am, to capture a reasonably clear snapshot of where things stand. (We’ll need to keep taking snapshots, of course, but that’s always the case with emerging technologies, not just with SDN.)

To understand the role of controllers within the context of SDN, it helps to have analogies. Fortunately, those analogies have been provided by SDN thought leaders such as Nick McKeown, Scott Shenker, Martin Casado, and many others. This blog, as you know, has not been especially replete with pictures or diagrams, but I feel it necessary to point you to a few on this occasion. As such, feel free to consult this presentation by Nick McKeown; it contains several diagrams that vividly illustrate the controller’s (and the control layer’s) key role in SDN.

The controller is akin to a computer’s operating system (a network-control operating system, if you will) on which applications are written. It communicates with the underlying network hardware (switches and routers) through a protocol such as OpenFlow. Different types of SDN controller software, then, are analogous, in certain respects, to Windows, Linux, and Mac OS X. What’s more, just like those computer-based operating systems, today’s controller software is not interoperable.

Not Just OpenFlow

Given what I’ve just written, one ought to choose one’s controller with particular care. Everything I’ve read — please correct me if I’m wrong, SDN cognoscenti — suggests it won’t necessarily be easy to shift your development and programming efforts, not to mention your network applications, seamlessly from one controller to another. (Yes, development and programming, because the whole idea of SDN is to create abstractions that make the network programmable and software defined — or “driven,” as the IETF would have it.)

Moreover — and I think this is potentially important — we hear a lot of talk about “OpenFlow controllers,” but at least some SDN controllers can be (and are) conversant in protocols and mechanisms that extend beyond OpenFlow.

In fact, in a blog post this past August, Nicira’s Martin Casado distinguished between OpenFlow controllers and what he called “general SDN controllers,” which he defined as those “in which the control application is fully decoupled from both the underlying protocol(s) communicating with the switches, and the protocol(s) exchanging state between controller instances (assuming the controllers can run in a cluster).”

That’s significant. As much as OpenFlow and SDN have been conflated and intertwined in the minds of many, we need to understand that SDN controllers are at a higher layer of network abstraction, providing a platform for network applications and a foundation for networking programmability. OpenFlow is just an underlying protocol that facilitates interaction between the controller and data-forwarding tables on switches. Some controllers leverage OpenFlow exclusively, and others are (or will be) capable of accommodating other protocols and mechanisms to achieve the same practical result.

Talking Onix

An example is Onix, the controller proffered by Nicira. In a paper that is available online,  the authors specify the following:

“The Onix design does not dictate a particular protocol for managing network element forwarding state. Rather, the primary interface to the application is the NIB, and any suitable protocol supported by the elements in the network can be used under the covers to keep the NIB entities in sync with the actual network state.”

To be sure, Onix supports OpenFlow, but it also supports “multiple controller/switch protocols (including OpenFlow),” according to Casado’s aforementioned blog post.
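
A rough sketch may help make the NIB idea concrete. In the toy Python below, applications read and write entities in an in-memory view, and interchangeable sync plug-ins push changes to devices over whatever protocol they speak; the class names are illustrative assumptions of mine, not Onix source code.

class NIB:
    """Network Information Base: a dictionary of entities, entity_id -> attributes."""
    def __init__(self, sync_plugins):
        self.entities = {}
        self.sync_plugins = sync_plugins

    def update(self, entity_id, **attrs):
        self.entities.setdefault(entity_id, {}).update(attrs)
        for plugin in self.sync_plugins:       # keep the real network in sync
            plugin.push(entity_id, self.entities[entity_id])

class OpenFlowSync:
    """One possible 'under the covers' protocol, per the passage quoted above."""
    def push(self, entity_id, state):
        print(f"[openflow] program {entity_id} with {state}")

class LegacyCliSync:
    """Another option: the same NIB change driven out over a vendor CLI or API."""
    def push(self, entity_id, state):
        print(f"[legacy] configure {entity_id} with {state}")

nib = NIB(sync_plugins=[OpenFlowSync(), LegacyCliSync()])
nib.update("port:s1/3", admin_up=True, vlan=42)

The application only ever touches the NIB; which protocol carries the change down to the switch is an implementation detail, which is the decoupling Casado describes in a general SDN controller.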

There are a number of controllers available, of course, including NEC’s ProgrammableFlow Controller Appliance and Big Switch Networks’ Floodlight, among others. NEC sells its controller for a list price of $75,000 and it has IBM as a partner, presumably to help enterprises get up and running with SDN implementations in exchange for professional-services fees.

Controller Considerations

But this brief post here isn’t intended to provide an enumeration and evaluation of the available SDN controllers on the marketplace. I’m not up to that job, and I respectfully defer the task to those with a lot more technical acumen than I possess. What I want to emphasize here is that the abstractions and platform support provided by an SDN controller qualify it as a critical component of the SDN design architecture, all the more so because controllers lack interoperability, at least for now.

As Nicira’s Casado wrote last spring, “you most likely will not have interoperability at the controller level (unless a standardized software platform was introduced).”

So, think carefully about the controller. While the OpenFlow protocol provides essentially the same functionality regardless of the context in which it is used, the controller is a different story. There will be dominant controllers, niche controllers, controllers that are more scalable than others (perhaps with the help of applications that run on them), and there will be those that emerge as best of class for service providers, high-performance computing (HPC), and data centers, respectively.

There will be a lot riding on your controller, in more ways than one.

Mazzola Switches Again?

Many of you were avid readers of a post I wrote on Mario Mazzola, Cisco’s former chief development officer (CDO), and his rumored next move. I don’t have much additional detail to report — certainly nothing definitive — but I have heard further rumblings that I will now share with you in an unalloyed spirit of speculative generosity and kindness.

What I hear now is that Mazzola is involved with a startup switch company. I don’t know whether said company has been devised as a Cisco “spin-in,” or whether it will be an independent venture funded by Mazzola and others.  The rumor suggests that at least some of Mazzola’s former collaborators will join him in the venture.

Unfortunately, that’s all I’ve heard. I don’t know what type of switch the alleged company will develop, when the company might emerge from the shadows and into the light, or when I might have further information to convey.  As I learn more,  I’ll let you know.

For you SDN aficionados, I should have a new post tonight or tomorrow morning.