Category Archives: Telecommunications

For Huawei and ZTE, Suspicions Persist

About two weeks ago, the U.S. House Permanent Select Committee on Intelligence held a hearing on “the national-security threats posed by Chinese telecom companies doing business in the United States.” The Chinese telecom companies called to account were Huawei and ZTE, each of which is keen to expand its market reach into the United States.

It is difficult to know what to believe when it comes to the charges leveled against Huawei and ZTE. The accusations against the companies, which involve their alleged capacity to conduct electronic espionage for China and their relationships with China’s government, are serious and plausible but also largely unproven.

Frustrated Ambitions

One would hope these questions could be settled definitively and expeditiously, but this inquiry looks to be a marathon rather than a sprint. Huawei and ZTE want to expand in the U.S. market, but their ambitions are thwarted by government concerns about national security. As long as the concerns remain — and they show no signs of dissipating soon — the two Chinese technology companies face limited horizons in America.

Elsewhere, too, questions have been raised. Although Huawei recently announced a significant expansion in Britain, which received the endorsement of the government there, it was excluded from participating in Australia’s National Broadband Network (NBN). The company also is facing increased suspicion in India and in Canada, countries in which it already has made inroads.

Vehement Denials 

Huawei and ZTE say they’re facing discrimination and protectionism in the U.S.  Both seek to become bigger players globally in smartphones, and Huawei has its sights set on becoming a major force in enterprise networking and telepresence.

Obviously, Huawei and ZTE deny the allegations. Huawei has said it would be self-destructive for the company to function as an agent or proxy of Chinese-government espionage. Huawei SVP Charles Ding, as quoted in a post published on the Forbes website, had this to say:

 As a global company that earns a large part of its revenue from markets outside of China, we know that any improper behaviour would blemish our reputation, would have an adverse effect in the global market, and ultimately would strike a fatal blow to the company’s business operations. Our customers throughout the world trust Huawei. We will never do anything that undermines that trust. It would be immensely foolish for Huawei to risk involvement in national security or economic espionage.

Let me be clear – Huawei has not and will not jeopardise our global commercial success nor the integrity of our customers’ networks for any third party, government or otherwise. Ever.

A Telco Legacy 

Still, questions persist, perhaps because Western countries know, from their own experience, that telecommunications equipment and networks can be invaluable vectors for surveillance and intelligence-gathering activities. As Jim Armitage wrote in The Independent, telcos in Europe and the United States have been tapped repeatedly for skullduggery and eavesdropping.

In one instance, involving the tapping of 100 mobile phones belonging to Greek politicians and senior civil servants in 2004 and 2005, a Vodafone executive was found dead of an apparent suicide. In another case, a former head of security at Telecom Italia fell off a Naples motorway bridge to his death in 2006 after discovering the illegal wiretapping of 5,000 Italian journalists, politicians, magistrates, and — yes — soccer players.

No question, there’s a long history of telco networks and the gear that runs them being exploited for “spookery” (my neologism of the day) gone wild. That historical context might explain at least some of the acute and ongoing suspicion directed at Chinese telco-gear vendors by U.S. authorities and politicians.


Avaya Executive Departures, Intrigue Continue

Like many other vendors, Avaya showed off its latest virtualized wares at VMworld in San Francisco this week. While putting its best face forward at VMware’s annual conference and exhibition, Avaya also experienced further behind-the-scenes executive intrigue.

Sources report that Carelyn Monroe, VP of Global Partner Support Services, resigned from the company last Friday. Monroe is said to have reported to Mike Runda, SVP and president of Avaya Client Services. She joined Avaya in 2009, coming over from Nortel.

Meanwhile, across the pond, Avaya has suffered another defection. James Stevenson, described as a “business-services expert” in a story published online by CRN ChannelWeb UK, has left Avaya to become director of operations for reseller Proximity Communications.

Prior to the departures of Monroe and Stevenson, CFO Anthony Massetti bolted for the exit door immediately after Avaya’s latest inauspicious quarterly results were filed with the Securities and Exchange Commission (SEC). Massetti was replaced by Dave Vellequette, who has a long history of working alongside Avaya CEO Kevin Kennedy.

In some quarters, Kennedy’s reunion with Vellequette is being construed as a circle-the-wagons tactic in which the besieged CEO attempts to surround himself with steadfast loyalists. It probably won’t be long before we see a “Hitler parody” on YouTube about Avaya’s plight (like this one on interoperability problems with unified communications).

Juniper Steers QFabric Toward Midmarket

In taking its QFabric to mid-sized data centers, Juniper Networks has made the right decision. In my discussions with networking cognoscenti at customer organizations large and small, Juniper’s QFabric technology often engenders praise and respect. It also has been perceived as beyond the reach, architecturally and financially, of many shops.

Now Juniper is attempting to reach those midmarket admirers who previously saw QFabric as above their station.

Quest for Growth

To be sure, Juniper targeted the original QFabric, the QFX3000-G, at large enterprises and high-end service providers, addressing applications such as high-performance computing (HPC), high-frequency trading in financial services, and cloud services. In a blog post discussing the downsized QFabric QFX3000-M, R.K. Anand, EVP and general manager of Juniper’s Data Center Business Unit, writes, “. . . the beauty of the ‘M’ configuration is that it’s ideal for satellite data centers, new 10GbE pods and space-constrained data center environments.”

Juniper is addressing a gap here, and it’s a wise move. Still, some wonder whether it has come too late. It’s a fair question.

In pursuing the midmarket, Juniper is ratcheting up its competitive profile against the likes of Cisco Systems and HP, which also have been targeting the midmarket for growth, a commodity in short supply in the enterprise-networking space these days.

Analysts are concerned about maturation and slow growth in the networking market, as well as increasing competition and “challenging” (analyst-speak for crappy) macroeconomic conditions.

Belated . . . Or Just Too Late

At its annual shindig for analysts, Juniper did little to allay those concerns, though the company understandably put an optimistic spin on its product strategy, competitive positioning, and ability to execute.  Needham and Company analyst Alex Henderson summarized proceedings as follows:

“Despite an upbeat tone to Juniper’s strategy positioning and its new product development story, management reset its long term revenue and margin targets to a lower level. Juniper lowered its revenue growth targets to 9-12% from a much older growth target of 20% plus. In addition, management lowered gross margin target to 63-66% from the prior target of 65-67%.”

Like its competitors, Juniper is eager to find growth markets, preferably those that will support robust margins. A smaller QFabric won’t necessarily provide a panacea for Juniper’s market dilemma, but it certainly won’t hurt.

It also gives Juniper’s channel partners reason to call on customers that might have been off their radar previously. As Dhritiman Dasgupta, senior director of Enterprise System and Routing at Juniper, told The VAR Guy, the channel is calling the new QFX3000-M “their version” of the product.

We’ll have to see whether Juniper’s QFabric for mid-sized data centers qualifies as a belated arrival or as a move that simply came too late.

Tidbits: Cuts at Nokia, Rumored Cuts at Avaya

Nokia

Nokia says it will shed about 10,000 employees globally by the end of 2013 in a bid to reduce costs and streamline operations.

The company will close research-and-development centers, including one in Burnaby, British Columbia, and another in Ulm, Germany. Nokia will maintain its R&D operation in Salo, Finland, but it will close its manufacturing plant there.

Meanwhile, in an updated outlook, Nokia reported that “competitive industry dynamics” in the second quarter would hurt its smartphone sales more than originally anticipated. The company does not expect a performance improvement in the third quarter, and that dour forecast caused analysts and markets to react adversely.

Selling its bling-phone Vertu business to Swedish private-equity group EQT will help generate some cash, but Nokia will retain a 10-percent minority stake in Vertu. Nokia probably should have said a wholesale goodbye to its bygone symbol of imperial ostentation.

Nokia might be saying goodbye to other businesses, too. We shall see about Nokia-Siemens Networks, which I believe neither of the eponymous parties wants to own and would eagerly sell if somebody offering more than a bag of beans and fast-food discount coupons stepped forward.

There’s no question that Nokia is bidding farewell to three vice presidents. Stepping down are Mary McDowell (mobile phones), Jerri DeVard (marketing), and Niklas Savander (EVP markets).

But Nokia is buying, too, shelling out an undisclosed sum for imaging company Scalado, looking to leverage that company’s technology to enhance the mobile-imaging and visualization capabilities of its Nokia Lumia smartphones.

Avaya

Meanwhile, staff reductions are rumored to be in the works at increasingly beleaguered Avaya. Sources say “large-scale” job cuts are possible, with news perhaps surfacing later today, just two weeks before the end of the company’s third quarter.

Avaya’s financial results for its last quarter, as well as its limited growth profile and substantial long-term debt, suggested that hard choices were inevitable.

Debating SDN, OpenFlow, and Cisco as a Software Company

Greg Ferro writes exceptionally well, is technologically knowledgeable, provides incisive commentary, and invariably makes cogent arguments over at EtherealMind.  Having met him, I can also report that he’s a great guy. So, it is with some surprise that I find myself responding critically to his latest blog post on OpenFlow and SDN.

Let’s start with that particular conjunction of terms. Despite occasional suggestions to the contrary, SDN and OpenFlow are not inseparable or interchangeable. OpenFlow is a protocol, a mechanism that allows a server, known in SDN parlance as a controller, to interact with and program flow tables (for packet forwarding) on switches. It facilitates the separation of the control plane from the data plane in some SDN networks.
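
To make that division of labor concrete, here is a minimal sketch of the controller-to-switch relationship, written with the open-source Ryu controller framework (my choice for illustration; the post doesn’t tie itself to any particular controller). When a switch connects, the controller uses OpenFlow to program a flow-table entry on it:

```python
# Minimal OpenFlow controller sketch using Ryu (illustrative only).
# On switch connect, install a table-miss flow entry that punts unmatched
# packets to the controller: the control plane (this app) decides, while
# the switch's flow table (data plane) merely forwards.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class MinimalController(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        match = parser.OFPMatch()  # wildcard match: every packet
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                             actions)]
        # OFPFlowMod is the OpenFlow message that programs the flow table.
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```

The switch retains only its flow table; every forwarding decision of consequence is made in server-side software, which is precisely the separation of control plane and data plane described above.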

But OpenFlow is not SDN, which can be achieved with or without OpenFlow. In fact, Nicira Networks recently announced two SDN customer deployments of its Network Virtualization Platform (NVP) — at DreamHost and at Rackspace — and you won’t find mention of OpenFlow in either press release, though OpenStack and its Quantum networking project receive prominent billing. (I’ll be writing more about the Nicira deployments soon.)

A Protocol in the Big Picture 

My point is not to diminish or disparage OpenFlow, which I think can and will be used gainfully in a number of SDN deployments. My point is that we have to be clear that the bigger picture of SDN is not interchangeable with the lower-level functionality of OpenFlow.

In that respect, Ferro is absolutely correct when he says that software-defined networking, and specifically SDN controller and application software, is “where the money is.” He conflates it with OpenFlow — which may or may not be involved, as we already have established — but his larger point is valid. SDN, at the controller and above, is where all the big changes to the networking model, and to the industry itself, will occur.

Ferro also likely is correct in his assertion that OpenFlow, in and of itself, will not enable “a choice of using low cost network equipment instead of the expensive networking equipment that we use today.” In the near term, at least, I don’t see major prospects for change on that front as long as backward compatibility, interoperability with a bulging bag of networking protocols, and the agendas of the networking old guard are in play.

Cisco as Software Company

However, I think Ferro is wrong when he says that the market-leading vendors in switching and routing, including Cisco and Juniper, are software companies. Before you jump down my throat, presuming that’s what you intend to do, allow me to explain.

As Ferro says, Cisco and Juniper, among others, have placed increasing emphasis on the software features and functionality of their products. I have no objection there. But Ferro pushes his argument too far and suggests that the “networking business today is mostly a software business.”  It’s definitely heading in that direction, but Cisco, for one, isn’t there yet and probably won’t be for some time.  The key word, by the way, is “business.”

Cisco is developing more software these days, and it is placing more emphasis on software features and functionality, but what it overwhelmingly markets and sells to its customers are switches, routers, and other hardware appliances. Yes, those devices contain software, but Cisco sells them as hardware boxes, with box-oriented pricing and box-oriented channel programs, just as it has always done. Nitpickers will note that Cisco also has collaboration and video software, which it actually sells like software, but that remains an exception to the rule.

Talks Like a Hardware Company, Walks Like a Hardware Company

For the most part, in its interactions with its customers and the marketplace in general, Cisco still thinks and acts like a hardware vendor, software proliferation notwithstanding. It might have more software than ever in its products, but Cisco is in the hardware business.

In that respect, Cisco faces the same fundamental challenge that server vendors such as HP, Dell, and — yes — Cisco confront as they address a market that will be radically transformed by the rise of cloud services and ODM-hardware-buying cloud service providers. Can it think, figuratively and literally, outside the box? Just because Cisco develops more software than it did before doesn’t mean the answer is yes, nor does it signify that Cisco has transformed itself into a software vendor.

Let’s look, for example, at Cisco’s approach to SDN. Does anybody really believe that Cisco, with its ongoing attachment to ASIC-based hardware differentiation, will move toward a software-based delivery model that places the primary value on server-based controller software rather than on switches and routers? It’s just not going to happen, because it’s not what Cisco does or how it operates.

Missing the Signs 

And that brings us to my next objection. In arguing that Cisco and others have followed the market and provided the software their customers want, Ferro writes the following:

“Billion dollar companies don’t usually miss the obvious and have moved to enhance their software to provide customer value.”

Where to begin? Well, billion-dollar companies frequently have missed the obvious and gotten it horribly wrong, often when at least some individuals within the companies in question knew that their employer was getting it horribly wrong. That’s partly because past and present successes can sow the seeds of future failure. As Clayton M. Christensen argues in his classic book The Innovator’s Dilemma, industry leaders can have their vision blinkered by past successes, which prevent them from detecting disruptive innovations. In other cases, former market leaders get complacent or fail to acknowledge the seriousness of a competitive threat until it is too late.

The list of billion-dollar technology companies that have missed the obvious and failed spectacularly, sometimes disappearing into oblivion, is too long to enumerate here, but some names spring readily to mind. Right at the top (or bottom) of our list of industry ignominy, we find Nortel Networks. Once a company valued at nearly $400 billion, Nortel exists today only in thoroughly digested pieces that were masticated by other companies.

Is Cisco’s Decline Inevitable?

Today, we see a similarly disconcerting situation unfolding at Research In Motion (RIM), where many within the company saw the threat posed by Apple and by the emerging BYOD phenomenon but failed to do anything about it. Going further back into the annals of computing history, we can adduce examples such as Novell and Digital Equipment Corporation, as well as the raft of other minicomputer vendors that perished from the planet after the rise of the PC and client-server computing. Some employees within those companies might even have foreseen their firms’ dark fates, but the organizations in which they toiled were unable to rescue themselves.

They were all huge successes, billion-dollar companies, but, in the face of radical shifts in industry and market dynamics, they couldn’t change who and what they were. The industry graveyard is full of the carcasses of companies that were once enormously successful.

Am I saying this is what will happen to Cisco in an era of software-defined networking? No, I’m not prepared to make that bet. Cisco should be able to adapt and adjust better than the aforementioned companies were able to do, but it’s not a given. Just because Cisco is dominant in the networking industry today doesn’t mean that it will be dominant forever. As the old investment disclaimer goes, past performance does not guarantee future results. What’s more, Cisco has shown a fallibility of late that was not nearly as apparent in its boom years more than a decade ago.

Early Days, Promising Future

Finally, I’m not sure that Ferro is correct when he says the Open Networking Foundation’s (ONF) board members and its biggest service providers, including Google, will achieve CapEx but not OpEx savings with SDN. We really don’t know whether these companies are deriving OpEx savings because they’re keeping what they do with their operations and infrastructure highly confidential. Suffice it to say, they see compelling reasons to move away from buying their networking gear from the industry’s leading vendors, and they see similarly compelling reasons to embrace SDN.

Ferro ends his piece with two statements, the first of which I agree with wholeheartedly:

“That is the future of Software Defined Networking – better, dynamic, flexible and business focussed networking. But probably not much cheaper in the long run.”

As for that last statement, I believe there is insufficient evidence on which to render a verdict. As we’ve noted before, these are early days for SDN.

Report from Network Field Day 3: Infineta’s “Big Traffic” WAN Optimization

Last week, I had the privilege of serving as a delegate at Network Field Day 3 (NFD3), part of Tech Field Day. It actually spanned two days, last Thursday and Friday, and it truly was a memorable and rewarding experience.

I learned a great deal from the vendor presentations (from SolarWinds, NEC, Arista, and Infineta on Thursday; from Cisco and Spirent on Friday), and I learned just as much from discussions with my co-delegates, whom I invite you to get to know on Twitter and on their blogs.

The other delegates were great people, with sharp minds and exceptional technical aptitude. They were funny, too. As I said above, I was honored and privileged to spend time in their company.

Targeting “Big Traffic” 

In this post, I will cover our visit with Infineta Systems. Other posts, either directly about NFD3 or indirectly about the information I gleaned from the NFD3 presentations, will follow at later dates as circumstances and time permit.

Infineta contends that WAN optimization comprises two distinct markets: WAN optimization for branch traffic, and WAN optimization for what Infineta terms “big traffic.” Each has different characteristics. WAN optimization for branch traffic is typified by relatively low bandwidth over relatively long distances, whereas WAN optimization for “big traffic” is marked by high bandwidth across varying distances. Given their characteristics, Infineta asserts, the two types of WAN optimization require different system architectures.

Moreover, the two distinct types of WAN optimization also feature different categories of application traffic. WAN optimization for branch traffic is characterized by user-to-machine traffic, which involves a human directly interacting with a device and an application. Conversely, WAN optimization for big traffic, usually data-center to data-center in orientation, features machine-to-machine traffic.

Because different types of buyers are involved, the sales processes for the two types of WAN optimization differ, too.

Applications and Use Cases

Infineta has chosen to go big-game hunting in the WAN-optimization market. It’s chasing Big Traffic with its Data Mobility Switch (DMS), equipped with 10 Gbps of processing capacity and a reputed ROI payback of less than a year.

Deployment of DMS is best suited for application environments that are bandwidth intensive, latency sensitive, and protocol inefficient. Applications that map to those characteristics include high-speed replication, large-scale data backup and archiving, huge file transfers, and the scale-out of growing application traffic. That means deployment typically occurs between two or more data centers that can be hundreds or even thousands of miles apart, employing OC-3 to OC-192 WAN connections.

In Infineta’s presentation to us, the company featured use cases that covered virtual machine disk (VMDK) and database protection as well as high-speed data replication. In each instance, Infineta claimed compelling results in overall performance improvement, throughput, and WAN-traffic reduction.

Dedupe “Crown Jewels”

So, you might be wondering, how does Infineta attain those results? During a demonstration of DMS in action, Infineta took us through the technology in considerable detail. Infineta says its deduplication technologies are its “crown jewels,” and it has filed for and received a mathematically daunting patent to defend them.

At this point, I need to make a brief detour to explain that Infineta’s DMS is a hardware-based product that uses field-programmable gate arrays (FPGAs), whereas Infineta’s primary competitors use software that runs on off-the-shelf PC systems. Infineta decided against a software-based approach — replete with large dictionaries and conventional deduplication algorithms — because it ascertained that the operational overhead and latency implicit in that approach inhibited the performance and scalability its customers required for their data-center applications.

To minimize latency, then, Infineta’s DMS was built with FPGA hardware designed around a multi-gigabit switch fabric. The DMS is the souped-up vehicle that harnesses the power of the company’s approach to deduplication, which is intended to address traditional deduplication bottlenecks relating to disk I/O bandwidth, CPU, memory, and synchronization.

Infineta says its approach to deduplication is typified by an overriding focus on minimizing sequentiality and synchronization, buttressed and served by massive parallelism, computational simplicity, and fixed-size dictionary records.
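
Infineta’s actual design is patented and baked into FPGA hardware, so the best I can offer is a toy sketch of the general idea behind fixed-size dictionary records: fingerprint fixed-size blocks, look them up in a bounded dictionary, and emit references for repeats. The block size, dictionary cap, and function names below are my own illustrative assumptions, not Infineta’s implementation.

```python
# Toy fixed-size-block deduplication (illustrative; NOT Infineta's method).
# Fixed-size records keep every dictionary operation uniform and cheap,
# which is what makes this style of design amenable to massive parallelism.
import hashlib

BLOCK = 4096            # fixed block size (assumed for illustration)
DICT_LIMIT = 1 << 20    # bound the dictionary so records stay fixed-cost

def dedupe(stream: bytes):
    dictionary = {}     # fingerprint -> index of first occurrence
    out = []            # (is_reference, payload) pairs
    for i in range(0, len(stream), BLOCK):
        block = stream[i:i + BLOCK]
        fp = hashlib.sha1(block).digest()   # fixed-size fingerprint record
        if fp in dictionary:
            out.append((True, dictionary[fp]))   # repeat: emit a reference
        else:
            if len(dictionary) < DICT_LIMIT:
                dictionary[fp] = i // BLOCK
            out.append((False, block))           # new data: emit literally
    return out

# A repetitive stream reduces to mostly references:
encoded = dedupe(b"A" * BLOCK * 3 + b"B" * BLOCK)
print(sum(1 for ref, _ in encoded if ref), "of", len(encoded), "blocks deduplicated")
```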

Patent versus Patented Obtuseness

The company’s founder, Dr. K.V.S. (Ram) Ramarao, then explained Infineta’s deduplication patent. I wish I could convey it to you. I did everything in my limited power to grasp its intricacies and nuances — I’m sure everybody in the room could hear my rickety, wooden mental gears turning and smell the wood burning — but my brain blew a fuse and I lost the plot. Have no fear, though: Derick Winkworth, the notorious @cloudtoad on Twitter, likely will address Infineta’s deduplication patent in a forthcoming post at Packet Pushers. He brings a big brain and an even bigger beard to the subject, and he will succeed where I demonstrated only patented obtuseness.

Suffice it to say, Infineta says the techniques described in its patent result in the capacity to scale linearly in lockstep with additional computing resources, effectively obviating the aforementioned bottlenecks relating to disk I/O bandwidth, CPU, memory, and synchronization. (More information on Infineta’s Velocity Dedupe Engine is available on the company’s website.)

Although its crown jewels might reside in deduplication, Infineta also says DMS delivers the goods in TCP optimization, keeping the pipe full across all active connections.

Not coincidentally, Infineta claims to best its competitors significantly in areas such as throughput, latency, power, space, and “dollar-per-Mbps” delivered. I’m sure those competitors will take issue with Infineta’s claims. As always, the ultimate arbiters are the customers that constitute the demand side of the marketplace.

Fast-Growing Market

Infineta definitely has customers — NaviSite, now part of Time Warner Cable, among them — and if the exuberance and passion of its product managers and technologists are reliable indicators, the company will more than hold its own competitively as it addresses a growing market for WAN optimization between data centers.

Disclosure: As a delegate, my travel and accommodations were covered by Gestalt IT, which is remunerated by vendors for presentation slots at Network Field Day. Consequently, my travel costs (for airfare, for hotel accommodations, and for meals) were covered indirectly by the vendors, but no other recompense, except for the occasional tchotchke, was accepted by me from the vendors involved. I was not paid for my time, nor was I paid to write about the presentations I witnessed. 

Why Many Networking Professionals Will Resist Software-Defined Networking

In the long run, I think software-defined networking (SDN) is destined for tremendous success, not only at massive cloud service providers, where it already is finding favor and increased adoption, but also at smaller service providers and even — with time and perseverance — at enterprises.

It just might not happen as quickly as some expect.

Shape of Networking to Come

In a presentation last autumn at the Open Networking Summit, Nicira co-founder Nick McKeown asserted that SDN would shape the future of networking in several key respects. He said it would do so by empowering network owners and operators, by speeding the pace of innovation, by diversifying the supply chain, and by delivering a robust foundation for programmability predicated on a standardized forwarding abstraction and provable network properties.

On the whole, McKeown probably will be right, and his technological reasoning seems entirely reasonable. As in any market, however, the commercial appeal of SDN will be determined by human factors as well as by technological considerations.

The enterprise market will be the toughest nut to crack, though, and not only because the early agenda of SDN, as defined by the board members of the Open Networking Foundation (ONF) and others, has been focused resolutely on providing solutions for the largest of cloud service providers.

Winning Hearts and Minds

Capturing enterprise hearts and minds will be difficult for SDN, and it will be hard not just because of technological challenges, such as backward compatibility with (and investments in) existing network infrastructure, but also because of the cultural milieu and entrenched mindset of enterprise networking professionals.

I’ve written before, on two occasions actually, about how human and institutional resistance to change can strongly inhibit the commercial adoption of technologies with otherwise compelling credentials and qualifications. Generally, people fear change, especially when they suspect that the change in question will affect them adversely.

And make no mistake, software-defined networking will inspire fear and resistance in some quarters, enterprise networking professionals prominent among them.

Networking’s Cultural Artifacts

Jennifer Rexford, professor of computer science at Princeton University and a former AT&T Research staffer, wrote that one of her colleagues once observed that computer-networking people “really loved their artifacts.” Those artifacts probably would include the many distributed routing protocols that have proliferated over the years.

Software-defined networking wants to loosen emotional attachment to those artifacts, just as it wants to jettison the burgeoning bag of protocols that distinguishes networking from computer programming and other disciplines.  But many networking professionals, including those in enterprise IT departments, see their mastery of complex protocols as hallmarks of who they are and what they do.

Getting the Network “Out of the Way”

Yet there’s more to it than that. Consider the workplace implications of software-defined networks. The whole idea of SDN is to make networks programmable, to put applications and those who program and manage them in the driver’s seat, and to get the network “out of the way” of the sweeping virtualized progress that has enveloped all other data-center infrastructure.

To survive and thrive in this brave new virtual world, networking professionals might have to become more like programmers. From an organizational standpoint, even though there are compelling business and technological reasons to adopt SDN, resistance from the fraternity of networking professionals will be stiff and difficult to overcome.

In the realm of the super-sized data centers at Google and elsewhere, this isn’t a serious problem. The concepts associated with “DevOps” and with thinking outside boxes, departmental and otherwise, thrive in those precincts. Google long has eschewed the purchase of servers and networking gear from vendors, and it does things its own way. To greater or lesser degrees, other large cloud-service providers now dance to a similar beat. But the enterprise? Well, that’s a different animal altogether.

Vendors in No Hurry

Some of the new SDN startups already are meeting with pockets of resistance. They’re seeing cleavage — schism might be too strong a word, though maybe not — between cloud architects and server-virtualization specialists on one side of the house and network professionals on the opposing side. The two camps see things differently, with perspectives and priorities that are difficult to reconcile. (There are exceptions to the rule, of course, with some networking professionals eager to embrace SDN, but they currently are in the minority.)

As we’ve seen, the board of directors at the Open Networking Foundation (ONF) isn’t concerned about how quickly the enterprise gets with the SDN program. I also would suggest that most networking vendors, which are excluded from the ONF’s board, aren’t in a hurry to push an SDN agenda that features logically centralized, server-based controllers. You’ll see SDN from these vendors, yes, but the control plane will be distributed until such time as enterprises and service providers (not on the ONF board) demand otherwise. That will be a while, I suspect.

Deferred Gratification

We tend to underestimate resistance to change in this industry. Gartner devised the “trough of disillusionment” and the technology hype cycle for good reason. Some technologies remain in that basin longer than others. Some never emerge from what becomes a bottomless pit rather than a trough.

That won’t happen to SDN.  As I wrote earlier, I think it has a bright future. Don’t be surprised, though, if the hype gets ahead of the reality. When it comes to technologies and markets, our inherent optimism occasionally is thwarted by our intrinsic resistance to change.

Networking Vendors Tilt at ONF Windmill

Closely following the latest developments and continuing progress of software-defined networking (SDN), I am reminded of what somebody who shall remain nameless said not long ago about why he chose to leave Cisco to pursue his career elsewhere.

He basically said that Cisco, as a huge networking company, is having trouble reconciling itself to the reality that the growing force known as cloud computing is not “network centric.” His words stuck with me, and I’ve been giving them a lot of thought since then.

All Computing Now

His opinion was validated earlier this week at a NetEvents symposium in Garmisch, Germany, where Dan Pitt, executive director of the Open Networking Foundation (ONF), made some statements about software-defined networking (SDN) that, while entirely consistent with what we’ve heard before from that community’s most fervent proponents, also seemed surprisingly provocative. Quoting Pitt, from a blog post published at ZDNet UK:

“In future, networking will become just an integral part of computing, using same tools as the rest of computing. Enterprises will get out of managing plumbing, operators will become software companies, IT will add more business value, and there will be more network startups from Generation Y.”

Pitt was asked what impact this architectural shift would have on network performance. He said that a 30,000-user campus could be supported by a four-year-old Dell PC.

Redefining Architecture, Redefining Value

Naturally, networking vendors can’t be elated at that prospect. Under the SDN master plan, the intelligence (and hence the value) of switching and routing gets moved to a server, or to a cluster of servers, on the edge of the network. Whether this is done with OpenFlow, Open vSwitch, or some other mechanism between the control plane and the switch doesn’t really matter in the big picture. What matters is that networking architectures will be redefined, and networking value will migrate into (and be subsumed within) a computing paradigm. Not to put too fine a point on it, but networking value will be inherent in applications and control-plane software, not in the dumb, physical hardware that will be relegated to shunting packets on the network.

At that same NetEvents symposium in Germany, a Computerworld UK story quoted Pitt saying something very similar to, though perhaps less eloquent than, what Berkeley professor and Nicira co-founder Scott Shenker said about network-protocol complexity.

Said Pitt:

“There are lots of networking protocols which make it very labour intensive to manage a network. There are too many “band aids” being used to keep a network working, and these band aids can actually cause many of the problems elsewhere in the network.”

Politics of ONF

I’ve written previously about the political dynamics of the Open Networking Foundation (ONF).

Just to recap, if you look at the composition of the board of directors at the ONF, you’ll know all you need to know about who wields power in that organization. The ONF board members are Google, Facebook, Yahoo, Verizon, Deutsche Telekom, NTT, and Microsoft. Make no mistake about Microsoft’s presence. It is there as a cloud service provider, not as a vendor of technology products.

The ONF is run by large cloud service providers, and it’s run for large cloud service providers, though it’s conceivable that much of what gets done in the ONF will have applicability and value to cloud shops of smaller size and stature. I suppose it’s also conceivable that some of the ONF’s works will prove valuable at some point to large enterprises, though it should be noted that the enterprise isn’t a constituency that is foremost of mind to the ONF.

Vendors Not Driving

One thing is certain: Networking vendors are not steering the ONF ship. I’ve written that before, and I’ll no doubt write it again. In fact, I’ll quote Dan Pitt to that effect right now:

“No vendors are allowed on the (ONF) board. Only the board can found a working group, approve standards, and appoint chairs of working groups. Vendors can be on the groups but not chair them. So users are in the driving seat.”

And those users — really the largest of the cloud service providers — aren’t about to move over. In fact, the power elite that governs the ONF has a definite vision in mind for the future of networking, a future that — as we’ve already seen — will make networking subservient to applications, programmability, and computing.

Transition on the Horizon

As the SDN vision moves downstream from the largest service providers, such as those who run the show at the ONF, to smaller service providers and then to large enterprises, networking companies will have to transform themselves into software vendors — with software business models.

Can they do that? Some of them probably can, but others — including probably the largest of all — will have a difficult time making the transition, prisoners of their own past success and circumscribed by the classic “innovator’s dilemma.” Cisco, a networking colossus, has built a thriving franchise and dominant market position, replete with a full-fledged business model and an enormous sales machine. It will be hard to move away from a formula that’s filled the coffers all these years.

Still, move they must, though timing, as it often does, will count for a lot. The SDN wave won’t inundate the marketplace overnight, but, regardless of the underlying protocols and mechanisms that might run alongside or supersede OpenFlow, SDN seems set to eventually win adherents in CFO and CIO offices beyond the realm of the companies represented on the ONF’s board of directors. It will take some time, probably many years, but it’s a movement that will gain followers and momentum as it delivers quantifiable business benefits to those that adopt it.

Enterprise As Last Redoubt

The enterprise will be the last redoubt of conventional networking infrastructure, and it’s not difficult to envision Cisco doing everything in its power to keep it that way for as long as possible. Expect networking’s old guard to strongly resist the siren song of SDN. That’s only natural, even if — in the very long haul — it seems a vain pursuit and, ultimately, a losing battle.

At this point, I just want to emphasize that SDN need not lead to the commoditization of networking. Granted, it might lead to the commoditization of certain types of networking hardware, but there’s still value, much of it proprietary, that software-centric networking vendors can bring to the SDN table. But, as I said earlier, for many vendors that will mean a shift in business model, product focus, and go-to-market strategy.

In that Computerworld piece, some wonder whether networking vendors could prevent the rise of software-defined networking by refusing to play along.

Not Going Away

Again, I can easily imagine the vendors slowing and impeding the ascent of SDN within enterprises, but there’s absolutely no way for them to forestall its adoption at the major service providers represented by the ONF board members. Those players have the capital and the operational resources, to say nothing of the business motivation, to roll their own switches, perhaps with the help of ODMs, and to program their own applications and networks. That train has left the station, and it can’t be recalled by even the largest of networking vendors, who really have no leverage or say in the matter. They can play along and try to find a niche where they can continue to add value, or they can dig in their heels and get circumvented entirely. It’s their choice.

Either way, the tension between the ONF and the traditional networking vendors is palpable. In the IETF, the vendors are casting glances and sometimes aspersions at the ONF, trying to figure out how they can mount a counterattack. The battle will be joined, but the ONF rules its own roost — and it isn’t going away.

Hackers Didn’t Kill Nortel

For a company that is dead in all meaningful respects, Nortel Networks has an uncanny knack of finding its way into the news. Just as the late rapper Tupac Shakur’s posthumous song releases kept him in the public consciousness long after his untimely death, Nortel has its recurring scandals and misadventures to sustain its dark legacy.

Recently, Nortel has surfaced in the headlines for two reasons. First, there was (and is) the ongoing fraud trial of three former Nortel executives: erstwhile CEO Frank Dunn, former CFO Douglas Beatty, and ex-corporate controller Michael Gollogly. That unedifying spectacle is unfolding at a deliberate pace in a Toronto courtroom.

Decade of Hacking

While a lamentable story in its own right, the trial was overshadowed earlier this week by another development. In a story that was published in the Wall Street Journal, a former Nortel computer-security specialist alleged that the one-time telecom titan had been subject to decade-long hacking exploits undertaken by unknown assailants based in China. The objective of the hackers apparently was corporate espionage, specifically related to gaining access to Nortel’s intellectual property and trade secrets. The hacking began in 2000 and persisted well into 2009, according to the former Nortel employee.

After the report was published, speculation arose as to whether, and to what degree, the electronic espionage and implicit theft of intellectual property might have contributed to, or hastened, Nortel’s passing.

Presuming the contents of the Wall Street Journal article to be accurate, there’s no question that persistent hacking of such extraordinary scale and duration could not have done Nortel any good. Depending on what assets were purloined and how they were utilized — and by whom — it is conceivable, as some have asserted, that the exploits might have hastened Nortel’s downfall.

Abundance of Clowns

But there’s a lot we don’t know about the hacking episode, many questions that remain unanswered. Unfortunately, answers to those questions probably are not forthcoming. Vested interests, including those formerly at Nortel, will be reluctant to provide missing details.

That said, I think we have to remember that Nortel was a shambolic three-ring circus with no shortage of clowns at the head of affairs. As I’ve written before, Nortel was its own worst enemy. Its self-harm regimen was legendary and varied.

Just for starters, there was its deranged acquisition strategy, marked by randomness and profligacy. Taking a contrarian position to conventional wisdom, Nortel bought high and sold low (or not at all) on nearly every acquisition it made, notoriously overspending during the Internet boom of the 1990s that turned to bust in 2001.

Bored Directors

The situation was exacerbated by mismanaged assimilation and integration of those poorly conceived acquisitions. If Cisco wrote the networking industry’s how-to guide for acquisitions in the 1990s, Nortel obviously didn’t read it.

Nortel’s inability to squeeze value from its acquisitions was symptomatic of executive mismanagement, delivered by a long line of overpaid executives. And that brings us to the board of directors, which took complacency and passivity to previously unimagined depths of docility and indifference.

In turn, that fecklessness contributed to bookkeeping irregularities and accounting shenanigans that drew the unwanted attention of the Securities and Exchange Commission and the Ontario Securities Commission, and which ultimately resulted in the fraud trial taking place in Toronto.

Death by Misadventures

In no way am I excusing any hacking or alleged intellectual property theft that might have been perpetrated against Nortel. Obviously, such exploits are unacceptable. (I have another post in the works about why public companies are reluctant to expose their victimization in hack attacks, and why we should suspect many technology companies today have been breached, perhaps significantly. But that’s for another day).

My point is that, while hackers and intellectual-property thieves might be guilty of many crimes, it’s a stretch to blame them for Nortel’s downfall. Plenty of companies have been hacked, and continue to be hacked, by foreign interests in pursuit of industrial assets and trade secrets. Those companies, though harmed by such exploits, remain with us.

Nortel was undone overwhelmingly by its own hand, not by the stealthy reach of electronic assassins.

Peeling the Nicira Onion

Nicira emerged from pseudo-stealth yesterday, drawing plenty of press coverage in the process. “Network virtualization” is the concise, two-word marketing message the company delivered, on its own and through the analysts and journalists who greeted its long-awaited official arrival on the networking scene.

The company’s website opened for business this week replete with a new look and an abundance of new content. Even so, the content seemed short on hard substance, and those covering the company’s launch interpreted Nicira’s message in a surprisingly varied manner, somewhat like blind men groping different parts of an elephant. (Onion in the title, now an elephant; I’m already mixing flora and fauna metaphors.)

VMware of Networking Ambiguity

Many made the point that Nicira aims to become the “VMware of networking.” Interestingly, Big Switch Networks has aspirations to wear that crown, asserting on its website that “networking needs a VMware.” The theme also has been featured in posts on Network Heresy, Nicira CTO Martin Casado’s blog. He and his colleagues have written alternately that networking both doesn’t and does need a VMware. Confused? That’s okay. Many are in the same boat . . . or onion field, as the case may be.

The point Casado and company were trying to make is that network virtualization, while seemingly overdue and necessary, is not the same as server virtualization. As stated in the first in that series of posts at Network Heresy:

“Virtualized servers are effectively self contained in that they are only very loosely coupled to one another (there are a few exceptions to this rule, but even then, the groupings with direct relationships are small). As a result, the virtualization logic doesn’t need to deal with the complexity of state sharing between many entities.

A virtualized network solution, on the other hand, has to deal with all ports on the network, most of which can be assumed to have a direct relationship (the ability to communicate via some service model). Therefore, the virtual networking logic not only has to deal with N instances of N state (assuming every port wants to talk to every other port), but it has to ensure that state is consistent (or at least safely inconsistent) along all of the elements on the path of a packet. Inconsistent state can result in packet loss (not a huge deal) or much worse, delivery of the packet to the wrong location.”

In Context of SDN Universe

That issue aside, many writers covering the Nicira launch presented information about the company and its overall value proposition consistently. Some articles were more detailed than others. One at MIT’s Technology Review provided good historical background on how Casado first got involved with the challenge of network virtualization and how Nicira was formed to deliver a solution.

Jim Duffy provided a solid piece touching on the company’s origins, its venture-capital investors, and its early adopters and the problems Nicira is solving for them. He also touched on where Nicira appears to fit within the context of the wider SDN universe, which includes established vendors such as Cisco Systems, HP, and Juniper Networks, as well as startups such as Big Switch Networks, Embrane, and Contextream.

In that respect, it’s interesting to note what Embrane co-founder and President Dante Malagrino told Duffy:

 “The introduction of another network virtualization product is further validation that the network is in dire need of increased agility and programmability to support the emergence of a more dynamic data center and the cloud.”

“Traditional networking vendors aren’t delivering this, which is why companies like Nicira and Embrane are so attractive to service providers and enterprises. Embrane’s network services platform can be implemented within the re-architected approach proposed by Nicira, or in traditional network architectures. At the same time, products that address Layer 2-3 and platforms that address Layer 4-7 are not interchangeable and it’s important for the industry to understand the differences as the network catches up to the cloud.”

What’s Nicira Selling?

All of which brings us back to what Nicira actually is delivering to market. The company’s website offers videos, white papers, and product data sheets addressing the Nicira Network Virtualization Platform (NVP) and its Distributed Network Virtualization Infrastructure (DNVI), but I found the most helpful and straightforward explanations, strangely enough, on the Frequently Asked Questions (FAQ) page.

This is an instance of a FAQ page that actually does provide answers to common questions. We learn, for example, that the key components of the Nicira Network Virtualization Platform (NVP) are the following:

– The Controller cluster, a distributed control system

– The Management software, an operations console

– The RESTful API that integrates into a range of Cloud Management Systems (CMS), including a Quantum plug-in for OpenStack.

Those components, which constitute the NVP software suite, are what Nicira sells, albeit in a service-oriented monthly subscription model that scales per virtual network port.
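
As a rough sketch of how a cloud-management system might consume such a RESTful API, consider the following. The controller address, URL path, and payload fields are illustrative assumptions on my part, not Nicira’s documented interface.

```python
# Hypothetical client for a RESTful network-virtualization API.
# The endpoint and fields below are invented for illustration; the real
# resource model would come from the vendor's API documentation.
import json
import urllib.request

CONTROLLER = "https://nvp-controller.example.com"   # assumed address

def create_virtual_network(name: str) -> dict:
    payload = json.dumps({"display_name": name}).encode("utf-8")
    req = urllib.request.Request(
        url=CONTROLLER + "/v1/virtual-networks",    # illustrative path
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)   # e.g., the new virtual network's UUID

# A CMS plug-in (such as OpenStack Quantum's) would invoke something like:
# net = create_virtual_network("tenant-42-web-tier")
```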

Open vSwitch, Minor Role for OpenFlow 

We then learn that the NVP communicates with the physical network indirectly, through Open vSwitch. Ivan Pepelnjak (I always worry that I’ll misspell his name, but not the Ivan part) provides further insight into how Nicira leverages Open vSwitch. As Nicira notes, the NVP Controller communicates directly with Open vSwitch (OVS), which is deployed in server hypervisors. The server hypervisor then connects to the physical network and end hosts connect to the vswitch. As a result, NVP does not talk directly to the physical network.

As for OpenFlow, its role is relatively minor. As Nicira explains: “OpenFlow is the communications protocol between the controller and OVS instances at the edge of the network. It does not directly communicate with the physical network elements and is thus not subject to scaling challenges of hardware-dependent, hop-by-hop OpenFlow solutions.”

Questions About L4-7 Network Services

Nicira sees its Network Virtualization Platform delivering value in a number of different contexts, including the provision of hardware-independent virtual networks; virtual-machine mobility across subnet boundaries (while maintaining L2 adjacency); edge-enforced, dynamic QoS and security policies (filters, tagging, policy routing, etc.) bound to virtual ports; centralized system-wide visibility & monitoring; address space isolation (L2 & L3); and Layer 4-7 services.

Now that last capability provokes some questions that cannot be answered in the FAQ.

Nicira says its NVP can integrate with third-party Layer 4-7 services, but it also says services can be created by Nicira or its customers. Notwithstanding Embrane’s perfectly valid contention that its network-services platform can be delivered in conjunction with Nicira’s architectural model, there is a distinct possibility Nicira might have other plans.

This is something that bears watching, not only by Embrane but also by longstanding Layer 4-7 service-delivery vendors such as F5 Networks. At this point, I don’t pretend to know how far or how fast Nicira’s ambitions extend, but I would imagine they’ll be demarcated, at least partly, by the needs and requirements of its customers.

Nicira’s Early Niche

Speaking of which, Nicira has an impressive list of early adopters, including AT&T, eBay, Fidelity Investments, Rackspace, Deutsche Telekom, and Japan’s NTT. You’ll notice a commonality in the customer profiles, even if their application scenarios vary. Basically, these all are public cloud providers, of one sort or another, and they have what are called “web-scale” data centers.

While Nicira and Big Switch Networks both are purveyors of “network virtualization”  and controller platforms — and both proclaim that networking needs a VMware — they’re aiming at different markets. Big Switch is focusing on the enterprise and the private cloud, whereas Nicira is aiming for large public cloud-service providers or big enterprises that provide public-cloud services (such as Fidelity).

Nicira has taken care in selecting its market. An earlier post on Casado’s blog suggests that he and Nicira believe that OpenFlow-based SDNs might be a solution in search of a problem already being addressed satisfactorily within many enterprises. I’m sure the team at Big Switch would argue otherwise.

At the same time, Nicira probably has conceded that it won’t be patronized by Open Networking Foundation (ONF) board members such as Google, Facebook, and Microsoft, each of which is likely to roll its own network-virtualization systems, controller platforms, and SDN applications. These companies not only have the resources to do so, but they also have a business imperative that drives them in that direction. This is especially true for Google, which views its data-center infrastructure as a competitive differentiator.

Telcos Viable Targets

That said, I can see at least a couple of ONF board members that might find Nicira’s pitch compelling. In fact, one, Deutsche Telekom, already is on board, at least in part, and perhaps Verizon will come along later. The telcos are more likely than a Google to need assistance with SDN rollouts.

One last note on Nicira before I end this already-prolix post. In the feature article at Technology Review, Casado says it’s difficult for Nicira to impress a layperson with its technology, that “people do struggle to understand it.” That’s undoubtedly true, but Nicira needs to keep trying to refine its message, for its own sake as well as for those of prospective customers and other stakeholders.

That said, the company is stocked with impressive minds, on both the business and technology sides of the house, and I’m confident it will get there.

U.S. National-Security Concerns Cast Pall over Huawei

As 2011 draws to a close, Huawei faces some difficult questions about its business prospects in the United States.  The company is expanding worldwide into enterprise networking and mobile devices, such as smartphones and tablets, even as it continues to grow its global telecommunications-equipment franchise.

Huawei is a company that generated 2010 revenue of about $28 billion, and it has an enviable growth profile for a firm of its size. But a dark cloud of suspicion continues to hang over it in the U.S. market, where it has not made headway commensurate with its success in other parts of the world. (As its Wikipedia entry states, Huawei’s products and services have been deployed in more than 140 countries, and it serves 45 of the world’s 50 largest telcos. None of those telcos are in the U.S.)

History of Suspicion

The reason it has not prospered in the U.S. is primarily attributable to persistent government concerns about Huawei’s alleged involvement in cyber espionage as a reputed proxy for China. At this point, I will point out that none of the charges has been proven, and that any evidence against the company has been kept classified by U.S. intelligence agencies.

Nonetheless, innuendo and suspicions persist, and they inhibit Huawei’s ability to serve customers and grow revenue in the U.S. market. In the recent past, the U.S. government has admonished American carriers, including Sprint Nextel, not to buy Huawei’s telecommunications equipment, citing national-security concerns. On the same grounds, U.S. government agencies prevented Huawei from acquiring ownership stakes in U.S.-based companies such as 3Com, subsequently acquired by HP, and 3Leaf Systems. Moreover, Huawei was barred recently from participating in a nationwide emergency network, again for reasons of national security.

Through it all, Huawei has asserted that it has nothing to hide, that it operates no differently from its competitors and peers in the marketplace, and that it has no intelligence-gathering remit from China or any other national government. Huawei even has welcomed an investigation by U.S. authorities, saying that it wants to put the espionage charges behind it once and for all.

Investigation Welcomed

Well, it appears Huawei, among others, will be formally investigated, but it also seems the imbroglio with the U.S. authorities might continue for some time. We learned in November that the U.S. House Permanent Select Committee on Intelligence would investigate potential security threats posed by some foreign companies, Huawei included.

In making the announcement relating to the investigation, U.S. Representative Mike Rogers, a Michigan Republican and the committee’s chairman, said China has increased its cyber espionage in the United States. He cited connections between Huawei’s president, Ren Zhengfei, and the People’s Liberation Army, to which the Huawei chieftain once belonged.

For its part, as previously mentioned, Huawei says it welcomes an investigation. Here’s a direct quote from William Plummer, a Huawei spokesman, excerpted from a recent Bloomberg article:

“Huawei conducts its businesses according to normal business practices just like everybody in this industry. Huawei is an independent company that is not directed, owned or influenced by any government, including the Chinese government.”

Unwanted Attention from Washington

The same Bloomberg article containing that quote also discloses that the U.S. government has invoked Cold War-era national-security powers to compel telecommunications companies, including AT&T Inc. and Verizon Communications Inc., to disclose confidential information about the components and composition of their networks in a hunt for evidence of Chinese electronic malfeasance.

Specifically, the U.S. Commerce Department this past spring requested a detailed accounting of foreign-made hardware and software on carrier networks, according to the Bloomberg article. It also asked the telcos and other companies about security-related incidents, such as the discovery of “unauthorized electronic hardware” or suspicious equipment capable of duplicating or redirecting data.

Brand Ambitions at Risk

The concerns aren’t necessarily limited to alleged Chinese cyber espionage, and Huawei is not the only company whose gear will come under scrutiny. Still, Huawei clearly is drawing a lot of unwanted attention in Washington.

While Huawei would like this matter to be resolved expeditiously in its favor, the investigations probably will continue for some time before definitive verdicts are rendered publicly. In the meantime, Huawei’s U.S. aspirations are stuck in arrested development.

To be sure, the damage might not be restricted entirely to the United States. As this ominous saga plays out, Huawei is trying to develop its brand in Europe, Asia, South America, Africa, and Australia. It’s making concerted advertising and marketing pushes for its smartphones in the U.K., among other jurisdictions, and it probably doesn’t want consumers there or elsewhere to be inundated with persistent reports about U.S. investigations into its alleged involvement with cyber espionage and spyware.

Indulge me for a moment as I channel my inner screenwriter.

Scenario: U.K. electronics retailer. Two blokes survey the mobile phones on offer. Bloke One picks up a Huawei smartphone. 

Bloke One: “I quite fancy this Android handset from Huawei. The price is right, too.”

Bloke Two: “Huawei? Isn’t that the dodgy Chinese company being investigated by the Yanks for spyware?”

Bloke One puts down the handset and considers another option.

Serious Implications

Dark humor aside, there are serious implications for Huawei as it remains under this cloud of suspicion. Those implications conceivably stretch well beyond the shores of the United States.

Some have suggested that the U.S. government’s charges against Huawei are prompted more by protectionism than by legitimate concerns about national security. With the existing evidence against Huawei classified, there’s no way for the public, in the U.S. or elsewhere, to make an informed judgment.

Alcatel-Lucent Banks on Carrier Clouds

Late last week, I had the opportunity to speak with David Fratura, Alcatel-Lucent’s senior director of strategy for cloud solutions, about his company’s new foray into cloud computing, CloudBand, which is designed to give Alcatel-Lucent’s carrier customers a competitive edge in delivering cloud services to their enterprise clientele and — perhaps to a lesser extent — to consumers, too.

Like so many others in the telecommunications-equipment market, Alcatel-Lucent is under pressure on multiple fronts. In a protracted period of global economic uncertainty, carriers are understandably circumspect about their capital spending, focusing investments primarily on areas that will reduce operating costs in the near term or generate similarly immediate new service revenue. Carriers are reluctant to spend much in hopeful anticipation of future growth for existing services; instead, they’re preoccupied with squeezing more value from the infrastructure they already own or with finding entirely new streams of service-based revenue growth, preferably at the lowest possible cost of market entry.

Big Stakes, Complicated Game

Complicating the situation for Alcatel-Lucent — as well as for Nokia Siemens Networks and longtime wireless-gear market leader Ericsson — are the steady competitive advances being made into both developed and developing markets by Chinese telco-equipment vendors Huawei and ZTE. That competitive dynamic is putting downward pressure on hardware margins for the old-guard vendors, compelling them to look to software and services for diversification, differentiation, and future growth.

For its part, Alcatel-Lucent has sought to establish itself as a vendor that can help its operator customers derive new revenue from mobile software and services and, increasingly, from cloud computing.

Alcatel-Lucent CEO Ben Verwaayen is banking on those initiatives to save his job as well as to revive the company’s growth profile. Word from sources close to the company, as reported first by the Wall Street Journal, is that the boardroom knives are out for the man in Alcatel-Lucent’s big chair, though chairman Philippe Camus felt compelled to respond to the intensifying scuttlebutt by giving Verwaayen a qualified vote of confidence.

Looking Up 

With Verwaayen counting on growth markets such as cloud computing to pull him and Alcatel-Lucent out of the line of fire, CloudBand can be seen as something more than the standard product announcement. There’s a bigger context, encompassing not only Alcatel-Lucent’s ambitions but also the evolution of the broader telecommunications industry.

CloudBand, according to a company-issued press release, is designed to deliver a “foundation for a new class of ‘carrier cloud’ services that will enable communications service providers to bring the benefits of the cloud to their own networks and business operations, and put them in an ideal position to offer a new range of high-performance cloud services to enterprises and consumers.”

In a world where everybody is trying to contribute to or be the cloud, that’s a tall order, so let’s take a look at the architecture Alcatel-Lucent has brought forward to create its “carrier cloud.”

CloudBand Architecture

CloudBand comprises two distinct elements. First up is the CloudBand Management System, derived from research work at the venerable Bell Labs, which delivers orchestration and optimization of services between the communications network and the cloud. The second element is the CloudBand Node, which provides computing, storage, and networking hardware and associated software to host a wide range of cloud services. Alcatel-Lucent’s “secret sauce,” and hence its potential to draw meaningful long-term business from its installed base of carrier customers, is the former, but the latter also is of interest.

Hewlett-Packard, as part of a ten-year strategic global agreement with Alcatel-Lucent, will provide converged data-center infrastructure for the CloudBand nodes, including compute, storage, and networking technologies. While Alcatel-Lucent has said it can accommodate gear from other vendors in the nodes, HP’s offerings will be positioned as the default option in the CloudBand nodes. Alcatel-Lucent’s relationship with HP was intended to help “bridge the gap between the data center and the network,” and the CloudBand node definitely fits within that mandate.
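
Alcatel-Lucent hasn’t published CloudBand’s programming interfaces, but the division of labor between the two elements is easy to sketch. Below is a minimal, hypothetical Python illustration of a management system that places services onto nodes while weighing a network property (latency to subscribers) alongside raw data-center capacity. Every class, field, and placement rule here is my own invention for explanatory purposes, not an Alcatel-Lucent API.

```python
# Hypothetical sketch of the CloudBand division of labor: a management
# system that places services, and nodes that host them. All names and
# interfaces are illustrative assumptions, not Alcatel-Lucent APIs.

from dataclasses import dataclass, field

@dataclass
class CloudNode:
    """A pool of compute capacity somewhere in the carrier's network."""
    name: str
    cpu_free: int        # available vCPUs on this node
    latency_ms: float    # network latency from this node to subscribers
    services: list = field(default_factory=list)

class ManagementSystem:
    """Orchestrates placement across nodes, optimizing for the network
    (latency) as well as the data center (capacity)."""

    def __init__(self, nodes):
        self.nodes = nodes

    def place(self, service: str, cpu_needed: int) -> str:
        # Prefer the lowest-latency node that still has capacity; the
        # network-awareness is what a generic data-center scheduler lacks.
        candidates = [n for n in self.nodes if n.cpu_free >= cpu_needed]
        if not candidates:
            raise RuntimeError(f"no capacity available for {service}")
        best = min(candidates, key=lambda n: n.latency_ms)
        best.cpu_free -= cpu_needed
        best.services.append(service)
        return best.name

nodes = [CloudNode("core-dc", 64, 12.0), CloudNode("edge-pop", 16, 3.5)]
mgmt = ManagementSystem(nodes)
print(mgmt.place("video-cache", cpu_needed=8))  # -> "edge-pop"
```

The point of the toy placement rule is the one Alcatel-Lucent itself stresses: a carrier cloud can schedule workloads with knowledge of the network that sits between the data center and the user, which a conventional cloud scheduler does not have.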

Virtualized Network Elements in “Carrier Clouds”

By enabling operators to shift to a cloud-based delivery model, CloudBand is intended to help service providers market and deliver new services to customers quickly, with improved quality of service and at lower cost. Carriers can use CloudBand to virtualize their network elements, converting them to software and running them on demand in their “carrier clouds.” As a result, service providers presumably will derive improved utilization from their network resources, saving money on the delivery of existing services — such as SMS and video — and testing and introducing new ones at lower costs.
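
To make that utilization argument concrete, here is a back-of-the-envelope sketch in the same hypothetical vein. The SMS-gateway example and all of the figures are invented for illustration; the point is only that a network element realized in software can be scaled to actual load, whereas dedicated hardware must be provisioned for the peak around the clock.

```python
# Invented figures illustrating why virtualizing a network element (here,
# an SMS gateway) can improve utilization: software instances scale with
# demand, while fixed hardware is sized for the busiest hour all day.

CAPACITY_PER_INSTANCE = 5_000   # messages/sec one virtual instance handles
PEAK_LOAD = 50_000              # messages/sec at the busiest hour

def instances_needed(load: int) -> int:
    """Smallest number of instances that covers the offered load."""
    return -(-load // CAPACITY_PER_INSTANCE)  # ceiling division

# A sample of hourly loads across part of a day (messages/sec).
hourly_load = [4_000, 2_500, 12_000, 30_000, 50_000, 22_000]

fixed = instances_needed(PEAK_LOAD) * len(hourly_load)         # peak-sized
elastic = sum(instances_needed(load) for load in hourly_load)  # demand-sized

print(f"instance-hours consumed: fixed={fixed}, elastic={elastic}")
# With these made-up numbers: fixed=60, elastic=26 -- the idle headroom
# is capacity the carrier could reclaim for other services.
```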

If carriers embrace CloudBand only for this reason — to virtualize and better manage their network elements and resources for more efficient and cost-effective delivery of existing services — Alcatel-Lucent should do well with the offering. Nonetheless, the company has bigger ambitions for CloudBand.

Alcatel-Lucent has done market research indicating that enterprise IT decision makers’ primary concern about the cloud involves performance rather than security, though both ranked highly. Alcatel-Lucent also found that those same enterprise IT decision makers believe their communications service providers — yes, carriers — are best equipped to deliver the required performance and quality of service.

Helping Carriers Capture Cloud Real Estate 

Although Alcatel-Lucent talks a bit about consumer-oriented cloud services, it’s clear that the enterprise is where it really believes it can help its carrier customers gain traction. That’s an important distinction, too, because it means Alcatel-Lucent might be able to help its customers carve out a niche beyond consumer-focused cloud purveyors such as Google, Facebook, Apple, and even Microsoft. It also means it might be able to assist carriers in differentiating themselves from infrastructure-as-a-service (IaaS) leader Amazon Web Services (AWS), which has become the service of choice for technology startups, and from the likes of Rackspace.

As Alcatel-Lucent’s Fratura emphasized, many businesses, from SMBs up to large enterprises, already obtain hosted services and software-as-a-service (SaaS) offerings from carriers today. What Alcatel-Lucent proposes with CloudBand is designed to help those carriers capture more of the cloud market.

It just might work, but it won’t be easy. As Ray Le Maistre at LightReading wrote, cloud solutions on this scale are not a walk on the beach or a day at the park (yes, you saw what I did there). What’s more, Alcatel-Lucent will have to hope that a sufficient number of its carrier customers can deploy, operate, and manage CloudBand to full effect. That’s not a given, even if Alcatel-Lucent offers CloudBand as a managed service and even though it already sells and delivers professional services to carriers.

Alcatel-Lucent says CloudBand will be available for deployment in the first half of 2012.  At first, CloudBand will run exclusively on Alcatel-Lucent technology, but the company claims to be working with the Alliance for Telecommunications Industry Solutions (ATIS) and the Internet Engineering Task Force (IETF) to establish standards to enable CloudBand to run on gear from other vendors.

With CloudBand, Alcatel-Lucent, at least within the context of its main telecommunications-equipment competitors, is seen as having the first run at the potentially lucrative market opportunity of cloud-enabling the carrier community. Much now will depend on how well it executes and on how effectively its competitors respond to the initiative.

The Carrier Factor

In addition, of course, the carriers themselves are a factor. Although they undoubtedly want to get their hands around the cloud business opportunity, there’s some question as to whether they have the wherewithal to get the job done. The rise of cloud services from Google, Apple, Facebook, and Amazon was partly a result of carriers missing a golden opportunity. One would like to think they’ve learned from those sobering experiences, but one also can’t be sure they won’t run true to prior form.

From what I have heard and seen, the Alcatel-Lucent vision for CloudBand is compelling. It brings the benefits of virtualization and orchestration to carrier network infrastructure, enabling carriers to manage their resources cost-effectively and innovatively. If they seize the opportunity, they’ll save money on their own existing services and be in a great position to deliver a range of cloud-based enterprise services to their business customers.

Alcatel-Lucent should find a receptive audience for CloudBand among its carrier installed base. The question is whether those Alcatel-Lucent customers will be able to get full measure from the technology and from the business opportunity the cloud represents.