Monthly Archives: October 2011

No Word on Avaya’s Long-Pending IPO

Like many other prospective public offerings, Avaya’s pending trick-or-treat IPO would appear to be in suspended animation. The company and its agents wanted to get the deal done this year, but there’s been no word on whether it will go ahead before the sands in 2011’s hourglass run down.

Avaya signaled its intentions and filed the requisite paperwork in June, but then economic conditions worsened. Here’s an excerpt from a post I wrote about the pending IPO when all the leaves were still on the trees:

“We don’t know when Avaya will have its IPO, but we learned a couple weeks ago that the company will trade under the symbol ‘AVYA‘ on the New York Stock Exchange.

Long before that, back in June, Avaya first indicated that it would file for an IPO, from which it hoped to raise about $1 billion. Presuming the IPO goes ahead before the end of this year, Avaya could find itself valued at $5 billion or more, which would be about 40 percent less than private-equity investors Silver Lake and TPG paid to become owners of the company back in 2007.”

Making Moves While Waiting for Logjam to Clear

Speaking of Silver Lake and TPG, they must feel a particular urgency to get this deal consummated.  As mentioned in my previous post, they want to use the proceeds to pay down rather substantial debt (total indebtedness was $6.176 billion as of March 31), redeem preferred stock, and pay management termination fees to Avaya’s sponsors, which happen to be Silver Lake and TPG.  That’s plenty of incentive.

The lead underwriters for the transaction, when it eventually occurs, will be J.P. Morgan, Morgan Stanley, and Goldman Sachs & Company.

Avaya hasn’t been sitting on its hands while waiting to go public. The company acquired SIP-security specialist Sipera, a purveyor of session border controllers (SBC) and unified-communications (UC) security solutions, early this month. It followed that move with the acquisition of Aurix, a UK-based provider of speech analytics and audio data-mining technology.

Financial terms were not disclosed for either transaction.

Brocade Engages Qatalyst Again, Hopes for Different Result

The networking industry’s version of Groundhog Day resurfaced late last week when the Wall Street Journal published an article in which “people familiar with the matter” indicated that Brocade Communications Systems was up for sale — again.

Just like last time, investment-banking firm Qatalyst Partners, headed by the indefatigable Frank Quattrone, appears to have been retained as Brocade’s agent. Quattrone and company failed to find a buyer for Brocade last time, and many suspect the same fate will befall the principals this time around.

Changed Circumstances

A few things, however, are different from the last time Brocade was put on the block and Qatalyst beat Silicon Valley’s bushes seeking prospective buyers. For one thing, Brocade is worth less now than it was back then. The company’s shares are worth roughly half what they were during the fevered speculation about its possible acquisition in the early fall of 2009. With a current market capitalization of about $2.15 billion, Brocade would be easier for a buyer to digest these days.

That said, the business case for a Brocade acquisition doesn’t seem as compelling now as it was then. The core of its commercial existence, still its Fibre Channel product portfolio, is well on its way to becoming a slow-growth legacy business. What’s worse, Brocade has not become a major player in Ethernet switching subsequent to its $3 billion purchase of Foundry Networks in 2008. Prospective buyers running the numbers would be disinclined to pay much of a premium for Brocade today unless they held considerable faith in the company’s cloud-networking vision and strategy, which isn’t at all bad but isn’t assured to succeed.

Unfortunately, another change is that fewer prospective buyers would seem to be in the market for Brocade these days. Back in 2009, Dell, HP, Oracle, and IBM all were mentioned as possible acquirers of the company. One would be hard pressed to devise a plausible argument for any of those vendors to make a play for Brocade now.

Dell is busily and happily assimilating and integrating Force10 Networks; HP is still trying to get its networking house in order and doesn’t need the headaches and overlaps an acquisition of Brocade would entail; IBM is content to stand pat for now with its BLADE Network Technologies acquisition; and, as for Oracle, Larry Ellison was adamant that he wanted no part of Brocade. Admittedly, Ellison is known for his shrewdness and occasional reversals, but he sure seemed convincing regarding Oracle’s position on Brocade.

Sorting Out the Remaining Candidates

So, that leaves, well, who exactly? Some believe Cisco might buy up Brocade as a consolidation play, but that seems only a remote possibility. Others see Juniper Networks similarly making a consolidation play for Brocade. It could happen, I suppose, but I don’t think Juniper needs a distraction of that scale just as it is reaching several strategic crossroads (delivery of product roadmap, changing industry dynamics, technological shifts in its telco and service-provider markets). No, that just wouldn’t seem a prudent move, with the risks significantly outweighing the potential rewards.

Some say that private-equity players, some still flush with copious cash in their coffers, might buy Brocade. They have the means and the opportunity, but is the motive sufficient? It all comes back to believing that Brocade is on a strategic path that will make it more valuable in the future than it is today. In that regard, the company’s recent past performance, from a valuation standpoint, is not encouraging.

A far-out possibility, one that I would classify as remotely unlikely, envisions EMC buying Brocade. That would signal an abrupt end to the Cisco-EMC partnership, and I don’t see a divorce, were it to transpire, occurring quite so suddenly or irrevocably.

I do, however, see one dark-horse vendor that could make a play for Brocade, and might already have done so.

Could it Be . . . Hitachi?

That vendor? It’s Hitachi Data Systems. Yes, you’re probably wondering whether I’ve partaken of some pre-Halloween magic mushrooms, but I’ve made at least a half-way credible case for a Hitachi acquisition of Brocade previously. With its well-hidden Unified Compute Platform (UCP), Hitachi has aspirations to compete against Cisco, HP, Dell and others in converged data-center infrastructure. Hitachi owns 60 percent of a networking joint venture, with NEC as the junior partner, called Alaxala. If you go to the Alaxala website, you’ll see the joint venture’s current networking portfolio, which is bereft of Fibre Channel switches.

The question is, does Hitachi want them? Today, as indicated on the Hitachi website, the company partners with Brocade, Cisco, Emulex (adapters), and QLogic (adapters) for Fibre Channel networking and with Brocade and QLogic (adapters) for iSCSI networking.

The last time Brocade was shopped to the market, the anticlimactic outcome left figurative egg on the faces of Brocade directors and on those of the investment bankers at Qatalyst, which otherwise has achieved a relatively good batting average as a sales agent. Let’s assume — and, believe me, it’s a safe assumption — that media leaks about potential acquisitions typically are carefully contrived occurrences, done either to make a market or to expand a market in which there’s a single bidder that has declared intent and made an offer. In the latter case, the leak is made to solicit a competitive bid and drive up value.

Hold the Egg this Time

I’m not sure what transpired the first time Qatalyst was contracted to find a buyer for Brocade. The only sure inference is that the result (or lack thereof) was not part of the plan. Giving both parties the benefit of the doubt, one would think lessons were learned and they would not want to perform a reprise of the previous script. So, while perhaps last time there wasn’t a bidder or the bidder withdrew its offer after the media leak was made, I think there’s a prospective buyer firmly at the table this time. I also think Brocade wants to see whether a better offer can be had.

My educated guess, with the usual riders and qualifications in effect,* is that perhaps Hitachi or a private-equity concern (Silver Lake, maybe) is at the table. With the leak, Brocade and Qatalyst are playing for time and leverage.

We’ll see, perhaps sooner rather than later.

* I could, alas, be wrong.

The Politics of OpenFlow

“There’s something happening here, but what it is ain’t exactly clear.”  —  Buffalo Springfield, “For What It’s Worth.”

Software-defined networking (SDN) and its protocol of choice, OpenFlow, have been in the news for the past couple weeks, and I suspect we’ll have to get used to it. I feel quite comfortable claiming that neither is a fad, and the salient question is not whether they will take off but how far and how fast they will go.

Those, by the way, are good questions. To get answers, I think we first have to understand the technology and its applicability — as many are doing — and we also have to understand who’s behind the SDN curtain, why those particular entities are driving change, and how serious they are about realizing their objectives.

Strange as it might seem, we can benefit from an understanding of the political economy of OpenFlow. By political economy, I refer to the industry politics that are the driving force behind the economics of OpenFlow-based SDNs.

New Industry Dynamics

I’m pondering this subject increasingly because, apologies to Buffalo Springfield, something is happening here that is new and strange. It is happening because the industry — its technologies and markets — is evolving toward new business structures and away from old ones.

I’ll try not to bore you, but let’s briefly set the context. In the old client-server era, and even in the first Internet- and Web-based wave of distributed computing, the vendors were in the catbird seat. To varying degrees, everybody — service providers, enterprises, SMBs — looked to them for direction and guidance, not to mention solutions. If the vendors weren’t exactly trusted by their customers, they were needed and valued.

Enterprises Lacked Political Clout

Enterprises come in all shapes and sizes, and they span numerous vertical markets. For that reason, they tend not to have an overwhelming commonality of interests, and they don’t organize themselves in common cause. As we have seen, that’s not the case with today’s largest cloud service providers. They are similar to one another in many operational and business respects, they have common interests, and they are working in concert to pursue shared business objectives.

Today we all talk about cloud computing, which has been hyped to death, but one factor that perhaps hasn’t been appreciated fully is that it is a major political change agent for the industry. With cloud computing, power shifts from the vendor community to the service-provider community. As applications and services move to the cloud, market value accompanies them. As Google and Facebook and various other cloud-service providers gain scale, they also gain economic and political power within the industry.

So, what does that mean? Well, it can mean many things, but what it means for the networking industry is that the game is changing, and in ways that must be unnerving to the boards of directors at companies such as Cisco Systems.

Google as Pioneer and Extreme Example

Let’s take Google as an example, albeit an admittedly extreme one. Google tends to make its own technology infrastructure rather than buy it from vendors. It makes its own servers, and it was one of the first service providers (as Andrew Schmitt uncovered a few years ago) to design and build its own switches. As I think about the likely origins of the Open Networking Foundation (ONF), the current manifestation of software-defined networking, and the development of OpenFlow as a mechanism for realizing the business benefits of SDN, I believe we need to look back to Google’s pioneering efforts to build its own networking infrastructure. In retrospect, that was a watershed moment, and it led to what we’re seeing today with SDNs and OpenFlow, which doubtless are motivated by the same business and technology considerations.

To reiterate, as cloud computing rises, technology’s hierarchy of power also changes. As mentioned above, as SMBs and enterprises increasingly move applications to the cloud, where they can be delivered as services by operators such as Google and others of its ilk, two things happen: enterprise-oriented vendors potentially find themselves with a smaller market to serve, and the cloud-service providers begin to assert themselves in a number of ways, which includes setting the technology agenda for the industry.

The Open Networking Foundation (ONF), for example, is run by and for the service-provider community. Networking vendors do not control or drive that organization, and they never will. It is controlled by the six founding members, and they’re all major service providers. Make no mistake, the organization was constructed that way for a reason, with a clear purpose in mind. Those who politically control an organization necessarily set its agenda. The agenda of the ONF, and certainly the development of OpenFlow, is skewed definitively toward their interests. At this point, the ONF’s conception of software-defined networking is not concerned with enterprise needs or requirements. It might get there some day. I know the investors behind Big Switch Networks are hoping it does. But it’s not there now.

Inexorable Cloud Drivers 

I said earlier that Google, in this context, was an extreme example of a service provider. Not every cloud purveyor will design and deliver its own switches, and few would try to tackle the challenge of core routing, as Google seems to be doing now. Still, Google and others behind the ONF have evinced enlightened self-interest. They know that the more they can move the world toward a model of highly efficient and effective cloud-based IT infrastructure (servers, storage, networking), predicated on bare-bones industry-standard hardware and orchestrated by an application-driven software-management layer, the more they will drive down their costs of production and operation. As that is achieved, they won’t just lower their own cost structures; they will also hasten the shift of consumer and enterprise applications and services to the public cloud. It’s a matter of scale, cost, and market dynamics.

NTT sees it, and so do all the others. Even those that haven’t joined the Open Networking Foundation, such as Rackspace, are seeking to leverage OpenFlow.

It’s not that these service providers dislike Cisco or Juniper. As I said before, it’s just business. What Cisco and Juniper sell, and how they do business, might have sufficed before, but it is not an optimal model for these service providers now — or in the future.

I’m not a stock-market prognosticator, but I realize this scenario has implications for investors in networking companies. Some vendors are more exposed than others to this shift and to these developments. I will deal with those companies and their changed circumstances in subsequent posts; I don’t want to muddy the waters by delving into company-specific fortunes at this time. Suffice it to say, there’s a reason why Juniper Networks and Cisco Systems, both of which have significant exposure to the service-provider community, are scrambling to get on the OpenFlow bandwagon. It’s better to be part of this parade than to be left behind, and maintaining a presence in major service-provider accounts is better than having no presence at all.

Nobody Dies, but Some Get Hurt 

Don’t get me wrong. I realize that the enterprise is a big networking market — still the biggest of all — and that the cloud and its technological agenda won’t vaporize that market overnight. Nobody is going to get “killed” or fatally disabled in the next few months, or even probably in the next few years. (I hate that “killer” talk people throw around on the Interwebs. It’s hype, and it doesn’t advance any sort of meaningful discourse or understanding at all.)

For that reason, I think it’s entirely relevant to discuss the current shortcomings of OpenFlow-based SDNs for enterprise networks. Along those lines, Ethan Banks offered some cogent thoughts yesterday on the topic after taking in the OpenFlow Symposium.

As for me, I see what the progenitors of the ONF are trying to achieve. I understand why they are doing it, and I think it’s a big deal in a number of respects. As we move increasingly to the cloud, the major service providers, as represented by the demographics of the ONF board members, are moving to the fore, asserting their growing power.

Platform CEO Discusses IBM Deal, Says Partnerships Unaffected

In an email message addressed to me and a number of other recipients over the weekend, Platform Computing CEO Songnian Zhou referred to a blog post  he wrote — he jokingly referred to it as the “world’s most long-winded” — that explains why and how his company’s acquisition by IBM unfolded.

The post covers Platform’s 19-year chronology as well as the big-picture evolution of distributed computing. It’s a good read, well worth checking out. Zhou knows his subject matter well, writes in a refreshingly jargon- and hype-free style, and covers a lot of ground. What’s more, the post isn’t nearly as prolix as he makes it out to be.

Breaking It Down

As for how the acquisition came about, Zhou explains that it was driven by market dynamics and technological advances:

“The foundation of this acquisition is the ever expanding technical computing market going mainstream. IDC has been tracking this technical computing systems market segment at $14B, or 20% of the overall systems market. It is growing at 8%/year, or twice the growth rate of servers overall. Both IDC and users also point out that the biggest bottleneck to wider adoption is the complexity of clusters and grids, and thus the escalating needs for middleware and management software to hide all the moving parts and just deliver IT as a service. You see, it’s well worth paying a little for management software to get the most out of your hardware. Platform has a single mission: to rapidly deliver effective distributed computing management software to the enterprise. On our own, especially in the early days when going was tough, we have been doing a pretty good job for some enterprises in some parts of the world. But, we are only 536 heroes. Combined with IBM, we can get to all the enterprises worldwide. We have helped our customers to run their businesses better, faster, cheaper. After 19 years, IBM convinced us that there can also be a “better, faster, cheaper” way to help more customers and to grow our business. As they say, it’s all about leverage and scale.”

In a previous post I wrote about the acquisition, I wondered, as have others, about how IBM’s ownership of Platform and its technology might affect the latter’s ability to support heterogeneous systems encompassing servers from IBM’s competitors. Zhou suggests that won’t be a problem, and that a post-acquisition Platform “will work even harder to add value to our partners, including IBM’s competitors.”

That said, even assuming that IBM takes a systems-centric view with Platform, continuing to allow the acquired company to support heterogeneous environments, one has to wonder whether Dell, HP and others will be as receptive to Platform as they were before. It’s a fair question, and those vendors, as well as Platform’s installed base of customers, ultimately will provide the answer.

ONF Deadly Serious About OpenFlow-Based SDNs

Yes, I’m back for further cogitation on software-defined networking (SDN) and OpenFlow.

As I wrote in my last post, relating to Cisco’s recent support for OpenFlow, I wasn’t able to attend the Open Networking Summit held last week at Stanford University.  I have, however, been reading coverage of the conference, and I am now convinced of a few fundamental SDN market realities.

Let’s start with who’s steering this particular SDN ship. The Open Networking Foundation (ONF) has been the driving force behind OpenFlow-based SDN. As I’ve written before, perhaps to the point of mind-numbing redundancy, the ONF is controlled not by networking vendors, but by the behemoths of the cloud service-provider community.

Control and the Power 

Networking vendors can be (and are) ONF members, but one needs to appreciate their place in the foundation’s hierarchy.  They are second-class citizens, and they are not setting the agenda. One more time, I will list the “founding and board members” of the ONF: Deutsche Telekom, Verizon, Google, Facebook, Microsoft, and Yahoo. Microsoft is there by dint of its status as a cloud service provider, not because it is a technology vendor.

Any doubts about where control and power reside within the ONF were put to definitive rest in a recap of the third day of the Open Networking Summit provided by Dell’s Art Fewell on the NetworkWorld website:

“ . . . . Open Networking Foundation (ONF) Director Dan Pitt gave an excellent presentation that demonstrated that the ONF put a lot of thought into how they designed and structured the organization to incorporate lessons learned from older standards bodies, software communities and from the devops and open source movements. He noted that the ONF’s charter would not allow technology vendors to serve on the board of directors, but rather it should be governed by the network operators who have to live with the results. Working group chairs are assigned by the board, and a system of checks and balances has been put into place to try to prevent the problems that some standards organizations have become notorious for.”

It’s All About the Money

The message is clear. The network operators know what they want from SDN and OpenFlow, and they believe they know how to get it. What’s more, they don’t want the networking vendors compromising, subverting, or undermining the result.* (*Not that they’d do that sort of thing, of course.)

What, then, is the overriding objective these big network operators have in mind? Well, it’s to save money, as I explained in my previous post. SDN, and especially SDN enabled by an industry-standard protocol such as OpenFlow, is perceived by the major service providers as a means of substantially reducing network-related capital and, more to the point, operating expenditures. Service-provider executives, especially the mahogany-row bean counters, get excited about that sort of thing.

As Stacey Higginbotham notes, recounting an Open Networking Summit address given by a representative of Verizon:

“Stuart Elby, VP and network architecture & technology chief technologist for Verizon Digital Media Services, laid out how the promise of software-defined networking could make the company’s cost curve match its revenue by cutting down on the need for expensive gear that is costly to buy and even more costly to operate. In a conversation before his presentation, Elby explained how Verizon’s network can view every single packet on the network, but how keeping track of those packets is both a big data problem and expensive from a network management perspective.”

Verizon’s Compelling Chart

Verizon is not alone. Every one of the founding players in ONF sees the same business value in OpenFlow-enabled SDN. In the eyes of the ONF’s most powerful players, conventional network infrastructure is holding back substantial business benefits. It’s not personal, but it is business. And it is how and why major tectonic shifts in this industry come about.

Along those lines, Elby presented a visually powerful illustration that makes clear just how big an issue network-related costs are for Verizon. The chart is reproduced in Higginbotham’s article at GigaOM and in Fewell’s piece at NetworkWorld. If you haven’t seen it, I suggest you take a look. It really is worth a thousand words, but I’ll summarize as follows: Verizon’s network operating costs soon will surpass its revenues, resulting in what Verizon quaintly calls a “non-sustainable business case.” Therefore, there is an urgent need for a solution that lowers network-equipment expenditures, through utilization of off-the-shelf hardware, and enables a business case that better aligns operating costs with revenues. Verizon sees SDN and OpenFlow as the ticket to “inexpensive feature insertion for new services and revenue uplift.”

It’s safe to say the other companies on the ONF board are dealing with variations of the same problem and are seeking similar solutions.

Google Goes Further

Google, for one, isn’t stopping at switches. As Higginbotham explored in an earlier post at GigaOM last week, Google is a fervent proponent of Quagga and the Open Sourcing Routing Project. The search giant’s goals are practical, namely  “cheaper, highly programmable routers it can use in its (core) network.” Called the Open LSR, Google’s router, as Higginbotham writes, is “an open-source router that consists of a switch made with merchant silicon and running Open vSwitch that talks to a server that has an OpenFlow-based controller and uses Quagga to generate the routing tables and forwarding information.”
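
To make the division of labor concrete, here is a minimal sketch of my own — not Google’s code and not the actual OpenFlow wire protocol — of the control/data-plane split described above: route computation (the role Quagga plays in the Open LSR) lives outside the switch, a controller pushes match/action entries down, and the switch forwards purely by table lookup. The class names and the sample prefix-to-port routes are invented purely for illustration.

```python
# Illustrative only: a toy controller/switch split in the spirit of OpenFlow.
# In a real deployment the "install_flow" call would be a FLOW_MOD message
# sent over the OpenFlow channel, and the routes would come from Quagga.

from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional


@dataclass
class FlowEntry:
    match_prefix: str   # destination prefix to match, e.g. "10.1.0.0/16"
    out_port: int       # action: forward out this port


class ToySwitch:
    """Forwards packets by longest-prefix match over controller-installed entries."""

    def __init__(self) -> None:
        self.flow_table: list = []

    def install_flow(self, entry: FlowEntry) -> None:
        # The switch has no routing intelligence of its own; it just stores rules.
        self.flow_table.append(entry)

    def forward(self, dst_ip: str) -> Optional[int]:
        addr = ip_address(dst_ip)
        candidates = [e for e in self.flow_table if addr in ip_network(e.match_prefix)]
        if not candidates:
            return None  # table miss; a real switch would punt to the controller
        best = max(candidates, key=lambda e: ip_network(e.match_prefix).prefixlen)
        return best.out_port


class ToyController:
    """Stands in for the external route-computation layer (Quagga, in the Open LSR)."""

    def __init__(self, routes: dict) -> None:
        self.routes = routes  # prefix -> egress port

    def program(self, switch: ToySwitch) -> None:
        for prefix, port in self.routes.items():
            switch.install_flow(FlowEntry(prefix, port))


if __name__ == "__main__":
    sw = ToySwitch()
    ToyController({"10.1.0.0/16": 1, "10.1.2.0/24": 2, "0.0.0.0/0": 3}).program(sw)
    print(sw.forward("10.1.2.7"))    # 2 (most specific match wins)
    print(sw.forward("10.1.9.9"))    # 1
    print(sw.forward("192.0.2.1"))   # 3 (default route)
```

The point of the exercise is simply that all the decision-making lives in software outside the box, which is exactly why commodity merchant silicon suffices underneath.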

As if the theme needs further belaboring, it’s all about taking cost out of network infrastructure. Google is working with others in the service-provider community to make its low-cost routing dream a reality.

It is clear, then, that the largest service providers, and perhaps many smaller ones besides, want to gain more control over their networks and over the costs associated with them. They have constructed the Open Networking Foundation with a clear purpose in mind, they see SDN and OpenFlow as solutions to a clearly articulated business problem, and they seem determined to see it through to fruition.

What About the Enterprise?

What remains to be seen is how willing enterprises will be to go along for the SDN ride. This is a point that was hammered home by Peter Cristy of the Internet Research Group, who, as reported by Fewell, told the audience at the Open Networking Summit that SDN and OpenFlow are likely to face significant challenges in cracking the enterprise market. Cristy’s points were valid. His most salient observations were that there have been few OpenFlow “killer apps,” and that enterprises do not favor “reproducing the same thing with new technology,” especially if that technology is new and complicated.

He’s right. But we have to remember that the ONF is captained by service providers, and they are not leading their particular SDN charge because they are motivated by altruistic concern for enterprise networks and their stewards. No, for now at least, the ONF’s conception of SDNs will be applicable to the demographic represented by the composition of the ONF board. Enterprises will have to wait, it seems, and that’s probably good news for the established order of networking vendors, especially for Cisco Systems.

Assessing Market Implications

Still, I have to wonder. Cristy is correct to note that the enterprise accounts for the “biggest part of the networking market.” Nonetheless, times are changing. As more applications move to the cloud, and to cloud service providers, SDN and presumably OpenFlow are likely to increasingly affect the top and bottom lines of networking vendors.

Those companies — Cisco, Juniper, and all the rest — have to keep a wary eye on SDN developments. Even if networking vendors eventually lose a chunk of business at network service providers, they’ll still have the enterprise, presuming they can position themselves correctly and anticipate change rather than react belatedly to it.

There’s a lot at stake as this story plays out in the months and years ahead.

Thoughts on Cisco’s OpenFlow Conversion

It has not been easy finding time to write this past week. In addition to work and other demands on my time, I had been suffering from a blockage in my ear that impaired my hearing, upset my balance, and generally annoyed the hell out of me.  That problem has been resolved, and I’m back to being as normal as I get.

Ensconced at the keyboard once again, however, I found myself suffering from writer’s block after having rid myself of ear block. So, I consulted the Idea Generator on my iPhone, and it offered this troika for inspiration: “narcotic neon coat.” I gave that trio of words due consideration, then I decided to write about OpenFlow. Trust me, it’s really for the best.

More than OpenFlow

Some of you might contend that OpenFlow has received too much attention. That’s fair, I suppose, but value judgements about whether a topic has gotten too little, too much, or just enough attention are subjective, and also subject to changing circumstances.

Others might argue that software-defined networking encompasses more than OpenFlow. If that’s your claim, you’d be right. OpenFlow is just one mechanism or means of realizing a software-defined network. There are other ways to get it done, standards-based and proprietary. That said, OpenFlow has major industry backers and momentum, it’s becoming inextricably linked with SDN, and it’s been reluctant to surrender the spotlight.

No matter how you slice it, this was a big week for SDN and for OpenFlow. At Stanford University, the Open Networking Summit was in full swing, dedicated to discourse on SDNs and how they could be realized with OpenFlow.

Crowded at the Summit

I wasn’t there, but many were. More than 600 people applied to attend the summit, but only 350 could be accommodated by organizers, who now have decided to hold the next instance of the event in April rather than waiting a full year until the following October.

Notwithstanding the hype, then, OpenFlow has emerged as a networking topic for all seasons. Certainly the great and the good of the networking industry would seem to agree. Cisco Systems was well represented at the summit, and Cisco got out the message that it is a believer in SDN and plans to support OpenFlow on its Nexus switches, starting with the low-latency Nexus 3000 line. A specific timetable hasn’t been provided. (Or, if it has, I haven’t seen it.)

Cisco: SDN Next Evolution of Networking

In a Cisco blog post penned by Omar Sultan, David Meyer, a Cisco distinguished engineer (as opposed to the undistinguished ones), had the following to say about why Cisco, in supporting OpenFlow, has made what many might interpret as a counterintuitive move:

“. . . . Cisco had always embraced disruption–we don’t always get it right on the first shot, but we usually get it in the end.  Take server virtualization as an example–while we may not have been first off the line, we now have the broadest and strongest portfolio of virtualization networking technologies in the market.  Critics only saw the short-term impact to our switching revenue (less ports sold) but we saw the transformational value of virtualization. We see SDN in a similar light–as the next evolution of networking and we see OF as an excellent mechanism to drive maturation of both the technology and the underlying thinking.”

That last sentence is commendable for its clarity and transparency, and it bears further inspection. Cisco sees SDN as the next evolution in networking, and it perceives OpenFlow as “an excellent mechanism to drive maturation of both the technology and the underlying thinking.”

OpenFlow if Necessary, But Not Necessarily OpenFlow

Now I will foul the waters with my interpretation of what it signifies, beyond the obvious. By necessity, I will veer into the murky shallows of speculation and ambiguity, because — until Cisco provides further elaboration — we won’t know, at least for now, how Cisco ultimately will play its SDN cards. (Yes, I mixed metaphors in that last sentence. So shoot me — but only figuratively).

My take, which might be worth the proverbial two cents, is that Cisco is all in on SDN. As for OpenFlow, I think Cisco is less enamored. I read Meyer’s and Cisco’s comments, and I get the feeling Cisco is saying that it will support OpenFlow as an SDN mechanism if necessary, but not necessarily OpenFlow as its preferred SDN mechanism. Meyer says OpenFlow can drive maturation of SDN technology and thinking, but he hasn’t said that it ultimately will be the only means, or even Cisco’s preferred means, of achieving SDN.

I know that others, including Craig Matsumoto of Light Reading, see a close conjoining of SDN and OpenFlow in Cisco’s positioning. I respectfully disagree, though I could, as always (does it even bear saying?), be wrong.

Divergent Business Interests of ONF’s Board and Networking Vendors

Matsumoto has posited that OpenFlow is looking less like a threat to Cisco and its business model. At this point, it’s still hard to say, but I think Cisco would suffer materially in the long run if OpenFlow matures as the Open Networking Foundation’s six founding board members — carriers and large cloud service providers Deutsche Telekom, Verizon, Facebook, Google, Microsoft, and Yahoo — would like it to do, and if the public cloud fulfills the bulk of its commercial promise.

Further, I think the goal of the ONF Founding Six is completely virtualized infrastructure (compute, storage, networking) run on wall-to-wall, bare-bones hardware, overseen by a management layer of software and driven by applications and services. This would bring lower capital expenditures for gear and reduced operational expenditures for network management.

I realize there’s been a search for OpenFlow’s killer app — and that search should continue, obviously — but the founders of ONF seem to be focused primarily on cost savings. For them, it’s not about doing something strikingly new or revolutionary, but about getting more from less, and for less. In that context, OpenFlow makes sense — at least for them — as it delivers quantifiable business benefits that they have not been able to derive from current network infrastructure.

In the Enterprise, A Different Story

Of course, what the ONF founders want might not be what enterprise IT buyers need. There’s an opening here for Cisco, for HP Networking, for Juniper, for Arista Networks, and for all the other networking vendors to define SDN in ways that are more amenable to those enterprise buyers across a wide range of horizontal and vertical markets.

If all else fails, though, and OpenFlow becomes an SDN juggernaut, there’s always recourse to “embrace and extend,” particularly at the management layer. It’s not as though vendors haven’t cracked open that chestnut before.

Nicira Downplays OpenFlow on Road to Network Virtualization

While recent discussions of software-defined networking (SDN) and network virtualization have focused nearly exclusively on the OpenFlow protocol, various parties are making the point that OpenFlow is just one facet of a bigger story.

One of those parties is Nicira Networks, which was treated to favorable coverage in the New York Times earlier today. In the article, the words “software-defined networking” and “OpenFlow” are conspicuous by their absence. Sure, the big-picture concept of software-defined networking hovers over proceedings, but Nicira takes pains to position itself as a purveyor of “network virtualization,” which is a neater, simpler concept for the broader technology market to grasp.

VMware of Networking

Indeed, leveraging the idea of network virtualization, Nicira positions itself as the VMware of networking, contending that it will resolve the problem of inflexible, inefficient, complex, and costly data-center networks with a network hypervisor that decouples network services from the underlying hardware. Nicira’s goal, then, is to be the first vendor to bring network virtualization up to speed with server and storage virtualization.  
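
To illustrate what that decoupling means in practice, here is a toy of my own construction — emphatically not Nicira’s implementation — showing the essential idea of a network hypervisor: tenants see their own flat virtual networks, while the physical fabric only carries host-to-host tunnel traffic. The NetworkHypervisor and VirtualPort names, and the tunnel output format, are purely hypothetical.

```python
# Illustrative only: the mapping a network-virtualization layer owns is
# "virtual port -> physical location"; everything else is encapsulation.

from dataclasses import dataclass


@dataclass(frozen=True)
class VirtualPort:
    tenant: str      # which virtual network this port belongs to
    vm: str          # the VM attached to the port


class NetworkHypervisor:
    def __init__(self) -> None:
        self.location: dict = {}   # virtual port -> physical host

    def attach(self, port: VirtualPort, host: str) -> None:
        self.location[port] = host

    def send(self, src: VirtualPort, dst: VirtualPort, payload: str) -> str:
        if src.tenant != dst.tenant:
            raise ValueError("ports are in different virtual networks")
        # Encapsulate: the physical network only sees tunnel traffic between
        # hosts, tagged with the tenant's virtual network identifier.
        return (f"tunnel {self.location[src]} -> {self.location[dst]} "
                f"[vni={src.tenant}] payload={payload!r}")


if __name__ == "__main__":
    nh = NetworkHypervisor()
    nh.attach(VirtualPort("tenant-a", "web1"), host="rack1-host3")
    nh.attach(VirtualPort("tenant-a", "db1"), host="rack7-host2")
    print(nh.send(VirtualPort("tenant-a", "web1"),
                  VirtualPort("tenant-a", "db1"), "SELECT 1"))
```

Because the tenant-visible topology is just an entry in that mapping, a VM (and its network state) can move to a different physical host without the tenant noticing, which is the analogy to server virtualization Nicira is drawing.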

GigaOM’s Stacey Higginbotham takes issue with the New York Times article and with Nicira’s claims relating to its putatively peerless place in the networking firmament. Writes Higginbotham: 

“The article . . . .  does a disservice to the companies pursing network virtualization by conflating the idea of flexible and programmable networks with Nicira becoming “to networking something like what VMWare was to computer servers.” This is a nice trick for the lay audience, but unlike server virtualization, which VMware did pioneer and then control, network virtualization currently has a variety of vendors pushing solutions that range from being tied to the hardware layer (hello, Juniper and Xsigo) to the software (Embrane and Nicira). In addition to there being multiple companies pushing their own standards, there’s an open source effort to set the building blocks and standards in place to create virtualized networks.”

The ONF Factor

The open-source effort in question is the Open Networking Foundation (ONF), which is promulgating OpenFlow as the protocol by which software-defined networking will be attained. I have written about OpenFlow and the ONF previously, and will have more to say on both shortly. Recently, I also recounted HP’s position on OpenFlow.

Nicira says nothing about OpenFlow, which suggests the company is playing down the protocol or might be going in a different direction to realize its vision of network virtualization. As has been noted, there’s more than one road to software-defined networking, even though OpenFlow is a path that has been well traveled thus far by industry notables, including the six major service providers that are the ONF’s founding board members (Google, Deutsche Telekom, Verizon, Microsoft, Facebook, and Yahoo).

Then again, you will find Nicira Networks among the ONF’s membership, along with a number of other established and nascent networking vendors. Nicira sees a role for OpenFlow, then, though it clearly wants to put the emphasis on its own software and the applications and services that it enables. There’s nothing wrong with that. In fact, it’s a perfectly sensible strategy for a vendor to pursue.

Tension Between Vendors and Service Providers

Alan S. Cohen, a recent addition to the Nicira team, put it into pithy perspective on his personal blog, where he wrote about why he joined Nicira and why the network will be virtualized. Wrote Cohen:

“Virtualization and the cloud is the most profound change in information technology since client-server and the web overtook mainframes and mini computers.  We believe the full promise of virtualization and the cloud can only be fully realized when the network enables rather than hinders this movement.  That is why it needs to be virtualized.

Oh, by the way, OpenFlow is a really small part of the story.  If people think the big shift in networking is simply about OpenFlow, well, they don’t get it.”

So, the big service providers might see OpenFlow as a nifty mechanism that will allow them to reduce their capital expenditures on high-margin networking gear while also lowering their operational expenditures on network management,  but the networking vendors — neophytes and veterans alike — still seek and need to provide value (and derive commensurate margins) above and beyond OpenFlow’s parameters. 

Update on IBM’s Acquisition of Platform Computing

Despite my best efforts, I have been unable to obtain specific details relating to the price that IBM paid to acquire high-performance computing (HPC) workload-management pioneer Platform Computing. If anything further surfaces on that front, I’ll let you know.

In the meantime, others have made some good observations regarding the logic behind the acquisition and the potential ramifications of the move. Dan Kusnetzky, who has longstanding familiarity with Platform in both vendor and analyst capacities, offers a succinct explanation of what Platform does and then delivers the following verdict:

“I believe IBM will be able to take this technology, integrate it into its “Smarter Computing” marketing programs and introduce many organizations to the benefits of harnessing together the power of a large number of systems to tackle very large and complex workloads.

This is a good match. “

Meanwhile, Curt Monash recounts details of a briefing he had with Platform in August. He suspects that IBM acquired Platform for its MapReduce offering, but, as Kusnetzky suggests, I think IBM also sees a lot of untapped potential in Platform’s traditional HPC-oriented technical markets, where the company already has an impressive roster  of blue-chip customers that have achieved compelling business results in cost savings and time-to-market improvements with the company’s cluster-management and load-sharing software.

There’s a lot of bluster about the cloud in relation to this acquisition, and that undoubtedly is a facet IBM will try to exploit in the future, but today Platform still does a robust business with its flagship software in scientific and technical computing. 

Platform apparently told Monash that it had “close to $100 million in revenue” and about 500 employees. The employee count seems about right, but I suspect the revenue number is exaggerated. According to a CBC news item on the acquisition, market-research firm Branham Group Inc. estimated that Platform generated revenue of about $71.6 million in its 2010 fiscal year. Presuming the Branham numbers to be correct, Platform would have 2011 fiscal year revenue ranging from $75 million to $80 million.
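
As a back-of-the-envelope check — assuming the Branham figure of roughly $71.6 million for fiscal 2010 is right, and that Platform grew somewhere in the neighborhood of the 8 percent annual segment growth IDC cites in Zhou’s post — the arithmetic lands squarely in that range:

```python
# Rough projection only; the growth rates are assumptions, not reported figures.
fy2010 = 71.6  # $M, Branham Group estimate for fiscal 2010
for growth in (0.05, 0.08, 0.12):
    print(f"{growth:.0%} growth -> ~${fy2010 * (1 + growth):.1f}M in fiscal 2011")
# 5% -> ~$75.2M, 8% -> ~$77.3M, 12% -> ~$80.2M: consistent with $75M-$80M,
# and well short of the "close to $100 million" figure.
```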

Finally, Ian Lumb, formerly an employee at Platform (as was your humble scribe), considers the potential implications of the acquisition for Platform’s long-heralded capacity to manage heterogeneous systems and workloads for its customers. This is a point that many analysts missed, and Lumb does an excellent job framing the dilemma IBM faces. Ostensibly, as Lumb notes, it will be business as usual for Platform and its support of heterogeneous systems, including those of IBM competitors such as Dell and HP.

But IBM faces a conundrum. Even if it were to choose to continue to support Platform’s heterogeneous-systems approach in deference to customer demand, the practicalities of doing so would prove daunting. Lumb explains why:

“To deliver a value-rich solution in the HPC context, Platform has to work (extremely) closely with the ‘system vendor’. In many cases, this closeness requires that Intellectual Property (IP) of a technical and/or business nature be communicated – often well before solutions are introduced to the marketplace and made available for purchase. Thus Platform’s new status as an IBM entity, has the potential to seriously complicate matters regarding risk, trust, etc., relating to the exchange of IP.

Although it’s been stated elsewhere that IBM will allow Platform measures of post-acquisition independence, I doubt that this’ll provide sufficient comfort for matters relating to IP. While NDAs specific to the new (and independent) Platform business unit within IBM may offer some measure of additional comfort, I believe that technically oriented approaches offer the greatest promise for mitigating concerns relating to risk, trust, etc., in the exchange of IP.”

It will be interesting to see how IBM addresses that challenge. Platform’s competitors, as Lumb writes, already are attempting to capitalize on the issue. 

More Coming on IBM’s Platform Acquisition

Well, that’s one I got right.

I’ll have some commentary and perhaps some additional detail on IBM’s acquisition of Platform Computing shortly. I just need a dollop of copious spare time. If anybody has any they can share, please send it electronically at your earliest convenience.

With Latest Moves, HP Networking Responds to Customers, Partners, Competitors

Although media briefings took place yesterday in New York, HP officially announced new networking  products and services this morning based on its HP FlexNetwork Architecture.

Bethany Mayer, senior VP and general manager of HP Networking, launched proceedings yesterday, explaining that changing and growing requirements, including a shift toward server-to-server traffic (“east-west” traffic flows driven by inexorable virtualization) and the need for greater bandwidth, are overwhelming today’s networks. Datacenter networks aren’t keeping pace, bandwidth capacity in branch offices isn’t where it needs to be, there is limited support for third-party virtualized appliances, and networks are straining to accommodate the proliferation of mobile devices.

Quoting numbers from the Dell’Oro Group, Mayer said HP continues to take market share from Cisco in switching, with HP gaining about 3.8 percent of share and Cisco dropping about 6.5 percent. What’s more, Mayer cited data from analyst firm Robert W. Baird indicating that 75 percent of enterprise-network purchase discussions involve HP. Apparently Baird also found that HP is influencing terms or winning deals about 33 percent of the time.

The Big Picture

Saar Gillai, vice president of HP’s Advanced Technology Group and CTO of HP Networking, followed with a presentation on HP Networking’s vision. Major trends he cited are virtualization, cloud computing, consumerization of IT, mobility, and unified communications. Challenges that accompany these trends include complexity, management, security, time to service, and cost.

In summary, Gillai said that the networks installed at customer sites today just weren’t designed to address the challenges they’re facing. To reinforce that point, Gillai provided a brief history of enterprise application delivery, taking us from the mainframes of the 1960s, through the client-server era and the Web-based applications of the 1990s, to today’s burgeoning cloud environments.

He explained that enterprise networks have evolved along with their application delivery models.  Before, they were relatively static (serving employees onsite, for the most part), with well-defined perimeters and applications that were limited qualitatively and quantitatively. Today, though, enterprise networks must accommodate not only connected employees, but also connected customers, partners, contractors, and suppliers. The perimeter is fragmented, the network distributed, the applications mobile (even in the data center with virtualization), client devices (such as smartphones and tablets) proliferating, and wireless LANs, the public cloud and the Internet also prominently in the picture.

Connecting Users to Services

What’s the right approach for networks to take? Gillai says HP is advancing toward delivering networks that focus on connecting users to the services they need rather than on managing infrastructure. HP’s vision of enterprise-network architecture conceives of a pool of virtualized resources where managing and provisioning are done.  This network has a top layer of management/provisioning, a layer below inhabited by a control plane, and then a layer below that one comprising physical network infrastructure. In that regard, Gillai drew an analogy with server virtualization, with the control plane functioning as an abstraction layer.

With talk of a management layer sitting above a control plane that rides atop physical infrastructure, the HP vision seems strikingly similar to the defining principles of software-defined networking as realized through the OpenFlow protocol.

OpenFlow: It’s About the Applications

On OpenFlow, however, Gillai was guardedly optimistic, if not a little ambiguous. While noting that HP has been an early proponent of OpenFlow and that the company sees promise in the technology, Gillai said OpenFlow’s success will be determined by the applications that run on it. HP is interested in those applications, but is less interested in the OpenFlow controller, which it does not see as a point of differentiation.

Gillai is of the opinion that the OpenFlow hype has moved considerably ahead of its current reality. He said OpenFlow, as a specific means of enabling software-defined networking, is evolutionary as opposed to revolutionary. He also said considerable work remains to be done before OpenFlow will be suitable for the enterprise market. Among the issues that need to be resolved, according to Gillai, is support for IPv6 and the “routing problem” of having a number of controllers communicate with each other.

On the Open Networking Foundation (ONF), the private non-profit organization whose first goal is to create a switching ecosystem to support the OpenFlow interface, Gillai suggested that the founding and board members — comprising Deutsche Telekom, Google, Microsoft, Facebook, Verizon, and Yahoo — have a clear vision of what they want OpenFlow to achieve.

“If the network could become programmable, their life will be great,” Gillai said of the ONF founders, all of whom are service providers with vast data centers.

Despite Gillai’s reservations about OpenFlow hype, he indicated that he believes “interesting applications” for it should begin emerging within the next 12 to 24 months. He also said that it “would not be big surprise” if HP were to leverage OpenFlow for forthcoming control-plane technology.

ToR Switch for the Data Center

As for the products and services announced, let’s begin in the data center, seen by all the major networking vendors as a lucrative growth market as well as a venue for increasingly intense competition.

HP FlexFabric solutions for the data center include the new 10-GbE HP 5900 top-of-rack (ToR) switch and the updated HP 12500 switch series.

HP says the new HP 5900 series of 10-GbE ToR switches provides up to 300 percent greater network scalability while reducing the number of logical devices in the server access layer by 50 percent, thereby decreasing total cost of ownership by 50 percent.

Lead Time and Changes to Product Naming

The switch is powered by the HP Intelligent Resilient Framework (IRF), which allows four HP 5900 switches to be virtualized so that they can operate as a single switch. The HP 5900 top-of-rack switch series is expected to be available in Q1 2012 in the United States with a starting list price of $38,000.

It bears noting that HP typically refrains from announcing switches this far ahead of their release date. That it has announced the HP 5900 ToR switch six months before it will ship would appear to suggest both that customers are clamoring for a ToR switch and that competitors have been exploiting the absence of such a switch in HP’s product portfolio. Although the 5900 isn’t ready to ship today, HP wants the world to know it’s coming soon.

HP says its HP 12500 switch series benefits from improved network resiliency and performance  as a result of  the addition of the updated HP IRF technology. The switch provides full IPv6 support, and HP says it doubles throughput and reduces network recovery time by more than 500 times. The HP 10500 campus core switch is available now worldwide starting at $38,000.

You might have noticed, incidentally, something different about the naming convention associated with the new HP switches. HP has decided that, as of now, its networking products will have numbers only, rather than alphabetical prefixes followed by numbers. This has been done to simplify matters, for HP and for its customers.

FlexCampus Moves 

On the campus front, new HP FlexCampus offerings include the HP 3800 stackable switches, which HP says provide up to 450 percent higher performance. HP also is offering a new reference architecture for campus environments that unifies wired and wireless networks to support mobility and high-bandwidth multimedia applications. The HP 3800 line of switches is available now worldwide starting at $4,969.

Although HP did not say it, at least one of its primary competitors has cited a lack of HP reference architectures for customers, particularly for campus environments. HP clearly is responding.

HP also unveiled virtualized services modules for the HP 5400zl and 8200zl switches, which it claims are the first in the industry to converge blade servers at the branch into a network infrastructure capable of hosting multiple applications and services. The company claims its HP Advanced Services zl Module with VMware vSphere 5 and HP Advanced Services zl Module with Citrix XenServer deliver a 57-percent cut in power consumption and a 43-percent reduction in space relative to competing products. Available now worldwide, the HP Advanced Services zl Module with VMware vSphere 5 (including support and subscription, 8GB of RAM) starts at $5,299. The HP Advanced Services zl Module with Citrix XenServer (including support and subscription, 4GB of RAM) starts at $4,499.

Emphasis on Simplicity and Evolution

HP also rolled out HP FlexManagement with integrated mobile network access control (NAC) in HP Intelligent Management Center (IMC) 5.1 to streamline enterprise access for mobile devices and to protect against mobile-application threats. HP Intelligent Management Center 5.1 is expected to be available in Q1 2012 with a list price of $6,995.

Also introduced are new services to facilitate migration to IPv6 and new financing to allow HP’s U.S.-based channel partners to lease HP Networking products as demonstration equipment.

Key words associated with this slate of HP Networking announcements were “evolutionary” and “simplification.” As the substance and tone of the announcements suggest, HP Networking is responding to its customers and partners — and also to its competitors — closing gaps in its portfolio and looking to position itself to achieve further market-share gains.

IBM Rumored to be in Acquisition Talks with Platform Computing

Yes, I’m writing another post with a connection to the Open Virtualization Alliance (OVA), though I assure you I have not embarked on an obsessive serialization. That might occur at a later date, most likely involving a different topic, but it’s not on the cards now.

As for this post, the connection to OVA is glancing and tangential, relating to a company that recently joined the association (then again, who hasn’t of late?), but really made its bones — and its money — with workload-management solutions for high-performance computing. Lately, the company in question has gone with the flow and recast itself as a purveyor of private cloud computing solutions. (Again, who hasn’t?)

Talks Relatively Advanced

Yes, we’re talking about Platform Computing, rumored by some dark denizens of the investment-banking community to be a takeover target of none other than IBM. Apparently, according to sources familiar with the situation (I’ve always wanted to use that phrase), the talks are relatively advanced. That said, a deal is not a deal until pen is put to paper.

IBM and Platform first crossed paths, and began working together, many years ago in the HPC community, so their relationship is not a new one. The two companies know each other well.

Rich Heritage in Batch, Workload Management

Platform Computing broadly focuses on two sets of solutions. Its legacy workload-management business is represented by Load Sharing Facility (LSF), now part of a cluster-management product portfolio that — like LSF in its good old days — is targeted squarely at the HPC world. With its rich heritage in batch applications, LSF also is part of Platform’s workload-management software for grid infrastructure.

Like so many others, Platform has refashioned itself as a cloud-computing provider. The company, and some of its customers, found that its core technologies could be adapted and repurposed for the ever-ambiguous private cloud.

Big Data, Too

Perhaps sensitive about being hit by charges of “cloud washing,” Platform contends that it offers “private cloud computing for the real world” through cloud bursting for HPC and private-cloud solutions for enterprise data centers. Not surprisingly given its history, Platform is most convincing and compelling when addressing the requirements of the HPC crowd.

That said, the company has jumped onto the Big Data bandwagon with gusto. It offers Platform MapReduce for vertical markets such as financial services (long a Platform vertical), telecommunications, government (fraud detection and cyber security, regulatory compliance, energy), life sciences, and retail.

Platform recently announced that its ISF, not to be confused with LSF, was recognized as a finalist in the “Private Cloud Computing” category for the 2011 Best of VMworld awards. And, of course, to bring this post full circle, Platform was one of 134 new members to join the aforementioned Open Virtualization Alliance (OVA).

OVA Members Hope to Close Ground

I discussed the fast-growing Open Virtualization Alliance (OVA) in a recent post about its primary objective, which is to commoditize VMware’s daunting market advantage. In catching up on my reading, I came across an excellent piece by InformationWeek’s Charles Babcock that puts the emergence of OVA into historical perspective.

As Babcock writes, the KVM-centric OVA might not have come into existence at all if an earlier alliance supporting another open-source hypervisor hadn’t foundered first. Quoting Babcock regarding OVA’s vanguard members:

Hewlett-Packard, IBM, Intel, AMD, Red Hat, SUSE, BMC, and CA Technologies are examples of the muscle supporting the alliance. As a matter of fact, the first five used to be big backers of the open source Xen hypervisor and Xen development project. Throw in the fact Novell was an early backer of Xen as the owner of SUSE, and you have six of the same suspects. What happened to support for Xen? For one, the company behind the project, XenSource, got acquired by Citrix. That took Xen out of the strictly open source camp and moved it several steps closer to the Microsoft camp, since Citrix and Microsoft have been close partners for over 20 years.

Xen is still open source code, but its backers found reasons (faster than you can say vMotion) to move on. The Open Virtualization Alliance still shares one thing in common with the Xen open source project. Both groups wish to slow VMware’s rapid advance.

Wary Eyes

Indeed, that is the goal. Most of the industry, with the notable exception of VMware’s parent EMC, is casting a wary eye at the virtualization juggernaut, wondering how far and wide its ambitions will extend and how they will impact the market.

As Babcock points out, however, by moving in mid race from one hypervisor horse (Xen) to another (KVM), the big backers of open-source virtualization might have surrendered insurmountable ground to VMware, and perhaps even to Microsoft. Much will depend on whether VMware abuses its market dominance, and whether Microsoft is successful with its mid-market virtualization push into its still-considerable Windows installed base.

Long Way to Go

Last but perhaps not least, KVM and the Open Virtualization Alliance (OVA) will have a say in the outcome. If OVA members wish to succeed, they’ll not only have to work exceptionally hard, but they’ll also have to work closely together.

Coming from behind is never easy, and, as Babcock contends, just trying to ride Linux’s coattails will not be enough. KVM will have to continue to define its own value proposition, and it will need all the marketing and technological support its marquee backers can deliver. One area of particular importance is operations management in the data center.

KVM’s market share, as reported by Gartner earlier this year, was less than one percent in server virtualization. It has a long way to go before it causes VMware’s executives any sleepless nights. That it wasn’t the first choice of its proponents, and that it has lost so much time and ground, doesn’t help the cause.