Cisco’s Tandberg Acquisition Officially Approved, Dance for Polycom Begins

When I first learned of the acquisitive interest Apax Partners was said to have expressed toward Polycom, I dismissed it as nothing more than a media head fake.

Let’s consider: When news of that sort is leaked, it’s made public for a reason. In this context, it seemed, the reason was to bring others to the table. Somebody who has an interest in Polycom being acquired wanted to engender a bidding war for the company. It happens all the time.

There was something else, too. Apax didn’t seem a likely acquirer. Where were the direct synergies with Polycom in Apax’s investment portfolio? Where were the connections between Apax’s people and major vendors in the videoconferencing and unified-communications worlds? The deal didn’t offer enough risk mitigation for Apax; the pieces didn’t fit together.

Even if Apax had wanted to acquire Polycom, I’m not sure it had the conviction or the stomach to conclude the deal at the price Polycom would have commanded.

Now, though, Cisco’s acquisition of Tandberg has been consummated, and Polycom stands exposed. Polycom was Tandberg’s videoconferencing rival, and it’s a company of considerable importance to the UC strategies of more than one vendor.

We must consider the Cisco-Tandberg context, because contrivances like the leaked report of Apax’s interest in Polycom tend not to occur in a vacuum. Who’s supposed to step from the shadows and make a welcome bid, at an appetizing price, for Polycom?

There are a few candidates, including one that already has tipped its hand. That player is The Gores Group, 51-percent owner of Siemens Enterprise Communications. But The Gores Group’s bid was leaked, too, and we have to wonder why. Expect others to enter the picture, publicly or otherwise.

An obvious candidate is Avaya. Even though Avaya has barely digested its acquisition of Nortel’s enterprise business, it might feel as though it cannot let Polycom fall into other hands. In a perfect world, Avaya would not have to pursue Polycom now, while it is still assimilating and integrating Nortel.

Nonetheless, strategic imperatives might necessitate a move. Avaya is backed by the high rollers at Silver Lake, who rarely think small. They might not be willing to pass up the opportunity of taking Polycom off the board.

Who else? Not Dell. I can’t see it happening.

I don’t think HP will make the move, either. It’s got its own telepresence systems already, it’s very close to Microsoft in unified communications, and it wants to leverage Microsoft in the battle against their common enemy, Cisco.

Juniper is a possibility, but the company has signaled that it will grow organically, not through big-ticket M&A. Juniper will stay focused on building its intelligent network infrastructure and try not to get distracted by the action in the M&A casino.

IBM could make a move for Polycom, but I don’t think it will. Microsoft also enters the equation.

Yes, Polycom sells hardware, and Microsoft has steered clear of stepping on the toes of hardware partners such as HP. But there’s a way Microsoft could structure a deal that would be amenable to HP and its other hardware partners. All it takes is a little creativity and ingenuity, and Microsoft retains plenty of both on the enterprise side of its business.

If I were making book on which company will acquire Polycom, I’d make Silver Lake-backed Avaya the favorite, with Gores-backed Siemens Enterprise Communications the second choice, Microsoft the third option, and IBM next. Of course, in no way do I encourage illicit gambling on prospective M&A activity.

If you have a theory on whether Polycom will be acquired, and by whom, feel free to share your thoughts below.

The Long, Winding Road to Application-Intelligent Networks

It seems that we’ve been talking about application-aware networking for a long time. I admit that I was talking about it more than a decade ago, back when I initiated a technology partnership between my employer at the time, a workload-management vendor, and a network colossus (a previous employer of mine) that shall also remain nameless.

My brainwave — if I might beg your kind indulgence — was that the availability of applications and services over the Internet could be greatly enhanced if one could somehow feed upper-layer intelligence about content and applications down to the switches and routers that were making load-balancing decisions at the network and transport layers.
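
To make the idea concrete, here is a minimal, purely illustrative Python sketch of that contrast; the backend names, the metrics, and the queue-depth heuristic are all invented, not a description of what we actually built. A plain transport-layer balancer rotates blindly through a pool, while an application-aware one consults upper-layer state (application health, queue depth) pushed down from a workload-management agent:

```python
from itertools import cycle

# Hypothetical backend pool; in a real deployment these would sit
# behind the switch or router making the balancing decision.
BACKENDS = ["app-server-1", "app-server-2", "app-server-3"]
_round_robin = cycle(BACKENDS)

# Upper-layer "intelligence" pushed down from a workload-management
# agent: per-backend application health and current queue depth.
app_metrics = {
    "app-server-1": {"healthy": True, "queue_depth": 12},
    "app-server-2": {"healthy": True, "queue_depth": 3},
    "app-server-3": {"healthy": False, "queue_depth": 0},
}

def pick_backend_l4():
    """Transport-layer choice: rotate through the pool, blind to
    what the application is actually doing."""
    return next(_round_robin)

def pick_backend_app_aware():
    """Application-aware choice: skip unhealthy instances and prefer
    the least-loaded one, falling back to L4 if nothing is healthy."""
    healthy = [name for name, m in app_metrics.items() if m["healthy"]]
    if not healthy:
        return pick_backend_l4()
    return min(healthy, key=lambda n: app_metrics[n]["queue_depth"])

print(pick_backend_l4())         # app-server-1
print(pick_backend_app_aware())  # app-server-2: healthy, shortest queue
```

The mechanics are trivial; as the rest of this story suggests, the hard part was getting two companies to agree on what those upper-layer signals meant.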

This was back in 1998, so it was considered heady stuff. The partnership was necessary because my employer at the time knew a lot about the characteristics and real-time behavior of applications but knew little about the network; whereas the network-infrastructure vendor knew all about the network but not nearly enough about the applications it supported.

Because I had worked for both companies, I understood how they could combine forces to create an application-intelligent infrastructure that would provide unprecedented availability over and across the Internet. In theory, everything should have worked without a hitch. In theory.

The real challenge was getting the two companies to understand each other. It wasn’t a clash of corporate cultures so much as a failure to speak the same language. One focused on how and where applications were processed, and the other focused on the pipes that stitched everything together. They had different assumptions, used different vocabularies, and they viewed the world — the data center, anyway — from different perspectives. Each had difficulty understanding what the other was saying. For a long time, I felt like an interpreter in the diplomatic service.

You’d think by now these problems would be behind us, right? You’d think that in an era when networking titans are buying videoconferencing vendors, computer vendors are buying networking vendors, and everybody seemingly has a strategy for the converged data center, the networking world and the computing world would speak the same language, or at least have an implicit understanding of what the other side is saying. Apparently, though, the problem persists. The chasm between the two worlds might not be as vast as it once was, but a schism still precludes a meaningful mind meld.

I reached that sad conclusion after reading a blog post by Lori MacVittie at F5’s DevCentral. In her post, she recounts finding an article that promised to explore the profound smartness of application-aware networking. Instead of reading about a network capable of understanding and dynamically supporting an application’s data and behavior, she read a piece that talked about “container concerns” such as CPU, RAM, and memory, with a dollop of 10-GbE networking tossed into the mix for good measure. As she writes:

Application-awareness is more than just CPU and RAM and network bandwidth. That’s just one small piece of the larger contextual pie. It’s about the user’s environment and the network and the container and the application and the individual request being made. At the moment it’s made. Not based on historical trends.
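
Her point lends itself to a small sketch. The Python fragment below — every field, pool name, and threshold is hypothetical, invented only for illustration — makes a routing decision per request, consulting the user’s environment, the container, and the application together at the moment the request arrives, rather than a historical trend line:

```python
from dataclasses import dataclass

@dataclass
class RequestContext:
    """Hypothetical snapshot of the contextual pie at the moment a
    single request arrives (all fields are illustrative)."""
    client_network: str       # user's environment, e.g. "mobile-3g"
    payload_kind: str         # the individual request, e.g. "video-stream"
    container_cpu_pct: float  # container concern: CPU right now
    app_latency_ms: float     # application concern: latency right now

def route(ctx: RequestContext) -> str:
    # Decide per request, from live context, not historical trends.
    if ctx.payload_kind == "video-stream" and ctx.client_network == "mobile-3g":
        return "transcoding-pool"   # shaped for the user's environment
    if ctx.app_latency_ms > 200 or ctx.container_cpu_pct > 90:
        return "overflow-pool"      # app and container consulted together
    return "default-pool"

print(route(RequestContext("mobile-3g", "video-stream", 40.0, 80.0)))
# -> transcoding-pool
```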

So, the technology isn’t the problem. The defining concept of application-aware networking has been with us for some time, and the technologies clearly exist to facilitate its widespread deployment. What’s preventing it from coming together is the balkanized thinking and — why not say it? — the entrenched politics of the traditional data center and its vendor ecosystem.

We’ll get there, though. The wheels are in motion, and the destination is in sight. The trip is just taking longer than some of us thought it would.

Lesson learned: Never underestimate institutional resistance to change that is seen to threaten an established order. Keep that lesson in mind as you consider all those rosy near-term projections for cloud computing.