Nicira Downplays OpenFlow on Road to Network Virtualization

While recent discussions of software-defined networking (SDN) and network virtualization have focused nearly exclusively on the OpenFlow protocol, various parties are making the point that OpenFlow is just one facet of a bigger story.

One of those parties is Nicira Networks, which was treated to favorable coverage in the New York Times earlier today. In the article, the words “software-defined networking” and “OpenFlow” are conspicuous by their absence. Sure, the big-picture concept of software-defined networking hovers over proceedings, but Nicira takes pains to position itself as a purveyor of “network virtualization,” which is a neater, simpler concept for the broader technology market to grasp.

VMware of Networking

Indeed, leveraging the idea of network virtualization, Nicira positions itself as the VMware of networking, contending that it will resolve the problem of inflexible, inefficient, complex, and costly data-center networks with a network hypervisor that decouples network services from the underlying hardware. Nicira’s goal, then, is to be the first vendor to bring network virtualization up to speed with server and storage virtualization.  

GigaOM’s Stacey Higginbotham takes issue with the New York Times article and with Nicira’s claims relating to its putatively peerless place in the networking firmament. Writes Higginbotham: 

“The article . . . .  does a disservice to the companies pursing network virtualization by conflating the idea of flexible and programmable networks with Nicira becoming “to networking something like what VMWare was to computer servers.” This is a nice trick for the lay audience, but unlike server virtualization, which VMware did pioneer and then control, network virtualization currently has a variety of vendors pushing solutions that range from being tied to the hardware layer (hello, Juniper and Xsigo) to the software (Embrane and Nicira). In addition to there being multiple companies pushing their own standards, there’s an open source effort to set the building blocks and standards in place to create virtualized networks.”

The ONF Factor

The open-source effort in question is the Open Networking Foundation (ONF), which is promulgating OpenFlow as the protocol by which software-defined networking will be attained. I have written about OpenFlow and the ONF previously, and will have more to say on both shortly. Recently, I also recounted HP’s position on OpenFlow.
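For readers who have not looked closely at OpenFlow, the core idea is that a central controller computes forwarding policy and pushes it to switches as flow entries, each a set of match fields paired with actions, rather than having every switch work out its own forwarding state. The toy Python sketch below is purely illustrative; the class names and fields are invented for the example and are not the OpenFlow wire format or any vendor’s API, but they convey the shape of the abstraction.

```python
from dataclasses import dataclass, field

# Purely illustrative: these classes mimic the *shape* of an OpenFlow-style
# flow entry (match fields plus actions). They are not the OpenFlow wire
# format and not any vendor's API.

@dataclass
class FlowEntry:
    match: dict        # e.g. {"in_port": 1, "ip_dst": "10.0.0.5"}
    actions: list      # e.g. ["set_vlan:100", "output:2"]
    priority: int = 100

@dataclass
class Switch:
    name: str
    flow_table: list = field(default_factory=list)

    def install(self, entry: FlowEntry) -> None:
        # A real controller would push this over a southbound protocol;
        # here we simply append to an in-memory table.
        self.flow_table.append(entry)
        self.flow_table.sort(key=lambda e: e.priority, reverse=True)

    def lookup(self, packet: dict) -> list:
        # Return the actions of the highest-priority matching entry.
        for entry in self.flow_table:
            if all(packet.get(k) == v for k, v in entry.match.items()):
                return entry.actions
        return ["drop"]  # table miss

# The controller holds the policy; the switch merely executes it.
edge = Switch("edge-1")
edge.install(FlowEntry(match={"ip_dst": "10.0.0.5"}, actions=["output:2"]))

print(edge.lookup({"in_port": 1, "ip_dst": "10.0.0.5"}))  # ['output:2']
print(edge.lookup({"in_port": 1, "ip_dst": "10.0.0.9"}))  # ['drop']
```

Everything interesting happens in the software that decides which entries to install, which is exactly where vendors such as Nicira want to add their value.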

Nicira says nothing about OpenFlow, which suggests the company is playing down the protocol or might be going in a different direction to realize its vision of network virtualization. As has been noted, there’s more than one road to software-defined networking, even though OpenFlow is the path that has been most heavily traveled thus far by industry notables, including the six major service providers that are the ONF’s founding board members (Google, Deutsche Telekom, Verizon, Microsoft, Facebook, and Yahoo).

Then again, you will find Nicira Networks among the ONF’s membership, along with a number of other established and nascent networking vendors. Nicira sees a role for OpenFlow, then, though it clearly wants to put the emphasis on its own software and the applications and services that it enables. There’s nothing wrong with that. In fact, it’s a perfectly sensible strategy for a vendor to pursue.

Tension Between Vendors and Service Providers

Alan S. Cohen, a recent addition to the Nicira team, put it into pithy perspective on his personal blog, where he wrote about why he joined Nicira and why the network will be virtualized. Wrote Cohen:

“Virtualization and the cloud is the most profound change in information technology since client-server and the web overtook mainframes and mini computers.  We believe the full promise of virtualization and the cloud can only be fully realized when the network enables rather than hinders this movement.  That is why it needs to be virtualized.

Oh, by the way, OpenFlow is a really small part of the story.  If people think the big shift in networking is simply about OpenFlow, well, they don’t get it.”

So, the big service providers might see OpenFlow as a nifty mechanism that will allow them to reduce their capital expenditures on high-margin networking gear while also lowering their operational expenditures on network management. But the networking vendors — neophytes and veterans alike — still need to provide value (and derive commensurate margins) above and beyond what OpenFlow itself delivers.

Bit-Business Crackup

I have been getting broadband Internet access from the same service provider for a long time. Earlier this year, my cable MSO got increasingly aggressive about a “usage-based billing” model that capped bandwidth use and added charges for “overage,” meaning any use beyond one’s bandwidth cap. Exceed the cap, and you are charged extra — potentially a lot extra.
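To make the arithmetic concrete, here is a minimal sketch of how such a bill is computed. The cap, base price, and per-gigabyte overage rate are hypothetical numbers of my own choosing, not my provider’s actual rates.

```python
# Hypothetical usage-based billing: the numbers below are illustrative,
# not any provider's actual rates.
BASE_PRICE = 60.00      # monthly price for the tier, in dollars
CAP_GB = 125            # bandwidth cap for the tier, in gigabytes
OVERAGE_PER_GB = 2.00   # charge for each gigabyte beyond the cap

def monthly_bill(usage_gb: float) -> float:
    """Return the month's bill for a given usage, in dollars."""
    overage_gb = max(0.0, usage_gb - CAP_GB)
    return BASE_PRICE + overage_gb * OVERAGE_PER_GB

for usage in (100, 125, 175, 300):
    print(f"{usage:>3} GB -> ${monthly_bill(usage):.2f}")
# 100 GB -> $60.00, 175 GB -> $160.00, 300 GB -> $410.00
```

Under numbers like these, a heavy month does not just nudge the bill upward; it multiplies it.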

On the surface, one might suppose the service provider’s intention is to bump subscribers up to the highest bandwidth tiers. That’s definitely part of the intent, but there’s something else afoot, too.

Changed Picture

I believe my experience illustrates a broader trend, so allow me to elaborate. My family and I reached the highest tier under the service provider’s usage-based-billing model. Even at the highest tier, though, we found the bandwidth cap stingy and restrictive. Consequently, rather than pay exorbitant overages or ration bandwidth as if it were water during a drought, we decided to look for another service provider.

Having made our decision, I expected my current service provider to attempt to keep our business. That didn’t happen. We told the service provider why we were leaving — the caps and surcharges were inhibiting our use of the Internet — and then set a date when service would officially be discontinued. That was it. There was no resistance, no counteroffer, no proposed discount, no meaningful attempt to keep us as subscribers.

That sequence of events, and particularly that final uneventful interaction with the service provider, made me think about the bigger picture in the service-provider world. For years, the assumption of telecommunications-equipment vendors has been that rising bandwidth tides would lift all boats.  According to this line of reasoning, as long as consumers and businesses devoured more Internet bandwidth, network-equipment vendors would benefit from steadily increasing service-provider demand. That was true in the past, but the picture has changed.

Paradoxical Service

It’s easy to understand why the shift has occurred. Tom Nolle, president of CIMI Corp., has explained the phenomenon cogently and repeatedly over at his blog. Basically, it all comes down to service-provider monetization, which ultimately depends on revenue generation.

Service providers can boost revenue in two basic ways: They can charge more for existing services, or they can develop and introduce new services. In most of the developed world, broadband Internet access is a saturated market. There’s negligible growth to be had. To make matters worse, at least from the service-provider perspective, broadband subscribers are resistant to paying higher prices, especially as punishing macroeconomic conditions put the squeeze on budgets.

Service providers have resorted to usage-based billing, with its associated tiers and caps, but there’s a limit to how much additional revenue they can squeeze from hard-pressed subscribers, many of whom will leave (as I did) when they get fed up with metering, with overage charges, and with the paradox of service providers that discourage their subscribers from actually using the service they sell.

The Problem with Bandwidth

The twist to this story — and one that tells you quite a bit about the state of the industry — is that service providers are content to let disaffected subscribers take their business elsewhere. For service providers, the narrowing profit margins on delivering ever more Internet bandwidth no longer justify the rising capital expenditures (and, to a lesser extent, the growing operating costs) of scaling network infrastructure to meet demand.

So, as Nolle points out, the assumption that increasing bandwidth consumption will necessarily drive network-infrastructure spending at service providers is no longer tenable. Quoting Nolle:

 “We’re seeing a fundamental problem with bandwidth economics.  Bits are less profitable every year, and people want more of them.  There’s no way that’s a temporary problem; something has to give, and it’s capex.  In wireline, where margins have been thinning for a longer period and where pricing issues are most profound, operators have already lowered capex year over year.  In mobile, where profits can still be had, they’re investing.  But smartphones and tablets are converting mobile services into wireline, from a bandwidth-economics perspective.  There is no question that over time mobile will go the same way.  In fact, it’s already doing that.

To halt the slide in revenue per bit, operators would have to impose usage pricing tiers that would radically reduce incentive to consume content.  If push comes to shove, that’s what they’ll do.  To compensate for the slide, they can take steps to manage costs but most of all they can create new sources of revenue.  That’s what all this service-layer stuff is about, of course.”
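Nolle’s point about revenue per bit is easy to see with a back-of-the-envelope calculation. The figures below are invented purely for illustration, roughly flat subscriber revenue against traffic growing by about a third each year, but the shape of the curve is what matters.

```python
# Back-of-the-envelope illustration of declining revenue per bit.
# All numbers are hypothetical: flat subscriber revenue, traffic
# growing ~35% a year.
revenue_per_month = 60.0   # dollars per subscriber, assumed flat
traffic_gb = 50.0          # GB per subscriber per month, year 0
traffic_growth = 1.35      # assumed annual traffic growth

for year in range(5):
    dollars_per_gb = revenue_per_month / traffic_gb
    print(f"year {year}: {traffic_gb:6.1f} GB/mo -> ${dollars_per_gb:.2f} per GB")
    traffic_gb *= traffic_growth
# Revenue per delivered gigabyte falls by roughly a quarter each year
# even though the subscriber pays the same bill.
```

Flat revenue divided by rapidly growing traffic is a ratio that only moves one way, and, as Nolle says, capex eventually follows it down.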

Significant Implications

We’re already seeing usage-pricing tiers here in Canada, and I have a feeling they’ll be coming to a service provider near you.

Yes, alternative service providers will take up (and are taking up) the slack. They’ll be content, for now, with bandwidth-related profit margins lower than those the big players would find attractive. But they’ll also be looking to buy and run infrastructure at lower prices and costs than the incumbent service providers, who, as Nolle says, are increasingly turning their attention to new revenue-generating services and away from “less profitable bits.”

This phenomenon has significant implications for consumers of bandwidth, for service providers who purvey that bandwidth, for network-equipment vendors that provide gear to help service providers deliver bandwidth, and for market analysts and investors trying to understand a world they thought they knew.