Category Archives: ISPs

For Huawei and ZTE, Suspicions Persist

About two weeks ago, the U.S. House Permanent Select Committee on Intelligence held a hearing on “the national-security threats posed by Chinese telecom companies doing business in the United States.” The Chinese telecom companies called to account were Huawei and ZTE, each of which is keen to expand its market reach into the United States.

It is difficult to know what to believe when it comes to the charges leveled against Huawei and ZTE. The accusations against the companies, which involve their alleged capacity to conduct electronic espionage for China and their relationships with China’s government, are serious and plausible but also largely unproven.

Frustrated Ambitions

One would hope these questions could be settled definitively and expeditiously, but this inquiry looks to be a marathon rather than a sprint. Huawei and ZTE want to expand in the U.S. market, but their ambitions are thwarted by government concerns about national security.  As long as the concerns remain — and they show no signs of dissipating soon — the two Chinese technology companies face limited horizons in America.

Elsewhere, too, questions have been raised. Although Huawei recently announced a significant expansion in Britain, which received the endorsement of the government there, it was excluded from participating in Australia’s National Broadband Network (NBN). The company also is facing increased suspicion in India and in Canada, countries in which it already has made inroads.

Vehement Denials 

Huawei and ZTE say they’re facing discrimination and protectionism in the U.S.  Both seek to become bigger players globally in smartphones, and Huawei has its sights set on becoming a major force in enterprise networking and telepresence.

Obviously, Huawei and ZTE deny the allegations. Huawei has said it would be self-destructive for the company to function as an agent or proxy of Chinese-government espionage. Huawei SVP Charles Ding, as quoted in a post published on the Forbes website, had this to say:

 As a global company that earns a large part of its revenue from markets outside of China, we know that any improper behaviour would blemish our reputation, would have an adverse effect in the global market, and ultimately would strike a fatal blow to the company’s business operations. Our customers throughout the world trust Huawei. We will never do anything that undermines that trust. It would be immensely foolish for Huawei to risk involvement in national security or economic espionage.

Let me be clear – Huawei has not and will not jeopardise our global commercial success nor the integrity of our customers’ networks for any third party, government or otherwise. Ever.

A Telco Legacy 

Still, questions persist, perhaps because Western countries know, from their own experience, that telecommunications equipment and networks can be invaluable vectors for surveillance and intelligence-gathering activities. As Jim Armitage wrote in The Independent, telcos in Europe and the United States have been tapped repeatedly for skullduggery and eavesdropping.

In one instance, involving the tapping  of 100 mobile phones belonging to Greek politicians and senior civil servants in 2004 and 2005, a Vodafone executive was found dead of an apparent suicide. In another case, a former head of security at Telecom Italia fell off a Naples motorway bridge to his death in 2006 after discovering the illegal wiretapping of 5,000 Italian journalists, politicians, magistrates, and — yes — soccer players.

No question, there’s a long history of telco networks and the gear that runs them being exploited for “spookery” (my neologism of the day) gone wild. That historical context might explain at least some of the acute and ongoing suspicion directed at Chinese telco-gear vendors by U.S. authorities and politicians.

Inevitability of Virtualized Infrastructure

As a previous post, Infrastructure Virtualization Versus Converged Infrastructure, attests, I strongly believe that virtualization is leading us to a future in which underlying hardware becomes largely undifferentiated and interchangeable. Applications and orchestration will reside in software riding atop the virtualization layer, which effectively will function as an abstraction buffer above hardware infrastructure.  The latter will eventually include hardware for compute, networking, and storage.

Vendors that ride hardware-based business models will have trouble adapting to this new reality. Many of these companies have hordes of software developers and software engineers, but they inextricably intertwine their software and hardware as a matter of business practice, selling the latter as proprietary boxes that often cannot interoperate with, or be swapped out for, competing hardware. It’s classic hardware-based vendor lock-in, and it’s been with us for many years. This applies to vendors that sell all three main types of hardware infrastructure, and to those that sell them tied together as converged infrastructure.

Loosening a Tenacious Grip

Proprietary data-center hardware would appear to be running on borrowed time, though it will not disappear overnight. Its grip will be especially tenacious in the enterprise, though the pull of the cloud eventually will weaken its hold. Proprietary compute infrastructure will be the first to succumb, but networking and storage will fall, too. The economic and operational logic powering the transition is inexorable, so it’s a question of when, not whether, it will happen.

While CapEx savings are an obvious benefit, operational flexibility (shifting workloads with agility and less effort) and OpEx savings also are factors. Infrastructure hardware will be cheaper, as well as easier and less costly to run. Pools of industry-standard hardware will be reallocated on demand to serve the needs of application workloads. Data-center customers no longer will be constrained by the hardware-release schedules of their previous vendors of choice. Customers also will be able to take advantage of the latest industry-standard chipsets, which will power hardware with improved energy efficiency and better cooling characteristics.

In servers, and now in storage, Facebook’s Open Compute Project (OCP) has sought to expedite the move to off-the-shelf hardware. Last week at OSCON, Frank Frankovsky, a vice president at Facebook and the chairman and president of the OCP, rallied the open-source troops by arguing that proprietary x86 systems are “gratuitously differentiated.” He called for all hardware-design specifications to be open.

OCP as Competitive Cudgel

That would benefit Facebook, which launched OCP as a vehicle to help it lower data-center CapEx and OpEx, boost operational flexibility, and — last but not least — mitigate a competitive advantage held by Google, which had a massive head start in rationalizing and fine-tuning its data centers and IT infrastructure. In fact, Google cloaks its IT operations in extreme secrecy, believing that its practices and technologies deliver substantial competitive advantage over its main rivals, including Facebook. The latter must agree, because the animating idea behind Open Compute is to create a market, demand and supply, for commodity server hardware that will reduce or eliminate Google’s edge.

Some have wondered why Google hasn’t joined OCP, but the answer should be obvious. Google believes it has cracked the infrastructure code, and it is therefore disinclined to share its insights and best practices with its competitors. Google isn’t a fan of proprietary vanity hardware — it’s been designing its own gear, then going to server and network ODMs, for some time now — but Google feels it has nothing to gain, and much to lose, from opening its kimono to the OCP crowd.

With networking, though, Google felt it needed a little help from its friends — as well as from its enemies. That explains why it allied with Facebook and other cloud-service providers in the Open Networking Foundation (ONF), which I have written about here on many occasions. The goal of the ONF, as with OCP, is to slip the proprietary shackles of hardware vendors, whose gear functions as an impediment to operational agility as well as a cost that could be reduced through SDN-style network virtualization. Google’s communitarian approach to addressing the network-virtualization riddle suggests that it believes it cannot achieve the desired outcome on its own.

Cracking the Nut

Whereas compute hardware was well on its way to standardization, networking hardware, until the ONF, was akin to a vertically integrated mainframe system, replete with a proliferating number of both proprietary and industry-standard protocols. Networking is a bigger, and tougher, nut to crack.

But crack it will, first at the big cloud-service providers, then, as the cloud gains momentum, at enterprises.

PS: I will post something tomorrow about VMware’s just-announced acquisition of Nicira, which is big news no matter how you slice it.  I wrote the above post before I learned of the acquisition.

Direct from ODMs: The Hardware Complement to SDN

Subsequent to my return from Network Field Day 3, I read an interesting article published by Wired that dealt with the Internet giants’ shift toward buying networking gear from original design manufacturers (ODMs) rather than from brand-name OEMs such as Cisco, HP Networking, Juniper, and Dell’s Force10 Networks.

The development isn’t new — Andrew Schmitt, now an analyst at Infonetics, wrote about Google designing its own 10-GbE switches a few years ago — but the story confirmed that the trend is gaining momentum and drawing a crowd, which includes brokers and custom suppliers as well as increasing numbers of buyers.

In the Wired article, Google, Microsoft, Amazon, and Facebook were explicitly cited as web giants buying their switches directly from ODMs based in Taiwan and China. These same buyers previously procured their servers directly from ODMs, circumventing brand-name server vendors such as HP and Dell.  What they’re now doing with networking hardware, then, is a variation on an established theme.

The ONF Connection

Just as with servers, the web titans have their reasons for going directly to ODMs for their networking hardware. Sometimes they want a simpler switch than the brand-name networking vendors offer, and sometimes they want certain functionality that networking vendors do not provide in their commercial products. Most often, though, they’re looking for cheap commodity switches based on merchant silicon, which has become more than capable of handling the requirements the big service providers have in mind.

Software is part of the picture, too, but the Wired story didn’t touch on it. Look at the names of the Internet companies that have gone shopping for ODM switches: Google, Microsoft, Facebook, and Amazon.

What do those companies have in common besides their status as Internet giants and their purchases of copious amounts of networking gear? Yes, it’s true that they’re also cloud service providers. But there’s something else, too.

With the exception of Amazon, the other three are board members in good standing of the Open Networking Foundation (ONF). What’s more,  even though Amazon is not an ONF board member (or even a member), it shares the ONF’s philosophical outlook in relation to making networking infrastructure more flexible and responsive, less complex and costly, and generally getting it out of the way of critical data-center processes.

Pica8 and Cumulus

So, yes, software-defined networking (SDN) is the software complement to cloud-service providers’ direct procurement of networking hardware from ODMs.  In the ONF’s conception of SDN, the server-based controller maps application-driven traffic flows to switches running OpenFlow or some other mechanism that provides interaction between the controller and the switch. Therefore, switches for SDN environments don’t need to be as smart as conventional “vertically integrated” switches that combine packet forwarding and the control plane in the same box.
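
To make that division of labor concrete, here is a deliberately toy sketch in Python. It is not OpenFlow; the class and method names are hypothetical, and a real controller and switch would exchange OpenFlow (or similar) messages over the network. The point is simply that once forwarding policy lives in a central, server-based controller, the switch can be reduced to a table-lookup box.

```python
# Toy illustration of the SDN control/data-plane split (hypothetical names,
# not OpenFlow itself): the controller decides, the switch merely looks up.

class Switch:
    """A 'dumb' forwarding element that only consults a table pushed to it."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # (src, dst) -> action, installed by the controller

    def install_flow(self, match, action):
        self.flow_table[match] = action

    def forward(self, packet):
        # Unknown flows would be punted to the controller in a real system.
        return self.flow_table.get((packet["src"], packet["dst"]),
                                   "send_to_controller")


class Controller:
    """Centralized control plane: makes path decisions, pushes flow entries."""
    def __init__(self, switches):
        self.switches = switches

    def place_flow(self, src, dst, out_port):
        # An application-driven policy decision made once, centrally.
        for sw in self.switches:
            sw.install_flow((src, dst), f"output:{out_port}")


if __name__ == "__main__":
    tor = Switch("tor-1")
    ctrl = Controller([tor])
    ctrl.place_flow("10.0.0.1", "10.0.0.2", out_port=7)
    print(tor.forward({"src": "10.0.0.1", "dst": "10.0.0.2"}))  # output:7
    print(tor.forward({"src": "10.0.0.9", "dst": "10.0.0.2"}))  # send_to_controller
```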

This isn’t just guesswork on my part. Two companies are cited in the Wired article as “brokers” and “arms dealers” between switch buyers and ODM suppliers. Pica8 is one, and Cumulus Networks is the other.

If you visit the Pica8 website, you’ll see that the company’s goal is “to commoditize the network industry and to make the network platforms easy to program, robust to operate, and low-cost to procure.” The company says it is “committed to providing high-quality open software with commoditized switches to break the current performance/price barrier of the network industry.” The company’s latest switch, the Pronto 3920, uses Broadcom’s Trident+ chipset, which Pica8 says can be found in other ToR switches, including the Cisco Nexus 3064, Force10 S4810, IBM G8264, Arista 7050S, and Juniper QFX3500.

That “high-quality open software” to which Pica8 refers? It features XORP open-source routing code, support for Open vSwitch and OpenFlow, and Linux. Pica8 also is a relatively longstanding member of ONF.

Hardware and Software Pedigrees

Cumulus Networks is the other switch arms dealer mentioned in the Wired article. There hasn’t been much public disclosure about Cumulus, and there isn’t much to see on the company’s website. From background information on the professional pasts of the company’s six principals, though, a picture emerges of a company that would be capable of putting together bespoke switch offerings, sourced directly from ODMs, much like those Pica8 delivers.

The co-founders of Cumulus are J.R. Rivers, quoted extensively in the Wired article, and Nolan Leake. A perusal of their LinkedIn profiles reveals that both describe Cumulus as “satisfying the networking needs of large Internet service clusters with high-performance, cost-effective networking equipment.”

Both men also worked at Cisco spin-in venture Nuova Systems, where Rivers served as vice president of systems architecture and Leake served in the “Office of the CTO.” Rivers has a hardware heritage, whereas Leake has a software background, beginning his career building a Java IDE and working in senior positions at VMware and 3Leaf Systems before joining Nuova.

Some of you might recall that 3Leaf’s assets were nearly acquired by Huawei, before the Chinese networking company withdrew its offer after meeting with strenuous objections from the Committee on Foreign Investment in the United States (CFIUS). It was just the latest setback for Huawei in its recurring and unsuccessful attempts to acquire American assets. 3Com, anyone?

For the record, Leake’s LinkedIn profile shows that his work at 3Leaf entailed leading “the development of a distributed virtual machine monitor that leveraged a ccNUMA ASIC to run multiple large (many-core) single system image OSes on a Infiniband-connected cluster of commodity x86 nodes.”

For Companies Not Named Google

Also at Cumulus is Shrijeet Mukherjee, who serves as the startup company’s vice president of software engineering. He was at Nuova, too, and worked at Cisco right up until early this year. At Cisco, Mukherjee focused on “virtualization-acceleration technologies, low-latency Ethernet solutions, Fibre Channel over Ethernet (FCoE), virtual switching, and data center networking technologies.” He boasts of having led the team that delivered the Cisco Virtualized Interface Card (vNIC) for the UCS server platform.

Another Nuova alumnus at Cumulus is Scott Feldman, who was employed at Cisco until May of last year. Among other projects, he served in a leading role on development of “Linux/ESX drivers for Cisco’s UCS vNIC.” (Do all these former Nuova guys at Cumulus realize that Cisco reportedly is offering big-bucks inducements to those who join its latest spin-in venture, Insieme?)

Before moving to Nuova and then to Cisco, J.R. Rivers was involved with Google’s in-house switch design. In the Wired article, Rivers explains the rationale behind Google’s switch design and the company’s evolving relationship with ODMs. Google originally bought switches designed by the ODMs, but now it designs its own switches and has the ODMs manufacture them to its specifications, similar to how Apple designs its iPads and iPhones, then contracts with Foxconn for assembly.

Rivers notes, not without reason, that Google is an unusual company. It can easily design its own switches, but other service providers possess neither the engineering expertise nor the desire to pursue that option. Nonetheless, they still might want the cost savings that accrue from buying bare-bones switches directly from an ODM. This is the market Cumulus wishes to serve.

Enterprise/Cloud-Service Provider Split

Quoting Rivers from the Wired story:

“We’ve been working for the last year on opening up a supply chain for traditional ODMs who want to sell the hardware on the open market for whoever wants to buy. For the buyers, there can be some very meaningful cost savings. Companies like Cisco and Force10 are just buying from these same ODMs and marking things up. Now, you can go directly to the people who manufacture it.”

It has appeal, but only for large service providers, and perhaps also for very large companies that run prodigious server farms, such as some financial-services concerns. There’s no imminent danger of irrelevance for Cisco, Juniper, HP, or Dell, who still have the vast enterprise market and even many service providers to serve.

But this is a trend worth watching, illustrating the growing chasm between the DIY hardware and software mentality of the biggest cloud shops and the more conventional approach to networking taken by enterprises.

Vello Attempts SDN Makeover

Like a wide-area networking phoenix, Vello Systems rose from the ashes of OpVista, a purveyor of optical-transport systems for cable and telco customers. Leveraging its past in pursuit of a brighter future, Vello now has recast itself as an emergent player in software-defined networking (SDN) and network virtualization.

As one might expect from a company whose roots are in optical transport, Vello is a company in transition, shifting focus and resources from a market opportunity that never fulfilled its promise to one whose promise remains undimmed. Indeed, the company apparently has ample resources at its disposal, with Dow Jones Newswires reporting this past October (I hope DJN doesn’t block access to that URL) that Vello had raised $25 million to continue its transformation from an optical-transport, carrier-oriented vendor to one focused on cloud data centers at enterprises and service providers.

Gridless Optics and “Cloud Switching”

Vello now sells two WAN boxes — it calls them the “CX family of cloud infrastructure systems” — both of which are managed by the CloudMaster software suite.  The CX family comprises the 17-slot, 14-RU CX16000 and the 5-slot, 3-RU CX4000. Vello says the systems are designed to expand 10-gigabit fiber networks to terabits of virtualized capacity, a result the company says is achieved through a combination of gridless optics and “cloud switching” software. The systems are powered by the real-time, Linux-based VellOS, which Vello says delivers extensive programmability and performance.

In a presentation given a few weeks ago at the Cloud-Net Summit in London, Karl May, Vello’s president and CEO, identified three strategic market opportunities for his company and its products: enterprise Internet services, with early emphasis on content-delivery networks (CDNs) for financial-services companies; public cloud computing, with a focus on data-center internetworking; and enterprise data, where data continuity as a service (DCaaS) is targeted. (Yes, the “aaS” acronyms multiply like rabbits).

Old Wine in a New Bottle?  

May and others at Vello have emphasized previously that those market opportunities aren’t being pursued through a simple repurposing of the intellectual property that Vello obtained from OpVista. In the article published by Dow Jones Newswires, May said that substantial new product development had been done in pursuit of the company’s rejigged target markets.

Vello’s pitch in those markets will be that its technology can deliver low-latency, scalable, reliable, and cost-effective interconnect between data centers. Having secured its $25 million in funding, which was disclosed in an SEC filing, Vello says it won’t need further financing. It claims to be generating revenue from product sales to existing customers, and it says some of its financial-services customers are among its investors.

That said, many of those existing customers were inherited from the defunct OpVista. The challenge for Vello will be to keep those customers onboard, and to add plenty of new ones, as it relegates OpVista to a footnote in its history.

Why Many Networking Professionals Will Resist Software-Defined Networking

In the long run, I think software-defined networking (SDN) is destined for tremendous success, not only at massive cloud service providers, where it already is finding favor and increased adoption, but also at smaller service providers and even — with time and perseverance — at enterprises.

It just might not happen as quickly as some expect.

Shape of Networking to Come

In a presentation last autumn at the Open Networking Summit, Nicira co-founder Nick McKeown asserted that SDN would shape the future of networking in several key respects. He said it would do so by empowering network owners and operators, by speeding the pace of innovation, by diversifying the supply chain, and by delivering a robust foundation for programmability predicated on a standardized forwarding abstraction and provable network properties.

On the whole, McKeown probably will be right, and his technological reasoning seems entirely reasonable. As in any market, however, the commercial appeal of SDN will be determined by human factors as well as by technological considerations.

The enterprise market will be the toughest nut to crack, though, and not only because the early agenda of SDN, as defined by the board members of the Open Networking Foundation (ONF) and others, has been focused resolutely on providing solutions for the largest of cloud service providers.

Winning Hearts and Minds

Capturing enterprise hearts and minds will be difficult for SDN, and it will be hard not just because of technological challenges, such as backward compatibility with (and investments in) existing network infrastructure, but also because of the cultural milieu and entrenched mindset of enterprise networking professionals.

I’ve written before, on two occasions actually, about how human and institutional resistance to change can strongly inhibit the commercial adoption of technologies with otherwise compelling credentials and qualifications. Generally, people fear change, especially when they suspect that the change in question will affect them adversely.

And make no mistake, software-defined networking will inspire fear and resistance in some quarters, enterprise networking professionals prominent among them.

Networking’s Cultural Artifacts

Jennifer Rexford, professor of computer science at Princeton University and a former AT&T Research staffer, wrote that one of her colleagues once observed that computer-networking people “really loved their artifacts.” Those artifacts probably would include the many distributed routing protocols that have proliferated over the years.

Software-defined networking wants to loosen emotional attachment to those artifacts, just as it wants to jettison the burgeoning bag of protocols that distinguishes networking from computer programming and other disciplines.  But many networking professionals, including those in enterprise IT departments, see their mastery of complex protocols as hallmarks of who they are and what they do.

Getting the Network “Out of the Way”

Yet there’s more to it than that. Consider the workplace implications of software-defined networks. The whole idea of SDN is to make networks programmable, to put applications and those who program and manage them in the driver’s seat, and to get the network “out of the way” of the sweeping virtualized progress that has enveloped all other data-center infrastructure.

To survive and thrive in this brave new virtual world, networking professionals might have to become more like programmers. From an organizational standpoint, even though there are compelling business and technological reasons to adopt SDN, resistance from the fraternity of networking professionals will be stiff and difficult to overcome.

In the realm of the super-sized data centers at Google and elsewhere, this isn’t a serious problem. The concepts associated with “DevOps” and with thinking outside boxes, departmental and otherwise, thrive in those precincts. Google long has eschewed the purchase of servers and networking gear from vendors, and it does things its own way. To greater or lesser degrees, other large cloud-service providers now dance to a similar beat. But the enterprise? Well, that’s a different animal altogether.

Vendors in No Hurry

Some of the new SDN startups already are meeting with pockets of resistance. They’re seeing cleavage — schism might be too strong a word, though maybe not — between cloud architects and server-virtualization specialists on one side of the house and network professionals on the opposing side. The two camps see things differently, with perspectives and priorities that are difficult to reconcile. (There are exceptions to the rule, of course, with some networking professionals eager to embrace SDN, but they currently are in the minority.)

As we’ve seen, the board of directors at the Open Networking Foundation (ONF) isn’t concerned about how quickly the enterprise gets with the SDN program. I also would suggest that most networking vendors, which are excluded from the ONF’s board, aren’t in a hurry to push an SDN agenda that features logically centralized, server-based controllers. You’ll see SDN from these vendors, yes, but the control plane will be distributed until such time as enterprises and service providers (not on the ONF board) demand otherwise. That will be a while, I suspect.

Deferred Gratification

We tend to underestimate resistance to change in this industry.  Gartner devised the “trough of disillusionment”  and the technology hype cycle for good reason. Some technologies remain in that basin longer than others. Some never emerge from what becomes a bottomless pit rather than a trough.

That won’t happen to SDN.  As I wrote earlier, I think it has a bright future. Don’t be surprised, though, if the hype gets ahead of the reality. When it comes to technologies and markets, our inherent optimism occasionally is thwarted by our intrinsic resistance to change.

Exploring OpenStack’s SDN Connections

Pursuant to my last post, I will now examine the role of OpenStack in the context of software-defined networking (SDN). As you will recall, it was one of the alternative SDN enabling technologies mentioned in a recent article and sidebar at Network World.

First, though, I want to note that, contrary to the concerns I expressed in the preceding post, I wasn’t distracted by a shiny object before getting around to writing this installment. I feared I would be, but my powers of concentration and focus held sway. It’s a small victory, but I’ll take it.

Road to Quantum

Now, on to OpenStack, which I’ve written about previously, though admittedly not in the context of SDNs. As for how networking evolved into a distinct facet of OpenStack, Martin Casado, chief technology officer at Nicira, offers a thorough narrative at the Open vSwitch website.

Casado begins by explaining that OpenStack is a “cloud management system (CMS) that orchestrates compute, storage, and networking to provide a platform for building on demand services such as IaaS.” He notes that OpenStack’s primary components were OpenStack Compute (Nova), OpenStack Storage (Swift), and OpenStack Image Services (Glance), and he also provides an overview of their respective roles.

Then he asks, as one might, what about networking? At this point, I will quote directly from his Open vSwitch post:

“Noticeably absent from the list of major subcomponents within OpenStack is networking. The historical reason for this is that networking was originally designed as a part of Nova which supported two networking models:

● Flat Networking – A single global network for workloads hosted in an OpenStack Cloud.

● VLAN based Networking – A network segmentation mechanism that leverages existing VLAN technology to provide each OpenStack tenant, its own private network.

While these models have worked well thus far, and are very reasonable approaches to networking in the cloud, not treating networking as a first class citizen (like compute and storage) reduces the modularity of the architecture.”

As a result of Nova’s networking shortcomings, which Casado enumerates in detail, Quantum, a standalone networking component, was developed.

Network Connectivity as a Service

The OpenStack wiki defines Quantum as “an incubated OpenStack project to provide ‘network connectivity as a service’ between interface devices (e.g., vNICs) managed by other Openstack services (e.g., nova).” On that same wiki, Quantum is touted as being able to support advanced network topologies beyond the scope of Nova’s FlatManager or VlanManager; as enabling anyone to “build advanced network services (open and closed source) that plug into Openstack networks”; and as enabling new plugins (open and closed source) that introduce advanced network capabilities.
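
To give a flavor of what “network connectivity as a service” looks like from a tenant’s point of view, here is a small sketch against Quantum’s REST API. Treat it as an assumption-laden illustration: the endpoint and token are placeholders, and the request shapes follow the v2.0 networking API that Quantum (later renamed Neutron) converged on, which may differ in detail from the release current at the time of writing.

```python
# Sketch only: placeholder endpoint/token, request bodies per the v2.0
# networking API that Quantum/Neutron settled on.
import requests

QUANTUM = "http://cloud.example.com:9696/v2.0"     # hypothetical endpoint
HEADERS = {"X-Auth-Token": "REPLACE_WITH_TOKEN"}   # token from the identity service

# 1. Create a tenant-private virtual network.
net = requests.post(f"{QUANTUM}/networks", headers=HEADERS,
                    json={"network": {"name": "web-tier",
                                      "admin_state_up": True}}).json()["network"]

# 2. Give it an address block.
requests.post(f"{QUANTUM}/subnets", headers=HEADERS,
              json={"subnet": {"network_id": net["id"],
                               "ip_version": 4,
                               "cidr": "10.0.1.0/24"}})

# 3. Create a port; Nova would attach a VM's vNIC to this port.
port = requests.post(f"{QUANTUM}/ports", headers=HEADERS,
                     json={"port": {"network_id": net["id"]}}).json()["port"]

print("virtual network", net["id"], "with port", port["id"])
```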

Okay, but how does it relate specifically to SDNs? That’s a good question, and James Urquhart has provided a clear and compelling answer, which later was summarized succinctly by Stuart Miniman at Wikibon. What Urquhart wrote actually connects the dots between OpenStack’s Quantum and OpenFlow-enabled SDNs. Here’s a salient excerpt:

“. . . . how does OpenFlow relate to Quantum? It’s simple, really. Quantum is an application-level abstraction of networking that relies on plug-in implementations to map the abstraction(s) to reality. OpenFlow-based networking systems are one possible mechanism to be used by a plug-in to deliver a Quantum abstraction.

OpenFlow itself does not provide a network abstraction; that takes software that implements the protocol. Quantum itself does not talk to switches directly; that takes additional software (in the form of a plug-in). Those software components may be one and the same, or a Quantum plug-in might talk to an OpenFlow-based controller software via an API (like the Open vSwitch API).”
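
Urquhart’s layering is easy to mock up. The sketch below is hypothetical (these are not Quantum’s actual plug-in classes), but it shows how a single abstraction, “create a virtual network,” can be satisfied either by a VLAN-based plug-in or by one that defers to an OpenFlow controller.

```python
# Hypothetical sketch of the layering described above: an abstract "create a
# network" request on top, interchangeable plug-ins mapping it to reality below.
from abc import ABC, abstractmethod


class NetworkPlugin(ABC):
    @abstractmethod
    def create_network(self, tenant_id: str, name: str) -> str:
        """Realize the abstract network; return an opaque network ID."""


class VlanPlugin(NetworkPlugin):
    """Maps each virtual network onto a VLAN on the physical switches."""
    def __init__(self):
        self.next_vlan = 100

    def create_network(self, tenant_id, name):
        vlan, self.next_vlan = self.next_vlan, self.next_vlan + 1
        return f"vlan-{vlan}"


class OpenFlowControllerPlugin(NetworkPlugin):
    """Defers to an OpenFlow-based controller, which programs the switches."""
    def __init__(self, controller_url):
        self.controller_url = controller_url

    def create_network(self, tenant_id, name):
        # In reality this would call the controller's API (e.g., over REST).
        return f"of-net-{tenant_id}-{name}"


def provision(plugin: NetworkPlugin) -> str:
    # The cloud management system sees only the abstraction, never the mechanism.
    return plugin.create_network("tenant-42", "web-tier")


print(provision(VlanPlugin()))                                  # vlan-100
print(provision(OpenFlowControllerPlugin("http://ctrl:8080")))  # of-net-tenant-42-web-tier
```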

Cisco’s Contribution

So, that addresses the complementary functionality of OpenStack’s Quantum and OpenFlow, but, as Urquhart noted, OpenFlow is just one mechanism that can be used by a plug-in to deliver a Quantum abstraction. Further to that point, bear in mind that Quantum, as recounted on the OpenStack wiki, can be used  to “build advanced network services (open and closed source) that plug into OpenStack networks” and to facilitate new plugins that introduce advanced network capabilities.

Consequently, when it comes to using OpenStack in SDNs, OpenFlow isn’t the only complementary option available. In fact, Cisco is in on the action, using Quantum to “develop API extensions and plug-in drivers for creating virtual network segments on top of Cisco NX-OS and UCS.”

Cisco portrays itself as a major contributor to OpenStack’s Quantum, and the evidence seems to support that assertion. Cisco also has indicated qualified support for OpenFlow, so there’s a chance OpenStack and OpenFlow might intersect on a Cisco roadmap. That said, Cisco’s initial OpenStack-related networking forays relate to its proprietary technologies and existing products.

Citrix, Nicira, Rackspace . . . and Midokura

Other companies have made contributions to OpenStack’s Quantum, too. In a post at Network World, Alan Shimel of The CISO Group cites the involvement of Nicira, Cisco, Citrix, Midokura, and Rackspace. From what Nicira’s Casado has written and said publicly, we know that OpenFlow is in the mix there. It seems to be in the picture at Rackspace, too. Citrix has posted blog posts about Quantum, including this one, but I’m not sure where they’re going with it, though XenServer, Open vSwitch, and, yes, OpenFlow are likely to be involved.

Finally, we have Midokura, a Japanese company that has a relatively low profile, at least on this side of the Pacific Ocean. According to its website, it was established early in 2010, and it had just 12 employees at the end of April 2011.

If my currency-conversion calculations (from Japanese yen) are correct, Midokura also had about $1.5 million in capital as of that date. Earlier that same month, the company announced seed funding of about $1.3 million. Investors were Bit-Isle, a Japanese data-center company; NTT Investment Partners, an investment vehicle of  Nippon Telegraph & Telephone Corp. (NTT); 1st Holdings, a Japanese ISV that specializes in tools and middleware; and various individual investors, including Allen Miner, CEO of SunBridge Corporation.

On its website, Midokura provides an overview of its MidoNet network-virtualization platform, which is billed as providing a solution to the problem of inflexible and expensive large-scale physical networks that tend to lock service providers into a single vendor.

Virtual Network Model in Cloud Stack

In an article published at TechCrunch this spring, at about the time Midokura announced its seed round, the company claimed to be the only one to have “a true virtual network model” in a cloud stack. The TechCrunch piece also said the MidoNet platform could be integrated “into existing products, as a standalone solution, via a NaaS model, or through Midostack, Midokura’s own cloud (IaaS/EC2) distribution of OpenStack (basically the delivery mechanism for Midonet and the company’s main product).”

Although the company was accepting beta customers last spring, it hasn’t updated its corporate blog since December 2010. Its “Events” page, however, shows signs of life, with Midokura indicating that it will be attending or participating in the grand opening of Rackspace’s San Francisco office on December 1.

Perhaps we’ll get an update then on Midokura’s progress.

Thoughts on Cisco’s OpenFlow Conversion

It has not been easy finding time to write this past week. In addition to work and other demands on my time, I had been suffering from a blockage in my ear that impaired my hearing, upset my balance, and generally annoyed the hell out of me.  That problem has been resolved, and I’m back to being as normal as I get.

Ensconced at the keyboard once again, however, I found myself suffering from writer’s block after having rid myself of ear block. So, I consulted the Idea Generator on my iPhone, and it offered this troika for inspiration: “narcotic neon coat.” I gave that trio of words due consideration, then I decided to write about OpenFlow. Trust me, it’s really for the best.

More than OpenFlow

Some of you might contend that OpenFlow has received too much attention. That’s fair, I suppose, but value judgements about whether a topic has gotten too little, too much, or just enough attention are subjective, and also subject to changing circumstances.

Others might argue that software-defined networking encompasses more than OpenFlow. If that’s your claim, you’d be right. OpenFlow is just one mechanism or means of realizing a software-defined network. There are other ways to get it done, standards-based and proprietary. That said, OpenFlow has major industry backers and momentum, it’s becoming inextricably linked with SDN, and it’s been reluctant to surrender the spotlight.

No matter how you slice it, this was a big week for SDN and for OpenFlow. At Stanford University, the Open Networking Summit was in full swing, dedicated to discourse on SDNs and how they could be realized with OpenFlow.

Crowded at the Summit

I wasn’t there, but many were. More than 600 people applied to attend the summit, but only 350 could be accommodated by organizers, who now have decided to hold the next instance of the event in April rather than waiting a full year until the following October.

Notwithstanding the hype, then, OpenFlow has emerged as a networking topic for all seasons. Certainly the great and the good of the networking industry would seem to agree. Cisco Systems was well represented at the summit, and Cisco got out the message that it is a believer in SDN and plans to support OpenFlow on its Nexus switches, starting with the low-latency Nexus 3000 line. A specific timetable hasn’t been provided. (Or, if it has, I haven’t seen it.)

Cisco: SDN Next Evolution of Networking

In a Cisco blog post penned by Omar Sultan, David Meyer, a Cisco distinguished engineer (as opposed to the undistinguished ones), had the following to say about why Cisco, in supporting OpenFlow, has made what many might interpret as a counterintuitive move:

“. . . . Cisco had always embraced disruption–we don’t always get it right on the first shot, but we usually get it in the end.  Take server virtualization as an example–while we may not have been first off the line, we now have the broadest and strongest portfolio of virtualization networking technologies in the market.  Critics only saw the short-term impact to our switching revenue (less ports sold) but we saw the transformational value of virtualization. We see SDN in a similar light–as the next evolution of networking and we see OF as an excellent mechanism to drive maturation of both the technology and the underlying thinking.”

That last sentence is commendable for its clarity and transparency, and it bears further inspection. Cisco sees SDN as the next evolution in networking, and it perceives OpenFlow as “an excellent mechanism to drive maturation of both the technology and the underlying thinking.”

OpenFlow if Necessary, But Not Necessarily OpenFlow

Now I will foul the waters with my interpretation of what it signifies, beyond the obvious. By necessity, I will veer into the murky shallows of speculation and ambiguity, because — until Cisco provides further elaboration — we won’t know, at least for now, how Cisco ultimately will play its SDN cards. (Yes, I mixed metaphors in that last sentence. So shoot me — but only figuratively).

My take, which might be worth the proverbial two cents, is that Cisco is all in on SDN. As for OpenFlow, I think Cisco is less enamored.  I read Meyer’s and Cisco’s comments and I get the feeling Cisco is saying that it will support OpenFlow as an SDN mechanism if necessary, but not necessarily OpenFlow as its preferred SDN mechanism. Meyer says OpenFlow can drive maturation of SDN technology and thinking, but he hasn’t said that it ultimately will be the only means, or even Cisco’s preferred means, of achieving SDN.

I know that others, including Craig Matsumoto of Light Reading, see a close conjoining of SDN and OpenFlow in Cisco’s positioning. I respectfully disagree, though I could, as always (does it even bear saying?), be wrong.

Diverged Business Interests of ONF’s Board and Networking Vendors

Matsumoto has posited that OpenFlow is looking less like a threat to Cisco and its business model. At this point, it’s still hard to say, but I think Cisco would suffer materially in the long run if OpenFlow matures as the Open Networking Foundation’s six founding board members — carriers and large cloud service providers Deutsche Telekom, Verizon, Facebook, Google, Microsoft, and Yahoo — would like it to do, and if the public cloud fulfills the bulk of its commercial promise.

Further, I think the goal of the ONF Founding Six is completely virtualized infrastructure (compute, storage, networking) run on wall-to-wall, bare-bones hardware, overseen by a management layer of software and driven by applications and services. This would bring lower capital expenditures for gear and reduced operational expenditures for network management.

I realize there’s been a search for OpenFlow’s killer app — and that search should continue, obviously — but the founders of ONF seem to be focused primarily on cost savings. For them, it’s not about doing something strikingly new or revolutionary, but about getting more from less, and for less. In that context, OpenFlow makes sense — at least for them — as it delivers quantifiable business benefits that they have not been able to derive from current network infrastructure.

In the Enterprise, A Different Story

Of course, what the ONF founders want might not be what enterprise IT buyers need. There’s an opening here for Cisco, for HP Networking, for Juniper, for Arista Networks, and for all the other networking vendors to define SDN in ways that are more amenable to those enterprise buyers across a wide range of horizontal and vertical markets.

If all else fails, though, and OpenFlow becomes an SDN juggernaut, there’s always recourse to “embrace and extend,” particularly at the management layer. It’s not as though vendors haven’t cracked open that chestnut before.

Nicira Downplays OpenFlow on Road to Network Virtualization

While recent discussions of software-defined networking (SDN) and network virtualization have focused nearly exclusively on the OpenFlow protocol, various parties are making the point that OpenFlow is just one facet of a bigger story.

One of those parties is Nicira Networks, which was treated to favorable coverage in the New York Times earlier today. In the article, the words “software-defined networking” and “OpenFlow” are conspicuous by their absence. Sure, the big-picture concept of software-defined networking hovers over proceedings, but Nicira takes pains to position itself as a purveyor of “network virtualization,” which is a neater, simpler concept for the broader technology market to grasp.

VMware of Networking

Indeed, leveraging the idea of network virtualization, Nicira positions itself as the VMware of networking, contending that it will resolve the problem of inflexible, inefficient, complex, and costly data-center networks with a network hypervisor that decouples network services from the underlying hardware. Nicira’s goal, then, is to be the first vendor to bring network virtualization up to speed with server and storage virtualization.  

GigaOM’s Stacey Higginbotham takes issue with the New York Times article and with Nicira’s claims relating to its putatively peerless place in the networking firmament. Writes Higginbotham: 

“The article . . . .  does a disservice to the companies pursing network virtualization by conflating the idea of flexible and programmable networks with Nicira becoming “to networking something like what VMWare was to computer servers.” This is a nice trick for the lay audience, but unlike server virtualization, which VMware did pioneer and then control, network virtualization currently has a variety of vendors pushing solutions that range from being tied to the hardware layer (hello, Juniper and Xsigo) to the software (Embrane and Nicira). In addition to there being multiple companies pushing their own standards, there’s an open source effort to set the building blocks and standards in place to create virtualized networks.”

The ONF Factor

The open-source effort in question is the Open Networking Foundation (ONF), which is promulgating OpenFlow as the protocol by which software-defined networking will be attained. I have written about OpenFlow and the ONF previously, and will have more to say on both shortly. Recently, I also recounted HP’s position on OpenFlow.

Nicira says nothing about OpenFlow, which suggests the company is playing down the protocol or might be going in a different direction to realize its vision of network virtualization. As has been noted, there’s more than one road to software-defined networking, even though OpenFlow is a path that has been well traveled thus far by industry notables, including the six major service providers that are the ONF’s founding board members (Google, Deutsche Telekom, Verizon, Microsoft, Facebook, and Yahoo).

Then again, you will find Nicira Networks among the ONF’s membership, along with a number of other established and nascent networking vendors. Nicira sees a role for OpenFlow, then, though it clearly wants to put the emphasis on its own software and the applications and services that it enables. There’s nothing wrong with that. In fact, it’s a perfectly sensible strategy for a vendor to pursue.

Tension Between Vendors and Service Providers

Alan S. Cohen, a recent addition to the Nicira team, put it into pithy perspective on his personal blog, where he wrote about why he joined Nicira and why the network will be virtualized. Wrote Cohen:

“Virtualization and the cloud is the most profound change in information technology since client-server and the web overtook mainframes and mini computers.  We believe the full promise of virtualization and the cloud can only be fully realized when the network enables rather than hinders this movement.  That is why it needs to be virtualized.

Oh, by the way, OpenFlow is a really small part of the story.  If people think the big shift in networking is simply about OpenFlow, well, they don’t get it.”

So, the big service providers might see OpenFlow as a nifty mechanism that will allow them to reduce their capital expenditures on high-margin networking gear while also lowering their operational expenditures on network management,  but the networking vendors — neophytes and veterans alike — still seek and need to provide value (and derive commensurate margins) above and beyond OpenFlow’s parameters. 

Bit-Business Crackup

I have been getting broadband Internet access from the same service provider for a long time. Earlier this year, my particular cable MSO got increasingly aggressive about a “usage-based billing” model that capped bandwidth use and incorporated additional charges for “overage,” otherwise known as exceeding one’s bandwidth cap. Exceed the cap, and one is charged extra — potentially a lot extra.
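
For readers unfamiliar with how these plans are structured, the arithmetic is straightforward. The numbers in the sketch below are invented for illustration, not my provider’s actual rates; the shape of the calculation (a base fee, a cap, per-gigabyte overage, and often a ceiling on the overage itself) is the point.

```python
# Illustrative only: hypothetical rates showing how usage-based billing
# with overage charges turns monthly consumption into a bill.
def monthly_bill(used_gb, base_fee=60.00, cap_gb=250,
                 overage_per_gb=2.00, max_overage=50.00):
    """Base fee plus per-GB overage above the cap; the overage is itself capped."""
    overage_gb = max(0, used_gb - cap_gb)
    return base_fee + min(overage_gb * overage_per_gb, max_overage)

for used in (200, 250, 275, 400):
    print(f"{used} GB -> ${monthly_bill(used):.2f}")
# 200 GB -> $60.00, 250 GB -> $60.00, 275 GB -> $110.00, 400 GB -> $110.00
```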

On the surface, one might suppose the service provider’s intention is to bump subscribers up to the highest bandwidth tiers. That’s definitely part of the intent, but there’s something else afoot, too.

Changed Picture

I believe my experience illustrates a broader trend, so allow me to elaborate. My family and I reached the highest tier under the service provider’s usage-based-billing model. Even at the highest tier, though, we found the bandwidth cap parsimonious and restrictive. Consequently, rather than pay exorbitant overages or be forced to ration bandwidth as if it were water during a drought, we decided to look for another service provider.

Having made our decision, I expected my current service provider to attempt to keep our business. That didn’t happen. We told the service provider why we were leaving — the caps and surcharges were functioning as inhibitors to Internet use — and then set a date when service would be officially discontinued. That was it.  There was no resistance, no counteroffers or proposed discounts, no meaningful attempt to keep us as subscribers.

That sequence of events, and particularly that final uneventful interaction with the service provider, made me think about the bigger picture in the service-provider world. For years, the assumption of telecommunications-equipment vendors has been that rising bandwidth tides would lift all boats.  According to this line of reasoning, as long as consumers and businesses devoured more Internet bandwidth, network-equipment vendors would benefit from steadily increasing service-provider demand. That was true in the past, but the picture has changed.

Paradoxical Service

It’s easy to understand why the shift has occurred. Tom Nolle, president of CIMI Corp., has explained the phenomenon cogently and repeatedly over at his blog. Basically, it all comes down to service-provider monetization, which results from revenue generation.

Service providers can boost revenue in two basic ways: They can charge more for existing services, or they can develop and introduce new services. In most of the developed world, broadband Internet access is a saturated market. There’s negligible growth to be had. To make matters worse, at least from the service-provider perspective, broadband subscribers are resistant to paying higher prices, especially as punishing macroeconomic conditions put the squeeze on budgets.

Service providers have resorted to usage-based billing, with its associated tiers and caps, but there’s a limit to how much additional revenue they can squeeze from hard-pressed subscribers, many of whom will leave (as I did) when they get fed up with metering, overage charges, and the paradoxical concept of service providers that discourage their subscribers from actually using the Internet as a service.

The Problem with Bandwidth

The twist to this story — and one that tells you quite a bit about the state of the industry — is that service providers are content to let disaffected subscribers take their business elsewhere. For service providers, the narrowing profit margins related to providing increasing amounts of Internet bandwidth are not worth the increasing capital expenditures and, to a lesser extent, growing operating costs associated with scaling network infrastructure to meet demand.

So, as Nolle points out, the assumption that increasing bandwidth consumption will necessarily drive network-infrastructure spending at service providers is no longer tenable. Quoting Nolle:

 “We’re seeing a fundamental problem with bandwidth economics.  Bits are less profitable every year, and people want more of them.  There’s no way that’s a temporary problem; something has to give, and it’s capex.  In wireline, where margins have been thinning for a longer period and where pricing issues are most profound, operators have already lowered capex year over year.  In mobile, where profits can still be had, they’re investing.  But smartphones and tablets are converting mobile services into wireline, from a bandwidth-economics perspective.  There is no question that over time mobile will go the same way.  In fact, it’s already doing that.

To halt the slide in revenue per bit, operators would have to impose usage pricing tiers that would radically reduce incentive to consume content.  If push comes to shove, that’s what they’ll do.  To compensate for the slide, they can take steps to manage costs but most of all they can create new sources of revenue.  That’s what all this service-layer stuff is about, of course.”

Significant Implications

We’re already seeing usage-pricing tiers here in Canada, and I have a feeling they’ll be coming to a service provider near you.

Yes, alternative service providers will take up (and are taking up) the slack. They’ll be content, for now, with bandwidth-related profit margins less than those the big players would find attractive. But they’ll also be looking to buy and run infrastructure at lower prices and costs than did incumbent service providers, who, as Nolle says, are increasingly turning their attention to new revenue-generating services and away from “less profitable bits.”

This phenomenon has significant implications for consumers of bandwidth, for service providers who purvey that bandwidth, for network-equipment vendors that provide gear to help service providers deliver bandwidth, and for market analysts and investors trying to understand a world they thought they knew.