Category Archives: Rackspace

Cisco Not Going Anywhere, but Changes Coming to Networking

Initially, I intended not to comment on the Wired article on Nicira Networks. While it contained some interesting quotes and a few good observations, its tone and much of its focus were misplaced. It was too breathless, trying too hard to make the story fit a simplistic, sensationalized narrative of outsized personalities and the threatened “irrelevance” of Cisco Systems.

There was not enough focus on how Nicira’s approach to network virtualization and its particular conception of software defined networking (SDN) might open new horizons and enable new possibilities for the humble network. On his blog, Plexxi’s William Koss, commenting not on the Wired article but on reaction to SDN from the industry in general, wrote the following:

In my view, SDN is not a tipping point.  SDN is not obsoleting anyone.  SDN is a starting point for a new network.  It is an opportunity to ask if I threw all the crap in my network in the trash and started over what would we build, how would we architect the network and how would it work?  Is there a better way?

Cisco Still There

I think that’s a healthy focus. As Koss writes, and I agree, Cisco isn’t going anywhere; the networking giant will be with us for some time, tending its considerable franchise and moving incrementally forward. It will react more than it “proacts” — yes, I apologize now for the Haigian neologism — but that’s the fate of any industry giant of a certain age, Apple excepted.

Might Cisco, more than a decade from now, be rendered irrelevant?  I, for one, don’t make predictions over such vast swathes of time. Looking that far ahead and attempting to forecast outcomes is a mug’s game. It is nothing but conjecture disguised as foresight, offered by somebody who wants to flash alleged powers of prognostication while knowing full well that nobody else will remember the prediction a month from now, much less years into the future.

As far out as we can see, Cisco will be there. So, we’ll leave ambiguous prophecies to the likes of Nostradamus, who I believe forecast the deaths of OS/2, Token Ring, and desktop ATM.

Answers are Coming

Fortunately, I think we’re beginning to get answers as to where and how Nicira’s approaches to network virtualization and SDN can deliver value and open new possibilities. The company has been making news with customer testimonials that include background on how its technology has been deployed. (Interestingly, the company has issued just three press releases in 2012, and all of them deal with customer deployments of its Network Virtualization Platform (NVP).)

There’s a striking contrast between the moderation implicit in Nicira’s choice of press releases and the unchecked grandiosity of the Wired story. Then again, I understand that vendors have little control over what journalists (and bloggers) write about them.

That said, one particular quote in the Wired article provoked some thinking from this quarter. I had thought about the subject previously, but the following excerpt provided some extra grist for my wood-burning mental mill:

In virtualizing the network, Nicira lets you make such changes in software, without touching the underlying hardware gear. “What Nicira has done is take the intelligence that sits inside switches and routers and moved that up into software so that the switches don’t need to know much,” says John Engates, the chief technology officer of Rackspace, which has been working with Nicira since 2009 and is now using the Nicira platform to help drive a new beta version of its cloud service. “They’ve put the power in the hands of the cloud architect rather than the network architect.”

Who Controls the Network?

It’s the last sentence that really signifies a major break with how things have been done until now, and this is where the physical separation of the control plane from the switch has potentially major implications.  As Scott Shenker has noted, network architects and network professionals have made their bones by serving as “masters of complexity,” using relatively arcane knowledge of proprietary and industry-standard protocols to keep networks functioning amid increasing demands of virtualized compute and storage infrastructure.

SDN promises an easier way, one that potentially offers a faster, simpler, less costly approach to network operations. It also offers the creative possibility of unleashing new applications and new ways of optimizing data-center resources. In sum, it can amount to a compelling business case, though not everywhere, at least not yet.

Where it does make sense, however, cloud architects and the devops crowd will gain primacy and control over the network. This trend is reflected already in Nicira’s press releases. Notice that the customer quotes do not come from network architects, network engineers, or anybody associated with conventional approaches to running a network. Instead, we see encomiums to NVP offered by cloud architects, cloud executives, and VPs of software development.

Similarly, and not surprisingly, Nicira typically doesn’t sell NVP to the traditional networking professional. It goes to the same “cloudy” types to whom quotes are attributed in its press releases. It’s true, too, that Nicira’s SDN business case and value proposition play better at cloud service providers than at enterprises.

Potentially a Big Deal

This is an area where I think the advent of the programmable server-based controller is a big deal. It changes the customer power dynamic, putting the cloud architects and the programmers in the driver’s seat, effectively placing the network under their control. (Jason Edelman has begun thinking about what the rise of SDN means for the network engineer.) In this model, the network eventually gets subsumed under the broader rubric of computing and becomes just another flexible piece of cloud infrastructure.

Nicira can take this approach because it has nothing to lose and everything to gain. Of course, the same holds true of other startup vendors espousing SDN.

Perhaps that’s why Koss closed his latest post by writing that “the architects, the revolutionaries, the entrepreneurs, the leaders of the next twenty years of networking are not working at the incumbents.”  The word “revolutionaries” seems too strong, and the incumbents will argue that Koss, a VP at startup Plexxi, isn’t an unbiased party.

They’re right, but that doesn’t mean he’s wrong.

Nicira Focuses on Value of NVP Deployments, Avoids Fetishization of OpenFlow

The continuing evolution of Nicira Networks has been intriguing to watch. At one point, not so long ago, many speculated on what Nicira, then still in a teasing stealth mode, might be developing behind the scenes. We now know that it was building its Network Virtualization Platform (NVP), and we’re beginning to learn about how the company’s early customers are deploying it.

Back in Nicira’s pre-launch days, the line between OpenFlow and software defined networking (SDN) was blurrier than it is today.  From the outset, though, Nicira was among the vendors that sought to provide clarity on OpenFlow’s role in the SDN hierarchy.  At the time — partly because the company was communicating in stealthy coyness  — it didn’t always feel like clarity, but the message was there, nonetheless.

Not the Real Story

For instance, when Alan Cohen first joined Nicira last fall to assume the role of VP marketing, he wrote the following on his personal blog:

Virtualization and the cloud is the most profound change in information technology since client-server and the web overtook mainframes and mini computers.  We believe the full promise of virtualization and the cloud can only be fully realized when the network enables rather than hinders this movement.  That is why it needs to be virtualized.

Oh, by the way, OpenFlow is a really small part of the story.  If people think the big shift in networking is simply about OpenFlow, well, they don’t get it.

A few months before Cohen joined the company, Nicira’s CTO Martin Casado had played down OpenFlow’s role  in the company’s conception of SDN. We understand now where Nicira was going, but at the time, when OpenFlow and SDN were invariably conjoined and seemingly inseparable in industry discourse, it might not have seemed as obvious.

Don’t Get Hung Up

That said, a compelling early statement on OpenFlow’s relatively modest role in SDN was delivered in a presentation by Scott Shenker, Nicira’s co-founder and chief scientist (and a professor in the Electrical Engineering and Computer Sciences Department at the University of California, Berkeley). I’ve written previously about Shenker’s presentation, “The Future of Networking, and the Past of Protocols,” but here I would just like to quote his comments on OpenFlow:

“OpenFlow is one possible solution (as a configuration mechanism); it’s clearly not the right solution. I mean, it’s a very good solution for now, but there’s nothing that says this is fundamentally the right answer. Think of OpenFlow as x86 instruction set. Is the x86 instruction set correct? Is it the right answer? No, it’s good enough for what we use it for. So why bother changing it? That’s what OpenFlow is. It’s the instruction set we happen to use, but let’s not get hung up on it.”

I still think too many industry types are “hung up” on OpenFlow, and perhaps not focused enough on the controller and above, where the applications will overwhelmingly define the value that SDN delivers.

As an open protocol that facilitates physical separation of the control and data-forwarding planes, OpenFlow has a role to play in SDN. Nonetheless, other mechanisms and protocols can play that role, too, and what really counts can be found at higher altitudes of the SDN value chain.

Minor Roles

In Nicira’s recently announced customer deployments, OpenFlow has played relatively minor supporting roles. Last week, for instance, Nicira announced at the OpenStack Design Summit & Conference that its Network Virtualization Platform (NVP) has been deployed at Rackspace in conjunction with OpenStack’s Quantum networking project. The goal at Rackspace was to automate network services independent of data-center network hardware in a bid to improve operational simplicity and to reduce the cost of managing large, multi-tenant clouds.

According to Brad McConnell, principal architect at Rackspace, Quantum, Open vSwitch, and OpenFlow all were ingredients in the deployment. Quantum was used as the standardized API to describe network connectivity, and OpenFlow served as the underlying protocol that configured and managed Open vSwitch within hypervisors.
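As a rough illustration of the layering McConnell describes, here is a hypothetical request body in the spirit of Quantum’s REST API (the field names are loosely modeled on the later v2.0 API and are not necessarily the exact contract Rackspace used): an orchestration layer declares a logical network, and a plug-in such as Nicira’s NVP realizes it, using OpenFlow to program Open vSwitch in each hypervisor.

```python
import json

# Illustrative only: a network-creation payload in the spirit of
# Quantum's REST API. Field names are loosely modeled on the later
# v2.0 API and should not be read as the exact deployed contract.
create_network = {
    "network": {
        "name": "tenant-a-web-tier",  # hypothetical logical network
        "admin_state_up": True,
    }
}

# The orchestration layer would POST this to the Quantum endpoint;
# the configured plug-in (NVP, in Rackspace's case) then realizes the
# logical network on Open vSwitch instances in the hypervisors.
print(json.dumps(create_network, indent=2))
```

The point of the indirection is that the caller describes connectivity, never hardware: the same request works regardless of which plug-in, and which physical gear, sits underneath.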

A week earlier, Nicira announced that cloud-service provider DreamHost would deploy its NVP to reduce costs and accelerate service delivery in its OpenStack datacenter. In the press release, the following quote is attributed to Carl Perry, DreamHost’s cloud architect:

“Nicira’s NVP software enables truly massive leaps in automation and efficiency.  NVP decouples network services from hardware, providing unique flexibility for both DreamHost and our customers.  By sidestepping the old network paradigm, DreamHost can rapidly build powerful features for our cloud.  Network virtualization is a critical component necessary for architecting the next-generation public cloud services.  Nicira’s plug-in technology, coupled with the open source Ceph and OpenStack software, is a technically sound recipe for offering our customers real infrastructure-as-a-service.”

Well-Placed Focus

You will notice that OpenFlow is not mentioned by Nicira in the press releases detailing NVP deployments at DreamHost and Rackspace. While OpenFlow is present at both deployments, Nicira correctly describes its role as a lesser detail on a bigger canvas.

At DreamHost, for example, NVP uses OpenFlow for communication between the controller and Open vSwitch, but Nicira has acknowledged that other protocols, including SNMP, could have performed a similar function.

Reflecting on these deployments, I am reminded of Casado’s earlier statement: “OpenFlow is about as exciting as USB.”

For a long time now, Nicira has eschewed the fetishization of OpenFlow. Instead, it has focused on the bigger-picture value propositions associated with network virtualization and programmable networks. If it continues to do so, it likely will draw more customers to NVP.

LineRate’s L4-7 Pitch Tailored to Cloud

I’ve written previously about the growing separation between how large cloud service providers see their networks and how enterprises perceive theirs. The chasm seems to get wider by the day, with the major cloud shops adopting innovative approaches to reduce network-related costs and to increase service agility, while their enterprise brethren seem to be assuming the role of conservative traditionalists — not that there’s anything inherently or necessarily wrong with that.

The truth is, the characteristics and requirements of those networks and the applications that ride on them have diverged, though ultimately a cloud-driven reconvergence is destined to occur.  For now, though, the cloudy service providers are going one way, and the enterprises — and most definitely the networking professionals within them — are standing firm on familiar ground.

It’s no surprise, then, to see LineRate Systems, which is bringing a software-on-commodity-box approach to L4-7 network services, target big cloud shops with its new all-software LineRate Proxy.

Targeting Cloud Shops

LineRate says its eponymous Proxy delivers a broad range of full-proxy Layer 4-7 network services, including load balancing, content switching, content filtering, SSL termination and origination, ACL/IP filtering, TCP optimization, DDoS blocking, application-performance visibility, server-health monitoring, and an IPv4/v6 translation gateway. The product has snared a customer — the online photo- and video-sharing service Photobucket — willing to sing its praises, and the company apparently has two other customers onboard.
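Of those services, load balancing is the simplest to sketch. The toy Python below shows only the backend-selection slice of what a full proxy does (all names are hypothetical, and a real product such as LineRate’s also terminates TCP/SSL sessions and inspects HTTP): a round-robin picker that skips backends flagged by health monitoring.

```python
from itertools import cycle

# Minimal sketch of the load-balancing slice of an L4-7 proxy.
# Backend addresses and class names are hypothetical.
class RoundRobinBalancer:
    def __init__(self, backends):
        self.backends = list(backends)
        self.healthy = set(self.backends)
        self._ring = cycle(self.backends)  # round-robin rotation

    def mark_down(self, backend):
        # In a real proxy, server-health monitoring feeds this.
        self.healthy.discard(backend)

    def pick(self):
        # Walk the ring, skipping unhealthy backends.
        for _ in range(len(self.backends)):
            candidate = next(self._ring)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy backends")

lb = RoundRobinBalancer(["10.0.0.11:80", "10.0.0.12:80", "10.0.0.13:80"])
lb.mark_down("10.0.0.12:80")
print([lb.pick() for _ in range(4)])
# → ['10.0.0.11:80', '10.0.0.13:80', '10.0.0.11:80', '10.0.0.13:80']
```

Because the selection logic is pure software, scaling it means starting another instance on another commodity server, which is precisely the pitch LineRate is making against fixed-capacity hardware appliances.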

As a hook to get those customers and others to adopt its product, LineRate offers pay-for-capacity subscription licensing and a performance guarantee that it says eliminates upfront capital expenditures and does away with the risks associated with capacity planning and the costs of over-provisioning. It’s a great way to overcome, or at least mitigate, the new-tech jitters that prospective customers might experience when approached by a startup.

I’ll touch on the company’s “secret sauce” shortly, but let’s first explain how LineRate got to where it is now. As CEO Steve Georgis explained in an interview late last week, LineRate has been around since 2008. It is a VC-backed company, based in Boulder, Colorado, which grew from research conducted at the University of Colorado by John Giacomoni, now LineRate’s chief technology officer (CTO), and by Manish Vachharajani, LineRate’s chief software architect.

Replacing L4-7 Hardware Appliances 

As reported by the Boulder County Business Report, LineRate closed a $4.75 million Series A round in April 2011, in which Boulder Ventures was the lead investor. Including seed investments, LineRate has raised about $5.4 million in aggregate, and it is reportedly raising a Series B round.

LineRate calls what it does “software defined network services” (SDNS) and company CEO Georgis says the overall SDN market comprises three layers: the Layer 2-3 network fabric, the Layer 4-7 network services, and the applications and web services that run above everything else. LineRate, obviously, plays in the middle, a neighborhood it shares with Embrane, among others.

LineRate contends that software is the new data path. As such, its raison d’être is to eliminate the need for specialized Layer 4-7 hardware appliances by replacing them with software, which it provides, running on industry-standard hardware, which can be and are provided by ODMs and OEMs alike.

LineRate’s Secret Sauce

The company’s software, and its aforementioned secret sauce, is called the LineRate Operating System (LROS). As mentioned above, it was developed from research work that Giacomoni and Vachharajani completed in high-performance computing (HPC), where their focus was on optimizing resource utilization of off-the-shelf hardware.

Based on FreeBSD but augmented with LineRate’s own TCP stack, LROS has been optimized to squeeze maximum performance from the x86 architecture. As a result, Georgis says, LROS can provide 5-10x more network performance than can a general-purpose operating system, such as Linux or BSD. LineRate claims its software delivers sufficiently impressive performance — 20 to 40 Gbps network processing on a commodity x86 server, with what the company describes as “high session scalability” — to obviate the need for specialized L4-7 hardware appliances.

This sort of story is one that service providers are likely to find intriguing. We have seen variations on this theme at the big cloud shops, first with virtualized servers, then with switches and routers, and now — if LineRate has its way — with L4-7 appliances.

LineRate says it can back up its bluster with the ability to support hundreds of thousands of full-proxy L7 connections per second, amounting to two million concurrent active flows. It claims that LROS’s support for scale-out high availability and its inherent multi-tenancy make it well qualified for the needs of cloud-service providers. The LineRate Proxy uses a REST API-based architecture, which the company says allows it to integrate with any cloud orchestration or data-center management framework.

Wondering About Service Reach?

At Photobucket.com, which has 23 million users that upload about four million photos and videos per day, the LineRate Proxy has been employed as a L7 HTTP load balancer and intelligent-content switch in a 10-Gbps network. The LineRate software runs on a pair of low-cost, high-availability x86 servers, doing away with the need to do a forklift upgrade on a legacy hardware solution that Georgis said included products from “a market-leading load-balancing vendor and a vendor that was once a market leader in the space.”

LineRate claims its scalable subscription model also paid off for Photobucket, by eliminating the need for long-term capacity planning and up-front capital expenditures. It says Photobucket benefits from its “guaranteed performance,” and that on-demand scaling has eliminated risks associated with under- or over-provisioning. On the whole, LineRate says its solution offered an entry cost 70 percent lower than that of a competing hardware-appliance purchase.

When the company first emerged, the founders indicated that load balancing would be the first L4-7 network service that it would target. It will be interesting to see whether its other early-adopter customers also are using the LineRate Proxy for load balancing. Will the product prove more specialized than the L4-7 Ginsu knife the company is positioning?

It’s too early to say. The answer will be provided by future deployments.

The estimable Ivan Pepelnjak offers his perspective, including astute commentary on how and where the LineRate Proxy is likely to find favor.

Not Just a Marketing Overlay

Ivan pokes gentle fun at LineRate’s espousal of SDNS, and his wariness is understandable. Even the least likely of networking vendors seem to be cloaking themselves in SDN garb these days, both to draw the fickle attention of trend-chasing venture capitalists and to catch the preoccupied eyes of the service providers that actually employ SDN technologies.

Nonetheless, there are aspects to what LineRate does that undeniably have a lot in common with what I will call an SDN ethos (sorry to be so effete). One of the key value propositions that LineRate promotes — in addition to its comparatively low cost of entry, its service-based pricing, and its performance guarantee — is the simple scale-out approach it offers to service providers.

As Ivan points out, “ . . . whenever you need more bandwidth, you can take another server from your compute pool and repurpose it as a networking appliance.” That’s definitely a page from the SDN playbook that the big cloud-service providers, such as those who run the Open Networking Foundation (ONF), are following. Ideally, they’d like to use virtualization and SDN to run everything on commodity boxes, perhaps sourced directly from ODMs, and then reallocate hardware dynamically as circumstances dictate.

In a comment on Ivan’s post, Brad Hedlund, formerly of Cisco and now of Dell, offers another potential SDN connection for the LineRate Proxy. Hedlund writes that it “would be really cool if they ran the Open vSwitch on the southbound interfaces, and partnered with Nicira and/or Big Switch, so that the appliance could be used as a gateway in overlay-based clouds such as, um, Rackspace.”

He might have something there. So, maybe, in the final analysis, the SDNS terminology is more than a marketing overlay.

Debating SDN, OpenFlow, and Cisco as a Software Company

Greg Ferro writes exceptionally well, is technologically knowledgeable, provides incisive commentary, and invariably makes cogent arguments over at EtherealMind.  Having met him, I can also report that he’s a great guy. So, it is with some surprise that I find myself responding critically to his latest blog post on OpenFlow and SDN.

Let’s start with that particular conjunction of terms. Despite occasional suggestions to the contrary, SDN and OpenFlow are not inseparable or interchangeable. OpenFlow is a protocol, a mechanism that allows a server, known in SDN parlance as a controller, to interact with and program flow tables (for packet forwarding) on switches. It facilitates the separation of the control plane from the data plane in some SDN networks.
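A toy model makes that division of labor concrete. The sketch below is illustrative Python, not the actual OpenFlow wire protocol: a controller installs match-action entries into a switch’s flow table, and the switch forwards packets by consulting that table, punting unmatched traffic back to the controller.

```python
# Conceptual sketch of control-plane/data-plane separation.
# This is illustrative; real OpenFlow defines binary wire messages.

def make_flow_entry(match, actions, priority=100):
    """A flow entry pairs a packet match with forwarding actions."""
    return {"match": match, "actions": actions, "priority": priority}

class ToySwitch:
    """Data plane: holds a flow table and forwards by lookup only."""
    def __init__(self):
        self.flow_table = []

    def install(self, entry):
        # In real OpenFlow this arrives as a FLOW_MOD message
        # from the controller.
        self.flow_table.append(entry)
        self.flow_table.sort(key=lambda e: -e["priority"])

    def forward(self, packet):
        for entry in self.flow_table:
            if all(packet.get(k) == v for k, v in entry["match"].items()):
                return entry["actions"]
        return ["send_to_controller"]  # table miss: punt to control plane

# The controller (control plane) decides; the switch merely executes.
switch = ToySwitch()
switch.install(make_flow_entry({"in_port": 1, "dst": "10.0.0.2"},
                               ["output:2"]))

print(switch.forward({"in_port": 1, "dst": "10.0.0.2"}))  # ['output:2']
print(switch.forward({"in_port": 7, "dst": "10.0.0.5"}))  # ['send_to_controller']
```

All the interesting decisions live in the controller; the switch needs only a fast table lookup, which is why SDN advocates argue the switch hardware can be simple and cheap.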

But OpenFlow is not SDN, which can be achieved with or without OpenFlow.  In fact, Nicira Networks recently announced two SDN customer deployments of its Network Virtualization Platform (NVP) — at DreamHost and at Rackspace, respectively — and you won’t find mention of OpenFlow in either press release, though OpenStack and its Quantum networking project receive prominent billing. (I’ll be writing more about the Nicira deployments soon.)

A Protocol in the Big Picture 

My point is not to diminish or disparage OpenFlow, which I think can and will be used gainfully in a number of SDN deployments. My point is that we have to be clear that the bigger picture of SDN is not interchangeable with the lower-level functionality of OpenFlow.

In that respect, Ferro is absolutely correct when he says that software-defined networking, and specifically SDN controller and application software, are “where the money is.” He conflates it with OpenFlow — which may or may not be involved, as we already have established — but his larger point is valid.  SDN, at the controller and above, is where all the big changes to the networking model, and to the industry itself, will occur.

Ferro also likely is correct in his assertion that OpenFlow, in and of itself, will not enable “a choice of using low cost network equipment instead of the expensive networking equipment that we use today.” In the near term, at least, I don’t see major prospects for change on that front as long as backward compatibility, interoperability with a bulging bag of networking protocols, and the agendas of the networking old guard are at play.

Cisco as Software Company

However, I think Ferro is wrong when he says that the market-leading vendors in switching and routing, including Cisco and Juniper, are software companies. Before you jump down my throat, presuming that’s what you intend to do, allow me to explain.

As Ferro says, Cisco and Juniper, among others, have placed increasing emphasis on the software features and functionality of their products. I have no objection there. But Ferro pushes his argument too far and suggests that the “networking business today is mostly a software business.”  It’s definitely heading in that direction, but Cisco, for one, isn’t there yet and probably won’t be for some time.  The key word, by the way, is “business.”

Cisco is developing more software these days, and it is placing more emphasis on software features and functionality, but what it overwhelmingly markets and sells to its customers are switches, routers, and other hardware appliances. Yes, those devices contain software, but Cisco sells them as hardware boxes, with box-oriented pricing and box-oriented channel programs, just as it has always done. Nitpickers will note that Cisco also has collaboration and video software, which it actually sells like software, but that remains an exception to the rule.

Talks Like a Hardware Company, Walks Like a Hardware Company

For the most part, in its interactions with its customers and the marketplace in general, Cisco still thinks and acts like a hardware vendor, software proliferation notwithstanding. It might have more software than ever in its products, but Cisco is in the hardware business.

In that respect, Cisco faces the same fundamental challenge that server vendors such as HP, Dell, and — yes — Cisco confront as they address a market that will be radically transformed by the rise of cloud services and ODM-hardware-buying cloud service providers. Can it think, figuratively and literally, outside the box? Just because Cisco develops more software than it did before doesn’t mean the answer is yes, nor does it signify that Cisco has transformed itself into a software vendor.

Let’s look, for example, at Cisco’s approach to SDN. Does anybody really believe that Cisco, with its ongoing attachment to ASIC-based hardware differentiation, will move toward a software-based delivery model that places the primary value on server-based controller software rather than on switches and routers? It’s just not going to happen, because it’s not what Cisco does or how it operates.

Missing the Signs 

And that brings us to my next objection. In arguing that Cisco and others have followed the market and provided the software their customers want, Ferro writes the following:

“Billion dollar companies don’t usually miss the obvious and have moved to enhance their software to provide customer value.”

Where to begin? Well, billion-dollar companies frequently have missed the obvious and gotten it horribly wrong, often when at least some individuals within those companies knew that their employer was getting it horribly wrong. That’s partly because past and present successes can sow the seeds of future failure. As Clayton M. Christensen argued in his classic book The Innovator’s Dilemma, industry leaders can have their vision blinkered by past successes, which prevents them from detecting disruptive innovations. In other cases, former market leaders get complacent or fail to acknowledge the seriousness of a competitive threat until it is too late.

The list of billion-dollar technology companies that have missed the obvious and failed spectacularly, sometimes disappearing into oblivion, is too long to enumerate here, but some names spring readily to mind. Right at the top (or bottom) of our list of industry ignominy, we find Nortel Networks. Once a company valued at nearly $400 billion, Nortel exists today only in thoroughly digested pieces that were masticated by other companies.

Is Cisco Decline Inevitable?

Today, we see a similarly disconcerting situation unfolding at Research In Motion (RIM), where many within the company saw the threat posed by Apple and by the emerging BYOD phenomenon but failed to do anything about it. Going further back into the annals of computing history, we can adduce examples such as Novell and Digital Equipment Corporation, as well as the raft of other minicomputer vendors that perished from the planet after the rise of the PC and client-server computing. Some employees within those companies might even have foreseen their firms’ dark fates, but the organizations in which they toiled were unable to rescue themselves.

They were all huge successes, billion-dollar companies, but, in the face of radical shifts in industry and market dynamics, they couldn’t change who and what they were. The industry graveyard is full of the carcasses of companies that were once enormously successful.

Am I saying this is what will happen to Cisco in an era of software-defined networking? No, I’m not prepared to make that bet. Cisco should be able to adapt and adjust better than the aforementioned companies were able to do, but it’s not a given. Just because Cisco is dominant in the networking industry today doesn’t mean that it will be dominant forever. As the old investment disclaimer goes, past performance does not guarantee future results. What’s more, Cisco has shown a fallibility of late that was not nearly as apparent in its boom years more than a decade ago.

Early Days, Promising Future

Finally, I’m not sure that Ferro is correct when he says the Open Networking Foundation’s (ONF) board members and its biggest service providers, including Google, will achieve CapEx but not OpEx savings with SDN. We really don’t know whether these companies are deriving OpEx savings because they’re keeping what they do with their operations and infrastructure highly confidential. Suffice it to say, they see compelling reasons to move away from buying their networking gear from the industry’s leading vendors, and they see similarly compelling reasons to embrace SDN.

Ferro ends his piece with two statements, the first of which I agree with wholeheartedly:

“That is the future of Software Defined Networking – better, dynamic, flexible and business focussed networking. But probably not much cheaper in the long run.”

As for that last statement, I believe there is insufficient evidence on which to render a verdict. As we’ve noted before, these are early days for SDN.

Peeling the Nicira Onion

Nicira emerged from pseudo-stealth yesterday, drawing plenty of press coverage in the process. “Network virtualization” is the concise, two-word marketing message the company delivered, on its own and through the analysts and journalists who greeted its long-awaited official arrival on the networking scene.

The company’s website opened for business this week replete with a new look and an abundance of new content. Even so, the content seemed short on hard substance, and those covering the company’s launch interpreted Nicira’s message in a surprisingly varied manner, somewhat like blind men groping different parts of an elephant. (Onion in the title, now an elephant; I’m already mixing flora and fauna metaphors.)

VMware of Networking Ambiguity

Many made the point that Nicira aims to become the “VMware of networking.” Interestingly, Big Switch Networks has aspirations to wear that crown, asserting on its website that “networking needs a VMware.” The theme also has been featured in posts on Network Heresy, Nicira CTO Martin Casado’s blog. He and his colleagues have written alternately that networking both doesn’t and does need a VMware. Confused? That’s okay. Many are in the same boat . . . or onion field, as the case may be.

The point Casado and company were trying to make is that network virtualization, while seemingly overdue and necessary, is not the same as server virtualization. As stated in the first in that series of posts at Network Heresy:

“Virtualized servers are effectively self contained in that they are only very loosely coupled to one another (there are a few exceptions to this rule, but even then, the groupings with direct relationships are small). As a result, the virtualization logic doesn’t need to deal with the complexity of state sharing between many entities.

A virtualized network solution, on the other hand, has to deal with all ports on the network, most of which can be assumed to have a direct relationship (the ability to communicate via some service model). Therefore, the virtual networking logic not only has to deal with N instances of N state (assuming every port wants to talk to every other port), but it has to ensure that state is consistent (or at least safely inconsistent) along all of the elements on the path of a packet. Inconsistent state can result in packet loss (not a huge deal) or much worse, delivery of the packet to the wrong location.”
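A back-of-the-envelope calculation illustrates the contrast drawn in that passage: server virtualization state grows roughly linearly with the number of VMs, while a virtual network in which any port may reach any other must track on the order of N² port-to-port relationships, all of which must be kept consistent.

```python
# Rough illustration of the scaling argument above (toy functions,
# not any vendor's actual bookkeeping).

def server_side_state(n_vms):
    # Loosely coupled VMs: state grows linearly.
    return n_vms

def network_side_state(n_ports):
    # Every port may have a relationship with every other port:
    # ordered pairs, so N * (N - 1), i.e., on the order of N^2.
    return n_ports * (n_ports - 1)

for n in (10, 100, 1000):
    print(n, server_side_state(n), network_side_state(n))
```

At a thousand ports, the network side is already tracking roughly a million relationships, which is why consistency (avoiding packet loss or, worse, misdelivery) is the hard part of the problem.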

In Context of SDN Universe

That issue aside, many writers covering the Nicira launch presented information about the company and its overall value proposition consistently. Some articles were more detailed than others. One at MIT’s Technology Review provided good historical background on how Casado first got involved with the challenge of network virtualization and how Nicira was formed to deliver a solution.

Jim Duffy provided a solid piece touching on the company’s origins, its venture-capital investors, and its early adopters and the problems Nicira is solving for them. He also touched on where Nicira appears to fit within the context of the wider SDN universe, which includes established vendors such as Cisco Systems, HP, and Juniper Networks, as well as startups such as Big Switch Networks, Embrane, and Contextream.

In that respect, it’s interesting to note what Embrane co-founder and President Dante Malagrino told Duffy:

 “The introduction of another network virtualization product is further validation that the network is in dire need of increased agility and programmability to support the emergence of a more dynamic data center and the cloud.”

“Traditional networking vendors aren’t delivering this, which is why companies like Nicira and Embrane are so attractive to service providers and enterprises. Embrane’s network services platform can be implemented within the re-architected approach proposed by Nicira, or in traditional network architectures. At the same time, products that address Layer 2-3 and platforms that address Layer 4-7 are not interchangeable and it’s important for the industry to understand the differences as the network catches up to the cloud.”

What’s Nicira Selling?

All of which brings us back to what Nicira actually is delivering to market. The company’s website offers videos, white papers, and product data sheets addressing the Nicira Network Virtualization Platform (NVP) and its Distributed Network Virtualization Infrastructure (DNVI), but I found the most helpful and straightforward explanations, strangely enough, on the Frequently Asked Questions (FAQ) page.

This is an instance of a FAQ page that actually does provide answers to common questions. We learn, for example, that the key components of the Nicira Network Virtualization Platform (NVP) are the following:

– The Controller cluster, a distributed control system

– The Management software, an operations console

– The RESTful API that integrates into a range of Cloud Management Systems (CMS), including a Quantum plug-in for OpenStack.

Those components, which constitute the NVP software suite, are what Nicira sells, albeit in a service-oriented monthly subscription model that scales per virtual network port.
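Nicira's materials don't spell out the API's wire format, but a RESTful integration between a cloud management system and a controller cluster typically looks something like the sketch below. The endpoint path and every field name here are invented placeholders for illustration, not Nicira's actual API:

```python
import json

# Hypothetical sketch of how a cloud management system (CMS) might drive
# a network-virtualization controller over a RESTful API. The endpoint
# and all field names below are placeholders of my own invention.

def make_logical_port_request(network_id: str, vm_interface: str) -> str:
    """Build a JSON body attaching a VM interface to a virtual network."""
    body = {
        "logical_network_id": network_id,
        "attachment": {"type": "vif", "interface_id": vm_interface},
        "admin_status_enabled": True,
    }
    return json.dumps(body)

# A real client would POST this body to the controller cluster, e.g.:
#   POST https://controller.example.net/api/logical-ports
payload = make_logical_port_request("net-42", "vif-7f3a")
print(payload)
```

The per-virtual-network-port subscription pricing mentioned above maps naturally onto this kind of API: each successful port attachment is a billable unit.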

Open vSwitch, Minor Role for OpenFlow 

We then learn that the NVP communicates with the physical network indirectly, through Open vSwitch. Ivan Pepelnjak (I always worry that I’ll misspell his name, but not the Ivan part) provides further insight into how Nicira leverages Open vSwitch. As Nicira notes, the NVP Controller communicates directly with Open vSwitch (OVS), which is deployed in server hypervisors. The server hypervisor then connects to the physical network and end hosts connect to the vswitch. As a result, NVP does not talk directly to the physical network.

As for OpenFlow, its role is relatively minor. As Nicira explains: “OpenFlow is the communications protocol between the controller and OVS instances at the edge of the network. It does not directly communicate with the physical network elements and is thus not subject to scaling challenges of hardware-dependent, hop-by-hop OpenFlow solutions.”
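For a sense of the mechanics, pointing a hypervisor-resident Open vSwitch instance at an external controller is done with standard `ovs-vsctl` commands. The bridge name and controller address below are placeholders; only the commands themselves are real OVS CLI:

```shell
# Create an integration bridge on the hypervisor (bridge name illustrative)
ovs-vsctl add-br br-int

# Point the bridge's OpenFlow control channel at the controller
# (address is a placeholder; 6633 was the conventional OpenFlow port)
ovs-vsctl set-controller br-int tcp:192.0.2.10:6633

# Confirm which controller the bridge is configured to use
ovs-vsctl get-controller br-int
```

This is exactly the edge-only arrangement Nicira describes: the controller programs vswitches in the hypervisors, and the physical fabric underneath is left untouched.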

Questions About L4-7 Network Services

Nicira sees its Network Virtualization Platform delivering value in a number of different contexts, including:

– Provision of hardware-independent virtual networks

– Virtual-machine mobility across subnet boundaries (while maintaining L2 adjacency)

– Edge-enforced, dynamic QoS and security policies (filters, tagging, policy routing, etc.) bound to virtual ports

– Centralized system-wide visibility and monitoring

– Address-space isolation (L2 and L3)

– Layer 4-7 services

Now that last capability provokes some questions that cannot be answered in the FAQ.

Nicira says its NVP can integrate with third-party Layer 4-7 services, but it also says services can be created by Nicira or its customers.  Notwithstanding Embrane’s perfectly valid contention that its network-services platform can be delivered in conjunction with Nicira’s architectural model, there is a distinct possibility Nicira might have other plans.

This is something that bears watching, not only by Embrane but also by longstanding Layer 4-7 service-delivery vendors such as F5 Networks. At this point, I don’t pretend to know how far or how fast Nicira’s ambitions extend, but I would imagine they’ll be demarcated, at least partly, by the needs and requirements of its customers.

Nicira’s Early Niche

Speaking of which, Nicira has an impressive list of early adopters, including AT&T, eBay, Fidelity Investments, Rackspace, Deutsche Telekom, and Japan’s NTT. You’ll notice a commonality in the customer profiles, even if their application scenarios vary. Basically, these all are public cloud providers, of one sort or another, and they have what are called “web-scale” data centers.

While Nicira and Big Switch Networks both are purveyors of “network virtualization” and controller platforms — and both proclaim that networking needs a VMware — they’re aiming at different markets. Big Switch is focusing on the enterprise and the private cloud, whereas Nicira is aiming for large public cloud-service providers or big enterprises that provide public-cloud services (such as Fidelity).

Nicira has taken care in selecting its market. An earlier post on Casado’s blog suggests that he and Nicira believe that OpenFlow-based SDNs might be a solution in search of a problem already being addressed satisfactorily within many enterprises. I’m sure the team at Big Switch would argue otherwise.

At the same time, Nicira probably has conceded that it won’t be patronized by Open Networking Foundation (ONF) board members such as Google, Facebook, and Microsoft, each of which is likely to roll its own network-virtualization systems, controller platforms, and SDN applications. These companies not only have the resources to do so, but they also have a business imperative that drives them in that direction. This is especially true for Google, which views its data-center infrastructure as a competitive differentiator.

Telcos Viable Targets

That said, I can see at least a couple ONF board members that might find Nicira’s pitch compelling. In fact, one, Deutsche Telekom, already is on board, at least in part, and perhaps Verizon will come along later. The telcos are more likely than a Google to need assistance with SDN rollouts.

One last note on Nicira before I end this already-prolix post. In the feature article at Technology Review, Casado says it’s difficult for Nicira to impress a layperson with its technology, that “people do struggle to understand it.” That’s undoubtedly true, but Nicira needs to keep refining its message, for its own sake as well as for the sake of prospective customers and other stakeholders.

That said, the company is stocked with impressive minds, on both the business and technology sides of the house, and I’m confident it will get there.

Exploring OpenStack’s SDN Connections

Pursuant to my last post, I will now examine the role of OpenStack in the context of software-defined networking (SDN). As you will recall, it was one of the alternative SDN enabling technologies mentioned in a recent article and sidebar at Network World.

First, though, I want to note that, contrary to the concerns I expressed in the preceding post, I wasn’t distracted by a shiny object before getting around to writing this installment. I feared I would be, but my powers of concentration and focus held sway. It’s a small victory, but I’ll take it.

Road to Quantum

Now, on to OpenStack, which I’ve written about previously, though admittedly not in the context of SDNs. As for how networking evolved into a distinct facet of OpenStack, Martin Casado, chief technology officer at Nicira, offers a thorough narrative at the Open vSwitch website.

Casado begins by explaining that OpenStack is a “cloud management system (CMS) that orchestrates compute, storage, and networking to provide a platform for building on demand services such as IaaS.” He notes that OpenStack’s primary components were OpenStack Compute (Nova), OpenStack Storage (Swift), and OpenStack Image Services (Glance), and he also provides an overview of their respective roles.

Then he asks, as one might, what about networking? At this point, I will quote directly from his Open vSwitch post:

“Noticeably absent from the list of major subcomponents within OpenStack is networking. The historical reason for this is that networking was originally designed as a part of Nova which supported two networking models:

● Flat Networking – A single global network for workloads hosted in an OpenStack Cloud.

● VLAN based Networking – A network segmentation mechanism that leverages existing VLAN technology to provide each OpenStack tenant its own private network.

While these models have worked well thus far, and are very reasonable approaches to networking in the cloud, not treating networking as a first class citizen (like compute and storage) reduces the modularity of the architecture.”

As a result of Nova’s networking shortcomings, which Casado enumerates in detail, Quantum, a standalone networking component, was developed.

Network Connectivity as a Service

The OpenStack wiki defines Quantum as “an incubated OpenStack project to provide “network connectivity as a service” between interface devices (e.g., vNICs) managed by other Openstack services (e.g., nova).” On that same wiki, Quantum is touted as being able to support advanced network topologies beyond the scope of Nova’s FlatManager or VLanManager; as enabling anyone to “build advanced network services (open and closed source) that plug into Openstack networks”; and as enabling new plugins (open and closed source) that introduce advanced network capabilities.

Okay, but how does it relate specifically to SDNs? That’s a good question, and James Urquhart has provided a clear and compelling answer, which later was summarized succinctly by Stuart Miniman at Wikibon. What Urquhart wrote actually connects the dots between OpenStack’s Quantum and OpenFlow-enabled SDNs. Here’s a salient excerpt:

“. . . . how does OpenFlow relate to Quantum? It’s simple, really. Quantum is an application-level abstraction of networking that relies on plug-in implementations to map the abstraction(s) to reality. OpenFlow-based networking systems are one possible mechanism to be used by a plug-in to deliver a Quantum abstraction.

OpenFlow itself does not provide a network abstraction; that takes software that implements the protocol. Quantum itself does not talk to switches directly; that takes additional software (in the form of a plug-in). Those software components may be one and the same, or a Quantum plug-in might talk to an OpenFlow-based controller software via an API (like the Open vSwitch API).”
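Urquhart's layering — an abstraction on top, plug-ins mapping it onto real mechanisms underneath — is the classic provider pattern. A minimal Python sketch of the idea (my simplification; the actual Quantum plug-in interface is larger and differs in its details):

```python
from abc import ABC, abstractmethod

# Simplified sketch of the plug-in layering Urquhart describes: the API
# layer exposes abstract networks and ports, and a plug-in maps those
# calls onto a concrete back end (an OpenFlow controller, vendor gear,
# etc.). This is my illustration, not the real Quantum plug-in API.

class NetworkPlugin(ABC):
    @abstractmethod
    def create_network(self, tenant_id: str, name: str) -> str: ...

    @abstractmethod
    def create_port(self, network_id: str) -> str: ...

class InMemoryPlugin(NetworkPlugin):
    """Stand-in back end; a real plug-in would talk to a controller."""
    def __init__(self):
        self.networks, self.ports = {}, {}

    def create_network(self, tenant_id, name):
        net_id = f"net-{len(self.networks)}"
        self.networks[net_id] = {"tenant": tenant_id, "name": name}
        return net_id

    def create_port(self, network_id):
        port_id = f"port-{len(self.ports)}"
        self.ports[port_id] = {"network": network_id}
        return port_id

# The API layer only ever sees the abstract interface:
plugin: NetworkPlugin = InMemoryPlugin()
net = plugin.create_network("tenant-a", "web-tier")
port = plugin.create_port(net)
print(net, port)
```

Swapping `InMemoryPlugin` for a plug-in that drives an OpenFlow controller changes nothing above the interface, which is precisely Urquhart's point.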

Cisco’s Contribution

So, that addresses the complementary functionality of OpenStack’s Quantum and OpenFlow, but, as Urquhart noted, OpenFlow is just one mechanism that can be used by a plug-in to deliver a Quantum abstraction. Further to that point, bear in mind that Quantum, as recounted on the OpenStack wiki, can be used  to “build advanced network services (open and closed source) that plug into OpenStack networks” and to facilitate new plugins that introduce advanced network capabilities.

Consequently, when it comes to using OpenStack in SDNs, OpenFlow isn’t the only complementary option available. In fact, Cisco is in on the action, using Quantum to “develop API extensions and plug-in drivers for creating virtual network segments on top of Cisco NX-OS and UCS.”

Cisco portrays itself as a major contributor to OpenStack’s Quantum, and the evidence seems to support that assertion. Cisco also has indicated qualified support for OpenFlow, so there’s a chance OpenStack and OpenFlow might intersect on a Cisco roadmap. That said, Cisco’s initial OpenStack-related networking forays relate to its proprietary technologies and existing products.

Citrix, Nicira, Rackspace . . . and Midokura

Other companies have made contributions to OpenStack’s Quantum, too. In a post at Network World, Alan Shimel of The CISO Group cites the involvement of Nicira, Cisco, Citrix, Midokura, and Rackspace. From what Nicira’s Casado has written and said publicly, we know that OpenFlow is in the mix there. It seems to be in the picture at Rackspace, too. Citrix has published posts about Quantum, including this one, but I’m not sure where they’re going with it, though XenServer, Open vSwitch, and, yes, OpenFlow are likely to be involved.

Finally, we have Midokura, a Japanese company that has a relatively low profile, at least on this side of the Pacific Ocean. According to its website, it was established early in 2010, and it had just 12 employees as of the end of April 2011.

If my currency-conversion calculations (from Japanese yen) are correct, Midokura also had about $1.5 million in capital as of that date. Earlier that same month, the company announced seed funding of about $1.3 million. Investors were Bit-Isle, a Japanese data-center company; NTT Investment Partners, an investment vehicle of Nippon Telegraph & Telephone Corp. (NTT); 1st Holdings, a Japanese ISV that specializes in tools and middleware; and various individual investors, including Allen Miner, CEO of SunBridge Corporation.

On its website, Midokura provides an overview of its MidoNet network-virtualization platform, which is billed as providing a solution to the problem of inflexible and expensive large-scale physical networks that tend to lock service providers into a single vendor.

Virtual Network Model in Cloud Stack

In an article published at TechCrunch this spring, at about the time Midokura announced its seed round, the company claimed to be the only one to have “a true virtual network model” in a cloud stack. The TechCrunch piece also said the MidoNet platform could be integrated “into existing products, as a standalone solution, via a NaaS model, or through Midostack, Midokura’s own cloud (IaaS/EC2) distribution of OpenStack (basically the delivery mechanism for Midonet and the company’s main product).”

Although the company was accepting beta customers last spring, it hasn’t updated its corporate blog since December 2010. Its “Events” page, however, shows signs of life, with Midokura indicating that it will be attending or participating in the grand opening of Rackspace’s San Francisco office on December 1.

Perhaps we’ll get an update then on Midokura’s progress.

Rackspace’s Bridge Between Clouds

OpenStack has generated plenty of sound and fury during the past several months, and, with sincere apologies to William Shakespeare, there’s evidence to suggest the frenzied activity actually signifies something.

Precisely what it signifies, and how important OpenStack might become, is open to debate, of which there has been no shortage. OpenStack is generally depicted as an open-source cloud operating system, but that might be a generous interpretation. On the OpenStack website, the following definition is offered:

“OpenStack is a global collaboration of developers and cloud computing technologists producing the ubiquitous open source cloud computing platform for public and private clouds. The project aims to deliver solutions for all types of clouds by being simple to implement, massively scalable, and feature rich. The technology consists of a series of interrelated projects delivering various components for a cloud infrastructure solution.”

Just for fun and giggles (yes, that phrase has been modified so as not to offend reader sensibilities), let’s parse that passage, shall we? In the words of the OpenStackers themselves, their project is an open-source cloud-computing platform for public and private clouds, and it is reputedly simple to implement, massively scalable, and feature rich. Notably, it consists of a “series of interrelated projects delivering various components for a cloud infrastructure solution.”

Simple for Some

Given that description, especially the latter reference to interrelated projects spawning various components, one might wonder exactly how “simple” OpenStack is to implement and by whom. That’s a question others have raised, including David Linthicum in a recent piece at InfoWorld. In that article, Linthicum notes that undeniable vendor enthusiasm and burgeoning market momentum accrue to OpenStack — the community now has 138 member companies (and counting), including big-name players such as HP, Dell, Intel, Rackspace, Cisco, Citrix, Brocade, and others — but he also offers the following caveat:

“So should you consider OpenStack as your cloud computing solution? Not on its own. Like many open source projects, it takes a savvy software vendor, or perhaps cloud provider, to create a workable product based on OpenStack. The good news is that many providers are indeed using OpenStack as a foundation for their products, and most of those are working, or will work, just fine.”

Creating Value-Added Services

Meanwhile, taking issue with a recent InfoWorld commentary by Savio Rodrigues — who argued that OpenStack will falter while its open-source counterpart Eucalyptus will thrive — James Urquhart, formerly of Cisco and now VP of product strategy at enStratus, made this observation:

“OpenStack is not a cloud service, per se, but infrastructure automation tuned to cloud-model services, like IaaS, PaaS and SaaS. Install OpenStack, and you don’t get a system that can instantly bill customers, provide a service catalog, etc. That takes additional software.

What OpenStack represents is the commodity element of cloud services: the VM, object, server image and networking management components. Yeah, there is a dashboard to interact with those commodity elements, but it is not a value-add capability in-and-of itself.

What HP, Dell, Cisco, Citrix, Piston, Nebula and others are doing with OpenStack is creating value-add services on top (or within) the commodity automation. Some focus more on “being commodity”, others focus more on value-add, but they are all building on top of the core OpenStack projects.”

New Revenue Stream for Rackspace

All of which brings us, in an admittedly roundabout fashion, to Rackspace’s recent announcement of its Rackspace Cloud Private Edition, a packaged version of OpenStack components that can be used by enterprises for private-cloud deployments. This move makes sense for Rackspace on a couple of levels.

First off, it opens up a new revenue stream for the company. While Rackspace won’t try to make money on the OpenStack software or the reference designs — featuring a strong initial emphasis on Dell servers and Cisco networking gear for now, though bare-bones OpenCompute servers are likely to be embraced before long — it will provide value-add, revenue-generating managed services to customers of Rackspace Cloud Private Edition. These managed services will comprise installation of OpenStack updates, analysis of system issues, and assistance with specific questions relating to systems engineering. Some security-related services also will be offered. While the reference architecture and the software are available now, Rackspace’s managed services won’t be available until January.

Building a Bridge

The launch of Rackspace Cloud Private Edition is a diversification initiative for Rackspace, which hitherto has made its money by hosting and managing applications and computing services for others in its own data centers. The OpenStack bundle takes it into the realm of providing managed services in its customers’ data centers.

As mentioned above, this represents a new revenue stream for Rackspace, but it also provides a technological bridge that will allow customers who aren’t ready for multi-tenant public cloud services today to make an easy transition to Rackspace’s data centers at some future date. It’s a smart move, preventing prospective customers from moving to another platform for private cloud deployment, ensuring in the process that said customers don’t enter the orbit of another vendor’s long-term gravitational pull.

The business logic coheres. For each customer engagement, Rackspace gets a payoff today, and potentially a larger one at a later date.