Category Archives: IBM

Big Switch Emphasizes Ecosystem, Channel

Big Switch Networks made the news very early today — one article was posted precisely at midnight ET — with an announcement of general availability of its SDN controller, two applications that run on it, and an ecosystem of partners.

Customers also are in the picture, though it wasn’t made explicit in the Big Switch press release whether Fidelity Investments and Goldman Sachs are running Big Switch’s products in production networks.  In a Network World article, however, Jim Duffy writes that Fidelity and Goldman Sachs are “production customers for the Big Switch Open SDN product suite.” 

Controller, Applications, Ecosystem

The company’s announced products, encompassed within its Open Software Defined Networking architecture, feature the Big Network Controller, a proprietary version of the open-source Floodlight controller, and the two aforementioned applications. An SDN controller without applications is like, well, an operating system without applications. Accordingly, Big Switch has introduced Big Virtual Switch, an application for network virtualization, and Big Tap, a unified network monitoring application. 

Big Virtual Switch is the company’s answer to Nicira’s Network Virtualization Platform (NVP).  Big Switch says the product supports up to 32,000 virtual-network segments and can be integrated with cloud-management platforms such as OpenStack (Quantum), CloudStack, Microsoft System Center, and VMware vCenter.  As Big Switch illustrates on its website, Big Virtual Switch can be deployed on Big Network Controller in pure overlay networks, in pure OpenFlow networks, and in hybrid network-virtualization environments.  
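For readers who want to picture what that cloud-platform integration looks like in practice, below is a minimal, hypothetical sketch using the standard python-quantumclient. A tenant (or the cloud-management platform acting on its behalf) asks Quantum for a network and a subnet; if Big Virtual Switch were configured as the Quantum plugin, as the announcement implies it can be, the request would be realized as a virtual network segment by Big Network Controller. The credentials, endpoints, and the plugin assumption are placeholders of mine, not details from Big Switch.

```python
# Hypothetical sketch: requesting a tenant network through OpenStack Quantum.
# If a Big Virtual Switch plugin were configured, this request would be
# realized as a virtual network segment by Big Network Controller.
# Credentials and endpoints below are placeholders.
from quantumclient.v2_0 import client

quantum = client.Client(username="admin",
                        password="secret",
                        tenant_name="demo",
                        auth_url="http://keystone.example.com:5000/v2.0/")

# Ask for an isolated L2 segment for the tenant...
net = quantum.create_network({"network": {"name": "web-tier"}})

# ...and attach an IP subnet to it.
quantum.create_subnet({"subnet": {
    "network_id": net["network"]["id"],
    "ip_version": 4,
    "cidr": "10.0.1.0/24",
}})
```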

According to the company, Big Virtual Switch can deliver significant CAPEX and OPEX benefits. A graphical figure, tagged "Economics of Big Virtual Switch" and included in a product data sheet, claims the company’s L2/L3 network virtualization facilitates “up to 50% more VMs per rack” and delivers CAPEX savings of $500,000 per rack annually and OPEX savings of $30,000 per rack annually. For those estimates, Big Switch assumes a rack size of 40 servers and suggests savings can be accrued across servers, operating-system instances, storage, networking, and operations.
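Taking the vendor’s figures at face value, the arithmetic behind the claim is simple enough to restate. The per-server numbers in this back-of-envelope sketch are my own calculation from the stated assumptions, not figures from the data sheet.

```python
# Back-of-envelope restatement of Big Switch's claimed per-rack savings,
# using only the assumptions quoted above; the per-server math is mine.
servers_per_rack = 40
capex_per_rack = 500000   # USD per rack per year, as claimed
opex_per_rack = 30000     # USD per rack per year, as claimed

total_per_rack = capex_per_rack + opex_per_rack   # 530,000
per_server = total_per_rack / servers_per_rack    # 13,250

print(f"Claimed savings per rack:   ${total_per_rack:,}/year")
print(f"Implied savings per server: ${per_server:,.0f}/year")
```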

Strategies in Flux

Big Virtual Switch and Big Tap are essential SDN applications, but the company’s ultimate success in the marketplace will turn on the support its Big Network Controller receives from third-party vendors. Big Switch is aware of its external dependencies, which is why it has placed so much emphasis on its ecosystem, which it says includes A10 Networks, Arista Networks, Broadcom, Brocade, Canonical, Cariden Technologies, Citrix, Cloudscaling, Coraid, Dell, Endace, Extreme Networks, F5 Networks, Fortinet, Gigamon, Infoblox, Juniper Networks, Mellanox Technologies, Microsoft, Mirantis, Nebula, Palo Alto Networks, Piston Cloud Computing, Radware, StackOps, ThreatSTOP, and vArmour. The Big Switch press release includes an appendix of “supporting quotes” from those companies, but the company will require more than lip service from its ecosystem. 

Some companies will find that their interests are well aligned with those of Big Switch, but others are likely to be less motivated to put energy and resources into Big Switch’s SDN platform.  If you consider the vendor names listed above, you might deduce that the SDN strategies of more than a few are in flux. Some are considering whether to offer SDN controllers of their own. Even those who have no controller aspirations might be disinclined to bet too heavily or too early on a controller platform. They’ll follow the customers and the money. 

A growing number of commercial controllers are on the market (VMware/Nicira, NEC, and Big Switch) or have been announced as coming to market (IBM, HP, Cisco). Others will follow. Loyalties will shift as controller fortunes wax and wane. 

Courting the Channel 

With that in mind, Big Switch is seeking to enlist channel partners as well as technology partners. In a CRN article, we learn that Big Switch “has begun to recruit systems integrator and data center infrastructure-focused solution providers that can consult and design network architecture using Big Switch software and products from a galaxy of ecosystem partners.” In fact, Big Switch wants all its commercial sales to go through channel partners. 

In the CRN piece, Dave Butler, VP of sales at Big Switch, is candid about the symbiotic relationship the company desires from partners:

“None of our products work well alone in a data center — this is a very rigorous and rich ecosystem of partners. We’ll pay a finder’s fee to anyone who brings the right opportunity to us, but we’re not really a product sale. We need the integrators that can create a bundled solution, because that’s what makes the difference.”

. . . . “We bring them (partners) in as the specialist, and they have probably a greater touch than we might. We are not taking deals direct. Then, you have to do all the work by yourself. This is a perfect solution for their services and expertise. And, they can make money with us.”

Needs a Little Help from Its Friends

The plan is clear. Big Switch’s vendor ecosystem is meant to attract channel partners that already are selling those vendors’ products and are interested in expanding into SDN solutions. The channel partners, including SIs and datacenter-solution providers, will then bring Big Switch’s SDN platform to customers, with whom they have existing relationships. 

In theory, it all coheres. Big Switch knows it can’t go it alone against industry giants. It knows it needs more than a little help from its friends in the vendor community and the channel. 

For Big Switch, the vendor ecosystem expedites channel recruitment, and an effective channel accelerates exposure to customers. Big Switch has to move fast and demonstrate staying power. The controller race is far from over. 


Amazon-RIM: Summer Reunion?

Think back to last December, just before the holidays. You might recall a Reuters report, quoting “people with knowledge of the situation,” claiming that Research in Motion (RIM) rejected takeover propositions from Amazon.com and others.

The report wasn’t clear on whether the informal discussions resulted in any talk of price between Amazon and RIM, but apparently no formal offer was made. RIM, then still under the stewardship of former co-CEOs Jim Balsillie and Mike Lazaridis, reportedly preferred to remain independent and to address its challenges alone.

I Know What You Discussed Last Summer

Since then, a lot has happened. When the Reuters report was published — on December 20, 2011 — RIM’s market value had plunged 77 percent during the previous year, sitting then at about $6.8 billion. Today, RIM’s market capitalization is $3.7 billion. What’s more, the company now has Thorsten Heins as its CEO, not Balsillie and Lazaridis, who were adamantly opposed to selling the company. We also have seen recent reports that IBM approached RIM regarding a potential acquisition of the Waterloo, Ontario-based company’s enterprise business, and rumors have surfaced that RIM might sell its handset business to Amazon or Facebook.

Meanwhile, RIM’s prospects for long-term success aren’t any brighter than they were last winter, and activist shareholders, not interested in a protracted turnaround effort, continue to lobby for a sale of the company.

As for Amazon, it is said to be on the cusp of entering the smartphone market, presumably using a forked version of Android, which is what it runs on the Kindle tablet.  From the vantage point of the boardroom at Amazon, that might not be a sustainable long-term plan. Google is looking more like an Amazon competitor, and the future trajectory of Android is clouded by Google’s strategic considerations and by legal imbroglios relating to patents. Those presumably were among the reasons Amazon approached RIM last December.

Uneasy Bedfellows

It’s no secret that Amazon and Google are uneasy Android bedfellows. As Eric Jackson wrote just after the Reuters story hit the wires:

Amazon has never been a big supporter of Google’s Android OS for its Kindle. And Google’s never been keen on promoting Amazon as part of the Android ecosystem. It seems that both companies know this is just a matter of time before each leaves the other.

Yes, there’s some question as to how much value inheres in RIM’s patents. Estimates on their worth are all over the map. Nevertheless, RIM’s QNX mobile-operating system could look compelling to Amazon. With QNX and with RIM’s patents, Amazon would have something more than a contingency plan against any strategic machinations by Google or any potential litigiousness by Apple (or others).  The foregoing case, of course, rests on the assumption that QNX, rechristened BlackBerry 10, is as far along as RIM claims. It also rests on the assumption that Amazon wants a mobile platform all its own.

It was last summer when Amazon reportedly made its informal approach to RIM. It would not be surprising to learn that a reprise of discussions occurred this summer. RIM might be more disposed to consider a formal offer this time around.

Xsigo: Hardware Play for Oracle, Not SDN

When I wrote about Xsigo earlier this year, I noted that many saw Oracle as a potential acquirer of the I/O virtualization vendor. Yesterday morning, Oracle made those observers look prescient, pulling the trigger on a transaction of undisclosed value.

Chris Mellor at The Register calculates that Oracle might have paid about $800 million for Xsigo, but we don’t know. What we do know is that Xsigo’s financial backers were looking for an exit. We also know that Oracle was willing to accommodate it.

For the Love of InfiniBand, It’s Not SDN

Some think Oracle bought a software-defined networking (SDN) company. I was shocked at how many journalists and pundits repeated the mantra that Oracle had moved into SDN with its Xsigo acquisition. That is not right, folks, and knowledgeable observers have tried to rectify that misconception.

I’ve gotten over a killer flu, and I have a residual sinus headache that sours my usually sunny disposition, so I’m in no mood to deliver a remedial primer on the fundamentals of SDN. Suffice it to say, readers of this forum and those familiar with the pronouncements of the ONF will understand that what Xsigo does, namely I/O virtualization, is not SDN. That is not to say that what Xsigo does is not valuable, perhaps especially to Oracle. Nonetheless, it is not SDN.

Incidentally, I have seen a few commentators throwing stones at the Oracle marketing department for depicting Xsigo as an SDN player, comparing it to Nicira Networks, which VMware is in the process of acquiring for a princely sum of $1.26 billion. It’s probably true that Oracle’s marketing mavens are trying to gild their new lily by covering it with splashes of SDN gold, but, truth be told, the marketing team at Xsigo began dressing their company in SDN garb earlier this year, when it became increasingly clear that SDN was a lot more than an ephemeral science project involving OpenFlow and boffins in lab coats.

Why Confuse? It’ll be Obvious Soon Enough

At Network Computing, Howard Marks tries to get everybody onside. I encourage you to read his piece in its entirety, because it provides some helpful background and context, but his superbly understated money quote is this one: “I’ve long been intrigued by the concept of I/O virtualization, but I think calling it software-defined networking is a stretch.”

In this industry, words are stretched and twisted like origami until we can no longer recognize their meaning. The result, more often than not, is befuddlement and confusion, as we witnessed yesterday, an outcome that really doesn’t help anybody. In fact, I would argue that Oracle and Xsigo have done themselves a disservice by playing the SDN card.

As Marks points out, “Xsigo’s use of InfiniBand is a good fit with Oracle’s Exadata and other clustered solutions.” What’s more, Matt Palmer, who notes that Xsigo is “not really an SDN acquisition,” also writes that “Oracle is the perfect home for Xsigo.” Palmer makes the salient point that Xsigo is essentially a hardware play for Oracle, one that aligns with Oracle’s hardware-centric approaches to compute and storage.

Oracle: More Like Cisco Than Like VMware

Oracle could have explained its strategy and detailed the synergies between Xsigo and its family of hardware-engineered “Exasystems” (Exadata and Exalogic) —  and, to be fair, it provided some elucidation (see slide 11 for a concise summary) — but it muddied the waters with SDN misdirection, confusing some and antagonizing others.

Perhaps my analysis is too crude, but I see a sharp divergence between the strategic direction VMware is heading with its acquisition of Nicira and the path Oracle is taking with its Exasystems and Xsigo. Remember, Oracle, after the Sun acquisition, became a proprietary hardware vendor. Its focus is on embedding proprietary hooks and competitive differentiation into its hardware, much like Cisco Systems and the other converged-infrastructure players.

VMware’s conception of a software-defined data center is a completely different proposition. Both offer virtualization, both offer programmability, but VMware treats the underlying abstracted hardware as an undifferentiated resource pool. Conversely, Oracle and Cisco want their engineered hardware to play integral roles in data-center virtualization. Engineered hardware is what they do and who they are.

Taking the Malocchio in New Directions

In that vein, I expect Oracle to look increasingly like Cisco, at least on the infrastructure side of the house. Does that mean Oracle soon will acquire a storage player, such as NetApp, or perhaps another networking company to fill out its data-center portfolio? Maybe the latter first, because Xsigo, whatever its merits, is an I/O virtualization vendor, not a switching or routing vendor. Oracle still has a networking gap.

For reasons already belabored, Oracle is an improbable SDN player. I don’t see it as the likeliest buyer of, say, Big Switch Networks. IBM is more likely to take that path, and I might even get around to explaining why in a subsequent post. Instead, I could foresee Oracle taking out somebody like Brocade, presuming the price is right, or perhaps Extreme Networks. Both vendors have been on and off the auction block, and though Oracle’s Larry Ellison once disavowed acquisitive interest in Brocade, circumstances and Oracle’s disposition have changed markedly since then.

Oracle, which has entertained so many bitter adversaries over the years — IBM, SAP, Microsoft, Salesforce, and HP among them — now appears ready to cast its “evil eye” toward Cisco.

Some Thoughts on VMware’s Strategic Acquisition of Nicira

If you were a regular or occasional reader of Nicira Networks CTO Martin Casado’s blog, Network Heresy, you’ll know that his penultimate post dealt with network virtualization, a topic of obvious interest to him and his company. He had written about network virtualization many times, and though Casado would not describe the posts as such, they must have looked like compelling sales pitches to the strategic thinkers at VMware.

Yesterday, as probably everyone reading this post knows, VMware announced its acquisition of Nicira for $1.26 billion. VMware will pay $1.05 billion in cash and $210 million in unvested equity awards. The ubiquitous Frank Quattrone and his Qatalyst Partners, which reportedly had been hired previously to shop Brocade Communications, served as Nicira’s adviser.

Strategic Buy

VMware should have surprised no one when it emphasized that its acquisition of Nicira was a strategic move, likely to pay off in years to come, rather than one that will produce appreciable near-term revenue. As Reuters and the New York Times noted, VMware’s buy price for Nicira was 25 times the amount ($50 million) invested in the company by its financial backers, which include venture-capital firms Andreessen Horowitz, Lightspeed, and NEA. Diane Greene, co-founder and former CEO of VMware — replaced four years ago by Paul Maritz — had an “angel” stake in Nicira, as did Andy Rachleff, a former general partner at Benchmark Capital.

Despite its acquisition of Nicira, VMware says it’s not “at war” with Cisco. Technically, that’s correct. VMware and its parent company, EMC, will continue to do business with Cisco as they add meat to the bones of their data-center virtualization strategy. But the die was cast, and  Cisco should have known it. There were intimations previously that the relationship between Cisco and EMC had been infected by mutual suspicion, and VMware’s acquisition of Nicira adds to the fear and loathing. Will Cisco, as rumored, move into storage? How will Insieme, helmed by Cisco’s aging switching gods, deliver a rebuttal to VMware’s networking aspirations? It won’t be too long before the answers trickle out.

Still, for now, Cisco, EMC, and VMware will protest that it’s business as usual. In some ways, that will be true, but it will also be a type of strategic misdirection. The relationship between EMC and Cisco will not be the same as it was before yesterday’s news hit the wires. When these partners get together for meetings, candor could be conspicuous by its absence.

Acquisitive Roads Not Traveled

Some have posited that Cisco might have acquired Nicira if VMware had not beaten it to the punch. I don’t know about that. Perhaps Cisco might have bought Nicira if the asking price were low, enabling Cisco to effectively kill the startup and be done with it. But Cisco would not have paid $1.26 billion for a company whose approach to networking directly contradicts Cisco’s hardware-based business model and market dominance. One typically doesn’t pay that much to spike a company, though I suppose if the prospective buyer were concerned enough about a strategic technology shift and a major market inflection, it might do so. In this case, though, I suspect Cisco was blindsided by VMware. It just didn’t see this coming — at least not now, not at such an early stage of Nicira’s development.

Similarly, I didn’t see Microsoft or Citrix as buyers of Nicira. Microsoft is distracted by its cloud-service provider aspirations, and the $1.26 billion would have been too rich for Citrix.

IBM’s Moves and Cisco’s Overseas Cash Hoard

One company I had envisioned as a potential (though less likely) acquirer of Nicira was IBM, which already has a vSwitch. IBM might now settle for the SDN-controller technology available from Big Switch Networks. The two have been working together on IBM’s Open Data Center Interoperable Network (ODIN), and Big Switch’s technology fits well with IBM’s PureSystems and its top-down model of having application workloads command and control  virtualized infrastructure. As the second network-virtualization domino to fall, Big Switch likely will go for a lower price than did Nicira.

On Twitter, Dell’s Brad Hedlund asked whether Cisco would use its vast cash hoard to strike back with a bold acquisition of its own. Cisco has two problems here. First, I don’t see an acquisition that would effectively blunt VMware’s move. Second, about 90 percent of Cisco’s cash (more than $42 billion) is offshore, and CEO John Chambers doesn’t want to take a tax hit on its repatriation. He had been hoping for a “tax holiday” from the U.S. government, but that’s not going to happen in the middle of an election campaign, during a macroeconomic slump in which plenty of working Americans are struggling to make ends meet. That means a significant U.S.-based acquisition likely is off the table, unless the target company is very small or is willing to take Cisco stock instead of cash.

Cisco’s Innovator’s Dilemma

Oh, and there’s a third problem for Cisco, mentioned earlier in this prolix post. Cisco doesn’t want to embrace this SDN stuff. Cisco would rather resist it. The Cisco ONE announcement really was about Cisco’s take on network programmability, not about SDN-type virtualization in which overlay networks run atop an underlying physical network.

Cisco is caught in a classic innovator’s dilemma, held captive by the success it has enjoyed selling prodigious amounts of networking gear to its customers, and I don’t think it can extricate itself. It’s built a huge and massively successful business selling a hardware-based value proposition predicated on switches and routers. It has software, but it’s not really a software company.

For Cisco, the customer value, the proprietary hooks, are in its boxes. Its whole business model — which, again, has been tremendously successful — is based around that premise. The entire company is based around that business model.  Cisco eventually will have to reinvent itself, like IBM did after it failed to adapt to client-server computing, but the day of reckoning hasn’t arrived.

On the Defensive

Expect Cisco to continue to talk about the northbound interface (which can provide intelligence from the switch) and about network programmability, but don’t expect networking’s big leopard to change its spots. Cisco will try to portray the situation differently, but it’s defending rather than attacking, trying to hold off the software-based marauders of infrastructure virtualization as long as possible. The doomsday clock on when they’ll arrive in Cisco data centers just moved up a few ticks with VMware’s acquisition of Nicira.

What about the other networking players? Sadly, HP hasn’t figured out what to do about SDN, even though OpenFlow is available on its former ProCurve switches. HP has a toe dipped in the SDN pool, but it doesn’t seem willing to take the initiative. Juniper, which previously displayed ingenuity in bringing forward QFabric, is scrambling for an answer. Brocade is pragmatically embracing hybrid control planes to maintain account presence and margins in the near- to intermediate-term.

Arista Networks, for its part, might be better positioned to compete on networking’s new playing field. Arista Networks’ CEO Jayshree Ullal had the following to say about yesterday’s news:

“It’s exciting to see the return of innovative networking companies and the appreciation for great talent/technology. Software Defined Networking (SDN) is indeed disrupting legacy vendors. As a key partner of VMware and co-innovator in VXLANs, we welcome the interoperability of Nicira and VMWare controllers with Arista EOS.”

Arista’s Options

What’s interesting here is that Arista, which invariably presents its Extensible OS (EOS) as “controller friendly,” earlier this year demonstrated interoperability with controllers from VMware, Big Switch Networks, and Nebula, which has built a cloud controller for OpenStack.

One of Nebula’s investors is Andy Bechtolsheim, whom knowledgeable observers will recognize as the chief development officer (CDO) of, and major investor in, Arista Networks. It is possible that Bechtolsheim sees a potential fit between the two companies — one building a cloud controller and one delivering cloud networking. To add fuel to this particular fire, which may or may not emit smoke, note that the Nebula cloud controller already features Arista technology, and that Nebula is hiring a senior network engineer, who ideally would have “experience with cloud infrastructure (OpenStack, AWS, etc.) . . . and familiarity with OpenFlow and Open vSwitch.”

 Open or Closed?

Speaking of Open vSwitch, Matt Palmer at SDN Central will feel some vindication now that VMware has purchased a company whose engineering team has made significant contributions to the OVS code. Palmer doubtless will cast a wary eye on VMware’s intentions toward OVS, but both Steve Herrod, VMware’s CTO, and Martin Casado, Nicira’s CTO, have provided written assurances that their companies, now combining, will not retreat from commitments to OVS and to OpenFlow and Quantum, the OpenStack networking project.

Meanwhile, GigaOm’s Derrick Harris thinks it would be bad business for VMware to jilt the open-source community, particularly in relation to hypervisors, which “have to be treated as the workers that merely carry out the management layer’s commands. If all they’re there to do is create virtual machines that are part of a resource pool, the hypervisor shouldn’t really matter.”

This seems about right. In this brave new world of virtualized infrastructure, the ultimate value will reside in an intelligent management layer.

PS: I wrote this post under a slight fever and a throbbing headache, so I would not be surprised to discover belatedly that it contains at least a couple typographical errors. Please accept my apologies in advance.

Understanding Cisco’s Relationship to SDN Market

Analysts and observers have variously applauded or denounced Cisco for its Cisco ONE network-programmability pronouncements last week. Some pilloried the company for being tentative in its approach to SDN, contrasting the industry giant’s perceived reticence with its aggressive pursuit of previous emerging technology markets such as IP PBX, videoconferencing, and converged infrastructure (servers).

Conversely, others have lauded Cisco’s approach to SDN as far more aggressive than its lackluster reply to challenges in market segments such as application-delivery controllers (ADCs) and WAN optimization, where F5 and Riverbed, respectively, demonstrated how a tightly focused strategy and expertise above the network layer could pay off against Cisco.

Different This Time

But I think they’ve missed a very important point about Cisco’s relationship to the emerging SDN market.  Analogies and comparisons should be handled with care. Close inspection reveals that SDN and the applications it enables represent a completely different proposition from the markets mentioned above.

Let’s break this down by examining Cisco’s aggressive pursuit of IP-based voice and video. It’s not a mystery as to why Cisco chose to charge headlong into those markets. They were opportunities for Cisco to pursue its classic market adjacencies in application-related extensions to its hegemony in routing and switching. Cisco also saw video as synergistic with its core network-infrastructure business because it generated bandwidth-intensive traffic that filled up existing pipes and required new, bigger ones.

Meanwhile, Cisco’s move into UCS servers was driven by strategic considerations. Cisco wanted the extra revenue servers provided, but it also wanted to preemptively seize the advantage over its former server partners (HP, Dell, IBM) before they decided to take the fight to Cisco. What’s more, all the aforementioned vendors confronted the challenge of continuing to grow their businesses and public-market stock prices in markets that were maturing and slowing.

Cisco’s reticence to charge into WAN optimization and ADCs also is explicable. Strategically, at the highest echelons within Cisco, the company viewed these markets as attractive, but not as essential extensions to its core business. The difficulty was not only that Cisco didn’t possess the DNA or the acumen to play in higher-layer network services — though that was definitely a problem — but also that Cisco did not perceive those markets as conferring sufficiently compelling rewards or strategic advantages to warrant the focus and resources necessary for market domination. Hence, we have F5 Networks and its ADC market leadership, though certainly F5’s razor-sharp focus and sustained execution factored heavily into the result.

To Be Continued

Now, let’s look at SDN. For Cisco, what sort of market does it represent? Is it an opportunity to extend its IP-based hegemony, like voice, video, and servers? No, not at all. Is it an adjunct market, such as ADCs and WAN optimization, that would be nice to own but isn’t seen as strategically critical or sufficiently large to move the networking giant’s stock-price needle? No, that’s not it, either.

So, what is SDN’s market relationship to Cisco?

Simply put, it is a potential existential threat, which makes it unlike IP PBXes, videoconferencing, compute hardware, ADCs, and WAN optimization. SDN is a different sort of beast, for reasons that have been covered here and elsewhere many times.  Therefore, it necessitates a different sort of response — carefully calculated, precisely measured, and thoroughly plotted. For Cisco, the ONF-sanctioned approach to SDN is not an opportunity that the networking giant can seize,  but an incipient threat to the lifeblood of its business that it must blunt and contain — and, whatever else, keep out of its enterprise redoubt.

Did Cisco achieve its objective? That’s for a subsequent post.

Dell’s Steady Progression in Converged Infrastructure

With its second annual Dell Storage Forum in Boston providing the backdrop, Dell made a converged-infrastructure announcement this week.  (The company briefed me under embargo late last week.)

The press release is available on the company’s website, but I’d like to draw attention to a few aspects of the announcement that I consider noteworthy.

First off, Dell now is positioned to offer its customers a full complement of converged infrastructure, spanning server, storage, and networking hardware, as well as management software. For customers seeking a single-vendor, one-throat-to-choke solution, this puts Dell on par with IBM and HP, while Cisco still must partner with EMC or with NetApp for its storage technology.

Bringing the Storage

Until this announcement, Dell was lacking the storage ingredients. Now, with what Dell is calling the Dell Converged Blade Data Center solution, the company is adding its EqualLogic iSCSI Blade Arrays to Dell PowerEdge blade servers and Dell Force10 MXL blade switching. Dell says this package gives customers an entire data center within a single blade enclosure, streamlining operations and management, and thereby saving money.

Dell’s other converged-infrastructure offering is the Dell vStart 1000. For this iteration of vStart, Dell is including, for the first time, its Compellent storage and Force10 networking gear in one integrated rack for private-cloud environments.

The vStart 1000 comes in two configurations: the vStart 1000m and the vStart 1000v. The packages are nearly identical — PowerEdge M620 servers, PowerEdge R620 management servers, Dell Compellent Series 40 storage, Dell Force10 S4810 ToR networking, plus Brocade 5100 ToR Fibre-Channel switches — but the vStart 1000m comes with Windows Server 2008 R2 Datacenter (with the Hyper-V hypervisor), whereas the vStart 1000v features trial editions of VMware vCenter and VMware vSphere (with the ESXi hypervisor).

As an aside, it’s worth mentioning that Dell’s inclusion of Brocade’s Fibre-Channel switches confirms that Dell is keeping that partnership alive to satisfy customers’ FC requirements.

Full Value from Acquisitions

In summary, then, Dell is delivering converged infrastructure with both its in-house storage options, demonstrating that it has fully integrated its major hardware acquisitions into the mix. It’s covering as much converged ground as it can with this announcement.

Nonetheless, it’s fair to ask where Dell will find customers for its converged offerings. During my briefing with Dell, I was told that mid-market was the real sweet spot, though Dell also sees departmental opportunities in large enterprises.

The mid-market, though, is a smart choice, not only because the various technology pieces, individually and collectively, seem well suited to the purpose, but also because Dell, given its roots and lineage, is a natural player in that space. Dell has a strong mandate to contest the mid-market, where it can hold its own against any of its larger converged-infrastructure rivals.

Mid-Market Sweet Spot

What’s more, the mid-market — unlike cloud-service providers today and some large enterprises in the not-too-distant future — is unlikely to have the inclination, resources, and skills to pursue a DIY, software-driven, DevOps-oriented variant of converged infrastructure that might involve bare-bones hardware from Asian ODMs. At the end of the day, converged infrastructure is sold as packaged hardware, and paying customers will need to perceive and realize value from buying the boxes.

The mid-market would seem more than receptive to the value proposition that Dell is selling, which is that its converged infrastructure will reduce the complexity of IT management and deliver operational cost savings.

This finally leads us to a discussion of Dell’s take on converged infrastructure. As noted in an eChannelLine article, Dell’s notion of converged infrastructure encompasses operations management, services management, and applications management. As Dell continues down the acquisition trail, we should expect the company to place greater emphasis on software-based intelligence in those areas.

That, too, would be a smart move. The battle never ends, but Dell — despite its struggles in the PC market — is now more than punching its own weight in converged infrastructure.

At Dell, Networking’s Role Secondary but Integral

Dell made a networking announcement last week, and, for the most part, reaction was muted. That’s partly because Dell’s networking narrative is evolving and in transition, and partly because the announcements related to incremental, though notable, progression.

To be fair, Dell’s networking narrative is part of a larger story the company is telling in the data center. Networking is integral to that story, but it’s not the centerpiece and never will be. Dell is working from the blueprint of its Virtual Network Architecture (VNA), so its purchase and stewardship of Force10 is framed within a bigger picture that involves not just converged infrastructure, but also workload-driven orchestration of virtualized environments.

Integration and Assimilation

Some good news for Dell is that its integration and assimilation of Force10 Networks seems to have gone well and is now complete. Dell’s OpenManage Networking Manager (OMNM) 5.0 offers a new look and support for the full line of Dell networking products, including the Force10 portfolio. What’s more, in its Dell Force10 MXL blade interconnect, a 40Gb Ethernet switch for the M1000e Blade chassis, Dell delivers an apt metaphor as well as a blade-server switch.

In that sense, it’s helpful to recall that Dell’s acquisition of Force10 was motivated by a desire to integrate networking into an automated, orchestrated data center in which it already offered compute and storage. Dell concluded that it needed to own networking technology just as it owned server and storage technology. It further deduced that it needed a comprehensive networking portfolio, extending across SAN and LAN environments. Just as it moved previously to shake its dependence on storage partners, it would do likewise in networking.

Dell sees networking as an integral enabling technology, but not as an end in itself. Dell believes it can be more flexible than HP and IBM in certain enterprise demographics, and it believes it can outflank Cisco by being less “network centric” and more open to developments such as software defined networking (SDN). Force10, which was thought to be between a rock and a hard place just before being acquired, understands and accepts its role in the Dell universe.

Fitting Into VNA

The key to understanding Dell’s data-center strategy is Virtual Network Architecture (VNA). The announcement of the new blade-server switch fits into that plan. Dell says VNA’s purpose is to virtualize, automate, and orchestrate network services so that they can adapt readily to application and business requirements. Core elements of VNA include the following:

  • High-performance switching systems for the campus and the data center
  • Virtualized Layer 4-7 services
  • Comprehensive automation & orchestration software
  • Open workload/hypervisor interfaces

So, what does it all mean? It means Dell is taking an approach that it believes will be differentiated and add considerable value in customers’ and prospective customers’ data centers. On the networking front, Dell believes it has espoused a strategy that encompasses and envelops the rise of SDN while also taking an accommodating approach to the networking gear already present in customer accounts.

Workload-Oriented Approach

In an article at The VAR Guy, Nathan Eddy quotes Dario Zamarian, VP and GM of Dell Networking, as follows:

“We are taking a workload-oriented approach — as in, ‘What does each require first?’ as opposed to starting with the network first [and] then trying to fit the application to it. In other words, networking is the enabler. The ultimate goal of VNA is to make networking as simple to set up, automate, operate, and manage as servers. VNA is doing for networking what VMware did for servers.”

Well, that’s the plan. In theory, in a slide show, all the pieces are there, but Dell has to execute and deliver on the vision. One can identify holes in the structure, places where Dell will need to buy, partner, or build to close the gaps. It’s clearly doing that, though, as the Force10 acquisition and others recently attest.

Taking Force10’s technology forward in alignment with its plans, Dell not only announced  a 40GbE-enabled blade server switch. It also introduced fabric- and network-management tools to simplify operations in the data center and the campus, and it announced data-center enhancements (stacking technology, L2 multipathing, data-center bridging, automated workload mobility through auto-provisioning of VLANs) to Force10’s FTOS for its S4810 10/40G switching platform.

Encompassing SDN

On the SDN front, Dell announced interoperability with Big Switch Networks’ Open SDN architecture and its OpenFlow-based Floodlight controller. That interoperability will be showcased next week in joint demonstrations at Interop, with the application emphasis on cloud multi-tenancy.

Regardless of where Dell goes with SDN, and regardless of how quickly (or slowly) SDN makes encroachments into the enterprise, Dell’s VNA model accounts for it and much else besides. Dell believes it can win in workload and network orchestration, with its Advanced Infrastructure Manager (AIM) providing virtual-network programming interfaces and doubtless with some forthcoming orchestration technologies it has yet to introduce (or buy).

Dell’s VNA seems a viable plan. But can the company continue to execute on it? Dell would have more focus and resources to do so if it jettisoned its woebegone consumer business, but that divestiture doesn’t seem to be in the cards.

Debating SDN, OpenFlow, and Cisco as a Software Company

Greg Ferro writes exceptionally well, is technologically knowledgeable, provides incisive commentary, and invariably makes cogent arguments over at EtherealMind.  Having met him, I can also report that he’s a great guy. So, it is with some surprise that I find myself responding critically to his latest blog post on OpenFlow and SDN.

Let’s start with that particular conjunction of terms. Despite occasional suggestions to the contrary, SDN and OpenFlow are not inseparable or interchangeable. OpenFlow is a protocol, a mechanism that allows a server, known in SDN parlance as a controller, to interact with and program flow tables (for packet forwarding) on switches. It facilitates the separation of the control plane from the data plane in some SDN networks.
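To make that mechanism a little more tangible, here is a small, hypothetical sketch of an application asking a controller to program a flow entry through a northbound REST call, in the style of the Static Flow Pusher exposed by Big Switch’s open-source Floodlight controller. The endpoint and field names reflect my recollection of Floodlight at the time of writing and may vary by version, so treat them as assumptions rather than a reference; the point is simply that the application talks to the controller, and the controller, in turn, speaks OpenFlow to the switch.

```python
# Illustrative sketch: programming a switch's flow table through an SDN
# controller's REST interface, rather than speaking OpenFlow directly.
# Endpoint and field names approximate Floodlight's Static Flow Pusher
# and may differ by controller version; treat them as assumptions.
import json
import requests

CONTROLLER = "http://192.168.1.10:8080"      # hypothetical controller address

flow = {
    "switch": "00:00:00:00:00:00:00:01",     # DPID of the target switch
    "name": "forward-port1-to-port2",
    "priority": "32768",
    "ingress-port": "1",                     # match packets arriving on port 1
    "active": "true",
    "actions": "output=2",                   # forward them out port 2
}

resp = requests.post(CONTROLLER + "/wm/staticflowentrypusher/json",
                     data=json.dumps(flow))
print(resp.status_code, resp.text)
```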

But OpenFlow is not SDN, which can be achieved with or without OpenFlow.  In fact, Nicira Networks recently announced two SDN customer deployments of its Network Virtualization Platform (NVP) — at DreamHost and at Rackspace, respectively — and you won’t find mention of OpenFlow in either press release, though OpenStack and its Quantum networking project receive prominent billing. (I’ll be writing more about the Nicira deployments soon.)

A Protocol in the Big Picture 

My point is not to diminish or disparage OpenFlow, which I think can and will be used gainfully in a number of SDN deployments. My point is that we have to be clear that the bigger picture of SDN is not interchangeable with the lower-level functionality of OpenFlow.

In that respect, Ferro is absolutely correct when he says that software-defined networking, and specifically SDN controller and application software, are “where the money is.” He conflates it with OpenFlow — which may or may not be involved, as we already have established — but his larger point is valid.  SDN, at the controller and above, is where all the big changes to the networking model, and to the industry itself, will occur.

Ferro also likely is correct in his assertion that OpenFlow, in and of itself, will not enable “a choice of using low cost network equipment instead of the expensive networking equipment that we use today.” In the near term, at least, I don’t see major prospects for change on that front as long as backward compatibility, interoperability with a bulging bag of networking protocols, and the agendas of the networking old guard are at play.

Cisco as Software Company

However, I think Ferro is wrong when he says that the market-leading vendors in switching and routing, including Cisco and Juniper, are software companies. Before you jump down my throat, presuming that’s what you intend to do, allow me to explain.

As Ferro says, Cisco and Juniper, among others, have placed increasing emphasis on the software features and functionality of their products. I have no objection there. But Ferro pushes his argument too far and suggests that the “networking business today is mostly a software business.”  It’s definitely heading in that direction, but Cisco, for one, isn’t there yet and probably won’t be for some time.  The key word, by the way, is “business.”

Cisco is developing more software these days, and it is placing more emphasis on software features and functionality, but what it overwhelmingly markets and sells to its customers are switches, routers, and other hardware appliances. Yes, those devices contain software, but Cisco sells them as hardware boxes, with box-oriented pricing and box-oriented channel programs, just as it has always done. Nitpickers will note that Cisco also has collaboration and video software, which it actually sells like software, but that remains an exception to the rule.

Talks Like a Hardware Company, Walks Like a Hardware Company

For the most part, in its interactions with its customers and the marketplace in general, Cisco still thinks and acts like a hardware vendor, software proliferation notwithstanding. It might have more software than ever in its products, but Cisco is in the hardware business.

In that respect, Cisco faces the same fundamental challenge that server vendors such as HP, Dell, and — yes — Cisco confront as they address a market that will be radically transformed by the rise of cloud services and ODM-hardware-buying cloud service providers. Can it think, figuratively and literally, outside the box? Just because Cisco develops more software than it did before doesn’t mean the answer is yes, nor does it signify that Cisco has transformed itself into a software vendor.

Let’s look, for example, at Cisco’s approach to SDN. Does anybody really believe that Cisco, with its ongoing attachment to ASIC-based hardware differentiation, will move toward a software-based delivery model that places the primary value on server-based controller software rather than on switches and routers? It’s just not going to happen, because  it’s not what Cisco does or how it operates.

Missing the Signs 

And that brings us to my next objection. In arguing that Cisco and others have followed the market and provided the software their customers want, Ferro writes the following:

“Billion dollar companies don’t usually miss the obvious and have moved to enhance their software to provide customer value.”

Where to begin? Well, billion-dollar companies frequently have missed the obvious and gotten it horribly wrong, often when at least some individuals within the companies in question knew that their employer was getting it horribly wrong.  That’s partly because past and present successes can sow the seeds of future failure. As in Clayton M. Christensen’s classic book The Innovator’s Dilemma, industry leaders can have their vision blinkered by past successes, which prevent them from detecting disruptive innovations. In other cases, former market leaders get complacent or fail to acknowledge the seriousness of a competitive threat until it is too late.

The list of billion-dollar technology companies that have missed the obvious and failed spectacularly, sometimes disappearing into oblivion, is too long to enumerate here, but some  names spring readily to mind. Right at the top (or bottom) of our list of industry ignominy, we find Nortel Networks. Once a company valued at nearly $400 billion, Nortel exists today only in thoroughly digested pieces that were masticated by other companies.

Is Cisco’s Decline Inevitable?

Today, we see a similarly disconcerting situation unfolding at Research In Motion (RIM), where many within the company saw the threat posed by Apple and by the emerging BYOD phenomenon but failed to do anything about it. Going further back into the annals of computing history, we can adduce examples such as Novell and Digital Equipment Corporation, as well as the raft of other minicomputer vendors who perished from the planet after the rise of the PC and client-server computing. Some employees within those companies might even have foreseen their firms’ dark fates, but the organizations in which they toiled were unable to rescue themselves.

They were all huge successes, billion-dollar companies, but, in the face of radical shifts in industry and market dynamics, they couldn’t change who and what they were. The industry graveyard is full of the carcasses of companies that were once enormously successful.

Am I saying this is what will happen to Cisco in an era of software-defined networking? No, I’m not prepared to make that bet. Cisco should be able to adapt and adjust better than the aforementioned companies were able to do, but it’s not a given. Just because Cisco is dominant in the networking industry today doesn’t mean that it will be dominant forever. As the old investment disclaimer goes, past performance does not guarantee future results. What’s more, Cisco has shown a fallibility of late that was not nearly as apparent in its boom years more than a decade ago.

Early Days, Promising Future

Finally, I’m not sure that Ferro is correct when he says the Open Networking Foundation’s (ONF) board members and its biggest service providers, including Google, will achieve CapEx but not OpEx savings with SDN. We really don’t know whether these companies are deriving OpEx savings because they’re keeping what they do with their operations and infrastructure highly confidential. Suffice it to say, they see compelling reasons to move away from buying their networking gear from the industry’s leading vendors, and they see similarly compelling reasons to embrace SDN.

Ferro ends his piece with two statements, the first of which I agree with wholeheartedly:

“That is the future of Software Defined Networking – better, dynamic, flexible and business focussed networking. But probably not much cheaper in the long run.”

As for that last statement, I believe there is insufficient evidence on which to render a verdict. As we’ve noted before, these are early days for SDN.

Fear Compels HP and Dell to Stick with PCs

For better or worse, Hewlett-Packard remains committed to the personal-computer business, neither selling off nor spinning off that unit in accordance with the wishes of its former CEO. At the same time, Dell is claiming that it is “not really a PC company,” even though it will continue to sell an abundance of PCs.

Why are these two vendors staying the course in a low-margin business? The popular theory is that participation in the PC business affords supply-chain benefits such as lower costs for components that can be leveraged across servers. There might be some truth to that, but not as much as you might think.

At the outset, let’s be clear about something: Neither HP nor Dell manufactures its own PCs. Manufacture of personal computers has been outsourced to electronics manufacturing services (EMS) companies and original design manufacturers (ODMs).

Growing Role of the ODM

The latter do a lot more than assemble and manufacture PCs. They also provide outsourced R&D and design for OEM PC vendors.  As such, perhaps the greatest amount of added value that a Dell or an HP brings to its PCs is represented by the name on the bezel (the brand) and the sales channels and customer-support services (which also can be outsourced) they provide.

Major PC vendors many years ago decided to transfer manufacturing to third-party companies in Taiwan and China. Subsequently, they also increasingly chose to outsource product design. As a result, ODMs design and manufacture PCs. Typically ODMs will propose various designs to the PC vendors and will then build the models the vendors select. The PC vendor’s role in the design process often comes down to choosing the models they want, sometimes with vendor-specified tweaks for customization and market differentiation.

In short, PC vendors such as HP and Dell don’t really make PCs at all. They rebrand them and sell them, but their involvement in the actual creation of the computers has diminished markedly.

Apple Bucks the Trend 

At this point, you might be asking: What about Apple? Simply put, unlike its PC brethren, Apple always has insisted on controlling and owning a greater proportion of the value-added ingredients of its products.

Unlike Dell and HP, for example, Apple has its own operating system for its computers, tablets, and smartphones. Also unlike Dell and HP, Apple did not assign hardware design to ODMs. In seeking cost savings from outsourced design and manufacture, HP and Dell sacrificed control over and ownership of their portable and desktop PCs. Apple wagered that it could deliver a premium, higher-cost product with a unique look and feel. It won the bet.

A Spurious Claim?

Getting back to HP, does it actually derive economies of scale for its server business from the purchase of PC components in the supply chain? It’s possible, but it seems unlikely. The ODMs with which HP contracts for design and manufacture of its PCs would get a much better deal on component costs than would HP, and it’s now standard practice for those ODMs to buy common components that can be used in the manufacture and assembly of products for all their brand-name OEM customers. It’s not clear to me what proportion of components in HP’s PCs are supplied and integrated by the ODMs, but I suspect the percentage is substantial.

On the whole, then, HP and Dell might be advancing a spurious argument about remaining in the PC business because it confers savings on the purchase of components that can be used in servers.

Diagnosing the Addiction

If so, then, why would HP and Dell remain in the PC game? Well, the answer is right there on the balance sheets of both companies. Despite attempts at diversification, and despite initiatives to transform into the next IBM, each company still has a revenue reliance on — perhaps even an addiction to — PCs.

According to calculations by Sterne Agee analyst Shaw Wu, about 70 to 75 percent of Dell revenue is connected to the sale of PCs. (Dell derived about 43 percent of its revenue directly from PCs in its most recent quarter.) In relative terms, HP’s revenue reliance on PCs is not as great — about 30 percent of direct revenue — but, when one considers the relationship between PCs and related peripherals, including printers, the company’s PC exposure is considerable.

If either company were to exit the PC business, shareholders would react adversely. The departure from the PC business would leave a gaping revenue hole that would not be easy to fill. Yes, relative margins and profitability should improve, but at the cost of much lower channel and revenue profiles. Then there is the question of whether a serious strategic realignment would actually be successful. There’s risk in letting go of a bird in hand for one that’s not sure to be caught in the bush.

ODMs Squeeze Servers, Too

Let’s put aside, at least for this post, the question of whether it’s good strategy for Dell and HP to place so much emphasis on their server businesses. We know that the server business faces high-end disruption from ODMs, which increasingly offer hardware directly to large customers such as cloud service providers, oil-and-gas firms,  and major government agencies. The OEM (or vanity) server vendors still have the vast majority of their enterprise customers as buyers, but it’s fair to wonder about the long-term viability of that market, too.

As ODMs take on more of the R&D and design associated with server-hardware production, they must question just how much value the vanity OEM vendors are bringing to customers. I think the customers and vendors themselves are asking the same questions, because we’re now seeing a concerted effort in the server space by vendors such as Dell and HP to differentiate “above the board” with software and system innovations.

Fear Petrifies

Can HP really become a dominant purveyor of software and services to enterprises and cloud service providers? Can Dell be successful as a major player in the data center? Both companies would like to think that they can achieve those objectives, but it remains to be seen whether they have the courage of their convictions. Would they bet the business on such strategic shifts?

Aye, there’s the rub. Each is holding onto a commoditized, low-margin PC business not because they like being there, but because they’re afraid of being somewhere else.

Thinking About SDN Controllers

As recent posts on this blog attest, I have been thinking a lot lately about software defined networking (SDN). I’m not alone in that regard, and I’m no doubt behind the curve. Some impressive intellects and astute minds are active in SDN, and I merely do my best to keep up and try to assimilate what I learn from them.

I must admit, I find it a difficult task. Not only is the SDN technical community advancing quickly, propelling the technology forward at a fast pace, but the market has begun to take shape, with new product launches and commercial deployments occurring nearly every week.

Control Issues

Lately, I’ve turned my mind to controllers and their place in the SDN universe. I’ve drawn some conclusions that will be obvious to any habitué of the SDN realm, but that I hope might be useful to others trying, as I am, to capture a reasonably clear snapshot of where things stand. (We’ll need to keep taking snapshots, of course, but that’s always the case with emerging technologies, not just with SDN.)

To understand the role of controllers within the context of SDN, it helps to have analogies. Fortunately, those analogies have been provided by SDN thought leaders such as Nick McKeown, Scott Shenker, Martin Casado, and many others. This blog, as you know, has not been especially replete with pictures or diagrams, but I feel it necessary to point you to a few on this occasion. As such, feel free to consult this presentation by Nick McKeown; it contains several diagrams that vividly illustrate the controller’s (and the control layer’s) key role in SDN.

The controller is akin to a computer’s operating system (a network-control operating system, if you will) on which applications are written. It communicates with the underlying network hardware (switches and routers) through a protocol such as OpenFlow. Different types of SDN controller software, then, are analogous, in certain respects, to Windows, Linux, and Mac OS X. What’s more, just like those computer-based operating systems, today’s controller software is not interoperable.

Not Just OpenFlow

Given what I’ve just written, one ought to choose one’s controller with particular care. Everything I’ve read — please correct me if I’m wrong, SDN cognoscenti — suggests it won’t necessarily be easy to shift your development and programming efforts, not to mention your network applications, seamlessly from one controller to another. (Yes, development and programming, because the whole idea of SDN is to create abstractions that make the network programmable and software defined — or “driven,” as the IETF would have it.)

Moreover — and I think this is potentially important — we hear a lot of talk about “OpenFlow controllers,” but at least some SDN controllers can be (and are) conversant in protocols and mechanisms that extend beyond OpenFlow.

In fact, in a blog post this past August, Nicira’s Martin Casado distinguished between OpenFlow controllers and what he called “general SDN controllers,” which he defined as those “in which the control application is fully decoupled from both the underlying protocol(s) communicating with the switches, and the protocol(s) exchanging state between controller instances (assuming the controllers can run in a cluster).”

That’s significant. As much as OpenFlow and SDN have been conflated and intertwined in the minds of many, we need to understand that SDN controllers operate at a higher layer of network abstraction, providing a platform for network applications and a foundation for network programmability. OpenFlow is just an underlying protocol that facilitates interaction between the controller and the data-forwarding tables on switches. Some controllers leverage OpenFlow exclusively; others are (or will be) capable of accommodating other protocols and mechanisms to achieve the same practical result.
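Casado’s distinction can be illustrated with a short, purely hypothetical Python sketch of the layering: the control application talks to the controller, the controller delegates to whichever southbound driver is loaded, and OpenFlow is just one possible driver. None of the class or method names below belong to a real controller; they exist only to show the decoupling.

from abc import ABC, abstractmethod

class SouthboundDriver(ABC):
    """Abstract switch-facing interface; control applications never see it directly."""
    @abstractmethod
    def program_flow(self, dpid, match, actions):
        ...

class OpenFlowDriver(SouthboundDriver):
    def program_flow(self, dpid, match, actions):
        # A real controller would encode and send an OpenFlow flow-mod message here.
        print(f"[OpenFlow] flow-mod to {dpid}: match={match} actions={actions}")

class OtherProtocolDriver(SouthboundDriver):
    def program_flow(self, dpid, match, actions):
        # Any other switch-management protocol could sit behind the same interface.
        print(f"[Other] programming {dpid}: match={match} actions={actions}")

class Controller:
    """A 'general' controller: applications call it; it delegates to the loaded driver."""
    def __init__(self, driver):
        self.driver = driver

    def install_path(self, switches, match, out_port):
        for dpid in switches:
            self.driver.program_flow(dpid, match, [("output", out_port)])

# The same application code works whether the southbound protocol is OpenFlow or not.
controller = Controller(OpenFlowDriver())
controller.install_path(["s1", "s2"], {"dl_dst": "00:11:22:33:44:55"}, out_port=3)

Swap in a different driver and the application above it does not change, which is precisely what makes such a controller “general” in Casado’s sense.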

Talking Onix

An example is Onix, the controller proffered by Nicira. In a paper that is available online, the authors specify the following:

“The Onix design does not dictate a particular protocol for managing network element forwarding state. Rather, the primary interface to the application is the NIB, and any suitable protocol supported by the elements in the network can be used under the covers to keep the NIB entities in sync with the actual network state.”

To be sure, Onix supports OpenFlow, but it also supports “multiple controller/switch protocols (including OpenFlow),” according to Casado’s aforementioned blog post.

There are a number of controllers available, of course, including NEC’s ProgrammableFlow Controller Appliance and Big Switch Networks’ Floodlight, among others. NEC sells its controller for a list price of $75,000, and it has IBM as a partner, presumably to help enterprises get up and running with SDN implementations in exchange for professional-services fees.

Controller Considerations

But this brief post isn’t intended to enumerate and evaluate the SDN controllers available on the market. I’m not up to that job, and I respectfully defer it to those with a lot more technical acumen than I possess. What I want to emphasize here is that the abstractions and platform support provided by an SDN controller make it a critical component of the SDN architecture, all the more so because controllers lack interoperability, at least for now.

As Nicira’s Casado wrote last spring, “you most likely will not have interoperability at the controller level (unless a standardized software platform was introduced).”

So, think carefully about the controller. While the OpenFlow protocol provides essentially the same functionality regardless of the context in which it is used, the controller is a different story. There will be dominant controllers, niche controllers, and controllers that are more scalable than others (perhaps with the help of applications that run on them), and there will be those that emerge as best in class for service providers, high-performance computing (HPC), and data centers, respectively.

There will be a lot riding on your controller, in more ways than one.

SDN’s Continuing Evolution

At the risk of understatement, I’ll begin this post by acknowledging that we are witness to intensifying discussion about the applicability and potential of software-defined networking (SDN). Frequently, such discourse is conjoined and conflated with discussion of OpenFlow.

But the two, as we know, are neither the same nor necessarily inextricable. Software-defined networking is a big-picture concept involving controller-driven programmable networks, whereas OpenFlow is a protocol that enables interaction between a control plane and the data plane of a switch.

Not Necessarily Inextricable

A salient point to remember — there are others, I’m sure, but I’m leaning toward minimalism today — is that, while SDN and OpenFlow often are presented as joined at the hip, they need not be. You can have SDN without OpenFlow. Furthermore, it’s worth bearing in mind that the real magic of SDN resides beyond OpenFlow’s reach, at a higher layer of abstraction in the SDN value hierarchy.

So, with that in mind, let’s take a brief detour into SDN history, to see whether the past can inform the present and illuminate the future. I was fortunate enough to have some help on this journey from Amin Tootoonchian, a PhD student in the Systems and Networking Group, Department of Computer Science, University of Toronto.

Tootoonchian is actively involved in research projects related to software-defined networking and OpenFlow. He wrote a paper in conjunction with Yashar Ganjali, his advisor and an assistant professor at the University of Toronto, on HyperFlow, an application that runs on the open-source NOX controller to create a logically centralized but physically distributed control plane for OpenFlow. Tootoonchian developed and implemented HyperFlow, and he also is working on the next release of NOX. Recently, he spent six months pursuing SDN research at the University of California, Berkeley.

His ongoing research has afforded insights into the origins and evolution of SDN. During a discussion over coffee, he kindly recommended some reference material for my edification and enlightenment. I’m all for generosity here, so I’m going to share those recommendations with you in what might become a series of posts. (I’d like to be more definitive, I really would, but I never know where I’m going to steer this thing I call a blog. It all comes down to time, opportunity, circumstances, and whether I get hit by a bus.)

Anyway, let’s start, strangely enough, at the beginning, with SDN concepts that ultimately led to the development of the OpenFlow protocol.

4D and Ethane: SDN Milestones 

Tootoonchian pointed me to papers and previous research involving academic projects such as 4D and Ethane, which served as recent antecedents to OpenFlow. There are other papers and initiatives he mentioned, a few of which I will reference, if all goes according to my current plan, in forthcoming posts.

Before 4D and Ethane, however, there were other SDN predecessors, most of which were captured in a presentation by Edward Crabbe, network architect at Google. Helpfully titled “The (Long) Road to SDN,” Crabbe’s presentation was given at a Tech Field Day last autumn.

Crabbe draws an SDN evolutionary line from Ipsilon’s General Switch Management Protocol (GSMP) in 1996 through a number of subsequent initiatives — including the IETF’s Forwarding and Control Element Separation (ForCES) and Path Computation Element (PCE) working groups — gradually progressing toward the advent of OpenFlow in 2008. He points to common threads in SDN that include the partitioning of resources and control within network elements, and the minimization of the network-element local control plane, involving offline control of forwarding state and offline control of network-element resource allocation.

As for why SDN has drawn growing interest, development, and support, Crabbe cites two main reasons: cost and “innovation velocity.” I (and others) have touched on the cost savings previously, but Crabbe’s particular view from the parapets of Google warrants attention.

Capex and Opex Savings 

In his presentation, Crabbe cites cost savings relating to both capital and operating expenditures.

On the capex side, he notes that SDN can deliver efficient use of IT infrastructure resources, which, I note, results in the need to purchase fewer new resources. He makes particular mention of how efficient resource utilization applies to network-element CPU and memory as well as to underlying network capacity. He also notes SDN’s facility at moving the “heaviest workloads off expensive, relatively slow embedded systems to cheap, fast, commodity hardware.” Unstated, but seemingly implicit, is that the former are often proprietary whereas the latter are not.

Crabbe also mentions that capex savings can accrue from SDN’s ability to “provide visibility into, and synchronized control of, network state, such that underlying capacity may be used more efficiently.” Again, efficient utilization of the resources one owns means one derives full value from them before having to allocate spending to the purchase of new ones.

As for lower operating expenditures, Crabbe broadly states that SDN enables reduced network complexity, which results in less operational overhead and fewer outages. He offers a number of supporting examples, and the case he makes is straightforward and valid. If you can reduce network complexity, you will mitigate operational risk, save time, boost network-related productivity, and perhaps get the opportunity to allocate valuable resources to other, potentially more productive uses.

Enterprise Narrative Just Beginning 

Speaking of which, that brings us to Crabbe’s assertion that SDN confers “innovation velocity.” He cites several examples of how and where such innovation can be expedited, including faster feature implementation and deployment; partitioning of resources and control for relatively safe experimentation; and implementations on “relatively simple, well-known systems with well-defined interfaces.” Finally, he also emphasizes that the decoupling of the control plane from the network element facilitates “novel decision algorithms and hardware uses.”

It makes sense, all of it, at least insofar as Google is concerned. Crabbe’s points, of course, are similarly valid for other web-scale, cloud service providers.  But what about enterprises, large and small? Well, that’s a question still to be explored and answered, though the early adopters IBM and NEC brought forward earlier this week indicate that SDN also has a future in at least a few enterprise application environments.

IBM and NEC Find Early Adopters for OpenFlow-based SDNs

News arrived today that IBM and NEC have joined forces to work on OpenFlow deployments. The two companies’ joint solution integrates IBM’s OpenFlow-enabled RackSwitch G8264 10/40GbE top-of-rack switch with NEC’s ProgrammableFlow Controller, PF5240 1/10 Gigabit Ethernet Switch, and PF5820 10/40 Gigabit Ethernet Switch.

What’s more, the two technology partners boast early adopters, who are using OpenFlow-based software-defined networks (SDNs) for real-world applications.

Actual Deployments by Early Adopters

Granted, one of those organizations, Stanford University, is firmly ensconced in academia, but the other two are commercial concerns, which are using the technology for applications that apparently confer significant business value. As Stacey Higginbotham writes at GigaOm, these deployments validate the commercial potential of SDNs that utilize the OpenFlow protocol in enterprise environments.

The three early adopters cover some intriguing application scenarios. Tervela, a purveyor of a distributed data fabric, says the joint solution delivers dynamic networking that ensures predictable Big Data performance for complex, demanding applications such as global trading, risk analysis, and e-commerce.

Another early adopter is Selerity Corporation. At Network Computing, Mike Fratto provides an excellent overview of how Selerity — which provides real-time, machine-readable financial information to its subscribers — is using the technology to save money and reduce complexity by replacing a convoluted set of VLANs, high-end firewalls, and application-level processes with flow rules defined on NEC’s ProgrammableFlow Controller.
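To give a flavor of what that kind of substitution looks like, here is a toy sketch, again in Python, of a policy compiled into flow rules. The subscriber addresses, feed names, and rule format are entirely made up; this is neither Selerity’s configuration nor NEC’s API, merely an illustration of replacing VLAN segmentation and firewall rule sets with centrally defined match/action rules.

# Toy policy compiler: turn subscriber entitlements into flow-rule dictionaries
# of the sort a controller application might push down to its switches.
# Entirely illustrative; not Selerity's setup and not NEC's interface.

SUBSCRIBERS = {
    "10.0.1.5": ["earnings-feed"],
    "10.0.1.6": ["earnings-feed", "macro-feed"],
}

FEED_PORTS = {"earnings-feed": 5001, "macro-feed": 5002}

def compile_policy(subscribers, feed_ports):
    """Produce one allow rule per (subscriber, entitled feed), plus a default drop."""
    rules = []
    for ip, feeds in subscribers.items():
        for feed in feeds:
            rules.append({
                "priority": 200,
                "match": {"ipv4_dst": ip, "tcp_dst": feed_ports[feed]},
                "actions": ["forward"],
            })
    rules.append({"priority": 1, "match": {}, "actions": ["drop"]})  # default deny
    return rules

for rule in compile_policy(SUBSCRIBERS, FEED_PORTS):
    print(rule)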

More to Come

Stanford, which, along with the University of California, Berkeley, first developed the OpenFlow protocol, is using the NEC-IBM networking gear to deploy a campus-wide experimental network that will run alongside its production backbone network. As Higginbotham writes (see link above), Stanford is using network programmability to provision bandwidth on demand for campus researchers.

It’s good to read details about OpenFlow deployments and about how bigger-picture SDNs can be applied for real-world benefits. I suspect we’ll be reading about more SDN deployments as the year progresses.

One quibble I have with the IBM press release is that it does not clearly demarcate where OpenFlow ends at the controller and where SDN abstraction and higher-layer application intelligence take over.

Applications Drive Adoption

Reading about these early deployments, I couldn’t help but conclude that most of the value — and doubtless professional-services revenue for IBM — is derived from the application logic that informs the controller. Those applications ride above OpenFlow, which serves only to let the controller communicate with the switch so that it forwards packets in a prescribed manner.

Put another way, as pointed out by those with more technical acumen than your humble scribe, OpenFlow is a protocol for interaction between the control and the forwarding plane. It serves a commendable purpose, but it’s a purpose that can be fulfilled in other ways.

What’s compelling and potentially unique about emerging SDNs are the new applications that drive their adoption. Others have written about where SDNs do and don’t make sense, and now we’re beginning to see tangible confirmation from the marketplace, the ultimate arbiter of all things commercial.