
SDN Double Vision

There’s been some confusion — in my mind, anyway — about the software-defined networking (SDN) mandates being pursued by the Open Networking Foundation (ONF) and the proposed Software-Driven Networking Protocol (SDNP) workgroup of the Internet Engineering Task Force (IETF). I mean, both invoke the same three-letter SDN acronym, though one is “defined” and the other is “driven.”

You might wonder, as did I, how the two differ. You and I would not be alone, because members of the ONF, the wider OpenFlow community, and IETF participants have grappled with the same question.

Fortunately, the emergent IETF workgroup, which ambled into view only recently, is endeavoring to bring some clarity to the picture. Before we explore its continuing attempt at elucidation, let’s first review what the Open Networking Foundation is all about.

Forwarding and Management Abstractions

The ONF describes itself as a non-profit trade organization whose mission is to promote the development and use of SDN technologies. As the ONF website explains, these SDN technologies embody two basic principles:

1. Software-Defined Forwarding: Forwarding functionality should be controllable by software through an open interface. This can be achieved with hardware that accepts from software a set of <header template, forwarding action> entries, where the designated forwarding actions (such as forward out a particular port, or drop) are applied to packets with headers matching the template (which can contain wildcards). OpenFlow is an example of such an interface. (A sketch of this entry model appears after the list below.)

2. Global Management Abstractions: Networks should support a basic set of global management abstractions upon which more advanced management tools can be built. These global management abstractions might include, for example, a global view of the network, triggers on network events (such as topology changes or new flows), and the ability to control network elements by inserting entries into their hardware forwarding tables.
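To make the first principle concrete, here is a minimal sketch of the <header template, forwarding action> model described above, written in Python purely for illustration — every name in it is hypothetical, not any vendor’s API. A wildcard in a template field matches any header value, and the first matching entry’s action is applied.

```python
# A toy model of the ONF's <header template, forwarding action> idea.
# Illustrative only: every name here is hypothetical, not a vendor API.

WILDCARD = "*"  # a template field set to WILDCARD matches any header value

def matches(template: dict, header: dict) -> bool:
    """True if every template field is a wildcard or equals the header's value."""
    return all(v == WILDCARD or header.get(k) == v for k, v in template.items())

class FlowTable:
    """What the 'hardware' holds: prioritized (template, action) entries."""
    def __init__(self):
        self.entries = []

    def insert(self, template: dict, action: str):
        # In a real system this call would come from the control software.
        self.entries.append((template, action))

    def forward(self, header: dict) -> str:
        # Apply the first matching entry's action; drop unmatched packets.
        for template, action in self.entries:
            if matches(template, header):
                return action
        return "drop"

table = FlowTable()
table.insert({"dst_ip": "10.0.0.5", "tcp_port": WILDCARD}, "forward:port2")
table.insert({"dst_ip": WILDCARD, "tcp_port": 80}, "forward:port7")

print(table.forward({"dst_ip": "10.0.0.5", "tcp_port": 22}))  # forward:port2
print(table.forward({"dst_ip": "10.0.0.9", "tcp_port": 80}))  # forward:port7
```

The point of the model is that the forwarding loop does nothing but match and act; all of the intelligence lives in whatever software populates the table, which is exactly the separation an interface such as OpenFlow standardizes.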

Ambiguity Looms

Until now, the ONF’s focus has been nearly exclusively on OpenFlow, a protocol that allows interaction between a software-based control plane, residing on a server, and the data plane of a switch. That focus could change, as the above reference to “global management abstractions” suggests.

Alas, this is where ambiguity intrudes between the tasks the IETF’s would-be SDNP workgroup might assign itself and the mission and objectives of the ONF. There’s more than a little potential for overlap, conflict, and — at least in my case — confusion.

In a discussion thread on the SDNP birds-of-a-feather (BoF) mailing list this past fall, participants sought to draw lines of demarcation between their efforts and those of the ONF. Subsequent to that discussion, Harry Quackenboss, CEO of LAYERZngn, wrote a blog post on the topic.

Defining Scope

Quackenboss wrote that, though the ONF’s vision is expansive, its standardization efforts have been confined to OpenFlow and to communication between an OpenFlow-enabled switch and an OpenFlow controller. The IETF’s SDN discussions, on the other hand, pertain to management frameworks, data models, and coordination of management across networks. Therefore, the IETF discourse might reference OpenFlow as well as other protocols — proprietary and open, established and new.

In the aforementioned SDNP discussion thread last fall, David Meyer, distinguished engineer at Cisco Systems, wrote that the nascent workgroup might . . .

“. . . provide a set of ‘network abstractions’ and APIs that provide programmatic automation (etc.) of configuration, management, monitoring, data mining, telemetry, … to network services. This is the case for current control planes and will be the case for future (OF/SDN or otherwise) control planes. So I claim provision of these kinds of APIs and abstractions (i.e., the goal set of SDNP as I understand it) is largely orthogonal to OF/SDN.”

Paris in the Spring

Notwithstanding Meyer’s bid for consensus, the would-be workgroup has yet to cohere around a prescribed mission. Attempts to identify a problem statement and to propose specific use cases continue. The next session of the SDNP BoF is scheduled for the IETF meeting in Paris in March, by which time IETF participants are expected to have chosen a name and a way forward.

In the meantime, if anybody reading this post is involved in both efforts and can provide further insight as to how the IETF mandate will be distinguished from the ONF’s charter, I would appreciate your comments below.

SDN’s Continuing Evolution

At the risk of understatement, I’ll begin this post by acknowledging that we are witness to intensifying discussion about the applicability and potential of software-defined networking (SDN). Frequently, such discourse is conjoined and conflated with discussion of OpenFlow.

But the two, as we know, are neither the same nor necessarily inextricable. Software-defined networking is a big-picture concept involving controller-driven programmable networks, whereas OpenFlow is a protocol that enables interaction between a control plane and the data plane of a switch.

Not Necessarily Inextricable

A salient point to remember — there are others, I’m sure, but I’m leaning toward minimalism today — is that, while SDN and OpenFlow often are presented as joined at the hip, they need not be. You can have SDN without OpenFlow. Furthermore, it’s worth bearing in mind that the real magic of SDN resides beyond OpenFlow’s reach, at a higher layer of abstraction in the SDN value hierarchy.
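To illustrate why SDN does not require OpenFlow, here is a hedged sketch — Python again, with every name hypothetical — in which applications program the network through a controller abstraction, and the protocol that actually touches each switch is a pluggable detail beneath it.

```python
# Why SDN and OpenFlow are separable: applications talk to a controller
# abstraction; the protocol that programs each switch is a pluggable detail.
# All names below are hypothetical.

from abc import ABC, abstractmethod

class SouthboundDriver(ABC):
    """Anything that can install forwarding state on a switch."""
    @abstractmethod
    def install_rule(self, switch: str, match: dict, action: str): ...

class OpenFlowDriver(SouthboundDriver):
    def install_rule(self, switch, match, action):
        print(f"[openflow] flow_mod to {switch}: {match} -> {action}")

class CliDriver(SouthboundDriver):
    """SDN without OpenFlow: the same intent, pushed some other way."""
    def install_rule(self, switch, match, action):
        print(f"[cli] configure {switch}: match {match}, action {action}")

class Controller:
    def __init__(self, driver: SouthboundDriver):
        self.driver = driver

    def isolate_host(self, switch: str, ip: str):
        # The higher-layer abstraction that applications actually care about.
        self.driver.install_rule(switch, {"src_ip": ip}, "drop")

# Identical application logic, regardless of the wire protocol underneath.
Controller(OpenFlowDriver()).isolate_host("tor-1", "10.0.0.66")
Controller(CliDriver()).isolate_host("tor-1", "10.0.0.66")
```

The application logic is identical whichever driver is plugged in, which is precisely the sense in which SDN’s value sits above the protocol.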

So, with that in mind, let’s take a brief detour into SDN history, to see whether the past can inform the present and illuminate the future. I was fortunate enough to have some help on this journey from Amin Tootoonchian, a PhD student in the Systems and Networking Group, Department of Computer Science, University of Toronto.

Tootoonchian is actively involved in research projects related to software-defined networking and OpenFlow. He wrote a paper in conjunction with Yashar Ganjali, his advisor and an assistant professor at the University of Toronto, on HyperFlow, an application that runs on the open-source NOX controller to create a logically centralized but physically distributed control plane for OpenFlow. Tootoonchian developed and implemented HyperFlow, and he also is working on the next release of NOX. Recently, he spent six months pursuing SDN research at the University of California Berkeley.

His ongoing research has afforded insights into the origins and evolution of SDN. During a discussion over coffee, he kindly recommended some reference material for my edification and enlightenment. I’m all for generosity here, so I’m going to share those recommendations with you in what might become a series of posts. (I’d like to be more definitive, I really would, but I never know where I’m going to steer this thing I call a blog. It all comes down to time, opportunity, circumstances, and whether I get hit by a bus.)

Anyway, let’s start, strangely enough, at the beginning, with SDN concepts that ultimately led to the development of the OpenFlow protocol.

4D and Ethane: SDN Milestones 

Tootoonchian pointed me to papers and previous research involving academic projects such as 4D and Ethane, which served as recent antecedents to OpenFlow. There are other papers and initiatives he mentioned, a few of which I will reference, if all goes according to my current plan, in forthcoming posts.

Before 4D and Ethane, however, there were other SDN predecessors, most of which were captured in a presentation by Edward Crabbe, network architect at Google. Helpfully titled “The (Long) Road to SDN,” Crabbe’s presentation was given at a Tech Field Day last autumn.

Crabbe draws an SDN evolutionary line from Ipsilon’s General Switch Management Protocol (GSMP) in 1996 through a number of subsequent initiatives — including the IETF’s Forwarding and Control Element Separation (ForCES) and Path Computation Element (PCE) working groups — gradually progressing toward the advent of OpenFlow in 2008. He points to common threads in SDN that include the partitioning of resources and control within network elements, and the minimization of the network-element local control plane, involving offline control of forwarding state and of network-element resource allocation.

As for why SDN has drawn growing interest, development, and support, Crabbe cites two main reasons: cost and “innovation velocity.” I (and others) have touched on the cost savings previously, but Crabbe’s particular view from the parapets of Google warrants attention.

Capex and Opex Savings 

In his presentation, Crabbe cites cost savings relating to both capital and operating expenditures.

On the capex side, he notes that SDN can deliver efficient use of IT infrastructure resources, which, I note, results in the need to purchase fewer new resources. He makes particular mention of how efficient resource utilization applies to network-element CPU and memory as well as to underlying network capacity. He also notes SDN’s facility at moving the “heaviest workloads off expensive, relatively slow embedded systems to cheap, fast, commodity hardware.” Unstated, but seemingly implicit, is that the former are often proprietary whereas the latter are not.

Crabbe also mentions that capex savings can accrue from SDN’s ability to “provide visibility into, and synchronized control of, network state, such that underlying capacity may be used more efficiently.” Again, efficient utilization of the resources one owns means one derives full value from them before having to allocate spending to the purchase of new ones.

As for lower operating expenditures, Crabbe broadly states that SDN enables reduced network complexity, which results in less operational overhead and fewer outages. He offers a number of supporting examples, and the case he makes is straightforward and valid. If you can reduce network complexity, you will mitigate operational risk, save time, boost network-related productivity, and perhaps get the opportunity to allocate valuable resources to other, potentially more productive uses.

Enterprise Narrative Just Beginning 

Speaking of which, that brings us to Crabbe’s assertion that SDN confers “innovation velocity.” He cites several examples of how and where such innovation can be expedited, including faster feature implementation and deployment; partitioning of resources and control for relatively safe experimentation; and implementations on “relatively simple, well-known systems with well-defined interfaces.” Finally, he also emphasizes that the decoupling of the control plane from the network element facilitates “novel decision algorithms and hardware uses.”

It makes sense, all of it, at least insofar as Google is concerned. Crabbe’s points, of course, are similarly valid for other web-scale, cloud service providers.  But what about enterprises, large and small? Well, that’s a question still to be explored and answered, though the early adopters IBM and NEC brought forward earlier this week indicate that SDN also has a future in at least a few enterprise application environments.

IBM and NEC Find Early Adopters for OpenFlow-based SDNs

News arrived today that IBM and NEC have joined forces to work on OpenFlow deployments. The two companies’ joint solution integrates IBM’s OpenFlow-enabled RackSwitch G8264 10/40GbE top-of-rack switch with NEC’s ProgrammableFlow Controller, PF5240 1/10 Gigabit Ethernet Switch, and PF5820 10/40 Gigabit Ethernet Switch.

What’s more, the two technology partners boast early adopters, who are using OpenFlow-based software-defined networks (SDNs) for real-world applications.

Actual Deployments by Early Adopters

Granted, one of those organizations, Stanford University, is firmly ensconced in academia, but the other two are commercial concerns, which are using the technology for applications that apparently confer significant business value. As Stacey Higginbotham writes at GigaOm, these deployments validate the commercial potential of SDNs that utilize the OpenFlow protocol in enterprise environments.

The three early adopters cover some intriguing application scenarios. Tervela, a purveyor of a distributed data fabric, says the joint solution delivers dynamic networking that ensures predictable Big Data performance for complex, demanding applications such as global trading, risk analysis, and e-commerce.

Another early adopter is Selerity Corporation. At Network Computing, Mike Fratto provides an excellent overview of how Selerity — which provides real-time, machine-readable financial information to its subscribers — is using the technology to save money and reduce complexity by replacing a convoluted set of VLANs, high-end firewalls, and application-level processes with flow rules defined on NEC’s ProgrammableFlow Controller.
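As a rough illustration of what replacing VLANs and firewall rules with flow rules can look like — this is emphatically not Selerity’s or NEC’s actual configuration; every address, port, and function name below is hypothetical — consider an allow-list policy expressed directly as controller-installed flow entries.

```python
# Hypothetical illustration only -- not Selerity's or NEC's configuration.
# An allow-list policy expressed directly as controller-installed flow
# rules, standing in for a tangle of VLANs and firewall rules.

SUBSCRIBERS = {"192.0.2.10", "192.0.2.11"}  # hosts entitled to the feed
FEED_PORT = 9001                            # TCP port the data feed uses

def build_rules(subscribers, feed_port):
    """One permit rule per subscriber, plus a catch-all drop."""
    rules = [
        {"match": {"dst_ip": ip, "tcp_dst": feed_port}, "action": "forward"}
        for ip in sorted(subscribers)
    ]
    rules.append({"match": {}, "action": "drop"})  # default: deny
    return rules

for rule in build_rules(SUBSCRIBERS, FEED_PORT):
    # In a live deployment each rule would become a controller API call
    # or an OpenFlow flow_mod; here we simply show the encoded policy.
    print(rule)
```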

More to Come

Stanford, which, along with the University of California, Berkeley, first developed the OpenFlow protocol, is using the NEC-IBM networking gear to deploy a campus-wide experimental network that will run alongside its production backbone network. As Higginbotham writes (see link above), Stanford is using network programmability to provision bandwidth on demand for campus researchers.

It’s good to read details about OpenFlow deployments and about how bigger-picture SDNs can be applied for real-world benefits. I suspect we’ll be reading about more SDN deployments as the year progresses.

One quibble I have with the IBM press release is that it does not clearly demarcate where OpenFlow ends at the controller and where SDN abstraction and higher-layer application intelligence take over.

Applications Drive Adoption

Reading about these early deployments, I couldn’t help but conclude that most of the value — and doubtless professional-service revenue for IBM — is derived through the application logic that informs the controller. Those applications ride above OpenFlow, which only serves the purpose of allowing the controller to communicate with the switch so that it forwards packets in a prescribed manner.

Put another way, as pointed out by those with more technical acumen than your humble scribe, OpenFlow is a protocol for interaction between the control and the forwarding plane. It serves a commendable purpose, but it’s a purpose that can be fulfilled in other ways.

What’s compelling and potentially unique about emerging SDNs are the new applications that drive their adoption. Others have written about where SDNs do and don’t make sense, and now we’re beginning to see tangible confirmation from the marketplace, the ultimate arbiter of all things commercial.

Exploring the Symbiosis Between Merchant Silicon and Software-Defined Networking

In a recent post at EtherealMind.com, Greg Ferro examined possible implications associated with the impending dominance of merchant silicon in the networking industry.

Early in his post, Ferro reproduces a Broadcom graphic illustrating that the major switch vendors all employ Broadcom’s Trident chipset family in their gear. Vendors represented on the graphic include Cisco, Juniper, Dell, Arista, HP, IBM (BNT), and Alcatel-Lucent.

Abyss Awaits

Custom switching ASICs haven’t gone the way of eight-track cartridges just yet, but the technology industry’s grim reaper is quickening his loping stride and approaching at a baleful gallop, scythe at the ready. Interrelated economic and technological factors have conspired, as they will, to put the custom ASIC on a terminal path.

There’s a chicken-and-egg debate as to whether economics occasioned and hastened this technological change or whether the causation was reversed, but, either way, the result will be the same. At some point, for switching purposes, it will become counterproductive and economically untenable to continue to design, develop, and incorporate custom ASICs into shipping products.

What’s more, the custom ASIC’s trip to the boneyard will be expedited, at least in part, by the symbiotic relationship that has developed between merchant silicon and software-defined networking (SDN).

Difficult Adjustment for Some

Commercially, of course, merchant silicon preceded SDNs by a number of years. Recently, however, the two have converged dynamically, so much so that, as Ferro acknowledges, future differentiation in networking will derive overwhelmingly from advances in software rather than from those in hardware. Vendors will offer identical hardware. They will compete on the basis of their software, including the applications and, yes, the management capabilities they bring to market.

For companies that have marketed and sold their products primarily on the basis of hardware speeds and feeds and associated features and benefits, the adjustment will be difficult.  The bigger the ship, the harder it will be to turn.

There are some caveats, of course. While seemingly inevitable, this narrative could take some time to play out.  Although the commercial success of merchant silicon was not contingent on the rise of software-defined networks, the continued ascent of the latter will accelerate and cement the dominance of the former. To the extent that the SDN movement — perhaps torn between OpenFlow and other mechanisms and protocols — fragments or is otherwise slowed in its progress, the life of the custom ASIC might be prolonged.

Timing the Enterprise Transition

Similarly, even if we presuppose that SDN technology and its ecosystem progress smoothly and steadily, SDN is likely to gain meaningful traction first with service providers and only later with enterprises. That said, the line demarcating enterprises and service providers will move and blur as applications and infrastructure migrate, in whole or in part, to the cloud. It’s anybody’s guess as to when and exactly how that transition will transform the enterprise-networking market, but we can see the outlines of change on the horizon.

Nothing ever plays out in the real world exactly as it does on paper, so I expect complications to spoil the prescience of the foregoing forecast.

Still, I know one thing for sure: As the SDN phenomenon eventually takes hold, the role of the switch will change, and that means the design of the switch will change. If the switch is destined to become a dumbed-down data-forwarding box, it doesn’t need a custom ASIC. Merchant silicon is more than up to that task.

Big Switch Hopes Floodlight Draws Crowd

As the curtain came down on 2011, software-defined networking (SDN) and its open-source enabling protocol, OpenFlow, continued to draw plenty of attention. So far, 2012 has been no different, with SDN serving as a locus of intense activity, heady discourse, and steady technological advance.

Just last week, for instance, Big Switch Networks announced the release of Floodlight, a Java-based, Apache-licensed OpenFlow controller. In making Floodlight available under the Apache license, which allows the code to be reused for both research and commercial purposes, Big Switch hopes to establish the controller as a platform for OpenFlow application development.

Big Switch acknowledges that other OpenFlow controllers are available — the company even asks rhetorically, in a blog post accompanying the announcement, whether the world really needs another OpenFlow controller — but it believes that Floodlight is differentiated through its ease of use, extensibility, and robustness.

Controller as Platform 

I think we all realize by now that OpenFlow is just an SDN protocol. It allows data-path flow tables on switches to be programmed by a software-based controller, represented by the likes of Floodlight.  While OpenFlow might be essential as a mechanism for the realization of software-defined networks, it is not where SDN business value will be delivered or where vendors will find their pots of gold.

Next up in the hierarchy of SDN value are the controllers. As Big Switch recognizes, they can serve as platforms for SDN application development. Many vendors, including HP, believe that applications will define the value (and hence the money-making potential) in the SDN universe. That’s a fair assumption.
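What “controller as platform” means in practice can be sketched in a few lines. The following toy (Python, all names hypothetical; Floodlight itself is Java and exposes far richer interfaces) shows the essential shape: applications register for network events with the controller, and the controller exposes rule installation back to them.

```python
# A toy "controller as platform": applications subscribe to network events
# and react by installing rules. All names are hypothetical; a production
# controller such as Floodlight exposes analogous (much richer) Java APIs.

class Controller:
    def __init__(self):
        self.handlers = []  # applications that asked for packet-in events

    def register(self, handler):
        self.handlers.append(handler)

    def packet_in(self, switch: str, header: dict):
        # A switch with no matching flow entry punts the packet to the
        # controller; every registered application gets a look at it.
        for handler in self.handlers:
            handler(self, switch, header)

    def install_rule(self, switch: str, match: dict, action: str):
        print(f"install on {switch}: {match} -> {action}")

def pinning_app(ctl, switch, header):
    """A trivial 'application': pin future traffic for this destination."""
    ctl.install_rule(switch, {"dst_ip": header["dst_ip"]}, "forward:port1")

ctl = Controller()
ctl.register(pinning_app)
ctl.packet_in("edge-3", {"src_ip": "10.0.0.2", "dst_ip": "10.0.0.7"})
```

The applications, not the event plumbing, are where differentiation happens — which is why making the plumbing open and popular is sound platform strategy.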

Big Switch Networks has indicated that it wants to be the “VMware of networking,” delivering network virtualization and providing enterprise-oriented OpenFlow applications. If it can establish its controller as a popular platform for OpenFlow application development, it will lay a foundation both for its own commercial success and for enterprise OpenFlow in general.

Seeking Enterprise Value

The key to success, of course, will be the degree to which the applications, and the business value that accrues from them, are compelling. We’ll also see management and orchestration, perhaps integrated with the controller(s), but the commercial acceptance of the applications will determine the need and scope for automated management of the overall SDN environment. This is particularly true in the enterprise market that Big Switch has targeted.

What will those enterprise applications be? Well, if I knew the answer to that question, I might be on a personal trajectory to obscene wealth, membership in an exclusive secret society, and perhaps ownership of a professional sports team (or, at minimum, a racehorse).

Service Providers Have Different Agenda

Meanwhile, in the rarefied heights of the largest cloud providers, such as the companies that populate the board at the Open Networking Foundation (ONF), I suspect that nearly everything of meaningful business value connected with OpenFlow and SDN will be done internally. Google and Facebook, for instance, will design and build (perhaps through ODMs) their own bare-bones servers and switches, and they will develop their own SDN controllers and applications. Their network infrastructure is a business asset, even a competitive advantage, and they will prefer to build and customize their own SDN environments rather than procure products and solutions from networking vendors, whether established players or startups.

Most enterprises, though, will be inclined to look toward the vendor community to equip them with SDN-related products, technologies, and expertise. This is presuming, of course, that an enterprise market for OpenFlow-based SDNs actually finds its legs.

Plenty of Work Ahead

So, again, it all comes back to the power and value of the applications, and this is why Big Switch is so keen to open-source its controller.  The enterprise market for OpenFlow-based SDNs won’t grow unless IT departments are comfortable adopting it. Vendors such as Big Switch will have to demonstrate that they are safe bets, capable of providing unprecedented value at minimal risk.

It’s a daunting challenge. OpenFlow definitely possesses long-term enterprise potential, but today it remains a long way from being able to check all the enterprise boxes. Big Switch, not to mention the enterprise OpenFlow community, needs a meaningful ecosystem to materialize sooner rather than later.

Why Nicira Says Networking Doesn’t Need a VMware

At Martin Casado’s Network Heresy blog yesterday, a guest post was offered by Andrew Lambeth, who once led the vDS distributed switching project at VMware but is now, like Casado, ensconced at Nicira. The post was provocatively titled “Networking Doesn’t Need a VMWare.”

It was different in substance and tone from Casado’s posts, which typically are balanced, logical, and carefully constructed. I appreciate those qualities. Words matter, and Casado invariably takes the time to choose the right ones and to compose posts that communicate complicated ideas clearly. Even better, he does so without undue vendor bias.

Maybe he’s really a shrewd master of manipulation, but I always get the impression Casado is sincere, that he means what he says and says what he means.  One actually learns something from reading his blog. That’s always refreshing, in this industry or any other.

Defining (or Redefining) Network Virtualization 

As I said, the post from Lambeth was a departure in more ways than one. It was logical and carefully constructed, just like Casado’s writing, but it did not attempt to achieve any sort of balance. Instead, given the venue, it was strikingly partisan and tendentious.

Despite the technical window-dressing, it was devised to differentiate and distinguish Nicira’s approach to network virtualization from those of other players in the space, established vendors and startups alike. It also sought, implicitly if not explicitly, to derogate OpenFlow in the still-unfolding SDN hierarchy of value.

Just to summarize, though I encourage you to read the post yourself, Lambeth argues that, while there’s industry consensus on the desirability of network virtualization, there’s a significant difference of opinion on how it should be achieved. Network virtualization is not at all the same as server virtualization, he writes, citing the need in the former for “scale (lots of it) and distributed state consistency.” He concludes by saying that the current preoccupation with the data path, the realm of OpenFlow, is akin to “worrying about a trivial component of an otherwise enormously challenging problem.”

Positioning and Differentiation

Commenting on Lambeth’s post, Chris Hoff, formerly of Cisco and now with Juniper Networks (and a prolific tweeter, I might add), concluded correctly that it “smacks of positioning against both OpenFlow as well as other network virtualization startups.”

In issuing that positioning statement, Nicira not only is attempting to distance itself from the OpenFlow crowd; it also has at least a couple of specific vendors in mind.

One obvious target is Big Switch Networks. If you visit that vendor’s website, you will find that it expresses unqualified love for OpenFlow on its home page. It also says candidly that “networking needs a VMware.” Diametrically opposing that view, Nicira says networking doesn’t need a VMware. Furthermore, as I noted in a previous post, Nicira continues to expend considerable effort to downplay the significance of OpenFlow.

Thinking Beyond Big Switch

But Nicira is thinking about competitors other than Big Switch, too. Readers of this blog will know that one of my recurring themes — some would call it a conspiracy theory — is that the VCE partnership between Cisco and EMC is subject to increasing strain and tension. In short, EMC acquired VMware, Cisco didn’t, and now virtualization — and maybe VMware itself — is becoming integral to the future of networking.

Nicira’s Lambeth, formerly involved with distributed switching at VMware, and his counterparts at Big Switch agree that network virtualization is important. Where they disagree, perhaps, is in how it should be achieved.

Meanwhile, both vendors at one time or another, as Lambeth concedes at the outset of his post, have espoused variations on the claim that “networking needs a VMware.” Apparently, the team at Nicira has reconsidered that premise and is going in a different direction.

It might have adjusted course for reasons other than (or in addition to) those relating to architecture and technological requirements.

VMware’s Networking Ambitions

You see, VMware seems to believe that networking already has a VMware, whose name, conveniently enough, is VMware. Circumstantial evidence, including a recent post by VMware CTO Steve Herrod, suggests that VMware has ambitions that extend beyond server virtualization and well into network virtualization. Back in June, Greg Ferro also noted VMware’s interest in carving out a significant role for itself in network virtualization. In his commentary, Ferro cited a post by Allwyn Sequeira, security CTO at VMware.

Herrod has predicted that “software-defined networking will become a mainstay of data-center architectures” in 2012. It’s safe to assume that he foresees his company playing a major part in making his prognostication a reality.

Questioning Cisco’s CES Presence

In a recent piece at Forbes, Roger Kay complained that parasitic vendors are killing the annual Consumer Electronics Show (CES) in Las Vegas, the 2012 edition of which kicks off next week. When Kay refers to parasites, he means vendors that avail themselves of nearby hotel suites, where they host and entertain a select audience of invitation-only customers and partners, while evading the time-sucking clutches of the hoi polloi that pack the show floor.

As a vendor strategy, Kay allows, the hotel-suite gambit might make sense, but he’s concerned about the effect of the big-vendor exodus from the show floor. Among the industry players Kay calls on the carpet are Microsoft (exhibiting for the last time at CES this year), Dell, Acer, and Cisco.

Avoiding the Floor, Not the Show

Cisco? Yes, that Cisco. The networking titan that was supposed to be refocusing away from consumerist distractions has decided to hole up in a Las Vegas hotel suite next week on the periphery of a consumer-oriented electronics trade show. Unlike Kay, my problem with Cisco at CES is not that it prefers a sumptuous hotel suite to the lesser glories of the show floor, but that it will be there at all.

In the long-ago spring of 2011, when Cisco announced that it was immolating its Flip video camcorder business, the company stated that it was refocusing around five key technology areas: routing, switching, and services; collaboration; data center virtualization and the cloud; architectures; and video. Despite the apparent contradiction that Cisco was killing the Flip video camcorder while strategically prioritizing video, it seemed pretty clear Cisco’s denotation of “video” encompassed enterprise-related video, such as telepresence and videoconferencing, rather than the consumer-oriented video represented by the defunct Flip.

Belated Acknowledgment

Or did it? After all, Cisco kept its consumer-oriented umi telepresence systems even as it binned Flip. Then again, Cisco belatedly acknowledged that particular error of omission, recently shuttering the umi business, such as it was.

That means Cisco finally is getting itself aligned with its strategic mandate — except, of course, when it isn’t. You see, Cisco still has its home-networking offerings, represented by the Linksys product portfolio, and, unless the company is exceptionally free with its definitions and interpretations, it would encounter great difficulty reconciling that business with its self-proclaimed strategic priorities.

Last year, Cisco said it would attempt to align the Linksys business with its core network-infrastructure business, though that would appear more a theoretical than a practical exercise. Meanwhile, some analysts expected Cisco to divest its low-growth, low-margin consumer businesses, but Cisco’s home-networking group, which definitely checks those divestiture-qualifying boxes, remains in the corporate fold.

Still, speculation persists about a potential sale of the Linksys unit, even as representatives of that unit attempt to portray it as a “key part” of Cisco’s strategy. According to that defiant narrative, Linksys’ solutions are supposed to be the centerpiece of a master plan that would put Cisco at the forefront of home-entertainment networks that distribute Internet-based video throughout the home to devices such as television sets and Blu-ray players. But with Cisco’s recent retreat from its umi videoconferencing business, the company has decided that it will refrain from handling at least one type of video content in the home.

More Strategic Rigor Required

Look, I understand why Cisco likes video. It consumes a lot of bandwidth, and that means Cisco’s customers, including telcos and cable MSOs as well as enterprises, will need to spend more on network infrastructure to accommodate the rising tide of video traffic. I get the synergies with its core businesses, I really do.

But is Cisco truly equipped as a vendor and a brand that can win the hearts and minds of consumers and cross the threshold into the home? The company’s track record would suggest that the answer to that question is an emphatic and resounding no. Furthermore, does Cisco really need to be in the home to capture its “fair share” of video-based revenue? Again, the answer would seem to be negative.

When I read that Cisco was ramping up for CES, even though it doesn’t have a booth on the show floor, I was reminded that the company still needs to apply more rigor to its refocusing efforts. In the big picture, perhaps the resources expended to stage a consumer-oriented promotional blitz in Las Vegas next week do not distract significantly from Cisco’s professed strategic priorities. Nonetheless, I would argue that its CES excursion doesn’t help, and that an opportunity cost is still being incurred.