Dell’s Steady Progression in Converged Infrastructure

With its second annual Dell Storage Forum in Boston providing the backdrop, Dell made a converged-infrastructure announcement this week.  (The company briefed me under embargo late last week.)

The press release is available on the company’s website, but I’d like to draw attention to a few aspects of the announcement that I consider noteworthy.

First off, Dell is now positioned to offer its customers a full complement of converged infrastructure, spanning server, storage, and networking hardware, as well as management software. For customers seeking a single-vendor, one-throat-to-choke solution, this puts Dell on a par with IBM and HP, while Cisco still must partner with EMC or NetApp for its storage technology.

Bringing the Storage

Until this announcement, Dell was lacking the storage ingredients. Now, with what Dell is calling the Dell Converged Blade Data Center solution, the company is adding its EqualLogic iSCSI Blade Arrays to Dell PowerEdge blade servers and Dell Force10 MXL blade switching. Dell says this package gives customers an entire data center within a single blade enclosure, streamlining operations and management, and thereby saving money.

Dell’s other converged-infrastructure offering is the Dell vStart 1000. For this iteration of vStart, Dell is including, for the first time, its Compellent storage and Force10 networking gear in one integrated rack for private-cloud environments.

The vStart 1000 comes in two configurations: the vStart 1000m and the vStart 1000v. The packages are nearly identical — PowerEdge M620 servers, PowerEdge R620 management servers, Dell Compellent Series 40 storage, Dell Force10 S4810 ToR networking, plus Brocade 5100 ToR Fibre-Channel switches — but the vStart 1000m comes with Windows Server 2008 R2 Datacenter (with the Hyper-V hypervisor), whereas the vStart 1000v features trial editions of VMware vCenter and VMware vSphere (with the ESXi hypervisor).

As an aside, it’s worth mentioning that Dell’s inclusion of Brocade’s Fibre-Channel switches confirms that Dell is keeping that partnership alive to satisfy customers’ FC requirements.

Full Value from Acquisitions

In summary, then, Dell is delivering converged infrastructure with both of its in-house storage options, demonstrating that it has fully integrated its major hardware acquisitions into the mix. It’s covering as much converged ground as it can with this announcement.

Nonetheless, it’s fair to ask where Dell will find customers for its converged offerings. During my briefing with Dell, I was told that mid-market was the real sweet spot, though Dell also sees departmental opportunities in large enterprises.

The mid-market, though, is a smart choice, not only because the various technology pieces, individually and collectively, seem well suited to the purpose, but also because Dell, given its roots and lineage, is a natural player in that space. Dell has a strong mandate to contest the mid-market, where it can hold its own against any of its larger converged-infrastructure rivals.

Mid-Market Sweet Spot

What’s more, the mid-market — unlike cloud-service providers today and some large enterprises in the not-too-distant future — is unlikely to have the inclination, resources, and skills to pursue a DIY, software-driven, DevOps-oriented variant of converged infrastructure that might involve bare-bones hardware from Asian ODMs. At the end of the day, converged infrastructure is sold as packaged hardware, and paying customers will need to perceive and realize value from buying the boxes.

The mid-market would seem more than receptive to the value proposition that Dell is selling, which is that its converged infrastructure will reduce the complexity of IT management and deliver operational cost savings.

This finally leads us to a discussion of Dell’s take on converged infrastructure. As noted in an eChannelLine article, Dell’s notion of converged infrastructure encompasses operations management, services management, and applications management. As Dell continues down the acquisition trail, we should expect the company to place greater emphasis on software-based intelligence in those areas.

That, too, would be a smart move. The battle never ends, but Dell — despite its struggles in the PC market — is now punching above its weight in converged infrastructure.

VCs Weigh SDN’s Risks and Rewards

I’ve been thinking about a month-old post that Matthew Palmer wrote on the SDNCentral website.

In his post, Palmer notes that Arista, Insieme, and Vyatta were not financed by traditional venture capitalists, and he questions to what extent venture capitalists will plow money into the SDN space. He concludes that it is “hard to believe there will be a large number of SDN startups being funded” by VCs.

My objective here is not to challenge Palmer’s conclusion, which seems about right. Instead, I want to examine his assumptions to see whether I can add anything to the discussion.

Slow-Growth Dead Zone

For a long time, VCs have eschewed the networking market. In recent years, Arista Networks emerged as the only new Ethernet-switching vendor to crash the established vendors’ party. Arista, as Palmer points out, was funded by its founders, not by VCs, who generally perceived networking, especially the enterprise variant, as a slow-growth dead zone controlled and dominated by Cisco Systems.

Meanwhile, the VCs had unfortunate experiences in the network-access control (NAC) market, where they sought to make bets in an area that was seen as peripheral to the big vendors’ wheelhouses.

As for SDN today, Palmer thinks most of the major VCs have already placed their bets, and he believes Sequoia and Kleiner Perkins will fill out the field shortly. Beyond that, he doesn’t see much action.

Freeze Frame

He comes to that conclusion partly because of Cisco’s longstanding domination of the networking market. Writing that “Cisco learned a long time ago how to freeze markets and make markets look unattractive to competitors and investors,” Palmer believes the networking giant has put “everyone on notice” with its Insieme spin-in venture.  He believes Insieme, and whatever else Cisco does in SDN, will shut the door on SDN startups that aren’t already on the market with credible products and technologies that solve customer problems.

Perhaps VCs, as they have done in the recent past, will refrain from betting against the industry giant. That said, there already has been more VC activity in SDN than we’ve seen in network infrastructure for quite some time. In that respect, SDN demonstrably is different from the networking developments that have preceded it.

It’s different in other ways, too. I know I’ve hammered the same nail repeatedly in the past, but, at the risk of obsessive redundancy, I will do so again: The Open Networking Foundation (ONF) represents a powerful customer-driven dynamic that effectively challenges the vendor-led hegemony that has typified most networking markets and associated standards bodies. The ONF is run by and for service providers. Vendors are excluded from its board of directors, and their contributions are carefully circumscribed to conform with the dictates of the board.

Formidable Power

The catch is that the ONF is all about the needs and requirements of cloud service providers. The enterprise isn’t a primary consideration, though the development of enterprise-market demand for SDN products and technologies could further the strategic interests (economies of scale, innovation, vendor support, etc.) of the service-provider community.

Cisco is a formidable power, but it can’t impose its will on the ONF. In that respect, at least in the service-provider space, SDN is different from preceding network markets, such as Ethernet switching, which were basically incremental advancements in an established market model.

Call me crazy, but I believe that market and financial analysts should begin modeling scenarios in which the growth of SDN cuts into the service-provider revenues and margins of Cisco and Juniper. This will be particularly true in the cloud-service provider (IaaS) space initially, but it is likely to grow into other areas over time.

Enterprise Bulwark

The enterprise? That’s a tougher nut for SDN, for the reason I’ve cited earlier (ONF’s lack of an enterprise mandate), and for others as well. For starters, most enterprises don’t have the resources or the motivation (business case) to move away from networking models and relationships that have served them well.  As SDN evolves over time, that situation could change. For now, though, SDN is more a curiosity for enterprises than something they are considering for wholesale adoption.

Cisco and the other established networking vendors know the enterprise is safer ground for whatever SDN strategy or counterstrategy they present. In this respect, what Palmer terms “Insieme FUD” and other similar tactics are likely to be effective in the near term (the next two years).

I really can’t quibble with Palmer’s conclusion — as I wrote above, it feels about right — but I think the VC investments we’ve seen heretofore in SDN already suggest that it is perceived differently from the linear networking markets that have preceded it.   I also believe there’s reason to think that SDN will lead to significant disruptions to the provision of networking solutions in the service-provider space.

How far can it go in the enterprise? For now, prospects are murky, but the game is in the early stages, and much will depend on how the SDN ecosystem evolves as well as on how effective Cisco and others are at leveraging the advantages of incumbency.

Why Established Networking Vendors Aren’t Leading SDN Charge

Expressing equal parts exasperation and incredulity, Greg Ferro wonders why industry-leading networking vendors aren’t taking the innovative initiative in offering compelling strategies for software-defined networking (SDN).

The answer seems clear enough.

Although applications will be critical to the long-term commercial success of SDN, Google and the other movers and shakers that direct the affairs of the Open Networking Foundation (ONF) originally were drawn to SDN because they were frustrated with the lack of responsiveness and innovation from established vendors. As a result, they devised a networking model that not only separated the control and data planes of network elements, but that also, in the words of Google’s Amin Vahdat, separated the “evolution path for (network) hardware and software.”

Two Paths

Until now, those evolutionary paths have been converged and constrained inside the largely proprietary boxes of networking vendors. Google and its confreres within the ONF perceived that state of affairs as the yoke of vendor oppression. The network, slow to evolve and innovate, was getting in the way of progress. All the combustible ingredients of a cloud-service-provider insurrection had cohered. Google, taking the lead in organizing the other major service providers under the rubric of the ONF, lit the fuse.

The effects of the explosion are just being felt, and the reverberations will echo for some time. The big service providers, and perhaps many smaller ones, are gravitating away from the orbit of networking’s ancien regime. The question now is whether enterprises will follow. At some point, that probably will happen, but how and when it will unfold are less clear. Enterprises, unlike the board members of the ONF, are too diverse and numerous to organize in pursuit of common interests. Accordingly, vendors are still able to set the enterprise agenda.

But enterprises will notice the benefits that SDN is capable of conferring, and the ONF’s overlords will seek to cultivate and sustain an ecosystem that can deliver parallel hardware and software innovation. Google, for example, has indicated that while it develops its own networking hardware today, it would be amenable to buying OpenFlow switches from the vendor community. Those switches, likely to carry lower margins and prices than the gear sold by the major networking vendors, will probably come from ODMs using merchant silicon from Broadcom, Marvell, Fulcrum (Intel), and others.

Money’s in the Software

The major networking vendors are saying that the cleavage of the control and data planes is not a big deal, that it’s neither necessary nor critical for innovation and network programmability. Perhaps there is some merit to their arguments, but there’s no question that the separation of the control and data planes is not in their business interests. If some of their assertions have merit, they are also self-serving.

Cisco, as we’ve discussed before, might be able to develop software, but its business model is predicated on the sale of routers and switches. Effectively, it would have to remake itself comprehensively as a vendor of server-based controllers (software) and the applications that run on them. A proprietary hardware box, whether a server or switch, isn’t what the ONF wants.

If the ONF’s SDN vision prevails, the money is in software: server-based controllers, applications, management/orchestration frameworks, and so on. Successful vendors not only will have to be proficient at developing software; they’ll also have to be skilled at marketing and selling it. They’ll have to build their businesses around it.

This is the challenge the major networking vendors confront. It’s why they aren’t leading the SDN charge, and it also is why they are attempting to co-opt and subvert it.

Tidbits: Oracle-Arista Rumor, Controller Complexity, More Cisco SDN

This Week’s Rumor

Rumors that Oracle is considering an acquisition of Arista Networks have circulated this week. They’re likely nothing more than idle chatter. Arista has rejected takeover overtures previously, and it seems determined to go the IPO route.

Controller Complexity

Lori MacVittie provides consistently excellent blogging at F5 Networks’ DevCentral. In a post earlier this week, she examined the challenges and opportunities facing OpenFlow-based SDN controllers. Commenting on the code complexity of controllers, she writes the following:

This likely means unless there are some guarantees regarding the quality and thoroughness of testing (and thus reliability) of OpenFlow controllers, network operators are likely to put up a fight at the suggestion said controllers be put into the network. Which may mean that the actual use of OpenFlow will be limited to an ecosystem of partners offering “certified” (aka guaranteed by the vendor) controllers.

It’s a thought-provoking read, raising valid questions, especially in the context of enterprise customers.

Cisco SDN

Last week, Cisco and Morgan Stanley hosted a conference call on Cisco’s SDN strategy. (To the best of my knowledge, Morgan Stanley doesn’t have one — yet.)  Cisco was represented on the call by David Ward, VP and chief architect of the company’s Service Provider Division; and by Shashi Kiran, senior director of market management for Data Center/Virtualization and Enterprise Switching Group.

The presentation is available online. It doesn’t contain any startling revelations, and it functions partly as a teaser for forthcoming product announcements at CiscoLive in San Diego. Still, it’s worth a perusal for those of you seeking clues on where Cisco is going with its SDN plans. If you do check it out, you’ll notice that slide three features a number of headlines attesting to the industry buzz surrounding SDN. Two bloggers are cited in that slide: Greg Ferro (EtherealMind) and, yes, yours truly, who gets cited for a recent interpretation of Cisco’s SDN maneuverings.

Putting an ONF Conspiracy Theory to Rest

We know that the Open Networking Foundation (ONF) is controlled by the six major service providers that constitute its board of directors.

It is no secret that the ONF is built this way by design. The board members wanted to make sure that they got what they wanted from the ONF’s deliberations, and they felt that existing standards bodies, such as the IETF and IEEE, were gerrymandered and dominated by vendors with self-serving agendas.

The ONF was devised with a different purpose in mind — not to serve the interests of the vendors, but to further the interests of the service-provider community, especially the service providers who sit on the ONF’s board of directors. In their view, conventional networking was a drag on their innovation and business agility, obstructing progress elsewhere in their data centers and IT operations. Whereas compute and storage resources had been virtualized and orchestrated, networking remained a relatively costly and unwieldy fiefdom ruled by “masters of complexity” rummaging manually through an ever-expanding bag of ad-hoc protocols.

Organizing for Clout

Not getting what they desired from their networking vendors, the service providers decided to seize the initiative. Acting on its own, Google already had done just that, designing and deploying DIY networking gear.

The study of political elites tells us that an organized minority comprising powerful interests can impose its will on a disorganized majority.  In the past, as individual companies, the ONF board members had been unable to counter the agendas of the networking vendors. Together, they hoped to effect the change they desired.

So, we have the ONF, and it’s unlike the IETF and the IEEE in more ways than one. While not a standards body — the ONF describes itself as a “non-profit consortium dedicated to the transformation of networking through the development and standardization of a unique architecture called Software-Defined Networking (SDN)” — there’s no question that the ONF wants to ensure that it defines and delivers SDN according to its own rules. And at its own pace, too, not tied to the product-release schedules of networking vendors.

In certain respects, the ONF is all about a consortium of customers taking control and dictating what it wants from the vendor community, which, in this case, should be understood to comprise not only OEM networking vendors, but also ODMs, SDN startups, and purveyors of merchant silicon.

Vehicle of Insurrection?

Just to ensure that its leadership could not be subverted, though, the ONF stipulated that vendors would not be permitted to serve on its board of directors. That means that representatives of Cisco, Juniper, and HP Networking, for example, will never be able to serve on the ONF board.

At least within their self-determined jurisdiction, the ONF’s board members call all the shots. Or do they?

Commenting on my earlier post regarding Cisco’s SDN counterstrategy, a reader, who wished to remain anonymous (Anon4This1), wrote the following:

Regarding this point: “Ultimately, [Cisco] does not control the ONF.”

That was one of the key reasons for the creation of the ONF. That is, there was a sense that existing standards bodies were under the collective thumb of large vendors. ONF was created such that only the ONF board can vote on binding decisions, and no vendors are allowed on the board. Done, right? Ah, well, not so fast. The ONF also has a Technical Advisory Group (TAG). For most decisions, the board actually acts on the recommendations of the TAG. The TAG does not have the same membership restrictions that apply to the ONF board. Indeed, the current chairman of the TAG is none other than influential Cisco honcho, Dave Ward. So if the ONF board listens to the TAG, and the TAG listens to its chairman… Who has more control over the ONF than anyone? https://www.opennetworking.org/about/tag

Board’s Iron Grip

If you follow the link provided by my anonymous commenter, you will find an extensive overview of the ONF’s Technical Advisory Group (TAG). Could the TAG, as constituted, be the tail that wags the ONF dog?

My analysis leads me to a different conclusion. As I see it, the TAG serves at the pleasure of the ONF board of directors, individually and collectively. Nobody serves on the TAG without the express consent of the board of directors. Moreover, “TAG term appointments are annual and the chair position rotates quarterly.” While Cisco’s Dave Ward is the current chair, his term will expire and somebody else will succeed him.

What about the suggestion that the “board actually acts on recommendations of the TAG,” as my commenter asserts? In many instances, that might be true, but the form and substance of the language on the TAG webpage articulates clearly that the TAG is, as its acronym denotes, an advisory body that reports to (and “responds to requests from”) the ONF board of directors. The TAG offers technical guidance and recommendations, but the board makes the ultimate decisions. If the board doesn’t like what it’s getting from TAG members, annual appointments presumably can be allowed to expire and new members can succeed those who leave.

Currently, two networking-gear OEMs are represented on the ONF’s TAG. Cisco is represented by the aforementioned David Ward, and HP is represented by Jean Tourrilhes, an HP-Labs researcher in Networking and Communication who has worked with OpenFlow since 2008. These gentlemen seem to be on the TAG because those who run the ONF believe they can make meaningful contributions to the development of SDN.

No Coup

It’s instructive to note the company affiliations of the other six members serving on TAG. We find, for instance, Nicira CTO Martin Casado, as well as Verizon’s Dave McDysan, Google’s Amin Vahdat, Microsoft’s Albert Greenberg, Broadcom’s Puneet Agarwal, and Stanford’s Nick McKeown, who also is known as a Nicira co-founder and serves on that company’s board of directors.

If any company has pull, then, on the ONF’s TAG, it would seem to be Nicira Networks, not Cisco Systems. After all, Nicira has two of its corporate directors serving on the ONF’s TAG. Again, though, both gentlemen from Nicira are highly regarded SDN proponents who played critical roles in the advent and development of OpenFlow.

And that’s my point. If you look at who serves on the ONF’s TAG, you can clearly see why they’re in those roles and you can understand why the ONF board members would desire their contributions.

The TAG as a vehicle for an internal coup d’etat at the ONF? That’s one conspiracy theory that I’m definitely not buying.

Distributed, Hybrid, Northbound: Key Words in Cisco’s SDN Counterstrategy

When it has broached the topic of software-defined networking (SDN) recently, Cisco has attempted to reframe the discussion within the larger context of programmable networks. In Cisco’s conception of the evolving networking universe, the programmable network encompasses SDN, which in turn envelops OpenFlow.

We know by now that OpenFlow is a relatively small part of SDN. OpenFlow is a protocol that provides for the physical separation of the control and data planes, which heretofore have been combined within a switch or router. As such, OpenFlow enables server-based software (a controller) to determine how packets should be forwarded by network elements. As has been mentioned before, here and elsewhere, mechanisms other than OpenFlow could be used for the same purpose.
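To make the split concrete, here is a toy sketch in plain Python — invented class and field names, not the actual OpenFlow wire protocol — of the basic idea: a server-based controller holds the intelligence and programs the flow tables of otherwise dumb switches, which only match and forward.

```python
# Toy model of control/data-plane separation (hypothetical names, not real OpenFlow).

class Switch:
    """Data plane only: match incoming packets against installed flows."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # destination address -> output port

    def install_flow(self, dst, out_port):
        # Invoked by the controller over the southbound channel.
        self.flow_table[dst] = out_port

    def forward(self, packet):
        # Table hit: forward locally. Table miss: punt to the controller.
        return self.flow_table.get(packet["dst"], "send-to-controller")

class Controller:
    """Control plane: global view of the network, computes paths centrally."""
    def __init__(self):
        self.switches = {}

    def attach(self, switch):
        self.switches[switch.name] = switch

    def program_path(self, dst, hops):
        # hops: list of (switch_name, out_port) pairs along the chosen path
        for name, port in hops:
            self.switches[name].install_flow(dst, port)

ctl = Controller()
s1, s2 = Switch("s1"), Switch("s2")
ctl.attach(s1)
ctl.attach(s2)

# The controller, not the switches, decides how traffic to host "h2" flows.
ctl.program_path("h2", [("s1", 2), ("s2", 1)])

print(s1.forward({"dst": "h2"}))   # 2
print(s2.forward({"dst": "h2"}))   # 1
print(s1.forward({"dst": "h9"}))   # send-to-controller (table miss)
```

The point of the sketch is simply that the forwarding boxes carry no path-selection logic of their own, which is precisely the property that makes them candidates for commoditization.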

Logical Outcome

SDN is bigger than OpenFlow. It deals not only with the abstraction of the data plane, but also with higher-layer abstractions, at the control plane and above. The whole idea behind SDN is to put the applications, and the services they deliver, in the driver’s seat, so that the network does not become a costly encumbrance that impedes business agility and operational efficiency. In that sense, Cisco is right to suggest that programmable networks are a logical outcome that can and should result from the rise of SDN.

That said, the devil can always be found in the details, and we should note that Cisco’s definition of SDN, to the extent that it might invoke that acronym rather than one of its own, is at variance with the definition proffered by the Open Networking Foundation (ONF), which is controlled by the world’s largest cloud-service providers rather than by the world’s largest networking vendors. Cisco’s understanding of SDN looks a lot more like conventional networking, with a distributed or hybrid control plane instead of the logically centralized control plane favored by the ONF.

This post isn’t about value judgments, though. I am not here to bash Cisco, or anybody else for that matter, but to understand and interpret Cisco’s motivations as it formulates a counterstrategy to the ONF’s plans.

Bog-Standard Switches

Given the context, then, it’s easy to understand why Cisco favors the retention of the distributed — or, failing that, even a hybrid — control plane. Cisco is the market leader in switches and routers, and it owns a lot of valuable real estate on its customers’ networks.  If OpenFlow succeeds, not only in service-provider networks but also in the enterprise, Cisco is at risk of losing the market dominance it has worked so long and hard to build.

Frankly, there isn’t much differentiation to be achieved in bog-standard OpenFlow switches. If the Googles of the world get their way, the merchant silicon vendors all will support OpenFlow on their chipsets, and industry-standard boxes will be available from a number of ODMs and OEMs. It will be a prototypical buyer’s market, perhaps advancing quickly toward commoditization, and that’s not a prospect that Cisco shareholders and executives wish to entertain.

As Cisco comes to grips with SDN, then, it needs to rediscover the sort of leverage that it had before the advent of the ONF.  After all, if SDN is all about putting applications and other software literally in control of networks composed of industry-standard boxes, then network hardware will suffer a significant margin-squeezing demotion in the value hierarchy of customers.  And Cisco, as we’ve discussed before, develops more than its fair share of software, but remains a company wedded to a hardware-based business model.

Compromise and Accommodation 

Cisco would like to resist and undermine any potential market shift to the ONF’s server-based controllers. Fortunately for Cisco, many within the ONF are willing to acquiesce, at least initially and up to a point. A general consensus seems to have developed about the need for a hybrid control plane, which would accommodate both logically centralized controllers and distributed boxes. The ONF’s braintrust sees this move as a necessary compromise that will facilitate a long-term transition to a server-based model. It seems a logical and rational deduction — there’s a lot of networking gear installed out there that does not support the ONF’s conception of SDN — but it’s an opening for Cisco, nonetheless.

Beyond the issue of physical separation of the data plane and the control plane, Cisco has at least one other card to play. You might have noticed that Cisco representatives have talked a lot during the past couple of months about a “northbound interface” for SDN. As currently constituted, OpenFlow is a “southbound” interface, in that it serves as a mechanism for a controller to program a switch. On a network diagram, that communication flows downward (hence southbound).

In SDN, a northbound interface would go upward, extending from the switch to the control plane and potentially beyond to applications and management/orchestration software. This is a discussion Cisco wants to have with the industry, at the ONF and elsewhere. Whereas southbound interfaces are all about what is done to a switch by external software, the northbound interface is a conduit by which the switch confers value — in the form of information intrinsic to the network — to the higher layers of abstraction.
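A rough sketch of the distinction, again with invented names rather than any real or standardized API: the southbound side pushes flows down into switches, while a hypothetical northbound side lets applications read state intrinsic to the network and express intent to the controller.

```python
# Toy north/south sketch (all names hypothetical; no standard northbound API existed
# at the time of writing, which is exactly the gap discussed above).

class NetworkController:
    def __init__(self):
        self.link_load = {}   # link id -> utilization (0.0 to 1.0)

    # --- southbound side (controller -> switches), elided in this sketch ---
    def _push_flows(self, path):
        pass  # would program the switches along the path here

    # --- northbound side (applications -> controller) ---
    def get_link_load(self, link):
        # Applications read information intrinsic to the network...
        return self.link_load.get(link, 0.0)

    def request_low_latency_path(self, src, dst):
        # ...and express intent; the controller translates intent into flows.
        path = [src, "core-1", dst]   # placeholder path computation
        self._push_flows(path)
        return path

ctl = NetworkController()
ctl.link_load["core-1:edge-2"] = 0.85

# An application tuning its own behavior from network feedback — the
# "virtuous feedback loop" described below:
if ctl.get_link_load("core-1:edge-2") > 0.8:
    path = ctl.request_low_latency_path("edge-1", "edge-2")
print(path)
```

The sketch shows why the northbound interface is where network hardware can keep conferring value upward: the application's decision depends on state only the network can supply.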

Northbound Traffic

For now, the ONF has chosen not to define standard protocols or APIs for northbound interfaces, which could run from the networking devices up to the control plane and to higher layers of abstraction. Cisco, as the vendor with the largest installed base of gear in customer networks, finds itself in a logical position to play a role in helping to define those northbound interfaces.

Ideally, if programmable networks and SDN fulfill their potential, we’ll see the development of a virtuous feedback loop at the highest layers of abstraction, with software programming an underlying virtualized network and the network sending back state and other data that dynamically allows applications to perform even better.

Therefore, the northbound interface will be an important element in the future of SDN. Cisco hopes to leverage it, but more for the sustenance of its own business model than for the furtherance of the ONF’s objectives. Cisco holds some interesting cards, but it should be careful not to overplay them. Ultimately, it does not control the ONF.

As the SDN discourse elevates beyond OpenFlow, watch the traffic in the northbound lanes.

HP’s Latest Cuts: Will It Be Any Different This Time?

If you were to interpret this list of acquisitions by Hewlett-Packard as a past-performance chart, and focused particularly on recent transactions running from the summer of 2008 through to the present, you might reasonably conclude that HP has spent its money unwisely.

That’s particularly true if you correlate the list of transactions with the financial results that followed. Admittedly, some acquisitions have performed better than others, but arguably the worst frights in this house of M&A horrors have been delivered by the most costly buys.

M&A House of Horrors

As Exhibit A, I cite the acquisition of EDS, which cost HP nearly $14 billion. As a series of subsequent staff cuts and reorganizations illustrate, the acquisition has not gone according to plan. At least one report suggested that HP, which just announced that it will shed about 27,000 employees during the next two years, will make about half its forthcoming personnel cuts in HP Enterprise Services, constituted by what was formerly known as EDS. Rather than building on EDS, HP seems to be shrinking the asset systematically.

The 2011 acquisition of Autonomy, which cost HP nearly $11 billion, seems destined for ignominy, too. HP described its latest financial results from Autonomy as “disappointing,” and though HP says it still has high hopes for the company’s software and the revenue it might derive from it, many senior executives at Autonomy and a large number of its software developers already have decamped. There’s a reasonable likelihood that HP is putting lipstick on a slovenly pig when it tries to put the best face on its prodigious investment in Autonomy.

Taken together, the EDS and Autonomy deals saw HP wager a nominal $25 billion. In reality, it has spent more than that when one considers the additional operational expenses involved in integrating and (mis)managing those assets.

Still Haven’t Found What They’re Looking For

Then there was the Palm acquisition, which involved HP shelling out $1.2 billion to Bono and friends. By the time the sorry Palm saga ended, nobody at HP was covered in glory. It was an unmitigated disaster, marked by strategic reversals and tactical blunders.

I also would argue that HP has not gotten full value from its 3Com purchase. HP bought 3Com for about $2.7 billion, and many expected the acquisition to help HP become a viable threat to Cisco in enterprise networking. Initially, HP made some market-share gains with 3Com in the fold, but those advances have stalled, as Cisco CEO John Chambers recently chortled.

It is baffling to many, your humble scribe included, that HP has not properly consolidated its networking assets — HP ProCurve, 3Com outside China, and H3C in China. Even to this day, the three groups do not work together as closely as they should. H3C in China apparently regards itself as an autonomous subsidiary rather than an integrated part of HP’s networking business.

Meanwhile, HP runs two network operating systems (NOSes) across its gear. HP justifies its dual-NOS strategy by asserting that it doesn’t want to alienate its installed base of customers, but there should be a way to manage a transition toward a unified code base. There should also be a way for all the gear to be managed by the same software. In sum, there should be a way for HP to get better results from its investments in networking technologies.

Too Many Missteps

As for some of HP’s other acquisitions during the last few years, it overpaid for 3PAR in a game of strategic-bidding chicken against Dell, though it seems to have wrung some value from its relatively modest purchase of LeftHand Networks. The jury is still out on HP’s $1.5-billion acquisition of ArcSight and its security-related technologies.

One could argue that the rationales behind the acquisitions of at least some of those companies weren’t terrible, but that the execution — the integration and assimilation — is where HP comes up short. The result, however, is the same: HP has gotten poor returns on its recent M&A investments, especially those represented by the largest transactions.

The point of this post is that we have to put the latest announcement about significant employee cuts at HP into a larger context of HP’s ongoing strategic missteps. Nobody said life is fair, but nonetheless it seems clear that HP employees are paying for the sins of their corporate chieftains in the executive suites and in the company’s notoriously fractious boardroom.

Until HP decides what it wants to be when it grows up, the problems are sure to continue. This latest in a long line of employee culling will not magically restore HP’s fortunes, though the bleating of sheep-like analysts might lead you to think otherwise. (Most market analysts, and the public markets that respond to them, embrace personnel cuts at companies they cover, nominally because the staff reductions result in near-term cost savings. However, companies with bad strategies can slash their way to diminutive irrelevance.)

Different This Time? 

Two analysts refused to read from the knee-jerk script that says these latest cuts necessarily position HP for better times ahead. Baird and Co. analyst Jason Noland was troubled by the drawn-out timeframe for the latest job cuts, which he described as “disheartening” and suggested would put a “cloud over morale.” Noland showed a respect for history and a good memory, saying that it is uncertain whether these layoffs would bolster the company’s fortunes any more than previous sackings had done.

Quoting from a story first published by the San Jose Mercury News:

In June 2010, HP announced it was cutting about 9,000 positions “over a multiyear period to reinvest for future growth.” Two years earlier, it disclosed a “restructuring program” to eliminate 24,600 employees over three years. And in 2005, it said it was cutting 14,500 workers over the next year and a half.

Rot Must Stop

If you are good with sums, you’ll find that HP has announced more than 48,000 job cuts from 2005 through 2010. And now another 27,000 over the next two years. But this time, we are told, it will be different.

Noland isn’t the only analyst unmoved by that argument. Deutsche Bank analysts countered that past layoffs “have done little to improve HP’s competitive position or reduce its reliance on declining or troubled businesses.” To HP’s assertion that cost savings from these cuts would be invested in growth initiatives such as cloud computing, security technology, and data analytics, Deutsche’s analysts retorted that HP “has been restructuring for the past decade.”

Unfortunately, it hasn’t only been restructuring. HP also has been an acquisitive spendthrift, investing and operating like a drunken, peyote-slathered sailor.  The situation must change. The people who run HP need to formulate and execute a coherent strategy this time so that other stakeholders, including those who still work for the company, don’t have to pay for their sins.

Lessons for Cisco in Cius Failure

When news broke late last week that Cisco would discontinue development of its Android-based Cius, I remarked on Twitter that it didn’t take a genius to predict the demise of Cisco’s enterprise-oriented tablet. My corroborating evidence was an earlier post from yours truly — definitely not a genius, alas — predicting the Cius’s doom.

The point of this post, though, will be to look forward. Perhaps Cisco can learn an important lesson from its Cius misadventure. If Cisco is fortunate, it will come away from its tablet failure with valuable insights into itself as well as into the markets it serves.

Negative Origins

While I would not advise any company to navel-gaze obsessively, introspection doesn’t hurt occasionally. In this particular case, Cisco needs to understand what it did wrong with the Cius so that it will not make the same mistakes again.

If Cisco looks back in order to look forward, it will find that it pursued the Cius for the wrong reasons and in the wrong ways.  Essentially, Cisco launched the Cius as a defensive move, a bid to arrest the erosion of its lucrative desktop IP-phone franchise, which was being undermined by unified-communications competition from Microsoft as well as from the proliferation of mobile devices and the rise of the BYOD phenomenon. The IP phone’s claim to desktop real estate was becoming tenuous, and Cisco sought an answer that would provide a new claim.

In that respect, then, the Cius was a reactionary product, driven by Cisco’s own fears of desktop-phone cannibalization rather than by the allure of a real market opportunity. The Cius reeked of desperation, not confidence.

Hardware as Default

While the Cius’ genetic pathology condemned it at birth, its form also hastened its demise. Cisco now is turning exclusively to software (Jabber and WebEx) as the answer to its enterprise-collaboration conundrum, but it could have done so far earlier, before the Cius was conceived. By the time Cisco gave the green light to the Cius, Apple’s iPhone and iPad already had become tremendously popular with consumers, a growing number of whom were bringing those devices to their workplaces.

Perhaps Cisco’s hubris led it to believe that it had the brand, design, and marketing chops to win the affections of consumers. It has learned otherwise, the hard way.

But let’s come back to the hardware-versus-software issue, because Cisco’s Cius setback and how the company responds to it will be instructive, and not just within the context of its collaboration products.

Early Warning from a Software World

As noted previously, Cisco could have gone with a software-based strategy before it launched the Cius. It knew where the market was heading, and yet it still chose to lead with hardware. As I’ve argued before, Cisco develops a lot of software, but it doesn’t act (or sell) like a software company. It can sell software, but typically only if the software is contained inside, and sold as, a piece of hardware. That’s why, I believe, Cisco answered the existential threat to its IP-phone business with the Cius rather than with a genuine software-based strategy. Cisco thinks like a hardware company, and it invariably proposes hardware products as reflexive answers to all the challenges it faces.

At least with its collaboration products, Cisco might have broken free of its hard-wired hardware mindset. It remains to be seen, however, whether the deprogramming will succeed in other parts of the business.

In a world where software is increasingly dominant — through virtualization, the cloud, and, yes, in networks — Cisco eventually will have to break its addiction to the hardware-based business model. That won’t be easy, not for a company that has made its fortune and its name selling switches and routers.

Cisco’s Storage Trap

Recent commentary from Barclays Capital analyst Jeff Kvaal has me wondering whether Cisco might push into the storage market. In turn, I’ve begun to think about a strategic drift at Cisco that has been apparent for the last few years.

But let’s discuss Cisco and storage first, then consider the matter within a broader context.

Risks, Rewards, and Precedents

Obviously a move into storage would involve significant risks as well as potential rewards. Cisco would have to think carefully, as it presumably has done, about the likely consequences and implications of such a move. The stakes are high, and other parties — current competitors and partners alike — would not sit idly on their hands.

Then again, Cisco has been down this road before, when it chose to start selling servers rather than relying on boxes from partners, such as HP and Dell. Today, of course, Cisco partners with EMC and NetApp for storage gear. Citing the precedent of Cisco’s server incursion, one could make the case that Cisco might be tempted to call the same play.

After all, we’re entering a period of converged and virtualized infrastructure in the data center, where private and public clouds overlap and merge. In such a world, customers might wish to get well-integrated compute, networking, and storage infrastructure from a single vendor. That’s a premise already accepted at HP and Dell. Meanwhile, it seems increasingly likely that data-center infrastructure is coming together, in one way or another, in service of application workloads.

Limits to Growth?

Cisco also has a growth problem. Despite attempts at strategic diversification, including failed ventures in consumer markets (Flip, anyone?), Cisco still hasn’t found a top-line driver that can help it expand the business while supporting its traditional margins. Cisco has pounded the table perennially for videoconferencing and telepresence, but it’s not clear that Cisco will see as much benefit from the proliferation of video collaboration as once was assumed.

To complicate matters, storm clouds are appearing on the horizon, with Cisco’s core businesses of switching and routing threatened by the interrelated developments of service-provider alienation and software-defined networking (SDN). Cisco’s revenues aren’t about to fall off a cliff by any means, but neither are they on the cusp of a second-wind surge.

Such uncertain prospects must concern Cisco’s board of directors, its CEO John Chambers, and its institutional investors.

Suspicious Minds

In storage, Cisco currently has marriages of mutual convenience with EMC (VBlocks and the sometimes-strained VCE joint venture) and with NetApp (the FlexPod reference architecture).  The lyrics of Mark James’ song Suspicious Minds are evocative of what’s transpiring between Cisco and these storage vendors. The problem is not only that Cisco is bigamous, but that the networking giant might have another arrangement in mind that leaves both partners jilted.

Neither EMC nor NetApp is oblivious to the danger, and each has taken care to reduce its strategic reliance on Cisco. Conversely, Cisco would be exposed to substantial risks if it were to abandon its existing partnerships in favor of a go-it-alone approach to storage.

I think that’s particularly true in the case of EMC, which is the majority owner of server-virtualization market leader VMware as well as a storage vendor. The corporate tandem of VMware and EMC carries considerable enterprise clout, and Cisco is likely to be understandably reluctant to see the duo become its adversaries.

Caught in a Trap

Still, Cisco has boxed itself into a strategic corner. It needs growth, it hasn’t been able to find it from diversification away from the data center, and it could easily see the potential of broadening its reach from networking and servers to storage. A few years ago, the logical choice might have been for Cisco to acquire EMC. Cisco had the market capitalization and the onshore cash to pull it off five years ago, perhaps even three years ago.

Since then, though, the companies’ market fortunes have diverged. EMC now has a market capitalization of about $54 billion, while Cisco’s is slightly more than $90 billion. Even if Cisco could find a way of repatriating its offshore cash hoard without taking a stiff hit from the U.S. taxman, it wouldn’t have the cash to pull off an acquisition of EMC, whose shareholders doubtless would be disinclined to accept Cisco stock as part of a proposed transaction.

Therefore, even if it wanted to do so, Cisco cannot acquire EMC. It might have been a good move at one time, but it isn’t practical now.

Losing Control

Even NetApp, with a market capitalization of more than $12.1 billion, would rate as the biggest purchase by far in Cisco’s storied history of acquisitions. Cisco could pull it off, but then it would have to try to further counter and commoditize VMware’s virtualization and cloud-management presence through a fervent embrace of something like OpenStack or a potential acquisition of Citrix. I don’t know whether Cisco is ready for either option.

Actually, I don’t see an easy exit from this dilemma for Cisco. It’s mired in somewhat beneficial but inherently limiting and mutually distrustful relationships with two major storage players. It would probably like to own storage just as it owns servers, so that it might offer a full-fledged converged infrastructure stack, but it has let the data-center grass grow under its feet. Just as it missed a beat and failed to harness virtualization and cloud as well as it might have done, it has stumbled similarly on storage.

The status quo is likely to prevail until something breaks. As we all know, however, making no decision effectively is a decision, and it carries consequences. Increasingly, and to an extent that is unprecedented, Cisco is losing control of its strategic destiny.

Why Google Isn’t A Networking Vendor

Invariably trenchant and always worth reading, Ivan Pepelnjak today explores what he believes Google is doing with OpenFlow. As it turns out, Pepelnjak posits that Google is doing more with other technologies than it is with OpenFlow, seemingly building a modern routing platform and a traffic-engineering application deserving universal envy and admiration.

In assessing what Google is doing, Pepelnjak would seem to get it right, as he usually does, but I would like to offer modest commentary on a couple of minor points. Let’s start with his assessment of how Google is using OpenFlow:

“Google is using OpenFlow between controller and adjacent chassis switches because (like every other vendor) they need a protocol between the control plane and forwarding planes, and they decided to use an already-documented one instead of inventing their own (the extra OpenFlow hype could also persuade hardware vendors to implement more OpenFlow capabilities in their next-generation chipsets).”

OpenFlow: Just A Piece of the Puzzle

First off, Pepelnjak is essentially right. I’m not going to quarrel with his central point, which is that Google adopted OpenFlow as a communication protocol between (and that separates) the control plane and the forwarding plane. That’s OpenFlow’s purpose, its raison d’être, so it’s not surprising that Google would use it that way. As Chris Rock might say, that’s what OpenFlow is supposed to do.

Larger claims made on behalf of OpenFlow are not its fault. Subsequently, Pepelnjak states that OpenFlow is but a small piece of the networking puzzle at Google, and he’s right there, too. I don’t think it’s possible for OpenFlow to be a bigger piece. As a protocol between the control and forwarding planes, OpenFlow is what it is.
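The division of labor described above — a controller that decides, and switches that merely match and forward — can be sketched in a few lines. This is a conceptual illustration only, not the real OpenFlow wire protocol; all class and method names here are invented for the sketch.

```python
# Conceptual sketch of the control/forwarding split that OpenFlow formalizes.
# Names are illustrative; real OpenFlow uses FlowMod/PacketIn messages over TCP.

class ForwardingPlane:
    """A switch's flow table: it only matches and forwards; it makes no routing decisions."""
    def __init__(self):
        self.flow_table = {}  # match field -> output port

    def install_flow(self, match, out_port):
        # In real OpenFlow, this state arrives as a FlowMod message from the controller.
        self.flow_table[match] = out_port

    def forward(self, packet_dst):
        # Traffic with no matching entry would be punted to the controller
        # (a PacketIn message in OpenFlow terms).
        return self.flow_table.get(packet_dst, "punt-to-controller")

class ControlPlane:
    """A centralized controller: computes paths and pushes them down to switches."""
    def __init__(self, switches):
        self.switches = switches

    def program_path(self, dst, port_by_switch):
        for name, port in port_by_switch.items():
            self.switches[name].install_flow(dst, port)

switches = {"s1": ForwardingPlane(), "s2": ForwardingPlane()}
controller = ControlPlane(switches)
controller.program_path("10.0.0.2", {"s1": 2, "s2": 1})

print(switches["s1"].forward("10.0.0.2"))  # 2
print(switches["s2"].forward("10.0.0.9"))  # punt-to-controller
```

The point of the sketch is the narrowness of the interface: everything OpenFlow does fits in that `install_flow`/`punt` channel, which is exactly why it can only ever be a small piece of the larger puzzle.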

Beyond that, though, Pepelnjak refers to Google as a “vendor,” which I find odd.

Not a Networking Vendor

In many ways, Google is a vendor. It’s a cloud vendor, it’s an advertising vendor, it’s a SaaS vendor, and so on. But, in this particular context, Pepelnjak seems to be classifying Google as a networking vendor. That would be an incorrect designation, and here’s why: Vendors sell things, they vend. Google doesn’t sell the homegrown networking hardware and software that it implements internally. It’s doing it only for itself, not as a business proposition that would involve it proffering the technology to customers. As such, it should not be tossed into the same networking-vendor bucket as a Cisco, a Juniper, or an HP.

In fact, Google is going the roll-your-own route with its network infrastructure precisely because it couldn’t get what it wanted from networking vendors. In that respect, it is the anti-vendor. Google and the other gargantuan cloud-service providers who steer the Open Networking Foundation (ONF) promulgated software-defined networking (SDN) and espoused OpenFlow because they wanted network infrastructure to be different from the conventional approaches advanced by networking vendors and the traditional networking industry.

Whatever else one might think of the ONF, it’s difficult not to conclude that it represents an instance of customers (in this case, cloud-service providers) attempting to wrest strategic control from vendors to set a technological agenda. Google, a networking vendor? Only if one misunderstands the origins and purpose of ONF.

Creating a Market

Nonetheless, Google might have a hidden agenda here, and Pepelnjak touches on it when he writes parenthetically that “the extra OpenFlow hype could also persuade hardware vendors to implement more OpenFlow capabilities in their next-generation chipsets.”

Well, yes. Just because Google has chosen to roll its own and doesn’t like what the networking industry is selling today, it doesn’t necessarily mean that it has closed the door to buying from vendors in the future, presuming said vendors jump on the ONF bandwagon and start developing the sorts of products Google wants. Google doesn’t want to disclose the particulars of its network infrastructure, which it views as a source of competitive advantage and differentiation, but it is not averse to hyping OpenFlow in a bid to spur the supply side of the market to get with the SDN program.

Later in his post, Pepelnjak notes that Google used “standard protocols (BGP and IS-IS) like everyone else and their traffic engineering implementation (and probably the northbound API) is proprietary. How is that different (from the openness perspective) from networks built from Juniper’s or Cisco’s gear?”

Critical Distinction

Again, my point is that Google is not a vendor. It is a customer building network technologies for its own use. By the very nature of that implicit (non)-transaction, the technologies in question will be proprietary. They’re not going anywhere other than Google’s data-center network. Google owns them, and it is in full control of defining them and releasing them on a schedule that suits Google’s internal objectives.

It’s rather different for vendors, who profit — if they’re doing it right — from the commercial sale of products and technologies to customers. There might be value in proprietary products and technologies in that context, but customers need to ensure that the proprietary value outweighs the proprietary risks, typically represented by vendor lock-in and upgrade cycles dictated by the vendor’s product-release schedule.

Google is not a vendor, and neither are the other companies driving the agenda of the ONF. I think it’s critical to make that distinction in the context of SDN and, to a lesser extent, OpenFlow.

IRTF Considers SDN

For a while now, the Internet Engineering Task Force (IETF) has been looking for a role to play in relation to software-defined networking (SDN).

Even as the IETF struggles to identify a clear mandate for itself as a potential standards arbiter for SDN, the Internet Research Task Force (IRTF) appears ready to jump into the fray. The IRTF doesn’t conflict with the IETF, so its involvement with SDN would be parallel and ideally complementary to anything the IETF might pursue.

Both the IETF and IRTF are overseen by the Internet Architecture Board (IAB). Whereas the IETF is mandated to focus on shorter-term issues involving engineering and standards, the IRTF focuses on longer-term Internet research.

Hybrid SDN Models

Cisco Systems’ David Meyer has drafted a proposed IRTF charter for the Software Defined Networking Research Group (SDNRG). It features an emphasis on hybrid SDN models, “in which control and data plane programmability works in concert with existing and future distributed control planes.”

The proposed charter also states that the SDNRG will provide “objective definitions, metrics and background research, with the goal of providing this information as input to protocol, network, and service design to SDOs and other standards producing organizations such as the IETF, ITU-T, IEEE, ONF, MEF, and DMTF.”

How the research of the IRTF and the eventual standards activity of the IETF conform or diverge from the work of the Open Networking Foundation (ONF) will be interesting to monitor. The ONF is controlled exclusively at the board level by cloud service providers, whereas vendors will be actively steering the work of the IETF and IRTF.

What the Battle for “SDN” Reveals

As Mike Fratto notes in an excellent piece over at Network Computing, “software-defined networking” has become a semantic battleground, with the term getting pushed and pulled in various directions.

For good reason, Fratto was concerned that the proliferating definitions of software-defined networking (SDN) were in danger of shearing the term of all meaning. He compared what was happening to SDN to what happened previously to terms such as cloud computing, and he opined that once a term means anything, it means nothing.

Setting Record Straight

Not wanting to be a passive observer to such linguistic nihilism, Fratto decided to set the record straight. He rightly observes that software-defined networking (SDN), as we understand it today, derives its provenance from the Open Networking Foundation (ONF). As such, the ONF’s definition of SDN should be the one that holds sway.

Citing an ONF white paper, “Software-Defined Networking:  The New Norm for Networks,” Fratto notes that, properly understood, SDN emphasizes three key features:

  • Separation of the control plane from the data plane
  • A centralized controller and view of the network
  • Programmability of the network by external applications
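The three criteria above are easiest to see when they appear together in one place: a controller that holds the global network view, keeps flow-table state separate from the switches’ decision-making, and exposes a northbound API for external applications. The toy sketch below illustrates that combination; every class and method name in it is invented for illustration, not taken from any real controller.

```python
# Toy illustration of the three ONF criteria for SDN; names are hypothetical.

class Controller:
    """Centralized controller: global network view plus programmable flow state."""
    def __init__(self):
        self.topology = {}      # switch -> ports: the centralized view of the network
        self.flow_tables = {}   # switch -> {match: action}: control/data plane separation

    def add_switch(self, name, ports):
        self.topology[name] = ports
        self.flow_tables[name] = {}

    # Northbound API: external applications program the network through the
    # controller rather than configuring each device box-by-box.
    def apply_policy(self, match, action):
        for switch in self.topology:
            self.flow_tables[switch][match] = action

ctrl = Controller()
ctrl.add_switch("edge1", ports=[1, 2])
ctrl.add_switch("edge2", ports=[1, 2])

# An external security application blocks telnet network-wide with one call.
ctrl.apply_policy(match="tcp/23", action="drop")
print(ctrl.flow_tables["edge1"]["tcp/23"])  # drop
```

A product that offers only one of these properties — programmable boxes without a centralized view, say — would fail the ONF test, which is precisely the distinction at stake in the definitional tug-of-war.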

Why the Fuss?

I agree that the ONF’s definition is the one that should be authoritative and, well, definitive. What other vendors are doing in areas such as network virtualization and network programmability might be interesting — and perhaps even commendable and valuable to their customers — but unless what they are doing meets the ONF criteria, it should not qualify as SDN.  Furthermore, if what they’re doing doesn’t qualify as SDN, they should call it something else and explain its architectural principles and value clearly. An ambiguous, perhaps even disingenuous, linkage with SDN ought to be avoided.

What Fratto does not explore is why certain parties are attempting to muddy the SDN waters. In my experience, when vendors contest terminology, it suggests the linguistic real estate in question is uncommonly valuable, either strategically or monetarily. I posit that SDN is both.

Like “cloud” before it, everybody seemingly recognizes that SDN has struck a resounding chord. There’s hype attached to SDN, sure, but it also has genuine appeal and has generated considerable interest. As the composition of the ONF’s board of directors has suggested, and as the growing number of cloud service-provider deployments attest, SDN is not a passing fad. At least in the large cloud shops, it already has practical utility and business value.

The Value of Words

That value is likely to grow over time, and, while the enterprise will be a tough nut to crack for more than one reason, it’s certainly conceivable that SDN eventually will find favor among at least certain enterprise demographics. The timeline for that outcome is not imminent, and, as I’ve written previously, Cisco isn’t about to “do a Nortel” and hold a going-out-of-business sale. Nonetheless, the auguries suggest that the ONF’s SDN will be with us for a long time and represents a significant business threat to networking’s status quo.

In this context, language really is power. If entrenched interests — such as the status quo of networking — don’t like an emerging trend, one way they can attempt to derail it is by co-opting it or subverting it. After all, it’s only an emerging trend, not yet entrenched, so therefore its terminology is nascent, too. If, as a major vendor with industry clout, you can change the meaning of the terminology, or make it so ambiguous that practically anything qualifies for inclusion, you can reassert control and dilute the threat.

In the past, this gambit — change the meaning, change the game — has accrued a decent track record. It works to impede change and to give entrenched interests more time to plot effective countermeasures.

Different This Time

What’s different this time — and Fratto’s piece provides corroborating evidence — is the existence of the ONF, a strong, customer-driven consortium that is (in its own words) “dedicated to the transformation of networking through the development and standardization of a unique architecture called Software-Defined Networking (SDN), which brings direct software programmability to networks worldwide. The mission of the Foundation is to commercialize and promote SDN and the underlying technologies as a disruptive approach to networking that will change how virtually every company with a network operates.”

If the ONF hadn’t existed, if it hadn’t already established an incontrovertible definition of SDN, the old “change the meaning, change the game” play might have worked.

But, as Fratto’s piece illustrates, it probably won’t work now.