Monthly Archives: May 2012

Distributed, Hybrid, Northbound: Key Words in Cisco’s SDN Counterstrategy

When it has broached the topic of software-defined networking (SDN) recently, Cisco has attempted to reframe the discussion within the larger context of programmable networks. In Cisco’s conception of the evolving networking universe, the programmable network encompasses SDN, which in turn envelops OpenFlow.

We know by now that OpenFlow is a relatively small part of SDN. OpenFlow is a protocol that provides for the physical separation of the control and data planes, which heretofore have been combined within a switch or router. As such, OpenFlow enables server-based software (a controller) to determine how packets should be forwarded by network elements. As has been mentioned before, here and elsewhere, mechanisms other than OpenFlow could be used for the same purpose.
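
To make that division of labor concrete, here is a minimal Python sketch of the match-action model that OpenFlow exposes. Be aware that this is an illustration, not the actual OpenFlow wire protocol (which encodes flow rules in binary messages); every class and field name is invented for the purpose, though the table-miss behavior (punting unmatched packets to the controller) mirrors how OpenFlow actually works.

    # Illustrative sketch of OpenFlow's match-action model, not the real wire
    # protocol. The controller (server software) decides policy; the switch
    # only matches packet headers against installed rules and applies actions.

    class FlowRule:
        def __init__(self, match, actions, priority=0):
            self.match = match        # header fields, e.g. {"dst_ip": "10.0.0.2"}
            self.actions = actions    # e.g. ["output:2"] to forward out port 2
            self.priority = priority

    class Switch:
        """Data plane: stores rules and forwards packets; no routing logic."""
        def __init__(self):
            self.flow_table = []

        def install_rule(self, rule):
            # In real OpenFlow, this arrives from the controller as a flow-mod message.
            self.flow_table.append(rule)
            self.flow_table.sort(key=lambda r: -r.priority)

        def forward(self, packet):
            for rule in self.flow_table:
                if all(packet.get(k) == v for k, v in rule.match.items()):
                    return rule.actions
            return ["send_to_controller"]   # table miss: punt to the controller

    class Controller:
        """Control plane: runs on a server and programs the switches."""
        def __init__(self, switches):
            self.switches = switches

        def push_policy(self):
            rule = FlowRule({"dst_ip": "10.0.0.2"}, ["output:2"], priority=10)
            for sw in self.switches:
                sw.install_rule(rule)

    sw = Switch()
    Controller([sw]).push_policy()
    print(sw.forward({"dst_ip": "10.0.0.2"}))   # -> ['output:2']
    print(sw.forward({"dst_ip": "10.0.0.9"}))   # -> ['send_to_controller']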

Logical Outcome

SDN is bigger than OpenFlow. It deals not only with the abstraction of the data plane, but also with higher-layer abstractions, at the control plane and above. The whole idea behind SDN is to put the applications, and the services they deliver, in the driver’s seat, so that the network does not become a costly encumbrance that impedes business agility and operational efficiency. In that sense, Cisco is right to suggest that programmable networks are a logical outcome that can and should result from the rise of SDN.

That said, the devil can always be found in the details, and we should note that Cisco’s definition of SDN, to the extent that it might invoke that acronym rather than one of its own, is at variance with the definition that has been proffered by the Open Networking Foundation (ONF), which is controlled by the world’s largest cloud-service providers rather than by the world’s largest networking vendors. Cisco’s understanding of SDN looks a lot more like conventional networking, with a distributed or hybrid control plane instead of the logically centralized control plane favored by the ONF.

This post isn’t about value judgments, though. I am not here to bash Cisco, or anybody else for that matter, but to understand and interpret Cisco’s motivations as it formulates a counterstrategy to the ONF’s plans.

Bog-Standard Switches

Given the context, then, it’s easy to understand why Cisco favors the retention of the distributed — or, failing that, even a hybrid — control plane. Cisco is the market leader in switches and routers, and it owns a lot of valuable real estate on its customers’ networks.  If OpenFlow succeeds, not only in service-provider networks but also in the enterprise, Cisco is at risk of losing the market dominance it has worked so long and hard to build.

Frankly, there isn’t much differentiation to be achieved in bog-standard OpenFlow switches. If the Googles of the world get their way, the merchant silicon vendors all will support OpenFlow on their chipsets, and industry-standard boxes will be available from a number of ODMs and OEMs. It will be a prototypical buyer’s market, perhaps advancing quickly toward commoditization, and that’s not a prospect that Cisco shareholders and executives wish to entertain.

As Cisco comes to grips with SDN, then, it needs to rediscover the sort of leverage that it had before the advent of the ONF.  After all, if SDN is all about putting applications and other software literally in control of networks composed of industry-standard boxes, then network hardware will suffer a significant margin-squeezing demotion in the value hierarchy of customers.  And Cisco, as we’ve discussed before, develops more than its fair share of software, but remains a company wedded to a hardware-based business model.

Compromise and Accommodation 

Cisco would like to resist and undermine any potential market shift to the ONF’s server-based controllers. Fortunately for Cisco, many within the ONF are willing to acquiesce, at least initially and up to a point. A general consensus seems to have developed about the need for a hybrid control plane, which would accommodate both logically centralized controllers and distributed boxes. The ONF’s braintrust sees this move as a necessary compromise that will facilitate a long-term transition to a server-based model. It seems a logical and rational deduction — there’s a lot of networking gear installed out there that does not support the ONF’s conception of SDN — but it’s an opening for Cisco, nonetheless.

Beyond the issue of physical separation of the data plane and the control plane, Cisco has at least one other card to play. You might have noticed that Cisco representatives have talked a lot during the past couple of months about a “northbound interface” for SDN. As currently constituted, OpenFlow is a “southbound” interface, in that it serves as a mechanism for a controller to program a switch. On a network diagram, that communication flows downward (hence southbound).

In SDN, a northbound interface would go upward, extending from the switch to the control plane and potentially beyond to applications and management/orchestration software. This is a discussion Cisco wants to have with the industry, at the ONF and elsewhere. Whereas southbound interfaces are all about what is done to a switch by external software, the northbound interface is a conduit by which the switch confers value — in the form of information intrinsic to the network — to the higher layers of abstraction.
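
If it helps to see the compass points in code, the following sketch models a controller with both kinds of interface. The method names are hypothetical (no standard northbound API exists), but the directions are the point: install_flow pushes policy down to devices, while the topology and utilization calls carry network state up to the software above.

    # Hypothetical sketch of the two interface directions around an SDN
    # controller; the API names are invented, since no standard northbound
    # interface has been defined.

    class SdnController:
        def __init__(self):
            self.flow_tables = {}                          # switch_id -> rules
            self.link_utilization = {("s1", "s2"): 0.35}   # sample network state

        # Southbound: program a device (the role OpenFlow plays today).
        def install_flow(self, switch_id, match, actions):
            self.flow_tables.setdefault(switch_id, []).append((match, actions))

        # Northbound: expose network information to the software above.
        def topology(self):
            return list(self.link_utilization)

        def utilization(self, link):
            return self.link_utilization.get(link, 0.0)

    ctl = SdnController()
    ctl.install_flow("s1", {"dst_ip": "10.0.0.2"}, ["output:2"])   # southbound
    print(ctl.topology(), ctl.utilization(("s1", "s2")))           # northbound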

Northbound Traffic

For now, the ONF has chosen not to define standard protocols or APIs for northbound interfaces, which could run from the networking devices up to the control plane and to higher layers of abstraction. Cisco, as the vendor with the largest installed base of gear in customer networks, finds itself in a logical position to play a role in helping to define those northbound interfaces.

Ideally, if programmable networks and SDN fulfill their potential, we’ll see the development of a virtuous feedback loop at the highest layers of abstraction, with software programming an underlying virtualized network and the network sending back state and other data that dynamically allows applications to perform even better.
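
A toy version of that loop, again with invented API names, might look like this: the application reads link state through the northbound interface and responds by asking the controller to reprogram the network (the southbound flow-mods would happen inside the reroute call).

    # Sketch of the feedback loop: northbound telemetry in, southbound
    # reprogramming out. All names are illustrative assumptions.

    class StubController:
        def __init__(self):
            self.utilization = {("s1", "s2"): 0.92, ("s1", "s3"): 0.20}
            self.rerouted = []

        def links(self):                  # northbound: state flows up
            return dict(self.utilization)

        def reroute_around(self, link):   # southbound work hidden inside
            self.rerouted.append(link)

    def traffic_engineering_pass(controller, threshold=0.8):
        """Move traffic off any link running hotter than the threshold."""
        for link, load in controller.links().items():
            if load > threshold:
                controller.reroute_around(link)

    ctl = StubController()
    traffic_engineering_pass(ctl)
    print(ctl.rerouted)   # -> [('s1', 's2')]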

Therefore, the northbound interface will be an important element in the future of SDN. Cisco hopes to leverage it, but more for the sustenance of its own business model than for the furtherance of the ONF’s objectives. Cisco holds some interesting cards, but it should be careful not to overplay them. Ultimately, it does not control the ONF.

As the SDN discourse elevates beyond OpenFlow, watch the traffic in the northbound lanes.

HP’s Latest Cuts: Will It Be Any Different This Time?

If you were to interpret this list of acquisitions by Hewlett-Packard as a past-performance chart, and focused particularly on recent transactions running from the summer of 2008 through to the present, you might reasonably conclude that HP has spent its money unwisely.

That’s particularly true if you correlate the list of transactions with the financial results that followed. Admittedly, some acquisitions have performed better than others, but arguably the worst frights in this house of M&A horrors have been delivered by the most costly buys.

M&A House of Horrors

As Exhibit A, I cite the acquisition of EDS, which cost HP nearly $14 billion. As a series of subsequent staff cuts and reorganizations illustrates, the acquisition has not gone according to plan. At least one report suggested that HP, which just announced that it will shed about 27,000 employees during the next two years, will make about half its forthcoming personnel cuts in HP Enterprise Services, constituted by what was formerly known as EDS. Rather than building on EDS, HP seems to be shrinking the asset systematically.

The 2011 acquisition of Autonomy, which cost HP nearly $11 billion, seems destined for ignominy, too. HP described its latest financial results from Autonomy as “disappointing,” and though HP says it still has high hopes for the company’s software and the revenue it might derive from it, many senior executives at Autonomy and a large number of its software developers already have decamped. There’s a reasonable likelihood that HP is putting lipstick on a slovenly pig when it tries to put the best face on its prodigious investment in Autonomy.

Taken together, the EDS and Autonomy acquisitions represent a nominal wager of $25 billion. In reality, HP has spent more than that when one considers the additional operational expenses involved in integrating and (mis)managing those assets.

Still Haven’t Found What They’re Looking For

Then there was the Palm acquisition, which involved HP shelling out $1.2 billion to Bono and friends. By the time the sorry Palm saga ended, nobody at HP was covered in glory. It was an unmitigated disaster, marked by strategic reversals and tactical blunders.

I also would argue that HP has not gotten full value from its 3Com purchase. HP bought 3Com for about $2.7 billion, and many expected the acquisition to help HP become a viable threat to Cisco in enterprise networking. Initially, HP made some market-share gains with 3Com in the fold, but those advances have stalled, as Cisco CEO John Chambers recently chortled.

It is baffling to many, your humble scribe included, that HP has not properly consolidated its networking assets — HP ProCurve, 3Com outside China, and H3C in China. Even to this day, the three groups do not work together as closely as they should. H3C in China apparently regards itself as an autonomous subsidiary rather than an integrated part of HP’s networking business.

Meanwhile,  HP runs two networking operating systems (NOS) across its gear. HP justifies its dual-NOS strategy by asserting that it doesn’t want to alienate its installed base of customers, but there should be a way to manage a transition toward a unified code base. There should also be a way for all the gear to be managed by the same software. In sum, there should be a way for HP to get better results from its investments in networking technologies.

Too Many Missteps

As for some of HP’s other acquisitions during the last few years, it overpaid for 3PAR in a game of strategic-bidding chicken against Dell, though it seems to have wrung some value from its relatively modest purchase of LeftHand Networks. The jury is still out on HP’s $1.5-billion acquisition of ArcSight and its security-related technologies.

One could argue that the rationales behind the acquisitions of at least some of those companies weren’t terrible, but that the execution — the integration and assimilation — is where HP comes up short. The result, however, is the same: HP has gotten poor returns on its recent M&A investments, especially those represented by the largest transactions.

The point of this post is that we have to put the latest announcement about significant employee cuts at HP into a larger context of HP’s ongoing strategic missteps. Nobody said life is fair, but nonetheless it seems clear that HP employees are paying for the sins of their corporate chieftains in the executive suites and in the company’s notoriously fractious boardroom.

Until HP decides what it wants to be when it grows up, the problems are sure to continue. This latest in a long line of employee culling will not magically restore HP’s fortunes, though the bleating of sheep-like analysts might lead you to think otherwise. (Most market analysts, and the public markets that respond to them, embrace personnel cuts at companies they cover, nominally because the staff reductions result in near-term cost savings. However, companies with bad strategies can slash their way to diminutive irrelevance.)

Different This Time? 

Two analysts refused to read from the knee-jerk script that says these latest cuts necessarily position HP for better times ahead. Baird and Co. analyst Jason Noland was troubled by the drawn-out timeframe for the latest job cuts, which he described as “disheartening” and suggested would put a “cloud over morale.” Noland showed a respect for history and a good memory, saying it was uncertain whether these layoffs would bolster the company’s fortunes any more than previous sackings had done.

Quoting from a story first published by the San Jose Mercury News:

In June 2010, HP announced it was cutting about 9,000 positions “over a multiyear period to reinvest for future growth.” Two years earlier, it disclosed a “restructuring program” to eliminate 24,600 employees over three years. And in 2005, it said it was cutting 14,500 workers over the next year and a half.

Rot Must Stop

If you are good with sums, you’ll find that HP has announced more than 48,000 job cuts from 2005 through 2010. And now another 27,000 over the next two years. But this time, we are told, it will be different.

Noland isn’t the only analyst unmoved by that argument. Deutsche Bank analysts countered that past layoffs “have done little to improve HP’s competitive position or reduce its reliance on declining or troubled businesses.” To HP’s assertion that cost savings from these cuts would be invested in growth initiatives such as cloud computing, security technology, and data analytics, Deutsche’s analysts retorted that HP “has been restructuring for the past decade.”

Unfortunately, it hasn’t only been restructuring. HP also has been an acquisitive spendthrift, investing and operating like a drunken, peyote-slathered sailor.  The situation must change. The people who run HP need to formulate and execute a coherent strategy this time so that other stakeholders, including those who still work for the company, don’t have to pay for their sins.

Lessons for Cisco in Cius Failure

When news broke late last week that Cisco would discontinue development of its Android-based Cius, I remarked on Twitter that it didn’t take a genius to predict the demise of  Cisco’s enterprise-oriented tablet. My corroborating evidence was an earlier post from yours truly — definitely not a genius, alas — predicting the Cius’s doom.

The point of this post, though, will be to look forward. Perhaps Cisco can learn an important lesson from its Cius misadventure. If Cisco is fortunate, it will come away from its tablet failure with valuable insights into itself as well as into the markets it serves.

Negative Origins

While I would not advise any company to navel-gaze obsessively, introspection doesn’t hurt occasionally. In this particular case, Cisco needs to understand what it did wrong with the Cius so that it will not make the same mistakes again.

If Cisco looks back in order to look forward, it will find that it pursued the Cius for the wrong reasons and in the wrong ways.  Essentially, Cisco launched the Cius as a defensive move, a bid to arrest the erosion of its lucrative desktop IP-phone franchise, which was being undermined by unified-communications competition from Microsoft as well as from the proliferation of mobile devices and the rise of the BYOD phenomenon. The IP phone’s claim to desktop real estate was becoming tenuous, and Cisco sought an answer that would provide a new claim.

In that respect, then, the Cius was a reactionary product, driven by Cisco’s own fears of desktop-phone cannibalization rather than by the allure of a real market opportunity. The Cius reeked of desperation, not confidence.

Hardware as Default

While the Cius’s genetic pathology condemned it at birth, its form also hastened its demise. Cisco now is turning exclusively to software (Jabber and WebEx) as its answer to the enterprise-collaboration conundrum, but it could have done so far earlier, before the Cius was conceived. By the time Cisco gave the green light to the Cius, Apple’s iPhone and iPad already had become tremendously popular with consumers, a growing number of whom were bringing those devices to their workplaces.

Perhaps Cisco’s hubris led it to believe that it had the brand, design, and marketing chops to win the affections of consumers. It has learned otherwise, the hard way.

But let’s come back to the hardware-versus-software issue, because Cisco’s Cius setback and how the company responds to it will be instructive, and not just within the context of its collaboration products.

Early Warning from a Software World

As noted previously, Cisco could have gone with a software-based strategy before it launched the Cius. It knew where the market was heading, and yet it still chose to lead with hardware. As I’ve argued before, Cisco develops a lot of software, but it doesn’t act (or sell) like a software company. It can sell software, but typically only if the software is contained inside, and sold as, a piece of hardware. That’s why, I believe, Cisco answered the existential threat to its IP-phone business with the Cius rather than with a genuine software-based strategy. Cisco thinks like a hardware company, and it invariably proposes hardware products as reflexive answers to all the challenges it faces.

At least with its collaboration products, Cisco might have broken free of its hard-wired hardware mindset. It remains to be seen, however, whether the deprogramming will succeed in other parts of the business.

In a world where software is increasingly dominant — through virtualization, the cloud, and, yes, in networks — Cisco eventually will have to break its addiction to the hardware-based business model. That won’t be easy, not for a company that has made its fortune and its name selling switches and routers.

Last Week’s Leavings: Avaya and HP

Update on Avaya

Pursuant to a post I wrote earlier last week on Avaya’s latest quarterly financial results and its continuing travails, I’m increasingly pessimistic about the company’s prospects of delivering a happy ending (as in a successful exit) for its principal private-equity stakeholders. There’s no growth profile, cost containment has yet to yield profitability, and the long-term debt overhang remains ominous. The company could sell its networking business, but that would only buy a modest amount of latitude.

At a company all-hands meeting last week, which I mentioned in that earlier post, Avaya CEO Kevin Kennedy spoke but didn’t say anything momentous, according to our sources. Those sources described the session as “disappointing,” in that little was disclosed about the company’s plans to right the ship. Kennedy also didn’t talk much about the long-delayed IPO, though he did say its timing would be determined by the company’s sponsors — which is true, but doesn’t tell us anything.

Kennedy apparently did say that the employee headcount at the company is likely to be reduced through layoffs, attrition, and “restructuring,” the last of which typically results in layoffs. He also reportedly said Avaya had too many locations, which suggests that geographic consolidation is in the cards.

HP: Layoffs will continue until morale improves

Speaking of cuts, reports that HP might be shedding a whopping eight percent of its staff are troubling. Remember, HP is a company that was headed by Mark Hurd, a CEO notorious for his operational austerity. Hurd wielded the sharp budgetary implements so exuberantly, he must have brought tears to the eyes of Chainsaw Al Dunlap, former CEO of Sunbeam, who, like Hurd, was ousted under dubious circumstances.

During Hurd’s reign at HP, spending on R&D was slashed aggressively, and it was somewhat jokingly suggested that the tightfisted CEO might insist that his employees power their offices by riding electric stationary bikes.  After the Hurd years, and the desultory and fleeting rule of Leo Apotheker, HP now appears to be getting another whopping dollop of restructuring. The groups affected will be hit hard, and one wonders how morale throughout the company will be affected. We might learn more about the extent and nature of the cuts later today.

Cisco’s Storage Trap

Recent commentary from Barclays Capital analyst Jeff Kvaal has me wondering whether  Cisco might push into the storage market. In turn, I’ve begun to think about a strategic drift at Cisco that has been apparent for the last few years.

But let’s discuss Cisco and storage first, then consider the matter within a broader context.

Risks, Rewards, and Precedents

Obviously a move into storage would involve significant risks as well as potential rewards. Cisco would have to think carefully, as it presumably has done, about the likely consequences and implications of such a move. The stakes are high, and other parties — current competitors and partners alike — would not sit idly on their hands.

Then again, Cisco has been down this road before, when it chose to start selling servers rather than relying on boxes from partners, such as HP and Dell. Today, of course, Cisco partners with EMC and NetApp for storage gear. Citing the precedent of Cisco’s server incursion, one could make the case that Cisco might be tempted to call the same play.

After all, we’re entering a period of converged and virtualized infrastructure in the data center, where private and public clouds overlap and merge. In such a world, customers might wish to get well-integrated compute, networking, and storage infrastructure from a single vendor. That’s a premise already accepted at HP and Dell. Meanwhile, it seems increasingly likely that data-center infrastructure is coming together, in one way or another, in service of application workloads.

Limits to Growth?

Cisco also has a growth problem. Despite attempts at strategic diversification, including failed ventures in consumer markets (Flip, anyone?), Cisco still hasn’t found a top-line driver that can help it expand the business while supporting its traditional margins. Cisco has pounded the table perennially for videoconferencing and telepresence, but it’s not clear that Cisco will see as much benefit from the proliferation of video collaboration as once was assumed.

To complicate matters, storm clouds are appearing on the horizon, with Cisco’s core businesses of switching and routing threatened by the interrelated developments of service-provider alienation and software-defined networking (SDN). Cisco’s revenues aren’t about to fall off a cliff by any means, but nor are they on the cusp of a second-wind surge.

Such uncertain prospects must concern Cisco’s board of directors, its CEO John Chambers, and its institutional investors.

Suspicious Minds

In storage, Cisco currently has marriages of mutual convenience with EMC (VBlocks and the sometimes-strained VCE joint venture) and with NetApp (the FlexPod reference architecture).  The lyrics of Mark James’ song Suspicious Minds are evocative of what’s transpiring between Cisco and these storage vendors. The problem is not only that Cisco is bigamous, but that the networking giant might have another arrangement in mind that leaves both partners jilted.

Neither EMC nor NetApp is oblivious to the danger, and each has taken care to reduce its strategic reliance on Cisco. Conversely, Cisco would be exposed to substantial risks if it were to abandon its existing partnerships in favor of a go-it-alone approach to storage.

I think that’s particularly true in the case of EMC, which is the majority owner of server-virtualization market leader VMware as well as a storage vendor. The corporate tandem of VMware and EMC carries considerable enterprise clout, and Cisco is likely to be understandably reluctant to see the duo become its adversaries.

Caught in a Trap

Still, Cisco has boxed itself into a strategic corner. It needs growth, it hasn’t been able to find it from diversification away from the data center, and it could easily see the potential of broadening its reach from networking and servers to storage. A few years ago, the logical choice might have been for Cisco to acquire EMC. Cisco had the market capitalization and the onshore cash to pull it off five years ago, perhaps even three years ago.

Since then, though, the companies’ market fortunes have diverged. EMC now has a market capitalization of about $54 billion, while Cisco’s is slightly more than $90 billion. Even if Cisco could find a way of repatriating its offshore cash hoard without taking a stiff hit from the U.S. taxman, it wouldn’t have the cash to pull off an acquisition of EMC, whose shareholders doubtless would be disinclined to accept Cisco stock as part of a proposed transaction.

Therefore, even if it wanted to do so, Cisco cannot acquire EMC. It might have been a good move at one time, but it isn’t practical now.

Losing Control

Even NetApp, with a market capitalization of more than $12.1 billion, would rate as the biggest purchase by far in Cisco’s storied history of acquisitions. Cisco could pull it off, but then it would have to try to further counter and commoditize VMware’s virtualization and cloud-management presence through a fervent embrace of something like OpenStack or a potential acquisition of Citrix. I don’t know whether Cisco is ready for either option.

Actually, I don’t see an easy exit from this dilemma for Cisco. It’s mired in somewhat beneficial but inherently limiting and mutually distrustful relationships with two major storage players. It would probably like to own storage just as it owns servers, so that it might offer a full-fledged converged infrastructure stack, but it has let the data-center grass grow under its feet. Just as it missed a beat and failed to harness virtualization and cloud as well as it might have done, it has stumbled similarly on storage.

The status quo is likely to prevail until something breaks. As we all know, however, making no decision effectively is a decision, and it carries consequences. Increasingly, and to an extent that is unprecedented, Cisco is losing control of its strategic destiny.

Why Google Isn’t A Networking Vendor

Invariably trenchant and always worth reading, Ivan Pepelnjak today explores what he believes Google is doing with OpenFlow. As it turns out, Pepelnjak posits that Google is doing more with other technologies than it is with OpenFlow, seemingly building a modern routing platform and a traffic-engineering application deserving universal envy and admiration.

In assessing what Google is doing, Pepelnjak would seem to get it right, as he usually does, but I would like to offer modest commentary on a couple of minor points. Let’s start with his assessment of how Google is using OpenFlow:

“Google is using OpenFlow between controller and adjacent chassis switches because (like every other vendor) they need a protocol between the control plane and forwarding planes, and they decided to use an already-documented one instead of inventing their own (the extra OpenFlow hype could also persuade hardware vendors to implement more OpenFlow capabilities in their next-generation chipsets).”

OpenFlow: Just A Piece of the Puzzle

First off, Pepelnjak is essentially right. I’m not going to quarrel with his central point, which is that Google adopted OpenFlow as a communication protocol between (and that separates) the control plane and the forwarding plane. That’s OpenFlow’s purpose, its raison d’être, so it’s not surprising that Google would use it that way. As Chris Rock might say, that’s what OpenFlow is supposed to do.

Larger claims made on behalf of OpenFlow are not its fault. Subsequently, Pepelnjak states that OpenFlow is but a small piece of the networking puzzle at Google, and he’s right there, too. I don’t think it’s possible for OpenFlow to be a bigger piece. As a protocol between the control and forwarding planes, OpenFlow is what it is.

Beyond that, though, Pepelnjak refers to Google as a “vendor,” which I find odd.

Not a Networking Vendor

In many ways, Google is a vendor. It’s a cloud vendor, it’s an advertising vendor, it’s a SaaS vendor, and so on. But, in this particular context, Pepelnjak seems to be classifying Google as a networking vendor. That would be an incorrect designation, and here’s why: Vendors sell things, they vend. Google doesn’t sell the homegrown networking hardware and software that it implements internally. It’s doing it only for itself, not as a business proposition that would involve it proffering the technology to customers. As such, it should not be tossed into the same networking-vendor bucket as a Cisco, a Juniper, or an HP.

In fact, Google is going the roll-your-own route with its network infrastructure precisely because it couldn’t get what it wanted from networking vendors. In that respect, it is the anti-vendor. Google and the other gargantuan cloud-service providers who steer the Open Networking Foundation (ONF) promulgated software-defined networking (SDN) and espoused OpenFlow because they wanted network infrastructure to be different from the conventional approaches advanced by networking vendors and the traditional networking industry.

Whatever else one might think of the ONF, it’s difficult not to conclude that it represents an instance of customers (in this case, cloud-service providers) attempting to wrest strategic control from vendors to set a technological agenda. Google, a networking vendor? Only if one misunderstands the origins and purpose of ONF.

Creating a Market

Nonetheless, Google might have a hidden agenda here, and Pepelnjak touches on it when he writes parenthetically that “the extra OpenFlow hype could also persuade hardware vendors to implement more OpenFlow capabilities in their next-generation chipsets.”

Well, yes. Just because Google has chosen to roll its own and doesn’t like what the networking industry is selling today, it doesn’t necessarily mean that it has closed the door to buying from vendors in the future, presuming said vendors jump on the ONF bandwagon and start developing the sorts of products Google wants. Google doesn’t want to disclose the particulars of its network infrastructure, which it views as a source of competitive advantage and differentiation, but it is not averse to hyping OpenFlow in a bid to spur the supply side of the market to get with the SDN program.

Later in his post, Pepelnjak notes that Google used “standard protocols (BGP and IS-IS) like everyone else and their traffic engineering implementation (and probably the northbound API) is proprietary. How is that different (from the openness perspective) from networks built from Juniper’s or Cisco’s gear?”

Critical Distinction

Again, my point is that Google is not a vendor. It is a customer building network technologies for its own use. By the very nature of that implicit (non)-transaction, the technologies in question will be proprietary. They’re not going anywhere other than Google’s data-center network. Google owns them, and it is in full control of defining them and releasing them on a schedule that suits Google’s internal objectives.

It’s rather different for vendors, who profit — if they’re doing it right — from the commercial sale of products and technologies to customers. There might be value in proprietary products and technologies in that context, but customers need to ensure that the proprietary value outweighs the proprietary risks, typically represented by vendor lock-in and upgrade cycles dictated by the vendor’s product-release schedule.

Google is not a vendor, and neither are the other companies driving the agenda of the ONF. I think it’s critical to make that distinction in the context of SDN and, to a lesser extent, OpenFlow.

Avaya’s Latest Results Portend Hard Choices

Those of you following the Avaya saga might want to check out the company’s latest quarterly financial results, which are available in a Form 10-Q filed with the Securities and Exchange Commission.

For Avaya backers hoping to see an IPO this year or in 2013, the results are not encouraging. In the three-month period that ended on March 31, Avaya generated revenue of $1.257 billion, with $637 million coming from product sales and $620 million from services. Those numbers were down from the corresponding quarter the previous year, when the company produced $1.39 billion in revenue, with product sales generating $757 million and services contributing $633 million. Basically, product sales were down sharply and services down slightly.

No Growth in Sight

Avaya also is seeing a weakening in channel sales. Moreover, growth from its networking products, on which the company had once pinned considerable hope, is stagnating. In the six-month period ending March 31, the company generated just $146 million from Avaya Network sales, down from $154 million in the preceding year. For the latest three-month period, concluding on the same date, networking sales were down to $64 million from $76 million last year. It is not projecting the profile of a growth engine.

Things are not much better in Avaya’s Global Communications Solutions (GCS) and Enterprise Collaboration Solutions (ECS) groups, which together account for the vast majority of the company’s product revenue. At this point, Avaya does not have a single business unit showing growth over either the six- or three-month periods covered by its latest results.

Meanwhile, losses continue to mount and long-term debt remains distressingly high. Losses were down for both the three- and six-month periods reported by Avaya, but those mitigated losses were derived from persistent cost containment and cuts, which, if continued indefinitely, eventually hinder a company’s capacity to generate growth (as they may be doing now).

Interestingly, Avaya’s costs and operating expenses are down across the board, except for those attributable to “restructuring charges,” which are up markedly. Avaya’s net loss for the six months ended March 31 was $188 million, compared with $612 million a year earlier. For the three-month period, the net loss was $162 million, compared with $432 million the previous year.

IPO Increasingly Unlikely

Although Avaya is not a public company, and — company aspirations notwithstanding — does not appear to be on a trajectory to an IPO, markets reacted adversely to the financial results. Avaya bonds dropped to their lowest level in four months in response to the revenue decline, according to a Bloomberg report.

Avaya’s official message to stakeholders is that it will stay the course, but these results and market trends suggest a different outcome. Look for the company to explore its strategic options, perhaps considering a sale of itself in whole or in part. A sale of the floundering networking unit could buy time, but that, in and of itself, wouldn’t restore a growth profile to the company’s outlook.

Difficult choices loom for a company that has witnessed significant executive churn recently.

IRTF Considers SDN

For a while now, the Internet Engineering Task Force (IETF) has been looking for a role to play in relation to software defined networking (SDN).

Even as the IETF struggles to identify a clear mandate for itself as a potential standards arbiter for SDN, the Internet Research Task Force (IRTF) appears ready to jump into the fray. The IRTF doesn’t conflict with the IETF, so its involvement with SDN would be parallel and ideally complementary to anything the IETF might pursue.

Both the IETF and IRTF  are overseen by the Internet Architecture Board (IAB). Whereas the IETF is mandated to focus on shorter-term issues involving engineering and standards, the IRTF focuses on longer-term Internet research.

Hybrid SDN Models

Cisco Systems’ David Meyer has drafted a proposed IRTF charter for the Software Defined Networking Research Group (SDNRG). It features an emphasis on hybrid SDN models, “in which control and data plane programmability works in concert with existing and future distributed control planes.”

The proposed charter also states that the SDNRG will provide “objective definitions, metrics and background research, with the goal of providing this information as input to protocol, network, and service design to SDOs and other standards producing organizations such as the IETF, ITU-T, IEEE, ONF, MEF, and DMTF.”

How the research of the IRTF and the eventual standards activity of the IETF conform to or diverge from the work of the Open Networking Foundation (ONF) will be interesting to monitor. The ONF is controlled exclusively at the board level by cloud service providers, whereas vendors will be actively steering the work of the IETF and IRTF.

What the Battle for “SDN” Reveals

As Mike Fratto notes in an excellent piece over at Network Computing, “software-defined networking” has become a semantic battleground, with the term getting pushed and pulled in various directions.

For good reason, Fratto was concerned that the proliferating definitions of software-defined networking (SDN) were in danger of shearing the term of all meaning. He compared what was happening to SDN to what happened previously to terms such as cloud computing, and he opined that once a term means anything, it means nothing.

Setting Record Straight

Not wanting to be a passive observer to such linguistic nihilism, Fratto decided to set the record straight. He rightly observes that software-defined networking (SDN), as we understand it today, derives its provenance from the Open Networking Foundation (ONF). As such, the ONF’s definition of SDN should be the one that holds sway.

Citing an ONF white paper, “Software-Defined Networking: The New Norm for Networks,” Fratto notes that, properly understood, SDN emphasizes three key features (rendered as a toy checklist in the sketch after this list):

  • Separation of the control plane from the data plane
  • A centralized controller and view of the network
  • Programmability of the network by external applications
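
Treated as a checklist, the definition is strict: all three properties must hold. A toy Python encoding, with invented parameter names, makes the conjunction explicit.

    # Toy encoding of the ONF checklist; an architecture qualifies as SDN,
    # on this definition, only if all three properties hold. Parameter names
    # are invented for illustration.

    def qualifies_as_sdn(control_plane_separated, centralized_view,
                         externally_programmable):
        return (control_plane_separated
                and centralized_view
                and externally_programmable)

    # A conventional router with a distributed, embedded control plane fails:
    print(qualifies_as_sdn(False, False, True))   # -> False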

Why the Fuss?

I agree that the ONF’s definition is the one that should be authoritative and, well, definitive. What other vendors are doing in areas such as network virtualization and network programmability might be interesting — and perhaps even commendable and valuable to their customers — but unless what they are doing meets the ONF criteria, it should not qualify as SDN.  Furthermore, if what they’re doing doesn’t qualify as SDN, they should call it something else and explain its architectural principles and value clearly. An ambiguous, perhaps even disingenuous, linkage with SDN ought to be avoided.

What Fratto does not explore is why certain parties are attempting to muddy the SDN waters. In my experience, when vendors contest terminology, it suggests the linguistic real estate in question is uncommonly valuable, either strategically or monetarily. I posit that SDN is both.

Like “cloud” before it, everybody seemingly recognizes that SDN has struck a resounding chord. There’s hype attached to SDN, sure, but it also has genuine appeal and has generated considerable interest. As the composition of the ONF’s board of directors suggests, and as the growing number of cloud service-provider deployments attests, SDN is not a passing fad. At least in the large cloud shops, it already has practical utility and business value.

The Value of Words

That value is likely to grow over time, and, while the enterprise will be a tough nut to crack for more than one reason, it’s certainly conceivable that SDN eventually will find favor among at least certain enterprise demographics. The timeline for that outcome is not imminent, and, as I’ve written previously, Cisco isn’t about to “do a Nortel” and hold a going-out-of-business sale. Nonetheless, the auguries suggest that the ONF’s SDN will be with us for a long time and represents a significant business threat to networking’s status quo.

In this context, language really is power. If entrenched interests — such as the guardians of networking’s status quo — don’t like an emerging trend, one way they can attempt to derail it is by co-opting it or subverting it. After all, it’s only an emerging trend, not yet entrenched, so its terminology is nascent, too. If, as a major vendor with industry clout, you can change the meaning of the terminology, or make it so ambiguous that practically anything qualifies for inclusion, you can reassert control and dilute the threat.

In the past, this gambit — change the meaning, change the game — has accrued a decent track record. It works to impede change and to give entrenched interests more time to plot effective countermeasures.

Different This Time

What’s different this time — and Fratto’s piece provides corroborating evidence — is the existence of the ONF, a strong, customer-driven consortium that is (in its own words) “dedicated to the transformation of networking through the development and standardization of a unique architecture called Software-Defined Networking (SDN), which brings direct software programmability to networks worldwide. The mission of the Foundation is to commercialize and promote SDN and the underlying technologies as a disruptive approach to networking that will change how virtually every company with a network operates.”

If the ONF hadn’t existed, if it hadn’t already established an incontrovertible definition of SDN, the old “change the meaning, change the game” play might have worked.

But, as Fratto’s piece illustrates, it probably won’t work now.

Time for HP to Show Its SDN Hand

Although HP has demonstrated mounting support for OpenFlow, it has yet to formulate what I would call a full-fledged strategy for software defined networking (SDN). Yes, HP offers OpenFlow-capable switches, but there’s more to SDN than OpenFlow. Indeed, there’s definitely more to SDN than the packet-shunting hardware at the bottom of the value chain.

The word “software” is prominent in the SDN acronym for a reason, and HP hasn’t told us much about its plans in that area. I am not able to attend this week’s Interop in Las Vegas, but I am hoping HP takes the opportunity this week to disclose a meaningful SDN strategy.

HP could start by telling us what it plans to do on the controller front. Does its strategy involve taking a wait-and-see attitude, working with the likes of Big Switch Networks? Does HP have a controller of its own in the works? As an august publication once trumpeted in a long-ago  advertising campaign, inquiring minds want to know.

Above the controller, how does HP see the ecosystem developing? Does it plan to provide applications, management, orchestration? I think we have a reasonably good idea where Cisco is going with its SDN strategy — though Cisco would rather talk about network programmability (more on which later) — but HP has yet to play its hand.

HP is in Las Vegas this week. It’s as good a time as any to put its SDN cards on the table.

At Dell, Networking’s Role Secondary but Integral

Dell made a networking announcement last week, and, for the most part, reaction was muted. That’s partly because Dell’s networking narrative is evolving and in transition, and partly because the announcements related to incremental, though notable, progression.

To be fair, Dell’s networking narrative is part of a larger story the company is telling in the data center. Networking is integral to that story, but it’s not the centerpiece and never will be. Dell is working from the blueprint of its Virtual Network Architecture (VNA), so its purchase and stewardship of Force10 is framed within a bigger picture that involves not just converged infrastructure, but also workload-driven orchestration of virtualized environments.

Integration and Assimilation

Some good news for Dell is that its integration and assimilation of Force10 Networks seems to have gone well and is now complete. Dell’s OpenManage Networking Manager (OMNM) 5.0 offers a new look and support for the full line of Dell networking products, including the Force10 portfolio. What’s more, in its Dell Force10 MXL blade interconnect, a 40Gb Ethernet switch for the M1000e Blade chassis, Dell delivers an apt metaphor as well as a blade-server switch.

In that sense, it’s helpful to recall that Dell’s acquisition of Force10 was motivated by a desire to integrate networking into an automated, orchestrated data center in which it already offered compute and storage. Dell concluded that it needed to own networking technology just as it owned server and storage technology. It further deduced that it needed a comprehensive networking portfolio, extending across SAN and LAN environments. Just as it moved previously to shake its dependence on storage partners, it would do likewise in networking.

Dell sees networking as an integral enabling technology, but not as an end in itself. Dell believes it can be more flexible than HP and IBM in certain enterprise demographics, and it believes it can outflank Cisco by being less “network centric” and more open to developments such as software defined networking (SDN). Force10, which was thought to be between a rock and a hard place just before being acquired, understands and accepts its role in the Dell universe.

Fitting Into VNA

The key to understanding Dell’s data-center strategy is Virtual Network Architecture (VNA). The announcement of the new blade-server switch fits into that plan. Dell says VNA’s purpose is to virtualize, automate, and orchestrate network services so that they can adapt readily to application and business requirements. Core elements of VNA include the following:

  • High-performance switching systems for the campus and the data center
  • Virtualized Layer 4-7 services
  • Comprehensive automation & orchestration software
  • Open workload/hypervisor interfaces

So, what does it all mean? It means Dell is taking an approach that it believes will be differentiated and add considerable value in customers’ and prospective customers’ data centers. On the networking front, Dell believes it has espoused a strategy that encompasses and envelops the rise of SDN while also taking an accommodating approach to the networking gear already present in customer accounts.

Workload-Oriented Approach

In an article at The VAR Guy, Nathan Eddy quotes Dario Zamarian, VP and GM of Dell Networking, as follows:

“We are taking a workload-oriented approach — as in, ‘What does each require first?’ as opposed to starting with the network first [and] then trying to fit the application to it. In other words, networking is the enabler. The ultimate goal of VNA is to make networking as simple to set up, automate, operate, and manage as servers. VNA is doing for networking what VMware did for servers.”

Well, that’s the plan. In theory, in a slide show, all the pieces are there, but Dell has to execute and deliver on the vision. One can identify holes in the structure, places where Dell will need to buy, partner, or build to close the gaps. It’s clearly doing that, though, as the Force10 acquisition and others recently attest.

Taking Force10’s technology forward in alignment with its plans, Dell not only announced  a 40GbE-enabled blade server switch. It also introduced fabric- and network-management tools to simplify operations in the data center and the campus, and it announced data-center enhancements (stacking technology, L2 multipathing, data-center bridging, automated workload mobility through auto-provisioning of VLANs) to Force10’s FTOS for its S4810 10/40G switching platform.

Encompassing SDN

On the SDN front, Dell announced interoperability with Big Switch Networks’ Open SDN architecture and its OpenFlow-based Floodlight controller. That interoperability will be showcased next week in joint demonstrations at Interop, with the application emphasis on cloud multi-tenancy.
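
For readers who want to poke at that combination, Floodlight is open source and exposes a REST interface. A sketch along the following lines should list the OpenFlow switches a running controller manages; the endpoint path matches Floodlight’s documentation of this era, but treat the exact URL and response fields as assumptions to verify against your version.

    # Sketch: querying a Floodlight controller's REST API from Python to list
    # the OpenFlow switches it manages. Endpoint path and response fields are
    # taken from Floodlight's docs but should be verified against your version.

    import json
    import urllib.request

    FLOODLIGHT_URL = "http://localhost:8080"   # Floodlight's default REST port

    def list_switches(base_url=FLOODLIGHT_URL):
        url = base_url + "/wm/core/controller/switches/json"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    if __name__ == "__main__":
        for switch in list_switches():
            print(switch.get("dpid"), switch.get("inetAddress"))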

Regardless of where Dell goes with SDN, and regardless of how quickly (or slowly) SDN makes encroachments into the enterprise, Dell’s VNA model accounts for it and much else besides. Dell believes it can win in workload and network orchestration, with its Advanced Infrastructure Manager (AIM) providing virtual-network programming interfaces and doubtless with some forthcoming orchestration technologies it has yet to introduce (or buy).

Dell’s VNA seems a viable plan. But can the company continue to execute on it? Dell would have more focus and resources to do so if it jettisoned its woebegone consumer business, but that divestiture doesn’t seem to be in the cards.