Category Archives: VMware

Rackspace’s Bridge Between Clouds

OpenStack has generated plenty of sound and fury during the past several months, and, with sincere apologies to William Shakespeare, there’s evidence to suggest the frenzied activity actually signifies something.

Precisely what it signifies, and how important OpenStack might become, is open to debate, of which there has been no shortage. OpenStack is generally depicted as an open-source cloud operating system, but that might be a generous interpretation. On the OpenStack website, the following definition is offered:

“OpenStack is a global collaboration of developers and cloud computing technologists producing the ubiquitous open source cloud computing platform for public and private clouds. The project aims to deliver solutions for all types of clouds by being simple to implement, massively scalable, and feature rich. The technology consists of a series of interrelated projects delivering various components for a cloud infrastructure solution.”

Just for fun and giggles (yes, that phrase has been modified so as not to offend reader sensibilities), let’s parse that passage, shall we? In the words of the OpenStackers themselves, their project is an open-source cloud-computing platform for public and private clouds, and it is reputedly simple to implement, massively scalable, and feature rich. Notably, it consists of a “series of interrelated projects delivering various components for a cloud infrastructure solution.”
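For what it’s worth, the “interrelated projects” are real enough: at this writing they include Nova (compute), Swift (object storage), and Glance (image management), each with its own API. As a rough illustration — assuming the python-novaclient bindings and entirely placeholder credentials, endpoint, image, and server names — booting a server through the compute component looks something like this:

    # Illustrative sketch only: the credentials, auth URL, image, and
    # flavor names below are placeholders, not a real deployment.
    from novaclient.v1_1 import client

    nova = client.Client("demo-user", "demo-password", "demo-project",
                         "http://openstack.example.com:5000/v2.0/")

    image = nova.images.find(name="ubuntu-11.04")  # hypothetical image name
    flavor = nova.flavors.find(name="m1.small")    # a stock Nova flavor
    server = nova.servers.create("demo-server", image, flavor)
    print("Requested server:", server.id)

Simple enough at the API level, to be sure — but standing up and operating the infrastructure behind that call is where the “simple to implement” claim gets tested.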

Simple for Some

Given that description, especially the latter reference to interrelated projects spawning various components, one might wonder exactly how “simple” OpenStack is to implement and by whom. That’s a question others have raised, including David Linthicum in a recent piece at InfoWorld. In that article, Linthicum notes that undeniable vendor enthusiasm and burgeoning market momentum accrue to OpenStack — the community now has 138 member companies (and counting), including big-name players such as HP, Dell, Intel, Rackspace, Cisco, Citrix, Brocade, and others — but he also offers the following caveat:

“So should you consider OpenStack as your cloud computing solution? Not on its own. Like many open source projects, it takes a savvy software vendor, or perhaps cloud provider, to create a workable product based on OpenStack. The good news is that many providers are indeed using OpenStack as a foundation for their products, and most of those are working, or will work, just fine.”

Creating Value-Added Services

Meanwhile, taking issue with a recent InfoWorld commentary by Savio Rodrigues — who argued that OpenStack will falter while its open-source counterpart Eucalyptus will thrive — James Urquhart, formerly of Cisco and now VP of product strategy at enStratus, made this observation:

“OpenStack is not a cloud service, per se, but infrastructure automation tuned to cloud-model services, like IaaS, PaaS and SaaS. Install OpenStack, and you don’t get a system that can instantly bill customers, provide a service catalog, etc. That takes additional software.

What OpenStack represents is the commodity element of cloud services: the VM, object, server image and networking management components. Yeah, there is a dashboard to interact with those commodity elements, but it is not a value-add capability in-and-of itself.

What HP, Dell, Cisco, Citrix, Piston, Nebula and others are doing with OpenStack is creating value-add services on top (or within) the commodity automation. Some focus more on “being commodity”, others focus more on value-add, but they are all building on top of the core OpenStack projects.”

New Revenue Stream for Rackspace

All of which brings us, in an admittedly roundabout fashion, to Rackspace’s recent announcement of its Rackspace Cloud Private Edition, a packaged version of OpenStack components that can be used by enterprises for private-cloud deployments. This move makes sense for Rackspace on a couple of levels.

First off, it opens up a new revenue stream for the company. While Rackspace won’t try to make money on the OpenStack software or the reference designs — featuring a strong initial emphasis on Dell servers and Cisco networking gear for now, though bare-bones OpenCompute servers are likely to be embraced before long — it will provide value-add, revenue-generating managed services to customers of Rackspace Cloud Private Edition. These managed services will comprise installation of OpenStack updates, analysis of system issues, and assistance with specific questions relating to systems engineering. Some security-related services also will be offered. While the reference architecture and the software are available now, Rackspace’s managed services won’t be available until January.

Building a Bridge

The launch of Rackspace Cloud Private Edition is a diversification initiative for Rackspace, which hitherto has made its money by hosting and managing applications and computing services for others in its own data centers. The OpenStack bundle takes it into the realm of providing managed services in its customers’ data centers.

As mentioned above, this represents a new revenue stream for Rackspace, but it also provides a technological bridge for customers who aren’t ready for multi-tenant public cloud services today to make an easy transition to Rackspace’s data centers at some future date. It’s a smart move, preventing prospective customers from adopting another platform for private-cloud deployment and thereby entering the long-term gravitational pull of another vendor.

The business logic coheres. For each customer engagement, Rackspace gets a payoff today, and potentially a larger one at a later date.

Nicira Downplays OpenFlow on Road to Network Virtualization

While recent discussions of software-defined networking (SDN) and network virtualization have focused nearly exclusively on the OpenFlow protocol, various parties are making the point that OpenFlow is just one facet of a bigger story.

One of those parties is Nicira Networks, which was treated to favorable coverage in the New York Times earlier today. In the article, the words “software-defined networking” and “OpenFlow” are conspicuous by their absence. Sure, the big-picture concept of software-defined networking hovers over proceedings, but Nicira takes pains to position itself as a purveyor of “network virtualization,” which is a neater, simpler concept for the broader technology market to grasp.

VMware of Networking

Indeed, leveraging the idea of network virtualization, Nicira positions itself as the VMware of networking, contending that it will resolve the problem of inflexible, inefficient, complex, and costly data-center networks with a network hypervisor that decouples network services from the underlying hardware. Nicira’s goal, then, is to be the first vendor to bring network virtualization up to speed with server and storage virtualization.  

GigaOM’s Stacey Higginbotham takes issue with the New York Times article and with Nicira’s claims relating to its putatively peerless place in the networking firmament. Writes Higginbotham: 

“The article . . . does a disservice to the companies pursuing network virtualization by conflating the idea of flexible and programmable networks with Nicira becoming “to networking something like what VMWare was to computer servers.” This is a nice trick for the lay audience, but unlike server virtualization, which VMware did pioneer and then control, network virtualization currently has a variety of vendors pushing solutions that range from being tied to the hardware layer (hello, Juniper and Xsigo) to the software (Embrane and Nicira). In addition to there being multiple companies pushing their own standards, there’s an open source effort to set the building blocks and standards in place to create virtualized networks.”

The ONF Factor

The open-source effort in question is the Open Networking Foundation (ONF), which is promulgating OpenFlow as the protocol by which software-defined networking will be attained. I have written about OpenFlow and the ONF previously, and will have more to say on both shortly. Recently, I also recounted HP’s position on OpenFlow.

Nicira says nothing about OpenFlow, which suggests the company is playing down the protocol or might be going in a different direction to realize its vision of network virtualization. As has been noted, there’s more than one road to software-defined networking, even though OpenFlow is a path that has been well traveled thus far by industry notables, including the six major service providers that are the ONF’s founding board members (Google, Deutsche Telekom, Verizon, Microsoft, Facebook, and Yahoo).

Then again, you will find Nicira Networks among the ONF’s membership, along with a number of other established and nascent networking vendors. Nicira sees a role for OpenFlow, then, though it clearly wants to put the emphasis on its own software and the applications and services that it enables. There’s nothing wrong with that. In fact, it’s a perfectly sensible strategy for a vendor to pursue.

Tension Between Vendors and Service Providers

Alan S. Cohen, a recent addition to the Nicira team, put it into pithy perspective on his personal blog, where he wrote about why he joined Nicira and why the network will be virtualized. Wrote Cohen:

“Virtualization and the cloud is the most profound change in information technology since client-server and the web overtook mainframes and mini computers.  We believe the full promise of virtualization and the cloud can only be fully realized when the network enables rather than hinders this movement.  That is why it needs to be virtualized.

Oh, by the way, OpenFlow is a really small part of the story.  If people think the big shift in networking is simply about OpenFlow, well, they don’t get it.”

So, the big service providers might see OpenFlow as a nifty mechanism that will allow them to reduce their capital expenditures on high-margin networking gear while also lowering their operational expenditures on network management, but the networking vendors — neophytes and veterans alike — still seek and need to provide value (and derive commensurate margins) above and beyond OpenFlow’s parameters.

With Latest Moves, HP Networking Responds to Customers, Partners, Competitors

Although media briefings took place yesterday in New York, HP officially announced new networking products and services this morning based on its HP FlexNetwork Architecture.

Bethany Mayer, senior VP and general manager of HP Networking, launched proceedings yesterday, explaining that changing and growing requirements, including a shift toward server-to-server traffic (“east-west” traffic flows driven by inexorable virtualization) and the need for greater bandwidth, are overwhelming today’s networks. Datacenter networks aren’t keeping pace, bandwidth capacity in branch offices isn’t where it needs to be, there is limited support for third-party virtualized appliances, and networks are straining to accommodate the proliferation of mobile devices.

Quoting numbers from the Dell’Oro Group, Mayer said HP continues to take market share from Cisco in switching, with HP gaining share of about 3.8 percent and Cisco dropping about 6.5 percent. What’s more, Mayer cited data from analyst firm Robert W. Baird indicating that 75 percent of enterprise-network purchase discussions involve HP. Apparently Baird also found that HP is influencing terms or winning deals about 33 percent of the time.

The Big Picture

Saar Gillai, vice president of HP’s Advanced Technology Group and CTO of HP Networking, followed with a presentation on HP Networking’s vision. Major trends he cited are virtualization, cloud computing, consumerization of IT, mobility, and unified communications. Challenges that accompany these trends include complexity, management, security, time to service, and cost.

In summary, Gillai said that the networks installed at customer sites today simply weren’t designed to address the challenges they’re facing. To reinforce that point, he offered a brief history of enterprise application delivery, from the mainframes of the ’60s, through the client-server era and the Web-based applications of the ’90s, to today’s burgeoning cloud environments.

He explained that enterprise networks have evolved along with their application-delivery models. Previously, they were relatively static (serving employees onsite, for the most part), with well-defined perimeters and applications that were limited qualitatively and quantitatively. Today, though, enterprise networks must accommodate not only connected employees, but also connected customers, partners, contractors, and suppliers. The perimeter is fragmented, the network distributed, the applications mobile (even in the data center, thanks to virtualization), client devices (such as smartphones and tablets) proliferating, and wireless LANs, the public cloud, and the Internet also prominently in the picture.

Connecting Users to Services

What’s the right approach for networks to take? Gillai says HP is advancing toward delivering networks that focus on connecting users to the services they need rather than on managing infrastructure. HP’s vision of enterprise-network architecture conceives of a pool of virtualized resources: a top layer handles management and provisioning, a control plane sits beneath it, and the physical network infrastructure forms the bottom layer. In that regard, Gillai drew an analogy with server virtualization, with the control plane functioning as an abstraction layer.

With talk of a management layer sitting above a control plane that rides atop physical infrastructure, the HP vision seems strikingly similar to the defining principles of software-defined networking as realized through the OpenFlow protocol.
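For readers who want to see what that layering means in practice, consider a minimal sketch of a software-defined network’s control plane — written here against the open-source POX OpenFlow controller, purely as an illustration and not as anything HP has announced. The module name is a hypothetical placeholder; the logic installs a single flood-everything rule on each switch that connects, making the point that forwarding policy is decided in software above the hardware:

    # simple_hub.py -- an illustrative POX module (hypothetical name).
    # It pushes one flow rule to every switch that connects, telling the
    # hardware to flood all packets: the control plane decides, the
    # physical switch merely forwards.
    from pox.core import core
    import pox.openflow.libopenflow_01 as of

    def _handle_ConnectionUp(event):
        # Build a flow-mod with an empty (wildcard) match and a flood action.
        msg = of.ofp_flow_mod()
        msg.actions.append(of.ofp_action_output(port=of.OFPP_FLOOD))
        event.connection.send(msg)

    def launch():
        # POX invokes launch() at startup; listen for switches connecting.
        core.openflow.addListenerByName("ConnectionUp", _handle_ConnectionUp)

Trivial as the example is, it captures the separation Gillai described: management and control live in software layers that program the physical infrastructure beneath them.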

OpenFlow: It’s About the Applications

On OpenFlow, however, Gillai was guardedly optimistic, if not a little ambiguous. While noting that HP has been an early proponent of OpenFlow and sees promise in the technology, Gillai said OpenFlow’s success will be determined by the applications that run on it. HP is interested in those applications, but is less interested in the OpenFlow controller, which it does not see as a point of differentiation.

Gillai is of the opinion that the OpenFlow hype has moved considerably ahead of its current reality. He said OpenFlow, as a specific means of enabling software-defined networking, is evolutionary as opposed to revolutionary. He also said considerable work remains to be done before OpenFlow will be suitable for the enterprise market. Among the issues that need to be resolved, according to Gillai, is support for IPv6 and the “routing problem” of having a number of controllers communicate with each other.

On the Open Networking Foundation (ONF), the private non-profit organization whose first goal is to create a switching ecosystem to support the OpenFlow interface, Gillai suggested that the founding and board members — comprising Deutsche Telekom, Google, Microsoft, Facebook, Verizon, and Yahoo — have a clear vision of what they want OpenFlow to achieve.

“If the network could become programmable, their life will be great,” Gillai said of the ONF founders, all of whom are service providers with vast data centers.

Despite his reservations about OpenFlow hype, Gillai indicated that he believes “interesting applications” for it should begin emerging within the next 12 to 24 months. He also said it “would not be a big surprise” if HP were to leverage OpenFlow for forthcoming control-plane technology.

ToR Switch for the Data Center

As for the products and services announced, let’s begin in the data center, seen by all the major networking vendors as a lucrative growth market as well as a venue for increasingly intense competition.

HP FlexFabric solutions for the data center include the new 10-GbE HP 5900 top-of-rack (ToR) switch and the updated HP 12500 switch series.

HP says the new HP 5900 series of 10-GbE ToR switches provides up to 300 percent greater network scalability while reducing the number of logical devices in the server access layer by 50 percent, thereby decreasing total cost of ownership by 50 percent.

Lead Time and Changes to Product Naming

The switch is powered by the HP Intelligent Resilient Framework (IRF), which allows four HP 5900 switches to be virtualized so that they can operate as a single switch. The HP 5900 top-of-rack switch series is expected to be available in Q1 2012 in the United States with a starting list price of $38,000.

It bears noting that HP typically refrains from announcing switches this far ahead of their release date. That it has announced the HP 5900 ToR switch six months before it will ship would appear to suggest both that customers are clamoring for a ToR switch and that competitors have been exploiting the absence of such a switch in HP’s product portfolio. Although the 5900 isn’t ready to ship today, HP wants the world to know it’s coming soon.

HP says its HP 12500 switch series benefits from improved network resiliency and performance as a result of the updated HP IRF technology. The switch provides full IPv6 support, and HP says it doubles throughput and reduces network recovery time by more than 500 times. The HP 10500 campus core switch is available now worldwide starting at $38,000.

You might have noticed, incidentally, something different about the naming convention associated with new HP switches. HP has decided that, as of now, its networking products will have just numbers rather than alphabetical prefixes followed by numbers. This has been done to simplify matters, for HP and for its customers.

FlexCampus Moves 

On the campus front, new HP FlexCampus offerings include the HP 3800 stackable switches, which HP says provide up to 450 percent higher performance. HP also is offering a new reference architecture for campus environments that unifies wired and wireless networks to support mobility and high-bandwidth multimedia applications. The HP 3800 line of switches is available now worldwide starting at $4,969.

Although HP did not say it, at least one of its primary competitors has cited a lack of HP reference architectures for customers, particularly for campus environments. HP clearly is responding.

HP also unveiled virtualized services modules for the HP 5400zl and 8200zl switches, which it claims are the first in the industry to converge blade servers at the branch into a network infrastructure capable of hosting multiple applications and services. The company claims its HP Advanced Services zl Module with VMware vSphere 5 and HP Advanced Services zl Module with Citrix XenServer deliver a 57-percent cut in power consumption and a 43-percent reduction in space relative to competing products. Available now worldwide, the HP Advanced Services zl Module with VMware vSphere 5 (including support and subscription, 8GB of RAM) starts at $5,299. The HP Advanced Services zl Module with Citrix XenServer (including support and subscription, 4GB of RAM) starts at $4,499.

Emphasis on Simplicity and Evolution

HP also rolled out HP FlexManagement with integrated mobile network access control (NAC) in HP Intelligent Management Center (IMC) 5.1 to streamline enterprise access for mobile devices and to protect against mobile-application threats. HP Intelligent Management Center 5.1 is expected to be available in Q1 2012 with a list price of $6,995.

Also introduced are new services to facilitate migration to IPv6 and new financing to allow HP’s U.S.-based channel partners to lease HP Networking products as demonstration equipment.

Key words associated with this slate of HP Networking announcements were “evolutionary” and “simplification.” As the substance and tone of the announcements suggest, HP Networking is responding to its customers and partners — and also to its competitors — closing gaps in its portfolio and looking to position itself to achieve further market-share gains.

IBM Rumored to be in Acquisition Talks with Platform Computing

Yes, I’m writing another post with a connection to the Open Virtualization Alliance (OVA), though I assure you I have not embarked on an obsessive serialization. That might occur at a later date, most likely involving a different topic, but it’s not on the cards now.

As for this post, the connection to OVA is glancing and tangential, relating to a company that recently joined the association (then again, who hasn’t of late?), but really made its bones — and its money — with workload-management solutions for high-performance computing. Lately, the company in question has gone with the flow and recast itself as a purveyor of private cloud computing solutions. (Again, who hasn’t?)

Talks Relatively Advanced

Yes, we’re talking about Platform Computing, rumored by some dark denizens of the investment-banking community to be a takeover target of none other than IBM. Apparently, according to sources familiar with the situation (I’ve always wanted to use that phrase), the talks are relatively advanced. That said, a deal is not a deal until pen is put to paper.

IBM and Platform first crossed paths, and began working together, many years ago in the HPC community, so their relationship is not a new one. The two companies know each other well.

Rich Heritage in Batch, Workload Management

Platform Computing broadly focuses on two sets of solutions. Its legacy workload-management business is represented by Load Sharing Facility (LSF), now part of a cluster-management product portfolio that — like LSF in its good old days — is targeted squarely at the HPC world. With its rich heritage in batch applications, LSF also is part of Platform’s workload-management software for grid infrastructure.

Like so many others, Platform has refashioned itself as a cloud-computing provider. The company, and some of its customers, found that its core technologies could be adapted and repurposed for the ever-ambiguous private cloud.

Big Data, Too

Perhaps sensitive about being hit by charges of “cloud washing,” Platform contends that it offers “private cloud computing for the real world” through cloud bursting for HPC and private-cloud solutions for enterprise data centers. Not surprisingly given its history, Platform is most convincing and compelling when addressing the requirements of the HPC crowd.

That said, the company has jumped onto the Big Data bandwagon with gusto. It offers Platform MapReduce for vertical markets such as financial services (long a Platform vertical), telecommunications, government (fraud detection and cyber security, regulatory compliance, energy), life sciences, and retail.

Platform recently announced that its ISF, not to be confused with LSF, was recognized as a finalist in the “Private Cloud Computing” category for the 2011 Best of VMworld awards. And, of course, to bring this post full circle, Platform was one of 134 new members to join the aforementioned Open Virtualization Alliance (OVA).

OVA Members Hope to Close Ground

I discussed the fast-growing Open Virtualization Alliance (OVA) in a recent post about its primary objective, which is to commoditize VMware’s daunting market advantage. In catching up on my reading, I came across an excellent piece by InformationWeek’s Charles Babcock that puts the emergence of OVA into historical perspective.

As Babcock writes, the KVM-centric OVA might not have come into existence at all if an earlier alliance supporting another open-source hypervisor hadn’t foundered first. Quoting Babcock regarding OVA’s vanguard members:

Hewlett-Packard, IBM, Intel, AMD, Red Hat, SUSE, BMC, and CA Technologies are examples of the muscle supporting the alliance. As a matter of fact, the first five used to be big backers of the open source Xen hypervisor and Xen development project. Throw in the fact Novell was an early backer of Xen as the owner of SUSE, and you have six of the same suspects. What happened to support for Xen? For one, the company behind the project, XenSource, got acquired by Citrix. That took Xen out of the strictly open source camp and moved it several steps closer to the Microsoft camp, since Citrix and Microsoft have been close partners for over 20 years.

Xen is still open source code, but its backers found reasons (faster than you can say vMotion) to move on. The Open Virtualization Alliance still shares one thing in common with the Xen open source project. Both groups wish to slow VMware’s rapid advance.

Wary Eyes

Indeed, that is the goal. Most of the industry, with the notable exception of VMware’s parent EMC, is casting a wary eye at the virtualization juggernaut, wondering how far and wide its ambitions will extend and how they will impact the market.

As Babcock points out, however, by moving in mid-race from one hypervisor horse (Xen) to another (KVM), the big backers of open-source virtualization might have surrendered insurmountable ground to VMware, and perhaps even to Microsoft. Much will depend on whether VMware abuses its market dominance, and whether Microsoft is successful with its mid-market virtualization push into its still-considerable Windows installed base.

Long Way to Go

Last but perhaps not least, KVM and the Open Virtualization Alliance (OVA) will have a say in the outcome. If OVA members wish to succeed, they’ll not only have to work exceptionally hard, but they’ll also have to work closely together.

Coming from behind is never easy, and, as Babcock contends, just trying to ride Linux’s coattails will not be enough. KVM will have to continue to define its own value proposition, and it will need all the marketing and technological support its marquee backers can deliver. One area of particular importance is operations management in the data center.

KVM’s market share, as reported by Gartner earlier this year, was less than one percent in server virtualization. It has a long way to go before it causes VMware’s executives any sleepless nights. That it wasn’t the first choice of its proponents, and that it has lost so much time and ground, doesn’t help the cause.

Cisco Hedges Virtualization Bets

Pursuant to my post last week on the impressive growth of the Open Virtualization Alliance (OVA), which aims to commoditize VMware’s virtualization advantage by offering a viable open-virtualization alternative to the market leader, I note that Red Hat and five other major players have founded the oVirt Project, established to transform Red Hat Enterprise Virtualization Manager (RHEV-M) into a feature-rich virtualization management platform with well-defined APIs.

Cisco to Host Workshop

According to coverage at The Register, Red Hat has been joined on the oVirt Project by Cisco, IBM, Intel, NetApp and SUSE, all of which have committed to building a KVM-based pluggable hypervisor management framework along with an ecosystem of plug-in partners.

Although Cisco will be hosting an oVirt workshop on November 1-3 at its main campus in San Jose, the article at The Register suggests that the networking giant is the only one of the six founding companies not on the oVirt Project’s governance board. Indeed, the sole reference to Cisco on the oVirt Project website relates to the workshop.

Nonetheless, Cisco’s participation in oVirt warrants attention.

Insurance Policies and Contingency Plans

Realizing that VMware could increasingly eat into the value, and hence the margins, associated with its network infrastructure as cloud computing proliferates, Cisco seems to be devising insurance policies and contingency plans in the event that its relationship with the virtualization market leader becomes, well, more complicated.

To be sure, the oVirt Project isn’t Cisco’s only backup plan. Cisco also is involved with OpenStack, the open-source cloud-computing project that effectively competes with oVirt — and which Red Hat assails as a community “owned” by its co-founder and driving force, Rackspace — and it has announced that its Cisco Nexus 1000V distributed virtual switch and the Cisco Unified Computing System with Virtual Machine Fabric Extender (VM-FEX) capabilities will support the Windows Server Hyper-V hypervisor to be released with Microsoft Windows Server 8.

Increasingly, Cisco is spreading its virtualization bets across the board, though it still has (and makes) most of its money on VMware.

OVA Aims to Commoditize VMware’s Advantage

Although it’s no threat to VMware yet, the growth of the Open Virtualization Alliance (OVA) has been impressive. Formally announced in May, the OVA has grown from its original seven founding members — its four Governing Members (Red Hat, Intel, HP, and IBM), plus BMC, Eucalyptus Systems, and Novell (SUSE) — adding 65 new members in June and encompassing more than 200 members as of yesterday.

The overriding objective of the OVA is to popularize the open-source Kernel-based Virtual Machine (KVM) so that it can become a viable alternative to proprietary server-virtualization offerings, namely those of market leader VMware. To achieve that goal, OVA is counting on broad-based industry support from large and small players alike as it works to accelerate the development of an ecosystem of KVM-based third-party solutions. In conjunction with that effort, OVA also is encouraging interoperability, promoting best practices, spotlighting customer successes, and generally raising awareness of KVM through marketing events and initiatives.
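To give a sense of the plumbing this ecosystem builds on: most KVM management tools sit atop libvirt, and a minimal sketch of starting a KVM guest through libvirt’s Python bindings looks roughly like the following. The guest name, memory size, and disk path are hypothetical placeholders, and a real deployment would define networking, disk drivers, and much more:

    # Requires a host with KVM and the libvirt Python bindings installed.
    import libvirt

    # A deliberately bare-bones guest definition, for illustration only.
    DOMAIN_XML = """
    <domain type='kvm'>
      <name>demo-guest</name>
      <memory unit='MiB'>1024</memory>
      <vcpu>1</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <source file='/var/lib/libvirt/images/demo-guest.img'/>
          <target dev='vda' bus='virtio'/>
        </disk>
      </devices>
    </domain>
    """

    conn = libvirt.open("qemu:///system")  # connect to the local KVM hypervisor
    dom = conn.createXML(DOMAIN_XML, 0)    # define and start a transient guest
    print("Started guest:", dom.name())
    conn.close()

The interesting competition, in other words, happens in the tooling above that layer — exactly where OVA hopes third parties will congregate.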

Give the People What They Want 

While VMware isn’t breaking out in a cold sweat or losing sleep over OVA, it’s clear that many members of OVA are anxious about the potential stranglehold VMware could gain in cloud infrastructure if its virtualization hegemony goes unchecked. In that regard, it’s notable that certain VMware partners — IBM and HP among them — are at the forefront of OVA.

If customers are demanding VMware, as they clearly have been doing, then that’s what IBM and HP will give them. It’s good business practice for service-based solution providers to give customers what they want. But circumstances can change — customers might be persuaded to accept alternatives to VMware — and IBM and HP probably wouldn’t mind if they did.

Certainly VMware recognizes that its partners also can be its competitors. There’s even a well-worn industry phrase for it: coopetition. At the same time, though, IBM and HP would welcome customer demand for an open-source alternative to VMware, which explains their avidity for and evangelization of KVM.

Client-Server Reprise?

An early lead in a strategic market can result in long-term industry dominance. That’s what VMware wants to achieve, and it’s what nearly everybody else — excluding VMware’s majority shareholder, EMC — would like to prevent. Industry giants IBM and HP have seen this script play out in the client-server era with Microsoft’s Windows, and they’re not keen to relive the experience in cloud computing.

VMware’s customer appeal and market differentiation derive from its dominance in server virtualization, a foundation that allows it to extend up and out into areas that could give it a stranglehold on cloud computing’s most valuable technologies. Nearly every vendor with a stake in the data center is keeping a wary eye on VMware. Some, such as Microsoft and Oracle, are outright competitors seeking to cut into VMware’s market lead, while others — such as HP, IBM, and Cisco — are partnering pragmatically with VMware while pursuing strategic alternatives and contingency plans.

Commoditizing Competitor’s Edge

In promoting an open-source alternative as a means of undercutting a competitor’s advantage, IBM and its OVA cohorts are taking a page from a well-worn strategic handbook. This is what Google unleashed against Apple in mobile operating systems with Android, and what Facebook is trying to achieve against Google in cloud data centers with its Open Compute Project. For OVA’s charter members, it’s all about attempting to commoditize a market leader’s competitive differentiation to level the playing field — and perhaps to eventually tilt it to your advantage.

IBM and HP have integration prowess and professional-services capabilities that VMware lacks. If they can nullify virtualization as a strategic asset by commoditizing it, they relegate VMware to a lesser role. However, if they fail and VMware’s differentiation is maintained and extended further, they risk losing a great deal of long-term account control in a burgeoning market.

KVM Rather than XenServer

Some might wonder why the open-source server virtualization alternative became KVM and not, say, XenServer, whose custodian, XenSource, is owned by Citrix. One of the reasons could be Citrix’s relatively warm embrace by Microsoft. When Gartner released its Magic Quadrant for x86 Server Virtualization Infrastructure this summer, it questioned whether Citrix’s ties to Microsoft could result in XenServer being compromised. Microsoft, of course, has its own server-virtualization entry in Hyper-V.

In the end, the OVA gang put down its money on KVM rather than XenServer, seeing the former as a less-complicated proposition than the latter. That appears to have been the right move.

Clearly OVA has experienced striking growth in just a few months, but it has a long way to go before it meets the strategic mandate envisioned by its founders.

Further Intimations of Cisco-EMC Tensions

At the risk of inviting further ad hominem attacks, I will note again that all might not be well with the relationship between Cisco and EMC, particularly within the context of their VCE joint venture.

I suggested previously that Cisco and EMC might be heading for a not-so-amicable divorce, and I still feel that the organizational and technological auguries point in that direction. The signs at VCE — which provides converged infrastructure comprising Cisco servers and switches, EMC storage, and VMware virtualization — have been inauspicious lately, with layoffs, significant restructuring, and Cisco’s increasingly ardent converged-infrastructure partnership with EMC competitor NetApp adding murk to the mix.

Capellas Loses CEO Title

Now, there’s more to consider. A few weeks ago, as reported by The Register, Michael Capellas was delisted as VCE’s CEO on the company’s website. Capellas is a Cisco board member who was strongly backed by John Chambers for the CEO position at VCE. The official story from VCE is that nothing has changed, that Capellas’ role remains the same even though he’s lost the CEO designation and now shares the responsibility of running the company with Frank Hauck, a longtime EMC executive who was appointed VCE president earlier this year.

Perhaps VCE’s official spin on the mahogany-row shuffle is true, but skepticism seems warranted.

In the same piece at The Register that updates us on Capellas’ current status at VCE, we also learn that a source formerly employed by the joint venture says “the Cisco originator of the Vblock concept is no longer at VCE and neither is the Cisco staffer who ran VCE’s service provider and channel sales operation.”

Mere coincidence, one might contend, and I’m inclined to take that possibility under advisement.

EMC in Server Business?

There’s one other piece of evidence to consider, though. As reported by The Register (yes, again), EMC seems to have moved, via its storage arrays, into the server business. That, as you might expect, could have implications for EMC’s relationship with Cisco and its Unified Computing System (UCS) servers.

Here’s a particularly salient excerpt from The Register article, written by Chris Mellor:

“If you have a VMAX, with flash-enhanced engines, able to run application software, then you wouldn’t need UCS servers to do that job. Were EMC to do a deal with a network supplier, then you wouldn’t need Cisco network switches to hook the application server/array complex up to accessing clients either, and we might have a VMAXblock as well as a Vblock.”

For its part, EMC is ambiguous on whether it’s actually entering the server space. On his blog, EMC staffer Mark Twomey has enjoyed some mischievous fun with the proposition, concluding that EMC’s moves put it in the compute and systems business and “maybe” in the server business.

Such fine distinctions might be lost on server vendors such as HP, Dell, and IBM.

Follow the Money

Let’s remember that EMC is the overwhelming majority shareholder — and, thus, owner — of VMware. As such, the virtualization leader will not do anything to hurt the business prospects of its de facto parent. More to the point, VMware remains in the strategic service of EMC, furthering its big-picture agenda while advancing its own interests.

That combination isn’t just a competitive threat to the likes of HP, IBM, and Dell. Increasingly — indirectly or otherwise — Cisco seems to be in the EMC-VMware gunsights, too.

Limits to Consumerization of IT

At GigaOM, Derrick Harris is wondering about the limits of consumerization of IT for enterprise applications. It’s a subject that warrants consideration.

My take on consumerization of IT is that it makes sense, and probably is an unstoppable force, when it comes to the utilization of mobile hardware such as smartphones and tablets (the latter almost exclusively iPads these days).

This is a mutually beneficial arrangement. Employees are happier, not to mention more productive and engaged, when using their own computing and communications devices. Employers benefit because they don’t have to buy and support mobile devices for their staff. Both groups win.

Everybody Wins

Moreover, mobile device management (MDM) and mobile-security suites, together with various approaches to securing applications and data, mean that the security risks of allowing employees to bring their devices to work have been sharply mitigated. In relation to mobile devices, the organizational rewards of IT consumerization — greater employee productivity, engaged and involved employees, lower capital and operating expenditures — outweigh the security risks, which are being addressed by a growing number of management and security vendors who see a market opportunity in making the practice safer.

In other areas, though, the case in favor of IT consumerization is not as clear. In his piece, Harris questions whether VMware will be successful with a Dropbox-like application codenamed Project Octopus. He concludes that those already using Dropbox will be reluctant to swap it for an enterprise-sanctioned service that provides similar features, functionality, and benefits. He posits that consumers will want to control the applications and services they use, much as they determine which devices they bring to work.

Data and Applications: Different Proposition

However, the circumstances are different. As noted above, there’s diminishing risk for enterprise IT in allowing employees to bring their devices to work. Dropbox, and consumer-oriented data-storage services in general, is an entirely different proposition.

Enterprises increasingly have found ways to protect sensitive corporate data residing on and traveling to and from mobile devices, but consumer-oriented products like Dropbox do an end run around secure information-management practices in the enterprise and can leave sensitive corporate information unduly exposed. The enterprise cost-benefit analysis for a third-party service like Dropbox shows risks outweighing potential rewards, and that sets up a dynamic in which many corporate IT departments will insist on company-wide adoption of enterprise-class alternatives.

Just as I understand why corporate minders acceded to consumerization of IT in relation to mobile devices, I also fully appreciate why corporate IT will draw the line at certain types of consumer-oriented applications and information services.

Consumerization of IT is a real phenomenon, but it has its limits.