Category Archives: Storage Networking

Xsigo: Hardware Play for Oracle, Not SDN

When I wrote about Xsigo earlier this year, I noted that many saw Oracle as a potential acquirer of the I/O virtualization vendor. Yesterday morning, Oracle made those observers look prescient, pulling the trigger on a transaction of undisclosed value.

Chris Mellor at The Register calculates that Oracle might have paid about $800 million for Xsigo, but we don’t know. What we do know is that Xsigo’s financial backers were looking for an exit. We also know that Oracle was willing to accommodate it.

For the Love of InfiniBand, It’s Not SDN

Some think Oracle bought a software-defined networking (SDN) company. I was shocked at how many journalists and pundits repeated the mantra that Oracle had moved into SDN with its Xsigo acquisition. That is not right, folks, and knowledgeable observers have tried to rectify that misconception.

I’ve gotten over a killer flu, and I have a residual sinus headache that sours my usually sunny disposition, so I’m in no mood to deliver a remedial primer on the fundamentals of SDN. Suffice it to say, readers of this forum and those familiar with the pronouncements of the ONF will understand that what Xsigo does, namely I/O virtualization, is not SDN. That is not to say that what Xsigo does is not valuable, perhaps especially to Oracle. Nonetheless, it is not SDN.

Incidentally, I have seen a few commentators throwing stones at the Oracle marketing department for depicting Xsigo as an SDN player, comparing it to Nicira Networks, which VMware is in the process of acquiring for a princely sum of $1.26 billion. It’s probably true that Oracle’s marketing mavens are trying to gild their new lily by covering it with splashes of SDN gold, but, truth be told, the marketing team at Xsigo began dressing their company in SDN garb earlier this year, when it became increasingly clear that SDN was a lot more than an ephemeral science project involving OpenFlow and boffins in lab coats.

Why Confuse? It’ll be Obvious Soon Enough

At Network Computing, Howard Marks tries to get everybody onside. I encourage you to read his piece in its entirety, because it provides some helpful background and context, but his superbly understated money quote is this one: “I’ve long been intrigued by the concept of I/O virtualization, but I think calling it software-defined networking is a stretch.”

In this industry, words are stretched and twisted like origami until we can no longer recognize their meaning. The result, more often than not, is befuddlement and confusion, as we witnessed yesterday, an outcome that really doesn’t help anybody. In fact, I would argue that Oracle and Xsigo have done themselves a disservice by playing the SDN card.

As Marks points out, “Xsigo’s use of InfiniBand is a good fit with Oracle’s Exadata and other clustered solutions.” What’s more, Matt Palmer, who notes that Xsigo is “not really an SDN acquisition,” also writes that “Oracle is the perfect home for Xsigo.” Palmer makes the salient point that Xsigo is essentially a hardware play for Oracle, one that aligns with Oracle’s hardware-centric approaches to compute and storage.

Oracle: More Like Cisco Than Like VMware

Oracle could have explained its strategy and detailed the synergies between Xsigo and its family of hardware-engineered “Exasystems” (Exadata and Exalogic) —  and, to be fair, it provided some elucidation (see slide 11 for a concise summary) — but it muddied the waters with SDN misdirection, confusing some and antagonizing others.

Perhaps my analysis is too crude, but I see a sharp divergence between the strategic direction VMware is heading with its acquisition of Nicira and the path Oracle is taking with its Exasystems and Xsigo. Remember, Oracle, after the Sun acquisition, became a proprietary hardware vendor. Its focus is on embedding proprietary hooks and competitive differentiation into its hardware, much like Cisco Systems and the other converged-infrastructure players.

VMware’s conception of a software-defined data center is a completely different proposition. Both offer virtualization, both offer programmability, but VMware treats the underlying abstracted hardware as an undifferentiated resource pool. Conversely, Oracle and Cisco want their engineered hardware to play integral roles in data-center virtualization. Engineered hardware is what they do and who they are.

Taking the Malocchio in New Directions

In that vein, I expect Oracle to look increasingly like Cisco, at least on the infrastructure side of the house. Does that mean Oracle soon will acquire a storage player, such as NetApp, or perhaps another networking company to fill out its data-center portfolio? Maybe the latter first, because Xsigo, whatever its merits, is an I/O virtualization vendor, not a switching or routing vendor. Oracle still has a networking gap.

For reasons already belabored, Oracle is an improbable SDN player. I don’t see it as the likeliest buyer of, say, Big Switch Networks. IBM is more likely to take that path, and I might even get around to explaining why in a subsequent post. Instead, I could foresee Oracle taking out somebody like Brocade, presuming the price is right, or perhaps Extreme Networks. Both vendors have been on and off the auction block, and though Oracle’s Larry Ellison once disavowed acquisitive interest in Brocade, circumstances and Oracle’s disposition have changed markedly since then.

Oracle, which has entertained so many bitter adversaries over the years — IBM, SAP, Microsoft, Salesforce, and HP among them — now appears ready to cast its “evil eye” toward Cisco.

Dell’s Steady Progression in Converged Infrastructure

With its second annual Dell Storage Forum in Boston providing the backdrop, Dell made a converged-infrastructure announcement this week.  (The company briefed me under embargo late last week.)

The press release is available on the company’s website, but I’d like to draw attention to a few aspects of the announcement that I consider noteworthy.

First off, Dell now is positioned to offer its customers a full complement of converged infrastructure, spanning server, storage, and networking hardware, as well as management software. For customers seeking a single-vendor, one-throat-to-choke solution, this puts Dell at parity with IBM and HP, while Cisco still must partner with EMC or NetApp for its storage technology.

Bringing the Storage

Until this announcement, Dell was lacking the storage ingredients. Now, with what Dell is calling the Dell Converged Blade Data Center solution, the company is adding its EqualLogic iSCSI Blade Arrays to Dell PowerEdge blade servers and Dell Force10 MXL blade switching. Dell says this package gives customers an entire data center within a single blade enclosure, streamlining operations and management, and thereby saving money.

Dell’s other converged-infrastructure offering is the Dell vStart 1000. For this iteration of vStart, Dell is including, for the first time, its Compellent storage and Force10 networking gear in one integrated rack for private-cloud environments.

The vStart 1000 comes in two configurations: the vStart 1000m and the vStart 1000v. The packages are nearly identical — PowerEdge M620 servers, PowerEdge R620 management servers, Dell Compellent Series 40 storage, Dell Force10 S4810 ToR networking, plus Brocade 5100 ToR Fibre-Channel switches — but the vStart 1000m comes with Windows Server 2008 R2 Datacenter (with the Hyper-V hypervisor), whereas the vStart 1000v features trial editions of VMware vCenter and VMware vSphere (with the ESXi hypervisor).

As an aside, it’s worth mentioning that Dell’s inclusion of Brocade’s Fibre-Channel switches confirms that Dell is keeping that partnership alive to satisfy customers’ FC requirements.

Full Value from Acquisitions

In summary, then, Dell is delivering converged infrastructure with both of its in-house storage options, demonstrating that it has fully integrated its major hardware acquisitions into the mix. It’s covering as much converged ground as it can with this announcement.

Nonetheless, it’s fair to ask where Dell will find customers for its converged offerings. During my briefing with Dell, I was told that mid-market was the real sweet spot, though Dell also sees departmental opportunities in large enterprises.

The mid-market, though, is a smart choice, not only because the various technology pieces, individually and collectively, seem well suited to the purpose, but also because Dell, given its roots and lineage, is a natural player in that space. Dell has a strong mandate to contest the mid-market, where it can hold its own against any of its larger converged-infrastructure rivals.

Mid-Market Sweet Spot

What’s more, the mid-market — unlike cloud-service providers today and some large enterprises in the not-too-distant future — is unlikely to have the inclination, resources, and skills to pursue a DIY, software-driven, DevOps-oriented variant of converged infrastructure that might involve bare-bones hardware from Asian ODMs. At the end of the day, converged infrastructure is sold as packaged hardware, and paying customers will need to perceive and realize value from buying the boxes.

The mid-market would seem more than receptive to the value proposition that Dell is selling, which is that its converged infrastructure will reduce the complexity of IT management and deliver operational cost savings.

This finally leads us to a discussion of Dell’s take on converged infrastructure. As noted in an eChannelLine article, Dell’s notion of converged infrastructure encompasses operations management, services management, and applications management. As Dell continues down the acquisition trail, we should expect the company to place greater emphasis on software-based intelligence in those areas.

That, too, would be a smart move. The battle never ends, but Dell — despite its struggles in the PC market — is punching above its weight in converged infrastructure.

Cisco’s Storage Trap

Recent commentary from Barclays Capital analyst Jeff Kvaal has me wondering whether  Cisco might push into the storage market. In turn, I’ve begun to think about a strategic drift at Cisco that has been apparent for the last few years.

But let’s discuss Cisco and storage first, then consider the matter within a broader context.

Risks, Rewards, and Precedents

Obviously a move into storage would involve significant risks as well as potential rewards. Cisco would have to think carefully, as it presumably has done, about the likely consequences and implications of such a move. The stakes are high, and other parties — current competitors and partners alike — would not sit idly on their hands.

Then again, Cisco has been down this road before, when it chose to start selling servers rather than relying on boxes from partners such as HP and Dell. Today, of course, Cisco partners with EMC and NetApp for storage gear. Citing the precedent of Cisco’s server incursion, one could make the case that Cisco might be tempted to call the same play.

After all, we’re entering a period of converged and virtualized infrastructure in the data center, where private and public clouds overlap and merge. In such a world, customers might wish to get well-integrated compute, networking, and storage infrastructure from a single vendor. That’s a premise already accepted at HP and Dell. Meanwhile, it seems increasingly likely that data-center infrastructure is coming together, in one way or another, in service of application workloads.

Limits to Growth?

Cisco also has a growth problem. Despite attempts at strategic diversification, including failed ventures in consumer markets (Flip, anyone?), Cisco still hasn’t found a top-line driver that can help it expand the business while supporting its traditional margins. Cisco has pounded the table perennially for videoconferencing and telepresence, but it’s not clear that Cisco will see as much benefit from the proliferation of video collaboration as once was assumed.

To complicate matters, storm clouds are appearing on the horizon, with Cisco’s core businesses of switching and routing threatened by the interrelated developments of service-provider alienation and software-defined networking (SDN). Cisco’s revenues aren’t about to fall off a cliff by any means, but nor are they on the cusp of a second-wind surge.

Such uncertain prospects must concern Cisco’s board of directors, its CEO John Chambers, and its institutional investors.

Suspicious Minds

In storage, Cisco currently has marriages of mutual convenience with EMC (Vblocks and the sometimes-strained VCE joint venture) and with NetApp (the FlexPod reference architecture). The lyrics of Mark James’ song Suspicious Minds are evocative of what’s transpiring between Cisco and these storage vendors. The problem is not only that Cisco is bigamous, but that the networking giant might have another arrangement in mind that leaves both partners jilted.

Neither EMC nor NetApp is oblivious to the danger, and each has taken care to reduce its strategic reliance on Cisco. Conversely, Cisco would be exposed to substantial risks if it were to abandon its existing partnership in favor of a go-it-alone approach to storage.

I think that’s particularly true in the case of EMC, which is the majority owner of server-virtualization market leader VMware as well as a storage vendor. The corporate tandem of VMware and EMC carries considerable enterprise clout, and Cisco is likely to be understandably reluctant to see the duo become its adversaries.

Caught in a Trap

Still, Cisco has boxed itself into a strategic corner. It needs growth, it hasn’t been able to find it from diversification away from the data center, and it could easily see the potential of broadening its reach from networking and servers to storage. A few years ago, the logical choice might have been for Cisco to acquire EMC. Cisco had the market capitalization and the onshore cash to pull it off five years ago, perhaps even three years ago.

Since then, though, the companies’ market fortunes have diverged. EMC now has a market capitalization of about $54 billion, while Cisco’s is slightly more than $90 billion. Even if Cisco could find a way of repatriating its offshore cash hoard without taking a stiff hit from the U.S. taxman, it wouldn’t have the cash to pull off an acquisition of EMC, whose shareholders doubtless would be disinclined to accept Cisco stock as part of a proposed transaction.

Therefore, even if it wanted to do so, Cisco cannot acquire EMC. It might have been a good move at one time, but it isn’t practical now.

Losing Control

Even NetApp, with a market capitalization of more than $12.1 billion, would rate as the biggest purchase by far in Cisco’s storied history of acquisitions. Cisco could pull it off, but then it would have to try to further counter and commoditize VMware’s virtualization and cloud-management presence through a fervent embrace of something like OpenStack or a potential acquisition of Citrix. I don’t know whether Cisco is ready for either option.

Actually, I don’t see an easy exit from this dilemma for Cisco. It’s mired in somewhat beneficial but inherently limiting and mutually distrustful relationships with two major storage players. It would probably like to own storage just as it owns servers, so that it might offer a full-fledged converged infrastructure stack, but it has let the data-center grass grow under its feet. Just as it missed a beat and failed to harness virtualization and cloud as well as it might have done, it has stumbled similarly on storage.

The status quo is likely to prevail until something breaks. As we all know, however, making no decision effectively is a decision, and it carries consequences. Increasingly, and to an extent that is unprecedented, Cisco is losing control of its strategic destiny.

At Dell, Networking’s Role Secondary but Integral

Dell made a networking announcement last week, and, for the most part, reaction was muted. That’s partly because Dell’s networking narrative is evolving and in transition, and partly because the announcements related to incremental, though notable, progress.

To be fair, Dell’s networking narrative is part of a larger story the company is telling in the data center. Networking is integral to that story, but it’s not the centerpiece and never will be. Dell is working from the blueprint of its Virtual Network Architecture (VNA), so its purchase and stewardship of Force10 is framed within a bigger picture that involves not just converged infrastructure, but also workload-driven orchestration of virtualized environments.

Integration and Assimilation

Some good news for Dell is that its integration and assimilation of Force10 Networks seems to have gone well and is now complete. Dell’s OpenManage Network Manager (OMNM) 5.0 offers a new look and support for the full line of Dell networking products, including the Force10 portfolio. What’s more, with the Dell Force10 MXL blade interconnect, a 40Gb Ethernet switch for the M1000e blade chassis, Dell delivers an apt metaphor as well as a blade-server switch.

In that sense, it’s helpful to recall that Dell’s acquisition of Force10 was motivated by a desire to integrate networking into an automated, orchestrated data center in which it already offered compute and storage. Dell concluded that it needed to own networking technology just as it owned server and storage technology. It further deduced that it needed a comprehensive networking portfolio, extending across SAN and LAN environments. Just as it moved previously to shake its dependence on storage partners, it would do likewise in networking.

Dell sees networking as an integral enabling technology, but not as an end in itself. Dell believes it can be more flexible than HP and IBM in certain enterprise demographics, and it believes it can outflank Cisco by being less “network centric” and more open to developments such as software-defined networking (SDN). Force10, which was thought to be between a rock and a hard place just before being acquired, understands and accepts its role in the Dell universe.

Fitting Into VNA

The key to understanding Dell’s data-center strategy is Virtual Network Architecture (VNA). The announcement of the new blade-server switch fits into that plan. Dell says VNA’s purpose is to virtualize, automate, and orchestrate network services so that they can adapt readily to application and business requirements. Core elements of VNA include the following:

  • High-performance switching systems for the campus and the data center
  • Virtualized Layer 4-7 services
  • Comprehensive automation & orchestration software
  • Open workload/hypervisor interfaces

So, what does it all mean? It means Dell is taking an approach that it believes will be differentiated and add considerable value in customers’ and prospective customers’ data centers. On the networking front, Dell believes it has espoused a strategy that encompasses and envelops the rise of SDN while also taking an accommodating approach to the networking gear already present in customer accounts.

Workload-Oriented Approach

In an article at The VAR Guy, Nathan Eddy quotes Dario Zamarian, VP and GM of Dell Networking, as follows:

“We are taking a workload-oriented approach — as in, ‘What does each require first?’ as opposed to starting with the network first [and] then trying to fit the application to it. In other words, networking is the enabler. The ultimate goal of VNA is to make networking as simple to set up, automate, operate, and manage as servers. VNA is doing for networking what VMware did for servers.”

Well, that’s the plan. In theory, in a slide show, all the pieces are there, but Dell has to execute and deliver on the vision. One can identify holes in the structure, places where Dell will need to buy, partner, or build to close the gaps. It’s clearly doing that, though, as the Force10 acquisition and others recently attest.

Taking Force10’s technology forward in alignment with its plans, Dell not only announced a 40GbE-enabled blade-server switch; it also introduced fabric- and network-management tools to simplify operations in the data center and the campus, and it announced data-center enhancements (stacking technology, L2 multipathing, data-center bridging, and automated workload mobility through auto-provisioning of VLANs) to Force10’s FTOS for its S4810 10/40G switching platform.

Encompassing SDN

On the SDN front, Dell announced interoperability with Big Switch Networks’ Open SDN architecture and its OpenFlow-based Floodlight controller. That interoperability will be showcased next week in joint demonstrations at Interop, with the application emphasis on cloud multi-tenancy.
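To make that interoperability a little more concrete, here is a minimal sketch of how an orchestration script might push a flow entry to a Floodlight controller through its Static Flow Pusher REST interface. The controller address, switch DPID, and port numbers are illustrative assumptions, and the endpoint path and field names follow the v0.90-era Floodlight API, so they may differ in other releases.

```python
# Minimal sketch: pushing a static flow entry to a Floodlight controller.
# Controller address, DPID, and ports are illustrative assumptions; the
# endpoint path and field names follow the v0.90-era Static Flow Pusher
# API and may differ in other Floodlight releases.
import json
import urllib.request

CONTROLLER = "http://127.0.0.1:8080"            # assumed Floodlight REST listener
FLOW_PUSHER = CONTROLLER + "/wm/staticflowentrypusher/json"

flow = {
    "switch": "00:00:00:00:00:00:00:01",        # DPID of the target OpenFlow switch
    "name": "tenant-a-port1-to-port2",          # arbitrary rule name
    "priority": "32768",
    "ingress-port": "1",                        # match traffic arriving on port 1
    "active": "true",
    "actions": "output=2",                      # forward matching traffic out port 2
}

req = urllib.request.Request(
    FLOW_PUSHER,
    data=json.dumps(flow).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())                 # controller returns a status message
```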

Regardless of where Dell goes with SDN, and regardless of how quickly (or slowly) SDN makes encroachments into the enterprise, Dell’s VNA model accounts for it and much else besides. Dell believes it can win in workload and network orchestration, with its Advanced Infrastructure Manager (AIM) providing virtual-network programming interfaces and doubtless with some forthcoming orchestration technologies it has yet to introduce (or buy).

Dell’s VNA seems a viable plan. But can the company continue to execute on it? Dell would have more focus and resources to do so if it jettisoned its woebegone consumer business, but that divestiture doesn’t seem to be in the cards.

Further Thoughts on Cisco’s Latest Spin-In Venture

This is a follow-up post to my last missive regarding Cisco’s latest reported spin-in venture, Insieme (not Insiemi, apparently). As you will recall, we had heard for some time that Cisco’s masters of the spin-in venture were getting back in the saddle for at least one more stretch run.

The question had become not whether they’d come back, but what they would put on the playlist for their reunion. Now, as indicated in an article in the New York Times, the widely held assumption is that Insieme will provide Cisco’s answer to software-defined networking (SDN).

But, as we know, SDN means different things to different vendors. Given the composition and capabilities of the team at Insieme, I wouldn’t expect this group to recreate the sort of logically centralized control plane and server-based programmable networking that the likes of Nicira and Big Switch Networks have championed.

ASICs in the Mix 

After all, the central protagonists at Insieme — Mario Mazzola, Luca Cafiero, Prem Jain — are hardware engineers. Throughout their long, storied, and illustrious careers, they have built switches. There is no reason to think they will be cast against type in this particular venture. A variation on what they’ve done in their previous spin-in ventures for Cisco —  Andiamo, which was responsible for Cisco’s storage-area networking (SAN) switches, and Nuova, which provided Cisco with its Nexus data-center switches — is probably what they’ll do this time, too.

Admittedly, there is some software talent on the Insieme roster. Network World’s Jim Duffy reported that Ronak Desai, the architect of Cisco’s NX-OS FabricPath and Virtual Device Context software, and of the MDS SAN switch operating system, is on the team. Michael Smith, a distinguished engineer who worked on Cisco’s Nexus 1000v virtual switch, also might be part of the Insieme squad.

Still, John Chambers recently reiterated Cisco’s unswerving commitment to the proprietary switching ASIC, which Cisco sees as a point of differentiation against Arista Networks and others. Chambers’ words suggest that Cisco isn’t about to get the newfangled SDN religion. In fact, if anything, they suggest that Cisco is still working from its well-thumbed playbook of ASIC-based switches in a network-centric world.

Moreover, with Tom Edsall, the lead ASIC architect on the Nexus and MDS switching lines, reportedly on board with Insieme, we can probably safely deduce that the ASIC will be front and center in whatever the spin-in effort delivers. So, if it’s an SDN architecture Insieme has been mandated to deliver, it will be one with a distributed control plane and absolutely no role for dumb, off-the-rack switches.

Two Possible Scenarios

With regard to the increasingly contested definition of SDN — look no further than the marketing messages of certain vendors or to the software-driven networking hijinks now occurring in the IETF — there’s also the possibility that what the Insieme pack are doing could be only incidentally connected to what many consider SDN.

With that in mind, I want to turn to some intriguing speculation that William Koss, now at Plexxi, has provided on what he believes Cisco’s latest spin-in venture might be building. In a post on his blog, Koss reviews Cisco’s switching history, much of it involving the three musketeers now reuniting at Insieme. He then explains why Cisco does spin-in ventures before he offers his assessment of what Insieme might be trying to accomplish.

He offers two possible paths Insieme might take. The first path would involve Cisco attending to what Koss terms “unfinished business” (including Brocade) in the storage space. In this scenario, the Insieme team would build a successor switch to the Nexus line with storage-networking hooks. This switch would be intended as a crushing reply to Xsigo’s I/O Director, while also limiting further market encroachments by Arista Networks, currently well entrenched in low-latency application environments, and inoculating against potential traction from SDN startups such as Nicira and Big Switch.

As for the second option, he envisions something proceeding along an “SDN OpenFlow strategy path.” In this scenario, Koss foresees a new platform that functions as a “Nexus OS-to-OpenFlow arbitration box,” which he describes as analogous to a session border controller (SBC) between the two networks. This would give Cisco’s installed base SDN-like capabilities while keeping customers wrapped inside Cisco’s proprietary cocoon.

Surprise Not Likely

In my view, both paths described by Koss are plausible scenarios for Insieme.  My gut feeling is that the first is more likely. The second option is more software intensive, and it would seem to feature less of the ASIC and storage-networking expertise possessed by known members of the Insieme team. Perhaps Mario, Luca, and Prem will blaze an entirely different path and surprise us all, but Koss might be on the right track with his speculative musings.

As always, we shall see.

Cheriton Sees Opportunity in Infrastructure

When I wrote my first post on this blog, way back in 2006, I assumed that technology infrastructure largely was a spent force. I expected incremental enhancements, gradual advances, but I didn’t anticipate another major boom or a significant disruption of the established order in what once had been a vibrant technology space.

While the technology industry as a whole can suffer from blinkered, willful optimism, perhaps I was afflicted by a different condition entirely. I might have been too pessimistic, too gloomy, dispirited by the technology downturn of the early 2000s and the lack of a meaningful, sustained recovery in the years that immediately followed.

By the way, when I refer to technology, I’m not talking about social networking such as Facebook. I understand that there’s a lot of technology behind the scenes at Facebook, but the customer-facing “social” phenomenon leaves me cold. I never did see the point of Facebook from a user’s perspective, though I understood how it could serve as an unprecedented data-mining machine for advertisers.

Opportunity Renewed

Fortunately, though, I was wrong about the decline and fall of infrastructure. It took a while, but a new era of infrastructure has arisen, based on virtualization, orchestration, and automation. Technological possibilities that we could only dream about more than a decade ago are now possible. In the networking realm, software-defined networking (SDN) is enabling comparatively outmoded network infrastructure to catch up with compute and, to a lesser degree, storage infrastructure as the promise of an application-driven, programmable data center comes into clearer view.

Suddenly, at long last, there’s new opportunity in infrastructure.

You don’t have to take my word for it, either. There are people who’ve designed and developed industry-leading technologies who espouse the same opinion. Some of these people are billionaires, and they’ve backed their convictions with substantial sums of money, investing in technologies and companies with clear mandates to remake IT infrastructure.

Outrageously Wealthy Canuck

One of those people is David Cheriton, a billionaire who wears many hats. He is Professor of Computer Science and Electrical Engineering at Stanford University, where he researches networking and distributed systems, and he also serves as a co-founder and chief scientist at Arista Networks. He’s also an investor in startup companies. Back in 1998, one early-stage company in which he invested, along with Arista co-founder Andy Bechtolsheim, was Google.  The duo made a similar early investment in VMware, so they’ve done okay.

Born in Vancouver, raised in Edmonton, Alberta, and ranked 37th on a Wikipedia list of “richest Canadians”** — Forbes ranks him 21st among outrageously wealthy Canucks  — Cheriton recently spoke about innovation and entrepreneurship at a Churchill Club event in Silicon Valley. The event was co-hosted and organized by the Hua Yuan Science and Technology Association and also featured Ken Xie, who founded NetScreen (acquired by Juniper Networks in 2004) and is now president and CEO of unified-threat-management/firewall vendor Fortinet, a company he also founded.

In addition to his apparent knack as an investor, Cheriton has considerable firsthand experience as an entrepreneur and an innovator. Before he and Bechtolsheim combined forces at Arista Networks,  they founded Granite Systems, a Gigabit-Ethernet switching concern that was acquired by Cisco in 1996 for about $220 million in stock, back when shares of Cisco were continuously on the rise.  Subsequently, after the Google investment, Bechtolsheim and Cheriton combined forces again to found Kealia, which specialized in server technology based on AMD’s Opteron microprocessor.  That company was acquired by Sun Microsystems in 2004, providing technology included in the Sun Fire X4500 storage product.

Room for Improvement

In 2005, Cheriton and Bechtolsheim followed up with Arista, then called Arastra, and its 10-GbE switching technology, which brings us to the approximate present and back to something Cheriton said at the Churchill Club event late last month. Noting that people tend to become preoccupied with the latest developments in social networking and mobility, Cheriton expressed his enthusiasm for infrastructure, as an investment vehicle as well as an area in which he has an abiding technical interest. As quoted in a BusinessWeek article, Cheriton said: “I think there is an opportunity to go back and say, ‘Gee, I think there’s lot of room for improvement in the infrastructure.’ ”

Reinforcing that point, he noted that technology infrastructure today is predicated on ideas that are about 30 years old. The network was the place to start the infrastructure refurbishment, Cheriton believed, and Arista Networks grew from that conviction.

But Cheriton hasn’t stopped there. He also founded a company called Optumsoft, about which not much is known. On its website, Optumsoft is described as an early-stage startup company “taking distributed computing and distributed software development mainstream.” Quoting from the website:

Recent advancements in multi-core computing systems, coupled with the ever increasing functional and performance requirements of software has created an exciting market opportunity for addressing the programmatic and architectural issues involved in modern software development. Optumsoft is addressing this growing market with a novel technology approach that is transparent, scalable, and portable, resulting in significant improvement to the development and maintenance of distributed/parallel structured software systems. Early production usage by commercial clients has validated the technology and value proposition.

Last fall, an anonymous source suggested on Quora that what Optumsoft was building related to “how to structure object-oriented RPC in a way that makes it easy to build robust systems.  The technology behind Arista’s EOS is based on some of these ideas, as was software structure at a previous startup, Kealia.  The technology includes an IDL and a C++ runtime, similar to what you’d get using CORBA.”
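For readers who haven’t worked with CORBA-style systems, the following is a purely illustrative sketch of the general pattern the description above points to: an interface definition plus a client-side proxy that marshals each method call and forwards it to a remote object. None of this reflects Optumsoft’s actual technology or APIs; the names, wire format, and transport are invented for illustration, and a real IDL-based system would generate such stubs in C++ rather than have you write them by hand.

```python
# Illustrative only: a hand-rolled, CORBA-flavored object-oriented RPC stub.
# All names and the JSON-over-TCP wire format are invented for this sketch.
import json
import socket
from abc import ABC, abstractmethod


class Counter(ABC):
    """Stands in for an IDL-defined interface."""

    @abstractmethod
    def increment(self, amount: int) -> int:
        ...


class CounterProxy(Counter):
    """Client-side stub: marshals each call and sends it to a remote servant."""

    def __init__(self, host: str, port: int) -> None:
        self._addr = (host, port)

    def increment(self, amount: int) -> int:
        request = json.dumps({"method": "increment", "args": [amount]})
        with socket.create_connection(self._addr) as conn:
            conn.sendall(request.encode("utf-8") + b"\n")
            reply = conn.makefile("r", encoding="utf-8").readline()
        return json.loads(reply)["result"]


# A caller works against the interface and never sees the remote plumbing:
#   counter = CounterProxy("10.0.0.5", 9000)
#   counter.increment(3)
```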

Nebula and Tintri

On the investment side, Cheriton and Bechtolsheim have put money into Nebula, which has venture-capital backing from Kleiner Perkins Caufield & Byers and Highland Capital Partners. Built on OpenStack, the Nebula Enterprise Cloud Appliance is designed to provision and configure flexible, scalable cloud-computing infrastructure. Although it doesn’t say so on the Nebula website, previous reports indicated that Arista’s networking technology is included in the Nebula appliance.

According to the BusinessWeek article,  Cheriton also has a stake in Tintri, co-founded by Kieran Harty and Mark Gritter. Harty was EVP of R&D at VMware for seven years, and Gritter was one of the first of Cheriton’s employees at Kealia. They’ve assembled a PhD-laden engineering team that has developed a virtual-machine-aware storage appliance designed for virtualized environments, which the company says have been underserved by older storage technology that apparently contributes to “VM stall.”

Another early-stage investment that Cheriton made was in Aster Data Systems, a purveyor of a massively parallel DBMS that runs on clustered commodity servers. Already a minority owner of Aster, Teradata bought the 89% of the company it didn’t own for $263 million last year.

Cheriton has made bets on infrastructure, and he’ll likely make others. It’s an encouraging sign for those of us who gravitate to that part of the industry.

(**No, I am not on the list, but thanks for asking.)

Xsigo’s Virtualized Infrastructure Draws Cisco’s Fire

Long involved in the discussion about and the market for converged I/O, Xsigo wants to be part of a larger debate and a potentially much bigger market opportunity.

Xsigo said last summer that its goal was to virtualize components of data-center networking, just as servers and storage have been virtualized previously. Wait, some of you might say, isn’t that the purview of software-defined networking (SDN) vendors? Well, yes, that’s true, and while there are obvious differences between what Xsigo delivers and what’s being put on the table by SDN purveyors, Xsigo thinks it has a compelling story to tell.

Xsigo’s I/O Director started off addressing virtualization and data transfer between servers and storage. Last summer, though, its I/O Director stepped up to the server-to-server challenge, simultaneously extending its incursion onto server turf while making a claim on networking territory.

Cisco Takes Notice

That got the attention of Cisco Systems, which offers networking and servers, and a relatively vehement vendetta ensued between the two companies. Xsigo probably got more benefit than Cisco did from the mutual antagonism, if only because Cisco’s public reaction to Xsigo indicated that the smaller player had done enough damage to be considered a threat by the networking giant. In aiming its competitive marketing guns at Xsigo and blasting away, Cisco explicitly acknowledged Xsigo and implicitly conferred added legitimacy in the process.

At any rate, with the addition of the Xsigo Server Fabric, which began shipping in earnest toward the end of last year, the Xsigo I/O Director now allows servers and devices to connect to each other directly without going over the network. As a result, adding a virtual machine (VM) doesn’t involve using an IP address or setting up a virtual LAN (VLAN); that’s handled by the I/O Director and its virtual server interfaces.

Market analyst Zeus Kerravala has said that the Xsigo Server Fabric creates a new infrastructure atop the physical network, which is true enough. The Xsigo Server Fabric obviates the access-layer network, allowing servers and their VMs to communicate directly.

Bumping Layers

Xsigo contends its Server Fabric also effectively eliminates the aggregation layer. Xsigo says its infrastructure extends as far as the core network, where it is compatible with switches from any of the major players, including Cisco and Juniper. As such, Xsigo says its technology transforms a hierarchical network into a pool of bandwidth that can be used to connect virtualized resources in a data center.

By reducing the numbers of switch ports and infrastructure layers — the company says there’s just one layer of connectivity management between the OS or hypervisor and the core network with its approach as compared to as many as four layers in the Cisco model — Xsigo says its business model is the exact opposite of Cisco’s. Further to that point, Xsigo says that it is open, acting as a transparent conduit moving data between servers and the network core, whereas it alleges Cisco is not. Finally, Xsigo says it has no server agenda, whereas Cisco pushes its own servers as part of its Unified Computing System (UCS) for data-center virtualization.

Playing Its Part

Having no server agenda and taking a cut of the networking pie seem to have resulted in a go-it-alone strategy for Xsigo. It’s conceivable that market dynamics  and shifting vendor alliances could change that picture, but for now Xsigo doesn’t have a powerful technology-partner ecosystem to leverage.  As The Register noted, Xsigo has no OEM deals and is not thought to be an acquisition target of a major player, though Dell is responsible for about 20 percent of Xsigo’s sales and Oracle is cited as a potential acquirer in some quarters.

Xsigo customers, including some big names, have derived some significant cost savings from cutting down on cabling and getting much greater utilization from servers, virtual machines, and their network resources.

While not a member of the SDN fraternity, Xsigo wants us to know that it is playing its part in virtualized infrastructure for the data center.