Category Archives: Big Data

Further Progress of Infineta

When I attended Network Field Day 3 (NFD3) in the Bay Area back in late March, the other delegates and I had the pleasure of receiving a presentation on Infineta Systems’ Data Mobility Switch (DMS), a WAN-optimization system built with merchant silicon and designed to serve as a high-performance data-center interconnect for applications such as multi-gigabit Business Continuity/Disaster Recovery (BCDR), cross-site virtualization, and other variations on what Infineta calls “Big Traffic,” a fast-moving sibling of Big Data.

Waiting on Part II

I wrote about Infineta and its DMS, as did some of the other delegates, including cardigan-clad fashionista Tony Bourke and avowed Networking Nerd Tom Hollingsworth. Meanwhile, formerly hirsute Derick Winkworth, who goes by the handle of Cloud Toad, began a detailed two-part serialization on Infineta and its technology, but he seems to be taking longer to deliver the sequel than it took Francis Ford Coppola to bring us The Godfather: Part II.

Suffice it to say, Infineta got our attention with its market focus (data-center interconnect rather than branch acceleration) and its compelling technological approach to solving the problem. I thought Winkworth made an astute point in noting that Infineta’s targeting of data-center interconnect means that the performance and results of its DMS can be assessed purely on the basis of statistical results rather than on human perceptions of application responsiveness.

Name that Tune 

Last week, Infineta’s Haseeb Budhani, the company’s chief product officer, gave me an update that coincided with the company’s announcement of FlowTune, a software QoS feature set for the DMS that is intended to deliver the performance guarantees required for applications such as high-speed replication and data backup.

Budhani used a medical analogy to explain why FlowTune is more effective than traditional solutions. FlowTune, he said, takes a preventive approach to network congestion caused by competing application flows, treating the cause of the problem instead of responding to its symptoms. Whereas conventional approaches rely on packet drops to trigger congestion recovery, FlowTune dynamically manages application-transmission rates through a multi-flow mechanism that allocates bandwidth credits according to QoS priorities specifying minimum and maximum performance thresholds. As a result, Budhani says, the WAN is fully utilized.
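
To make the credit mechanism concrete, here is a minimal Python sketch of credit-based rate allocation in that spirit: every flow is first granted its guaranteed floor, and leftover WAN capacity is then shared out round by round until each flow hits its ceiling or its demand. The Flow fields, the allocate() function, and the sample numbers are my own illustration of the general technique, not Infineta’s implementation.

```python
# Hypothetical credit-based rate allocator in the spirit of FlowTune's
# described behavior; names and numbers are illustrative only.
from dataclasses import dataclass

@dataclass
class Flow:
    name: str
    min_rate: float  # guaranteed floor (Mbps)
    max_rate: float  # QoS ceiling (Mbps)
    demand: float    # current offered load (Mbps)

def allocate(flows, wan_capacity):
    """Grant each flow its floor, then share leftover capacity fairly,
    capped by each flow's ceiling and demand (assumes floors fit the WAN)."""
    grants = {f.name: min(f.min_rate, f.demand) for f in flows}
    leftover = wan_capacity - sum(grants.values())
    wanting = [f for f in flows if grants[f.name] < min(f.max_rate, f.demand)]
    while leftover > 1e-9 and wanting:
        share = leftover / len(wanting)          # equal credit per round
        for f in list(wanting):
            headroom = min(f.max_rate, f.demand) - grants[f.name]
            credit = min(share, headroom)
            grants[f.name] += credit
            leftover -= credit
            if headroom - credit < 1e-9:
                wanting.remove(f)                # flow is satisfied or capped
    return grants

flows = [
    Flow("replication", min_rate=2000, max_rate=8000, demand=9000),
    Flow("backup",      min_rate=1000, max_rate=4000, demand=1500),
]
print(allocate(flows, wan_capacity=10000))
# {'replication': 8000.0, 'backup': 1500.0}
```

In this toy run on a 10 Gbps WAN, the replication flow is held to its 8 Gbps ceiling and the backup flow is served at its full 1.5 Gbps demand, and neither result requires a packet drop, which is the point of managing rates at the source.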

Storage Giants

Last week, Infineta and NetApp jointly announced that the former has joined the NetApp Alliance Partner Program. In a blog post, Budhani says Infineta’s relationships with storage-market leaders EMC and NetApp validate his company’s unique capability to deliver “the scale needed by their customers to accelerate traffic running at multi-Gigabit speeds at any distance.”

A software update, FlowTune is available to all Infineta customers. Budhani says it’s already being used by Time Warner.


HP Launches Its Moonshot Amid Changing Industry Dynamics

As I read about HP’s new Project Moonshot, which was covered extensively by the trade press, I wondered about the vendor’s strategic end game. Where is HP going with this technology initiative, and does it have a realistic chance of meeting its objectives?

Those questions led me to consider how drastically the complexion of the IT industry has changed as cloud computing takes hold. Everything is in flux, advancing toward an ultimate galactic configuration that, in many respects, will be far different from what we’ve known previously.

What’s the Destination?

It seems to me that Project Moonshot, with its emphasis on a power-sipping and space-saving server architecture for web-scale processing, represents an effort by HP to re-establish a reputation for innovation and thought leadership in a burgeoning new market. But what, exactly, is the market HP has in mind?

Contrary to some of what I’ve seen written on the subject, HP doesn’t really have a serious chance of using this technology to wrest meaningful patronage from the behemoths of the cloud service-provider world. Google won’t be queuing up for these ARM-based, Calxeda-designed, HP-branded “micro servers.” Nor will Facebook or Microsoft. Amazon or Yahoo probably won’t be in the market for them, either.

The biggest of the big cloud providers are heading in a different direction, as evidenced by their aggressive patronage of open-source hardware initiatives that are designed, at bottom, to reduce their dependence on traditional vendors of server, storage, and networking hardware. They’re breaking that dependence — in some ways, they see it as taking back their data centers — for a variety of reasons, but their behavior is invariably motivated by a desire to significantly reduce operating expenditures on data-center infrastructure while freeing themselves to innovate on the fly.

When Customers Become Competitors

We’ve reached an inflection point at which the largest cloud players — the Googles, the Facebooks, the Amazons, and some of the major carriers that have given thought to such matters — have figured out that they can build their own hardware infrastructure, or order it off the shelf from ODMs, and get it to do everything they need it to do (they have relatively few revenue-generating applications to consider) at a lower operating cost than if they kept buying feature-laden, more expensive gear from hardware vendors.

As one might imagine, this represents a major business concern for the likes of HP, as well as for Cisco and others who’ve built a considerable business selling hardware at sustainable margins to customers in those markets. An added concern is that enterprise customers, starting with many SMBs, have begun transitioning their application workloads to cloud-service providers. The vendor problem, then, is not only that the cloud market is growing, but also that segments of the enterprise market are at risk.

Attempt to Reset Technology Agenda

The vendors recognize the problem, and they’re doing what they can to adapt to changing circumstances. If the biggest web-scale cloud providers are moving away from reliance on them, then hardware vendors must find buyers elsewhere. Scores of cloud service providers are not as big, as specialized, or as resourceful as Google, Facebook, or Microsoft. Those companies might be considering the paths their bigger brethren have forged, with initiatives such as the Open Compute Project and OpenFlow (for computing and networking infrastructure, respectively), but perhaps they’re not entirely sold on those models or don’t think those models are quite right for their requirements just yet.

This represents an opportunity for vendors such as HP to reset the technology agenda, at least for these sorts of customers. Hence, Project Moonshot, which, while clearly ambitious, remains a work in progress consisting of the Redstone Server Development Platform, an HP Discovery Lab (the first one is in Houston), and HP Pathfinder, a program designed to create open standards and third-party technology support for the overall effort.

I’m not sure I understand who will buy the initial batch of HP’s “extreme low-power servers” based on Calxeda’s EnergyCore ARM server-on-a-chip processors. As I said before, and as an article at Ars Technica explains, those buyers are unlikely to be the masters of the cloud universe, for both technological and business reasons. For now, buyers might not even come from the constituency of smaller cloud providers.

Friends Become Foes, Foes Become Friends (Sort Of)

But HP is positioning itself for that market and for the buying decisions that will turn on energy-efficient system architectures. Project Moonshot also will embrace energy-efficient microprocessors from Intel and AMD.

Incidentally, what’s most interesting here is not that HP adopted an ARM-based chip architecture before opting for an Intel server chipset — though that does warrant notice — but that Project Moonshot was devised not so much to compete against other server vendors as to provide a rejoinder to the open-computing model advanced by Facebook and others.

Just a short time ago, industry dynamics were relatively easy to discern. Hardware and system vendors competed against one another for the patronage of service providers and enterprises. Now, as cloud computing grows and its business model gains ascendance, hardware vendors also find themselves competing against a new threat represented by mammoth cloud service providers and their cost-saving DIY ethos.

Update on IBM’s Acquisition of Platform Computing

Despite my best efforts, I have been unable to obtain specific details relating to the price that IBM paid to acquire high-performance computing (HPC) workload-management pioneer Platform Computing. If anything further surfaces on that front, I’ll let you know.

In the meantime, others have made some good observations regarding the logic behind the acquisition and the potential ramifications of the move. Dan Kusnetzky, who has longstanding familiarity with Platform in both vendor and analyst capacities, offers a succinct explanation of what Platform does and then delivers the following verdict:

“I believe IBM will be able to take this technology, integrate it into its ‘Smarter Computing’ marketing programs and introduce many organizations to the benefits of harnessing together the power of a large number of systems to tackle very large and complex workloads.

This is a good match.”

Meanwhile, Curt Monash recounts details of a briefing he had with Platform in August. He suspects that IBM acquired Platform for its MapReduce offering, but, as Kusnetzky suggests, I think IBM also sees a lot of untapped potential in Platform’s traditional HPC-oriented technical markets, where the company already has an impressive roster of blue-chip customers that have achieved compelling cost savings and time-to-market improvements with its cluster-management and load-sharing software.

There’s a lot of bluster about the cloud in relation to this acquisition, and that undoubtedly is a facet IBM will try to exploit in the future, but today Platform still does a robust business with its flagship software in scientific and technical computing. 

Platform apparently told Monash that it had “close to $100 million in revenue” and about 500 employees. The employee count seems about right, but I suspect the revenue number is exaggerated. According to a CBC news item on the acquisition, market-research firm Branham Group Inc. estimated that Platform generated revenue of about $71.6 million in its 2010 fiscal year. Presuming the Branham numbers are correct, and allowing for modest year-over-year growth, Platform’s fiscal-2011 revenue would more plausibly range from $75 million to $80 million, roughly 5 to 12 percent above the 2010 figure.

Finally, Ian Lumb, formerly an employee at Platform (as was your humble scribe), considers the potential implications of the acquisition for Platform’s long-heralded capacity to manage heterogeneous systems and workloads for its customers. This is a point that many analysts missed, and Lumb does an excellent job framing the dilemma IBM faces. Ostensibly, as Lumb notes, it will be business as usual for Platform and its support of heterogeneous systems, including those of IBM competitors such as Dell and HP.

But IBM faces a conundrum. Even if it were to choose to continue to support Platform’s heterogeneous-systems approach in deference to customer demand, the practicalities of doing so would prove daunting. Lumb explains why:

“To deliver a value-rich solution in the HPC context, Platform has to work (extremely) closely with the ‘system vendor’. In many cases, this closeness requires that Intellectual Property (IP) of a technical and/or business nature be communicated – often well before solutions are introduced to the marketplace and made available for purchase. Thus Platform’s new status as an IBM entity, has the potential to seriously complicate matters regarding risk, trust, etc., relating to the exchange of IP.

Although it’s been stated elsewhere that IBM will allow Platform measures of post-acquisition independence, I doubt that this’ll provide sufficient comfort for matters relating to IP. While NDAs specific to the new (and independent) Platform business unit within IBM may offer some measure of additional comfort, I believe that technically oriented approaches offer the greatest promise for mitigating concerns relating to risk, trust, etc., in the exchange of IP.”

It will be interesting to see how IBM addresses that challenge. Platform’s competitors, as Lumb writes, already are attempting to capitalize on the issue. 

IBM Rumored to be in Acquisition Talks with Platform Computing

Yes, I’m writing another post with a connection to the Open Virtualization Alliance (OVA), though I assure you I have not embarked on an obsessive serialization. That might occur at a later date, most likely involving a different topic, but it’s not on the cards now.

As for this post, the connection to OVA is glancing and tangential, relating to a company that recently joined the association (then again, who hasn’t of late?), but really made its bones — and its money — with workload-management solutions for high-performance computing. Lately, the company in question has gone with the flow and recast itself as a purveyor of private cloud computing solutions. (Again, who hasn’t?)

Talks Relatively Advanced

Yes, we’re talking about Platform Computing, rumored by some dark denizens of the investment-banking community to be a takeover target of none other than IBM. Apparently, according to sources familiar with the situation (I’ve always wanted to use that phrase), the talks are relatively advanced. That said, a deal is not a deal until pen is put to paper.

IBM and Platform first crossed paths, and began working together, many years ago in the HPC community, so their relationship is not a new one. The two companies know each other well.

Rich Heritage in Batch, Workload Management

Platform Computing broadly focuses on two sets of solutions. Its legacy workload-management business is represented by Load Sharing Facility (LSF), now part of a cluster-management product portfolio that — like LSF in its good old days — is targeted squarely at the HPC world. With its rich heritage in batch applications, LSF also underpins Platform’s workload-management software for grid infrastructure.
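
For readers who have never used a workload manager of LSF’s ilk, the core pattern is straightforward: jobs are submitted to a queue with resource requirements and priorities, and a scheduler dispatches them to hosts with enough free capacity, leaving the rest pending. The Python toy below sketches that pattern under my own simplified assumptions (slot-based hosts, strict priority-then-submission ordering); it is not Platform’s scheduler.

```python
# Toy illustration of the batch-scheduling idea behind workload managers
# such as LSF; this minimal sketch is my own, not Platform's code.
import heapq

class Cluster:
    def __init__(self, hosts):
        self.hosts = hosts       # {"hostname": free_slots}
        self.queue = []          # min-heap ordered by (priority, submit order)
        self._order = 0

    def submit(self, job_name, slots, priority=0):
        """Queue a job; a lower priority value is dispatched first."""
        heapq.heappush(self.queue, (priority, self._order, job_name, slots))
        self._order += 1

    def dispatch(self):
        """Place queued jobs on hosts with enough free slots, highest
        priority first; jobs that don't fit anywhere stay pending."""
        placed, deferred = [], []
        while self.queue:
            prio, order, name, slots = heapq.heappop(self.queue)
            host = next((h for h, free in self.hosts.items() if free >= slots), None)
            if host:
                self.hosts[host] -= slots
                placed.append((name, host))
            else:
                deferred.append((prio, order, name, slots))
        for item in deferred:
            heapq.heappush(self.queue, item)   # remain queued for next pass
        return placed

cluster = Cluster({"node01": 8, "node02": 4})
cluster.submit("sim-run", slots=6, priority=0)
cluster.submit("post-process", slots=4, priority=1)
print(cluster.dispatch())  # [('sim-run', 'node01'), ('post-process', 'node02')]
```

Real workload managers layer fair-share policies, preemption, resource reservation, and accounting on top of this basic loop, which is where products like LSF earn their keep.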

Like so many others, Platform has refashioned itself as a cloud-computing provider. The company, and some of its customers, found that its core technologies could be adapted and repurposed for the ever-ambiguous private cloud.

Big Data, Too

Perhaps sensitive about being hit by charges of “cloud washing,” Platform contends that it offers “private cloud computing for the real world” through cloud bursting for HPC and private-cloud solutions for enterprise data centers. Not surprisingly given its history, Platform is most convincing and compelling when addressing the requirements of the HPC crowd.

That said, the company has jumped onto the Big Data bandwagon with gusto. It offers Platform MapReduce for vertical markets such as financial services (long a Platform vertical), telecommunications, government (fraud detection and cyber security, regulatory compliance, energy), life sciences, and retail.
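
For context, the pattern behind offerings such as Platform MapReduce reduces to three steps: a map phase that emits key-value pairs, a shuffle that groups values by key, and a reduce phase that aggregates each group. The deliberately minimal, single-process Python sketch below shows the pattern itself via the classic word count; it says nothing about Platform’s distributed implementation.

```python
# Single-process sketch of the MapReduce pattern; frameworks distribute
# these same three phases across a cluster.
from collections import defaultdict

def map_phase(records):
    # Emit (key, value) pairs: one ("word", 1) per token.
    for record in records:
        for word in record.split():
            yield word.lower(), 1

def shuffle(pairs):
    # Group intermediate values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Aggregate each key's values; for word count, just sum them.
    return {key: sum(values) for key, values in groups.items()}

lines = ["fraud detection at scale", "detection of fraud patterns"]
print(reduce_phase(shuffle(map_phase(lines))))
# {'fraud': 2, 'detection': 2, 'at': 1, 'scale': 1, 'of': 1, 'patterns': 1}
```

The value of a commercial MapReduce runtime lies in everything this sketch omits: partitioning the map and reduce work across many machines, moving intermediate data between them, and recovering from failures mid-job.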

Platform recently announced that its ISF, not to be confused with LSF, was recognized as a finalist in the “Private Cloud Computing” category for the 2011 Best of VMworld awards. And, of course, to bring this post full circle, Platform was one of 134 new members to join the aforementioned Open Virtualization Alliance (OVA).