It seems that we’ve been talking about application-aware networking for a long time. I admit that I was talking about it more than a decade ago, back when I initiated a technology partnership between my employer at the time, a workload-management vendor, and a network colossus (a previous employer of mine) that shall also remain nameless.
My brainwave — if I might beg your kind indulgence — was that the availability of applications and services over the Internet could be greatly enhanced if one could somehow feed upper-layer intelligence about content and applications down to the switches and routers that were making load-balancing decisions at the network and transport layers.
This was back in 1998, so it was considered heady stuff. The partnership was necessary because my employer at the time knew a lot about the characteristics and real-time behavior of applications but knew little about the network; whereas the network-infrastructure vendor knew all about the network but not nearly enough about the applications it supported.
Because I had worked for both companies, I understood how they could combine forces to create an application-intelligent infrastructure that would provide unprecedented availability over and across the Internet. In theory, everything should have worked without a hitch. In theory.
The real challenge was getting the two companies to understand each other. It wasn’t a clash of corporate cultures so much as a failure to speak the same language. One focused on how and where applications were processed and the other focused on the pipes that stitched everything together. They had different assumptions, used different vocabularies, and they viewed the world — the data center, anyway — from different perspectives. Each had difficulty understanding what the other was saying. For a long time, I felt like an interpreter in the diplomatic service.
You’d think by now these problems would be behind us, right? You’d think that in an era when networking titans are buying videoconferencing vendors, computer vendors are buying networking vendors, and everybody seemingly has a strategy for the converged data center, the networking world and the computing world would speak the same language, or at least have an implicit understanding of what the other side is saying. Apparently, though, the problem persists. The chasm between the two worlds might not be as vast, but a schism still precludes a meaningful mind meld.
I reached that sad conclusion after reading a blog post by Lori MacVittie at F5’s DevCentral. In her post, she recounts finding an article that promised to explore the profound smartness of application-aware networking. Instead of reading about a network capable of understanding and dynamically supporting an application’s data and behavior, she read a piece that talked about “container concerns” such as CPU, RAM, and memory, with a dollop about 10-GbE networking tossed into the mix for good measure. As she writes:
Application-awareness is more than just CPU and RAM and network bandwidth. That’s just one small piece of the larger contextual pie. It’s about the user’s environment and the network and the container and the application and the individual request being made. At the moment it’s made. Not based on historical trends.
So, the technology isn’t the problem. The defining concept of application-aware networking has been with us for some time, and the technologies clearly exist to facilitate its widespread deployment. What’s preventing it from coming together is the balkanized thinking and — why not say it? — the entrenched politics of the traditional data center and its vendor ecosystem.
We’ll get there, though. The wheels are in motion, and the destination is in sight. The trip just took longer than some of us thought it would.
Lesson learned: Never underestimate institutional resistance to change that is seen to threaten an established order. Keep that lesson in mind as you consider all those rosy near-term projections for cloud computing.