Is Li-Fi the Next Wi-Fi?

The New Scientist published a networking-related article last week that took me back to my early days in the industry.

The piece in question dealt with Visible Light Communication (VLC), a form of light-based networking in which data is encoded and transmitted by varying the rate at which LEDs flicker on and off, all at intervals imperceptible to the human eye.
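
To make the flicker-as-data idea concrete, here is a minimal sketch of on-off keying (OOK), the simplest light-based modulation scheme. This is purely my own illustration rather than any standardized VLC implementation; the function names and the 1MHz symbol rate are assumptions for the example.

```python
# Minimal on-off keying (OOK) sketch: bits map to LED on/off states,
# clocked far faster than the flicker rate the human eye can perceive.
# Purely illustrative; real VLC systems layer coding and framing on top.

SYMBOL_RATE_HZ = 1_000_000  # hypothetical 1MHz flicker rate, imperceptible to the eye

def encode(data: bytes) -> list:
    """Map each bit (MSB first) to an LED state: 1 = on, 0 = off."""
    return [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]

def decode(samples: list) -> bytes:
    """Rebuild bytes from received light samples (already thresholded to 0/1)."""
    out = bytearray()
    for i in range(0, len(samples) - 7, 8):
        byte = 0
        for bit in samples[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

message = b"Li-Fi"
led_states = encode(message)
assert decode(led_states) == message
print(f"{len(led_states)} on/off symbols carry {message!r}")
```

A real system would also have to keep perceived brightness constant regardless of the data pattern, which is why practical VLC schemes typically pair OOK with run-length-limiting techniques such as Manchester-style encoding.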

Also called Li-Fi — yes, indeed, the marketers are involved already — VLC is being positioned for various applications, including those in hospitals, on aircraft, on trading floors, in car-to-car and traffic-control scenarios, on trade-show floors, in military settings, and perhaps even in movie theaters, where VLC-based projection might improve the visual quality of 3D films. (That last wacky one was just something that spun off the top of my shiny head.)

From FSO to VLC

Where I don’t see VLC playing a big role, certainly not as a replacement for Wi-Fi or its future RF-based successors, is in home networking. VLC’s requirement for line of sight will make it a non-starter for Wi-Fi scenarios where wireless networking must traverse floors, walls, and ceilings. There are other room-based applications for VLC in the home, though, and those might work if device (PC, tablet, mobile phone), display, and lighting vendors get sufficiently behind the technology.

I feel relatively comfortable pronouncing an opinion on this technology. The idea of light-based networking has been with us for some time, and I worked extensively with infrared and laser data-transmission technologies back in the early-to-mid 90s. Those were known as free-space optical (FSO) communications systems, and they served a range of niche applications, primarily in outdoor point-to-point settings. The vendor for which I worked provided systems for campus deployments at universities, hospitals, museums, military bases, and other environments where relatively high-speed connectivity was required but couldn’t be delivered by trenched fiber.

The technology mostly worked . . . except when it didn’t. Connectivity disruptions typically were caused by what I would term “transient environmental factors,” such as fog, heavy rain or snow, and airborne dust and sand. (We had some strange experiences with one or two desert deployments.) From what I can gather, the same limitations generally apply to VLC systems.

Will that be White, Red, or Resonant Cavity?

Then again, the performance of VLC systems goes well beyond what we were able to achieve with FSO in the 90s. Back then, laser-based free-space optics could deliver a maximum bandwidth of OC-3 speeds (155Mbps), whereas the current high-end performance of VLC systems reaches transmission rates of 500Mbps. An article published earlier this year at theEngineer.com provides an overview of VLC performance capabilities:

“The most basic form of white LEDs are made up of a bluish to ultraviolet LED surrounded by a yellow phosphor, which emits white light when stimulated. On average, these LEDs can achieve data rates of up to 40Mb/sec. Newer forms of LEDs, known as RGBs (red, green and blue), have three separate LEDs that, when lit at the same time, emit a light that is perceived to be white. As these involve no delay in stimulating a phosphor, data rates in RGBs can reach up to 100Mb/sec.

But it doesn’t stop there. Resonant-cavity LEDs (RCLEDs), which are similar to RGB LEDs and are fitted with reflectors for spectral clarity, can now work at even higher frequencies. Last year, Siemens and Berlin’s Heinrich Hertz Institute achieved a data-transfer rate of 500Mb/sec with a white LED, beating their earlier record of 200Mb/sec. As LED technology improves with each year, VLC is coming closer to reality and engineers are now turning their attention to its potential applications.”

I’ve addressed potential applications earlier in this post, but a sage observation is offered in theEngineer.com piece by Oxford University’s Dr. Dominic O’Brien, who sees applications falling into two broad buckets: those that “augment existing infrastructure,” and those in which visible-light networking offers a performance or security advantage over conventional alternatives.

Will There Be Light?

Despite the merit and potential of VLC technology, its market is likely to be limited, analogous to the demand that developed for FSO offerings. One factor that has changed, and that could work in VLC’s favor, is RF spectrum scarcity. VLC could potentially help to conserve RF spectrum by providing much-needed bandwidth; but such a scenario would require more alignment and cooperation between government and industry than we’ve seen heretofore. Curb your enthusiasm accordingly.

The lighting and display industries have a vested interest in seeing VLC prosper. Examining the membership roster of the Visible Light Communications Consortium (VLCC), one finds it includes many of Japan’s big names in consumer electronics. Furthermore, in its continuous pursuit of new wireless technologies, Intel has taken at least a passing interest in VLC/Li-Fi.

If the vendor community positions it properly, standards cohere, and the market demands it, perhaps there will be at least some light.

OpenFlow Crystal Ball Still Foggy

OpenFlow originated in academia, from research work conducted at Stanford University and the University of California, Berkeley. Academics remain intensively involved in the development of OpenFlow, but the protocol, a manifestation of software-defined networking (SDN), appears destined for potentially widespread commercial deployment, first at major data centers and cloud service providers, and perhaps later at enterprises of various shapes and sizes.

Encompassing a set of APIs, OpenFlow enables programmability and control of flow tables in routers and switches. Today’s switches combine network-control functions (the control plane) with packet-processing and forwarding functions (the data plane). OpenFlow aims to separate the two, abstracting flow manipulation and control from the underlying switch hardware, thus making it possible to define flows and determine what paths they take through a network.
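
To illustrate the separation in miniature, here is a toy sketch of a match/action flow table. The class and field names are my own inventions for illustration, not the actual OpenFlow wire protocol; the point is simply that the control plane installs rules while the data plane does nothing but look them up.

```python
# Toy match/action flow table, illustrating the control/data-plane split.
# Names are illustrative only; this is not the OpenFlow protocol itself.

from dataclasses import dataclass
from typing import Optional

@dataclass
class FlowEntry:
    match: dict               # header fields to match, e.g. {"eth_dst": "aa:bb:cc:dd:ee:ff"}
    out_port: Optional[int]   # forwarding action; None means drop
    priority: int = 0

class FlowTable:
    def __init__(self):
        self.entries = []

    def add(self, entry: FlowEntry) -> None:
        """Control plane: a controller installs rules through calls like this."""
        self.entries.append(entry)
        self.entries.sort(key=lambda e: e.priority, reverse=True)

    def forward(self, packet: dict) -> Optional[int]:
        """Data plane: the switch just returns the highest-priority match."""
        for entry in self.entries:
            if all(packet.get(k) == v for k, v in entry.match.items()):
                return entry.out_port
        return None  # table miss; a real switch would punt the packet to the controller

table = FlowTable()
table.add(FlowEntry(match={"eth_dst": "aa:bb:cc:dd:ee:ff"}, out_port=3, priority=10))
print(table.forward({"in_port": 1, "eth_dst": "aa:bb:cc:dd:ee:ff"}))  # -> 3
```

Because the table is just data, a remote controller can rewrite forwarding behavior across an entire network without touching switch firmware, which is the crux of SDN’s appeal.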

From Academic Origins to Commercial Data Centers

Getting back to the academics, they wanted to use OpenFlow as a means of making networks more amenable to experimentation and innovation. The law of unintended consequences intervened, however, and OpenFlow is spreading in many different directions, spawning a growing number of applications.

To see where (or, at least, by whom) OpenFlow will be applied first commercially, consider the composition of the board of directors of the Open Networking Foundation (ONF), which bills itself as “a nonprofit organization dedicated to promoting a new approach to networking called Software-Defined Networking (SDN). SDN allows owners and operators of networks to control and manage their networks to best serve their users’ needs. ONF’s first priority is to develop and use the OpenFlow protocol. Through simplified hardware and network management, OpenFlow seeks to increase network functionality while lowering the cost associated with operating networks.”

The six board members at ONF are Deutsche Telekom, Facebook, Google, Microsoft, Verizon, and Yahoo. As I’ve noted previously, what they have in common are large, heavily virtualized data centers. They’re all presumably looking for ways to run them more efficiently, the network having become one of the biggest inhibitors to data-center scaling. While servers and storage have been virtualized and have become more dynamic and programmable, networks lag behind, not keeping pace with new requirements but still accounting for a large share of capital and operational expenditures.

Problem Shrieking for a Solution

That, my friends, is a problem shrieking for a solution. While academia hatched OpenFlow, there’s nothing academic about the data-center pain that the six board members of the ONF are feeling. They need their network infrastructure to become more dynamic, flexible, and functional, and they also want to lower their network operating costs.

The economic and operational impetus for change is considerable. The networking industry, at least the portion of it that wants to serve the demographic represented by the board members of ONF, must sit up and take notice. And judging from the growing vendor membership of the ONF, the networking industry is indeed paying attention.

One of many questions I have relates to how badly Cisco and, to a lesser extent, Juniper Networks — proponents of proprietary approaches to some of the problems SDN and OpenFlow are intended to address — might be affected by an OpenFlow wave.

Two Schools of Thought

There are at least two schools of thought on the topic. One school, inhabited by more than a few market analysts, says that OpenFlow will hasten and intensify the commoditization of networking gear, as a growing percentage of switches will be made to serve as simple packet-forwarding boxes. Another learned quarter contends that, just as the ONF charter says, the focus and the impact will be primarily on network-related operating costs, and not so much on capital costs. In other words, OpenFlow — even if it is wildly popular — leaves plenty of room for continued switch differentiation, and thus for margin erosion to be at least somewhat mitigated.

The long-term implications of OpenFlow are difficult to predict. Prophecy is made more daunting by OpenFlow hype and disinformation, disseminated by the protocol’s proponents and detractors, respectively.  It does have the feeling of something big, though, and I’ve been spending increasing amounts of time trying to get my limited gray matter around it.

Look for further zigzagging peregrinations on my journey toward OpenFlow understanding.