An Atlanta-based startup company, an outgrowth of research conducted at the Georgia Institute of Technology, believes it can passively identify botnet communications and then take proactive measures to thwart bots from working in concert to wreak Internet havoc.
The company’s name is Damballa, and it received $2.5 million in Series A funding in June (or perhaps in April) from venture capital firms Sigma Partners, Noro-Moseley Partners, and Imlay Investments, as well as others. Its core technology is based on research led by Merrick Furst, an associate dean at Georgia Tech’s College of Computing and a noted researcher of bot behavior.
Furst, the president of the new company, recruited some of his research colleagues from Georgia Tech to Damballa, which named veteran Internet executive Steve Linowes as its CEO earlier this year.
Citing its stealth status, the principals behind Damballa have been reluctant to speak publicly about what the company is doing, but we do know that whatever they’re doing is being done exclusively for US federal government agencies, at least for now.
Actually, we have a pretty good idea what Damballa is attempting to do.
In an interview with CNN in late January, Furst explained botnets and how they work. He also listed the various types of damage that botnets could unleash, including denial-of-service (DoS) attacks and associated extortion scams, distributed spam onslaughts, key logging, click fraud (which targets advertisers who use advertising platforms such as Google’s), and trust fraud (which targets buyers on auction sites such as eBay).
Furst revealed that bots could account for seven percent of Internet-connected computers, which translates into 75 million to 100 million personal computers conscripted for malicious purposes by so-called botmasters. In Georgia Tech’s laboratory, Furst and his team had collected the IP addresses of 12 million machines that belonged to botnets.
Click fraud is a hot topic at the moment, so here is an excerpt of Furst discussing how botmasters use their zombie minions to do their nefarious bidding:
So let me tell you how a botmaster makes money with click fraud. … They’ll build a Web site that looks like a normal Web site. They’ll put up banner ads, or other types of ads on their Web site, and these are ads served up by Google. Google contracts an advertiser to put up ads on sites — [unwittingly] contracts the botmaster online to put up ads on that botmaster’s site. … So [the botmaster] commands the machines in his bot army to click on the ads on this site. Every time one of his machines click, the message goes back to Google, Google charges the advertiser, the advertiser pays Google, Google keeps 20 percent and [unwittingly] gives 80 percent to the botmaster. … Let’s say even if [the botmaster] controls a small army of 5,000 machines, which is very small in this game — he can make $15,000 a month in click fraud.
One might think it would be worth Google’s time and effort to invest in or otherwise support a new venture looking to combat that sort of malevolence.
There’s no question that Furst cited real problems in the CNN interview, but does his team at Damballa have a silver bullet that can slay the criminal threats posed by botmasters and blunt the effectiveness of their zombie armies?
Since Damballa isn’t revealing anything about what it will develop or when it will reach market, it’s impossible for us to pass definitive judgment. Still, some clues strongly delineate the approximate technology foundations on which the company is likely to build its first commercial products.
In a research paper published on the Advanced Computing Systems Association’s (USENIX’s) website, researchers from Georgia Tech describe how monitoring lookups against DNS-based blacklists can help identify bots and ultimately frustrate their interactions with members of their own and other botnets in real time. One of the core observations from their research, on which Damballa might be building a commercial product, is that the DNS lookups performed by bots — which check whether they, or fellow bots on their own or another botnet, have been blacklisted — evince different patterns than lookups initiated by legitimate computer systems, such as email servers.
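To make the observation concrete, here is a deliberately crude sketch of that kind of heuristic. This is not Damballa’s system or the paper’s exact algorithm; it is an illustrative toy that captures the intuition: a legitimate mail server queries a blacklist about the many third-party IPs that connect to it, whereas a bot tends to query about itself or about fellow bots — hosts that are themselves queriers or query targets. All IP addresses below are made up for illustration.

```python
# Toy heuristic inspired by the Georgia Tech observation: flag DNSBL
# queriers whose lookups look like bot self/peer blacklist checks
# rather than a mail server vetting unrelated inbound senders.
# Illustrative only -- not Damballa's product or the paper's algorithm.

from collections import defaultdict

def suspicious_queriers(lookups):
    """lookups: list of (querier_ip, queried_ip) DNSBL lookup records.
    Returns the set of queriers exhibiting bot-like lookup patterns."""
    queried_targets = {queried for _, queried in lookups}
    by_querier = defaultdict(set)
    for querier, queried in lookups:
        by_querier[querier].add(queried)

    flagged = set()
    for querier, targets in by_querier.items():
        # Bot-like signals: the querier is itself being looked up,
        # or most of the hosts it asks about are also active queriers.
        self_checked = querier in queried_targets
        peer_checks = sum(1 for t in targets if t in by_querier)
        if self_checked or peer_checks / len(targets) > 0.5:
            flagged.add(querier)
    return flagged

# Toy trace: one mail server checking unrelated senders, and two bots
# checking each other's blacklist status.
trace = [
    ("198.51.100.9", "203.0.113.1"),   # mail server vetting senders
    ("198.51.100.9", "203.0.113.2"),
    ("198.51.100.9", "203.0.113.3"),
    ("192.0.2.10", "192.0.2.20"),      # bot checking a fellow bot
    ("192.0.2.20", "192.0.2.10"),      # and vice versa
]
print(suspicious_queriers(trace))  # flags the two bots, not the mail server
```

Even this toy version hints at the hard parts: a mail server that happens to query another mail server, or a bot that only queries inactive peers, would be misclassified, which previews the false-positive question discussed next.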
At first glance, this approach has some potential. Then again, what’s to stop malicious botmasters from varying the behavior of the bots under their command to evade known pattern-recognition detection? Furthermore, there remain several unanswered questions about the approach described in the research paper, including how such a system would avoid false positives — such as wrongly identifying an email system or other legitimate computer as a bot.
From my digging, I have been able to discover that Damballa’s early government-agency customers probably are the Federal Bureau of Investigation (FBI) and the US Navy.
I think Damballa has the potential to make a significant contribution to the battle against pernicious botnets, but, like so many security approaches and technologies before it, Damballa’s probable offerings will be in a constant race to stay one step ahead of endlessly creative online malefactors.