Saturday, September 29, 2007

TD Ameritrade Lawsuit

I wonder if the era of infosec-driven lawsuits is finally here to stay? I've been figuring on a big rise in lawsuits one of these years. A number of things can motivate companies to do a better job of information security risk mitigation: laws, legal enforcement, standards, best practices, statistics, news stories.

But short of Sarbanes-Oxley and perhaps lawsuits, few things provide real drive, other than a desire by company leadership to do the right thing and protect customer data. Alternatively, lawsuits may simply push negligent companies to focus on covering their butts, legally speaking, rather than demonstrating due care and implementing basic information security controls. Can we say that malpractice lawsuits have improved healthcare?

Google Mail Vulnerability

This article in The Register describes a vulnerability in Google Mail / Google Groups wherein specially crafted email could compromise Google Mail and do bad things like install a filter in the victim's mail account to siphon off email to the bad guy. (Other vulnerabilities were mentioned in the Google Search Appliance and Google Picasa.)

As the technology industry continues to focus on new features, new applications, and constant change, it seems that we poor humans are left behind, unable to adapt so quickly to these new threats. We can barely teach people not to open suspicious attachments, so how does the average user know when emails could potentially have exploit code in them? It's my experience that the more you know about how things work under the covers, the safer you can be (you have to be paranoid too), but expecting the average user to be a paranoid software coding expert is impractical to say the least.

We're not at that point in the industry where we've fool-proofed products. Maybe it's because of the continued flux in features and capabilities. When automobiles first made their appearance there were all sorts of designs and models. Eventually the market settled down, and only then, some 50 to 100 years later, did safety become a real concern. I don't know if I care to wait that long for the computer industry.

Friday, September 14, 2007

Kitten CAPTCHAs

As you may know, over the last several years, spammers have sought to post spam in blog comments, on web groups, web bulletin board forums, etc. Hackers and their worms do the same but post malware instead.

I'm sure you've seen CAPTCHAs ("Completely Automated Public Turing test to tell Computers and Humans Apart") on various websites. Typically it's that image of distorted letters that supposedly only humans can decode.

CAPTCHAs were a means of reducing the risk of spamming. I'm sure the original idea was to have a silver bullet solution, but whenever you pit humans against humans you end up with an arms race. So now we have spammers with software able to decode the more rudimentary CAPTCHAs, and more sophisticated tests arise with letters so distorted that some people can't recognize and decipher them.

Here's another alternative--kittens! The key is to make the test solely about doing something that computers are bad at doing but humans are good at doing. Computers are good at memorizing and doing things quickly so you have to prevent them from trying every combination and memorizing all the answers. Natural language, image recognition, letter recognition, facial recognition, all these things people do much better than computers even when spammers have thousands of compromised systems at their disposal. One has to be careful about implementation details (don't always use the same image name for the cute little calico or someone can just memorize which filenames correspond to kittens).
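
To make that filename point concrete, here's a rough sketch in Python of how a kitten test might randomize things server-side. Everything in it (the image filenames, the helper names, the six-image layout) is my own invention for illustration; the key idea is that each image is served under a one-time random token, so a bot can't just memorize which files are kittens.

```python
# Minimal sketch of a kitten-CAPTCHA challenge, assuming a hypothetical
# image store split into "kitten" and "non-kitten" pictures. The point is
# the token indirection: each image is shown under a one-time random
# token, so a bot cannot simply memorize which filenames are kittens.
import random
import secrets

KITTENS = ["calico.jpg", "tabby.jpg", "siamese.jpg"]        # hypothetical files
NON_KITTENS = ["puppy.jpg", "toaster.jpg", "ficus.jpg"]     # hypothetical files


def make_challenge(num_images=6, num_kittens=3):
    """Return (tokens shown to the user, set of correct tokens, token->file map)."""
    picks = random.sample(KITTENS, num_kittens) + \
            random.sample(NON_KITTENS, num_images - num_kittens)
    token_to_file = {secrets.token_urlsafe(8): f for f in picks}
    answer = {t for t, f in token_to_file.items() if f in KITTENS}
    tokens = list(token_to_file)
    random.shuffle(tokens)
    return tokens, answer, token_to_file


def verify(selected_tokens, answer):
    """The user passes only if they picked exactly the kitten images."""
    return set(selected_tokens) == answer


if __name__ == "__main__":
    tokens, answer, mapping = make_challenge()
    print("Images shown (by one-time token):", tokens)
    # A human who correctly spots the kittens passes:
    print("Human passes:", verify(answer, answer))
    # A bot guessing blindly usually fails:
    guess = set(random.sample(tokens, 3))
    print("Random guess passes:", verify(guess, answer))
```

The real work, of course, is having a large enough pool of images and rotating it, since a small fixed set can still be cataloged by hand.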

I wonder if we couldn't leverage quirks of the human brain instead. Like optical illusions. I'm not sure how you'd do this in a way that would lend itself to quick Q&A and prevent computers from just memorizing answers by rote. Make commenters play something like Brain Age. Or what if you found a way to ask questions that humans are very likely to get wrong but computers won't (like the second question of this series)? Even better if humans answer in unpredictably wrong ways or if a given question can be made unpredictable.

It all goes back to Alan Turing, who imagined a test for artificial intelligence. Put a person, the judge, in a room and have them communicate in natural language with a computer and a person. If the judge can't tell the two apart with certainty, the computer passes. Of course that test would take too long just to post a blog comment (and you'd have to repeat it every time you took a similarly protected action). That's why the shortcut methods above were developed. The biggest shortcut is that the judge is now a computer, a major flaw in the test (or does the judge simply have to act more human than the computer being tested?).

One major glitch in this whole scheme is the underlying assumption that spamming is done solely by computers. I read once that spammers have been known to actually hire people to thwart CAPTCHAs. So for that threat, we're out of luck. But the automated threat of bots and the like is still viable and widely used.

In security our goal isn't to solve problems once and for all (impossible, since it's a human-vs-human kind of problem); it's to raise the bar for the attacker just enough that the remaining risk is something we can live with.

Sometimes it's also a question of being a little more secure than your peers. Do you ever park your nice car next to much nicer cars in a parking lot figuring if any get stolen it won't be yours?

Security is kind of like escaping a bear. You don't have to run faster than the bear, you just have to run faster than your friend.

Saturday, September 01, 2007

Worms: a look back

Time flies. This is from an old infosec blog post of mine in 2005 about a paper from late 2004. Just two years ago we were still worried about containing mass worm outbreaks. Those days are essentially over with the rise of true criminal activity and targeted attacks. Nevertheless, this concept could perhaps be applied to controlling botnets, the key tool behind phishing, spamming, and other criminally motivated attacks. Botnets used to be centrally controlled: kill the head and the botnet dies too. New botnets use a distributed architecture. They're more like a Hydra.

The Future of Malware Defense?
"Security UPDATE -- March 16, 2005"
Information Security News

"The research paper 'Can We Contain Internet Worms?,' was published in August 2004. In it, Microsoft researchers discuss how worms might become more readily containable as computers collaborate in a more automated manner. The concept, which the researchers have dubbed 'Vigilante,' proposes 'a new host centric approach for automatic worm containment.' ... Hosts detect worms by analysing attempts to infect applications and broadcast self-certifying alerts (SCAs) when they detect a worm. SCAs are automatically generated machine-verifiable proofs of vulnerability"