Saturday, September 29, 2007
TD Ameritrade Lawsuit
I wonder if the era of infosec-driven lawsuits is finally here to stay. I've been expecting a big rise in these lawsuits for years now. A number of things can motivate companies to do a better job of information security risk mitigation: laws, legal enforcement, standards, best practices, statistics, news stories.
But short of Sarbanes-Oxley and perhaps lawsuits, few things provide real drive, other than a desire by company leadership to do the right thing and protect customer data. Then again, lawsuits may simply push negligent companies to focus on covering their butts, legally speaking, rather than on demonstrating due care and implementing basic information security controls. Can we say that malpractice lawsuits have improved healthcare?
Google Mail Vulnerability
This article in The Register describes a vulnerability in Google Mail / Google Groups wherein specially crafted email could compromise Google Mail and do bad things like install a filter in the victim's mail account to siphon off email to the bad guy. (Other vulnerabilities were mentioned in the Google Search Appliance and Google Picasa.)
As the technology industry continues to focus on new features, new applications, and constant change, it seems that we poor humans are left behind, unable to adapt as quickly to these new threats. We can barely teach people not to open suspicious attachments, so how is the average user supposed to know when an email might carry exploit code? In my experience, the more you know about how things work under the covers, the safer you can be (you have to be paranoid too), but expecting the average user to be a paranoid software coding expert is impractical, to say the least.
We're not at the point in the industry where we've fool-proofed products. Maybe it's because of the continued flux in features and capabilities. When automobiles first made their appearance, there were all sorts of designs and models. Eventually the market settled down, and then we could worry about safety some 50-100 years later. I don't know if I care to wait that long for the computer industry.
Friday, September 14, 2007
Kitten CAPTCHAs
As you may know, over the last several years spammers have sought to post spam in blog comments, web groups, bulletin board forums, and so on. Hackers and their worms do the same but post malware instead.
I'm sure you've seen CAPTCHAs ("Completely Automated Public Turing test to tell Computers and Humans Apart") on various websites. Typically it's an image of distorted letters that supposedly only humans can decode.
CAPTCHAs were a means of reducing the risk of spamming. I'm sure the original idea was to have a silver-bullet solution, but whenever you pit humans against humans you end up with an arms race. So now we have spammers with software able to decode the more rudimentary CAPTCHAs, and more sophisticated tests arise with letters too distorted for some people to recognize and decipher.
Here's another alternative--kittens! The key is to make the test solely about doing something that computers are bad at doing but humans are good at doing. Computers are good at memorizing and doing things quickly so you have to prevent them from trying every combination and memorizing all the answers. Natural language, image recognition, letter recognition, facial recognition, all these things people do much better than computers even when spammers have thousands of compromised systems at their disposal. One has to be careful about implementation details (don't always use the same image name for the cute little calico or someone can just memorize which filenames correspond to kittens).
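To make that concrete, here's a minimal sketch of the idea in Python (the image names and numbers are invented, and a real site would serve the pictures from disk or a CDN): the browser only ever sees random one-time tokens, so there are no filenames to memorize, and each challenge is consumed on the first answer.

```python
import random
import secrets

# Hypothetical image pools; a real deployment would serve these from disk or a CDN.
KITTENS = ["calico.jpg", "tabby.jpg", "tuxedo.jpg", "siamese.jpg"]
NOT_KITTENS = ["puppy.jpg", "ferret.jpg", "parrot.jpg",
               "iguana.jpg", "hamster.jpg", "goldfish.jpg"]

# challenge_id -> {token: (filename, is_kitten)}; this mapping never leaves the server
_challenges = {}

def new_challenge(num_kittens=3, num_others=6):
    """Build a challenge where every image is handed out under a fresh random token,
    so a bot can't simply memorize which filenames correspond to kittens."""
    images = ([(name, True) for name in random.sample(KITTENS, num_kittens)] +
              [(name, False) for name in random.sample(NOT_KITTENS, num_others)])
    random.shuffle(images)
    tokens = {}
    for filename, is_kitten in images:
        token = secrets.token_urlsafe(16)        # unguessable, single-use handle
        tokens[token] = (filename, is_kitten)    # filename stays server-side
    challenge_id = secrets.token_urlsafe(8)
    _challenges[challenge_id] = tokens
    # Only the challenge id and the image tokens ever reach the client.
    return challenge_id, list(tokens)

def verify(challenge_id, selected_tokens):
    """Pass only if the user picked exactly the kitten images. The challenge is
    consumed either way, so answers can't be replayed or brute-forced."""
    tokens = _challenges.pop(challenge_id, None)
    if tokens is None:
        return False
    kittens = {t for t, (_, is_kitten) in tokens.items() if is_kitten}
    return set(selected_tokens) == kittens
```

The other half of the defense is rate limiting: with nine images and three kittens there are only 84 possible answers, so you still have to keep a bot from grinding through them.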
I wonder if we couldn't leverage quirks of the human brain instead, like optical illusions. I'm not sure how you'd do this in a way that would lend itself to quick Q&A and prevent computers from just memorizing answers by rote. Make commenters play something like Brain Age. Or what if you found a way to ask questions that humans are very likely to get wrong but computers won't (like the second question of this series)? Even better if humans answer in unpredictably wrong ways, or if a given question can be made unpredictable.
It all goes back to Alan Turing, who imagined a test for artificial intelligence. Put a person, the judge, in a room and have them communicate in natural language with a computer and a person. If the judge can't tell the two apart with certainty, the computer passes. Of course that test would take too long just to post a blog comment (and you'd have to repeat it every time you take a similarly protected action). That's why the shortcut methods above were developed. The biggest shortcut is that the judge is now a computer, which is a major flaw in the test (or does the judge simply have to act more human than the computer being tested?).
One major glitch in this whole scheme is the underlying assumption that spamming is done solely by computers. I read once that spammers have been known to actually hire people to thwart CAPTCHAs. So for that threat, we're out of luck. But the automated threat of bots and the like is still viable and widely used.
In security our goal isn't to solve problems once and for all (impossible, since it's a human vs. human kind of problem); it's to raise the bar for the difficulty of the attack just enough that we can live with the remaining risk.
Sometimes it's also a question of being a little more secure than your peers. Do you ever park your nice car next to much nicer cars in a parking lot figuring if any get stolen it won't be yours?
Security is kind of like escaping a bear. You don't have to run faster than the bear, you just have to run faster than your friend.
Saturday, September 01, 2007
Worms: a look back
Time flies. This is from an old infosec blog post of mine in 2005 about a paper from late 2004. Just two years ago we were still worried about containing mass worm outbreaks. Those days are essentially over with the rise of true criminal activity and targeted attacks. Nevertheless this concept could, maybe, be applied to controlling botnets, the key tool behind phishing, spamming, and other criminally motivated attacks. Botnets used to be centrally controlled: kill the head and the botnet dies too. Newer botnets use a distributed architecture; they're more like a Hydra.
The Future of Malware Defense?
"Security UPDATE -- March 16, 2005"
Information Security News
"The research paper 'Can We Contain Internet Worms?,' was published in August 2004. In it, Microsoft researchers discuss how worms might become more readily containable as computers collaborate in a more automated manner. The concept, which the researchers have dubbed 'Vigilante,' proposes 'a new host centric approach for automatic worm containment.' ... Hosts detect worms by analysing attempts to infect applications and broadcast self-certifying alerts (SCAs) when they detect a worm. SCAs are automatically generated machine-verifiable proofs of vulnerability"
Tuesday, August 28, 2007
More Sony Stealthware
It looks like Sony is getting more bad press for foisting stealth software on users. This isn't strictly a rootkit, nor as extreme an example as before.
In case you were residing under a rock a couple of years ago: back in 2005, Sony BMG put what amounted to rootkit-like software on certain CDs. Inserting one into a PC resulted in software being installed without user consent, the purpose of which was to enforce digital rights management and hide itself from all but the most experienced users.
The current software creates a hidden directory, perhaps to prevent tampering with the USB fingerprint device it supports. Sure, a hidden directory could aid attackers (but no more than leaving the bedroom light off at night aids burglars).
One of the big issues in 2005 was that the software introduced a vulnerability to the PCs on which it ran. Any installed software could do the same. Even when we know what software is running on our systems, it's hard enough keeping up with patches (never mind all the 0-day attacks of the last couple of years). When companies cloak their software from us, it becomes that much harder to protect one's computer and to make informed decisions about security. Essentially, companies that follow this approach are taking away our right to make certain security decisions for ourselves.
Even though it's somewhat hyped, I'm glad this made the news. More press on the topic can only help protect user rights and dissuade companies from these ill-advised tactics.
Saturday, July 28, 2007
Insider Attacks, Trust but Verify
For those security ostriches out there who are convinced that internal networks are perfectly safe and that firewalls keep the bad guys out, this ComputerWorld article is yet another example of an insider stealing sensitive data. Worst of all, this was a highly trusted individual (a database administrator). Time to turn to proper risk management.
The impact of this sort of attack can be huge, but I suspect the likelihood of this risk is low, or we wouldn't be hearing about it in the news (to shamelessly quote Bruce Schneier: "I tell people that if it's in the news, don't worry about it. The very definition of 'news' is 'something that hardly ever happens.'"). Think too of the cost/benefit equation for the threat source. So the risk is probably low; there are almost certainly bigger fish to fry in corporate America than distrusting DBAs and system admins.
Low risk doesn't justify much security spending if you look at this risk alone. But considering a number of related risks, there's a business case for employing security controls in a layered fashion to reduce risk in aggregate across them. Controls might include background checks on employees, centralized logging with separation of duties and good monitoring, and blocking peer-to-peer network communications. For really sensitive data, maybe more intrusive controls make sense.
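To put rough numbers on that reasoning (every figure below is invented purely for illustration), treat annualized risk as likelihood times impact and compare the aggregate reduction across related insider risks with the cost of the shared controls:

```python
# Back-of-the-envelope annualized loss expectancy; all numbers are made up.
insider_risks = {
    # risk: (annual likelihood, impact in dollars)
    "DBA exfiltrates customer data":     (0.01, 2_000_000),
    "sysadmin abuses privileged access": (0.02,   500_000),
    "employee leaks files over P2P":     (0.05,   250_000),
}

def annual_loss_expectancy(risks):
    return sum(likelihood * impact for likelihood, impact in risks.values())

ale_before = annual_loss_expectancy(insider_risks)

# Suppose the layered controls (background checks, centralized logging and
# monitoring, P2P blocking) cut the likelihood of each related risk in half.
mitigated = {name: (lik * 0.5, impact) for name, (lik, impact) in insider_risks.items()}
ale_after = annual_loss_expectancy(mitigated)

control_cost = 20_000  # assumed annual cost of the layered controls
print(f"ALE before: ${ale_before:,.0f}  after: ${ale_after:,.0f}  "
      f"reduction: ${ale_before - ale_after:,.0f}  vs. control cost: ${control_cost:,}")
```

With these made-up figures no single line item justifies the spend on its own, but the aggregate reduction does, which is exactly the point of grouping related risks.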
But information security professionals should consider the whole equation. An oppressive culture of distrust toward highly paid techies is intuitively going to be bad for productivity and personnel retention. Is that worth it (or even necessary) given the likelihood and the risk?
Sunday, June 24, 2007
Changing the Firewall Paradigm
This article in eWeek got me thinking a little about the venerable firewall, staple of modern internet security. The technology was originally developed in the early days of the internet, a time when collaboration and communication between organizations were vastly different.
Back then, you were as likely to use BITNET as ftp or talk for file transfer and communications. The internet was more open in its architecture, and systems within an organization were all accessible from the internet. The move to firewall technology sought to hide organizations' computing assets behind a gateway.
But talk of eroding network boundaries has been going on for years now. We have telecommuters, road warriors, B2B connectivity, and Service-Oriented Architecture essentially creating dozens of backdoors into an organization's networks, not to mention internet-facing applications with backend systems on internal networks. Enterprises benefit from more connectivity and collaboration. How do we get that without sacrificing security?
Firewalls aren't going anywhere. It still makes sense to filter incoming and outgoing internet traffic. But the shift toward endpoint security is bound to continue, and I hope the result is that firewalls won't always be perceived as the main mechanism for reducing risk. It's not that simple anymore. It hasn't been for years.
I’ve often wondered if firewalls, in some sense, create a false sense of security. Immature organizations make the mistake of ignoring host security and internal security threats because the firewall supposedly fixes everything.
A little thought experiment: if firewalls weren't an available technology, wouldn't organizations have to enact better endpoint security? That could include better host security for desktops and servers, better application endpoint security such as agents that intercept and enforce security for web applications or web services, or improved development practices to reduce vulnerabilities in deployed applications. It seems to me companies should already be doing these things.
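As one concrete (and hedged) example of what an application endpoint agent could look like, here's a small Python WSGI middleware sketch with an invented policy; the point is that the request gets checked at the endpoint itself, no matter what any upstream firewall allowed through.

```python
import re
from wsgiref.simple_server import make_server

# Invented policy: the only methods and paths this endpoint will accept.
ALLOWED = [("GET", re.compile(r"^/reports/\d+$")),
           ("POST", re.compile(r"^/reports$"))]

def enforcement_agent(app):
    """WSGI middleware acting as a host-level gatekeeper in front of the application."""
    def guarded(environ, start_response):
        method = environ.get("REQUEST_METHOD", "")
        path = environ.get("PATH_INFO", "")
        if not any(m == method and pattern.match(path) for m, pattern in ALLOWED):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"request blocked by endpoint policy\n"]
        return app(environ, start_response)
    return guarded

def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello from the protected application\n"]

if __name__ == "__main__":
    # Every request is vetted at the host, not at a network perimeter device.
    make_server("127.0.0.1", 8080, enforcement_agent(demo_app)).serve_forever()
```

Commercial web application firewalls and service gateways do the same job with far richer policies; the sketch is only meant to show where the control sits.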
It's probably still safe to say a lot of attacks come from the big-I cloud, and so, for now, the internet firewall is still a key feature of enterprise security architectures. But even if fewer attacks come from other sources, such as telecommuter workstations, business partner networks, or intranet users, the damage potential, and thus the risk, may be far greater than that of generic internet-based attacks. Could it be that in some cases it makes sense to focus as much on those sources as on so-called perimeter (aka firewall) security, or even more? I think so.
If you’re already doing good, careful, intelligent risk analysis as part of a holistic, enterprise-level process of information security risk management, you already know this and you apply your controls where they’re most needed. Otherwise you’re probably spending too much security money in the wrong place and not getting much risk reduction out of it.