Friday, September 03, 2010

The Password is Dead, Long Live the Password

This article from Information Week describes how easily the Graphics Processing Unit (GPU) on a readily available graphics card renders 7-character passwords useless: they can simply be brute-force guessed far too quickly.
So is the Password dead?  Beyond dead, a pile of bones picked clean?

To answer that, you have to think of the threat in the larger context. That's true of any security arm-waving you encounter.

Consider other attack vectors sharing the same outcome (credential theft) or the same ultimate goal.

Then, think about the various security controls that address the steps in each of these attack vectors.

Only with this sort of methodical, thorough analysis can you get a good handle on the issue. For example...

Consider that the attack described in the article, a brute-force password-guessing attack in which the ciphertext / hash of the password is known ahead of time, is just one vector for password compromise.

Provided that the attacker cannot get the ciphertext in the first place (the /etc/shadow contents or the NTLM hash or what have you), this kind of offline password guessing is itself rendered useless.
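To see why possession of the hash is the whole ballgame, here is a minimal sketch of offline guessing: every guess is just a local hash-and-compare, with no logon attempts for the victim's systems to throttle or log. The unsalted SHA-1 and the tiny 3-character target are purely illustrative, not how any real system should store passwords.

```python
import hashlib
import itertools
import string

# Pretend this hash was exfiltrated from a password database.
stolen_hash = hashlib.sha1(b"ab1").hexdigest()

def crack(target, alphabet=string.ascii_lowercase + string.digits, max_len=3):
    """Brute-force every candidate up to max_len and compare hashes locally."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            guess = "".join(combo)
            if hashlib.sha1(guess.encode()).hexdigest() == target:
                return guess
    return None

print(crack(stolen_hash))  # recovers "ab1"
```

The loop never touches the victim's network, which is exactly why none of the online controls discussed below apply once the hash is out the door.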

So don't let the bad guys get your encrypted passwords.

Your controls (flaw remediation/patching, access control enforcement, and so forth) should reduce the likelihood of this significantly.

Another attack vector is that of brute force guessing against a logon interface. This attack can be hampered by logon failure delays that slow down the process and by account lockouts where several failed logons within a particular time window lock the account.

A way around these controls is to guess one password each against a long list of accounts. This stretches out the time between guesses for any one account: even if you eventually exceed the failed-logon threshold on that account, the failures fall outside the lockout time window.
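A small simulation makes the evasion concrete. The lockout policy numbers, account names, and guess rate below are all invented for illustration; the point is only that round-robin "spraying" keeps per-account failures too sparse to trip the window, while a focused attack on one account locks it immediately.

```python
from collections import defaultdict

# Hypothetical lockout policy: 5 failures within 30 minutes lock the account.
LOCKOUT_THRESHOLD = 5
WINDOW = 30 * 60  # seconds

def sprayed_schedule(accounts, passwords, guesses_per_second=1.0):
    """Yield (time, account, password): one password tried against every
    account before moving on to the next password."""
    t = 0.0
    for pwd in passwords:
        for acct in accounts:
            yield t, acct, pwd
            t += 1.0 / guesses_per_second

def locks_triggered(schedule):
    """Return the set of accounts the lockout policy would have locked."""
    failures = defaultdict(list)  # account -> recent failure timestamps
    locked = set()
    for t, acct, _pwd in schedule:
        # Keep only failures still inside the sliding window.
        failures[acct] = [f for f in failures[acct] if t - f < WINDOW] + [t]
        if len(failures[acct]) >= LOCKOUT_THRESHOLD:
            locked.add(acct)
    return locked

accounts = [f"user{i:04d}" for i in range(5000)]   # made-up account list
passwords = ["Password1", "Summer10", "Welcome1", "letmein", "qwerty12"]

# Spraying 5 passwords across 5000 accounts at 1 guess/sec: zero lockouts,
# because each account sees its next guess 5000 seconds later.
print(len(locks_triggered(sprayed_schedule(accounts, passwords))))
```

By contrast, five rapid guesses against a single account would land all five failures inside the window and lock it, which is why spraying is the attacker's workaround of choice.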

Logging combined with solid monitoring will, hopefully, notify someone of repeated logon failures. The logging part is easy. The monitoring can be harder, requiring some combination of technical and people/process solution (think SOC / incident response / CIRT).
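As a sketch of what that monitoring might look like, the toy detector below watches for failure bursts both per account and fleet-wide. The record format, thresholds, and alert names are all assumptions for illustration; in practice this logic would live in a SIEM fed by your logon logs. The fleet-wide counter matters because password spraying never trips a per-account threshold.

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Illustrative thresholds, not recommendations.
PER_ACCOUNT_ALERT = 5    # failures against one account within the window
FLEET_ALERT = 50         # total failures across all accounts within the window
WINDOW = timedelta(minutes=10)

def alerts(events):
    """events: iterable of (timestamp, account, success) records.
    Returns a list of (alert_kind, account, timestamp) tuples."""
    per_acct = defaultdict(deque)
    fleet = deque()
    out = []
    for ts, acct, success in sorted(events):
        if success:
            continue
        for q in (per_acct[acct], fleet):
            q.append(ts)
            while q and ts - q[0] > WINDOW:   # expire old failures
                q.popleft()
        if len(per_acct[acct]) == PER_ACCOUNT_ALERT:
            out.append(("account-burst", acct, ts))
        if len(fleet) == FLEET_ALERT:
            out.append(("fleet-burst", None, ts))
    return out
```

Note that a spraying attack shows up only in the fleet-wide count: each account accumulates a single failure, but the aggregate climbs steadily.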

Using a keylogger delivered through malware is probably a much easier way to steal credentials. A host of controls have to be in place to protect users from themselves, and protect operating systems from infection by such stuff.

So should we protect ourselves from GPU-based password cracking?  The problem is that the solutions are expensive or onerous.  Complex 12-character passwords are going to wreak havoc amongst the user community, and smart cards or tokens or whatever are going to cost a fortune.
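The keyspace arithmetic behind that trade-off is worth a back-of-envelope look. The guess rate below is an assumed round number for a single GPU; real rates vary wildly by hash algorithm and hardware, but the ratio between 7 and 12 characters holds regardless.

```python
import string

# Assumed guess rate; illustrative only, varies by algorithm and hardware.
GUESSES_PER_SECOND = 10**9

def seconds_to_exhaust(length, alphabet_size):
    """Worst-case time to exhaust the full keyspace by brute force."""
    return alphabet_size ** length / GUESSES_PER_SECOND

# Mixed-case letters, digits, and punctuation: 26 + 26 + 10 + 32 = 94 symbols.
alphabet = len(string.ascii_letters + string.digits + string.punctuation)

for length in (7, 12):
    secs = seconds_to_exhaust(length, alphabet)
    print(f"{length} chars: {secs:,.0f} s (~{secs / (86400 * 365):,.1f} years)")
```

At that rate a full 94-symbol 7-character keyspace falls in under a day, while 12 characters pushes the worst case into the millions of years, which is exactly why the mitigation is so painful for users relative to the attack it blocks.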

Is this the best place to spend scant security dollars?  Or should you spend your infosec budget on a stone that takes out two birds?

That is, controls that protect not only password hashes but also the other sensitive data on your servers and networks?  You need to do that anyway.

Friday, August 27, 2010

Verizon's Insider Threat

You've heard the pseudo-axiomatic bull-puckey that 80% of attacks are internal. As if this were universally true everywhere on Earth and everyone just "knows" this fact, like they know the hue of the sky.

Somewhere along the way (I was hearing this when I first got into infosec in the mid-'90s) some government study came to this conclusion. Quite possibly the CSI/FBI computer crime surveys were at the root; I really don't know, and it really doesn't matter.

I'm not saying there isn't insider threat. Or that insider access increases impact of successful attacks thus increasing risk. I'm not even particularly disagreeing with 80% because I'm sure there are cases where that figure is accurate.

But we as infosec professionals have to understand our own unique threats rather than blindly quoting some nearly urban-legendary statistics as if it applies everywhere.

Verizon's insider threat data, according to this article, lends some credence to the notion of insider threat being a big deal. Where bigness of deal varies from company to company. It also suggests that the problem--at Verizon, specifically--isn't as bad as the oft-quoted 80%.

Less interesting than the actual numbers, to me, is the fact that they collect these metrics in the first place. Do you?  Should you?  I think so.  How do/would you go about it?

And at the same time remain mindful of the fact that we don't know what we don't know? I hate it when infosec professionals tell me, for example, "we've had xxx incidents this year" and forget to add on the phrase "that we know of".

Thursday, August 19, 2010

Facebook Clickjacking Scam

More bad things on Facebook: This Network World article describes a Facebook clickjacking scam that entices users to view some photos or some such. It hides a functional Share button underneath a Next button, so users who fall for the social engineering and click unknowingly spread the worm/thingy; they are then taken to a survey that generates money for the scammers.

NoScript detects the attack. Cool. I've just started using this add-on myself. It seems to add a pretty solid layer of defense to Firefox.

Wednesday, August 18, 2010

Facebook Dislike Button Scam

All you overly paranoid Infosec people who scoff at the slightest hint of risk taking can just take a chill pill right now. It'll take you a few years to learn--and I hope you do learn for the sake of the companies you're supposed to be protecting--that there's no place for ultra paranoia in the business world.  Maybe I'll explain that in another post.

I bring up this point because I can just hear some infosec folks sniffing arrogantly when I admit that I use Facebook. Well, guess what, I am balancing risk versus benefit, something those sniffly infosec people should try sometime.

There are risks I'm taking by using Facebook and, in fact, I did get partially snookered by the Facebook Dislike Button Scam, in that I clicked "Like" when I saw the thing. I didn't actually use it.  And I'd like to believe that if I had, I'd have gotten suspicious when it tried to run a survey, and I would have disallowed access to it in the end.

Guess what, social engineering works beautifully, even occasionally on an infosec pro. There's no way to reliably patch wetware against it.

The best we can do is achieve a reasonable, helpful level of paranoia that prevents us from doing overly stupid things.

Then hope the rest of our technology defenses protect us from our slightly stupid mistakes.

Monday, July 13, 2009

Yet another example of the trusted insider threat against intellectual property.

In the days before his June 5 resignation from Goldman Sachs, Aleynikov copied, encrypted and transferred approximately 32MB of proprietary code to a server located in Germany, the FBI claimed.

Exfiltration is a difficult threat to address. You can try to prevent it by limiting outbound protocols and connectivity. But covert channels are always possible, even something as simple as tunneling a non-HTTP protocol over port 80/tcp.

Detection may be possible if you have a device that can detect proprietary keywords. A proxy server requiring authentication and providing adequate logging can facilitate incident response: determining the extent of the incident and finding the culprit.
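As a toy illustration of that detection idea, the sketch below scans authenticated-proxy log records for oversized outbound transfers or proprietary watermark keywords. The log format, field names, keywords, and size threshold are all invented for this example; a real DLP product does far more (file typing, document fingerprinting, and so on).

```python
import re

# Invented watermark strings and threshold, purely for illustration.
WATERMARKS = re.compile(r"PROPRIETARY|INTERNAL[- ]ONLY", re.IGNORECASE)
UPLOAD_ALERT_BYTES = 10 * 1024 * 1024  # flag >= 10 MB outbound in one request

def suspicious(log_lines):
    """Return (user, destination, bytes_out) for records worth a second look.
    Assumed tab-separated fields: user, dest, method, bytes_out, payload excerpt."""
    hits = []
    for line in log_lines:
        user, dest, method, bytes_out, excerpt = line.split("\t", 4)
        if int(bytes_out) >= UPLOAD_ALERT_BYTES or WATERMARKS.search(excerpt):
            hits.append((user, dest, int(bytes_out)))
    return hits

logs = [
    "alice\tcdn.example.com\tGET\t2048\tindex page",
    "sergey\tsvn.example.de\tPOST\t33554432\tencrypted blob",
    "bob\tmail.example.com\tPOST\t4096\tPROPRIETARY trading code",
]
print(suspicious(logs))
```

Because the proxy requires authentication, each hit comes pre-attributed to a user, which is what makes the incident-response follow-up (scoping the leak, finding the culprit) tractable.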

I deduce that Goldman Sachs is either lucky or has a pretty good start on solving this problem.

Aleynikov resigned to take a job with a new company "that intended to engage in high-volume automated trading," for triple his $400,000 salary, the complaint said.

...he was allegedly a vice president of equity strategy.
The reality is, the higher you go up the executive chain, usually the harder it is to enforce rules. That's another reason that security programs are only successful when the CEO and board want it, demand it, and make sure they get it.