Monday, December 31, 2007

Happy New Year

I mean it. Hope yours is safe and happy.

But the Storm Worm folks have dark agendas when they send out their evil holiday greeting emails at year's end.

I wonder if we will ever choose to solve the inherent insecurity of email. Or are we stuck because it's so hard to change from the current infrastructure? If the US can upgrade from NTSC to HDTV (mandated by law, and of course delayed numerous times), maybe governments need to force a change from SMTP to something less spoof-prone.
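
Part of the problem is how little classic SMTP verifies. Here's a minimal sketch in Python against a local debugging server (the addresses are invented; the point is that neither the envelope sender nor the header From is checked by the protocol itself):

    # Classic SMTP happily accepts whatever sender identity we claim.
    # Run a local sink first, e.g. (on older Pythons):
    #   python -m smtpd -n -c DebuggingServer localhost:1025
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "ceo@your-company.example"   # header From: anything we like
    msg["To"] = "victim@your-company.example"
    msg["Subject"] = "Happy New Year!"
    msg.set_content("Open your holiday greeting card...")

    with smtplib.SMTP("localhost", 1025) as smtp:
        # The envelope sender need not match the header, and neither is verified.
        smtp.send_message(msg, from_addr="anyone@elsewhere.example",
                          to_addrs=["victim@your-company.example"])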

Thursday, December 27, 2007

Access Management from the Trenches

Call it user access management, account management, identity management, or whatever else. I am talking about making sure that authorized users, and only authorized users, have access to applications, operating systems, and databases.

When new employees are hired, when existing employees leave, or when employees' jobs change, their access privileges have to change. To my mind, this is probably the most fundamental security control you can think of. It is definitely one you want to get right.

Here's a quick roadmap for fixing your company's access management processes. I'm a big fan of the triage approach to infosec: fix the worst first.

Too many companies don't do a good job of decommissioning user accounts when the user separates from the company. It isn't difficult to find stories of disgruntled employees committing sabotage after they walk out the door for the last time. Work with Human Resources and your access managers (system admins? infosec admins? helpdesk?) to devise a workflow. In most companies, a list of separated employees is sent out weekly, and access managers disable accounts for a period of time before removing them altogether. Write up a simple procedure describing the steps and a simple policy capturing the strategy, run them through the appropriate management chains, and make them official. You might want to devise a more expedient procedure for higher-risk separations: privileged users, layoffs, terminations for cause, and the like.
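
To make the workflow concrete, here is a rough sketch of what that weekly job might look like in Python. The CSV feed format and the disable/delete helpers are hypothetical; in practice they would call your directory (LDAP/Active Directory) and application APIs:

    # Hypothetical weekly job: disable accounts for separated employees,
    # then delete accounts that have stayed disabled past a grace period.
    import csv
    from datetime import date, timedelta

    GRACE_DAYS = 30  # keep disabled accounts around this long before deletion

    def disable_account(user_id):
        print(f"disable {user_id}")  # placeholder for a real directory call

    def delete_account(user_id):
        print(f"delete {user_id}")   # placeholder for a real directory call

    def process_hr_feed(feed_path, disabled_log):
        """feed_path: CSV with a user_id column; disabled_log: {user_id: date}."""
        today = date.today()
        with open(feed_path, newline="") as feed:
            for row in csv.DictReader(feed):
                disable_account(row["user_id"])
                disabled_log.setdefault(row["user_id"], today)
        # Purge accounts disabled longer than the grace period.
        for user_id, when in list(disabled_log.items()):
            if today - when > timedelta(days=GRACE_DAYS):
                delete_account(user_id)
                del disabled_log[user_id]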

With employee separation in place, another tricky problem to solve is that of user transfers. You want to prevent, for example, Joe, who's been transferring around the company for 5 years, from accumulating access to everything. In an ideal world, you have beautifully designed processes and identity management technology that readily manage the lifecycle of a user's access. But here in the real world you probably have dozens or hundreds of systems with no real hope of a unified technology or procedure to ensure that when Joe transfers from Marketing to Advertising, his access is instantly changed.

Work with HR and see if you can insert a step into any existing transfer process. Maybe HR can include employee transfers in their weekly list (I've seen this done at a large telco), or, in a smaller company, maybe they can simply send transfer notices ad hoc to system owners or to a mailing list. As with all things infosec, find a creative, practical solution.

Another excellent control is periodic account access reviews conducted by system owners, data owners, managers, etc. This is conceptually simple, fairly simple to implement, and, better still, it is distributed: those in the know do the reviewing. I recommend a review period of 4, 6, or 12 months. Too frequent and it becomes a burden that gets skipped; not frequent enough and it won't be very effective. As with all things infosec, it is a balancing act of cost, risk mitigation, and human behavior. And as we all know, we aren't interested in perfect security but practical risk reduction. A company whose managers check their employees' accounts every so often will have reduced risk substantially. You can always complement this control with others (like logging and monitoring). Document the strategy in your account management policy, document the review process as another procedure, and run both up through the appropriate management chains to make them official.
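
One cheap way to run such a review: dump the accounts on each system, group them by responsible owner, and send each owner only their slice. A sketch, with an invented inventory format (columns: system, account, owner_email):

    # Build per-owner review packets from a flat account inventory CSV.
    import csv
    from collections import defaultdict

    def build_review_packets(inventory_path):
        packets = defaultdict(list)  # owner_email -> [(system, account), ...]
        with open(inventory_path, newline="") as f:
            for row in csv.DictReader(f):
                packets[row["owner_email"]].append((row["system"], row["account"]))
        return packets

    def print_packets(packets):
        for owner, entries in sorted(packets.items()):
            print(f"\nReview packet for {owner} (confirm or revoke each):")
            for system, account in sorted(entries):
                print(f"  {system}: {account}")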

Finally, there's the onboarding process: the question of giving users only the access they need. It's been my experience that even the most security-clueless companies get this right--they have to, if they want their new employees to be productive. Though I haven't seen or heard of it, if your company gives new hires access to everything, this may be your highest risk. Either find the highest-risk business area and fix its onboarding process before moving to the next, or classify users and their access broadly--you can add granularity later. Work with the appropriate management to define appropriate access control.

You need management backing to do any of this. That means it has to register as a real problem, even amongst the constellation of business problems senior management faces. In this day and age, unless the company is fighting bigger fires, due care demands that company leaders fix bad access management. Work with management at as high a level as possible (HR? higher if you can) to get done what you can. Keep the scale small, be smart about what you can and can't accomplish, focus on reducing risk rather than eliminating it, and go for the biggest bang for the buck. You should wind up with significantly less risk in a fairly short timeframe.

Tuesday, December 04, 2007

Foreign Developed Code

One area of interest to me is the security risk associated with foreign-developed code. The premise is that code developed in, let's say, France could have malicious functionality hidden in it that could be used to compromise the confidentiality of the data being processed.

As an Infosec pro, how do you deal with this potential threat? As always, through careful analysis of the related risk. Let's think about this from an enterprise-wide point of view.

Starting with threats: is your company in an industry that is particularly targeted by criminals or spies (corporate / government)? If the answer is yes, I have to ask why we wouldn't be similarly worried about the threat of malicious domestically developed code. It's not like there is a shortage of American hackers, and not everyone in this country bleeds red, white, and blue. How hard would it be for a malicious entity to bribe a disgruntled but intelligent programmer? Or to plant someone in a target company? All of this goes toward analyzing the likelihood of the risk being realized.

Unless you'd rather freak out about foreign code because it seems scarier. After all, people are phenomenally good at accurately judging real risk. Yes. I am being sarcastic.

In a sense we've already looked at the data. Threat sources target data, and they have different motivation levels for different types of data, so it is often hard to talk about threats, threat sources, and data independently. Nevertheless, consider the impact to the company of data compromise (in this case, we're primarily concerned with a compromise of confidentiality). This impact, combined with the likelihood above, gets at the general risk of malicious COTS software (whether foreign or domestic).

Now, what about safeguards to mitigate this problem? Some propose intensive source code review, a time-consuming, extremely costly endeavor. Even if you can get a source code license (and afford it), I question its value in mitigating risk. How do you know the source code given to you is the source used to compile the binaries you're given? In various scenarios, the source may be cleaned before it's handed over--but be sane: consider the realistic likelihood of subversion. Alternatively, some clever fellow could subvert the compilers at the company to insert malicious code, a la Reflections on Trusting Trust.

The point is that you could spend a lot of money reviewing source without, I think, reasonable assurance that you're substantially mitigating the problem. But hey, it sounds really cool and hardcore. It sounds like you're doing your best. Isn't that what security is all about?

(Uh, no, it isn't).

You could analyze binaries. Use a debugger. That's even more time consuming, if you can even find someone with that expertise--or find a suitable decompiler (and then you're back to source code, but at least this time you stand half a chance). All this assumes you aren't violating your license agreement, most of which prohibit reverse engineering. You might want to chat with your legal department. Have fun with that.

Or hey, just ignore those pesky licensing issues. Who cares, right?

Hopefully you, if no one else.

Or you could analyze the behavior of the binaries. This is also fruitless, because a clever individual could hide the behavior of rogue code in such a way that no affordable, justifiable amount of lab testing is likely to uncover it. What if the code "phones home" only on some particular day? Are you going to test all the possibilities? What if it is a particular day of the year? Or every other year? Or only when a debugger or network sniffer isn't running? Or only when the internet is accessible? What if the information is sent via a side channel? How many possibilities are there to investigate? You are smart enough to come up with a dozen more scenarios to avoid detection. But none of us is probably smart enough to come up with ways to detect a combination of these tricks without knowing ahead of time which tricks were used.
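
To make the combinatorics concrete, here is a toy sketch of the kind of condition-gating a rogue component could hide behind. It is purely illustrative (no actual exfiltration), but notice how each guard multiplies the scenarios a lab would have to enumerate:

    # Toy illustration of condition-gated behavior. Each guard multiplies
    # the number of lab scenarios needed to observe anything at all.
    from datetime import date
    import socket

    def lab_probably_watching():
        # e.g., look for debuggers, sniffers, virtual hardware... (details omitted)
        return False

    def internet_reachable():
        try:
            socket.create_connection(("192.0.2.1", 80), timeout=1).close()
            return True
        except OSError:
            return False

    def maybe_phone_home():
        today = date.today()
        if (today.month, today.day) != (12, 31):   # only one day per year
            return
        if today.year % 2:                         # ...and only in even years
            return
        if lab_probably_watching() or not internet_reachable():
            return
        # exfiltrate via some side channel here (deliberately omitted)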

Not that this is a hopeless situation. If your threat sources are less sophisticated, you may find these controls helpful, albeit expensive (you have to carefully control scope and cost). Great, if the risk justifies it. Here's my way of thinking: find cheap ways to reduce the risk.

  • Address the low-hanging fruit of unsophisticated trojan insertion with anti-virus scanning of the source.
  • Check digital hashes from the vendor (and require the vendor to generate these and store them on an isolated server); a minimal verification sketch follows after this list.
  • Sure, go ahead and put the software in a lab and see if it does anything obviously bad, without spending a fortune.
  • In deployment, why not implement good egress filtering? That takes care of obvious phone home attacks.
  • Investigate vendors and their development practices prior to selection and purchase. If they implement good separation of duties and promotion controls it will be far harder for someone to subvert the process.
Do all of this in proportion to the risk. And consider using open source software (I said consider, Mr. Corporate Suit Nervous Nelly; there are many factors to think about). Remember that open source is going to be harder to subvert, with all the eyes on it.
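
On the hash-checking bullet above, the mechanics are cheap. A minimal sketch, assuming the vendor publishes SHA-256 digests over a separate, trusted channel:

    # Verify a vendor-supplied file against a digest obtained out-of-band.
    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            # Read in chunks so large installers don't exhaust memory.
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify(path, expected_hex):
        actual = sha256_of(path)
        if actual != expected_hex.lower():
            raise ValueError(f"hash mismatch for {path}: got {actual}")
        return True

The verification only helps if the digest comes from somewhere an attacker who can tamper with the download can't also tamper with--hence the isolated server.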

While reverse engineering, debuggers, intricate lab testing, and source code review all sound sexy and cool (and some of us would love to do that kind of work), these controls are best reserved for the most intense risks and most dedicated threat sources, with the expense carefully weighed against the risk. Even then, more pedestrian security controls can give you a lot more bang for the buck.

That's what Infosec is about.

Friday, November 23, 2007

Analyze Risk

This article in Computerworld brings up an interesting problem. It reflects the claims of one Thierry Zoller, who has been studying bugs in anti-virus software.

"...companies that try to improve security by checking data with more than one antivirus engine may actually be making things worse. Why? Because bugs in the 'parser' software used to examine different file formats can easily be exploited by attackers, so increasing your use of antivirus software increases the chances that you could be successfully attacked."

Zoller has found a number of parser bugs in anti-virus software. At least some, I am sure, are known to the most sophisticated hackers. But the relative risk of the two options is not as clear-cut as Zoller states. The problem at hand is one of analyzing complex and very subtle shades of risk when engineering security.

"People think that putting one AV engine after another is somehow defense in depth. They think that if one engine doesn't catch the worm, the other will catch it," he said. "You haven't decreased your attack surface; you've increased it, because every AV engine has bugs."

Is it better to have only one parser? Or are these holes so dangerous we should have no anti-virus at all? Or is Zoller overstating the risk, so that two parsers are better than one? Sure, the attack surface increases the more parsers you have--for the attack vector targeting the parsers. Meanwhile, with fewer parsers, more malware slips through on the virus/trojan vector. So what is the optimal balance in this tradeoff?

While Zoller, as a researcher, can ignore this crucial question, infosec professionals responsible for architecting and engineering security solutions in their organizations don't have that luxury. Not if they want to spend resources where it counts the most and provide a sufficient level of security to their companies at a proportional price.

To find the optimal balance, look at the risk of each available alternative while seeking to minimize both risk and cost (ultimately the business has to decide what level of cost and risk mitigation is acceptable).

As we all know, risk is a product of likelihood and impact. The likelihood of an attack is based on the types of threat sources, their capabilities, motivations, and attraction to your information assets, and also on how widespread or easily obtained information about the attacks in question is. The impact of a successful compromise is based on the attack itself, the intent of the attacker, the value of your data, and the mitigating security controls in place.

For the situation above: on one hand, garden-variety viruses and email-borne trojans are extremely common. Attackers range from the fairly unsophisticated, aiming to expand a bot empire and/or steal personal information, to motivated and reasonably equipped corporate spies targeting companies with spear-phishing attacks and such. Less common are highly sophisticated attackers leveraging true 0-day exploits, such as those in anti-virus parsers. But they are out there.

Arguably, the more targeted and sophisticated the threat source and attack, the greater the possible impact per compromise--although in aggregate, common malware threats may represent more financial risk to the company, through sheer volume, than highly sophisticated anti-virus parser attacks. It depends on the value of the data and the impacts of its compromise.

Don't forget mitigating controls: look at existing controls, and consider additional controls--and their costs--for each alternative. Suppose we use an architecture that isolates the email anti-virus engines, with excellent egress filtering in place, among other countermeasures. Such controls alone may largely mitigate the risk of the anti-virus parser attack vector.

Likewise, the anti-virus is itself a control. The number of anti-virus engines is inversely related to the number of malware emails that pass through (and result in a successful compromise): fewer engines mean a greater likelihood of compromise through that attack vector.
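
A back-of-the-envelope way to frame the tradeoff (every number below is invented for illustration, and the engines are assumed to miss independently, which real engines don't quite do):

    # Toy expected-annual-loss comparison: N anti-virus engines in series.
    MALWARE_EMAILS_PER_YEAR = 10_000
    MISS_RATE_PER_ENGINE = 0.05            # chance a single engine misses a sample
    COST_PER_MALWARE_INCIDENT = 2_000

    PARSER_EXPLOITS_PER_ENGINE_YEAR = 0.1  # expected targeted parser exploits
    COST_PER_PARSER_COMPROMISE = 500_000

    def expected_annual_loss(engines):
        # Malware slips through only if every engine misses it.
        slipped = MALWARE_EMAILS_PER_YEAR * MISS_RATE_PER_ENGINE ** engines
        # Each engine adds its own parser attack surface.
        parser_hits = PARSER_EXPLOITS_PER_ENGINE_YEAR * engines
        return (slipped * COST_PER_MALWARE_INCIDENT
                + parser_hits * COST_PER_PARSER_COMPROMISE)

    for n in (1, 2, 3):
        print(f"{n} engine(s): ${expected_annual_loss(n):,.0f}")

Under these made-up numbers, a second engine pays for itself and a third doesn't. Different threat assumptions flip the answer--which is exactly the point.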

Running two A-V parsers doesn't guarantee doom. But it might be the wrong tradeoff. It depends on all of these factors, and risk analysis will help you answer the question and make good tradeoff decisions.

Don't forget that threats change. The best option today may be terrible in a month, or a year, or some time in the future. Keep that in mind, and revisit risk analysis tradeoffs, too.

Saturday, September 29, 2007

TD Ameritrade Lawsuit

I wonder if the era of infosec-driven lawsuits is finally here to stay? I've been figuring on a big rise in lawsuits one of these years. A number of things can motivate companies to do a better job of information security risk mitigation: laws, legal enforcement, standards, best practices, statistics, news stories.

But short of Sarbanes-Oxley and perhaps lawsuits, few things provide real drive, other than a desire by company leadership to do the right thing and protect customer data. Alternatively, lawsuits may simply push negligent companies to focus on covering their butts, legally speaking, rather than on demonstrating due care and implementing basic information security controls. Can we say that malpractice lawsuits have improved healthcare?

Google Mail Vulnerability

This article in The Register describes a vulnerability in Google Mail / Google Groups wherein a specially crafted email could compromise Google Mail and do bad things like install a filter in the victim's mail account to siphon off email to the bad guy. (Other vulnerabilities were mentioned in the Google Search Appliance and Google Picasa.)

As the technology industry continues to focus on new features, new applications, and constant change, it seems that we poor humans are left behind, unable to adapt so quickly to these new threats. We can barely teach people not to open suspicious attachments, so how does the average user know when an email could potentially carry exploit code? It's my experience that the more you know about how things work under the covers, the safer you can be (you have to be paranoid, too), but expecting the average user to be a paranoid software coding expert is impractical, to say the least.

We're not at the point in this industry where we've fool-proofed products. Maybe it is because of the continued flux in features and capabilities. When automobiles first made their appearance there were all sorts of designs and models. Eventually the market settled down, and then we could worry about safety, some 50-100 years later. I don't know if I care to wait that long for the computer industry.

Friday, September 14, 2007

Kitten CAPTCHAs

As you may know, over the last several years, spammers have sought to post spam in blog comments, on web groups, web bulletin board forums, etc. Hackers and their worms do the same but post malware instead.

I'm sure you've seen CAPTCHAs ("Completely Automated Public Turing test to tell Computers and Humans Apart") on various websites. Typically it's that image of distorted letters that supposedly only humans can decode.

CAPTCHAs were a means of reducing the risk of spamming. I'm sure the original idea was a silver-bullet solution, but whenever you pit humans against humans you end up with an arms race. So now we have spammers with software able to decode the more rudimentary CAPTCHAs, and more sophisticated tests arise, with letters too distorted for some people to recognize and decipher.

Here's another alternative--kittens! The key is to make the test solely about something that computers are bad at but humans are good at. Computers are good at memorizing and at doing things quickly, so you have to prevent them from trying every combination and memorizing all the answers. Natural language, image recognition, letter recognition, facial recognition: people do all these things much better than computers, even when spammers have thousands of compromised systems at their disposal. One has to be careful about implementation details (don't always use the same image name for the cute little calico, or someone can just memorize which filenames correspond to kittens).
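
On that filename detail, a sketch of one way to do it: keep the kitten/not-kitten answers server-side, keyed by a one-time token, and serve every image under a freshly randomized public name (the image pool and paths below are invented):

    # Minimal kitten-CAPTCHA issuance. Nothing in the public URL or filename
    # reveals--or lets a bot memorize--which pictures are kittens.
    import secrets

    IMAGE_POOL = [  # (file on disk, is_kitten) -- hypothetical paths
        ("img/calico1.jpg", True), ("img/puppy3.jpg", False),
        ("img/tabby7.jpg", True), ("img/toaster.jpg", False),
    ]

    pending = {}  # challenge token -> set of aliases that are kittens

    def new_challenge(count=4):
        token = secrets.token_urlsafe(16)
        choices = secrets.SystemRandom().sample(IMAGE_POOL, count)
        aliases, kittens = {}, set()
        for path, is_kitten in choices:
            alias = secrets.token_urlsafe(8) + ".jpg"  # one-time public name
            aliases[alias] = path  # server resolves alias -> file when serving
            if is_kitten:
                kittens.add(alias)
        pending[token] = kittens
        return token, aliases

    def check_answer(token, selected_aliases):
        kittens = pending.pop(token, None)  # single use: no replaying the token
        return kittens is not None and set(selected_aliases) == kittens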

I wonder if we couldn't leverage quirks of the human brain instead. Like optical illusions. I'm not sure how you'd do this in a way that lends itself to quick Q&A and prevents computers from just memorizing answers by rote. Make commenters play something like Brain Age. Or what if you found a way to ask questions that humans are very likely to get wrong but computers won't (like the second question of this series)? Even better if humans answer in unpredictably wrong ways, or if a given question can be made unpredictable.

It all goes back to Alan Turing, who imagined a test for artificial intelligence. Put a person, the judge, in a room and have them communicate in natural language with a computer and a person. If the judge can't tell the two apart with certainty, the computer passes. Of course, that test would take too long just to post a blog comment (and you'd have to repeat it every time you took a similarly protected action). That's why the shortcut methods above were developed. The biggest shortcut is that the judge is now a computer, a major flaw in the test (or does the judge simply have to act more human than the computer being tested?).

One major glitch in this whole scheme is the underlying assumption that spamming is done solely by computers. I read once that spammers have been known to actually hire people to thwart CAPTCHAs. So for that threat, we're out of luck. But the automated threat of bots and the like is still viable and widely used.

In security our goal isn't to solve problems once and for all (impossible, since it is a human vs. human kind of problem); it's to raise the bar for the difficulty of the attack just enough that we can live with the remaining risk.

Sometimes it's also a question of being a little more secure than your peers. Do you ever park your nice car next to much nicer cars in a parking lot figuring if any get stolen it won't be yours?

Security is kind of like escaping a bear. You don't have to run faster than the bear, you just have to run faster than your friend.

Saturday, September 01, 2007

Worms: a look back

Time flies. This is from an old infosec blog post of mine in 2005, about a paper from late 2004. Just two years ago we were still worried about containing mass worm outbreaks. Those days are essentially over with the rise of true criminal activity and targeted attacks. Nevertheless, this concept could, maybe, be applied to controlling botnets, the key tool behind phishing, spamming, and other criminally motivated attacks. Botnets used to be centrally controlled: kill the head, and the botnet dies too. New botnets use a distributed architecture. They're more like a Hydra.

The Future of Malware Defense?
"Security UPDATE -- March 16, 2005"
Information Security News

"The research paper 'Can We Contain Internet Worms?,' was published in August 2004. In it, Microsoft researchers discuss how worms might become more readily containable as computers collaborate in a more automated manner. The concept, which the researchers have dubbed 'Vigilante,' proposes 'a new host centric approach for automatic worm containment.' ... Hosts detect worms by analysing attempts to infect applications and broadcast self-certifying alerts (SCAs) when they detect a worm. SCAs are automatically generated machine-verifiable proofs of vulnerability"

Tuesday, August 28, 2007

More Sony Stealthware

It looks like Sony is getting more bad press for foisting stealth software on users. This isn't strictly a rootkit, and not as extreme an example as before.

In case you were residing under a rock a couple of years ago: back in 2005, Sony BMG put what amounted to rootkit-like software on certain CDs. Inserting one into a PC installed software without user consent, the purpose of which was to enforce digital rights management and hide itself from all but the most experienced users.

The current software creates a hidden directory, perhaps to prevent tampering with the USB fingerprint device it supports. Sure, a hidden directory could aid attackers (but not any more than leaving the bedroom light off at night aids burglars).

One of the big issues in 2005 was that the software introduced a vulnerability to the PCs on which it ran. Any installed software can do that. Even when we know what software is running on our systems, it's hard enough keeping up on patches (never mind all the 0-day attacks of the last couple of years). When companies cloak their software from us, it makes it that much harder to protect one's computer and to make informed security decisions. Essentially, companies that follow this approach are taking away our right to make certain security decisions for ourselves.

Even though the coverage is somewhat hyped, I'm glad this made the news. More press on the topic can't hurt user rights, and it may dissuade companies from these ill-advised tactics.

Saturday, July 28, 2007

Insider Attacks, Trust but Verify

For those security ostriches out there who are convinced that internal networks are perfectly safe and that firewalls keep the bad guys out, this ComputerWorld article is yet another example of an insider stealing sensitive data. Worst of all, this was a highly trusted individual (a database administrator). Time to turn to proper risk management.

The impact of this sort of attack can be huge, but I suspect the likelihood is low, or we wouldn't be hearing about it in the news (to shamelessly quote Bruce Schneier: "I tell people that if it's in the news, don't worry about it. The very definition of 'news' is 'something that hardly ever happens.'"). Think, too, of the cost/benefit equation for the threat source. So the risk is probably low; there are almost certainly bigger fish to fry in corporate America than distrusting DBAs and system admins.

Low risk doesn't justify much security spending if you look at this risk alone. But considering a number of related risks together, there's a business case for employing security controls in a layered fashion to reduce risk in aggregate. Controls might include background checks on employees, centralized logging with separation of duties and good monitoring, and blocking peer-to-peer network communications. For really sensitive data, maybe more intrusive controls make sense.

But information security professionals should consider the whole equation. An oppressive culture of distrust of highly paid techies is, intuitively, going to be bad for productivity and personnel retention. Is that worth it (or even necessary) given the likelihood and the risk?

Sunday, June 24, 2007

Changing the Firewall Paradigm

This article in eWeek got me to thinking a little about the venerable Firewall, staple of modern internet security. The technology was originally developed in the early days of the internet, a time when collaboration and communication between organizations was vastly different.

Back then, you were as liable to use BITNET as ftp or talk for file transfer or communications. At that time the internet was more open in its architecture. Systems within an organization were all accessible from the internet. The move to firewall technology sought to hide organizations' computing assets behind a gateway.

But talk of eroding network boundaries has been going on for years now. We have telecommuters, road warriors, B2B connectivity, and Service-Oriented Architecture essentially creating dozens of backdoors into an organization's networks, not to mention internet-facing applications with backend systems on internal networks. Enterprises benefit from more connectivity and collaboration. How do we get that without sacrificing security?

Firewalls aren't going anywhere. It still makes sense to filter incoming and outgoing internet traffic. But the shift toward endpoint security is bound to continue, and I hope the result is that firewalls won't always be perceived as the main mechanism for reducing risk. It's not that simple anymore. It hasn't been for years.

I’ve often wondered if firewalls, in some sense, create a false sense of security. Immature organizations make the mistake of ignoring host security and internal security threats because the firewall supposedly fixes everything.

A little thought experiment: if firewalls weren't an available technology, wouldn't organizations have to enact better endpoint security? That could mean better host security for desktops and servers, better application endpoint security (such as agents to intercept and enforce security for web applications or web services), or improved development practices that reduce vulnerabilities in deployed applications. Seems to me companies should already be doing these things.

It's probably still safe to say that a lot of attacks come from the big-I internet cloud, and so, for now, the internet firewall is still a key feature of enterprise security architectures. But even if fewer attacks come from other sources, such as telecommuter workstations, business partner networks, or intranet users, the damage potential, and thus the risk, may be far greater than that of generic internet-based attacks. Could it be that in some cases it makes sense to focus as much or more on those areas as on so-called perimeter (aka firewall) security? I think so.

If you’re already doing good, careful, intelligent risk analysis as part of a holistic, enterprise-level process of information security risk management, you already know this and you apply your controls where they’re most needed. Otherwise you’re probably spending too much security money in the wrong place and not getting much risk reduction out of it.

Tuesday, May 01, 2007

Contact Me

Would love to hear your opinions!
