Friday, April 18, 2008

Google Street View Becomes Driveway

This article on SecurityProNews describes a situation where a Google Street View camera car enters and films someone's driveway.

When The Smoking Gun tipped off Janet McKee as to Google's impromptu visit, she said it was "a little bit creepy to think of someone filming our home without me knowing about it."
The Google camera car left public property (something Google's own policy prohibits) and drove up the couple's winding driveway. The reaction would probably have been different had the images not ended up at the fingertips of billions.

This is dumb, but I admit it does creep me out a little bit that my own house is viewable by the planet. But why? Instead of people having to physically be present to ogle my abode -- so I can see them by peeking out the window -- they can anonymously view it at any time, entirely unknown to me.

Whereas I rely on the obscurity afforded by the physical world's limitations, when those limitations go away, what is the impact on my privacy and confidentiality?

Should I throw a tarp over the Jeep lest someone stumble across my street view and find themselves a cheap source for parts?

Should I worry that criminals now have an easier time casing my house?

For now, the world's internet users still have to click their way to our houses. But as more information comes online about each of us, we'll have to rethink some basic assumptions about our security and privacy.

Tuesday, April 15, 2008

Targeting Oddball Platforms

Another article on targeted attacks. Larry Seltzer makes an interesting point towards the end of the article about the use of oddball operating systems and applications.

Some experts might recommend that you use alternative platforms like the Mac or OpenOffice, but these really don't help at all with targeted attacks. If someone's rolling out a new vulnerability for a targeted attack, it's just as easy for them to do it on OpenOffice and the Mac, which have numerous vulnerabilities, as for Windows. In fact, it's easier and cheaper for them to do it on the alternatives, where the price for a new, unpatched vulnerability is probably much cheaper than for Windows.

I'd think oddball platforms probably help with mass attacks. Those attacks are more likely to target Windows and more likely to be a bigger issue for home users. So, switching over to an alternative platform could make more sense for the home user; the cost/benefit analysis probably looks different than it would to an enterprise.

Wednesday, April 02, 2008

Advance Auto Parts Store Data Breach

From The Register:

Advance Auto Parts, the US motoring parts retailer, is the latest firm to give up customer credit card data to hackers.

The bad guys gleaned financial information on up to 56,000 customers, through an attack affecting 14 stores nationwide...

The Advance Auto Parts website provides more information. So this only affects a handful of stores. Interesting. Methods and perps unknown.

I'm a big fan of companies being held accountable to standards of due care in the form of PCI standards and legal obligations. Significant penalties encourage companies to do the right thing.

Having worked at companies whose execs and upper management didn't give a rip about data security, due care, or anything that didn't involve raking in money, and having heard from more than a few infosec peers that this is the rule, not the exception, I'm convinced the only way my data and yours will stay protected is through penalties.

Penalties that significantly affect the bottom line --- or better, penalties that personally affect CEOs, in the form of orange jumpsuits. The only reason SOX got any traction in companies is the threat of jail time for management. HIPAA, meanwhile, has been largely ignored by a surprising number of healthcare companies. Bigger fines and jail time for execs would fix that fast.

And remember, we wouldn't even be hearing about these breaches in the first place if it weren't for California's SB1386 and all the copycat laws other states passed thereafter. The effect of these laws should be obvious to anyone following infosec news before and after 2003. Companies were hardly voluntarily disclosing data breaches --- until they were required to by law.

Even then, I'll bet cash money there are still companies ignoring this requirement. I've already posted about delayed notifications. Penalties and laws don't fix everyone and everything, but they do help to counter temptation and encourage honesty.

Tuesday, April 01, 2008

Radio Tracking for Backup Tapes

Fujifilm bugs backup tapes with LoJack device

Looks like it runs $150/mo to track your tapes, reducing the likelihood of tapes going missing, as has happened to quite a number of organizations over the last few years.

Is this really the best solution? How often do tapes go missing and how much damage does it cause? What level of risk mitigation does this technology afford? How does encryption of the data on the tapes compare in cost? Those are the questions I'd be asking myself if I were in a position of managing this risk.

I think I'd rather know my lost tapes were unreadable than to possibly know where my readable lost tapes were. Y'know?

I say "possibly" because I am guessing the lojack thingy is probably not 100% tamper resistant.

Michael

Friday, March 28, 2008

Laptop theft exposes patients' medical data

Laptop theft exposes patients' medical data (C|Net News)

The computer was stolen in February ... but officials did not notify the patients of the theft until Thursday, saying they didn't want to spread unnecessary alarm, according to The Washington Post.
Pure infosec brilliance.

Targeted Malware Used in Hannaford Credit Card Heist

Targeted Malware Used in Hannaford Credit Card Heist (eWeek)

Saturday, March 15, 2008

CanSecWest hacking contest here. OS X Leopard vs. Vista vs. Linux. Entertaining, but I hope no one actually thinks the results will be conclusive. You certainly wouldn't make risk-based decisions on the results... would you?
These days, with true 0-days becoming more and more commonplace, an OS that seems to have fewer vulnerabilities discovered per year may lower your risk a bit, but the comparison isn't worth much until the reliability factor goes way, way up. Until that number reaches one remotely exploitable vulnerability every 5 or 10 years (like OpenBSD, say?), you still need to "worry" and stack up your defense-in-depth security controls.

We're still at a point in OS software reliability where it's like comparing a 70's Italian roadster to a 70's British roadster. One may drive an extra day or two longer before breaking down but who cares? They both spend more time in the shop than on the road.

Saturday, February 16, 2008

Espionage and China

This article by the Washington Post makes an interesting read regarding the threat of economic espionage from China and Chinese nationals.

I wonder how much current concern about this topic is grounded in reality versus hysteria. It'd be worth finding out how many cases of corporate espionage involve countries other than China, and how many are perpetrated by U.S. citizens or by non-Chinese foreign nationals. Maybe it seems like there's an epidemic of Chinese espionage simply because those are the stories that sell best.

Thursday, February 14, 2008

Infrastructure Attacks

I'm not big on arm waving, notions of cyber terrorism, or blowing things out of proportion. Still, this PC World article is kind of interesting. It reports on internet-based infrastructure attacks on cities in an undisclosed location (outside the U.S.). While the reality of these specific attacks is news, the possibility of such attacks is surely no huge surprise to anyone in IT security.

As long as one doesn't jump to conclusions or fall into the trap of overestimating the risk because of its recency or other factors, such a report is a good reminder that infosec professionals need to methodically analyze and address a wide array of threats and risks. Of course, not all infosec pros have to deal with this sort of issue.

Another reminder is the (count them) five undersea cable cuts in the Middle East. Whether from anchors, sharks, terrorists, intelligence agencies, or just normal failures that the media hypes into a story ("Cable cuts happen on average once every three days"), there are lots of risks that maybe we don't think about, and occasionally the unlikely does occur. Thinking carefully about such rarities, we may choose to accept the risk, even if our ill-adapted brains scream that we need to prepare immediately right after reading the news article.
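
As a back-of-the-envelope illustration, here's a quick Poisson calculation in Python. The one-cut-every-3-days rate comes from the quote above; treating cuts as independent events worldwide is my own simplifying assumption, and it says nothing about five cuts clustering in one region.

    # If cable cuts worldwide average one every 3 days, how surprising is a
    # 14-day window containing 5 or more? (Independence is assumed.)
    from math import exp, factorial

    lam = (1 / 3) * 14   # expected cuts in a 14-day window (~4.7)
    p_at_most_4 = sum(exp(-lam) * lam**k / factorial(k) for k in range(5))
    print(f"P(5+ cuts in 14 days) = {1 - p_at_most_4:.2f}")   # roughly 0.5

In other words, five cuts in a couple of weeks is globally unremarkable; the regional clustering is what makes the story, and that's the part this little model deliberately ignores.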

Back to the infrastructure attacks. The motivation in this instance was extortion. When doing risk analysis at different levels (individual facility, city, county, state, country), I can see how motivation changes the nature of the threat and risk. I wouldn't expect extortion attacks to be extremely widespread or coordinated across locations or in time, so the impact of such an attack might be more limited. If instead the threat source's motivation were some sort of military action, terrorist action, etc., that would change matters, and the scope of impact would be greater if the attack were successful.

Let's hope the infrastructure security folks are on top of this. It makes me a little nervous to read "The U.S. is taking steps to lock down the computers that manage its power systems, however." Shouldn't we have already done that years ago?

Thursday, January 24, 2008

Article in The Register:

A security researcher says he has observed criminals using a new form of attack that causes victims to visit spoofed banking pages by secretly making changes to their high-speed home routers.

Talk about a targeted attack... Thing is, broadband users don't all have the same router, so that lowers the usefulness of this attack for the big money criminal operations, I would think, even if the attack can be carried out over the internet versus from a car across the street. A little diversity in the digital gene pool does pay off, I think.

Seems this would be more on the level of neighborhood crime. Perhaps in the future when people are more tech savvy overall, this type of crime will make stealing radios and CDs out of cars obsolete. Meanwhile I suppose this attack could be interesting if the target of the attack is, let's say, a financial planner...

While the likelihood is probably on the low side, impact is high. But really, who cares? Changing your router password is not that tough. A near zero risk mitigation cost is a no-brainer no matter what the risk.

Although it's One More Thing the average home user has to fix. Wouldn't it be neat if manufacturers could set the router password to be unique per box, or at least chosen from a reasonably sized set? DIP switches? A programmable logic array? A batch of different EEPROMs? If they can print unique serial numbers, can't they give routers unique passwords?
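
For what it's worth, here's one way a manufacturer might do it, sketched in Python: derive a per-unit default password from the serial number with a keyed hash and print it on the label next to the serial. This is a hypothetical scheme of my own, not any vendor's actual process; the factory secret would live only in the provisioning system, never on the device.

    # Hypothetical sketch: per-unit default router password derived from
    # the serial number. FACTORY_SECRET and WORDLIST are made up.
    import hmac, hashlib

    FACTORY_SECRET = b"example-only-not-a-real-key"
    WORDLIST = ["apple", "river", "stone", "cloud", "maple", "ember", "frost", "globe"]

    def default_password(serial: str) -> str:
        digest = hmac.new(FACTORY_SECRET, serial.encode(), hashlib.sha256).digest()
        # Three words plus two digits: memorable enough to print on the label.
        words = [WORDLIST[digest[i] % len(WORDLIST)] for i in range(3)]
        return "-".join(words) + "-" + str(digest[3] % 100).zfill(2)

    print(default_password("SN-00012345"))   # e.g. something like "stone-maple-river-07"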

Tuesday, January 22, 2008

Backwaters Internet

My parents are still on dialup. It's like some kind of backwater, third world, armpit of the internet ruled by evil war lords. You're standing buck naked in the middle of a town square during a firefight between warring factions and if you want body armor or a helmet, you have to mail order it from China.

I was trying to get Mom's computer updated. Symantec A-V hadn't been updated since December. Mostly it went OK over the 56k modem. Until it bombed: it couldn't install the latest LiveUpdate software. So I went to a free internet hotspot, and even that took me 2 hours to work through. I can't see a home user being this patient. And we wonder why there are bot networks?

This is to say nothing of the giant patches that have to be installed every month (assuming auto update is enabled). And then there are 3rd party patches. Good luck with that. This constant deluge of patches and signature updates and software updates is maddening. Microsoft seems to be getting it together, judging by patch volumes for Win2k, XP, 2003, and Vista (so far).

Even so, most systems are just too hard to keep secure. They require constant attention and vigilance, tinkering, and time. It's almost as tough as trying to keep my Jeep running...

Sunday, January 13, 2008

When do we fix the problem?

So, with the increase in internet crime we seem to keep hearing about over, and over, and over again in security news publications, the attackers have really ramped up their sophistication. The information security game has radically changed and it sounds like the good guys are losing. This article in PC World talks about new malware techniques for evading detection.

The bad guys are testing their code against anti-virus engines to ensure they aren't detectable. This technique is mentioned along with numerous other depressing techniques used by the cybercrime underground in this report by Peter Gutmann.

For years we've been patching to address shoddy programming, installing anti-virus updates and then anti-spyware, and using firewalls to hide gobs of insecure servers. Not that any of this works all that well for the average user (or we wouldn't have so many botnet members in home user IP space). And it burns up a lot of time in the corporate world.

I don't think we can keep ignoring the underlying, fundamental problems in computer security for much longer. We need something for the disease not the symptoms. At some point the pain will get large enough to pass it on to the software vendors. Perhaps there will actually come a time that users would rather be secure than get the next greatest feature. Or am I being too optimistic again?

Saturday, January 12, 2008

Helpdesk Social Engineering

This article discusses attacks on Xbox Live accounts. The key point is that of social engineering of helpdesk/support employees. Call up the helpdesk of the target, pretend to be the account owner, request password reset, et voila.

Same thing in IT security of course. Fundamentally it's an authentication issue. Or lack of one. You want to use a something-you-have, or more commonly, a something-you-know (and-others-don't) aka a secret.

I've set an optional password on bank accounts, where they ask for it before making any changes over the phone or even in person. Simple. Effective. We've all run into the common "please verify your mailing address for me" verification, usually following entry of an account number; if attackers know your name, there's that little detail of online white pages to get them the info. In a previous incarnation, the company I worked for would verify you by your SecurID using a website. That's solid. But kind of a pain.
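
For the curious, here's roughly what that optional-password check might look like inside a helpdesk application, sketched in Python. This is illustrative code of my own, not any real bank's system; the point is to store only a salted hash and compare in constant time.

    # Sketch: verify a caller-supplied extra secret before allowing changes.
    import hashlib, hmac, os

    def enroll(secret: str):
        salt = os.urandom(16)
        return salt, hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)

    def verify(salt: bytes, stored: bytes, attempt: str) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
        return hmac.compare_digest(stored, candidate)   # constant-time compare

    salt, stored = enroll("pelican-waffle-42")
    assert verify(salt, stored, "pelican-waffle-42")
    assert not verify(salt, stored, "mothers-maiden-name")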

Once again it's a balance. Don't forget when looking at the risk of social engineering that there is also the risk of time lost to a cumbersome password reset process. You want optimal security, not ultimate security.

If your company's helpdesk isn't doing some reasonable authentication before doing password resets, then it's probably about time to work with 'em to develop a new, simple process. With a priority based on risk analysis, obviously... but with this being such an easy, common attack, I bet the risk ranks fairly high on the list.

You do have a risk list, don't you?

Saturday, January 05, 2008

Privacy or Security Engineering

Sears has a portal that lets you look up past purchases. It also allows you to look up the purchases of others if you know their name and phone number. In violation of their own privacy policy. Oops.

The article makes a lot of noise about privacy issues, but to me this is primarily another example of poor (or no) security engineering.

By analyzing the data sensitivity, existing requirements (like that privacy policy), and the data flow for the portal, it should've been obvious that stronger authentication and authorization controls were needed.
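
In code, the missing check is nearly a one-liner. Here's a minimal Python sketch (the function and data names are my own invention, not Sears'): the lookup must be keyed to the authenticated session, not to whatever name and phone number the requester typed in.

    # Sketch: authorization check tying purchase lookups to the session owner.
    class AuthorizationError(Exception):
        pass

    PURCHASES = {101: ["wrench set"], 102: ["garage door opener"]}   # toy data

    def get_purchase_history(session_customer_id: int, requested_customer_id: int):
        # Authentication established who the user is; authorization decides
        # whose records they may see. Knowing a name and phone is neither.
        if session_customer_id != requested_customer_id:
            raise AuthorizationError("customers may only view their own purchases")
        return PURCHASES.get(requested_customer_id, [])

    print(get_purchase_history(101, 101))   # allowed
    # get_purchase_history(101, 102) raises AuthorizationError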

Banning Hacking Tools

Here's another tale of government trying to stop crime by banning general purpose tools that are used both to commit crime and to protect against it. Security, network, and system administrators should regularly use tools to detect vulnerabilities. These same tools are used by hackers (and so would be illegal). But this is precisely why they should be available to everyone: to level the playing field. Fortunately the law's guidance improves the situation slightly, but the overarching approach is fundamentally flawed.

As a deterrent it is almost wholly ineffective. If someone is already breaking into computers they are already disregarding the law. Another law prohibiting the use/distribution of the tools used in committing crime is only a deterrent in that it slightly increases the risk to the perpetrator by increasing the penalties if caught (like armed robbery vs. plain old robbery, although in this case, simple social engineering is probably as much or more effective than some of these tools). If the criminal already is willing to take the risk of jail time, then what's a little more added on?

It also is an attempt to simplify catching cyber criminals, I suppose. Tracking them through cyberspace is hard. Much easier if all you have to do is find people with "Evil" tools. Except for that niggling problem of justice. People not breaking into computers will have the tools, so how do you tell the two apart? Intent, it turns out. I'm sure that's nice and crisply defined.

Perhaps the law was intended to keep these tools out of the hands of the bad guys in the first place? Even if you somehow banned the transfer of these tools into the UK, this is the Internet: trying to stop the distribution of, well, anything isn't exactly a cakewalk. And if legit folks get to use the tools, then this law can do nothing to control their flow.

IT and security professionals need the tools to protect themselves. The criminals will have the tools whether you ban them or not. (They aren't going to give up their life of crime and take up professional knitting). They'll have other techniques like phishing (let's ban email!) and social engineering (cursed evil telephones!). So with laws that ban dual purpose tools, all you're really doing is tipping the balance in favor of criminals. Brilliant.

Monday, December 31, 2007

Happy New Year

I mean it. Hope yours is safe and happy.

But the Storm Worm folks have dark agendas when they send out their evil holiday greeting emails at year's end.

I wonder if we will ever choose to solve the inherent insecurity of email? Or are we stuck because it's so hard to change from the current infrastructure? If the US can upgrade from NTSC to HDTV (mandated by law, and of course delayed numerous times), maybe governments need to force a change from SMTP to something less spoof-prone.

Thursday, December 27, 2007

Access Management from the Trenches

Call it user access management, account management, identity management, or whatever else. I am talking about making sure that authorized users, and only authorized users, have access to applications, operating systems, and databases.

When new employees are hired, when existing employees leave, or when employees' jobs change, their access privileges have to change. To my mind, this is probably the most fundamental security control you can think of. It is definitely one you want to get right.

Here's a quick roadmap for fixing your company's access management processes. I'm a big fan of the triage approach to infosec: fix the worst first.

Too many companies don't do a good job of decommissioning user accounts when the user separates from the company. It isn't too difficult to find stories of disgruntled employees causing sabotage after they walk out the door for the last time. Work with Human Resources and access managers (system admins? infosec admins? helpdesk?) to devise a workflow. In most companies, a list of separated employees is sent out weekly, and access managers disable accounts for a period of time before removing them altogether. Write up a simple process describing the steps and a simple policy capturing the strategy, then run them through the appropriate management chains to make them official. You might want to devise a more expedient procedure for higher risk separations: privileged users, layoffs, terminations, etc.
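
A minimal sketch of that weekly workflow, in Python. The feed format and the disable/delete helpers are hypothetical stand-ins; in real life they'd be calls into your directory (LDAP, AD, whatever you run).

    # Sketch: disable separated users now, delete after a grace period.
    import csv, io
    from datetime import date, timedelta

    GRACE_DAYS = 90

    def disable_account(username):
        print(f"disable {username}")   # stand-in for the real directory call

    def delete_account(username):
        print(f"delete {username}")    # stand-in for the real directory call

    def process_hr_feed(feed, today):
        # Assumed feed columns: username, separation_date (ISO format)
        for row in csv.DictReader(feed):
            separated = date.fromisoformat(row["separation_date"])
            if separated <= today:
                disable_account(row["username"])
            if separated + timedelta(days=GRACE_DAYS) <= today:
                delete_account(row["username"])

    sample = io.StringIO("username,separation_date\njdoe,2008-01-15\nasmith,2008-04-14\n")
    process_hr_feed(sample, today=date(2008, 4, 18))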

With employee separation in place, another tricky problem to solve is that of user transfers. You want to prevent, for example, Joe who's been transferring around the company for 5 years from accumulating access to everything. In an ideal world, you have beautifully designed processes and identity management technology that readily manage the lifecycle of a user's access. But here in the real world you probably have dozens or hundreds of systems with no real hope of a unified technology or procedure to ensure that when Joe transfers from Marketing to Advertising, his access is instantly changed.

Work with HR and see if you can insert a step into any existing transfer process. Maybe HR can include employee transfers in their weekly list (I've seen this done at a large telco) or in a smaller company maybe they can simply send transfers ad-hoc to system owners or to a mailing list. As with all things infosec, find a creative, practical solution.

Another excellent control to implement is periodic account access reviews conducted by system owners, data owners, managers, etc. This is conceptually simple, fairly simple to implement, and, better yet, distributed: those in the know will be doing the review. I recommend a period of 4, 6, or 12 months for the review. Too frequent, and it is a burden that could get skipped; not frequent enough, and it won't be very effective. As with all things infosec, it is a balancing act of cost, risk mitigation, and human behavior. And as we all know, we aren't interested in perfect security but practical risk reduction. A company whose managers check their employees' accounts every so often will have reduced risk substantially. You can always complement this control with others (like logging & monitoring). Document the strategy in your account management policy and the review process as another procedure, and run both up through the appropriate management chains to make them official.
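
Here's what generating that review might look like, sketched in Python with made-up account records and a print in place of real email: group accounts by the owning manager, send each a list, and track who responds.

    # Sketch: periodic access review grouped by manager.
    from collections import defaultdict

    accounts = [   # toy data; in practice, pulled from each system
        {"user": "jsmith", "system": "billing", "manager": "mlee"},
        {"user": "jsmith", "system": "crm",     "manager": "mlee"},
        {"user": "tdoe",   "system": "billing", "manager": "rkhan"},
    ]

    by_manager = defaultdict(list)
    for acct in accounts:
        by_manager[acct["manager"]].append(acct)

    for manager, rows in by_manager.items():
        listing = "\n".join(f"  {r['user']} -> {r['system']}" for r in rows)
        print(f"To {manager}: please confirm or revoke within 30 days:\n{listing}")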

Finally, there's the onboarding process: giving users only the access they need from day one. It's been my experience that even in the most security-clueless environments, companies get this right--they have to, if they want their new employees to be productive. Though I haven't seen or heard of it, if your company gives new hires access to everything, that may be your highest risk. Either find the highest risk business area and fix its onboarding process before moving to the next, or classify users and their access broadly--you can add granularity later (a coarse sketch follows). Work with the appropriate management to define appropriate access control.
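
The coarse classification can be as simple as a role-to-entitlements table. A hypothetical Python sketch, with roles and entitlement names invented for illustration:

    # Sketch: coarse role-based provisioning for new hires.
    ROLE_ENTITLEMENTS = {
        "marketing":   {"email", "crm"},
        "advertising": {"email", "crm", "ad-platform"},
        "finance":     {"email", "erp", "billing"},
    }

    def provision(username: str, role: str) -> set:
        entitlements = ROLE_ENTITLEMENTS.get(role)
        if entitlements is None:
            raise ValueError(f"no access profile defined for role {role!r}")
        # Grant exactly the role's bundle; anything more needs an approved request.
        return set(entitlements)

    print(provision("jdoe", "marketing"))   # e.g. {'email', 'crm'}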

You need management backing to do any of this, which means it has to register as a real problem amongst the constellation of business problems senior management faces. And in this day and age, unless the company has far bigger fires burning, due care demands that company leaders fix bad access management. Work with management at as high a level as possible (HR? higher if you can) to get done what you can. Keep the scale small, be smart about what you can and can't accomplish, focus on reducing risk rather than eliminating it, and get the biggest bang for the buck, and you should wind up with significantly less risk in a fairly short timeframe.

Tuesday, December 04, 2007

Foreign Developed Code

One area that is of interest to me is the security risk associated with foreign-developed code. The premise is that code developed in, let's say France, could have malicious code hidden in it that could be used to compromise the confidentiality of data being processed.

As an Infosec pro, how do you deal with this potential threat? As always, through careful analysis of the related risk. Let's think about this from an enterprise-wide point of view.

Starting with threats, is your company in an industry that is particularly targeted by criminals or spies (corporate / government)? If the answer is yes, I have to ask why we wouldn't be similarly worried about the threat of malicious domestically developed code. It's not like there is a shortage of American hackers, and not everyone in this country bleeds red, white, and blue. How hard would it be for a malicious entity to bribe a disgruntled but intelligent programmer? Or plant someone in a target company? All of this goes toward analyzing the likelihood of the risk being realized.

Unless you'd rather freak out about foreign code because it seems scarier. After all, people are phenomenally good at accurately judging real risk. Yes. I am being sarcastic.

In a sense we've already sort of looked at the data. Threat sources target data and have different motivation levels for different types of data. So it is often hard to talk about threats, threat sources, and data independently. Nevertheless, consider the impact to the company of data compromise (in this case, we're primarily concerned with a compromise of confidentiality). This impact, combined with likelihood above gets at the general risk of malicious COTS software (whether foreign or domestic).

Now, what about safeguards to mitigate this problem? Some propose intensive source code review, a time consuming, extremely costly endeavor. Even if you can afford a source code license (or can get one in the first place), I question its value in mitigating risk. How do you know you can trust that the source code given to you is the source used to compile the binaries you're given? In various scenarios, the source may be cleaned before it's handed over, but be sane: consider the realistic likelihood of subversion. Alternatively, some clever fellow could subvert the compilers at the company to insert malicious code, a la Reflections on Trusting Trust.

The point is that you could spend a lot of money reviewing source without, I think, reasonable assurance that you're substantially mitigating the problem. But hey, it sounds really cool and hardcore. It sounds like you're doing your best. Isn't that what security is all about?

(Uh, no, it isn't).

You could analyze binaries. Use a debugger. That's even more time consuming, if you can even find someone with that expertise or find a suitable decompiler (and then you're back to source code, but at least this time you stand half a chance). All this assumes you aren't violating your license agreement; most prohibit reverse engineering. Might want to chat with your legal department. Have fun with that.

Or hey, just ignore those pesky licensing issues. Who cares, right?

Hopefully you, if no one else.

Or you could analyze the behavior of the binaries. This is also fruitless, because a clever individual can hide the behavior of rogue code in ways that no affordable, justifiable amount of lab testing is likely to uncover. What if the code "phones home" only on some particular day? Are you going to test all possibilities? What if it is on a particular day of the year? Or every other year? Or only when a debugger / network sniffer isn't running? Or only when the internet is accessible? What if the information is sent via a side channel? How many possibilities are there to investigate? You are smart enough to come up with a dozen more scenarios to avoid detection. But none of us is likely smart enough to come up with ways to detect a combination of these tricks without knowing ahead of time which tricks were used.

Not that this is a hopeless situation. If your threat sources are less sophisticated, you may find these controls helpful albeit expensive (you have to carefully control scope/cost). Great, if the risk justifies it. Here's my way of thinking. Find cheap ways to reduce the risk.

  • Address the low hanging fruit of unsophisticated insertion of trojans with anti-virus scanning of source.
  • Check digital hashes from the vendor (and require the vendor to publish them and store them on an isolated server); see the sketch below.
  • Sure, go ahead and put the software in a lab and see if it does anything obviously bad, without spending a fortune.
  • In deployment, why not implement good egress filtering? That takes care of obvious phone home attacks.
  • Investigate vendors and their development practices prior to selection and purchase. If they implement good separation of duties and promotion controls it will be far harder for someone to subvert the process.
Do all of this in proportion to the risk. And consider using open source software (I said consider Mr. Corporate Suit Nervous Nelly; there are many factors to think about). Remember that open source is going to be harder to subvert with all the eyes on it.
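
The hash check from the list above is cheap to script. A minimal Python sketch, with a throwaway file standing in for the vendor package; the published digest would come from the vendor's isolated server, not from alongside the download.

    # Sketch: verify a vendor package's SHA-256 digest before installing.
    import hashlib

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    with open("vendor_package.bin", "wb") as f:   # demo file only
        f.write(b"pretend this is the vendor binary")

    expected = sha256_of("vendor_package.bin")    # placeholder for the published digest
    if sha256_of("vendor_package.bin") != expected:
        raise SystemExit("hash mismatch: do not install")
    print("hash verified")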

While reverse engineering, debuggers, intricate lab testing, and source code review all sound sexy and cool (and some of us would love to do that kind of work), these controls are best reserved for the most intense risks and the most dedicated threat sources, with the expense carefully weighed against the risk. Even then, more pedestrian security controls can give you a lot more bang for the buck.

That's what Infosec is about.

Friday, November 23, 2007

Analyze Risk

This article in Computerworld brings up an interesting problem. It reflects the claims of one Thierry Zoller who has been studying bugs in anti-virus software.

"...companies that try to improve security by checking data with more than one antivirus engine may actually be making things worse. Why? Because bugs in the 'parser' software used to examine different file formats can easily be exploited by attackers, so increasing your use of antivirus software increases the chances that you could be successfully attacked."

Zoller has found a number of parser bugs in anti-virus software. At least some, I am sure, are known to the most sophisticated hackers. But the level of risk of the two options is not as clear cut as Zoller states. The problem at hand is one of analyzing complex and very subtle shades of risk when engineering security.

"People think that putting one AV engine after another is somehow defense in depth. They think that if one engine doesn't catch the worm, the other will catch it," he said. "You haven't decreased your attack surface; you've increased it, because every AV engine has bugs."

Is it better to have only 1 parser? Or are these holes so dangerous we should have no anti-virus? Or is Zoller overstating the risk, and so 2 parsers are better than one? Sure, the attack surface increases the more parsers you have--for the attack vector targeting the parsers. Meanwhile, coverage against the virus/trojan threat vector improves. So, what is the optimal balance in this tradeoff?

While Zoller appears to ignore this crucial question as a researcher, infosec professionals responsible for architecting and engineering security solutions in their organizations don't have that luxury. Not if they want to spend resources where it counts the most, and provide a sufficient level of security to their companies at a proportional price.

To find the optimal balance, look at the risk of each available alternative while seeking to minimize both risk and cost (ultimately the business has to decide what level of cost and risk mitigation is acceptable).

As we all know, risk is a product of likelihood and impact. Likelihood of an attack is based on types of threat sources, their capabilities, motivations, and attraction to your information assets; also, how widespread or easily obtained is information about the attacks in question. Impact of a successful compromise is based on the attack itself, the intent of the attacker, value of your data, and mitigating security controls in place.

For the situation above: on one hand, everyday viruses and email-borne trojans are very common. Attackers range from the fairly unsophisticated, aiming to expand a bot empire and/or steal personal information, to motivated and reasonably equipped corporate spies targeting companies with spear-phishing attacks and such. Less common are highly sophisticated attackers leveraging true 0-day exploits, such as those in anti-virus parsers. But they are out there.

Arguably, the more targeted and sophisticated the threat source and attack, the more impact is possible per compromise, although in aggregate, common anti-virus threats may represent more financial risk to the company through sheer volume than highly sophisticated anti-virus parser attacks. It depends on the value of the data, and the impacts of its compromise.
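
To see how the tradeoff plays out numerically, here's a toy expected-loss comparison in Python. Every probability and dollar figure is a made-up assumption; the point is the shape of the analysis, not the numbers.

    # Toy expected-loss comparison: 1 vs. 2 anti-virus engines.
    MALWARE_ATTEMPTS = 10_000       # assumed malicious mails per year
    MISS_RATE = 0.05                # assumed per-engine miss rate (independent engines)
    COST_PER_INFECTION = 10_000     # assumed cleanup/impact cost
    PARSER_EXPLOIT_PROB = 0.001     # assumed yearly chance an engine's parser is exploited
    COST_PARSER_COMPROMISE = 2_000_000   # assumed impact of a gateway compromise

    for engines in (1, 2):
        malware_loss = MALWARE_ATTEMPTS * MISS_RATE**engines * COST_PER_INFECTION
        parser_loss = engines * PARSER_EXPLOIT_PROB * COST_PARSER_COMPROMISE
        print(f"{engines} engine(s): expected annual loss ${malware_loss + parser_loss:,.0f}")

Under these invented numbers the second engine pays for itself many times over, parser bugs and all; different assumptions could flip the answer, which is precisely why you run the analysis.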

Don't forget to consider mitigating controls: look at existing controls, and consider additional controls--and their cost--for each alternative. Suppose we use an architecture that isolates the email anti-virus engines, with excellent egress filtering in place, among other countermeasures. Such controls alone may largely mitigate the risk of the anti-virus parser compromise attack vector.

Likewise, the anti-virus is itself a control. The number of anti-virus engines is strongly related to the number of malware emails that pass through (and result in a successful compromise). Fewer engines mean more likelihood of compromise through that attack vector.

Running 2 a-v parsers doesn't guarantee doom. But it might be the wrong choice. It depends on all these factors, and risk analysis will help you answer the question and make good tradeoff decisions.

Don't forget that threats change. The best option today may be terrible in a month, or a year, or some time in the future. Keep that in mind, and revisit risk analysis tradeoffs, too.

Saturday, September 29, 2007

TD Ameritrade Lawsuit

I wonder if the era of infosec-driven lawsuits is finally here to stay? I've been figuring on a big rise in lawsuits one of these years. A number of things can motivate companies to do a better job of information security risk mitigation: laws, legal enforcement, standards, best practices, statistics, news stories.

But short of Sarbanes-Oxley and perhaps lawsuits, few things provide real drive, other than a desire by company leadership to do the right thing and protect customer data. Alternatively, lawsuits may simply push negligent companies to focus on covering their butts, legally speaking, rather than demonstrating due care and implementing basic information security controls. Can we say that malpractice lawsuits have improved healthcare?