Monday, December 31, 2007
Happy New Year
I mean it. Hope yours is safe and happy.
But the Storm Worm folks have dark agendas when they send out their evil holiday greeting emails at year's end.
I wonder if we will ever choose to solve the inherent insecurity of email. Or are we stuck because it's so hard to change from the current infrastructure? If the US can upgrade from NTSC to HDTV (mandated by law, and of course delayed numerous times), maybe governments need to force a change from SMTP to something less spoof-prone.
Thursday, December 27, 2007
Access Management from the Trenches
Call it user access management, account management, identity management, or whatever else. I am talking about making sure that authorized users, and only authorized users, have access to applications, operating systems, and databases.
When new employees are hired, existing employees leave, or employees' jobs change, their access privileges have to change. To my mind, this is probably the most fundamental security control you can think of. It is definitely one you want to get right.
Here's a quick roadmap for fixing your company's access management processes. I'm a big fan of the triage approach to infosec: fix the worst first.
Too many companies don't do a good job of decommissioning user accounts when the user separates from the company. It isn't too difficult to find stories of disgruntled employees causing sabotage after they walk out the door for the last time. Work with Human Resources and your access managers (system admins? infosec admins? helpdesk?) to devise a workflow. In most companies, a list of separated employees is sent out weekly, and access managers disable accounts for a period of time before removing them altogether. Write up a simple procedure describing the steps and a simple policy capturing the strategy, then run both through the appropriate management chains to make them official. You might also want a more expedient procedure for higher risk separations: privileged users, layoffs, terminations, and so on.
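To make that workflow concrete, here is a minimal sketch of the weekly pass, assuming HR delivers a CSV of separations and assuming hypothetical disable_account/remove_account hooks into whatever directory or IAM tooling you run. Treat it as an illustration of the flow, not a finished script.

```python
# Minimal sketch of a weekly decommissioning pass. Assumes HR exports a CSV
# of separated employees with columns: username, separation_date (ISO format).
# The disable/remove hooks are placeholders for your directory or IAM tooling.
import csv
from datetime import date, timedelta

RETENTION_DAYS = 90  # keep disabled accounts this long before removal

def disable_account(username: str) -> None:
    print(f"[disable] {username}")   # replace with your LDAP/AD/API call

def remove_account(username: str) -> None:
    print(f"[remove] {username}")    # replace with your LDAP/AD/API call

def process_separations(csv_path: str) -> None:
    today = date.today()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            separated = date.fromisoformat(row["separation_date"])
            if today - separated >= timedelta(days=RETENTION_DAYS):
                remove_account(row["username"])
            else:
                disable_account(row["username"])

if __name__ == "__main__":
    process_separations("weekly_separations.csv")  # hypothetical HR export
```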
With employee separation in place, another tricky problem to solve is that of user transfers. You want to prevent, for example, Joe, who's been transferring around the company for 5 years, from accumulating access to everything. In an ideal world, you have beautifully designed processes and identity management technology that readily manage the lifecycle of a user's access. But here in the real world, you probably have dozens or hundreds of systems with no real hope of a unified technology or procedure to ensure that when Joe transfers from Marketing to Advertising, his access is instantly changed.
Work with HR and see if you can insert a step into any existing transfer process. Maybe HR can include employee transfers in their weekly list (I've seen this done at a large telco), or in a smaller company maybe they can simply send transfer notices ad hoc to system owners or to a mailing list. As with all things infosec, find a creative, practical solution.
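If it helps, here is a rough sketch of the kind of check an access manager could run when a transfer notice arrives. The department-to-baseline mapping and the group names are made up for illustration; the point is simply to surface access carried over from the old role.

```python
# Rough sketch of a transfer check: compare a transferred user's current
# group memberships against the baseline expected for the new department.
# The baseline mapping and group names below are hypothetical.
DEPARTMENT_BASELINE = {
    "Marketing":   {"email", "crm", "marketing-share"},
    "Advertising": {"email", "crm", "ad-platform"},
}

def transfer_review(current_groups, new_department):
    expected = DEPARTMENT_BASELINE[new_department]
    leftover = set(current_groups) - expected   # access to revoke
    missing = expected - set(current_groups)    # access to grant
    return leftover, missing

leftover, missing = transfer_review(
    {"email", "crm", "marketing-share"}, "Advertising"
)
print("Revoke:", sorted(leftover))
print("Grant:", sorted(missing))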
Another excellent control to implement is periodic account access reviews conducted by system owners, data owners, managers, etc. This is conceptually simple, fairly simple to implement, and better yet, it is distributed: those in the know will be doing the review. I recommend a period of 4, 6, or 12 months for the review. Too frequent and it is a burden and could get skipped; not frequent enough and it won't be very effective. As with all things infosec, it is a balancing act of cost, risk mitigation, and human behavior. And as we all know, we aren't interested in perfect security but in practical risk reduction. A company whose managers check their employees' accounts every so often will have reduced risk substantially. You can always complement this control with others (like logging and monitoring). Document the strategy in your account management policy, document the review process as another procedure, and run both up through the appropriate management chains to make them official.
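As a sketch of how a review cycle might be kicked off, the snippet below groups accounts by owner so each system or data owner gets a packet to attest to. The account data here is illustrative; in practice you would pull it from each system or your directory.

```python
# Sketch of a periodic access review: build a per-owner packet of accounts
# for attestation. The account records below are illustrative only.
from collections import defaultdict

accounts = [
    {"system": "payroll", "owner": "alice@example.com", "user": "joe"},
    {"system": "payroll", "owner": "alice@example.com", "user": "sue"},
    {"system": "crm",     "owner": "bob@example.com",   "user": "joe"},
]

def build_review_packets(accounts):
    packets = defaultdict(list)
    for acct in accounts:
        packets[acct["owner"]].append((acct["system"], acct["user"]))
    return packets

for owner, entries in build_review_packets(accounts).items():
    print(f"Review packet for {owner}:")
    for system, user in sorted(entries):
        print(f"  {system}: {user}  -> keep or remove?")
```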
Finally, there's the onboarding process: the question of giving users only the access they need. It's been my experience that even in the most security-clueless environments, companies get this right--they have to, if they want their new employees to be productive. If your company does give new hires access to everything (I haven't seen or heard of it), that may be your highest risk. Either find the highest risk business area and fix its onboarding process before moving to the next, or classify users and their access broadly--you can add granularity later. Work with the appropriate management to define appropriate access controls.
You need management backing to do any of this. That means it has to register as a real problem, even amongst the constellation of business problems senior management faces. In this day and age, unless the company is fighting bigger fires, due care demands that company leaders fix bad access management. Work with management at as high a level as possible (HR? higher if you can) to get done what you can. Keep the scale small, be smart about what you can and can't accomplish, focus on reducing risk rather than eliminating it, go for the biggest bang for the buck, and you should wind up with significantly less risk in a fairly short timeframe.
Tuesday, December 04, 2007
Foreign Developed Code
One area that is of interest to me is the security risk associated with foreign-developed code. The premise is that code developed in, let's say France, could have malicious code hidden in it that could be used to compromise the confidentiality of data being processed.
As an Infosec pro, how do you deal with this potential threat? As always, through careful analysis of the related risk. Let's think about this from an enterprise-wide point of view.
Starting with threats: is your company in an industry that is particularly targeted by criminals or spies (corporate or government)? If the answer is yes, I have to ask why we wouldn't be similarly worried about the threat of malicious domestically developed code. It's not like there is a shortage of American hackers, and not everyone in this country bleeds red, white, and blue. How hard would it be for a malicious entity to bribe a disgruntled but intelligent programmer? Or to plant someone in a target company? All of this goes toward analyzing the likelihood of the risk being realized.
Unless you'd rather freak out about foreign code because it seems scarier. After all, people are phenomenally good at accurately judging real risk. Yes. I am being sarcastic.
In a sense we've already sort of looked at the data. Threat sources target data and have different motivation levels for different types of data, so it is often hard to talk about threats, threat sources, and data independently. Nevertheless, consider the impact to the company of data compromise (in this case, we're primarily concerned with a compromise of confidentiality). This impact, combined with the likelihood above, gets at the general risk of malicious COTS software (whether foreign or domestic).
Now, what about safeguards to mitigate this problem? Some propose intensive source code review, a time consuming, extremely costly endeavor. Even if you can afford a source code license (or get one in the first place), I question its value in mitigating risk. How do you know you can trust that the source code given to you is the source used to compile the binaries you're given? In various scenarios the source may be cleaned before it's handed over, but be sane: consider the realistic likelihood of subversion. Alternatively, some clever fellow could subvert the compilers at the company to insert malicious code, a la Reflections on Trusting Trust.
The point is that you could spend a lot of money reviewing source without, I think, reasonable assurance that you're substantially mitigating the problem. But hey, it sounds really cool and hardcore. It sounds like you're doing your best. Isn't that what security is all about?
(Uh, no, it isn't).
You could analyze the binaries. Use a debugger. That's even more time consuming, if you can even find someone with that expertise or find a suitable decompiler (and then you're back to source code, but at least this time you stand half a chance). All this assumes you aren't violating your license agreement; most prohibit reverse engineering. Might want to chat with your legal department. Have fun with that.
Or hey, just ignore those pesky licensing issues. Who cares, right?
Hopefully you, if no one else.
Or you could analyze the behavior of the binaries. This is also fruitless, because a clever individual could hide the behavior of rogue code in such a way that no affordable, justifiable amount of lab testing is likely to uncover it. What if the code "phones home" only on some particular day? Are you going to test all possibilities? What if it is on a particular day of the year? Or every other year? Or only when a debugger or network sniffer isn't running? Or only when the internet is accessible? What if the information is sent via a side channel? How many possibilities are there to investigate? You are smart enough to come up with a dozen more scenarios to avoid detection. But none of us is probably smart enough to come up with ways to detect a combination of these tricks without knowing ahead of time which tricks were used.
Not that this is a hopeless situation. If your threat sources are less sophisticated, you may find these controls helpful, albeit expensive (you have to carefully control scope and cost). Great, if the risk justifies it. Here's my way of thinking: find cheap ways to reduce the risk.
- Address the low hanging fruit of unsophisticated insertion of trojans with anti-virus scanning of source.
- Check digital hashes from the vendor (and require the vendor to use these and store them on an isolated server); see the sketch after this list.
- Sure, go ahead and put the software in a lab and see if it does anything obviously bad, without spending a fortune.
- In deployment, why not implement good egress filtering? That takes care of obvious phone home attacks.
- Investigate vendors and their development practices prior to selection and purchase. If they implement good separation of duties and promotion controls it will be far harder for someone to subvert the process.
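As an illustration of the hash-check bullet above, here is a small sketch that verifies a downloaded package against a vendor-published SHA-256 digest before the software goes anywhere near a build or production system. The file name and expected digest are placeholders.

```python
# Sketch of verifying a vendor package against a published SHA-256 digest.
# The file path and expected digest below are placeholders; the real digest
# would come from the vendor and live on an isolated, write-protected server.
import hashlib

def sha256_of(path: str, chunk_size: int = 65536) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path: str, expected_hex: str) -> bool:
    actual = sha256_of(path)
    if actual != expected_hex.lower():
        print(f"MISMATCH for {path}: expected {expected_hex}, got {actual}")
        return False
    print(f"OK: {path}")
    return True

if __name__ == "__main__":
    verify("vendor_package.tar.gz", "replace-with-vendor-published-digest")
```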
While reverse engineering, debuggers, intricate lab testing, and source code review all sound sexy and cool (and some of us would love to do that kind of work), these controls are best reserved for the most intense risks and dedicated threat sources, with the expense carefully weighed against the risk. Even then, more pedestrian security controls can give you a lot more bang for the buck.
That's what Infosec is about.