A few months ago I was trying to automate the retrieval and analysis of JavaScript exploits from a site that was designed to target vulnerable browsers. It read the User-Agent header on the server side, served exploits only to vulnerable versions of IE, and displayed ads to everybody else, so my attempts to script the get-and-grep analysis I was doing weren't working with curl or wget. So I wrote this:
#!/bin/sh
if [ -z "$1" ]; then
    echo "Usage: $0 [url]"
    exit 1
fi
# Fetch the URL while claiming to be IE6 on Windows XP
/usr/bin/wget -U "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; InfoPath.1; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30)" "$1"
All it does is pass the URL to wget, which sends an IE6 User-Agent string with its request. Nothing fancy, but it was worth the 30 seconds it took me to whip it up.
Wednesday, January 30, 2008
How To Develop Firewall Policy For An Existing Network
So, hopefully in 2008, you're not finding yourself with an Internet connection and no firewall. But it seems to have happened to this guy. That said, I have worked on two projects in my past life where this was exactly the case, and we had to put in a firewall and develop policy around existing traffic, and do so without documentation or clear guidance from the company. And in fairness, I don't know that there's no firewall; it may be a situation where there's a firewall that needs replacing and nobody has access to it. (That's happened to me before as well, but I digress.)
So here's my response to Ruggero. It was long, so I thought I'd also post it here:
Having done pretty much exactly this twice before, I can tell you that I wouldn't use a sniffer, and I especially wouldn't use an automated rule generator of any sort. You need to make sure any traffic you allow is deliberate and important to the organization. An automated tool won't make such judgments, and you will end up allowing all of the crap that's already on your network that you should probably be blocking.
My advice on how to proceed:
Step 1. Put the firewall in place with a policy that allows all traffic to pass. Turn on logging. Use this instead of a sniffer as analyzing firewall logs will be much easier. Putting the firewall in first will also separate the physical and routing changes you make to the network from the policy changes. This will aid in troubleshooting connectivity issues down the road.
Step 2. Analyze logs. I wrote a shell script to do this with PIX logs; basically all it did was identify each unique set of srcaddr, dstaddr, dstport and count the number of times each set occurred in the log file. Sort the results by number of occurrences and you will find either a) common traffic or b) crappy protocols at the top of the list. (A rough sketch of that pipeline follows this list.)
Step 3. Investigate the results of your log analysis. Find out what each set is and then compare it to your business requirements and security policy. Determine whether or not the traffic should be allowed.
Step 4. Write rules for the traffic you decide to allow in Step 3. To help with your process, you may also want to write rules for traffic you wish to block.
Step 5. This is really a GOTO for step 2. You're going to eliminate known traffic from your logs and then analyze again. It's especially cool if the firewall includes rule numbers in its logs, so when you made those rules in step 4, it's now easier to separate known and unknown traffic. Loop through steps 2-5 for as many iterations as appropriate.
Step 6. Change from 'permit all' to 'deny all.' Once you've sufficiently analyzed data and implemented rules, cut over your default policy and test your business-related apps. If you made block rules in step 4 that are superseded by this policy change, you may want to remove them now in order to keep your firewall rule set as simple as it can be.
Step 7. Wait for the phone to ring. Depending on the size of the network and the volume of traffic, it will take you anywhere from a week to a month to get through step 6. Step 7 should last another 30-90 days, and basically you want to let the organization go through a long enough business cycle that things like payroll and billing can run through to completion at least once. You probably trampled on one or two things that are seldom used so they didn't generate traffic before, and this is when you find them and write rules for them.
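For what it's worth, here's a rough sketch of the kind of Step 2 analysis script I mean. It assumes PIX "Built ... connection" syslog messages (the 302013/302015 variety) in a plain text file, and the sed expression that pulls out source address, destination address, and destination port is an approximation of that format, so treat it as a starting point and check it against your own logs rather than as the exact script I used.

#!/bin/sh
# Sketch: count unique src,dst,dport tuples in a PIX syslog file.
# Assumes lines like "Built ... connection ... for <if>:<addr>/<port> ...
# to <if>:<addr>/<port>"; verify the field order against your own logs.
if [ -z "$1" ]; then
    echo "Usage: $0 [pix-syslog-file]"
    exit 1
fi
grep 'Built' "$1" |
  sed -n 's/.* for [^:]*:\([0-9.]*\)\/[0-9]* .* to [^:]*:\([0-9.]*\)\/\([0-9]*\).*/\1,\2,\3/p' |
  sort | uniq -c | sort -rn | head -n 50

On each pass through Step 5, grep -v out the tuples (or rule numbers) you've already accounted for before re-running the count, and the unknown traffic will float to the top.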
Good luck. This kind of work is typically a real bear.
PaulM
There, now I don't ever have to lay that out again. :)
Friday, January 25, 2008
Quick Links
Since I haven't had any time to write any new blog posts, let me point you to a couple of recent must-reads from others' blogs.
- Frank Boldewin @ Offensive Computing has a new advanced unpacking tutorial. The best part is that it's "Part I," implying that there's more to come. Required reading for sure.
- Ronaldo at SecuriTeam has put together a Google Calendar of 2008 Security Events.
- Wouldn't it be ironic if WebSense started blocking Blogspot or GooglePages? There's good reason to. (But then you couldn't read my blog, and that's no good...)
- And finally, this post to EthicalHacker.net about a new infosec cert that sounds like it might actually be hard to get.
Sunday, January 20, 2008
An Interesting Budget Item
The company I work for has some neat incentive programs that it makes available to its staff. One of them is the result of the company being in the health care vertical and also having a parking shortage at our headquarters. To ease the parking problem, we offer a "healthy parking" incentive where people receive a small cash payment for parking in the most inconvenient, faraway lot. It works so well that there's now a waiting list to get a lousy parking spot. If small cash incentives can work this well for parking, why not other things?
I wish I'd had this idea last year when we were putting together the '08 budget for infosec: "healthy computing!" In this case, users who have local admin on their company-issued computers would willingly give up their elevated privileges for a cash payment. Sounds expensive, right? But what are you spending on anti-virus and other host-security products? It's probably pretty close, and at least today, there's more value in reducing local admin access than there is in running anti-virus. Not to mention the time and internal cost of proving that a user doesn't need local admin privileges in order to revoke them.
And while I'm denigrating the value of AV products, I'd like to share with you my favorite blog post of the month. It's from Amrit Williams' blog, which is one of my regular reads.
Monday, January 7, 2008
Bad News for Mac Users
If there is still any doubt at all about the security of Mac OS X, I think we are about to find out. Someone at the Army has resurrected an old idea. They're going to start using more Macs because...
"...fewer attacks have been designed to infiltrate Mac computers, and adding more Macs to the military's computer mix makes it tougher to destabilize a group of military computers with a single attack."
Right. I can't think of a single instance of a vulnerability that affects both Windows and Mac. And Apple has always been proactive and fair when dealing with security researchers, so that's good.
Seriously, though, the real reason Macs aren't subjected to more drive-by downloads and malware in general isn't that OS X is significantly more secure than XP or Vista. It's that OS X is still a tiny fraction of the potential pool of malware victims. (An all-time high of 7% as of last month.) It's not worth the money for the botnet czars to develop exploits and bots for OS X. But if somebody really big like, I dunno... the U.S. Army were to deploy Macs in large numbers, then the scale might begin to tip toward profitability.
And then hold on to your blackberry green tea frappuccino, pal. Here it comes.
Sunday, January 6, 2008
Why Some InfoSec Training Is Sacred
Here's another one for the CISOs to ponder.
Think about how you handle professional development across IT. I'll bet that, if you are fortunate enough to have a training budget, it's based on the number of FTEs in each of your IT budget columns. And this makes sense - it's perfectly fair to invest equally in each person working for you. So it only makes sense that when you reduce spending on training, you do so equally as well. But this could be a mistake.
Professional development, and most IT spending for that matter, can be tied to needs that the business controls. For instance, if you think about server platforms, I'm sure you want to train your people on Windows Server 2008 before you start widely deploying it. But the decision can also be made to subsist on Windows Server 2003 for another year or two. (In fact, you probably have third-party apps holding you back anyway, but I digress.)
Now consider your incident response team. I'm sure you want to keep them trained up on malware analysis, forensics, the latest threats and exploits, etc., etc. But that's money you may want to spend elsewhere. Unfortunately, you and your business don't get to decide whether or not the new threats that come out this year are going to apply to you. They will. They do.
So you see what I'm getting at, right? You can postpone investing in new technologies and therefore the training that goes with them. But you can't postpone new threats. And so you can't postpone spending on infosec training and also expect to be as prepared to handle security threats this year as you were last year. So before you uniformly cut IT training, understand that where infosec training is concerned, there is an elevated risk level that the business may not be able to manage in other ways.
Thursday, January 3, 2008
ArcSight Test Alert Connector & Replay
If you use ArcSight and haven't heard of the Test Alert connector before, listen up. This should especially interest you if you have a test environment or need to perform stress testing on your ArcSight deployment or configuration.
There are really two components here. The first is part of your Manager: the ability to generate "replay" files for use with the Test Alert connector. Replay files are actually CSV-format event exports, so you may find this functionality very useful even if you never intend to actually replay them anywhere.
The first step is to think about what events you would like to export. Space is an issue depending on what time frame and type of events you want to export. If you need to stress test your hardware or rules, make sure you are going to get enough events to sustain the event/min or event/sec rate that you'd like to test up to. But beyond that, less is more, since it doesn't take much to get a very large (uncompressed, CSV text) replay file. Also, this feature makes use of filters defined in your ArcSight manager, so if you have specific events you wish to select, review your current filters or create a new one for your export.
The next step is pretty easy. Log in to the manager and run 'arcsight replayfilegen' from the manager/bin directory. Then follow the prompts to log in, select a file name, time range, and filter.
Now that you're generating replay files, it's time to set up your Test Alert connector. Because you're not going to run Test Alert as a service, you can install it anywhere that's convenient for your purposes: your test server, your laptop, whatever. The install is the same as for any other connector; just select 'Test Alert' in the type dialog and then finish the install as you normally would.
To use it, copy your replay files into the 'current' directory and launch 'arcsight agents' from the 'bin' directory. You will need a GUI display for this, and I have found the X11 display to be flaky with missing or slow redraws to remote X servers. All of the *.events files in the connector's 'current' directory will be displayed. You can turn any combination on/off, select flow rate, and then click 'Continue' to start pumping events into your manager.
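If it helps, here's roughly what the whole loop looks like from a shell. The /opt/arcsight paths below are assumptions for the sake of the example - substitute wherever your Manager and the Test Alert connector actually live - and both 'arcsight' commands are interactive, so run them from a terminal with a display available.

#!/bin/sh
# Sketch of the replay workflow described above; install paths are assumptions.
MANAGER_HOME=/opt/arcsight/manager        # assumed Manager install location
CONNECTOR_HOME=/opt/arcsight/testalert    # assumed Test Alert connector location

# 1. On the Manager: generate a replay file. This prompts for login,
#    output file name, time range, and filter.
cd "$MANAGER_HOME/bin" && ./arcsight replayfilegen

# 2. Copy the resulting .events file(s) into the connector's 'current'
#    directory (adjust the source path to wherever you saved them).
cp /path/to/your/*.events "$CONNECTOR_HOME/current/"

# 3. Launch the connector GUI, pick the files and flow rate, and hit Continue.
cd "$CONNECTOR_HOME/bin" && ./arcsight agents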