Wednesday, April 25, 2007
So I was playing with my Mac today, trying to make a desktop that's as useful as my KDE/SuperKaramba setup on my Kubuntu laptop. I didn't find everything I was looking for, but I did find GeekTool, a cool little app that lets you display the output of command-line tools or tail a log file right on your desktop.
Tuesday, April 24, 2007
Phishing Credit Unions
You may have caught this story in the Washington Post about hacked servers and phishing attacks at Indiana U. If you haven't, I recommend that you do read it. It stars Phishing's man of the hour, Chris Soghoian. Go on. I'll wait.
OK, so the interesting thing about the phishing attack at IU is that it seems the phishermen were targeting specific credit unions. From the standpoint of traditional bank phishing attacks, targeting small credit unions doesn't make a ton of sense. Local credit unions typically have only thousands or tens of thousands of members; Chase, BofA, and Citibank, by contrast, have millions of customers worldwide. That's why, originally, the big banks were the primary targets of phishing. That seems to be changing, though.
Old model:
Build phishing site that looks like global bank's website & write convincing phishing e-mail. Spam e-mail to tens of millions of addresses. Wait for victims to hand over credentials. Steal info, empty accounts, sell on IRC. Site is shut down in less than a week because high volume of spam == high likelihood of landing in a spam trap or being reported to bank, ISC, CERT, etc.
New model:
Build phishing site that looks like local credit union's website & write convincing phishing e-mail. Spam e-mail to domains of companies listed on the credit union's list of select employer groups. Wait for victims to hand over credentials. Steal info, empty accounts, sell on IRC. Site stays up longer because the likelihood of detection is lower (no spam traps), and most credit unions outsource infosec functions and keep only a small IT staff, so reaction times are typically slower.
To review, credit unions make good phishing targets because they:
1. Outsource lots of IT & infosec functions.
1a. Pay for infosec work required by NCUA and PCI, but neither requires policy/procedure for responding to active phishing attacks.
2. Publish list of companies whose employees are eligible to join, making it easy to target spam to members.
The easy solutions to CU phishing, like making the employer list private, suck because they can have a negative impact on business. So here's a crazy thought. There's a niche in phishing detection for CUs. You would need to create a phony web/email presence, put the fake company on the CU's employer group list, and then wait for hits and coordinate the response to members, antispam vendors, the ISP of the phishing site, and law enforcement. Credit unions already like outsourcing infosec, and the best way to be cost-effective at this is to service multiple credit unions.
So, uh... gotta go. Got some VC folks to call.
Sunday, April 22, 2007
April GRSec Reminder
Who: You, of course! Especially if you like infosectacular conversation.
What: April GRSec Meetup
When: 4/25/2007 @ 5:30pm EST
Where: Bistro Bella Vita (Downtown GR)
Why: Because your family doesn't want to talk with you about PCI compliance or using statistical analysis to detect malware, but we do!
Friday, April 20, 2007
My ArcSight Toolbox
I'm not shy about the fact that I use ArcSight at work, though when talking about SIMs and logging, I try not to make it all about them. This post, though, is all about ArcSight - or maybe not. Maybe you use another SIM that has this same type of functionality; it wouldn't surprise me if this were standard on most SIMs shipping today.
Anyway, ArcSight has a "Tools" feature that basically allows you to pass the contents of any cell in a table view (ArcSight calls them Active Channels) to an external program. This is unbelievably handy. So here are some of my favorite ArcSight Tools.
1. Cygwin Whois - ArcSight comes with a built-in, java-based whois lookup tool. But for whatever reason, if the address is outside the US, say in an APNIC block, ArcSight just returns the NIC. Cygwin's whois will look up the registrant from the correct NIC.
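The wrapper for this one is trivial. Here's a minimal sketch, assuming whois lives in the usual Cygwin bin directory:
#!/bin/bash
# Tiny wrapper so ArcSight can hand the selected address to Cygwin's whois.
# (Sketch only - the PATH below assumes a default Cygwin install.)
PATH=$PATH:/cygdrive/c/cygwin/bin:/usr/bin:/bin
if [ "$1" = "" ]; then
    echo "usage: $0 [ip address or domain]"
    exit 0
fi
whois "$1"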
2. EventID.Net Lookup - Takes a field containing EventLogType:ID ('Device Event Class ID' by default) and passes it to a shell script that launches IE with a properly formatted eventid.net URL:
#!/bin/bash
PATH=$PATH:/cygdrive/c/cygwin/bin:/usr/bin:/bin
if [ "$1" = "" ]; then
    echo "usage: $0 [ArcSight EventLog ID Tag]"
    exit 0
fi
# Expect input like "Security:628" and rewrite it into eventid.net's query-string
# form. The -n/p pair prints nothing if there's no colon, so the error check works.
query=`echo "$1" | sed -n 's/\(.*\):\(.*\)/eventid=\2\&source=\1/p'`
if [ "$query" = "" ]; then
    echo "Error in field format"
    exit 1
fi
/cygdrive/c/Program\ Files/Internet\ Explorer/IEXPLORE.EXE "http://www.eventid.net/display.asp?$query" &
3. LDAP Server/User Lookup - This is a Perl script that I wrote that takes a server or user name field and searches AD via LDAP for it and returns things like distinguishedName, operatingSystem, description, memberOf, and so on. This runs in Cygwin as well.
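I won't paste the whole Perl script here, but a rough equivalent using OpenLDAP's ldapsearch looks something like this - the domain controller, bind account, and base DN below are made-up placeholders, not our real ones:
#!/bin/bash
# Sketch of an AD lookup via ldapsearch (a stand-in for my Perl version).
# Server, bind DN, and base DN are placeholders; -W prompts for the bind password.
PATH=$PATH:/cygdrive/c/cygwin/bin:/usr/bin:/bin
if [ "$1" = "" ]; then
    echo "usage: $0 [server or user name]"
    exit 0
fi
ldapsearch -x -H ldap://dc01.example.local -D "svc_lookup@example.local" -W \
    -b "dc=example,dc=local" "(|(cn=$1)(sAMAccountName=$1))" \
    distinguishedName operatingSystem description memberOf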
4. VHost Live Search - Got this idea from a post to the pen-test mailing list. Sometimes whois and nslookup don't cut it. This is a great way to figure out what vhosts might be present on a given IP address.
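My version of this is just a one-liner that throws the address at Live Search's ip: operator - a sketch, and the results URL format is from memory, so double-check it:
#!/bin/bash
# Open Live Search's "ip:" query for the selected address to enumerate likely vhosts.
# (Sketch only - the results.aspx URL format is an assumption.)
if [ "$1" = "" ]; then
    echo "usage: $0 [ip address]"
    exit 0
fi
/cygdrive/c/Program\ Files/Internet\ Explorer/IEXPLORE.EXE "http://search.live.com/results.aspx?q=ip%3A$1" &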
5. IP2Asset - On our network, workstation names and asset ID's are the same. So here's a script that takes an IP address, runs nslookup, and then launches Altiris web console to search for the asset.
#!/bin/bash
PATH=$PATH:/cygdrive/c/cygwin/bin:/usr/bin:/bin
if [ "$1" = "" ]; then
    echo "usage: $0 [ip address]"
    exit 0
fi
# Reverse-resolve the IP, then pull the asset ID out of the workstation's FQDN
# (our names look like itNNNNN.wks.*). The -n/p pair leaves $asset empty if the
# name doesn't match that convention, so the error check below actually fires.
asset=`nslookup $1 | grep ^Name | sed -n 's/.*\(it[0-9]*\)\.wks.*/\1/p'`
if [ "$asset" = "" ]; then
    echo "Error resolving address"
    exit 1
fi
/cygdrive/c/Program\ Files/Internet\ Explorer/IEXPLORE.EXE "http://altiris_svr/Altiris/NS/Console.aspx?NameMatch='$asset'" &
Who to sell log standards to?
On Wednesday, Anton Chuvakin excitedly announced the imminent release of a new logging standard. If you know me, then you know that I like logs and log analysis and SIMs and all that. So I had questions for Dr. Chuvakin about CEE and how the new standard will attract smaller vendors and OSS projects.
PM: "I know the more boutique vendors that use standard logging formats the easier your life becomes. But ... how do you incent small and/or niche vendors to support a log standard that they weren't at the table to design?"
AC: "Nah, we are talking MUCH bigger players than boutique vendors ... just wait."
But when it comes to log standards and source-to-analyzer log flows, who cares about big players? LogLogic, ArcSight, Intellitactics, and everybody else in their field have developed and will continue to maintain transport and parsing code for Microsoft and Cisco logs, regardless of any standard. Why? They can't afford not to support the big players.
As I see it, the advantage of log standards is interoperability of third-party products. It costs SIM vendors a decent chunk of change in development and support cost to spin up support for new products. Standards make this proposition cheaper: one set of code to maintain, potentially hundreds of products supported. I've never had a problem with my SIM parsing EventLog properly. It's vsftpd with log_ftp_protocol set that makes my SIM cough up 1-field hairballs of syslog goo.
So how to sell log standards to small vendors and open source developers so that SIM's can do a better job? That's a tough question, and I don't know the best answer. But I am fairly confident that for today at least, that's where the value of logging standards lies.
PM: "I know the more boutique vendors that use standard logging formats the easier your life becomes. But ... how do you incent small and/or niche vendors to support a log standard that they weren't at the table to design?"
AC: "Nah, we are talking MUCH bigger players than boutique vendors ... just wait."
But when it comes to log standards and source-to-analyzer log flows, who cares about big players? LogLogic, ArcSight, Intellitactics, and everybody else in their field have developed and will continue to maintain transport and parsing code for Microsoft and Cisco logs, regardless of any standard. Why? They can't afford not to support the big players.
As I see it, the advantage of log standards is interoperability of third-party products. It costs SIM vendors a decent chunk of change in development and support cost to spin up support for new products. Standards make this proposition cheaper: one set of code to maintain, potentially hundreds of products supported. I've never had a problem with my SIM parsing EventLog properly. It's vsftpd with log_ftp_protocol set that makes my SIM cough up 1-field hairballs of syslog goo.
So how to sell log standards to small vendors and open source developers so that SIM's can do a better job? That's a tough question, and I don't know the best answer. But I am fairly confident that for today at least, that's where the value of logging standards lies.
Tuesday, April 17, 2007
Snort Performance
Today I learned an interesting lesson about Snort and performance. If you happen to read the snort-users mailing list, you may know a little bit about my problem.
It actually turns out that changes made in Snort's PCRE (Perl-compatible regex) handling between 2.6.0 and 2.6.1, combined with some old rules that I had written, caused Snort to hog CPU time like, well, you know. So, here are two lessons learned from today.
The problem with my rules is that they didn't use flow directives to take advantage of the stream4 preprocessor's TCP connection state tracking; they went straight into the pcre pattern. This caused Snort to try regex matching the payloads of all packets that matched the layer-3 designation:
alert tcp $HOME_NET any -> $EXTERNAL_NET 6660:6670
Since $HOME_NET contains web servers and so on, this turns out to be a lot of traffic, not just outbound IRC traffic. Adding the following to the rule:
'flow:to_server,established'
...makes use of stream4 state tracking and dials in the rule to match only IRC traffic, which on a good day is zero packets. Additionally, since my rule was looking for specific client-to-client traffic (bot commands), I added the following:
content:"PRIVMSG|20|"; nocase;
...ahead of the pcre expression in the old rule, so now it dials the rule in even more and the regex is only invoked when the packet matches statefully at layer 3 and starts with PRIVMSG (or any case-variation thereof). These rules are now more accurate and less likely to create false positive alerts, but more importantly the Snort process that was eating 80-100% of the CPU time available to it is now down in the 5-10% range. This is important because lower CPU utilization translates directly into lower dropped-packet rates.
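Putting those pieces together, the cleaned-up rule ends up shaped roughly like this - the msg, pcre, and sid here are stand-ins rather than my actual rule:
alert tcp $HOME_NET any -> $EXTERNAL_NET 6660:6670 (msg:"LOCAL IRC bot command outbound"; flow:to_server,established; content:"PRIVMSG|20|"; nocase; pcre:"/PRIVMSG\s+\S+\s+\x3a[.!](login|dl|download|update|exec)/i"; classtype:trojan-activity; sid:1000001; rev:2;)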
Which is a good segue into my second lesson learned today. I have been using Snort's perfmonitor preprocessor to dump sensor performance statistics, and it turns out to be very useful for dealing with exactly these issues. I would also recommend that if you are using perfmonitor, you use Andreas Ostling's pmgraph tool. It creates MRTG-like graphs from your perfmonitor output and makes it easy to spot trends and problems in your sensor's performance. This made it very easy for me to identify the problem in the first place, as well as to be certain that the changes I made corrected the issue - not only with respect to CPU utilization, which can easily be checked with top, but also with respect to dropped packets, which only Snort tracks. Besides perfmonitor, the only way to get that data is to 'kill -HUP [snort's pid]' and read syslog, and I have found that data to be unreliable in the past.
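If you haven't enabled perfmonitor yet, it's one line in snort.conf - something like this, with the interval, output file, and packet-count sample rate adjusted to taste:
preprocessor perfmonitor: time 300 file /var/log/snort/snort.stats pktcnt 10000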
Additionally, if you are on Snort 2.6.1.x or later and you use perfmonitor, you have probably noticed that there's a new field in the perfmonitor output. Andreas hasn't updated pmgraph to deal with this new field yet, but I have a diff file I can send you for it. It's small and I would post it here, but it's not handy at the moment.
Finally, a big thank you to Jason, Joel, and Adam at Sourcefire for being so helpful with this issue. Yes, I'm a paying Sourcefire customer, but they probably didn't know that and it didn't matter to them. That's awesome.
Friday, April 13, 2007
Phish 2.0
Richard Stiennon points out a most excellent post from Christopher Soghoian's blog on phishing attacks against PassMark and similar technologies (with movies!). I teach a home computer security class through my employer's corporate training program, and this very issue (does PassMark prevent phishing?) came up in a class I taught yesterday. Chris' work proves what I suspected: no, PassMark accounts can still be phished quite reliably.
In his post about Chris' work, Richard concludes, "Its a war of escalation and banks have to stay ahead." It is a war of escalation. Most of infosec is. However, while banks have a vested interest in making financial transactions on the web safe for customers, it's the customers that have to stay ahead. If you can trick someone into clicking a link and believing that web site is something it's not, then there's not much the bank can do. MiTM is like trump here - it even beats tokens and other 2-factor authentication mechanisms as long as the phisherman can intercept that traffic as well. That's why I also believe that the owning of public wireless networks will continue to grow in prevalence.
The real work to be done is by the client software vendors. If Outlook warned you about, or outright prevented you from, clicking sender-supplied "a href=" links, then phishing would be all but over. Similarly, if Microsoft made IE's SSL cert warning messages more dramatic, or even cached error-free certificates for later comparison, MiTM against SSL would be over as well.
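To make the cert-caching idea concrete, here's a toy sketch of the comparison step using openssl - purely an illustration of the concept, not anything IE actually does:
#!/bin/bash
# Remember the SSL cert fingerprint we saw last time and complain if it changes.
# (Concept demo only; the default host is a placeholder - pass your own as $1.)
host=${1:-www.example.com}
cache="$HOME/.certcache-$host"
new=$(echo | openssl s_client -connect "$host:443" 2>/dev/null \
      | openssl x509 -noout -fingerprint -sha1)
if [ -f "$cache" ] && [ "$new" != "$(cat "$cache")" ]; then
    echo "WARNING: certificate for $host changed since last visit - possible MiTM"
fi
echo "$new" > "$cache"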
I think that following links in e-mail or IM is going to have to become like leaving your car unlocked at the mall. Nobody locks the car for you, even though the technology probably could, but you know that if you don't, you could get robbed. Unfortunately, a lot of people are going to lose a lot of money in the meantime.
Thursday, April 12, 2007
Malware, Packers, Debuggers, OEPs, and... Arrrgggh!
I haven't posted much this week because most of my free time has been spent tearing my hair out.
This past Tuesday I gave a lecture to my friend Tim's computer security class at a local college while he was on vacation. The topic was introductory malware analysis, and I decided I would include a live demo of iDefense SysAnalyzer in VMWare as the big finale. I was extra excited to do this when, last Friday, I found an ANI exploit in the wild and captured not only the exploit file but the alleged malware that the exploit drops on its victims.
The ANI exploit was easy enough to analyze:
$ strings file.jpg
RIFF
ACONanih$
tsil
tsil
anihR
11111111111111111111111111111111
444444444444444444444444444
000000000
CMD >
/C "
T} >
QSPPPPPPWP
hURlm
jlhntdl
huser
l$$6
6;|$(u
http://XXXXXXXXXXXXXX/download/167212/bin.exe
But bin.exe continues to be a pain in my side. So the students got to see my demo with some older malware that my Nepenthes honeypot collected last November. Still, I refuse to admit defeat, if only in the hope of learning something.
In VMWare, it simply exits with errorlevel=0. My initial reaction was, "I found vm-aware malware! Sweet!" But now I'm not so sure. Applying some great advice from my friend Matt at IntelGuardians, I tried to disguise the presence of VMWare. Still nothing in SysAnalyzer.
So I decided to venture into new territory and attempt to unpack bin.exe. I spent several hours yesterday and today trying to unpack the binary. PEiD says it's packed with UPX, but UPX won't unpack it straight up. After much searching, I found an excellent Flash demonstration by Frank Boldewin on unpacking obfuscated packed executables with OllyDbg, the OllyDump plugin, and ImpRec. But after several hours of trying variations of Frank's method, I still can't find a valid OEP (Original Entry Point - the point from which the binary can be dumped). I wish I had a point to all of this other than the one I have: malware analysis is hard, and people like Frank and Matt who do this stuff for a living are jaw-droppingly smart.
This is me being envious of their giant brains.
Monday, April 9, 2007
Is it illegal to pass off Nessus reports as your own?
Ron Gula weighed in on this question as posed to the Nessus mailing list today. I'm trying to read between the lines here, but I think Ron's answer boils down to, "It pisses me off when people pretend they don't use Nessus and just re-style our reports to customers as original content. But since we don't own the IP in a good number of the NASL files, including the report output that they generate, no, it's not illegal." I'm liberally paraphrasing, of course.
Food for thought, because it's almost standard operating procedure for pen-test companies big and small alike to not broadcast their use of Nessus. But next time you hire someone to do a pen-test of your network, grep your web server logs for Nessus - I'll bet it hits. As to how many copy & paste Nessus results directly, only the laziest do it straight up, since there are typos and layout mistakes galore.
Saturday, April 7, 2007
Sparty On!
http://www.ncaasports.com/icehockey/mens/recaps/d1_0407_01/2007/2007
At least there's 1 NCAA champion in Michigan. Yeah, I said it. Go Green!
Thursday, April 5, 2007
Avivah Litan drops the dime on TJX
Avivah Litan, a VP and security analyst at Gartner, dishes on TJX to NetworkWorld. The interesting tidbit that she shares in this story is that they suspect that unsecured wireless APs were the hackers' way in. If that's true, there need to be some very public firings of security and network staff at TJX.
Wednesday, April 4, 2007
Guilty Pleasures, Social Networks, and Event ID's
So, one of my guilty pleasures is that I like to read and answer the Information Security questions people ask on LinkedIn. It's like infosec Jeopardy without having to go to Vegas. Sometimes I even know the answer and have time to post it. The other day there was one, and I'll share it here in a little more detail.
Venkatesh asks: "What are you monitoring on Active Directory/SQL Server as part of IT compliance?"
The cool thing about Microsoft EventLog format is the Event ID field, which for the most part tells you what is happening, and the details are things like who or what is doing that thing to who- or what-else. An example is Event ID Security:628. Any time you see that code, you know that A changed the password of B, and it is possible that A == B or A != B.
So get your left pinky finger ready for Ctrl-C & Ctrl-V action. Here's my big list of Security EventLog ID's that you should monitor as part of your log review processes.
- EventID == Security:535
- EventID == Security:624
- EventID == Security:628
- EventID == Security:629
- EventID == Security:630
- EventID == Security:631
- EventID == Security:632
- EventID == Security:634
- EventID == Security:636
- EventID == Security:637
- EventID == Security:639
- EventID == Security:641
- EventID == Security:642
- EventID == Security:644
- EventID == Security:647
- EventID == Security:659
- EventID == Security:660
- EventID == Security:671
- EventID == Security:685
- Target Account Name == Administrator
- Target Account Name == [real admin user name] (you renamed Administrator, right?)
In our environment (200+ Windows servers, another 80-100 UNIX servers that authenticate against AD, and 1200+ Windows workstations), this represents about 70-100 events per day out of roughly a half million EventLog entries that we collect per day. That's so totally manageable. The rest of it you can subject to trending, thresholds, and so on to find weirdness worth investigating.
It's also a good idea to go through your EventLog data every couple of months and look for new Event ID's that you haven't seen before. I use a filter that matches all of the Event ID's that I've already identified and excludes them. Then it's just a matter of researching the new Event ID's and determining their cause and relevance.
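Inside ArcSight that's just a filter, but you can do the same sanity check against a flat-file export with a quick pipeline - a sketch, assuming the export shows the field as Security:ID the way ArcSight does, and with your full list of already-identified IDs substituted into the exclusion:
egrep -o 'Security:[0-9]+' eventlog-export.txt | egrep -v ':(535|624|628|671|685)$' | sort | uniq -c | sort -rn
That leaves you with a ranked count of the Event IDs you haven't accounted for yet.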
If you've got other ideas of good EventLog content that you focus on, post it up here. I'd love to hear about it!
In Case You Missed It...
http://www.matasano.com/log/755/questions-for-stillsecure-about-cobia/
Tom at Matasano called out StillSecure for calling their new Cobia platform "open source."
It's a small nit to pick in my book - we all know that open source projects have been exploited by for-profit vendors pretty much from go. But read the comments.
There's a lesson for all of the vendors out there that are ripping off open source projects to save time and money - shut your mouth and don't try and spin it as 'giving back' or whatever.
I Ain't Got No Crystal Ball...
...but I feel comfortable in predicting the next thing every NIDS vendor will roll into their products: full packet logging.
In a nutshell, analysis will be removed one step further from capture. You'll have a tool that logs all packets to disk. Then packet and stream data will be analyzed on disk. It will work like your current IDS except that you can go back and get all of the packets, not just the ones that the IDS alerted on at the time. There are both performance and forensic advantages to doing this. The main drawback is the amount of disk it will take and how you will manage data retention over time. But disk is cheap and security products are expensive and this is something that will make the old seem new again.
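Conceptually it's what you can already do with tcpdump's ring buffer, just productized and bolted to the analysis engine. A sketch, with arbitrary sizes and paths:
tcpdump -i eth1 -s 0 -w /data/pcap/ring -C 1000 -W 48
That rotates full-payload captures through 48 files of roughly 1000 MB each; the vendor products will just hide the retention math from you.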
HoneyC Follow-Up
At the end of February, I posted about honeypots and honeyclients and promised a follow-up on Christian Seifert's HoneyC honeyclient.
If you want an intro to HoneyC and how it works, then visit here.
I installed HoneyC on OpenBSD 4.0 (my home firewall), but you can run it on just about anything since it's 100% Ruby. HoneyC relies on Snort-like signatures to detect web client attacks or malware. So the first step is to load it up with some signatures. I hacked up the web-client.rules file that comes free with Snort for starters. An example rule would be:
Original Snort rule:
alert tcp $EXTERNAL_NET $HTTP_PORTS -> $HOME_NET any (msg:"WEB-CLIENT Javascript document.domain attempt"; flow:to_client,established; content:"document.domain|28|"; nocase; reference:bugtraq,5346; reference:cve,2002-0815; classtype:attempted-user; sid:1840; rev:7;)
Modified HoneyC rule:
alert tcp any any <> any any (msg:"WEB-CLIENT Javascript document.domain attempt"; flow:to_client,established; content:"document.domain|28|"; nocase; reference:bugtraq,5346; reference:cve,2002-0815; classtype:attempted-user; sid:1840; rev:7;)
The next step is to give HoneyC someplace to go looking for signature matches. There are two options for this. You can feed HoneyC a list of URLs to visit, or you can feed it a search term and it will search Yahoo and then crawl and analyze the results. The second option is a lot more interesting, and if you use your imagination, you can think of some easy search terms that will yield results (think keygen, lyrics, etc.). However, feeding HoneyC a list of URLs from, say, a proxy server log is a whole lot more relevant. For kicks I took a day's worth of logs where the URL ended in .EXE and ran those through HoneyC. Sure enough, there were a couple of hits. But as you might have guessed, these were also found via traditional IDS. But unlike the IDS, I now have a sample of the malware to analyze.
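The log-mangling part is the easy bit. A sketch of what I mean, assuming a Squid-style access.log where the URL is the seventh field:
awk '{print $7}' access.log | grep -iE '\.exe$' | sort -u > exe-urls.txt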
In my opinion, HoneyC has some shortcomings. First, by being signature-based, it misses a lot, and the whole point of using a honeypot or a honeyclient is to find bad stuff you don't already know about. So you may luck out and find some new malware using old browser exploits, but since HoneyC doesn't parse JavaScript, or even automatically download URLs that look like malware (.EXE, .VBS, .CMD), you still have to do most of the heavy lifting. Getting it to run is really easy. But getting from there to having exploits and binaries to analyze is still a lot of work.
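For what it's worth, bolting on the missing auto-download step yourself is only a few lines of shell - a sketch, assuming wget is installed and a file of URLs one per line (like the exe-urls.txt built above):
while read url; do
  case "$url" in
    *.[Ee][Xx][Ee]|*.[Vv][Bb][Ss]|*.[Cc][Mm][Dd])
      wget -q -P ./samples "$url"   # keep a copy for later analysis
      ;;
  esac
done < exe-urls.txt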
In recent honeyclient news, Niels Provos, one of the monkey.org supergeniuses, has released SpyBye. SpyBye is a proxy that analyzes pages for browser exploits. You can install and run it locally, or you can use the proxy that Niels has set up at spybye.org. Very cool.