Friday, November 21, 2008
Earlier this month I gave a talk at SecureWorld Expo Detroit on malware analysis. The goal of the talk was to discuss the state of malware and tools for people who aren't ready to go to town with a debugger. Unfortunately, to put it on SlideShare, I had to replace the cool Camtasia videos with lame screen shots.
Friday, October 10, 2008
SecureWorld Expo Detroit
SecureWorld Expo Detroit is coming up at the beginning of next month. I will be presenting on operationalized malware analysis and response. In this case "operationalized" means, "without a debugger."
Cathy Luders, a friend and colleague that I met through the local ISSA chapter, is also presenting at SecureWorld. On the same day. At the exact same time. Which has me bummed out more than a little because I've not gotten to see her present before. But now that I know she's got a talk in her back pocket, I'll probably ask her to present at an upcoming ISSA meeting. :-)
ArcSight Tools Slide Deck
Wow, I've just been buried, both at work and at home. I promised a sanitized copy of our slides from the ArcSight User Conference and here they are. A month late. Enjoy.
Monday, September 15, 2008
Managing ArcSight ESM Tools
Last week Tim and I presented at the ArcSight user conference on using Tools in the ESM console to augment incident response and investigation. I'm hoping to have the sanitized slide deck up this week, and maybe a little bit of code to go with it.
At the end of the talk - and then a couple of times in the hallway - people asked how we manage all of these Tools. It's a great question, and the answer is, not very well. But here's the best way that I know how in ESM 4.0.
If you've ever looked at the tools editor in the ESM console, you've seen this dialog, which is pretty basic:
By default, this info is recorded in your AST file (C:\arcsight\Console\current\paul.ast), which is just a text file with a bunch of values declared. The values in the file look like this:
console.ui.tools[toolName].program=
console.ui.tools[toolName].workingdir=
console.ui.tools[toolName].iconFile=
console.ui.tools[toolName].parameters=
console.ui.tools[toolName].showInToolBar=
console.ui.tools[toolName].isExportTool=
And then there's this:
console.ui.toolsList=
Which is pretty self-explanatory. It occurs once in the AST file and is just a CSV list of the tool names, used to populate the console's Tools menu.
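For illustration, a fully populated entry for a single tool might look something like this (the tool name and values here are made up, not from a real console):
console.ui.toolsList=whois
console.ui.tools[whois].program=C:\cygwin\bin\bash.exe
console.ui.tools[whois].workingdir=C:\cygwin\bin
console.ui.tools[whois].iconFile=
console.ui.tools[whois].parameters=/usr/local/bin/arcsight/whois.sh
console.ui.tools[whois].showInToolBar=true
console.ui.tools[whois].isExportTool=false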
I wrote a Perl program that can parse an AST file and extract tool data from it for the purposes of sharing. It's designed to help scale and distribute tools across your analysts' consoles so that they don't have to manually recreate and test them. In order for it to be useful, there are some best practices. Here's what I do.
1. Cygwin (Surprise!) If you've been reading my blog for any period of time, you knew this was coming. If your analysts use Mac OS X or Linux for their ESM console platform, not to worry. They can play, too. Beyond Perl, bash, and Python, that's kind of the point.
2. Standardize on a source directory for scripts. Put all of your scripts in the same spot. Pathing is difficult to manage by hand, so by defining a standard (I use /usr/local/bin/arcsight), you have less to do each time you distribute a new tool.
3. Use a repository like Subversion or CVS for scripts and other tool artifacts. That way, you can make a change to your tool, check it in, and the other analysts can check it out quickly and easily. No messy manual copies. Also, when you foul something up, you have revision history to go back to. That can be a life saver if you are - like me - not a developer with good testing habits.
4. Use consoleupdates.txt on the ESM manager to distribute tool configs. Here's how you do that:
Let's say my ESM user id is 'paul' and I have developed a whole bunch of tools following the first three rules above. I can use this Perl script to create an export of the tool configs for use on the server. It looks like this in Cygwin:
$ arc_tool.pl export all /cygdrive/c/arcsight/Console/current/paul.ast > ~/consoleupdates.txt
$ scp consoleupdates.txt arcsight@esmmanager:/opt/arcsight/manager/config
$ ssh arcsight@esmmanager
Password:
arcsight@esmmanager:~ $ chown arcsight.arcsight $ARCSIGHT_HOME/config/consoleupdates.txt
arcsight@esmmanager:~ $ chmod 644 $ARCSIGHT_HOME/config/consoleupdates.txt
If you want your analysts to get the updated tool list, they need to log out of the console, move their AST file somewhere safe, and log back in to the console. No restart of the manager is necessary. Now they just need to create the same script directory you did and check out your scripts from your CVS server.
The Perl script I wrote also supports listing the tool names in an AST file as well as exporting single tool configurations, so it's more than just a one-trick pony. It's got another half a trick.
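If you want to roll your own version, a rough sketch of the export logic might look something like this - illustrative only, built around the key format shown above, and not the actual arc_tool.pl:
#!/usr/bin/perl
# Sketch only: read an AST file, print every console.ui.tools[...] line,
# and rebuild console.ui.toolsList - roughly what an "export all" would emit.
use strict;
use warnings;

my $ast = shift or die "usage: $0 console.ast\n";
open my $fh, '<', $ast or die "cannot open $ast: $!\n";

my (%tools, @config);
while (<$fh>) {
    next unless /^console\.ui\.tools\[([^\]]+)\]\./;
    $tools{$1} = 1;      # remember each tool name we see
    push @config, $_;    # keep the config line verbatim
}
close $fh;

print @config;
print 'console.ui.toolsList=', join(',', sort keys %tools), "\n";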
My advice at this time is not to invest a ton of time in doing this unless it's a weekly headache for your security team, but it is worth doing if you've already got the moving parts in place (like a CVS server). The reason is that ArcSight has already fixed the issue of sharing tool configs in ESM 4.5. So once that's released (later this year?), some of this will be a non-issue for you. I suspect that the first two or three best practices I list above will still be valid in ESM 4.5, so it's still a valuable exercise if you have any number of custom tools already.
Tuesday, September 9, 2008
ArcSight User Conference 2008
I'm on the floor of the ArcSight "Protect '08" conference this morning. Tim and I gave our talk on ArcSight ESM Tools yesterday, and I will post some version of those slides and some of the code after I return from the conference.
Right now I'm listening to Hugh Njemanze give his keynote on product lines. There's a lot of interesting stuff in the release pipe; Logger 3.0, ESM 4.5, a new Connector appliance, IdentityView content for ESM, and something called "McLovin."
Anyway, here's what's been good so far:
- Customer presentations (other than mine, I mean) - I missed out last year, these are the best talks so far.
- Location - the new hotel is within walking distance of stuff (and by stuff I mean not trees and the NSA.)
- Networking - Always the best part of this conference. I love standing around with free beer, talking to other folks about what they're doing with their SIM, and sharing ideas. Looking forward to more tonight.
Here's what's been not-so-good:
- Wireless - the hotel wireless has been unreliable and overloaded. Frankly, I'm surprised I've been able to stay on long enough to get this post up.
- Vendor/sponsor floor - no offense to these guys, but the freebies this year are unimpressive. I've already got a pen, thanks.
- No bag - Instead of a "conference bag," everyone was issued a plastic file folio thing. Not that I needed another bag, but I can't smoosh the one foam squeezy thing I did get from a vendor booth into this blue plastic thing.
And I would be remiss if I didn't drop a product scoop or two:
- Logger 3.0 has adopted a more-ESM-like boolean filter interface. Big improvement over the chained-regex search in 2.5 and earlier.
- Demo of Logger 3.0 shows that searches of data (no details on data set) are roughly 80x faster than a similar sized search on 2.5. (The claim is 100x faster, but I counted. Still, that's a significant improvement.)
- Hugh has hinted that the slick, high-performance append-only storage stuff that Logger has is going to be integrated into ESM in some release beyond 4.5. That could mean the end of the Oracle / PartitionArchiver storage model. It won't be missed.
Friday, September 5, 2008
Visual Analysis of 'Ideas in Security'
Amrit Williams, former Gartner analyst and CTO at BigFix, is one of the bloggers that I follow regularly. Amrit's a very smart guy and I respect what he has to say. He recently wrote a pair of blog posts (here and here) that complement each other.
Now, in the details of what he has to say, Amrit and I are in agreement. But I got to thinking about the second post and how it relates to the first post. And, well, I fired up Visio and mapped the relationships between Amrit's greatest and worst ideas lists.
If we look at the great ideas that didn't spawn or perpetuate the worst ideas, then we're not left with much. Just segmentation and theory of least privilege. If we drop out planning and segmentation because they're not actually security ideas - just good ideas that work lots of places - we're left with Theory of Least Privilege as the one great idea to come out of security. Oddly, that seems about right.
Tuesday, August 19, 2008
Evidence FAIL
So, first read this.
John Dozier, self-described "SuperLawyer" of the Internet, thinks you kids and your DefCon are a bunch of punks. Stay off his lawn.
Of course, I disagree. DefCon used to be a hacker conference by hackers for hackers. Now it's the BlackHat afterparty-slash-olympics. But what it isn't is a bunch of criminals. Sure, there's some mischief, and a few folks even break the rules. But everyone I know who attended DefCon this year (and that number is solidly in the double-digits), works in InfoSec, and uses what they learn at DefCon in their professional lives.
Compelling as my argument may fail to be to people like Mr. Dozier, his argument is weaker than mine. Let's dissect, shall we:
Defcon ... began August 8 and it looks like the hackers sitting in the audience and participating in the hacking competitions spent two days trying to hack into the Dozier Internet Law website using SQL Injection Attacks, Mambo Exploits, encoded cross site scripting attempts, shared ciphers overflow attempts, and the like.
The favorite and most common ISP access was from Vietnam and China, with Beijing the host and doorway of the Olympic Games as well as many, many hackers.
OK, so what we have here is a number of known, old, web attacks from China against his web server that coincide with the timing of DefCon. And aside from the timing, there's nothing to implicate anybody having anything to do with DefCon. My guess is that this wasn't even an actual human being at all, but rather an ASPROX scan that Dozier's IDS detected.
The graph above shows what these hackers do. They come to Vegas to learn how to hack into systems and create havoc.
The funny thing about this is that, with the notable exception of Dan Kaminsky's DNS attacks, there aren't IDS signatures for the research presented at DefCon. So any attacks that did come as a result of learning done at DefCon wouldn't be on that graph.
The frustrated perpetrators (they never got access) were sitting in the Riviera Hotel ballrooms, I suspect...
First, the key word there is suspect. Mr. Dozier has zero evidence that these IDS alerts had anything to do with DefCon. None. Not a shred. Second, they would've gotten in.
Going after law firm websites and administration areas that contain attorney/client protected communications and documentation, and even court ordered "sealed" files, is a direct attack on the integrity of the judicial process and the judiciary
If you have documents that are sealed by a court order stored on your company website, then you have problems. Most federal district courts won't allow you to electronically file with the court to have a document "sealed" if that document must be or otherwise is included in the filing. Those general orders aren't accidents. It's a recognition on the part of the judiciary that electronic documents are inherently less secure. But I digress.
Many attendees commit criminal acts while in attendance in organized war games.
This is simply untrue. There are organized wargames, conducted on an air-gapped network off the Internet or any other network. This is perfectly legal. The US Air Force has staffed a team in the past. By the way, congratulations to Chris Eagle and sk3wl0fr00t on their CTF win. They bested two-time champs 1@stplace, who are some of the smartest people I know, and who are all highly ethical InfoSec professionals.
Others commit criminal acts as they learn the tools of the trade in the very ballroom during speaker presentations. They hack into banks, into personal computers, into businesses, into government agencies, and steal private information, cost businesses billions of dollars annually, and ruin the financial well-being and impair the emotional stability of individuals all across our country.
This is sensational and unsubstantiated. Or as a judge would describe it, hearsay.
This is the mob of the 21st century;
No, John, this is the mob of the 21st century.
The only "security researchers" in attendance, I suspect, are the good guys.
Yes, the security researchers at DefCon are the good guys. And I promise you that the DoD and DoJ agree, as many of the speakers, attendees, volunteers, and contestants at DefCon are paid consultants to these organizations.
UPDATE: John Sawyer has an excellent write-up on this issue and on this year's DefCon (unlike John Dozier, he was actually there) on his blog, Evil Bits, over at Dark Reading. Go read.
Wednesday, August 13, 2008
On Blended Threats
Dave Hull over at Trusted Signal has an interesting post on his blog right now about blended threats. (Unfortunately, I can't find a permalink for it, so I don't know how long you'll be able to read it.)
If it's not still there for you to read, let me give you the gist of it. There's been some recent research into and discussion of blended threat scenarios by some very smart people.
So what is a blended threat? It's where two or more lesser-severity vulnerabilities are exploited in conjunction with each other to lead to a greater compromise. An example would be a pen-test I did some years back where we found a SQL injection vulnerability in a low-value web app with no insert/delete grant to an older, unpatched version of Oracle. Individually, you wouldn't rank either vuln especially high. You could break the web app, but there wasn't sensitive data in there, and you couldn't tamper with the data itself. The Oracle database wasn't exposed to the Internet directly. But by using SQL injection to attack Oracle, I broke out into the server OS, reverse tunneled a command shell, and had the Administrator password in very short order. Which was also the Administrator password of the other servers I could talk to.
Myself and others have been predicting the emergence of wide scale blended threat attacks since at least about 2002/2003. And so far we've been wrong, which is good. For now, blended attacks are, as Dave points out, the stuff of professional pen-testers and other intelligent intruders. But frankly, I don't know why.
The problem with blended threats is that they're harder to identify and calculate risk for. CVSS doesn't provide a way for scoring vuln A when also in the presence of vuln B. And this has led to vendors delaying patches or downplaying the severity of vulnerabilities based on the assumption that any vulnerability is the only vulnerability present.
This creates an opening in the patching cycle for malware/botnet folks to capitalize on if the right blended threat comes along. Maybe we haven't seen it because, to date, these folks simply haven't needed to go there in order to be successful.
Saturday, August 2, 2008
What Role Will Security Researchers Play a Decade From Now?
The whole Dan Kaminsky DNS Thing has gotten me thinking about disclosure. I intentionally haven't blogged about it because, well, the speculation around Dan's finding has turned into something of a spectacle. And you didn't need to read yet another blog post about the sky falling.
But on the eve of Black Hat, Dan's talk is less than a week away, and I can't help feeling like we've gotten no closer to understanding the issue of disclosure than we were a year ago. So, all I'm going to say about Dan's recent "situation" is that I, for one, am impressed by the level of care and coordination that went into working with vendors to get patches. This is hard. Researchers hate it because vendors can be uncooperative, incompetent, and downright vindictive. So, thank you, Dan, for spending what must have been countless hours on conference calls and e-mail getting vendors onboard.
Now that that's out of the way, let's talk about research, disclosure, and the future. Dino Dai Zovi noted in a recent blog post that the 90's were the era of full disclosure, and that that is now over. (It's an excellent post. Go read the whole thing.) And this is evident in a number of ways. For one, ZDI and other pay-per-sploit buyers. For another, in-the-wild 0days showing up for sale from malware vendors like the MPack team.
And then there's the ongoing "debate" (read: stalemate) between researchers and vendors about protocol, grace periods, and credit.
So disclosure is a mess. But I don't think it has to stay this way, at least not in the USA. Researchers who publish - as opposed to sell - have the opportunity to become consumer advocates. By cooperating with vendors in a way that still holds them accountable, researchers can demonstrate value to the consumer public. When that becomes the prevalent sentiment, then other interesting things like grants and nonprofits make it possible for researchers to earn a living without having to also do consulting or sell their exploits to a third party.
And that's the dead horse I'm beating in the disclosure race - the consumers of IT products don't have a voice in the disclosure dialogue and desperately need one. Researchers can, if they're able to forego infighting and ego theatre, be that voice.
Wednesday, July 16, 2008
Coffee Shop Warfare
It seems like I can't go to a coffee shop, conference center, or bar these days without some jackass on the network abusing the bandwidth. Running MMO games, BitTorrent, gnutella, or even just a large FTP/HTTP download will saturate the wireless access point, let alone the modest DSL line it's connected to, rendering it unusable for the other patrons there. This is just plain rude. And since the barista can make a mean caramel cappuccino, but doesn't have the ability to blacklist your MAC on the AP (which I realize isn't a very effective control, but hey - maybe you'd get the message then?), we're all stuck to suffer.
And I wouldn't do anything hostile on a public network. But in the name of network self-defense, there are a couple of tools you might want to take with you to the coffee shop next time.
- Wireshark - The quickest, easiest way to identify the abuser's MAC/IP is with a sniffer like Wireshark, tcpdump, or iptraf.
- Snort - Snort with flexresp2 enabled, bound to your wireless interface, and the p2p.rules set enabled and modified with "resp:reset_both,icmp_host" is an effective deterrent for people using P2P file-sharing software.
- Ettercap - More severe than Snort, you can use Ettercap to perform ARP poisoning and essentially blackhole the client(s) of your choice by MAC address. You could also use this tool to sniff unencrypted traffic between clients and the AP (and points beyond). But you wouldn't do this. It would be uncivilized, and possibly illegal.
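To give you an idea of what the Snort tweak looks like, a P2P rule with the reset response added might look roughly like this (the rule body is illustrative, not copied from the stock p2p.rules set):
alert tcp $HOME_NET any -> $EXTERNAL_NET 6881:6999 (msg:"P2P BitTorrent client activity"; flow:to_server,established; content:"BitTorrent protocol"; resp:reset_both,icmp_host; classtype:policy-violation; sid:1000001; rev:1;)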
There are lots of other wireless tools out there that have some application here, but many of them go too far to be either civil (Void11) or legal (Hotspotter), so I don't recommend them. For that matter, what I do recommend is getting your own EVDO card. Then you don't have to put up with rude WiFi users in the first place.
Tuesday, July 15, 2008
A Conversation With My Wife
My wife was at her mother's tonight when she caught me on GMail chat. This is the log of that chat, unedited:
Jessica: boo!
me: hey there
Jessica: hey baby!
Just looking at my moms task mamanger, she has a ton of stuff running
inlcuiding a bunch of exe file
me: that's all you should see in task manager - exe files
Sent at 10:28 PM on Tuesday
Jessica: how amobile deviceservice.exe, alg.exe, msmsgs.exe, searchprotection.exe, jusched.exe, E-S10IC1.exe
all of these are listed under "Administrator"
me: some of those are fine
type them into google
liutilities.com
searchprotection.exe sounds suspicious
don't log into the bank or anything
Jessica: why would there be 4 svchost.exe's?
me: that's typical
Jessica: or services.exe
winlogon.exe
me: both fine
Jessica: csrss.exe
me: also fine
Jessica: smss
me: seriously
google
Jessica: mDNSR
me: that sounds suspicious
Jessica: I don't need no stinkin google, I have you
:)
me: meh
Sent at 10:33 PM on Tuesday
Monday, July 14, 2008
When is a Security Event Not a Security Event?
When it's also a beer event, of course!
July's GRSec meetup will be Wednesday, 7/23/08. The reason for the Wednesday date is two-fold. First, Tuesdays don't work for everybody, so we're switching it up over the summer to see if we can get some fresh faces out to GRSec. Second, this month we're at the new Graydon's Derby Station, and that particular evening, they will be tapping a cask of Victory Hop-Devil IPA.
If that's not enough reason for you to be there, then I don't know who you are anymore, man! I don't know you at all...
Details & Map
Tuesday, July 8, 2008
Monkey-Spider
It's been a while since I've covered anything to do with honeypots or honeyclients. But it's also been a while since anything new came along.
Via Thorsten Holz at honeyblog: Sicherheit'08: "Monkey-Spider: Detecting Malicious Web Sites with Low-Interaction Honeyclients"
Monkey-Spider, not to be confused with SpiderMonkey, is a new honeyclient from Thorsten, Ali Ikinci, and Felix Freiling. Like HoneyC, it's a crawler-based client that detects web-based, client-side attacks. It was presented at Sicherheit in Germany in April. Fortunately, the whitepaper and documentation are in English.
After reading the whitepaper and playing with the code a little, the thing that occurs to me is that, while this is very cool, and still somewhat useful, what I really want for operationalizing a honeyclient in my enterprise is the ability to seed the honeyclient from firewall/proxy logs. That way the honeyclient is analyzing my web traffic, not off looking for random malicious sites to add to already big blacklists.
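Just to sketch what I mean, pulling the unique URLs out of a Squid access log to use as a seed list could be as simple as this (it assumes the default Squid native log format, where the URL is the seventh whitespace-separated field - adjust for whatever your proxy or firewall writes):
#!/usr/bin/perl
# Sketch: build a honeyclient seed list from a proxy access log.
# Assumes Squid's native format, with the request URL in field 7.
use strict;
use warnings;

my %seen;
while (<>) {
    my @fields = split;
    my $url = $fields[6] or next;
    print "$url\n" unless $seen{$url}++;
}
Run it as "perl seed_from_proxy.pl access.log > seeds.txt" and point the crawler at seeds.txt instead of a random blacklist.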
Monday, July 7, 2008
MiniMetriCon 2.5 Slide Decks
MiniMetricon 2.5 was a one-day security metrics event held in San Francisco back in April. Some of the slide decks were published to securitymetrics.org earlier today. I'm only about halfway through them, but there's some good stuff in there, and if you're doing anything around security metrics, I recommend you check them out.
So far, the standouts for me are Pete Lindstrom's slides on Enterprise Security Metrics, and Wade Baker's deck on Incident Response Trends. And speaking of Wade Baker, he and a few of the other rockstars at Verizon Business have a blog that you should add to your feeds list.
Friday, June 27, 2008
I'm Floored: Raffael Marty declares that SIM is dead.
No really, he said it. He would've been on the short list of people I assume would never say it. But there it is.
Here's the thing; I think that this is a lot like Gartner's IDS declaration (which he cites). IDS went through some product positioning changes (IPS, UTM, DLP, etc.) but the core idea and technology is still there, and guess what? The original IDS use case is still viable. Sure the attacks have changed, but having a sniffer that can search for known-bad and known-strange traffic on the wire is very, very useful.
So I assume that we are in the midst of a product positioning shift around SIM. Raffy's point that SIM schema are IP-centric and rules are based around correlating firewall and IDS events is true. But most of the vendors have already acknowledged this and are developing content to focus on other log sources. Either way, the use case is here to stay - the ability to search and correlate log events is highly useful, and will continue to be. You may call it "SIEM" or "IT Search" or "log management," but it's the same core concept, repurposed to address the constantly changing security environment.
One final note for vendors from the SecOps trenches: I am not open to a replace/resell on the basis that SIM is old and whatever-you-call-it-now is new and better. My SIM, like my IDS, contains custom content that our team has developed to keep on top of changing threats, including application attacks. SIM, like IDS, succeeds when you put talented security professionals in front of it and let them tune it and manage it like a tool. But it will fail miserably if you are hands-off with it.
Monday, June 23, 2008
Useless Statistics: Nate McFeters vs. Verizon
You know how much I love to tear into vendors whose studies and data analysis wouldn't pass muster for a high school statistics course. It's nice to see someone else go off for a change. Nate McFeters, Ernst & Young security dude, ZDNet blogger, con regular, and fellow Michigan native has taken issue with the results of a study on data breaches that Verizon published earlier this year.
So let's just get this out of the way:
The first thing you’re thinking is, “Wow, my consultant has been lying to me about internal threats!”, the thing is, that’s not necessarily true.
Yes it is. "Insider threat" is a red herring throughout security, but especially where data breaches are concerned. There's no breach notification law out there that defines a breach where the data ends up in the hands of someone that already works for you. Since there's no external force requiring companies to track these incidents, it's probably very safe to assume that tracking and detection of these is low, except within a handful of specific verticals.
To Nate's point about the wording of the survey and the study, I agree - it is dangerously ambiguous. However, it's probably not the cause of the improbable skew toward external attackers in the survey data.
I think I know what is. See, Nate's thinking about the Verizon study like a pen-tester, and forgetting that most data breaches and security compromises don't involve vulns and sploits, just the interesting ones. Sometimes they involve phishing, but most of the time they involve simple impersonation (the FBI calls it 'identity theft').
The thing is, if you follow basic authentication principles and practices around your self-service web apps, this stuff is hard to prevent but easy to detect and resolve. And this is how you get the disparity between the number of breaches and the amount of data breached, in the statistics. Most of those "external attacker" scenarios were someone's kid, deadbeat brother, or ex-wife impersonating them to get at their information. Not good. But not interesting.
Seriously, I don't know what it is, but it's almost always divorced/divorcing couples involved in these impersonation breaches. Nate, if you want to interview me about this some day, I've got some great stories.
Anyway, use Verizon's survey for what it's good for - getting more security funding. Because bottom line, that's a lot of breaches, no matter the circumstances.
Monday, June 16, 2008
ArcSight Logger Face-plate-lift
Not only did ArcSight deliver on the improved UI and feature set for Logger 2.5, but like any good appliance vendor, they've popped their collar. And by collar I mean front bezel.
The red with blue LED scroller is definitely a better look than the previous model. Still no backlit logo, though. :-)
Friday, June 13, 2008
ArcSight's new Logger Apps
ArcSight is releasing the Logger 2.5 software here soon, and along with it new appliances with some interesting variations. You can check out the vitals on the ArcSight website here.
Prior versions of Logger were available in small, large, and SuperSized, where the SuperSized box was the same spec as the large box with artificial limitations removed via license key. So really only two boxes, all self-contained, all CentOS, all MySQL.
Now, there's a whole new batch. It would appear by the naming designations that they are going after PCI compliance heavily with the L3K-PCI, which must have retention policies and capabilities that make it easier to comply with PCI-DSS 5.2. Another model supports SAN-attached storage and Oracle, so you can grow your Logger with SAN instead of NAS. And finally, there are two new L7100 models with 6x750GB drives. If I'm doing my math right, that works out, after compression, to about 36TB of log storage. That's a significant increase over the 12TB that the large/SuperSized L5K boxes shipped with.
Update: Talked with Ansh at ArcSight today, and apparently the 2.5 software adds columns to the CEF event view. That's a big deal for folks using CEF events in Logger, and may make CEF-only the preferred format for most Logger users. The new software also includes real-time alert views (like Active Channels in ESM), as well as a number of other enhancements to alerts and search filters and more. Current customers can download 2.5 from the software site.
DefCon 2008 Quals
The lineup for this year's DefCon CTF competition has been decided.
And, better still, Doc Brown has once again posted the challenges (with answers) for everyone to try on their own. Brain calisthenics, to be sure. Get your mental leg warmers on and jazzercise kenshoto style.
Sunday, June 1, 2008
From My Inbox... (ArcSight Connectors & Logger)
SC left a comment on an earlier post on ArcSight Logger and CEF vs. Raw formats...
Hi Paul,
Do you know if its possible to "insert" logs into the logger or SmartConnector if the logs are on a physical storage, e.g. DVD or external storage?
Thanks.
Kind Regards,
SC
There are probably a number of ways to do this, but I've only tested one. In earlier configurations of our syslog infrastructure, there were a couple single points of failure. In order to meet log analysis commitments, we would reload lost syslog data from file.
Start by configuring a 'Syslog Pipe' Connector. Since the connector only has to be online with the Manager when you're manually inserting logs from file, you have greater flexibility about where this Connector will live. It lived on my laptop for a while. When you set it up, point to a path that isn't already used for anything else. Then you can simply start the Connector and pipe the raw log file(s) to the named pipe:
# cat oldsyslogs.txt >> /var/spool/my_arcsight_pipe
Depending on how far the date/time stamps of the events in those files are from $Now, ArcSight will probably throw some errors. It will maintain the "End Time" from the raw log events, and apply "Manager Receipt Time" as the time the manager collects the parsed events from the Connector. This will absolutely screw up any correlation rules you wanted these events to be subject to. Sorry, no easy way around that.
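If the logs you're loading span a whole directory - say, a DVD full of daily files - you can feed them through the same pipe in a loop (paths here are placeholders):
# for f in /mnt/dvd/syslog/*.log; do cat "$f" >> /var/spool/my_arcsight_pipe; done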
Friday, May 23, 2008
Game Theory and Mac Malware
Cloudmark's Adam J. O'Donnell wrote a fascinating article which uses game theory to predict the tipping point for mass malware attacks on Mac OS X.
It's very hard to construct a game that can accurately represent an uncontrolled environment like the one that malware and botnets currently exist in - and to his credit, Adam fully acknowledges this. But Adam's article is great for two reasons.
First, I think that his estimate of Macs needing to break 17% market share before they become worthwhile to malware authors isn't too far off. And second, the game he's constructed is a great model for risk-based analysis of a partially unknown threat environment (which is a fancy way of saying how likely you are to be pwned in the future).
I so often find myself ranting about infosec articles and papers that fail at basic math, let alone reasonable science, that it's nice to see something of this quality hit the trade press. Thanks, Adam.
Thursday, May 22, 2008
TJX vs. CrYpTiC_MauleR
rsnake reported today that TJX has fired an employee who goes by the handle CrYpTiC_MauleR. He was apparently fired for disparaging remarks he left on a sla.ckers.org message board.
So who is CrYpTiC_MauleR and why should you care? He's some college kid working a retail job at TJ Maxx, and you probably shouldn't. Unless you're TJX, that is. And not for the reasons you might think.
Sure, this kind of thing is bad PR for TJX coming and going. And sure, it's disloyal and immature of an employee to trash his employer to the public, especially when it exposes their security vulnerabilities to self-proclaimed hackers. So you might think that firing this guy is an appropriate response. And maybe it is. But I don't think so.
Now, don't get me wrong, I don't believe for a second that this guy is an actual whistleblower. PCI's not a law, and rsnake isn't a regulatory or law-enforcement agency (that I know of), so what he did doesn't even approach whistleblower status. But his now-public firing is going to have a stifling effect on employees, both retail and corporate. And that is a failure of TJX's security program (one of many if you believe CrYpTiC_MauleR).
The thing is, a company needs to have a method of intaking security concerns from staff, and whatever that looks like needs to be communicated to staff, especially from company leadership, like the loss prevention exec that CrYpTiC_MauleR claims to have spoken to. Firing this kid for airing his concerns to the only people that would listen to him is certainly TJX's prerogative and not at all unexpected, really. But it also points out that the culture that allowed the initial breach to occur in the first place hasn't changed.
I've suggested before that TJX could stand to purge themselves. This only reinforces that opinion. If TJX can't change its overall security culture, it's only a matter of months before they're all over the news again.
Saturday, May 17, 2008
List of Malware Analysis Tools
Update: There is an updated version of this list of tools posted to my blog here.
The first step is to build a virtual machine with VMware, VirtualPC, or whatever you prefer. It should be as similar to your corporate image as you can make it, but it should not be on your domain. Also, if you select VMware Server, do not install VMware Tools into the VM. Sure, it makes things easier, but it also makes it easy for malware to determine that it's running in a VM and refuse to run. I would also recommend installing your company's AV scanner, but disable real-time scanning by default.
Once you've created your VM, you need to add some tools to make analysis possible. Here's the list of stuff in my VM.
Cygwin
- Didier Stevens' SpiderMonkey
- pefile
- Jim Clausing's packerid.py
- My ieget.sh
- Mozilla rhino debugger
GMER
catchme
Mandiant Red Curtain
OSAM Autorun Manager
Mike Lin's Startup Control Panel
HiJackThis / StartupList / ADSSpy
HashCalc
HHD Free Hex Editor
OllyDBG (also: Immunity Debugger)
Plugins:
- AnalyzeThis
- FindCrypt
- Hide Debugger
- OllyDump
- OllyFlow
- OllyDbg PE Dumper
ImportREC
iDEFENSE
- MAP
- SysAnalyzer
- HookExplorer
- SniffHit
- PEiD
SysInternals
- AccessEnum
- autoruns
- Filemon
- procexp
- psexec
- psfile
- psgetsid
- Psinfo
- pskill
- pslist
- psloggedon
- psloglist
- pspasswd
- psservice
- psshutdown
- pssuspend
- Regmon
- RootkitRevealer
- tcpvcon
- Tcpview
Firefox (JavaScript Console mod)
Also, having links to VirusTotal and CWSandbox in your VM is a good idea.
Wednesday, April 30, 2008
Can the Media Move the Ball on HIPAA?
I finally have a serious prediction for 2008: I predict that unauthorized access of medical records will be the new lost laptop story.
Reporting on the compromise of data through laptop loss/theft over the past few years has raised public awareness around data breaches and disk encryption. The upswing in incidents involving hospital employees accessing celebrity medical records will have a similar effect on awareness. I mention this because a former UCLA Medical Center employee was indicted yesterday on charges stemming from similar activity. What made this a criminal case and not just another firing is that the employee sold these records to a "media outlet" (tabloid).
The reason this is significant is that stories like this in the media raise public awareness of HIPAA requirements and of medical providers' capabilities: namely, that hospitals can review who accessed a patient's medical record and when, and that they have a way of determining whether or not that access was appropriate. The end result will likely be twofold. First, more patients will be aware of these capabilities and will start doing things like asking doctors and hospitals for this information. And second, the hospitals that aren't currently reviewing the logs from their EMR systems will feel some pressure to start doing so.
Monday, April 28, 2008
Don't Read My Blog
At least, don't read it this week. I don't have anything to say that you should read before you read the transcript of Clay Shirky's "Looking for The Mouse" keynote at Web 2.0.
But I do have this to say - feeling inspired by Clay's speech, I've revived 2 old projects and started a new one and am making it my goal to finish them this year. Thanks to Dave Aitel for posting a link to this on Twitter.
Wednesday, April 16, 2008
Cool Things To Do In West Michigan This Spring
Sorry in advance if Google sent you here looking for tourist attractions. Blame it on the title's lack of creativity or specificity. But while I have your attention, if you like security and/or beer, and will be in the vicinity of Grand Rapids within the next 6 weeks, this could be your lucky day.
That's because...
- This Friday, April 18th, Matt Carpenter of Intelguardians is presenting to the Grand Rapids ISSA. His talk will basically be like drinking from the SANS 504 firehose. The best parts of a 6-day course, condensed into 90 minutes or less. Matt was my instructor for 504, and he's awesome. This will be an excellent talk.
- The following Tuesday is GRSec at Graydon's Crossing. Amazing pub food and a great beer selection. I like this place a lot.
- Next month, GR-ISSA has Jared DeMott coming to speak. Jared will be giving a presentation that dailydave readers have already seen a preview of. I met Jared after attending his fuzzing talk at Black Hat last year. He's freakin brilliant to start, but also a very eloquent presenter.
- And there's a pretty good chance that there will be another GRSec the Tuesday after that!
Friday, April 11, 2008
It's The End of The (Security) World As We Know it... And I Feel Deja Vu?
If you've been following blogs or online trade press coming out of this week's RSA conference, then you no doubt have heard about the keynote that IBM's Val Rahmani gave in which she declared that, "The security business has no future." Now, that's the punch line she used to get into the trade press and onto the blogs (including mine), but the real gist of the talk was that the future of security is for vendors to bake security into infrastructure products, and that that's what IBM would be doing.
I'm not going to dissect Val's talk, but I do want to point out two interesting things. First is that Val is Tom Noonan's replacement at the security branch of IBM Global Services (formerly ISS). So why the GM of a consulting practice is talking about her offerings' futility in a public way is a little confusing and not good for morale. I'm sure that's not what she intended, but still.
Second, and perhaps more interesting, is that this year's keynote is eerily similar to the Bill Gates keynote from RSA 2006. Now, he didn't open with a shock-jock style punch line the way Rahmani did, but he could have. And he would have had the high ground. But Gates did talk a lot about what Microsoft was doing at the time to build secure, sustainable infrastructure. He also dragged out OneCare (now Forefront) and Vista as examples of Microsoft's advances in platform security. The stories I have read seem to indicate that Rahmani did not mention specific products or tactics that IBM would be sending to market.
So I guess if you're looking for a takeaway, it's that platform security has gained traction, at least as a talking point. And IBM is at least two years behind Microsoft in product positioning for security.
Friday, April 4, 2008
ArcSight Logger: CEF vs. Raw
Here's something for potential ArcSight Logger customers to ponder. The issue is whether you should use CEF formatted logs (post-Connector) or raw logs (pre-Connector) or both in your Logger environment. In this case, a picture is worth at least a few hundred words:
If you look carefully at that image, you can see that it shows the same event in both its raw syslog format and its Connector-ized CEF format. From my point of view, it boils down to use case. Analysis versus troubleshooting. Reporting versus response. The CEF formatted message is chock-full of metadata-and-labeling goodness. It's also overkill on the eyes. Log messages are already cryptic to the point of questionable usefulness. CEF amplifies that. The raw format, on the other hand, is easier to read, due largely to the fact that it's what your UNIX admins are used to seeing. But that's where the positives end. Raw syslog is all but unformatted, and trying to write a small chain of regexes that does a good job of parsing large quantities of syslog is a headache and a half.
Of course, you may have already realized that there is a right answer to this problem: Do both. Sure there's some overhead to consider, since you're going to pass syslog to a Connector that will then send raw events to Logger, CEF events to Logger, and CEF events to ESM if you have it. Or you could send raw syslog to Logger, have Logger forward it to a Connector and then configure the Connector to send CEF to Logger and ESM. There are probably many other complicated flows that you could implement as well, but you get the idea.
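To make the difference concrete, here's a purely illustrative pairing of a raw syslog line and the kind of CEF message it turns into. This is not actual Connector output; every field value here is made up, and only the general CEF layout (pipe-delimited header followed by key=value extensions) is the point:

Apr  4 09:15:02 fw01 sshd[4321]: Failed password for invalid user admin from 10.1.2.3 port 51234 ssh2

CEF:0|Unix|sshd|5.1|failed.login|Failed password|5|src=10.1.2.3 duser=admin dproc=sshd msg=Failed password for invalid user admin from 10.1.2.3

Same event, but the CEF version trades readability for fields you can actually filter, correlate, and report on.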
Tuesday, April 1, 2008
Binary File Visual Analysis Redux
I got a great comment on my post regarding simple binary file visual analysis from Erik Heidt. Erik made the very valid point that visual analysis of ciphertext is not a highly reliable way to distinguish "good" crypto from "bad." He used the example of an 8-bit XOR of a file as an ineffective method of encrypting data that also has random byte distribution.
Since there's nothing good on TV, I decided to see what XOR-ed file data looks like in gnuplot. So here's what I did.
Like before, I used the Netcat nc.exe binary. I then encrypted it using GPG and also encoded it using Luigi Auriemma's Xor utility. I then ran the three files through the Perl script from my previous post and then plotted them with gnuplot.
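If you don't have the xor utility handy, here's a rough Perl stand-in that does the same kind of single-byte XOR encoding. To be clear, this is not Luigi's tool, just a sketch of the same idea; the script name is mine:

#!/usr/bin/perl
# xorfile.pl - XOR every byte of a file against a single-byte key
# Usage: xorfile.pl [infile] [outfile] [key 0-255]
use strict;
my ($in, $out, $key) = @ARGV;
die("Usage: xorfile.pl [infile] [outfile] [key 0-255]\n") unless defined $key;
open(IN, $in) or die("Could not open file: $in\n");
binmode(IN);
my $data = do { local $/; <IN> };   # slurp the whole file
close(IN);
my $encoded = "";
foreach (split(//, $data)) {
    $encoded .= chr(ord($_) ^ $key);   # flip each byte against the key
}
open(OUT, ">$out") or die("Could not write file: $out\n");
binmode(OUT);
print OUT $encoded;
close(OUT);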
Here's the plot of the original binary:
Here's the plot of the GPG-encrypted file:
And here's the plot of the XOR-encoded file:
As you can see, the XOR plot has peaks and valleys that are characteristically similar to the original binary. I don't want you to take away from this that this visual analysis method is highly reliable in all situations. I only wanted to share that basic XOR encoding does stand out visually.
Sunday, March 30, 2008
On Firewall Obsolescence
This has been a popular topic the past week. But I think that in both Richard's and William's respective positions, it's a game of semantics that's being played here. The real debate on whether or not firewalls are still relevant is actually taking place in marketing meetings, not product development meetings, though the two are inextricably linked. I'll tell you why.
Firewalls are not an actual control, or even a technology. They are a concept. Without reposting the chronological history of the firewall here, I'll simply point out that firewalls have been made up of varying combinations of packet filters, state tables, address translators, and proxies. Vendor-driven convergence has further complicated defining what a firewall is by adding VPN tunneling, content filtering, anti-virus, and IDS/IPS functionality to their products and continuing to call them "firewalls."
So let's do away with the repackaged-converged-for-post-2002-sales firewall and break it down by function. Here's what's not obsolete:
- Packet filter with a state table - Easy and fast to deploy for outbound Internet traffic. Most modern OS's come with one built in. If you're doing outbound traffic without it and instead proxying everything you are either:
- Using SOCKSv4 on AIX 3.3 and your uptime dates back to 1995 ...or...
- Only allowing SMTP and DNS outbound, which makes you my hero
- Proxies - Content filtering or inline anti-virus with your "UTM" appliance? Transparent proxy. Web application firewall? Reverse proxy. PIX? They call yours fixups. Check Point? They call yours "Security Servers." FYI - you have proxies.
- NAT - Unless you're The University of Michigan and have a spreadsheet to keep track of the public Class A's that you own, you need NAT. For that matter, University of Michigan has NAT, too. But they don't have to. You do.
And here's what is obsolete:
- tcp_wrappers - Replaced by a stateful firewall in the kernel.
- Straight up packet filtering - Memory is cheap. Your routers all have state tables now.
The thing is, this isn't new functionality, it's new content. Back in 2001 I was writing patterns for Raptor's HTTPD proxy to discard inbound CodeRed exploit attempts against web servers. And since that time, I've been writing custom Snort rules to detect or block specific threats including attacks against web servers. Whether by proxy or by packet payload inspection, we've already seen the underlying concept that will make these new security features possible. The hard part isn't getting the data to the rule set. The hard part is going to be writing rules to prevent injection attacks via XML. This is where security vendors discover that extensible protocols are a real bitch.
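Just to illustrate what I mean by content, here's a made-up rule in the spirit of those CodeRed-era filters. The variables, strings, and sid are placeholders, not something to deploy as-is:

alert tcp $EXTERNAL_NET any -> $HTTP_SERVERS $HTTP_PORTS (msg:"LOCAL CodeRed-style .ida overflow attempt"; flow:to_server,established; uricontent:".ida?"; nocase; content:"NNNNNNNN"; classtype:web-application-attack; sid:1000001; rev:1;)

Writing that against a fixed worm payload is easy. Writing the equivalent for arbitrary XML-borne injection is the part that's going to hurt.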
Wednesday, March 26, 2008
Useless Statistics Returns!
Breaking News: Information Security is still not a science and vendors still suck at statistics.
What? You already knew that? Well, somebody forgot to tell WhiteHat. You'd think they might learn from their competitor's mistakes.
I'll save you 4 of the 5 minutes necessary to read the whole thing by summarizing WhiteHat's press-release-posing-as-a-study for you. They collected vulnerability statistics from an automated scanning tool that they sell (and give away demo use of during their sales cycle). From that, they generated some numbers about what percent of sites had findings, what types of findings were the most common, and what verticals some of the sites' owners are in. Then they let their marketing folks make silly claims based on wild speculation based on inherently flawed data. Anyway, I guess this isn't the first one that WhiteHat has put out there. They've been doing it quarterly for a year. But this is the first time I had a sales guy forward one to me. Can't wait for that follow-up call.
So what's wrong with WhiteHat's "study?" First, they collected data using an automated tool. Anybody that does pen-testing knows that automated tools will generate false positives. And based on my experience - which does not include WhiteHat's product, but does include most of the big name products in the web app scanner space - tests for things like XSS, CSRF, and blind SQL injection are, by their nature, prone to a high rate of false positives. No coincidence, XSS and CSRF top their list of vulnerabilities found by their study.
Second, their data is perhaps even more skewed by the fact that they let customers demo their product during their sales cycle. And if you want to demonstrate value doing pre-sales, you will want to show the customer how the product works when you know there will be results. Enter WebGoat, Hacme Bank, and the like. These are an SE's best friends when doing customer demos because there's nothing worse than scanning the client's web app only to come up with no results. It doesn't show off the product's full capabilities, and it pretty much guarantees that the customer won't buy. Of course, what these do to the "study" is to artificially drive the number of findings up. Way up.
Finally, and perhaps best of all, when Acunetix did this exact same thing last year, it turned into a giant, embarrassing mess. Mostly for Joel Snyder at NetworkWorld. The real killer for me is that I know that Jeremiah Grossman, WhiteHat's CTO and a smart guy, was around for that whole thing.
Oh, well. Maybe we'll luck up and Joel Snyder will give us a repeat performance as well.
But just like last time, the real loser is the infosec practitioner. This kind of "research" muddies the waters. It lacks any rigor or even basic data sampling and normalization methodologies. Hell, they don't even bother to acknowledge the potential skew inherent in their data set. It's not that WhiteHat's number is way off. In fact, I'd say it's probably pretty reasonable. But if they - or if infosec as a professional practice - want to be taken seriously, then they (and we) need to do something more than run a report from their tool for customer=* and hand it to marketing to pass around to trade press.
Monday, March 24, 2008
Quicky Binary File Visual Analysis
I've been reading Greg Conti's book, Security Data Visualization. If I'm honest, I was looking for new ideas for framing and presenting data to folks outside of security. But that's not really what this book is about. It's a good introduction to visualization as an analysis tool, but there's very little polish and presentation to the graphs in Greg's book.
It's what I wasn't looking for in this book, however, that wound up catching my eye. In Chapter 2, "The Beauty of Binary File Visualization," there's a comparison (on p31) that struck me. It's a set of images that are graphical representations of Word document files protected via various password methods. It was clear by looking at the graphs which methods were thorough and effective and which ones weren't. And it struck me that this is an accessible means of evaluating crypto for someone like me who sucks at math. And, hey, I just happen to have some crypto to evaluate. More on that some other time, but here's what I did. It's exceedingly simple.
I wanted to take a binary file of no known format and calculate how many times a given byte value occurs within that file, in no particular order. I wrote a Perl script to do this:
#!/usr/bin/perl
use strict;

my $buffer = "";
my $file = $ARGV[0] or die("Usage: bytecnt.pl [filename]\n");

# Slurp the whole file in binary mode
open(FILE, $file) or die("Could not open file: $file\n");
my $filesz = -s $file;
binmode(FILE);
read(FILE, $buffer, $filesz, 0);
close(FILE);

# Tally how many times each byte value appears
my @bytes = ();
foreach (split(//, $buffer)) {
    $bytes[ord($_)]++;
}

# Print "value count" pairs; bytes that never occur get a count of 0
for my $i (0 .. 255) {
    print "$i ", ($bytes[$i] || 0), "\n";
}
This script will output two columns worth of data, the first being the byte value (0-255), and the second being the number of times that byte value occurred in the file. The idea is to redirect this output to a text file and then use it to generate some graphs in gnuplot.
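In other words, something like this (the encrypted file's name below is just a placeholder for whatever your GPG output is called):

perl bytecnt.pl nc.exe > bin.dat
perl bytecnt.pl nc.exe.gpg > crypt.dat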
For my example, I analyzed a copy of nc.exe and also a symmetric-key-encrypted copy of that same file. I generated two files using the above Perl script, one called "bin.dat" and another called "crypt.dat". Then I fired up Cygwin/X and gnuplot and created some graphs using the following settings:
# the X axis range is 0-255 because those are all of
# the possible byte values
set xrange [0:255] noreverse nowriteback
set xlabel "byte value"
# the max Y axis value 1784 comes from the bin.dat file
# `cut -d' ' -f2 bin.dat | sort -n | tail -1`
set yrange [0:1784] noreverse nowriteback
set ylabel "count"
I then ran:
plot "bin.dat" using 1:2 with impulses
...which generated this:
I then repeated the process with the other file:
plot "crypt.dat" using 1:2 with impulses
...which generated this:
As you can see, there's a clear difference between the encrypted file and the unencrypted file when it comes to byte count and uniqueness. Using the xrange/yrange directives in gnuplot helps emphasize this visually as well. The expectation would be that weak, or "snake-oil" crypto schemes would look more like the unencrypted binary and less like the PGP-encrypted file.
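One extra trick: if you want both distributions side by side in a single graph, gnuplot will happily overlay them from the same .dat files:

plot "bin.dat" using 1:2 with impulses title "original", \
     "crypt.dat" using 1:2 with impulses title "encrypted"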
Thursday, March 20, 2008
Prior Art
OK, I don't have anything resembling a patent claim here, but a year ago I described a need and a potential solution for targeted phishing attacks against Credit Unions. Today, Brandimensions announced StrikePhish, a service offering in that very space.
So instead of the millions of dollars I deserve, I'll settle for StrikePhish buying me a beer at the next con I see them at. ;-)
Wednesday, March 19, 2008
B(uste)D+
Apparently, SlySoft's new release of AnyDVD has the ability to strip BD+, Blu-Ray's DRM scheme that has gotten some very credible acclaim.
It turns out that one of the folks behind BD+, Nate Lawson, is giving a talk on DRM at RSA next month. I'm interested to see what Nate has to say about BD+ being broken. That's why I asked him. :-)
Cool Firefox / JavaScript Trick
Here's an easy trick for deobfuscating JavaScript within Firefox. Via BornGeek and Offensive Computing.
- Launch Firefox and browse to 'about:config'
- Create a new boolean config preference named browser.dom.window.dump.enabled and set it to 'true'
- Close Firefox. Now run "firefox.exe -console". A console window will open along with the browser window.
- Edit the file containing the obfuscated JavaScript and replace "document.write" with "dump"
- Open the file in Firefox.
- Disable NoScript, Firebug, or other scripting add-ons and reload the file if necessary.
- Switch to the JavaScript Console window. Now you can read the deobfuscated code.
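As a trivial, made-up example, a line like this in the original page:

document.write(unescape("%3C%73%63%72%69%70%74%3E"));

becomes this after the edit:

dump(unescape("%3C%73%63%72%69%70%74%3E"));

When you load the file, the decoded markup (here just a <script> tag) shows up as plain text in the console instead of being rendered and executed.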
Sunday, March 16, 2008
My Not-So-Secret Glee
When I heard that the only remaining semi-above-board sploit broker is calling it quits, I couldn't help but smile. We still have 3Com and iDefense buying exploits outright. For now. But to see that the "0Day eBay" model is failing for reasons beyond a sudden lack of staff, well that is good news.
I wish Adriel, Simon, and the rest of the folks at Netragard / SNOSoft no ill will whatsoever. I hope their business continues to prosper and that they continue to be positive, active members of the infosec community. That said, I've mentioned my stance on the buying and selling of software vulnerabilities before. There are very real ethical issues here. And, more importantly, there are very real security implications for corporations and end users, who seem to have no representation in the discussion about those ethics.
Thursday, March 13, 2008
Abstract
I've asked to be considered for presenting at this year's ArcSight User Conference. Today I sent my abstract over and will hopefully be on the agenda this year to talk about ArcSight Tools.
The incident handlers at [my company] use ArcSight Tools in their investigations as a way to quickly and easily collect additional intelligence from existing data stores in their environment. Come see how, with very little custom code, they have harnessed existing applications and services to quickly gather in-depth information about servers, users, workstations, and external hosts during an investigation. In addition to seeing how [my company] has leveraged ArcSight Tools, learn some of the simple tricks that will help you go back to your office and do the same.
I've blogged about Tools before and this presentation aims to be an expansion of that concept. The truth is, there's lots of great data in your environment that isn't in a log flow somewhere. And while it maybe doesn't belong in your SIM, you want it at your fingertips when investigating a potential incident. It's good to have answers to questions like, "What does this server do?," or "Is this user a local admin?," or "What is this person's boss' phone number?" close at hand.
Anyway, I hope to have more to say about ArcSight Tools soon.
Sunday, February 24, 2008
Anonymous writes... (ArcSight Resources)
Anonymous writes,
Hi Paul,
I would like to ask if you know of any resources I can reference for ArcSight correlation rules authoring.
In particular, I am looking for Web App and VOIP Security. Thanks in advance.
So first of all, there's an unfortunate shortage of sources on building content for ArcSight. It's part of why I blog about it, because there are only a few people putting information out there. And if SIMs in general are going to mature, then best practices and an open community are part of that maturation. Besides blogs like mine, the ArcSight forums are a good place to get questions answered and share content. Beyond that, I would highly recommend the annual User Conference that ArcSight puts on. For those that can't attend the User Conference, the slides are published to your software site and are definitely worth downloading. And of course there are ArcSight's own training offerings. But that is pretty much the extent of resources available at the moment.
As far as ways to monitor Web Apps and VoIP Security with ArcSight, it's going to boil down to the log sources you have available. Here are a couple of ideas I have off the top of my head.
For Web App there are tons of options. ArcSight works with several web security proxies, IIS and Apache, most IDS/IPS products under the sun, web app servers like WebLogic and WebSphere, and the more popular commercial databases like Oracle, MS-SQL, and DB2. Depending on what's in your web environment and which sources you're drawing from, you have lots of options here. An easy idea might be to create a filter to sift through web server logs for special characters (like < > ' or - ) or requests where the web server returned a 500 or some other obscure error (not 403 or 404).
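If you want to prototype that idea outside of ArcSight first, a quick-and-dirty Perl pass over an Apache-style access log would look something like this. The field positions and the character list are just examples; tune them to your own logs:

#!/usr/bin/perl
# weblog_sift.pl - print requests with suspicious characters or 5xx responses
use strict;
while (my $line = <>) {
    # combined log format: ... "METHOD /uri HTTP/1.x" status size ...
    my ($request, $status) = $line =~ /"([^"]*)" (\d{3}) /;
    next unless defined $status;
    if ($status =~ /^5/ or $request =~ /[<>']|--/) {
        print $line;
    }
}

Run it as: perl weblog_sift.pl access_log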
VoIP is a trickier one to go after since there's no ArcSight connector for CallManager or whatever SIP gateway you use. You could write one with the Flex Connector SDK, but I'm not sure how great your SIP gateway logs are to begin with when it comes to security. I think switches, IDS/IPS, and firewalls are your best bets here. You'd want to filter firewall logs for packets sourced from your VoIP VLAN address space that might indicate a rogue device connected to your voice network. (Which reminds me, a new version of voiphopper just came out.) You might also want to filter IDS logs for traffic sourced from your VoIP VLANs as well. Hopefully you've already got "switchport port-security maximum 2" set on all of your VoIP ports (and all of your userland switch ports in general) to prevent ARP spoofing/poisoning attacks. In which case, if you send your switch syslogs to ArcSight, a rule to alert on 'NOMAC' messages could be very useful. These can be regular errors, but they also occur when someone attempts an ARP-based MITM attack on a port where port-security has been configured.
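For reference, the interface config I'm talking about looks roughly like this on a Cisco access port. The interface name and the violation action are just examples:

interface FastEthernet0/1
 switchport mode access
 switchport port-security
 switchport port-security maximum 2
 switchport port-security violation restrict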
Anyway, I hope that helps, Anonymous. Good luck with your projects.
Monday, February 18, 2008
Frank Boldewin Releases "More Advanced Unpacking - Part II"
Just last month I posted about Frank's first release in this series, along with some other recommended reading. Here's Part II via Offensive Computing. Awesome stuff. Frank is essentially giving away what he could charge thousands for as a BlackHat Training course or something similar. It's exceptionally generous of him to do so. I've been in awe of Frank's malware unpacking skills for a while now, and I hope he continues to release more tutorials.
Friday, February 15, 2008
Ranting About Insider Threat
This is another one of those posts that started out on a listserv. The original thread was about using contracting staff for security work. Along the way, someone mentioned insider threat and I went off. Enjoy.
> While he or she does not necessarily have access to
> many/all functional areas (hopefully), he/she would
> have an easier time compromising the organization's
> security. Being that internal threats are much more
> prevalent than external ones, I'd argue that this
> poses a greater risk when compared to equally-
> screened contract employees under the same NDAs, etc.
I'm not sure that I agree with your statement that internal threats are much more prevalent, or even more prevalent. Looking at the CSI/FBI Survey results over time, the Internet surpassed internal networks as the origin of attack in 2001, and the gap has been widening ever since. I don't think the data bears this conclusion out.
If I put my tinfoil hat on here for a minute, I do think that the way that insider abuse is hyped and promoted in infosec trade press is intentionally vendor-driven. Vendors whose products address border security at layers 3-4 found their sales slumping a few years ago when everybody finally got a firewall, everybody that was going to get NIDS got NIDS, and everybody that had to comply with GLBA got DLP.
So they started telling ghost-story-anecdotes about insiders and how we need to watch our staff like they're an elite team of Chinese hackers. Of course, they never suggested how they would solve the real insider issue. Insiders getting unauthorized access to data isn't the problem. It's what they do with the data that they *are* authorized to access that's the problem. When somebody comes up with IDS signatures for bad intentions, please let me know. I'll be the first one with my checkbook out.
I guess my bottom line on insider threat is not that it's wholly imaginary, but that it's not the same as external threats. Cool appliance-based technologies don't serve you as well inside. Doing things like monitoring use of administrative accounts, auditing financial application access for proper separation of duties, and proper pre-hire screening of staff go a whole lot further than watching what files people copy to their thumb drives and guessing what they might do with them.
14.4Kbps Nostalgia
I've got some decent display real-estate on my desk, roughly 576 square inches total, so choosing background wallpaper is not a task I typically take lightly. Well, last week, my selection of background aesthetic was undertaken a bit hastily. There was fallout. In what can only be described as a Bostonian fashion, Aqua Teen Hunger Force was once again poorly received. This time by my co-workers.
So today I went to DeviantArt looking for some new wallpaper, and stumbled upon these:
They're ANSI art, all done by Thor (iCE), for BBS's I used to call back when I was in high school. Legion HQ was the place, too. No real names. Just teaching each other random bits of knowledge and (mostly) cursing each other out like the angsty, angry misfits that we were. I didn't know it then, but I made some good friends during this time. I still keep in touch with a few, 14 years later. But what ever happened to Evil Dude?
Anyway, it's cool to see these again. They remind me of a much simpler time... of Desqview and Telix, of Pascal and WordPerfect, of Kings Quest and fractint, of 2600 meetings and boot sector viruses. It was all new back then. I was all new back then.
Sometimes in life, every few years or so, you stop and look back at who you used to be. The long blue hair has been replaced with short hair that's starting to speckle grey. I still listen to the same music, but it's not cool anymore. But I think that, aside from having never gotten a tattoo, the 16yr old I used to be would've been pretty jazzed to see how his life turned out.
Tuesday, February 12, 2008
It was 20 years ago today... (minus 19)
This time last year, I decided to act on my idea to steal Matasano's idea for an informal meetup for infosec practitioners, proselytizers, patrons, and partygoers. And while I can take credit for something that boils down to, "Hey, who likes beer and security? Then let's go to the bar!", I can't take credit for all of the awesome conversation and fascinating stories from some of the best and brightest infosec minds in the midwest or anywhere for that matter.
But that's what GRSec has become. I've been to most of the 12 meetups, and I've had a good time with and learned from others at each one. And so I want to extend a genuine thank you to everyone who's come out to a GRSec.
If you're in the Mid- or West Michigan area and want to network and talk shop with other infosec types, I encourage you to join us for GRSec. Or if you're in another geoloc, check out CitySec.org and find a meetup in your area.
Monday, February 11, 2008
ArcSight 4.0 Part 2: Awesomeness
All you need to do to know that ArcSight has made some big improvements to its v4.0 ESM product is read a press release. Or even just launch the 4.0 console:
And while trending, improved reporting, identity correlation, and portable content are all great feature adds, you can read about them anywhere and everywhere. So I'm not going to talk about them. What I am going to talk about are the little changes to the 4.0 console that I have found to be very useful. The stuff that you might not find if someone didn't point it out.
Let's start with filters. This one you'll probably find, but I just want to say, "Hallelujah!" Seriously, this makes me want to do backflips down the cube aisles like John Belushi in The Blues Brothers. They fixed the filter editor!
If you ever created a big, complicated filter in 3.x only to open it later to find a hideous tangled nest where your neatly organized filter had been before, then this is a victory for you. Take a look:
The other obvious improvement in the filters is automatic escaping of characters. Here's a screen shot of that:
This is an interesting feature add for a number of reasons. The big advantage to you is that when you send results to external formats (think HTML or PDF reports or CSV exports), you don't have to do escaping yourself. I suspect there are a number of advantages to the ArcSight Manager and Web components as well. I don't know, so I'll leave it to you to speculate.
One of the new features that I really like is in the right-click menu of the Events tab of the Case tool.
"Retrieve correlated events" will do exactly that. If, like me, you have cases that open with correlated events only, but you want to record or examine the events that triggered the case, this saves several minutes. Very handy.
The final feature that I wanted to talk about, and my personal favorite, has to do with an enhancement to event annotation. We use event annotation as a way to create an audit trail for log review. The selected events load in an active channel and then our handler-on-duty reviews the events, opens cases where necessary, and changes the events' annotation stage to "Closed" as a way to indicate systematically that they have been reviewed. This adds up to a good 10-15 hours of work that week at least, so anything we can do to speed the process along is greatly appreciated. Starting this month, we will be migrating to using the isReviewed event annotation flag.
The main reason for doing this, though, has more to do with a menu item and shortcut that's been added to the Active Channel / Grid view.
Now instead of 5 mouse clicks, Ctrl-R marks the events as reviewed. It's a nice streamline if you use the feature.
There are lots of other enhancements to the 4.0 console, many of which I probably haven't found yet. But these were the ones that jumped out at me and will have a positive impact on the way I use the tool.
Sunday, February 10, 2008
ArcSight 4.0 Part 1: Oddities
This past week, we upgraded our production ArcSight environment from 3.5 SP2 to 4.0 SP1. We've been running ArcSight 4.0 in our test environment since August 2007, but as with all things "test" there are always a few things that you won't discover in a test environment.
This is the first part of a 2-part series on the upgrade experience. I want to describe some technical challenges that we experienced that you won't find in the upgrade documentation in hopes that someone else finds this helpful.
Before I start talking about what went wrong, I should say that the process was seemingly painless aside from what I describe here. I say seemingly, because I didn't actually do it. But Tim, who did the 2-part upgrade and redesign over the course of three weeks, was still smiling on Thursday following the upgrade. He seems to have emerged from the gauntlet unscathed. I think this is because Tim's a bad-ass and also because ArcSight dramatically improved the upgrade process in 4.0.
So the first problem we ran into has to do with some changes to the Jetty code in the ArcSight manager between 3.5 and 4.0. Here's the error we got:
[2008-02-06 13:50:55,727][INFO ] [default.com.arcsight.server.Jetty311ServletContainer] [initialize] Key Store: [JKS] /opt/arcsight/manager/config/jetty/keystore com.arcsight.common.InitializationException: Exception initializing 'com.arcsight.server.Jetty311ServletContainer': The keystore may not contain more than 1 entry. Please remove excessive entries. at com.arcsight.server.Jetty311ServletContainer.initialize (Jetty311ServletContainer.java:288)
The keystore file was the same one we'd been using since 3.0. We generated our own CA key pair and certificate from OpenSSL and signed the certificate for the keystore file with it. We then added the CA certificate to the cacerts file that is used by all of the other components. While going through this process, I added the CA cert to the keystore file for posterity.
The CA cert being present in the keystore file is what caused the error, and editing the keystore with 'keytoolgui' to remove the CA cert was all it took to get it back up and running.
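If you'd rather stay on the command line than fire up keytoolgui, the stock JDK keytool can do the same cleanup against a JKS keystore. The alias below is hypothetical, so check the -list output for the real one first:

keytool -list -v -keystore /opt/arcsight/manager/config/jetty/keystore
keytool -delete -alias mycacert -keystore /opt/arcsight/manager/config/jetty/keystore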
The other issues that we ran into occurred post-upgrade and had to do with upgraded content. The first issue we saw isn't really an issue at all; rather, ArcSight has improved some of its logic around Active Channels. Specifically, Active Channels that were created using one time stamp but configured to sort on another time stamp will give an error now. For example, if you created a channel and set EndTime for "Use as time stamp," and then on the Sort tab set a sort on ManagerReceiptTime, this would cause an error in 4.0. In 3.5 it would merely hurt the channel's performance, but it would eventually load. This is a good change, but it may mean having to edit some of your old (and presumably really slow) Active Channels before they work again.
The second issue we saw was around filters. ArcSight has made some major improvements to Filter objects, and I'll talk more about that in the upcoming post. However, one drawback seems to be a bug in the parsing/escaping enhancements in 4.0. Filters that use a string match containing square brackets ( [ or ] ) will return null sets. There is no error; the only symptom is empty results. In our case, all of the brackets appeared in filters matching on the Name field (gotta love undocumented syslog formats). The solution is to revise your filters so they don't use the brackets.
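If you have a lot of filters and want to find the offenders proactively, one rough approach is to export the affected filters to XML (for example with the manager's archive utility) and grep the export for bracket characters. This is just a sketch, and filters.xml is a hypothetical export file name:

# Find any lines in the export that contain a literal [ or ]
# The character class [][] matches either bracket
grep -n '[][]' filters.xml

Any hits are candidates for rewriting before you rely on those filters in 4.0.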