Monday, December 28, 2009

Malware Analysis Toolkit for 2010

Back in 2008 I posted a list of the tools I use for doing malware analysis. The tools I use have changed over time, and rather than just talk about a couple of recent additions, I decided I'd put a complete, current list up with links. This is by no means a comprehensive list of malware analysis tools; it's just what I like and use.

Platform
  • VMware Workstation
  • The "vulnerable stuff:"
    • Windows XP
    • Internet Explorer 7/8
    • Firefox
    • Acrobat Reader
    • Flash Player
General Tools
Analysis Tools

Binary Tools
JavaScript & HTTP Tools
PDF & Flash Tools
Web Sites as Tools

Wednesday, November 18, 2009

ArcSight Logger VS Splunk

You are here because you are searching for information on Splunk vs. ArcSight Logger. I actually wrote this post months before posting it, but sat on it for reasons that may become apparent as you read on.

If you want to hear me talk about my experience with Logger 4.0 through the beta process and beyond, you can check out the video case study I did for ArcSight. In short, Logger is good at what it does, and Logger 4.0 is fast. Ridiculously fast.

But that's not what I want to talk about. I want to talk about the question that's on everyone's mind: ArcSight Logger vs. Splunk?

Comparing features, there's not a strong advantage in either camp. Everybody's got built-in collection based on file and syslog. Everybody's got a web interface with pretty graphs. The main way Logger excels here is in its ability to natively front-end data aggregation for ArcSight's ESM SIEM product. But if you've already got ESM, you're going to buy Logger anyway. So that leaves price and performance as the remaining differentiators.

Splunk can compete on price, especially for the more specialized use cases where Logger needs the ArcSight Connector software to pick up data (e.g. Windows EventLog via WMI, or database rows via JDBC). And if you don't care about performance, implying that your needs are modest, Splunk may be cheaper even for the straightforward use cases because its licensing model scales downward. So for smaller businesses, Splunk scales down.

For larger businesses, Logger scales up. For example, if you need to add storage capacity to your existing Logger install, and you didn't buy the SAN-attached model, you just buy another Logger appliance. You then 'peer' the Logger appliances, split or migrate log flows, and continue to run search & reporting out of the same appliance you've been using, across all peer data stores. With Splunk? You buy and implement more hardware on your own. And pay for more licenses.

My thinking on performance? Logger 4.0 is a Splunk killer, plain and simple. To analogize using cars, Splunk is a Ford Taurus for log search. It gets you down the road, it's reliable, you can pick the entry model up cheap, and by now you know what you're getting. Logger 4.0, however, is a Zonda F with a Volvo price tag.

To bring the comparison to a fine point, I'd like to share a little story with you. It's kind of gossipy, but that makes it fun.

When ArcSight debuted Logger 4.0 and announced its GA release at their Protect conference last fall, they did a live shoot-out between a Logger 7200 running 4.0 and a vanilla install of Splunk 4 on comparable hardware running the same Linux distro (CentOS) that Logger is based on. They performed a simple keyword search in Splunk across 2 million events, which took just over 12 minutes to complete. That's not awful. But the same search against the same data set ran in about 3 seconds on Logger 4.

This would be an interesting end to an otherwise pretty boring story if it weren't for what happened next. Vendors other than ArcSight - partners, integrators, consultants, etc. - participate in their conference both as speakers and on the partner floor. One of these vendors, an integrator of both ArcSight and Splunk products, privately called ArcSight out for the demo. His theory was that a properly tuned Splunk install would perform much better. Now, it's a little nuts (and perhaps a little dangerous) to be an invited vendor at a conference and accuse the conference organizer of cooking a demo. But what happened next is even crazier. ArcSight wheeled the gear up to this guy's room and told him that if he could produce a better result during the conference, they would make an announcement to that effect.

Not one to shy away from a technical challenge, this 15-year infosec veteran skipped meals, free beer, presentations, more free beer, and a lot of sleep to tweak the Splunk box to get better performance out of it. That's dedication. There's no doubt in my mind that he wanted to win. Badly. I heard from him personally at the close of the conference that not only did he not make significant headway, but that all of his results were worse than the original 12-minute search time.

You weren't there; you're just reading about it on some dude's blog, so the impact isn't the same. But that was all the convincing I needed.

But if you need more convincing: during the beta, we stuffed six months of raw syslog from various flavors of UNIX and Linux (3TB) into Logger 4. I could keyword-search the entire data set in 14 seconds. Regex searches were significantly worse. They took 32 seconds.

Monday, November 9, 2009

Reversing JavaScript Shellcode: A Step By Step How-To

With more and more exploits being written in JavaScript, even some 0-days, there is a need to be able to reverse exploits written in JavaScript beyond de-obfuscation. I spent some time this weekend searching Google for a simple way to reverse JavaScript shellcode to assembly. I know people do it all the time; it's hardly rocket science. Yet I didn't find any good walk-throughs on how to do this. So I thought I'd write one.

For this walk-through, I'll start with JavaScript that has already been extracted from a PDF file and de-obfuscated. So this isn't step 1 of fully reversing a PDF exploit, but for the first several steps, check out Part 2 of this slide deck.

What you'll need:
  1. A safe place to play with exploits (I'll be using an image in VMware Workstation.)
  2. JavaScript debugger (I highly recommend and will be using Didier Stevens' modified SpiderMonkey.)
  3. Perl
  4. The crap2shellcode.pl script, which you'll find further down in this post
  5. A C compiler and your favorite binary debugger

I'll be using one of the example Adobe Acrobat exploits from the aforementioned slides. You can grab it from milw0rm.

Step 1 - Converting from UTF-encoded characters to ASCII
Most JavaScript shellcode is encoded as either UTF-8 or UTF-16 characters. It would be easy enough to write a tool to convert from any one of these formats to the typical \x-ed UTF-8 format that we're used to seeing shellcode in. But because of the diversity of encoding and obfuscation showing up in JavaScript exploits today, it's more reliable to use JavaScript to decode the shellcode.

For this task, you need a JavaScript debugger. Didier Stevens' SpiderMonkey mod is a great choice. Start by preparing the shellcode text for passing to the debugger. In this case, drop the rest of the exploit, and then wrap the unescape function in an eval function:
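
To make that concrete, here's a minimal sketch of what the stripped-down script ends up looking like; the %u values below are placeholder bytes, not the exploit's actual payload:

// placeholder %u-encoded bytes -- substitute the sploit's real payload string
eval(unescape("%u9090%u9090%uc931%ue983"));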



Now run this code through SpiderMonkey. SpiderMonkey will create two log files for the eval command; the one with our ASCII shellcode is eval.001.log.
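
For reference, the run looks something like this, assuming you've built Didier's modified js binary and saved the stripped-down script as decode.js (that file name is mine):

$ ./js decode.js
$ ls eval.*.log
eval.001.log  eval.002.log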



Step 2 - crap2shellcode.pl
This is why I wrote this script: to take an ASCII dump of some shellcode and automate making it debugger-friendly.


---cut---

#!/usr/bin/perl
#
# crap2shellcode - 11/9/2009 Paul Melson
#
# This script takes stdin from some ascii dump of shellcode
# (i.e. unescape-ed JavaScript sploit) and converts it to
# hex and outputs it in a simple C source file for debugging.
#
# gcc -g3 -o dummy dummy.c
# gdb ./dummy
# (gdb) display /50i shellcode
# (gdb) break main
# (gdb) run
#

use strict;
use warnings;

# Emit the C scaffolding once, before reading any input.
print "#include <stdio.h>\n";
print "#include <stdlib.h>\n\n";
print "static char shellcode[] = \"";

while (my $crap = <STDIN>) {
  chomp($crap);  # don't let the trailing newline end up in the shellcode
  my $hex = unpack('H*', $crap);

  # Walk the hex four characters (two bytes) at a time, swapping each
  # pair of adjacent bytes to restore the payload's original byte order.
  for (my $i = 0; $i < length $hex; $i += 4) {
    my $a = substr $hex, $i, 2;
    my $b = substr $hex, $i+2, 2;
    print "\\x$b\\x$a";
  }
}
print "\";\n\n";

print "int main(int argc, char *argv[])\n";
print "{\n";
print "  void (*code)() = (void *)shellcode;\n";
print "  code();\n";
print "  exit(0);\n";
print "}\n";
print "\n";



---paste---

The output of passing eval.001.log through crap2shellcode.pl is a C program that makes debugging the shellcode easy.
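
If the plumbing isn't obvious, the invocation is just this (assuming you saved the script as crap2shellcode.pl):

$ perl crap2shellcode.pl < eval.001.log > shellcode.c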



Step 3 - View the shellcode/assembly in a debugger
First we have to build it. Since we know that this shellcode is a Linux bindshell, the logical choice for where and how to build it is Linux with gcc. Similarly, we can use gdb to dump the shellcode. For Win32 shellcode, we would probably pick Visual Studio Express and OllyDbg, though just about any Windows C compiler and debugger will work fine.

To build the C code we generated in step 2 with gcc, use the following:

gcc -g3 shellcode.c -o shellcode

The '-g3' flag builds the binary with full debugging symbols. This is necessary for debugging the binary, or at least it makes it a whole lot easier.

Now open the binary in gdb, display the shellcode as instructions ('display /50i shellcode'), set a breakpoint at main(), and run it.

$ gdb ./shellcode
(gdb) display /50i shellcode
(gdb) break main
(gdb) run



Sunday, October 18, 2009

Two-For-One Talk: Malware Analysis for Everyone

These two mini-talks were originally going to be blog posts, but I needed a speaker for this month's ISSA meeting. So I volunteered myself. Here are the slides.

Wednesday, September 23, 2009

Queries: Excel vs. ArcSight

Since ArcSight ESM 4.0, reports and trends have been based on queries. Considering that ESM runs on top of Oracle, a query in ESM is exactly what you think it is. Queries are an extremely flexible way to get at event data. But as the name implies, they go against the ARC_EVENT_DATA tablespace, and therefore you can't use them to build data monitors or rule conditions, since those engines run against data prior to insertion into the database.

Anyway, I've got a story about how cool queries are. And about how much of an Excel badass I am. And also about how queries are still better. Last month, I got a request from one of our architects who was running down an issue related to client VPN activity. Specifically, he wanted to know how many remote VPN users we had, hour by hour, for a particular morning. Since we feed those logs to ESM, I was a logical person to ask for the information.

So I pulled up the relevant events in an active channel and realized that I wasn't going to be able to work this one out just by sorting columns. So, without thinking, I exported the events and pulled them up in Excel. So here's the Excel badass part:



If you want to copy it, here it is:
=SUM(IF(FREQUENCY(MATCH(A2:A3653,A2:A3653,0),MATCH(A2:A3653,A2:A3653,0))>0,1))

So A is the column that the usernames are in. This formula uses the MATCH function to create a list of usernames and then the FREQUENCY function to count the unique values in the match list. You need two MATCH lists to make FREQUENCY happy because it requires two arguments, hence the redundancy. (It's an array formula, so it gets entered with Ctrl+Shift+Enter.) It took about an hour for me to put it together, and most of that was spent finding the row numbers that corresponded to the time segment borders.

But as I finished it up and sent it off to the requesting architect, I thought, there must be an easier way. And of course there is. So here's how you do the same thing in ESM using queries:

So, it's just EndTime with the hour function applied, TargetUserName with the count function applied, and the Unique box (DISTINCT, for the Oracle DBAs playing at home) checked. Then on the Conditions tab you create your filter to select only the events you want to query against. That's it.
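
For those same Oracle DBAs, the query this builds boils down to something like the following. The table and column names here are illustrative stand-ins, not ArcSight's actual schema:

SELECT EXTRACT(HOUR FROM end_time) AS hour_of_day,
       COUNT(DISTINCT target_user_name) AS vpn_users   -- the Unique checkbox
FROM   events                                          -- stand-in for the ARC_EVENT_DATA tables
WHERE  device_product = 'VPN-3000'                     -- illustrative; whatever your Conditions tab selects
GROUP BY EXTRACT(HOUR FROM end_time)
ORDER BY hour_of_day;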

Once the query is created, just run the Report Wizard and go. All told, it's about 90 seconds to do the same thing with a query and report that it took an hour to do in Excel.

Sunday, September 20, 2009

The 'Cyberwarfare' Problem

Last week I attended ArcSight's annual user conference in Washington, DC. More about that in a later post. During the conference, ArcSight hosted a panel discussion on cyberwarfare. In DC, where many of ArcSight's biggest customers are based, this is a hot topic, and there will be a lot of time spent discussing it and a lot of money spent on defending against it, maybe.

What struck me about the panel discussion were two comments, both made by James Lewis, one of the panelists and a director at the Center for Strategic and International Studies. At one point, Mr. Lewis invoked Estonia as an example of state-sponsored cyberwarfare and made the comment that "the Russians are tickled that they got away with it." Not ten minutes later, an audience member asked a question about retaliation against cyber-attacks. Mr. Lewis responded by pointing out the problem of attribution. That is, from the logs that the victim systems generated, the IP address(es) recorded can't reliably be used to identify the actual individual(s) responsible for the attack.

Now, I don't intend to pick on James Lewis. It just so happened that one person on the panel expressed the paradox of cyberwarfare. The attribution problem is a big problem for all outsider attacks, not just cyberwarfare. A decade ago, security analysts were calling it "the legal firewall" because US-based hackers would first hack computers in China, Indonesia, Venezuela, or another country that doesn't openly cooperate with US law enforcement, and then hack back into the US from there, creating an investigative barrier that would hinder or prevent an investigation from getting back to the attacker's actual location.

So, knowing that there's a very real problem with identifying even the source country for Internet-based attacks, it stands to reason that using the same limited forensic data to not only identify the actual source of an attack, but also to determine that it is in fact state-sponsored, and not, say, a grassroots attack armed by a teenager, is a stretch. And for that reason, the question of cyberwarfare is an open one. Until a government actually comes forward and claims responsibility for an attack, it's unprovable.

So as the government spends $100M on cyberdefense over the next six months, it's important to try and answer the question, "What is the military actually defending against?" At the very least, it's fair to say nobody knows for certain.

Wednesday, August 12, 2009

Inbox 3

Teguh writes,

Hi Paul,
could you give some guide to administering logger? i searched thru
google, but found nothing significant. How to(s) and tutorial would be enough i
guess. Does it have to have syslog server for the logger to be able to read data
from?
Thanks..

The documentation for Logger is available from ArcSight's download center. Only registered customers have access, but I assume that if you've got a Logger box, that generally qualifies you.

With regard to your second question, yes, Logger has a syslog server. It actually has a few; in Logger nomenclature these are "receivers." Logger supports UDP and TCP syslog, FTP and SSH file pull, and NFS and CIFS remote filesystems. Logger also supports some ArcSight-specific receivers, including a SmartMessage receiver for events forwarded from ESM and CEF-over-syslog. (OK, ArcSight wouldn't agree that CEF is specific to their products, but despite the C standing for Common, CEF is anything but. At least right now.)
Configuring Logger to act as a syslog server is pretty straightforward:
  1. From the web interface, navigate to Configuration, Event Input/Output.
  2. On the "Receivers" tab, click the Add button.
  3. Name your receiver and set the type as "UDP Receiver," then click Next.
  4. The defaults for Compression Level and Encoding are fine. Select the IP address you want the listener to reside on, and set the port number. The default syslog server port is UDP/514.
  5. Click Save.
  6. On the "Receivers" tab, click the little no-smoking image next to the new receiver to enable it.
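
Once the receiver is enabled, a quick way to sanity-check it is to hand-deliver a syslog message with netcat and then search for it in Logger. The hostname below is a placeholder for your Logger appliance:

$ echo '<134>Dec 28 12:00:00 testhost sanity check via netcat' | nc -u -w1 logger.example.com 514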