Sunday, March 30, 2008

On Firewall Obsolescence

This has been a popular topic the past week. But I think that in both Richard's and William's positions, it's largely a game of semantics being played. The real debate over whether firewalls are still relevant is actually taking place in marketing meetings, not product development meetings, though the two are inextricably linked. I'll tell you why.

Firewalls are not an actual control, or even a technology. They are a concept. Without reposting the chronological history of the firewall here, I'll simply point out that firewalls have been made up of varying combinations of packet filters, state tables, address translators, and proxies. Vendor-driven convergence has further muddied the definition: vendors have added VPN tunneling, content filtering, anti-virus, and IDS/IPS functionality to their products while continuing to call them "firewalls."

So let's do away with the repackaged-converged-for-post-2002-sales firewall and break it down by function. Here's what's not obsolete:
  • Packet filter with a state table - Easy and fast to deploy for outbound Internet traffic. Most modern OSes come with one built in. If you're handling outbound traffic without one and instead proxying everything, you are either:
    • Using SOCKSv4 on AIX 3.3 and your uptime dates back to 1995 ...or...
    • Only allowing SMTP and DNS outbound, which makes you my hero
  • Proxies - Content filtering or inline anti-virus with your "UTM" appliance? Transparent proxy. Web application firewall? Reverse proxy. PIX? They call yours fixups. Check Point? They call yours "Security Servers." FYI - you have proxies.
  • NAT - Unless you're The University of Michigan and have a spreadsheet to keep track of the public Class A's that you own, you need NAT. For that matter, University of Michigan has NAT, too. But they don't have to. You do.
Here's what is obsolete:
  • tcp_wrappers - Replaced by stateful firewall in the kernel.
  • Straight-up packet filtering - Memory is cheap. Your routers all have state tables now.
What Richard and William are really complaining about is that most shipping firewalls don't yet handle security for complicated app-protocol-over-HTTP stuff like SOAP and XMLP, or even the easier stuff that's getting everybody in trouble these days, like SQL injection, XSS, and SEO poisoning. It's this valid, if semantically challenged, line of dialog that is fostering a perceived demand for new firewall capabilities and sending vendors scrambling to position their products in a way that differentiates them from what's out there right now. It seems that every time this happens, somebody has to declare the old stuff dead. It has happened to firewalls before, and to anti-virus and IDS as well.

The thing is, this isn't new functionality; it's new content. Back in 2001 I was writing patterns for Raptor's HTTPD proxy to discard inbound CodeRed exploit attempts against web servers. Since then, I've been writing custom Snort rules to detect or block specific threats, including attacks against web servers. Whether by proxy or by packet payload inspection, we've already seen the underlying concept that will make these new security features possible. The hard part isn't getting the data to the rule set. The hard part is going to be writing rules that prevent injection attacks via XML. This is where security vendors discover that extensible protocols are a real bitch.
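To make "patterns" concrete, here's a toy sketch of the idea. This is not Raptor's actual rule syntax or a real Snort signature, just payload pattern matching reduced to a few lines of Perl, keyed on CodeRed's telltale request to the .ida ISAPI extension:

#!/usr/bin/perl
# Toy payload inspection: read HTTP request lines and flag anything that
# looks like a CodeRed-style hit on the .ida ISAPI extension (a request
# for /default.ida padded with a long run of N or X characters).
use strict;
use warnings;

while (my $req = <STDIN>) {
    if ($req =~ m{^GET\s+/default\.ida\?[NX]{20,}}i) {
        print "DROP: CodeRed-style .ida overflow attempt\n";
    }
    else {
        print "PASS\n";
    }
}

Swap that one pattern for rules that have to understand arbitrary, extensible XML and you can see why the rule writing, not the plumbing, is where the pain will be.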

Wednesday, March 26, 2008

Useless Statistics Returns!

Breaking News: Information Security is still not a science and vendors still suck at statistics.

What? You already knew that? Well, somebody forgot to tell WhiteHat. You'd think they might learn from their competitor's mistakes.

I'll save you 4 of the 5 minutes necessary to read the whole thing by summarizing WhiteHat's press-release-posing-as-a-study for you. They collected vulnerability statistics from an automated scanning tool that they sell (and give away demo use of during their sales cycle). From that, they generated some numbers about what percentage of sites had findings, which types of findings were the most common, and what verticals some of the sites' owners are in. Then they let their marketing folks make silly claims built on wild speculation drawn from inherently flawed data. Anyway, I guess this isn't the first one WhiteHat has put out there. They've been doing it quarterly for a year. But this is the first time I've had a sales guy forward one to me. Can't wait for that follow-up call.

So what's wrong with WhiteHat's "study"? First, they collected data using an automated tool. Anybody who does pen-testing knows that automated tools generate false positives. And based on my experience - which does not include WhiteHat's product, but does include most of the big-name products in the web app scanner space - tests for things like XSS, CSRF, and blind SQL injection are, by their nature, prone to a high rate of false positives. It's no coincidence that XSS and CSRF top the list of vulnerabilities found in their study.

Second, their data is perhaps even more skewed by the fact that they let customers demo their product during the sales cycle. And if you're trying to demonstrate value in pre-sales, you want to show the customer how the product performs when you know there will be results. Enter WebGoat, Hacme Bank, and the like. These are an SE's best friends during customer demos, because there's nothing worse than scanning the client's web app only to come up with no results. It doesn't show off the product's full capabilities, and it pretty much guarantees that the customer won't buy. Of course, what these do to the "study" is artificially drive the number of findings up. Way up.

Finally, and perhaps best of all, when Acunetix did this exact same thing last year, it turned into a giant, embarrassing mess. Mostly for Joel Snyder at NetworkWorld. The real killer for me is that I know that Jeremiah Grossman, WhiteHat's CTO and a smart guy, was around for that whole thing.

Oh, well. Maybe we'll luck out and Joel Snyder will give us a repeat performance as well.

But just like last time, the real loser is the infosec practitioner. This kind of "research" muddies the waters. It lacks rigor, or even a basic data-sampling and normalization methodology. Hell, they don't even bother to acknowledge the skew inherent in their data set. It's not that WhiteHat's number is way off. In fact, I'd say it's probably pretty reasonable. But if they - or if infosec as a professional practice - want to be taken seriously, then they (and we) need to do something more than run a report from their tool for customer=* and hand it to marketing to pass around to the trade press.

Monday, March 24, 2008

Quicky Binary File Visual Analysis

I've been reading Greg Conti's book, Security Data Visualization. If I'm honest, I was looking for new ideas for framing and presenting data to folks outside of security. But that's not really what this book is about. It's a good introduction to visualization as an analysis tool, but there's very little polish or presentation to the graphs themselves.

It's what I wasn't looking for in this book, however, that wound up catching my eye. In Chapter 2, "The Beauty of Binary File Visualization," there's a comparison (on p. 31) that struck me. It's a set of images that are graphical representations of Word document files protected via various password methods. It was clear just by looking at the graphs which methods were thorough and effective and which ones weren't. And it occurred to me that this is an accessible means of evaluating crypto for someone like me who sucks at math. And, hey, I just happen to have some crypto to evaluate. More on that some other time, but here's what I did. It's exceedingly simple.

I wanted to take a binary file of no known format and count how many times each byte value occurs within that file, without regard to where it occurs. I wrote a Perl script to do this:

#!/usr/bin/perl
use strict;
use warnings;

my $file = $ARGV[0] or die("Usage: bytecnt.pl [filename]\n");
open(FILE, '<', $file) or die("Could not open file: $file\n");
binmode(FILE);

# slurp the whole file into memory
my $buffer = "";
my $filesz = -s $file;
read(FILE, $buffer, $filesz);
close(FILE);

# tally how many times each byte value (0-255) occurs
my @bytes = (0) x 256;
$bytes[$_]++ for unpack("C*", $buffer);

for my $i (0 .. 255) {
  print "$i $bytes[$i]\n";
}

This script outputs two columns of data: the first is the byte value (0-255), and the second is the number of times that byte value occurred in the file. The idea is to redirect this output to a text file and then use it to generate some graphs in gnuplot.
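In other words, the workflow is just this (the filenames here are only placeholders):

perl bytecnt.pl some_binary > some_binary.dat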

For my example, I analyzed a copy of nc.exe and also a symmetric-key-encrypted copy of that same file. I generated two files using the above Perl script, one called "bin.dat" and another called "crypt.dat". Then I fired up Cygwin/X and gnuplot and created some graphs using the following settings:

# the X axis range is 0-255 because those are all of
# the possible byte values
set xrange [0:255] noreverse nowriteback
set xlabel "byte value"

# the max Y axis value 1784 is the largest count in the bin.dat file:
# `cut -d' ' -f2 bin.dat | sort -n | uniq | tail -1`
set yrange [0:1784] noreverse nowriteback
set ylabel "count"


I then ran:

plot "bin.dat" using 1:2 with impulses

...which generated this:

[plot of bin.dat: byte-value counts for the unencrypted nc.exe]

I then repeated the process with the other file:

plot "crypt.dat" using 1:2 with impulses

...which generated this:

[plot of crypt.dat: byte-value counts for the encrypted copy of nc.exe]

As you can see, there's a clear difference between the two files in how the byte values are distributed: the unencrypted binary's counts are concentrated in a relative handful of byte values, while the encrypted copy is spread far more evenly across all 256. Using the same xrange/yrange in gnuplot for both plots helps emphasize this visually as well. The expectation would be that weak, or "snake-oil," crypto schemes would look more like the unencrypted binary and less like the PGP-encrypted file.

Thursday, March 20, 2008

Prior Art

OK, I don't have anything resembling a patent claim here, but a year ago I described the problem of targeted phishing attacks against Credit Unions, along with a potential solution. Today, Brandimensions announced StrikePhish, a service offering in that very space.

So instead of the millions of dollars I deserve, I'll settle for StrikePhish buying me a beer at the next con I see them at. ;-)

Wednesday, March 19, 2008

B(uste)D+

Apparently, SlySoft's new release of AnyDVD can strip BD+, the Blu-ray DRM scheme that has gotten some very credible acclaim.

It turns out that one of the folks behind BD+, Nate Lawson, is giving a talk on DRM at RSA next month. I'm interested to see what Nate has to say about BD+ being broken, so I asked him. :-)

Cool Firefox / JavaScript Trick

Here's an easy trick for deobfuscating JavaScript within Firefox. Via BornGeek and Offensive Computing.

  1. Launch Firefox and browse to 'about:config'
  2. Create a new boolean config preference named browser.dom.window.dump.enabled and set it to 'true'
  3. Close Firefox. Now run "firefox.exe -console". A console window will open along with the browser window.
  4. Edit the file containing the obfuscated JavaScript and replace "document.write" with "dump"
  5. Open the file in Firefox.
    1. Disable NoScript, Firebug, or other scripting add-ons and reload the file if necessary.
  6. Switch to the console window that opened in step 3. Now you can read the deobfuscated code.
You can do this with Rhino or SpiderMonkey, so it's nothing new. But the setup is really simple and easy to use, so if you've been avoiding the other tools available because they're hard to use, this may be what you've been waiting for.
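
If hand-editing the file in step 4 gets old, a quick Perl one-liner does the same substitution (the filenames here are just placeholders):

perl -pe 's/document\.write/dump/g' evil.js > evil-dump.js

Then point Firefox at the rewritten copy instead of the original.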

Sunday, March 16, 2008

My Not-So-Secret Glee

When I heard that the only remaining semi-above-board sploit broker was calling it quits, I couldn't help but smile. We still have 3Com and iDefense buying exploits outright. For now. But to see that the "0Day eBay" model is failing for reasons beyond a sudden lack of staff, well, that is good news.

I wish Adriel, Simon, and the rest of the folks at Netragard / SNOSoft no ill will whatsoever. I hope their business continues to prosper and that they continue to be positive, active members of the infosec community. That said, I've mentioned my stance on the buying and selling of software vulnerabilities before. There are very real ethical issues here. And, more importantly, there are very real security implications for corporations and end users, who seem to have no representation in the discussion about those ethics.

Thursday, March 13, 2008

Abstract

I've asked to be considered as a presenter at this year's ArcSight User Conference. Today I sent my abstract over, and I will hopefully be on the agenda to talk about ArcSight Tools.

The incident handlers at [my company] use ArcSight Tools in their investigations as a way to quickly and easily collect additional intelligence from existing data stores in their environment. Come see how, with very little custom code, they have harnessed existing applications and services to quickly gather in-depth information about servers, users, workstations, and external hosts during an investigation. In addition to seeing how [my company] has leveraged ArcSight Tools, learn some of the simple tricks that will help you go back to your office and do the same.

I've blogged about Tools before, and this presentation aims to expand on that concept. The truth is, there's a lot of great data in your environment that isn't in a log flow somewhere. And while it may not belong in your SIM, you want it at your fingertips when investigating a potential incident. It's good to have answers to questions like "What does this server do?", "Is this user a local admin?", or "What is this person's boss's phone number?" close at hand.
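
To give a flavor of what "very little custom code" means, a Tool can be as simple as a hypothetical script like this one (not something we actually run, just an illustration) that takes the IP address you clicked on in the console and starts answering those questions for you:

#!/usr/bin/perl
# Hypothetical ArcSight Tool target: hand it an IP address selected in the
# console and it pulls back a couple of quick answers about the host.
use strict;
use warnings;
use Socket;

my $ip = shift or die "Usage: whatis.pl <ip-address>\n";
my $packed = inet_aton($ip) or die "Not an IP address: $ip\n";

# Reverse DNS is often enough to tell you whose box you're looking at.
my $name = gethostbyaddr($packed, AF_INET) || "(no PTR record)";
print "$ip resolves to $name\n";

# Chain in whatever else you already have lying around (whois, an asset
# database query, a directory lookup for the owner's boss, and so on).
system("whois", $ip);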

Anyway, I hope to have more to say about ArcSight Tools soon.