Posted: 25 Jul 2008 11:33 AM CDT
A couple of interesting tools seem to have been released recently. ManTech Memory DD captures a record of physical (random access) memory, which is lost when the computer is shut down. Released at no charge under the GPL license for government and private use, ManTech's Memory DD (MDD) can acquire memory images from the following Microsoft® products: Windows® 2000, Windows Server 2003, Windows XP®, Windows Vista®, and Windows Server 2008.
The main difference between ManTech's tool and win32dd is that win32dd is mainly a kernel-mode application: it avoids user-land APIs for writing to the output file, and everything is done with native functions. That means a faster dump, which is not negligible when you have a million pages to dump in one go.
Posted: 25 Jul 2008 05:26 AM CDT
There has been a lot of hype about this one, but this flaw is a real threat and the working exploits are now available in the wild. To top that, they have already been ported into Metasploit! I hope all the major ISPs are in a patching frenzy right now and not thinking to themselves that there [...]
Read the full post at darknet.org.uk
Posted: 25 Jul 2008 12:04 AM CDT
Is it just me, or is this whole DNS disclosure debate bringing out the self-righteous pricks out in force? Sheesh. The people needed a patch. The people got a patch. If people didn’t patch, they are stupid. If people didn’t patch because they didn’t like how Dan handled this, they are stupid. If people didn’t patch because they didn’t know all the details about the flaw, they are stupid.
If you are not an operator and you feel like Dan should have kept his mouth shut about the issue until EVERY SINGLE FRIGGIN’ software manufacturer and EVERY SINGLE FRIGGIN’ home grown version of DNS and EVERY SINGLE FRIGGIN IDS/IPS vendor had a patch (read comments here), then, well, you’re not stupid, but you are unrealistic.
Posted: 24 Jul 2008 09:46 PM CDT
Several weeks ago, in "A Question of Ethics", I asked EC readers whether it would be ethical "to deliberately seek out files containing PII as made available via P2P networks". I had recently read an academic research paper that did just that, and was left conflicted. Part of me wondered whether a review board would pass such a research proposal, or whether the research in the paper was even submitted for review. Another part told me that the information was made publicly available, so my hand-wringing was unwarranted. In the back of my mind, I knew that as information security researchers increasingly used the methods of the social sciences and psychology these ethical considerations would trouble me again.
Through Chris Soghoian's blog post regarding the ethical and legal perils possibly facing the authors of a paper which describes how they monitored Tor traffic, I realized I was not alone. Indeed, in a brief but cogent paper, Simson Garfinkel describes how even seemingly uncontroversial research activities, such as doing a content analysis on the SPAM one has received, could run afoul of existing human research subject review guidelines.
Garfinkel argues that strict application of rules governing research involving human subjects can provide researchers with incentives to actively work against the desired effect of the reviews. He further suggests that...
My concern at the moment is with the other side of this. I just read a paper which examined the risks of using various package managers. An intrinsic element of the research behind this paper was setting up a mirror for popular packages under false pretenses. I don't know if this paper was reviewed by an IRB, and I certainly don't have the expertise needed to say whether it should have been allowed to move forward if it was. However, the fact that deception was used made me uneasy. Maybe that's just me, but maybe there are nuances that such research is beginning to expose and that we as an emergent discipline should strive to stay on top of.
Posted: 24 Jul 2008 05:09 PM CDT
Likely the single most controversial part of the Mirage NAC product involves its use of ARP. Stiennon referred to "ARP twiddling" as our means of quarantine during our recent, er, discussion on the usefulness of NAC. As tempting as it was to respond to the comment in the thread, I thought perhaps it was better to leave it for its own post. So, what's up with our use of ARP? Why do we do it this way and why haven't we moved more quickly to one of the more common means of quarantining endpoints? Here's the lowdown that before has been kept more on the down-low.
More than just quarantine.
One of the first discussions we (at least I) end up having with prospects surrounds the notion of device state. The knowledge of network entry and exit forms the bookends of the NAC process. Any NAC solution must have this basic notion in order to provide the necessary governance. To be sure, many options exist. Some solutions take alerts from the switching infrastructure (SNMP traps, Syslog messages, whatever) based upon either the link state of the port or the population of the bridging tables. Others may have a RADIUS hook, on the assumption that any device that's connecting is authenticating in one way or another. Still others may have a DHCP hook and use address assignment as the state trigger. Finally, some of the inline solutions simply wait to see traffic flow through them. There are three basic reasons we like ARP for this: First, and most importantly, it's immediate, since it's part of the initial stack initialization. Second, it's independent of the switching infrastructure, in that it works the same way regardless of downstream switches, whether the switches can send traps, and so on. Third, it's independent of endpoint characteristics, such as OS type and whether the address is statically or dynamically assigned.
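To make the "ARP as entry signal" idea concrete, here is a minimal sketch (my own illustration, not Mirage's implementation) of parsing a raw Ethernet frame and flagging a gratuitous ARP, the kind of packet a host typically emits during stack initialization when it joins the network:

```python
import struct

def parse_arp(frame: bytes):
    """Parse a raw Ethernet frame; return ARP fields or None if not ARP."""
    if len(frame) < 42 or frame[12:14] != b'\x08\x06':  # EtherType 0x0806 = ARP
        return None
    op, = struct.unpack('!H', frame[20:22])  # 1 = request, 2 = reply
    return {
        'op': 'request' if op == 1 else 'reply',
        'sender_mac': frame[22:28].hex(':'),
        'sender_ip': '.'.join(str(b) for b in frame[28:32]),
        'target_ip': '.'.join(str(b) for b in frame[38:42]),
    }

def is_entry_signal(arp) -> bool:
    """A gratuitous ARP (sender asking about its own IP) typically marks a
    host joining the network -- the 'network entry' bookend of NAC."""
    return arp is not None and arp['sender_ip'] == arp['target_ip']
```

Because this works on raw frames, it needs nothing from the switches and nothing from the endpoint beyond a normal stack, which is the independence argument above.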
But also quarantine
Yes, we use ARP for quarantine. The marketing side of the house prefers "ARP Management" to "ARP Poisoning" or "ARP Twiddling." I don't especially care. The best way to enumerate why we do it this way is to back up and review, at a slightly higher level, what we think quarantining should be and mean:
Quarantining should be fast
This almost seems like it could go without saying. Whether effected for the purposes of device-specific remediation or more generalized network protection, time is both money and risk. A quarantining method that takes, for example, seconds to put in place is, at least to us, a non-starter.
Quarantining should be holistic
One of the fundamental disagreements I have with Stiennon's notion of network security (rant alert) is that packet (as opposed to endpoint connection) dropping is sufficient as a mitigation method. With a threat blended into, say, a bot (with an https control channel), a file-sharing based worm, a keystroke logger that may be logging data but not sending it, and a spam relay, the very notion of separating "good" traffic from "bad" traffic is silly. Take even one of the highly outdated (and quaint, by today's standards) threats like Blaster and Welchia. The "bad traffic" in those cases was Windows networking traffic, whose primary usage was *inside* the infrastructure. So then a "good traffic/bad traffic" policy removes Windows networking functions from the connection, which removes access to both file shares and (assuming Exchange) corporate email. Now then, how useful, really, is that endpoint's connection? Well, they can get their stock quotes. And they can get to internal portal-based applications. Boy, there's a great idea. "I see, Mr/Ms user, that you're infected with malware; welcome to my Oracle Financials application." How does that idea pass muster with anyone?
Quarantining should be full-cycle
This may or may not be a point of debate, but we continue to believe that a quarantine mechanism that is only available at admission time is wholly inadequate. This is the "main" thing keeping us from leveraging 802.1x as a quarantine mechanism (the lack of this capability in 802.1x environments has been a rant of mine before, since the RFC that would allow for this is 5 years old). As much as I like the idea of enforcement at the point of access, I simply do not see how it's workable unless and until it's possible to revoke previously granted access based on policy.
Quarantining should be transparent
I put this last since I think it's one with the largest amount of wiggle room. The thing I always liked the least about SNMP and CLI based VLAN moving is that there remains the need to get the endpoint to go request a new address as a result of the VLAN change. Similarly, the best that DHCP can offer is to play around with the timing elements, but that's not the same as the ability to do quarantine on-demand per policy. Not to beat a dead horse or anything, but RFC 3576 would get us there if the switch vendors would just implement it. Did I mention that the RFC is 5 years old?
So, there you have it. We use ARP for state because it's fast, robust and hard to bypass. We use ARP for quarantine because, quite frankly, we've yet to see any other quarantine mechanism that can fit the bill.
I almost forgot. The main knock against our "ARP twiddling" approach, beyond just the philosophical objection, is that it's easy to bypass. Is it? Some methods may be. Ours is not. Not from our own internal testing, and not from installations spanning 550 customers across 38 countries. And, yes, we thought of the static cache-entry trick already.
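For readers unfamiliar with the mechanism being debated, here is a generic sketch of what ARP-based quarantine boils down to at the packet level (this is the textbook technique, assembled by hand for illustration; it is not a description of Mirage's actual product internals):

```python
import struct

def build_arp_reply(sender_mac: bytes, sender_ip: bytes,
                    target_mac: bytes, target_ip: bytes) -> bytes:
    """Build an unsolicited ARP reply claiming sender_ip lives at sender_mac.
    Delivered to target_mac, it rewrites the target's ARP cache entry, which
    is the generic mechanism behind ARP-based quarantine: point the quarantined
    host's gateway entry at an enforcement device instead of the real router."""
    eth = target_mac + sender_mac + b'\x08\x06'            # Ethernet II, EtherType ARP
    arp = struct.pack('!HHBBH', 1, 0x0800, 6, 4, 2)        # Ethernet/IPv4, opcode 2 = reply
    arp += sender_mac + sender_ip + target_mac + target_ip
    return eth + arp
```

Whether a given product's variant resists the static cache-entry trick depends on details beyond this sketch, which is exactly the point the post is making.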
Posted: 24 Jul 2008 04:42 PM CDT
Yesterday’s well hyped NAC debate over at Network World certainly received some attention. I was only able to check in between meetings but they posted an entire transcript and it makes interesting reading.
In one corner, Joel Snyder, well respected NAC expert and Interop regular. I’m partial towards Joel because he has been a voice of reason in the networking space for many years, and his work on NAC and NAP is second to none.
In the other corner, Richard Stiennon, self-titled ‘Security Industry Innovator’ who regularly exclaims ‘NAC is dead’ to anyone who reads his column at Network World. I don’t believe I’ve ever met Richard, but he was previously at Gartner (where he exclaimed IDS was dead), worked at Fortinet and Webroot for a brief period, and recently joined an Australian startup doing MSS. I was doing MSS in Australia in 1995 as it happens, so I guess I must be a security innovator as well.
The debate itself got off to an early start with an argument over the definition of NAC. Richard was pretty obtuse and I think Joel did well to stay on topic. Ultimately a lot of what Joel said struck a chord with me, for example:
Every NAC deployment I’ve looked at, and everyone I’ve heard about, has a surprise factor…The surprise is how UNcompliant PCs are with the host AV. Talk to the most Microsoft-savvy IT departments in the world. They’ll tell you they were astonished at how low their compliance level was.
This is exactly what we’ve seen in the field at Napera. We regularly see the surprise factor within a few minutes of plugging in an N24. Whether it’s the endpoint software that IT purchased but nobody is actually running, the PCs that are months behind on patches, or the devices on the network the IT admin had no idea were even there, the ability to see and then manage this situation is what our customers are passionate about. A customer I spoke with yesterday in the health care field said it was like having a microscope on his network for the first time.
Other folks have weighed in on the debate. Alan Shimel claimed a KO by Joel in the first round. Alan, this is one of those times where I can do nothing but agree. The outcome of the debate is clear and even Richard’s former colleagues at Gartner agree there is a healthy NAC market - Joel gets it, and Richard doesn’t. Thanks to Network World for staging the debate!
Posted: 24 Jul 2008 04:29 PM CDT
Posted by Cody Pierce
I would say that besides the navigation keys (Esc, Enter, Ctrl-Enter, Arrows), the sequence I use most often is X / Ctrl-X. That's right: cross references. Okay, maybe I use others just as much, but for today's MindshaRE we will be discussing cross references in IDA (I wanted to add some impact to the topic). I will briefly cover what they are, the different types of references, and share some scripts utilizing xrefs that will hopefully make your day easier.
MindshaRE is our weekly look at some simple reverse engineering tips and tricks. The goal is to keep things small and discuss every day aspects of reversing. You can view previous entries here by going through our blog history.
Cross references in IDA are invaluable. They show any code, or data, which reference or are referenced from your current position within the binary. This can be in the form of function references, local variable cross references, or data xrefs. When navigating a binary one of the most common uses is function cross references. We need the ability to see what other pieces of code may hit the one we are interested in. This is exactly what xrefs are for.
Let's say we want to see all functions that call the following routine:
.text:76F2CA90 Dns_GetRandomXid proc near ; CODE XREF: Dns_BuildPacket+79
.text:76F2CA90 ; Dns_NegotiateTkeyWithServer+20C
.text:76F2CA90 call Dns_RandGenerateWord
.text:76F2CA95 jmp loc_76F23D8A
.text:76F2CA95 Dns_GetRandomXid endp
Pressing Ctrl-X (JumpXref) when the cursor is on the function name, we get the following dialog listing the cross references.
Note that references are also visible as automatic comments under the function name. This is useful for enumerating xrefs at a glance. The number of references shown is configurable through the SHOW_XREFS variable in IDA.CFG.
Looking at the dialog box we see four columns: Direction, Type, Address, and Text. The first column denotes which direction in the binary the caller is located: Up means before the current function, at a lower address; Down means after the function, at a higher address. Type indicates what type of cross reference we are looking at. In our example all the references are of type "p", meaning procedure; later we will see that others exist as well, such as write, read, and offset. Address is the location of the cross reference. In this instance it is the location of the call to our current function; since we have symbols it is shown as an offset from the symbol, but without symbols it would be a typical hex address. Text is the text that appears at the referencing address. For this example we see a typical call; others may show the instructions reading or writing to our cross reference.
Having symbols helps us easily identify the purpose of a function. Cross references help us identify the path code takes to a function. As an example I created an IDA Python script called get_recursive_xrefs.py that takes matters one step further. This script will take a function, and recursively grab cross references to the function. This gives us a calling tree as far back as possible to the current function. Running the script produces the following sample output:
Getting xrefs to 76f39c9f (Dns_RandGenerateWord)
DoMultiMasterUpdate
As you can see, we get a nice list of calling functions. In an instant I can find what might create transaction IDs in Windows.
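The traversal logic behind a script like get_recursive_xrefs.py can be sketched in a few lines. Here the IDA-specific lookup is abstracted into a callback so the sketch runs standalone; inside IDA, `xrefs_to` would wrap something like `idautils.XrefsTo()` (the function and graph names below are illustrative):

```python
def recursive_callers(target, xrefs_to, _seen=None):
    """Walk cross-references backwards from `target`, collecting every
    function on any call path into it. `xrefs_to(f)` returns the direct
    callers of f; in IDA it would wrap idautils.XrefsTo()."""
    if _seen is None:
        _seen = set()
    for caller in xrefs_to(target):
        if caller not in _seen:
            _seen.add(caller)
            recursive_callers(caller, xrefs_to, _seen)  # recurse up the call tree
    return _seen
```

The `_seen` set doubles as cycle protection, which matters in real binaries where mutually recursive functions would otherwise loop forever.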
I mentioned earlier that functions are certainly not the only thing you can cross reference. When in a function we can also reference local or global variables in an operand (JumpOpXref). Let's look at the next section of assembly.
.text:76F39CCC inc edi
.text:76F39CCD push edi
.text:76F39CCE push offset aMicrosoftStron ; "Microsoft Strong Crypto"...
.text:76F39CD3 push ebx
.text:76F39CD4 lea eax, [ebp+var_4]
.text:76F39CD7 push eax
.text:76F39CD8 call ds:_imp__CryptAcquireContextA
.text:76F39CDE test eax, eax
.text:76F39CE0 jnz short loc_76F39CE5
Putting our cursor on the local name "var_4" at address 76F39CD4 and pressing "X" gives us a familiar dialog box.
We discussed all of the columns already; the only new information is the "w" and "r" types. I alluded to this earlier, but it simply means the instruction either reads from or writes to the target variable. This can be extremely helpful in identifying when a variable is initialized.
Let's move to a different section of the binary. Looking through the .data section of most binaries can be interesting. We obviously see lots of global addresses that are used to hold values, and knowing the xref shortcuts (Ctrl-X, X) gets us the references to those locations. But let's look at another common occurrence in the data section. Vtables are accessed via the data section in most cases. The problem is that the only xref from the data section will be at the beginning of the vtable. However, if we do an xref on each function in the binary, we can determine which functions are called from the data section. For instance, the xrefs to the function MxWireRead look like this.
Following those shows us an obvious table of handlers.
.data:0104E300 RRWireReadTable ; DATA XREF: Wire_CreateRecordFromWire+43
.data:0104E300 dd offset CopyWireRead
.data:0104E304 dd offset AWireRead
.data:0104E308 dd offset PtrWireRead
.data:0104E30C dd offset PtrWireRead
.data:0104E310 dd offset PtrWireRead
.data:0104E314 dd offset PtrWireRead
.data:0104E318 dd offset SoaWireRead
.data:0104E31C dd offset PtrWireRead
.data:0104E320 dd offset PtrWireRead
.data:0104E324 dd offset PtrWireRead
.data:0104E328 dd offset CopyWireRead
.data:0104E32C dd offset CopyWireRead
.data:0104E330 dd offset PtrWireRead
.data:0104E334 dd offset CopyWireRead
.data:0104E338 dd offset MinfoWireRead
.data:0104E33C dd offset MxWireRead
...
This is clearly a tedious process, perfect for automating with a script. So I wrote an IDA Python script called find_data_section_functions.py which runs through every function's xrefs, keeping those that originate from the data section and producing the following example output:
0x0103606d: zoneTransferSendThread
Finally, there are three additional graphs IDA provides for viewing cross references. These graphically display the same cross references we covered, and can even display a down graph showing all the functions your current function may call. They can be found under Views->Graphs->Xrefs to/Xrefs from/User xrefs chart. While these may be handy, they suffer from two problems. First and foremost, you can't navigate them: you may spot an interesting function, but you have to switch back to IDA and manually type in the address to jump there. Second, they can be unwieldy, often showing thousands of functions. This can be limited by using the User xrefs chart and capping the recursion depth, but I would rather just run a script I can interact with. Play around with the graphs; you may find them very helpful.
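The scan that find_data_section_functions.py performs can be sketched standalone like this, with the IDA lookups again abstracted into a callback (inside IDA, `xrefs_to` would wrap `idautils.XrefsTo()` and the segment bounds would come from the .data segment; all addresses below are made up for illustration):

```python
def functions_called_from_data(functions, xrefs_to, data_start, data_end):
    """Return the functions that have at least one cross-reference
    originating inside the .data segment: a cheap way to spot
    vtable/handler-table entries like RRWireReadTable's."""
    hits = {}
    for name, ea in functions.items():
        for src in xrefs_to(ea):
            if data_start <= src < data_end:   # xref source lies in .data
                hits[name] = src
                break
    return hits
```

Running something like this over a whole binary turns the tedious per-function check into a single pass.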
There are many, many other uses for cross references; I can't possibly cover everything in this little post. I hope this has been a good intro to them, or maybe sparked some ideas of your own. Leave a comment if you have any novel uses or other useful hints. I hope you enjoyed this week's MindshaRE.
Posted: 24 Jul 2008 04:22 PM CDT
I like to talk about innovative products, and Xobni, the plugin for Outlook, definitely fits the bill. I blogged about Xobni on my NWW blog back in February and, as you can tell from that post, I was and still am excited about Xobni. Unlike most things that get installed on my computer only to be removed a few days or weeks later, the "coolness" of Xobni hasn't worn off. More importantly, the usefulness of Xobni causes me to have it stick around and take up real estate in my Outlook window. But Xobni isn't perfect, either. I see some real challenges to truly gaining the benefits it could bring to email, but we'll talk about that in a moment.
Here's a video tour of Xobni. Also check out my podcast interview with Matt Brezina, co-founder of Xobni. I'm starting to do more product reviews and strategy work as part of my Converging Network business, which is a pleasure since I enjoy working with and assessing new products and trends anyway.
(Contact me if you are interested in finding out more about my Converging Network product strategy services.)
Xobni - The Movie
Xobni - Email's New Connection To People
Now that Xobni integrates with LinkedIn, I find that I use it a lot more. It's actually the little features in Xobni I like most. Showing someone's portrait loaded up from LinkedIn when I click on their email makes the connection to that person even more real. It makes email just a little more personal. And, if I don't know them well, it's easy to go learn about the person from their LinkedIn profile. (You have a LinkedIn profile with a picture uploaded, don't you? Here's mine. Let's connect!)
One of the most useful things about Xobni is knowing the email habits of the people I converse and work with regularly. The little bar chart showing the distribution time of emails received from them throughout the day lets me know when they are more likely to read the emails I send, or take my call. This could also be invaluable to a sales person looking to reach clients, though I'm not sure people these days answer phone calls from people they don't know. (Sales people tell me virtually no one answers their business phone much any more.)
Xobni - Changing How You Use Email
It's rare for me to keep a gadget or plugin around for long. Their installed half-life is usually about 2 days, or no more than two weeks on my computers. So you know Xobni must be delivering something of value, especially given the screen real estate it takes in Outlook.
Changing how you use email is a double-edged sword, as I'll talk more about in a moment. I find the attachments ("Files Exchanged") section of the Xobni plug-in one of its most useful functional features. It can prevent a lot of searching for the right email with the right attachment, and you can dig in deeper if you want to see the email or email thread the attachment was a part of.
I haven't found that I use the "XYZ's Network" section (where it shows you other people who have been in conversations with you and this person) as much as I thought I would. It's a great idea, but I just haven't added that capability into my email use thought patterns for some reason. The "Email Conversations" thread is also something that I don't use much, mostly because I don't find the way the threads are presented as being that useful. I'll say some more about this down below.
Xobni - Kudos For Being A Well Behaved Outlook Plugin
My first rule for all plugins is "be useful". I really don't need an Adobe Acrobat plugin for Outlook or PowerPoint; I use the print driver to create PDF files. Same for screen captures: that's why I have SnagIt. So, unless there's a really good reason why a plugin is needed, don't create it in the first place, and certainly don't install it by default. Xobni definitely meets the "be useful" criterion.
The second rule is "don't create other problems". How many times has your Outlook crashed because of some funky plugin or software incompatibility? It seems virtually guaranteed that if any software other than Outlook touches your PST and OST files, you're doomed to the dreaded "Not Responding" message. I have to say that I've had relatively few problems with Xobni and Outlook. Not that it's never happened, as I have encountered a few situations where Xobni had open the files Outlook needs in order to start properly. But the problems and crashes have been very, very few.
Kudos to the Xobni team for figuring out how to do this. They should bottle up whatever they are doing and help all the other software guys figure out how to do the same.
Xobni - The Challenge Of Getting The Benefits
Xobni has two big challenges in my view. First, all of Xobni's capabilities are constrained by being in an Outlook sidebar plugin. There's limited screen real estate, and it's mostly vertical. Networks of people (lists), conversations (lists), viewing email threads, all have to be viewed in this small area and it does detract from its usability and usefulness. Because of this, I don't use the email threads feature much at all, and the relatively static content (time distribution bar graph, email stats, portrait and contact info) are the things I look at and use most. It's a tough row to hoe being in a sidebar and Xobni would be much more useful if it was integrated into the email client itself. Tell me again why Microsoft hasn't gobbled up Xobni by now? Hmm.
Xobni also requires multiple user behavior changes to access its benefits. We use email clients so frequently every day, all through the day, that the use-case habits we've formed with Outlook are very hard to break. Instead of sorting back and forth between sender and sent date in order to locate what I'm looking for, I have to break that habit and look in the Xobni sidebar for what I'm hunting. You have to remember "oh, there's another way to find the last version of that attachment sent to Bob", and go over and use Xobni to do that. On the flip side, being an Outlook sidebar plugin is an advantage over being a separate application from Outlook altogether.
Breaking patterns and habit changes are something every product faces to varying degrees, but email's so heavily used that those habits are more difficult to break.
Xobni - Conclusion: Download It. You'll Use It.
Download Xobni. I think that title pretty much sums it up.
Posted: 24 Jul 2008 03:44 PM CDT
I had some time last night while the kids were asleep to record a podcast. I have been wanting to get into it again, but I just haven’t dedicated the time to it. But I finally did it. I explain a bit of what I am planning for this podcast (product manufacturer interviews, etc.). Not sure yet what all I want to do, but I would like to put some time into it. We’ll see what happens.
Mostly I talk about the DNS flaw and the issues with disclosure that have been brought out again. It has been a while since those issues have been on the forefront, so it is interesting. Really it is me rambling, but that is what I do best.
Music is from .22 and Wendy Wall.
Posted: 24 Jul 2008 03:16 PM CDT
The vulnerability is real, and the risk is high. Patch your stuff.
I'm currently sitting in on the Dan Kaminsky Black Hat webinar. There were not a whole lot of interesting technical details revealed that aren't already public. The majority of the discussion was begging and pleading people to implement the patches for this problem.
Dan made a point of stating that the leak of the vulnerability details is not an issue at this point, instead focusing the discussion, rightfully so, on getting the world to patch.
Some notable quotes from the webinar are inline below:
- "At least two exploits packs have been released in the last 24 hours"
- "86% down to 52% vulnerable targets thanks to our group's disclosure effort"
- "Where do we go from here? Oh there's going to be an awesome debate on that!"
And my favorite...
- "It's in Metasploit now, it's going to destroy us!" - Dan Kaminsky
What's my take on it?
One thing the entire debacle reinforces is that responsible disclosure does work (to a degree). The major issue with the process as executed was that too much self-promotion, by many different hands, was involved, causing other researchers to jump all over it and eventually leak the details to the world. The circle was made too big, with no accountability for people who didn't keep things secret. When money is involved, nothing will be kept secret. All a researcher can do is his/her best to get things secure before releasing the details of the vulnerability to the general public. Dan did what he could, and I applaud him for the good faith effort that he made.
Would it have been safer to just have Dan K suck it up and let people think he was full of crap instead of bringing in a trusted circle of researchers to confirm his findings? Possibly.
Would people have patched without having additional third party independent researchers confirm Dan's findings? Possibly Not.
Would full disclosure have made the Internet more secure at a faster rate? Absolutely not.
In a future blog post we'll debate the validity of weaponizing this vulnerability within days of disclosure. Was this good, bad, or indifferent? Criminal? Good for the world? What are your thoughts?
Posted: 24 Jul 2008 02:33 PM CDT
It turns out I'm going to DefCon after all. It will be my first DefCon and my first trip to Las Vegas. As a matter of fact, it will be my first trip to Pacific Time ;-) I'll get in Thursday evening late (midnight) and I will leave again Sunday evening (9pm-ish). I am also looking forward to meeting some of the people I've been talking to online for a long time now! T minus two weeks and counting.
Posted: 24 Jul 2008 01:16 PM CDT
Vox Libertas has taken an approach that I can appreciate. On the one hand, many people are unhappy with the telecom immunity. I'm one of them. But people I respect are also saying that it's a good compromise, and compromise means you don't get everything you want.
Vox Libertas goes to the trouble of (shock, horror) reading the primary sources and explaining what's in the new FISA bill. He also shows his own sources.
No matter what you think, this is worth reading.
Posted: 24 Jul 2008 05:29 AM CDT
MoocherHunter™ is a mobile tracking software tool for the real-time, on-the-fly geo-location of wireless moochers and hackers. It’s included as part of the OSWA Assistant LiveCD we mentioned quite recently. I wanted to mention this tool separately as I think it’s very cool! MoocherHunter™ identifies the location of an...
Read the full post at darknet.org.uk
Posted: 24 Jul 2008 12:49 AM CDT
When Dan Kaminsky released a cryptic announcement that one of the core technologies tying the Internet together (DNS, the Domain Name System) was vulnerable to a critical weakness, it gained the attention of many people, especially given that many of the vendors who create the vulnerable software had come together to address the problem and that Kaminsky was going to delay the release of information until early August, at the Las Vegas Black Hat conference.
Despite the secrecy about the details of the vulnerability, the outcome was predictable: if you don't want anyone else to work out your discovery, don't tell anyone you've found something. The lack of openness about the issue led many to start speculating, and eventually Halvar Flake hit upon the correct answer. Once Kaminsky himself challenged others to look into the security of DNS and at what might have been missed, the outcome was almost guaranteed. Indeed, since the vulnerability was correctly speculated upon, exploit code has been publicly released through a number of websites and mailing lists.
Since the vulnerability was guessed correctly, the general response has been one of panic. Those who have read and understood the technical details have largely been left scratching their heads: there's not really anything new there. All it demonstrates is a corner case of a previously known issue. Certainly the issue is one that should have been fixed properly the first time, but for whatever reason it wasn't.
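To see why that corner case still matters in practice, here is a back-of-the-envelope model of the attacker's odds (the numbers are purely illustrative; the only fact taken from the issue itself is the 16-bit transaction ID space, and the model deliberately ignores timing and port randomization):

```python
def poison_probability(spoofed_per_race: int, races: int,
                       id_space: int = 65536) -> float:
    """Chance that at least one forged DNS response matches the resolver's
    16-bit transaction ID, across `races` poisoning attempts with
    `spoofed_per_race` forged packets sent per attempt."""
    p_race = spoofed_per_race / id_space        # one race's success chance
    return 1.0 - (1.0 - p_race) ** races        # at least one success overall
```

With, say, 100 forged packets per race and a thousand races (each race only takes milliseconds when the attacker can trigger fresh queries), the attacker's overall success probability climbs well past a coin flip, which is why "just a corner case" still translated into a practical attack.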
What is more interesting is the vitriol that has emerged as people realise the information is out there. Some of the most serious claims have been levelled against the team at Matasano Chargen for being the ones who actually spilled the beans, as Halvar Flake had only speculated about the details. The pulled post at Matasano Chargen did more to make people sit up and take notice than leaving it in place would have, and the fact that they had declared themselves part of the trusted few who had the details confirmed by Dan Kaminsky only further validated, for many people, what had been posted.
Part of the problem is once data has been published on the Internet it is awfully hard to completely retract it, even if it has only been there for a couple of hours in total. As the retracted post at Matasano Chargen promised technical details on the vulnerability it was quickly snapped up by the lucky few who were able to see it and then reproduced on numerous other sites.
Information Security has egg on its face over this issue. It shows how immature the industry can be and how poor many people's skills are at managing release and coordination of information. To his credit Dan Kaminsky did find something that hadn't been fixed. Whether that is an old problem or not is irrelevant for the time being, as it affected a significant portion of the Internet's DNS servers and required a coordinated effort by vendors to do something about it.
The whole incident has left a sour taste in many mouths.
Is Black Hat or DefCon the place to release everything about a vulnerability? After the debacle surrounding David Maynor and Jon Ellch's Black Hat OS X wireless vulnerability demonstration in 2006, perhaps people looking to release sensitive vulnerability information with some flair should reconsider the pre-release media blitz. It runs the very high risk of turning what might be a valid issue into a circus and leaving all involved worse off for the experience.
Richard Bejtlich suggests that the incident might have been better handled if initial and full disclosure was handled by an impartial third party and the conference used for post-disclosure discussion and the details of how the vulnerability was found. The problem is then finding what can be regarded as an impartial third party.
The open discussion that followed the initial announcement turned up a more serious problem, one that will continue to affect users long after most systems have been updated to address the vulnerability. NAT, a very common technology that allows multiple systems to sit behind a single network connection, wasn't considered in the vulnerability equation. It was soon realised that the protection method - source port randomisation - breaks down when network traffic passes through most NAT devices, because they rewrite the source ports and undo the randomisation, with the result of zero protection against the vulnerability.
The whole idea of responsible disclosure, most famously set out by Rain Forest Puppy, has broken down in this case. Those who were not briefed on the details of the vulnerability felt that security by obscurity was the game plan, and watching how the incident played out in the media - and how those in the know were (mis)managing the information - reinforced this idea for them. Those who did know the details saw the withholding of information as a necessary step to prevent widespread attack before updated systems could be put in place. The problem was that this left everyone else guessing at the severity of the vulnerability, or having to trust claims made by people who weren't releasing enough information to back them up.
The problem with the approach taken was that the carrot being dangled was too tempting for everyone to leave alone until Black Hat. With the way a number of people in the know were talking, it sounded like the world was about to end; when the vulnerability was finally released, it didn't seem to make a lot of sense - surely it wasn't as simple as that.
Within the structure of a DNS response it is possible for amplifying data to be returned about a domain so that subsequent requests to that domain or subdomains can be made more efficiently, either by identifying the correct authoritative server to query or by supplying the data direct to the requesting system so that it doesn't need to poll the server.
It is this particular feature which is the key to the whole discovery made by Dan Kaminsky. While it should not be possible (poor implementation of the specification aside) for this amplifying data to change the details of other domain entries, it is possible for the amplifying data to change the details for parent domains. This means that a poisoned response for poisoned.example.com can change the details for example.com.
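The parent-override behaviour described above can be sketched with a toy simulation. Everything here is invented for illustration (the `ResolverCache` class, the attacker's glue value, the querying pattern); real resolvers and the published exploits are considerably more involved, but the core race is the same: force fresh lookups for throwaway subdomains and blindly guess the 16-bit transaction ID.

```python
import random

TXID_BITS = 16  # classic DNS transaction ID space: 65,536 values

class ResolverCache:
    """Minimal stand-in for a resolver's cache of nameserver records."""
    def __init__(self):
        self.ns = {}  # domain -> nameserver IP

    def accept(self, domain, txid_sent, txid_recv, glue):
        # A forged answer is accepted only if its transaction ID matches
        # the one the resolver used in its outgoing query.
        if txid_sent == txid_recv:
            # The "amplifying" (glue) data in the response is allowed to
            # update the entry for the PARENT domain -- the crux of the flaw.
            parent = domain.split(".", 1)[1]
            self.ns[parent] = glue
            return True
        return False

random.seed(1)
cache = ResolverCache()
attempts = 0
while True:
    attempts += 1
    # Each query is for a never-before-seen name (rand1.example.com,
    # rand2.example.com, ...) so nothing is cached and a new race starts.
    txid = random.getrandbits(TXID_BITS)
    forged = random.getrandbits(TXID_BITS)  # attacker's blind guess
    if cache.accept(f"rand{attempts}.example.com", txid, forged, "6.6.6.6"):
        break

print(f"poisoned example.com -> {cache.ns['example.com']} "
      f"after {attempts} forged responses")
```

The point of the throwaway subdomains is that a miss costs the attacker nothing: the next query opens a fresh race, so the attacker gets unlimited tries against a 16-bit secret.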
Without source port randomisation, it is possible to overcome the message ID randomisation and inject a fake response that poisons the entry for the parent domain in around 10 seconds on a fast modern system. To achieve this, numerous requests are made for fake subdomains until the right combination of ID and timing is found to inject the response. The fix of adding randomisation to the source ports used in making the requests adds another layer of complexity for the attacker to overcome, one which is enough for now.
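A quick back-of-the-envelope calculation shows why the source-port fix raises the bar. This sketch assumes a blind off-path attacker, roughly uniform randomness, and a full 16-bit port range (real ephemeral port ranges are somewhat smaller, so these are ballpark figures):

```python
# Search spaces the attacker must guess through per forged packet.
txid_space = 2 ** 16              # transaction ID only: 65,536 values
port_space = 2 ** 16              # ~64k source ports (an approximation)
combined = txid_space * port_space  # both must match: ~4.3 billion

# A blind spoofer expects success after about half the search space.
print(f"TXID only:       ~{txid_space // 2:,} forged packets")
print(f"TXID + src port: ~{combined // 2:,} forged packets")

# If ~10 seconds sufficed against the 16-bit space, the same attacker
# would need on the order of 65,536x longer against the combined space.
print(f"scale-up factor: {port_space:,}x")
```

Which is why the patch is a mitigation rather than a cure: it multiplies the attacker's work rather than making the forgery impossible, and anything that strips the port entropy (such as a NAT device) collapses it back to the original 16 bits.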
Is it a band-aid solution? Only time will tell, but it might prove good enough for the next few years at least. Perhaps a better solution would be for every domain to include a wildcard subdomain entry that identifies the legitimate main server as the authoritative one for all subdomains of that domain. Sending this wildcard information in the DNS response would increase network traffic, but it would also completely neutralise a spoofing attack (unless the attacker is lucky enough to have the right combination of ID, timing, and source port to beat the legitimate response to the end user). It might break some business models that rely on selling or marketing subdomains, and mean more authoritative DNS servers need to be set up, but that may be what is necessary to completely neutralise the vulnerability.
Posted: 23 Jul 2008 04:17 PM CDT
OK, I haven’t given out an OJ award in a while, but The Hoff deserves this far and away. He wrote a poem about the DNS flaw debacle and the debate that has ensued about disclosure, and it is the most awesome display of security poetry to date in my not-so-humble opinion.
So Chris, here ya’ go man. Enjoy!
Posted: 23 Jul 2008 11:09 AM CDT
From some comments Dwayne made on yesterday’s post.
IT-GRC is just threat/vulnerability pairing when you consider external regulatory compliance pressures as the Threat Community. If you think of it this way, you might be able to understand why I'm not keen on the value many current GRC solutions provide. As Shrdlu (or was it rybolov?) once said - GRC is (usually*) just a report. Turns out, it's just a threat/vulnerability pairing report.
* “usually” is my addition.
Posted: 23 Jul 2008 07:09 AM CDT
OK, so the Matasano people accidentally let everyone know what the DNS flaw was. I posted my thoughts on that at my CW blog. But then I read Pete Lindstrom’s little post about the issue, and I just have to wonder what Pete is thinking. Pete says this:
Wow. So Mr. Lindstrom, how do you propose that Dan let people know they need to patch their DNS WITHOUT TELLING THEM?!?!? Dan did everything he could not to let anyone but a few select "need-to-know" people in on the flaw. He told them so they could develop patches. Then he announced it after they developed the patches. He did a great job with this.
What he didn’t want getting out was the details of the attack. But I am pretty sure Dan knew that this would happen eventually. There are too many people out there looking at this now for it not to come out. But hey, a man can hope, right??
So seriously Pete, think about it. Dan was trying to keep the flaw itself a secret before he announced so patches could get developed, then he announced so people would know there was a flaw and would patch, and then he was trying to keep the details secret after he announced so people had time to patch. But he couldn't NOT tell people and expect them to patch.
Posted: 23 Jul 2008 05:42 AM CDT
In the story we recently covered where Terry Childs had locked San Francisco officials out of their own network, there is a new development. He's handed over the passcode to the Mayor, Gavin Newsom. It seems he came to his senses, and he also seems to have VERY little faith in the IT administration for the [...] ShareThis
Read the full post at darknet.org.uk
Posted: 23 Jul 2008 04:30 AM CDT
You might have read the Search Networking article about NAC and server virtualization. I did. At first I was amused, as I know everyone mentioned (with some of them I've worked closely over the years, and we meet at least every year to party). But then, after some folks I know asked me to comment, I thought it would make sense to record my thoughts, as the article contains some disturbing "facts" that might cause confusion.
So for the sake of mankind and the security industry, I'll give you my rant on server virtualization alongside feedback on the article. For those who know me, it should not be surprising to find that I have comments on the very first paragraph...
Network security, especially network access control (NAC), is the Achilles' heel of server virtualisation. With virtual servers moving around the data centre, traditional access control is difficult to apply. This can be particularly challenging when organizations need to meet stringent data audit control standards for compliance with payment card industry (PCI), healthcare industry (HIPAA) and governance (Sarbanes-Oxley).
First, I would argue that NAC is all about admission control and as such is a client-side solution, there to protect the network from rogue and non-compliant endpoints. Then I would add that the PCI standard, which requires organizations to maintain stringent data controls, does not mention NAC at all. But never mind, I'd like to take the high road and focus on the topics discussed.
The main issue mentioned is that security systems lose visibility into the traffic running across a virtual LAN, which may change as the virtual machines (VMs) move across physical machines. I agree. Virtual systems require us to think differently. For more than two years now, SecureSphere has provided VLAN doctoring, so it can protect virtual systems even when external appliances are used to protect and audit multiple virtual systems (it also includes a router, network firewall, IPS, and Web Application Firewall). Using the object-oriented policy model (in contrast with a traditional security ACL format) also ensures that the physical location of the protected server is not an issue. Add that the favorite mode of SecureSphere operation is as a layer-two transparent bridge, and you have the formula for what the article calls "new thinking."
Trying not to sound like an elitist arse is challenging (I'm not!), but I take it personally when I'm being advised to "think differently." I am! At any rate, the article does include some good comments about the differences between Imperva as an Application Data Security vendor and traditional security solution vendors. I would even agree that in some cases (but not the one mentioned in the article) forcing VM traffic to an external device is not a good approach. Thus, in some cases, database agents are needed to audit activity within the VM. The agents can be installed on a virtual machine in a network-agnostic, administrator-friendly, yet tamper-evident manner. Such agents must be light and very, very accurate, as the impact of false positives on the server itself could be very expensive.
NAC and Client Virtualization is another topic. I'll let the relevant vendors comment.
Posted: 23 Jul 2008 04:23 AM CDT
It is now "official" I'm spending the past week and the next few days at the Coolest place in the Mediterranean.
It's "Home at the end of the world" as Henry Alford wrote.
Posted: 23 Jul 2008 04:01 AM CDT
The European Court of Human Rights has ordered the Finnish government to pay out €34,000 because it failed to protect a citizen's personal data. One data protection expert said that the case creates a vital link between data security and human rights. See "Data blunders can breach human rights, rules ECHR" on the Pinsent Masons Out-Law blog.
You are subscribed to email updates from the Black Hat Security Bloggers Network.