Posted: 08 Aug 2008 02:00 AM CDT
The second and final day of Black Hat has come and gone. There were some good presentations today, probably more interesting to the critical/control system area of security. The activity at Defcon is already starting to pick up, with lots of parties tonight from the major and minor players and generally a lot of winding down for the presenters.
I started the day with a great presentation from Felix Lindner on forensics in Cisco IOS, essentially examining full memory dumps and some of the configuration and debugging techniques available on IOS. This is something I think we could see applied to PLCs: assuming the PLC has some sort of rudimentary debugging interface, it could be trivial to checksum the RAM/ROM and detect changes, both intentional and unintentional. Another interesting takeaway from the presentation: there are estimated to be somewhere in the neighborhood of 100,000 different IOS builds out in the wild, approximately 15,000 of which are currently supported by Cisco.
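To make that PLC idea concrete, here is a minimal sketch (my own illustration, not anything shown in the talk) of checksumming a memory dump against a known-good baseline. Hashing fixed-size regions lets you localize a change as well as detect it:

```python
import hashlib

def region_hashes(dump: bytes, block_size: int = 4096) -> list:
    """SHA-256 each fixed-size block of a RAM/ROM dump."""
    return [hashlib.sha256(dump[i:i + block_size]).hexdigest()
            for i in range(0, len(dump), block_size)]

def diff_regions(baseline: bytes, current: bytes, block_size: int = 4096) -> list:
    """Return the byte offsets of blocks that differ from the baseline."""
    pairs = zip(region_hashes(baseline, block_size),
                region_hashes(current, block_size))
    return [i * block_size for i, (a, b) in enumerate(pairs) if a != b]
```

Run this against periodic dumps pulled over the device's debug interface; a non-empty result flags a change worth investigating, whether a legitimate update or tampering.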
Travis Goodspeed has done some interesting work on dumping and reprogramming the firmware on the MSP430 microcontroller. Fascinating research, but honestly I didn’t have enough of an electrical engineering background to completely understand it, lots of waveforms.
The SCADA fuzzing presentation was interesting. There was a lot of buzz leading up to the talk, and rumors floating around about vendor lawyers and court orders, but in the end the presentation was given. Essentially, Sergey Bratus of Dartmouth College, working with TCIP, was able to cause a lot of damage to some real SCADA systems. With no real knowledge of the proprietary protocols, Sergey was able to use some compression techniques along with some evolutionary fuzzing to completely crash the system. No details of exploits were given, and the presenters were careful not to give any real details about the vendors affected. There is quite a mess of protocols floating around these critical systems, and anyone who's looked at them knows they aren't exactly the cleanest or clearest; the only solution to that is open and peer-reviewed standards. There was a lot of side talk after the presentation about asset owners pushing vendors, and about government initiatives/requirements.
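For a feel of what "evolutionary fuzzing" means, here is a toy sketch (my own illustration, not Sergey's actual tooling): mutate captured protocol messages, score each mutant by some feedback signal from the target (response anomalies, crashes), and breed the next generation from the best scorers.

```python
import random

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Flip a few random bytes of a captured protocol message."""
    data = bytearray(seed)
    for _ in range(rng.randint(1, 4)):
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def evolve(seeds, score, rounds=50, pool_size=8, rng=None):
    """Keep the highest-scoring mutants (e.g. the ones that provoke the
    strangest target responses) and mutate them again each round."""
    rng = rng or random.Random(0)
    pool = list(seeds)
    for _ in range(rounds):
        children = [mutate(rng.choice(pool), rng) for _ in range(pool_size)]
        pool = sorted(pool + children, key=score, reverse=True)[:pool_size]
    return pool
```

In a real campaign the score function would come from instrumenting the target; here it is just a placeholder hook.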
Lastly, there was a big announcement from Microsoft. I was unable to attend, as I was in the SCADA talk above, but it appears they're going to begin sharing information with customers and partners on a more official basis. From the last bit of the Q&A that I caught, there seemed to be hints of MS working with third-party software developers to fix vulnerabilities in their software running on the Windows platform. Few details were given, and they were clear that they wouldn't be acting as a CERT, but clearly they're preparing to be more involved with the process. I have to think they're going to be most interested in enterprise software, but without a doubt there will be some interest in critical systems as well. It will be interesting to see how the program shapes up over the coming months.
That's all for now. The chaos of Defcon really gets going tomorrow; there should be some interesting stuff, including one very interesting talk involving cell phones.
Posted: 07 Aug 2008 10:59 PM CDT
Posted: 07 Aug 2008 06:03 PM CDT
I’m sitting in the Extreme Client-side exploitation talk here at Black Hat and it’s highlighting a major website design risk that takes on even more significance in mashups and other web 2.0-style content.
Nate McFeters (of ZDNet fame), Rob Carter, and John Heasman are slicing through the same origin policy and other browser protections in some interesting ways. At the top of the list is the GIFAR, a combination of an image file and a Java applet. Image files include their header information (the part that helps your system know how to render them) at the top, while JAR files (Java applets) include theirs at the bottom, so the two can be combined into a single file that is valid as both. This means that when the file is loaded it will look like an image (because it is), but it can also be executed as an applet. Thus you think you're looking at a pretty picture, and you are, but you're also running an application.
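The concatenation trick is easy to demonstrate with nothing but the Python standard library (a toy illustration of the file-format quirk, not the presenters' code). A GIF parser checks its magic bytes at the top of the file, while ZIP/JAR readers locate their central directory from the end, so a single file can satisfy both checks:

```python
import io
import zipfile

# Toy GIF header bytes (enough for the magic-number check, not a renderable image).
gif_part = b"GIF89a" + b"\x01\x00\x01\x00" + b"\x00" * 10

# A real ZIP archive standing in for a JAR (the payload class is a placeholder).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    jar.writestr("Applet.class", b"\xca\xfe\xba\xbe")

gifar = gif_part + buf.getvalue()

# The same bytes pass both format checks:
assert gifar.startswith(b"GIF89a")              # "is it a GIF?" -> yes
with zipfile.ZipFile(io.BytesIO(gifar)) as z:   # "is it a JAR?" -> also yes
    assert z.namelist() == ["Applet.class"]
```

zipfile tolerates the prepended image bytes because ZIP readers seek the end-of-central-directory record from the back of the file, which is exactly the property GIFAR abuses.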
So how does this work as an attack? If I build a GIFAR and upload it to a site that hosts photos, like Picasa, when that GIFAR loads and the applet part starts running, it can execute actions in the context of Picasa. That applet then gains access to any of your credentials or other behaviors on that site. Heck, forget photo sites; how about anything that lets you upload your picture as part of your profile? Then you can post in a forum, and anyone reading it will run that applet (I made that one up; it wasn't part of the presentation, but I think it should work). This doesn't just affect GIF files: all sorts of images and other content can be manipulated this way.
This highlights a cardinal risk of accepting user content: it's like a box of chocolates; you never know what you're gonna get. You are now serving content to your users that could abuse them, which not only makes you responsible but could directly break your security model. Things may execute in the context of your site, enabling cross-site request forgery or other trust boundary violations.
How do you manage this? According to Nate, you can always choose to build in your own domain boundaries: serve content from one domain, and keep the sensitive user account information in another. Objects can still be embedded, but they won't run in a context that allows them to access the other site's credentials. Definitely a tough design issue. I also think that, in the long term, some of the browser session virtualization and ADMP concepts we've previously discussed here are a good mitigation.
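A rough sketch of that domain-boundary mitigation (the host names here are made up for illustration): hand out upload URLs on a separate "sandbox" origin, so even if an upload executes, the browser's same origin policy keeps it away from the main site's session cookies.

```python
from urllib.parse import urlsplit

SANDBOX_ORIGIN = "https://usercontent.example.net"  # hypothetical upload-only domain

def upload_url(object_id: str) -> str:
    """Public URL for user-supplied content, served off the sandbox origin."""
    return f"{SANDBOX_ORIGIN}/uploads/{object_id}"

def same_origin(url_a: str, url_b: str) -> bool:
    """The (scheme, host, port) triple browsers compare for the same origin policy."""
    a, b = urlsplit(url_a), urlsplit(url_b)
    return (a.scheme, a.hostname, a.port) == (b.scheme, b.hostname, b.port)

# An applet smuggled into an upload runs in the sandbox origin, not the main site:
assert not same_origin(upload_url("avatar.gif"), "https://example.com/account")
```

The design cost is real: anything the embedded content legitimately needs from the main site now has to cross an origin boundary explicitly, which is exactly the point.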
Posted: 07 Aug 2008 03:36 PM CDT
One of the more interesting sessions I went to yesterday was a talk by Chris Hoff called "The Four Horsemen of the Virtualization Apocalypse." (If you've never read Hoff's blog, you should check it out at http://rationalsecurity.typepad.com/.)
I thought I was keeping a close eye on security and virtualization issues, but this talk illustrated how wide and varied the topic really is. This was not about Blue Pill and it wasn't about having security monitors in the hypervisor - instead he focused on how virtualizing physical devices (e.g. switches, systems) will cause lots of problems for security architects and administrators.
Briefly, here are the four horsemen:
Now, if you want to read the much more thorough version, see Hoff's original post here.
Okay, how does this all relate to the title of my post? Not much. However, much later on day one, things really started rolling.
After being crowded out of the Shadow Bar, a bunch of us ended up over at Casa Fuente (a cigar bar in the Forum Shops at Caesars). Five minutes after arriving, someone spilled a drink in my lap. Big fun! It turns out that it was Stepto's birthday, and Hoff makes sure everyone has a drink and we all sing happy birthday to Stepto. Check out part of it, courtesy of Jack Daniel:
Immediately after the toast, Jennifer Jabbusch knocks over a table, falls to the floor and begins having a seizure. Stepto rushes over, trying to help, and just about that time, she flips over and starts laughing - total fakeout! Everybody bursts out laughing.
Shortly after that, they closed for the night and kicked us out, and we all headed over to Cleopatra's Barge. There weren't enough seats or tables for us, but I noticed that the "reserved" barge seating was empty. Drawing upon a clever technique (sometimes called "asking"), I social engineered a waitress into letting us have the reserved area. Within mere minutes, several security geeks were on the dance floor, doing us proud.
This leads me to the Four Horsemen of Cleopatra's Barge. (Though I was out there too, I am excluding myself simply because I can.)
Though our collective dancing does not signal the end of the world, it certainly capped an excellent day.
Posted: 07 Aug 2008 02:57 PM CDT
I am a non-native English speaker. Usually I can get along just fine with small talk and casual conversation, but when it comes to grammar I have to sweat before a decent paragraph can be posted or sent.
My biggest challenge is the comma. This small punctuation mark makes my life difficult as I have a tendency to use it very often and probably more than I should. As a service to all the others that might be facing a similar challenge, I would like to recommend a book that was given to me not too long ago and helped me to overcome my comma challenges.
The book "The Elements of Style" (an online version is available) was written almost 100 years ago, yet it provides invaluable guidance on how to use English properly. The elementary rules of usage section covers the most common comma usage rules, including:
Posted: 07 Aug 2008 12:30 PM CDT
I saw this article in the Arizona Republic Monday about how the insurance companies are able to save money by gathering health care records electronically, make more accurate analyses of patients (also saving money) and be able to adjust premiums (i.e., make more money) based upon your poor health or various other things. You know, like ‘pre-existing’ conditions, or whatever concept they choose to make up.
Does anyone think that they will be offered an option, the choice of not providing these records electronically? Not a chance. This will be the insurer's policy: you can choose not to have insurance, or turn over your records.
Does this violate HIPAA? To me it does, but since you are given the illusion of choice, their legal team will surely protect them with your 'agreement' to turn over these electronic documents. And why not? With all the money they saved through data analysis, they have plenty for their legal expenses.
Does anyone think that the patient will be allowed to see this data, verify accuracy, or have it deleted after the analysis? Not a chance. Your medical data will most likely have a “half life” longer than your life span. That stuff is not going anywhere, unless it is leaked of course. But then you will be provided a nice letter in the mail about how your data may or may not have been stolen and how you can have free credit monitoring services if you sign this paper saying you won’t sue. It’s like watching a car wreck in slow motion. Or a Dilbert comic strip.
Let me take another angle on the data accuracy side of this proposition. When I first graduated college, I walked down the street to open a checking account with one of the big household names in banking. For the next 12 months I received a statement each month, and not one of those banking statements was 100% correct. Every single statement had an error or an omission! My trials and angst with a certain cell phone provider are also well documented. Once again, charges for things I did not order, rates that were not part of the plan, leaked personal data, and many, many other things during the first year. I had one credit card for a period of 12 years, and like clockwork, a late fee was charged every 6-9 months despite postmarks and deposit dates which conclusively showed I was on time. I finally got tired of having to call in to dispute it, and just plain fed up with what I assumed was a dastardly business practice to generate additional revenue from people too lazy to look at their bills or pick up the phone and complain. I had a utility company charge me $900, for a single month, on a vacant home I had moved out of three months prior. One out of two grocery store receipts I receive is incorrect in that one or more prices are wrong or one of the items scans as something that it is not. Other companies who saved my credit card information, without my permission, tried to bill me for things I did not want nor purchase. Electronic records typically have errors, they are not always caught, and there may or may not be a method to address the problem.
The studies I have seen measuring the accuracy of data contained within these types of databases are appalling. If memory serves, over 20% of the data in these databases is inaccurate due to entry or transcription errors, contains logic errors from transformational algorithms, or has become inaccurate with the passage of time. That latter item means the accuracy degrades further with each subsequent year. There is no evidence that Ingenix will have any higher accuracy rates, or will not be subject to the same issues as other providers, such as ChoicePoint. They say computers don't lie, but they are flush with bogus data.
Now think about how inaccurate information is going to affect you, the medical advice you receive, and the cost of paying for treatment! There is a strong possibility you could be turned down for insurance, or pay twice as much for insurance, simply because of data errors. And most likely the calculation itself, whether a "Pharmacy Risk Score" or any other actuarial measure, will not be disclosed. If this system does not have a built-in method for periodically certifying accuracy and removing old information, it is a failure from the start. I know this is a recurring theme for me, but if companies are going to use my personal information for their financial gain, I want to have some control over that information. Insurance companies will derive value from electronic data sharing because it makes their jobs easier, but the consumer will not see any value from this.
Posted: 07 Aug 2008 11:52 AM CDT
Yesterday was a great day at Black Hat. I would tell you all about it, but it seems Mitchell thinks it best that we don't talk about what goes on here at Black Hat. Now, far be it from me to break "Cardinal Rules" (has anyone ever really thought about what exactly a "cardinal rule" is? Why not a Blue Jay or Falcon rule?), but if we can't talk about it, what good is it? I think Mitchell is confusing divulging the really juicy Vegas stuff with the merely mundane. So let me tell you about my excellent adventure yesterday at Black Hat.
Posted: 07 Aug 2008 11:45 AM CDT
If you felt the winds of change last week, it wasn't only SoCal's earthquake aftershock. McAfee announced that it is "redefining the entire data protection market." In a press release last week, they announced the acquisition of Reconnex, a network-based DLP vendor, following an earlier acquisition (2006) of the host-based DLP vendor Onigma. Since I was part of another team that was "redefining the market" earlier, I feel that with the time perspective (almost two years) there are a few DLP lessons to be learned from McAfee's acquisitions that apply to the DAM space, especially around the network vs. host debate.
Ask anyone who has attempted to deploy a DLP solution and they'll tell you that the main obstacles are deployment-related: identifying the starting point, the number of endpoint systems, data discovery, classification, and host protection. Database Activity Monitoring and Security solutions are less affected by those factors due to the inherent nature of the DBMS model and the business applications they protect (note to self: a good idea for another post).
Choosing a host-based solution for activity monitoring and security, when the bulk of operations are performed over the network, will increase complexity, add burden on the host (increased CPU usage and memory footprint), and require a very complicated policy management model, as McAfee probably found out. Unlike laptops, which can be stolen, left behind, or simply lost, database servers have a tendency to stay in a known location. Sure, there are some use cases, such as separation of duties (SOD) and privileged user monitoring (PUM), that require the monitoring to take place on the database server itself.
Database activity monitoring (especially for auditing purposes and compliance with regulations like PCI) requires inspection of all database activity. Network-based solutions (like SecureSphere) use a network appliance to examine the transactions related to the database, either inline or in sniffing mode (usually off a TAP or a SPAN port). However, this method presupposes that the activity occurs between the database server and a remote (i.e., non-local) application or user over a database connection.
When the database activity is not available for inspection by the network appliance (e.g. some database activity is performed locally on the database server or via one of the few unsupported encryption methods), something else is needed. In this case SecureSphere uses a light-weight database agent that is installed on the audited database and can examine all relevant communications, eliminating blind spots.
This lightweight database agent captures only relevant database activity and sends the traffic to a gateway appliance for analysis and audit using different transport methods. It has negligible performance impact on the mission-critical database server and allows you to maintain a unified policy model with network activities, thereby reducing the overall administrative burden.
Looking at the past and learning from experience tells us that heavy agents and agent-only approaches are indeed limited in DAM, just as in the DLP market. Solutions that rely on the host alone will never scale. Sure, there might be a scenario where someone buys into this concept and even tries to implement it, but for large-scale production deployments, it will only be possible with a combination of a fast, reliable network-based solution and lightweight, hassle-free host agents.
Will McAfee redefine the market (again)? Only time will tell.
Posted: 07 Aug 2008 11:39 AM CDT
Day 1 of Black Hat 2008 is in the books. It's great to see a lot of old friends, and it seems this year (more than the last two) many of the folks I'm talking to are more focused on the networking than on the sessions. Not me. I'm still fired up about seeing really smart guys discuss what they are up to and getting a lot of food for thought about how we need to continue protecting ourselves.
I ended up hitting almost all the sessions I wanted to, so let me go through some quick observations.
The Mogull and I recorded a quick podcast yesterday as well. We talk about Kaminsky's and Hoff's pitches and come to the conclusion that basically we're screwed. You can check it out at the Network Security Podcast site.
Before I head off to Day 2, I have to relay my latest Vegas star sighting. To wrap up the night Shimmy, Mitchell, Adrian Lane and I are catching a little late night breakfast at Caesars. Sitting right next to us is Jeff Dye, one of the finalists on this season's Last Comic Standing. You all know what big fans of comedy the Boss and I are, so it was great to see him in person. He's a very nice guy and he really is that pretty. They are announcing the winner of the show tonight, so I told Jeff we'd be pulling for him.
Only in Vegas...
Posted: 07 Aug 2008 08:30 AM CDT
Posted: 07 Aug 2008 07:55 AM CDT
Bob Dylan was right. The times are changing, especially in the web security war. It turns out that the hacker group behind the Coreflood Trojan has stolen at least 463,582 usernames and passwords while flying under the radar. How did they accomplish this? An instant messaging worm? Emailing malware out, via a botnet, to everyone and their dog? According to SecureWorks Director of Malware Research Joe Stewart, it all started with a drive-by attack:
So basically, the attackers' plan is to put up an infected website, let one user access it and get infected, and then wait for the domain administrator to log into that workstation. After the administrator has logged in and the malware has privileges, it propagates like an update to all other systems on the network.
Also, the group “did not rely on zero-day attacks, just standard exploits that one can get from various underground forums“.
According to Stewart:
Ah, the old Keyser Soze trick - The greatest trick the devil ever pulled was convincing the world he did not exist. And like that… he is gone.
Posted: 06 Aug 2008 10:59 PM CDT
Posted: 06 Aug 2008 07:21 PM CDT
Here's a piece I wrote that came out Monday in SearchSecurity's Network Security newsletter about LDAP, both its history and its future.
Posted: 06 Aug 2008 06:01 PM CDT
Posted: 06 Aug 2008 06:00 PM CDT
Judging from this (funny) video I found on Kaminsky's blog (he's the guy who gave new life to the old DNS cache poisoning issue), it seems that a large portion of the major ISPs' DNS servers have been patched.
After Kaminsky's publication of the vulnerability, exploit code went wild and was ported to HD Moore's Metasploit framework just a few days later.
Not even two weeks after the breakthrough, HD Moore's company web site was hijacked by spammers poisoning the AT&T DNS server serving his compa [...]
Posted: 06 Aug 2008 01:47 PM CDT
This October, in India and Bangladesh, there is a planned rollout of a technology that will enable anyone to transfer money between bank accounts, credit cards, and phones via text messages from a cellular (mobile) phone. Using Obopay, you can sign up for an account and start moving your money around like it's nobody's business.
From the article:
The question is, however: will security be an afterthought, or will it be a primary focus of this offering? Enabling access to, and money transfer between, accounts from a mobile platform will require rigorous security safeguards. Surely Obopay has thought of this, right? Well, the Obopay website states that it is indeed secure, as you are required to specify a PIN upon the creation of your account. This PIN is used any time you send money, so "even if you lose your phone your money is safe"…safe?…SAFE?
Why isn't multi-factor authentication a requirement? How easy would it be for someone to pick up your cell phone and empty out your bank account if they knew your super-secret PIN? How easy would it be for someone to beat your PIN out of you?
These are all questions that I would have expected to be addressed during the design and implementation of this new technology integration. Alas, it appears that this is not so. Why is that again?
More from the article:
Ahhh…that’s right. Money. I often forget that making boat loads of money is always justification for poor application security planning.
Posted: 06 Aug 2008 09:34 AM CDT
In speaking with some of my fellow Twits, we all agreed that it would have been great to get to Black Hat this year (as well as some of the other security conferences that have passed). We all have reasons for not being able to attend these premier events, but the general consensus is that a lack of money remains the primary one.
For me it is both a lack of money and a geographic location issue. Living in Fredericton, New Brunswick, Canada does not provide me with an easy route to Las Vegas, San Francisco, or Boston where most of the conferences are located. The airfare in Canada is ridiculous and for me to hop to a major US city I must first go through Toronto or Montreal. For example, to get to Las Vegas, I’m looking at roughly 4 flights and a round trip price tag of ~$2000. That doesn’t include the price of the conference that I would be attending, hotel, food, and so on. If any of my readers are married, you know that justifying a nearly $6000 price tag to your spouse for you to head to Las Vegas for a week without them, is about as easy as pulling your bottom lip over your head.
To that end, I have decided to found the Poor Bastards/Babes Who Can’t Afford Security Conferences (PBWCASC) (pronounced Peb-Wah-Cask). I have not yet decided if a website will be created or who will be hosting the telethon - Sally Struthers has a good track record…maybe I’ll reach out to her.
So join me, brothers and sisters, rise up against the cost of security conferences and join PBWCASC today!
(P.S. - scientists have not yet determined a way to join an “idea” at this time…please check back later when technology has caught up to my imagination.)
Posted: 06 Aug 2008 03:40 AM CDT
Recently we have been witnessing a trend of old vulnerabilities making a comeback. Some examples are the old TRACE XSS trick reappearing in many web applications, HTTP verb tampering exploits in the major web platforms, and, of course, the recent SQL injection attacks that recycle a well-known (but deadly) injection technique.
I see many replies to posts about those "oldies" that dismiss their importance and generally regard them as another (lame) way for the authors to get some spotlight. However, I believe that reappearing vulnerabilities should receive careful attention, since they might indicate some inherent security misconception that, if not attended to, will reappear time and again through different exploits. Actually, vulnerability comebacks can be our chance to test ourselves and make sure we really cured the disease and did not simply make its symptoms disappear.
For example, we can learn much from the verb tampering vulnerabilities that were reported in webappsec. This example illustrates the risk of relying solely on a black list approach to secure a web application. Generally, if one bases one's web application security on black list rules, one is left unprotected from the attacks one just did not think about. In the verb tampering context, configuring a server to authenticate GET requests to admin pages does not mean the server will do the same for HEAD requests to the same pages. This, of course, reopens the everlasting debate between the black list supporters and the white list supporters (personally I believe that neither one can work without the other). Nevertheless, this comeback could be an alarm call indicating that the problem IS NOT solved and that we must reopen this debate and reevaluate our stand.
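A tiny sketch of the GET/HEAD pitfall (my own illustration, not code from the reported advisories): the black list rule only covers the verb its author thought of, while the white list fails closed for everything else.

```python
# Black list: "GET to /admin requires auth" - anything the rule doesn't
# name (HEAD, gibberish verbs, ...) slips through untouched.
BLACKLIST = {("GET", "/admin")}

def blacklist_allows(method: str, path: str, authenticated: bool) -> bool:
    if (method, path) in BLACKLIST and not authenticated:
        return False
    return True  # no rule matched, so the request is waved through

# White list: only the combinations we explicitly allow get through.
def whitelist_allows(method: str, path: str, authenticated: bool) -> bool:
    if path.startswith("/admin"):
        return authenticated and method in {"GET", "POST"}
    return True  # public pages

# An unauthenticated HEAD to /admin bypasses the black list...
assert blacklist_allows("HEAD", "/admin", authenticated=False)
# ...but not the white list.
assert not whitelist_allows("HEAD", "/admin", authenticated=False)
```

HEAD is the classic example because servers answer it with GET semantics minus the body, so the "hidden" page's logic still runs.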
Bottom line: pay respect to the elderly. You never know when they will be back to kick your *ss.
Posted: 05 Aug 2008 10:59 PM CDT
Posted: 05 Aug 2008 04:43 PM CDT
Every now and then, a company will take credit for delivering some solution or for coining a unique marketing term (yes, it usually has "life cycle" in it somewhere). Having said that, I believe that companies should be measured by their overall strategy and vision and their ability to execute on that vision.
Today, we announced another piece of a unique solution that is part of an overall, larger vision and solution strategy to combine security and activity monitoring for web, databases and enterprise applications.
In June, we announced that we had extended the Application Security Life Cycle to production systems and delivered the broadest PCI compliance. Today, we are announcing a Web Activity Monitoring capability to close the loop between security operations and developers. The two announcements are related, just two parts of our overall vision to protect application data.
The first announcement was the industry's first closed-loop solution for managing the Web application security life cycle on production systems. It includes not only the ability to take vulnerability data from scanners, but also to feed changes back to them; this is what closed the loop.
Today, we announced that SecureSphere Web Application Firewall adds Web Activity Monitoring (WAM) to automate the discovery and accelerate the remediation of application vulnerabilities in production systems. Not only can SecureSphere block attacks (including single-packet attacks), it can record malicious inputs and application responses to provide development teams with the information they need to pinpoint and fix coding flaws. As an application security company we had been focused on blocking attacks, but in order to give developers a better understanding of their code, we had to capture and keep all the relevant data as well as block the attack. Those two tasks can conflict, and we had to spend a great deal of time finding the right way to do both.
The diagram below illustrates how SecureSphere bridges the development and production realms for web applications. Note that the timeline is not displayed: one can argue that code can be fixed manually; however, the WAF's integration with scanners and the new WAM component accelerate the protection process.
Andrew Jaquith of Yankee Group provides an excellent explanation:
"...Imperva's Web Activity Monitoring solution feeds alerts and reports to both security and development teams, closing the loop between security operations and application developers."
As I stated above, this is part of a bigger vision. There are more dots to connect....
You are subscribed to email updates from Black Hat Security Bloggers Network.