Wednesday, June 25, 2008

Spliced feed for Security Bloggers Network

Why don't AV vendors make it easy? [StillSecure, After All These Years]

Posted: 25 Jun 2008 06:38 AM CDT

One of the newer, but very well known members of the 155+ blogs of the Security Bloggers Network, is the Errata Security blog from Dave Maynor, Rob Graham and Marisa Fagan.  Dave has a post up today about his frustrations with trying to remove McAfee AV from his new mobile phone. I share his frustration.  Having run Windows Mobile for over a year now, changing ROMs in addition to installing and deleting a multitude of applications, I am often frustrated by the lack of visibility you have into the files and system on Windows Mobile.  If an application does not remove itself cleanly, you are hosed.

A far larger frustration for me, though, is removing AV vendors' security software from any computer, mobile or otherwise.  It is not just a McAfee thing either.  Symantec, CA and Microsoft are just impossible to remove without a major pain.  What is the reason?  Do they make it hard because they think people might remove them by mistake?  I don't think so.  Like Dave says, when does AV become a virus itself?


Week of War on WAF’s: Day 2 — A look at the past [tssci security]

Posted: 25 Jun 2008 02:07 AM CDT

Web application experts have been asking WAF vendors the same questions for years with no resolution.  It’s not about religion for many security professionals — it’s about having a product that works as advertised.

My frustration is not unique.  I am not the first person to clamor on about web application firewalls.  Jeff Williams pointed me to a post that Mark Curphey made in 2004.  Today, Curphey appears to have had a change of heart — his latest blog post provides a link to URLScan, which some claim is like mod-security for Microsoft’s Internet Information Server (IIS).  Microsoft released URLScan Beta 3.0 in order to curtail the massive problem of over two million Classic ASP web applications that have become infected due to SQL injection attacks.

Here is the post where the frustration of WAF and their vendors first began:

—–Original Message—–
From: The OWASP Project [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 16 November 2004 2:34 PM
To: [EMAIL PROTECTED]
Subject: An Open Letter (and Challenge) to the Application Security
Consortium

An Open Letter (and Challenge) to the Application Security Consortium

Since its inception in late 2000 the Open Web Application Security Project (OWASP) has provided free and open tools and documentation to educate people about the increasing threat of insecure web applications and web services. As a not-for-profit charitable foundation, one of our community responsibilities is to ensure that fair and balanced information is available to companies and consumers.

Our work has become recommended reading by the Federal Trade Commission, VISA, the Defense Information Systems Agency and many other commercial and government entities.

The newly unveiled Application Security Consortium recently announced a “Web Application Security Challenge” to other vendors at the Computer Security Institute (CSI) show in Washington, D.C. This group of security product vendors proposes to create a new minimum criteria and then rate their own products against it.

The OWASP community is deeply concerned that this criteria will mislead consumers and result in a false sense of security. In the interest of fairness, we believe the Application Security Consortium should disclose what security issues their products do not address.

As a group with a wide range of international members from leading financial services organizations, pharmaceutical companies, manufacturing companies, services providers, and technology vendors, we are constantly reminded about the diverse range of vulnerabilities that are present in web applications and web services. The very small selection of vulnerabilities you are proposing to become a testing criteria are far from representative of what our members see in the real world and therefore do not represent a fair or suitable test criteria.

In fact, it seems quite a coincidence that the issues you have chosen seem to closely mirror the issues that your technology category is typically able to detect, while ignoring very common vulnerabilities that cause serious problems for companies.

Robert Graham, Chief Scientist at Internet Security Systems, recently commented on application firewalls in an interview for CNET news. When asked the question “How important do you think application firewalls will become in the future?” his answer was “Not very.”

“Let me give you an example of something that happened with me. Not long ago, I ordered a plasma screen online, which was to be shipped by a local company in Atlanta. And the company gave me a six-digit shipping number. Accidentally, I typed in an incremental of my shipping number (on the online tracking Web site). Now, a six-digit number is a small number, so of course I got someone else’s user account information. And the reason that happened was due to the way they’ve set up their user IDs, by incrementing from a six-digit number. So here’s the irony: Their system may be so cryptographically secure that (the) chances of an encrypted shipping number being cracked is lower than a meteor hitting the earth and wiping out civilization. Still, I could get at the next ID easily. There is no application firewall that can solve this problem.

With applications that people are running on the Web, no amount of additive things can cure fundamental problems that are already there in the first place.”

This story echoes some of the fundamental beliefs and wisdom shared by the collective members of OWASP. Our experience shows that the problems we face with insecure software cannot be fixed with technology alone.  Building secure software requires deep changes in our development culture, including people, processes, and technology.

We challenge the members of the Application Security Consortium to accept a fair evaluation of their products. OWASP will work with its members (your customers) to create an open set of criteria that is representative of the web application and web services issues found in the real world. OWASP will then build a web application that contains each of these issues. The criteria and web application will be submitted to an independent testing company to evaluate your products.

You can submit your products to be tested against the criteria (without having prior access to the code) on the basis that the results are able to be published freely and unabridged.

We believe that this kind of marketing stunt is irresponsible and severely distracts awareness from the real issues surrounding web application and web services security. Corporations need to understand that they must build better software and not seek an elusive silver bullet.

We urge the Consortium not to go forward with their criteria, but to take OWASP up on our offer to produce a meaningful standard and test environment that are open and free for all.

Contact: [EMAIL PROTECTED]
Website: www.owasp.org

Barracuda to Sourcefire: We see your CEO bet, and raise you to $8.25, call [StillSecure, After All These Years]

Posted: 24 Jun 2008 11:27 PM CDT

Barracuda continues their poker game with Sourcefire today, raising their $7.50 all-cash bid to $8.25.  Are Dean and company just bluffing for publicity, or are they willing to keep playing and stay in this game until all the cards are on the table?  I don't know for sure, but find it interesting that Barracuda did say to Sourcefire that they would be willing to explore ways that would show Sourcefire's increased value to Barracuda and, based upon that, increase their offer.  Of course $8.25 is still too low, but it is getting closer.  If the offer gets near 10 bucks, Sourcefire has some serious decisions to make.  In the meantime, Barracuda will again reap the PR bounty from having a seat at the hottest poker game in security.

CharmSec Infosec Meetup Event - Thursday, 6-26: Normal Meeting [NovaInfosecPortal.com]

Posted: 24 Jun 2008 11:20 PM CDT

Here is some information regarding this week’s Thursday CharmSec infosec meetup event. There isn’t an O’s game so the bar shouldn’t be that crowded. (more…)

CapSecDC Infosec Meetup Event - Wednesday, 6/25: Normal Meeting [NovaInfosecPortal.com]

Posted: 24 Jun 2008 10:55 PM CDT

Here is some information regarding this week’s Wednesday CapSecDC infosec meetup event. Sorry for the late posting… (more…)

Burning Both Ends [BumpInTheWire.com]

Posted: 24 Jun 2008 10:52 PM CDT

This has been quite a week so far.  Not only is it the week of the BBQ cookoff I’m in, but work has been a real butt kicker.  Between two Exchange outages and an apparent over-saturation of a storage area network causing multiple cluster problems, there has been plenty to do.  The XenApp session reliability issue has been resolved.  I was lucky enough to have the product manager for XenApp Platinum Edition, Jill Alexander, leave a comment.  The resolution was to apply Hotfix Rollup Pack PSE450W2K3R02.

As I mentioned earlier, it’s BBQ week.  If anyone is going to be in Lenexa, KS at the Great Lenexa BBQ Battle, stop by pit 226 and have a beer with Mr. Bump.  This isn’t grilling a few brats in your backyard.  This is a two-day gut check.  We will check in at 7:00 AM Friday morning and finish getting set up as soon as possible to have the smoker fired up by about noon to start smoking ribs for a 5:30 dinner.  Then we party the rest of the night and start cooking the competition meat at about midnight or so.  It’s an all-night affair and is going to be a real test of stamina.  Mind over matter.  If you don’t mind, it doesn’t matter.

Microsoft: Rise in SQL Injection Attacks [Infosecurity.US]

Posted: 24 Jun 2008 07:37 PM CDT

Microsoft (NASDAQ: MSFT) has released a new Security Advisory specifically related to a reported rise in SQL Injection Attacks exploiting unverified user input. Not surprising. Hewlett Packard has developed a tool, monikered Scrawlr, to assist system administrators, database administrators and others tasked with database, application and OS security with the discovery of vulnerable code. For further information [...]

Microsoft Malicious Software Removal Tool Discovers Over 2 Million Infected PCs [Infosecurity.US]

Posted: 24 Jun 2008 07:36 PM CDT

News has surfaced that after its latest update (the early June release) the Microsoft (NASDAQ: MSFT) Malicious Software Removal Tool has discovered over 2 million infected PCs, via TechWorld. Kudos to the Microsoft Security Team.

Opinion: When Infosecurity Goes Too Far [Infosecurity.US]

Posted: 24 Jun 2008 07:35 PM CDT

John Espenschied at Computerworld reports on when information security may go too far, or at least fails to display common sense. He offers several anecdotes that are typical of environments that fail to provide proper, much less prudent, security policy, predicated on both best practice and business requirements.

Microsoft announces Black Box, White Box, and WAF [Jeremiah Grossman]

Posted: 24 Jun 2008 04:19 PM CDT

Apparently the mass SQL Injection attacks have really woken people up and they're probably flooding the MS blogs and inboxes with pleas for assistance. No doubt a lot of them use Twitter. :) Site owners are desperate to protect their old legacy ASP classic code. To help the situation Microsoft has just announced 3 free new toys specifically targeted at SQLi.

1) The Microsoft Source Code Analyzer for SQL Injection (MSCASI) is a static code analysis tool that identifies SQL Injection vulnerabilities in ASP code. In order to run MSCASI you will need source code access and MSCASI will output areas vulnerable to SQL injection (i.e. the root cause and vulnerable path is identified).


Cool. If anyone wants to provide feedback on effectiveness, I'd really like to know!
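MSCASI targets classic ASP source specifically, but the root cause it hunts for is the same in any language: user input concatenated straight into a query string. Here's a minimal sketch of that pattern and the usual parameterized-query fix, written in Java purely for illustration (the table and column names are made up):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class SqliExample {
    // The pattern a static SQLi analyzer flags: user input concatenated into the query text.
    ResultSet vulnerable(Connection con, String userId) throws SQLException {
        Statement st = con.createStatement();
        return st.executeQuery("SELECT * FROM orders WHERE user_id = '" + userId + "'");
    }

    // The usual remediation: a parameterized query keeps the data out of the SQL grammar.
    ResultSet fixed(Connection con, String userId) throws SQLException {
        PreparedStatement ps = con.prepareStatement("SELECT * FROM orders WHERE user_id = ?");
        ps.setString(1, userId);
        return ps.executeQuery();
    }
}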


2) Microsoft worked with the HP Web Security Research group to release the Scrawlr tool. The tool will crawl a website, simultaneously analyzing the parameters of each individual web page for SQL Injection vulnerabilities.

This is nice of HP to offer, but the product limitations seem somewhat onerous to me...

* Will only crawl up to 1500 pages
* Does not support sites requiring authentication
* Does not perform Blind SQL injection
* Cannot retrieve database contents
* Does not support JavaScript or Flash parsing
* Will not test forms for SQL Injection (POST parameters)

Hmm, if MSCASI and Scrawlr are used at the same time, can we call this Hybrid Analysis? :)

3) In order to block and mitigate SQL injection attacks (while the root cause is being fixed), you can also deploy SQL filters using a new release of URLScan 3.0. This tool restricts the types of HTTP requests that Internet Information Services (IIS) will process. By blocking specific HTTP requests, UrlScan helps prevent potentially harmful requests from being executed on the server. It uses a set of keywords to block certain requests. If a bad request is detected, the filter drops it and the request never reaches the application or the database.

IIS's equivalent to ModSecurity on Apache. Cool stuff, first used it a LOONG time ago and no doubt solid improvements have been made. From the description it appears to still be using a black list negative security model approach to protection. How about that!? :) Looks like the only thing they left out is some kind of DB or system clean up for those who have already suffered an incident. I'm hearing that the hacked count is up to 2 million sites now. Ouch.
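For a sense of what keyword-based blocking looks like in code, here's a rough sketch of the same negative security model written as a Java servlet filter. This is not URLScan itself (URLScan is configured through its own rule file), and the keyword list is purely illustrative -- which is exactly the weakness of a blacklist:

import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class KeywordBlockFilter implements Filter {
    // Illustrative keyword list only; real SQLi payloads are easily mutated past a blacklist.
    private static final List<String> BAD_KEYWORDS = Arrays.asList(
            "cast(", "exec(", "declare ", "varchar(", ";--");

    public void init(FilterConfig cfg) { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        String query = ((HttpServletRequest) req).getQueryString();
        String lowered = (query == null) ? "" : query.toLowerCase();

        for (String keyword : BAD_KEYWORDS) {
            if (lowered.contains(keyword)) {
                // Drop the request before it reaches application code or the database.
                ((HttpServletResponse) res).sendError(HttpServletResponse.SC_FORBIDDEN);
                return;
            }
        }
        chain.doFilter(req, res);
    }

    public void destroy() { }
}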

Network Security Is Not Dead [Carnal0wnage Blog]

Posted: 24 Jun 2008 12:57 PM CDT

There have been a few comments out on the blogosphere about NETSEC being dead. NETSEC is not dead; it's not going to be dead for a LONG time, if ever. If something is dead, I can unplug it, remove it from the rack, and never think about it again.

To me, NETSEC is (short list) router ACLs, firewall rules, VLANs, IPsec, & domain policy. I know that's not everything, but it should be enough to illustrate my point. We could argue about domain policy, but I think it's a valuable and necessary piece of security in any MS network.

Now, I agree that NETSEC as a primary defense and entry point is dead (there probably won't be another DCOM); I agree that client-side attacks completely bypass firewall rules (initially--the exploitation piece anyway, the shell is another matter); I agree that the endpoint is now the new border; and I agree that application hacking (webapp, user, browser, etc.) is where security is and where it is heading.

What I don't agree with is that I don't need my firewall rules and router ACLs anymore. Some examples...

- Without NETSEC, do we still have DMZs?
- With no DMZs and no way to control who can talk to who on your network with either FW rules or router ACLs, what is going to stop the attacker once they exploit that web app and either get a shell or credentials to log in with?
- How do I stop the attacker once he has that shell with client-side privileges? Do I just let them have free rein?
- How do I stop that outbound connection, which a lot of times can be caught with the right type of proxies (Blue Coat and similar "appliances")? Is my layer 7 FW going to catch that?

All of these people who say that network hacking is dead obviously don't have to do anything else in their pentests other than exploiting web applications. Unless you got really friggin lucky and that web application housed the data you were looking for, you are back to the old-school network game of moving around the network, setting up shop on hosts in the LAN, and doing privilege escalation. With no rules or devices in place, what is going to stop the attacker from exfiltrating that data without being seen? And if you do catch them, where are your logs without NETSEC devices?

Thoughts? I'm wrong a lot, so if I'm wrong do let me know.

Twitters users angry about SQL Injection hacks on their websites [Jeremiah Grossman]

Posted: 24 Jun 2008 12:11 PM CDT

The mass SQL injection attacks have impacted the lives of a lot of Twitter users out there. I did a search for "SQL Injection" and the results are page after page of misery, time wasted cleaning things up, and cursing up a storm. You can really feel their pain, and the worst is probably not yet over. Still gotta fix all that legacy code. Here are some of my favorite tweets…

shartley: Cleaning yet another SQL injection attack. I'm F'n sick of cleaning up after lazy programming that took place during my year away.

jamesTWIT: To the hacker who designed the SQL injection bot. I hope you die and not a fast death...something slow and painful. Like caught in a fire!

chadmonahan: Dearest SQL Injection people, I don't like you. Yours, CM

programwitch: F'n SQL injection hacks.

Anirask: Damnit. Our main website is down cause of SQL Injection attacks. You figure devs would sanitize their inputs against this shit..

Network Security Projects Using Hacked Wireless Routers [Room362.com]

Posted: 24 Jun 2008 10:28 AM CDT

Just wanted to pimp Paul from PaulDotCom’s class coming up here shortly. Also, to register go to  http://www.pauldotcom.com/sans  and help their podcast out.

SANS Institute - SANSFIRE 2008

Wednesday, July 23, 2008 : 9am - 5pm
Paul Asadoorian, Defensive Intuition
6 CPE Credits

Security Risk: Your Admin [Birchtree Blog]

Posted: 24 Jun 2008 03:51 AM CDT

Did you know every third IT admin with a master password has looked at data that he is not supposed to see? Like your paycheck, or the mail about the upcoming merger?

Besides treating your admin nicely, there are not many things you can do:
* change the passwords more often than 'never' or 'every quarter'
* encrypt stuff
* pray

By the way: when IT admins were asked what they would take with them when they leave the company, they said:
* the customer database
* the list with the passwords

Surprise!

IT admins snoop on employees ("IT-Admins schnüffeln Mitarbeitern hinterher") - Knowledge Center - Security - computerwoche.de

A Different Form of JAR Hell [aut disce, aut discede]

Posted: 24 Jun 2008 01:11 AM CDT

In my last post I used a Java applet to steal password hashes. Part two, covering NTLMv2, is on its way. Today however, I'm going to discuss SunSolve #233323 - a vulnerability that was fixed in the March updates to the JRE. Anyone who caught my ToorCon talk will have already heard me discuss this issue.


Java Web Start has a provision for resources: signed JAR files that contain either Java classes or native libraries and that can be cached for use by one or more applications. JARs containing Java classes are extracted as per the usual Java caching mechanism (i.e. written to disk using randomly generated names), whereas native libraries are extracted with their original filenames. Interestingly, filenames can include parent path sequences (e.g. ..\..\..\..\test.txt). This means that "nativelibs" can be written outside the cache folder. But that's OK, because nativelib resources need to be signed and therefore explicitly trusted by the user, right?


Not exactly. Take a look at the following code snippet, which resembles the vulnerable Java Web Start code, and see if you can spot the bypass (it's not exactly obvious):


try
{
    // Open the JAR file specifying true to indicate
    // we want to verify the JarFile is signed.
    JarFile jf = new JarFile(UserSuppliedFile, true);

    Enumeration e = jf.entries();
    while (e.hasMoreElements())
    {
        ZipEntry ze = (ZipEntry) e.nextElement();
        InputStream i = jf.getInputStream(ze);
        byte b[] = new byte[i.available()];
        i.read(b);

        // Call our method to write the bytes
        // to disk
        WriteFileToDisk(ze.getName(), b);
    }
}
catch (SecurityException se)
{
    // Some sort of signature verification error
    System.out.println("Security Error: " + se.toString());
}
catch (IOException ioe)
{
    System.out.println("File Error: " + ioe.toString());
}


If you spotted the problem, well done! If not, here's a hint courtesy of an IBM article on signed JARs:


Each signer of a JAR is represented by a signature file with the extension .SF within the META-INF directory of the JAR file. The format of the file is similar to the manifest file -- a set of RFC-822 headers. As shown below, it consists of a main section, which includes information supplied by the signer but not specific to any particular JAR file entry, followed by a list of individual entries which also must be present in the manifest file. To validate a file from a signed JAR, a digest value in the signature file is compared against a digest calculated against the corresponding entry in the JAR file.


What if a file doesn't have a corresponding manifest entry? It turns out the above code will happily call WriteFileToDisk anyway and there'll be no exception thrown. We can use this bypass to append a file to a signed resource and have it drop a java.policy file in the user's home directory allowing applets and Web Start applications to do bad things.
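For contrast, here is a minimal sketch (not the actual Java Web Start fix) of the two checks the vulnerable snippet above omits: reading each entry to completion so that signature verification actually runs, then refusing entries that have no code signers or whose names escape the destination directory:

import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.InputStream;
import java.security.CodeSigner;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class SafeExtract {
    static void extract(File jar, File destDir) throws Exception {
        JarFile jf = new JarFile(jar, true);
        Enumeration<JarEntry> e = jf.entries();
        while (e.hasMoreElements()) {
            JarEntry entry = e.nextElement();
            if (entry.isDirectory() || entry.getName().startsWith("META-INF/")) {
                continue;
            }

            // Consume the stream completely; verification happens as the entry is read,
            // and getCodeSigners() is only meaningful afterwards.
            byte[] data = readFully(jf.getInputStream(entry));

            CodeSigner[] signers = entry.getCodeSigners();
            if (signers == null || signers.length == 0) {
                throw new SecurityException("Unsigned entry: " + entry.getName());
            }

            // Reject parent path sequences (..\..\) smuggled into the entry name.
            File out = new File(destDir, entry.getName());
            if (!out.getCanonicalPath().startsWith(destDir.getCanonicalPath() + File.separator)) {
                throw new SecurityException("Path traversal in entry name: " + entry.getName());
            }
            // ... write data to out ...
        }
    }

    private static byte[] readFully(InputStream in) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        return buf.toByteArray();
    }
}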

Let's take a look at how the Jarsigner tool that ships with the JDK validates signed JARs. Jarsigner correctly detects JARs containing both signed and unsigned content:



The code snippet below shows the enumeration of JAR entries; it's taken from sun.security.tools.JarSigner:


Enumeration e = entriesVec.elements();

long now = System.currentTimeMillis();

while (e.hasMoreElements()) {
    JarEntry je = (JarEntry) e.nextElement();
    String name = je.getName();
    CodeSigner[] signers = je.getCodeSigners();
    boolean isSigned = (signers != null);
    anySigned |= isSigned;
    hasUnsignedEntry |= !je.isDirectory() && !isSigned
                        && !signatureRelated(name);


The code retrieves the entry's CodeSigners; if there are none the entry is deemed unsigned.


As an aside, it's actually possible to fool Jarsigner. Take a look at the signatureRelated method, which is called above:


/**
 * signature-related files include:
 * . META-INF/MANIFEST.MF
 * . META-INF/SIG-*
 * . META-INF/*.SF
 * . META-INF/*.DSA
 * . META-INF/*.RSA
 */
private boolean signatureRelated(String name) {
    String ucName = name.toUpperCase();

    if (ucName.equals(JarFile.MANIFEST_NAME) ||
        ucName.equals(META_INF) ||
        (ucName.startsWith(SIG_PREFIX) &&
         ucName.indexOf("/") == ucName.lastIndexOf("/"))) {
        return true;
    }


Jarsigner ignores unsigned files that start with the prefix "META-INF/SIG-":



Anyway, back to the Web Start issue. Soon after discovering this bug I realised it was effectively moot, for I hadn't seen any security dialogs even when working with fully signed JARs. It turned out there were none. Ever. You could even use a self-signed JAR! Still, it's a great example of a managed language providing a simple interface (JarFile) that masks a complex implementation; if the contract between the caller and the callee is not clearly defined (however simple the interface), developers can write insecure code without knowing it.


So that's it for now. There's also some interesting behaviour when loading applets containing signed and unsigned content but I'll save that for another day.



Cheers

John

p.s. In case you were wondering, JAR hell is Java's form of DLL hell.

I Just Missed The Heathkit Generation [The Converging Network]

Posted: 24 Jun 2008 12:01 AM CDT

This weekend I rebuilt two of my computers. One had a power supply problem, and the other is my Windows Server 2008 system I use for testing software and trying stuff out. I tinkered for hours rebuilding those two systems. Part of it was verifying the problem was actually the power supply, and the rest was just me rerouting this cable or that, moving drives around or just plain goofing off. I've always been a tinkerer like that. I used to drive my dad completely bonkers because whenever he got some new tool, gadget or electronics doodad, I'd take it apart to see how it worked and then put it back together, er, most of the time I'd put it back together. It was just cool to see how things worked.

Though I don't generally build PCs for work, I do in the lab, often trying out some new processor, graphics card, or what have you. It's just something I like to do. I kind of caught the PC building bug when I started building PCs with my son, Phill, who now does PC work and support for his vocation. But I also realize those "building" tendencies go back even further.

When I was a kid, I had chemistry sets, microscopes, and breadboard electronics kits, the kind where you could wire up a basic radio by connecting wires to the spring junction pegs. In high school I got into hi-fi stereo systems. I really studied up on all the different manufacturers and models, frequently being able to spout out the wow-and-flutter of this tape deck, the wattage of this amp, or the signal-to-noise ratio of some other gadget. A lot of entirely useless factoids that most people had no idea what I was talking about.

One thing I missed out on were Heathkits. Heathkits were those electronics kits for building stereos, AM/FM radios, ham radios, and lots of transistor-based electronics test gear gadgets. As I remember, they didn't have chips but only used transistors... the analog version of electronics, the stuff you used a soldering iron to put together. Maybe they had computer chips later on, I don't know, but they stopped making kits in 1991. I was a bit young for Heathkits and then skipped from stereos right to my first computer, the Apple II Plus. The Apple II was my Heathkit, like Star Trek TNG is for generations after the classic Star Trek.

Like some people wish they had learned the guitar when they were young, for some reason I wish I had put together at least one Heathkit. Sometimes it's okay to have something like that which you never got to do. If I'd made a Heathkit, then there would probably be something else I wished I'd had a chance to put together.

New Fortinet Patents May Spell Nasty Trouble For UTM Vendors, Virtualization Vendors, App. Delivery Vendors, Routing/Switching Vendors... [Rational Survivability]

Posted: 23 Jun 2008 09:54 PM CDT

Check out the update below...

Were I in the UTM business, I'd be engaging the reality distortion field and speed-dialing my patent attorneys at this point.

Fortinet has recently had some very interesting patents granted by the PTO.

Integrated network and application security, together with virtualization technologies, offer a powerful and synergistic approach for defending against an increasingly dangerous cyber-criminal environment. In combination with its extensive patent-pending applications and patents already granted, Fortinet's newest patents address critical technologies that enable comprehensive network protection:

  • U.S. Patent #7,333,430 - Systems and Methods for Passing Network Traffic Data - directed to efficiently processing network traffic data to facilitate policy enforcement, including content scanning, source/destination verification, virus scanning, content detection and intrusion detection;

  • U.S. Patent #7,340,535 - System and Method for Controlling Routing in a Virtual Router System - directed to controlling the routing of network data, and providing efficient configuration of routing functionality and optimized use of available resources by applying functions to data packets in a virtual environment;

  • U.S. Patent #7,376,125 - Service Processing Switch - directed to providing IP services and IP packet processing in a virtual router-based system using IP flow caches, virtual routing engines, virtual services engines and advanced security engines;

  • U.S. Patent # 7,389,358 - Distributed Virtual System to Support Managed, Network-based Services - directed to a virtual routing system, which includes processing elements to manage and optimize IP traffic, useful for service provider switching functions at Internet point-of-presence (POP) locations.

These patents could have some potentially profound impact on vendors who offer "integrated security" by allowing for virtualized application of network security policy.  These patents could easily be enforced outside of the typically-defined UTM offerings, also.

I'm quite certain Cisco and Juniper are taking note as should be anyone in the business of offering virtualized routing/switching combined with security -- that's certainly a broad swath, eh?

On a wider note, I've actually been quite impressed with the IP portfolio that Fortinet has been assembling over the last couple of years.  If you've been paying attention, you will notice (for example) that they have scooped up much of the remaining CoSine IP as well as recently acquired IPlocks' database security portfolio.

If I were they, the next thing I'd look for (and would have a while ago) is to scoop up a Web Application Firewall/Proxy vendor...

I trust you can figure out why...why not hazard a guess in the comments?

/Hoff

Updated:  It occurred to me that this may be much more far-reaching than just UTM vendors: this could affect folks like Crossbeam, Check Point, StillSecure, Cisco, Juniper, Secure Computing, F5... basically anyone who sells a product that mixes the application of security policy with virtualized routing/switching capabilities...

How about those ASAs or FWSMs?  How about those load balancers with VIPs?

Come to think of it, what of VMware?  How about the fact that in combining virtual networking with VMsafe, you've basically got what amounts to coverage by the first two patents:

U.S. Patent #7,333,430 - Systems and Methods for Passing Network Traffic Data - directed to efficiently processing network traffic data to facilitate policy enforcement, including content scanning, source/destination verification, virus scanning, content detection and intrusion detection;

U.S. Patent #7,340,535 - System and Method for Controlling Routing in a Virtual Router System - directed to controlling the routing of network data, and providing efficient configuration of routing functionality and optimized use of available resources by applying functions to data packets in a virtual environment;

Whoopsie.

Now, I'm not a lawyer, I just play one on teh Interwebs.



links for 2008-06-24 [Raffy's Computer Security Blog]

Posted: 23 Jun 2008 09:33 PM CDT

Week of War on WAF’s: Day 1 — Top ten reasons to wait on WAF’s [tssci security]

Posted: 23 Jun 2008 09:13 PM CDT

Hello, and welcome to the Week of War on WAF’s, the week that ends with the PCI-DSS Requirement 6.6 deadline going into effect for many merchants. Today is the first day. So far, Marcin has identified some of the problems with web application firewalls. We were able to identify what we would like to see in WAF’s, both commercial and open-source, in the future (since they do not work properly today). In this post, I want to start off the week by listing the top ten reasons to wait on WAF’s.

Top ten reasons to wait on WAF’s

  1. WAF vendors won’t tell us what they don’t block
  2. Requires a web application security expert with code knowledge, HTTP knowledge, and a lot of time / careful planning to configure
  3. Gartner leaves WAF’s off their Magic Quadrant for web application security on purpose
  4. No truth in advertising leads to a false sense of security
  5. Vendors show signs of desperation, claiming imperatives and illegality in addition to just the standard FUD
  6. Attacks that are claimed to be blocked are coincidentally also found in WAF solutions themselves (e.g. XSS in the security reports or web configuration panels)
  7. Every organization that has installed a blocking WAF has also been in the media for known, active XSS and/or SQL injection
  8. Second-order (i.e. non-HTTP or unprotected path) injections cannot be blocked or even detected
  9. Real-world web application attacks are more often strings of attacks, or at the business logic layer — WAF’s cannot detect or prevent these kinds of attacks
  10. PCI-DSS Requirement 6.6 can be met with compensating controls, web application security scanners, automated security review tools, and/or manual review of the pen-test or code varieties

We understand and realize that the ideas of a blocking WAF are very popular right now. There are many supporters behind the WAF and VA+WAF movements. While we’d also like to support what the rest of the community sees as the future — we also want to make sure that it is the right thing to do.

One of the best ways to move forward with any given technology is to look at its faults. We learn best in IT when things fall apart — when they break. TS/SCI Security has put a lot of thought, practice, and research into WAF technology. Marcin’s most recent post demonstrates our list of requirements (e.g. block outbound) and nice-to-have’s (e.g. good documentation). Some vendors might already have this sort of outbound blocking functionality, and we’re not even aware of it! Other vendors could have clearly defined “VA+WAF blocking” documentation, which could even be internal engineering or strategy documents that should be out in the open (or at least available to paying customers).

Also — if we do end up demonstrating that WAF, VA+WAF, APIDS, ADMP, or other solution is less viable than a new, better idea — let’s move this research into the forefront.

Maltego Goes Communal [Room362.com]

Posted: 23 Jun 2008 02:11 PM CDT

Now that everyone and their mother has posted about Back|Track Final being released, I feel that I am safe in disclosing that information. But on to the topic: with said release, the folks over at Paterva have released a “Community” edition of Maltego. Straight from the horse’s mouth, here are the limitations:

Limitations

The Community Edition is limited in the following ways:

  • A 15-second nag screen
  • Save and Export has been disabled
  • Limited zoom levels
  • Can only run transforms on a single entity at a time
  • Cannot copy and paste text from detailed view
  • Transforms limited to 75 per day
  • Throttled client to TAS communication

Also, directly on the heels of this release are community forums! They haven’t quite been linked to from the main site, but I HAVE AUTHORIZATION THIS TIME!... not going to make the same mistake twice. Anyways, go check them out.

VirtSec Not A Market!? Fugghetaboutit! [Rational Survivability]

Posted: 23 Jun 2008 12:49 PM CDT

Thanks to Alan Shimel and his pre-Blackhat Security Bloggers Network commentary, a bunch of interesting folks are commenting on the topic of virtualization security (VirtSec), which is the focus of my preso at Blackhat this year.

Mike Rothman did his part this morning by writing up a thought-provoking piece opining on the lack of a near-term market for VirtSec solutions:

So I'm not going to talk about technical stuff. Yet, I do feel compelled to draw the conclusion that despite the dangers, it doesn't matter. All the folks that are trying to make VirtSec into a market are basically just pushing on a rope.

That's right. No matter how hard you push (or how many blog postings you write), you are not going to make VirtSec into a market for at least 2 years. And that is being pretty optimistic. So for all those VCs that are thinking they've jumped onto the next big security opportunity, I hope your partnership will allow you to be patient.

Again, it's not because the risks of virtualization aren't real. If guys like Hoff and Thomas say they are, then I tend to believe them. But Mr. Market doesn't care what smart guys say. Mr. Market cares about budget cycles and priorities and political affiliations, and none of these lead me to believe that VirtSec revenues are going to accelerate anytime soon.

Firstly, almost all markets take a couple of years to fully develop and mature and VirtSec is no different.  Nobody said that VirtSec will violate the laws of physics, but it's also a very hot topic and consumers/adopters are recognizing that security is a piece of the puzzle that is missing.

In many cases this is because virtualization platform providers have simply marketed virtualization as being "as secure" or "more secure" than their physical counterparts.  This, combined with the rapid adoption of virtualization, has caused a knee-jerk reaction.

By the way, this is completely par for the course in our industry.  If you act surprised, you deserve an Emmy ;)

Secondly, and most importantly to me, Mike did me a bit of a disservice by intimating that my pushing the issues regarding VirtSec is focused solely on the technical.  Sadly, that's far off base from my "fair and balanced" perspective on the matter, because along with the technical issues, I constantly drum home the following:

"Nobody Puts Baby In the Corner"

Painting only one of the legs of the stool as my sole argument isn't accurate and doesn't portray what I have been talking about for some time -- and agree with Mike about -- that these challenges are more than one-dimensional.

The reality is that Mike is right -- the budget, priority and politics will bracket VirtSec's adoption, but only if you think of VirtSec as a technical problem. 

/Hoff

Web application firewalls: A slight change of heart [tssci security]

Posted: 23 Jun 2008 10:35 AM CDT

We’ve been beating the drum for some time now, expressing our opinions of web application firewalls (WAFs). You might have sided with us on this issue, be against us, or just be tired of it all by now. This post is about to change all that, and show that we are not 100% anti-WAF and that there are some useful applications for them.

Why WAFs do not work

In a post on why most WAFs do not block, Jeremiah Grossman quoted Dan Geer:

When you know nothing, permit-all is the only option. When you know something, default-permit is what you can and should do. When you know everything, default-deny becomes possible, and only then.

Jeremiah then stated that to implement a default-deny WAF (which would offer the most security, but carries with it the greatest business impact), you need to know everything about your app, at all times — even when it changes. How appealing is this, given the amount of resources you currently have? Who will be responsible for maintaining the WAF? These are all questions you should be asking yourself. Jeremiah then goes on to say that default-permit is necessary in web applications — going against everything we’ve learned in security over the past 40 years. Wait… what??

Some context that can attest to our reasoning

Over the last several weeks, I’ve been evaluating several web application firewalls. They all have their own list of cons and more cons. On my first day, having sat down at their consoles, I was a little overwhelmed by all the options present — application profiles, signatures, policies, etc. It all came to me as I worked through it and read the manuals, though frankly, I don’t see how anyone without a web application security background can keep up with it all. I fear these devices will be deployed and forgotten, relying solely on their ability to learn and self-adjust.

Let’s talk about the consoles used to monitor and maintain the WAF. One vendor had a fat app, which was a bit clunky, non-intuitive and had multiple usability issues. The number one issue that comes to mind is on the monitoring panel — to watch alerts in real-time, you need to set an automatic refresh rate which updates the entire display, which makes it impossible to analyze HTTP requests/responses during this time. If you’re scrolled down to a certain location of a request, and the console refreshes, you lose your position and are brought back up to the top. I don’t understand why the entire screen had to be updated, rather than a particular frame.

Another vendor used a webapp to manage itself, which was in my opinion much nicer and easier to use, albeit slower. When on the alert monitoring page, you had to manually click a button to refresh the alerts, and viewing requests/responses was a major pain. The application utilized AJAX on pages that could do without, but in areas that could benefit from them, they resorted to old web tactics.

In the course of my testing, I started by taking RSnake’s XSS cheatsheet and creating Selenium test cases for attacking our own vulnerable web application (see our talk, Path X, from ShmooCon). For those unfamiliar with Selenium, it’s a browser driver that performs functional testing, though we have shown how it can be used for security testing. We didn’t use WebGoat (or any other vulnerable apps), reasoning that the vendors must have tested against those and know them inside out for just such occasions. Renaud Bidou had an excellent presentation on How to test an IPS [PPT] from CanSecWest ‘06, which I believe can be applied to testing WAFs for those interested. Suffice it to say, the WAFs did not detect ALL of the XSS from the cheatsheet that was thrown at them, which is pretty sad. I would have expected they at least get that right.
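Roughly, each test case drives a payload from the cheatsheet into a form field and then checks whether it executed. Here is a stripped-down illustration of the idea using the Selenium RC Java client; the app URL, field locators, and canary payload are made up, and our real test cases are generated from the full cheatsheet:

import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

public class XssSmokeTest {
    public static void main(String[] args) {
        // Hypothetical application and form under test.
        Selenium selenium = new DefaultSelenium("localhost", 4444, "*firefox",
                "http://testapp.example.com/");
        String payload = "<script>document.title='xss-canary'</script>";

        selenium.start();
        try {
            selenium.open("/search");
            selenium.type("name=q", payload);
            selenium.click("name=submit");
            selenium.waitForPageToLoad("30000");

            // If the reflected payload executed, the page title carries the canary value.
            if ("xss-canary".equals(selenium.getTitle())) {
                System.out.println("Payload executed -- the WAF did not block this XSS");
            } else {
                System.out.println("Payload did not execute (blocked, filtered, or encoded)");
            }
        } finally {
            selenium.stop();
        }
    }
}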

That brings us to second-order, persistent XSS and SQL injection attacks. When a web application strings together data from multiple sources, detection of such attacks can be very hard. The WAF cannot account for this logic, thus allowing an attacker to effectively bypass the WAF by staging his XSS/submitting multiple payloads to various sources. When the application then pieces the data together, an XSS (SQL injection, etc) condition exists. The problem with this? Your WAF never detected it, and you have no idea your site’s been attacked and is now hosting malicious scripts.

There are just some attacks a WAF will never detect. HTML / CSS injection through HTML / CSS is just one example. Go on over to http://google.com/search?q=cache%3Atssci-security.com — can you describe what is going on here?

Or how about CSRF? Insecure session management? What can a WAF do to protect against business logic flaws? We can go on and on, and yet vendors still claim protection against the OWASP Top 10, a claim which, if you believe it, shows you know nothing about web application security.

How WAFs can help

So I lied, we haven’t changed our minds about WAFs. But wait! I’ll let you know what would change our minds at least a little, which would show that WAFs can have their purposes. Without this though, I can’t recommend any organization spend the money on such devices — especially if they need to meet compliance requirements where other options do exist.

The value of WAF Egress, revisited

What should a WAF do? Block attacks on the egress / outbound, while staying out of the inbound flow of traffic. I’m not talking about signature based blocking either. This is the tough part, because it’s almost impossible. One way I see it working though, is if the application keeps the content (HTML), presentation (CSS), and behavior (JavaScript) separated. The application should not serve any inline scripts, but instead serve script files that would alter the content on the client side. This would make, e.g. outbound XSS prevention possible because a WAF could then detect inline scripts in the content. None of the WAFs I evaluated could detect a client being exploited by a persistent XSS condition. This would also tell me how many users were affected by the XSS attack, which we haven’t seen any numbers on apart from the number of friends Samy had when he dropped his pants and took a dump all over our industry.

Jeremiah and I got a picture with him wearing "Samy is my hero" shirts. I haven't laughed that hard in a long time! But to quote a sanitized version of what one guy said, "Samy knew nothing about webappsec and one day he walked in, dropped his pants and took a huge dump on our industry and then left again. And we just looked around at one another and said, 'What just happened?'"
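To make the egress idea above a bit more concrete, here is a minimal sketch of the kind of outbound check a WAF (or an application response filter) could apply. It assumes the application really does keep all behavior in external .js files, so any inline script body in a response is a strong signal of injected content; event-handler attributes and other vectors are ignored for brevity:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class EgressScriptCheck {
    // Matches <script ...>...</script> blocks, capturing the attributes and the body.
    private static final Pattern SCRIPT = Pattern.compile(
            "<script([^>]*)>(.*?)</script>",
            Pattern.CASE_INSENSITIVE | Pattern.DOTALL);

    // Returns true if the outbound HTML contains an inline script body,
    // i.e. a script element with no src attribute but with content.
    static boolean hasInlineScript(String html) {
        Matcher m = SCRIPT.matcher(html);
        while (m.find()) {
            String attributes = m.group(1).toLowerCase();
            String body = m.group(2).trim();
            if (!attributes.contains("src=") && !body.isEmpty()) {
                return true;
            }
        }
        return false;
    }
}

A device sitting on the egress could run a check like this against responses for pages known to carry user-supplied content, and alert (or block) when an inline script shows up where none should exist.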

Another way to get this right is to apply the work done by Matias Madou, Edward Lee, Jacob West and Brian Chess of Fortify in a paper titled: Watch What You Write: Preventing Cross-Site Scripting by Observing Program Output [PDF]. They go on to talk about capturing normal behavior of an application during functional testing, and then attacking the application as if in a hostile environment, where it is then monitored to ensure it does not deviate from normal behavior. Basically, it’s all about monitoring your application output in areas that are known to be dynamic.

In-depth, the Fortify work uses dynamic taint propagation, by which “taint propagation” or “taint tracking” is similarly done with static analysis in order to trace misused input data from source to sink. This is also a corollary to the work that Fortify has presented on before with regards to Countering the faults of web scanners through bytecode injection [PDF]. While web application security scanners only demonstrate 20-29 percent of the overall security picture because of surface and code coverage for the inputs of the application under test, dynamic taint tracking goes a long way to providing more coverage for these kinds of tests because it’s done as white-box dynamic analysis instead of functional black-box runtime testing.

The value of XHTML

My fellow blogger, Andre Gironda, helped out with the praise section for the book, “Refactoring HTML: Improving the Design of Existing Web Applications”, by Elliotte Rusty Harold. It’s hard to disagree with the notion that XHTML can help with both quality and security issues, as well as make applications and content easier to refactor and work with.

When you’re recoding thousands or millions of lines of code, wouldn’t well-formedness and validity be the primary requirements for working with such large volumes of code? If anything, well-formedness and content validity make the chores much easier to deal with. Rusty has this to say in his book:

[…] there are two things [authors for the Web] are very likely to write: JavaScript and stylesheets. By number, these are by far the most common kinds of programs that read web pages. Every JavaScript program embedded in a web page itself reads the web page. Every CSS stylesheet (though perhaps not a program in the traditional sense of the word) also reads the web page. JavaScript and CSS are much easier to write and debug when the pages they operate on are XHTML rather than HTML. In fact, the extra cost of making a page valid XHTML is more than paid back by the time you save debugging your JavaScript and CSS.

Since web application firewalls today cannot convert HTML on the outbound to XHTML, this is certainly a job for the content writers (sometimes, but often not, the developers) to deal with. In the Refactoring HTML book, Rusty also talks about the tools necessary to develop content on the web:

Many HTML editors have built-in support for validating pages. For example, in BBEdit you can just go to the Markup menu and select Check/Document Syntax to validate the page you’re editing. In Dreamweaver, you can use the context menu that offers a Validate Current Document item. (Just make sure the validator settings indicate XHTML rather than HTML.) In essence, these tools just run the document through a parser such as xmllint to see whether it’s error-free.

If you’re using Firefox, you should install Chris Pederick’s Web Developer — https://addons.mozilla.org/en-US/firefox/addon/60 — plug-in. Once you’ve done that, you can validate any page by going to Tools/Web Developer/Tools/Validate HTML. This loads the current page in the W3C validator. The plug-in also provides a lot of other useful options in Firefox.

Whatever tool or technique you use to find the markup mistakes, validating is the first step to refactoring into XHTML. Once you see what the problems are, you’re halfway to fixing them.

Speaking of properly validated and easy to read/use content, what irked me throughout my evaluation most was documentation. Vendors: do not bundle a ton of HTML files together and call it a manual. If you’re looking to do that, please use DocBook if you’re not going to make a PDF available. Better yet, give us a hard copy.
