Spliced feed for Security Bloggers Network |
Why don't AV vendors make it easy? [StillSecure, After All These Years] Posted: 25 Jun 2008 06:38 AM CDT One of the newer, but very well known, members of the 155+ blogs of the Security Bloggers Network is the Errata Security blog from Dave Maynor, Rob Graham and Marisa Fagan. Dave has a post up today about his frustrations with trying to remove McAfee AV from his new mobile phone. I share his frustration. Having run Windows Mobile for over a year now, changing ROMs in addition to installing and deleting a multitude of applications, I am often frustrated by the lack of visibility you have into the files and system on Windows Mobile. If an application does not remove itself cleanly, you are hosed. A far larger frustration for me, though, is removing AV vendors' security software from any computer, mobile or otherwise. It is not just a McAfee thing either. Symantec, CA and Microsoft are just impossible to remove without major pain. What is the reason? Do they make it hard because they think people might remove them by mistake? I don't think so. Like Dave says, when does AV become a virus itself? |
Week of War on WAF’s: Day 2 — A look at the past [tssci security] Posted: 25 Jun 2008 02:07 AM CDT Web application experts have been asking WAF vendors the same questions for years with no resolution. It’s not about religion for many security professionals — it’s about having a product that works as advertised. My frustration is not unique. I am not the first person to clamor on about web application firewalls. Jeff Williams pointed me to a post that Mark Curphey made in 2004. Today, Curphey appears to have had a change of heart — his latest blog post provides a link to URLScan, which some claim is like mod_security for Microsoft’s Internet Information Server (IIS). Microsoft released URLScan Beta 3.0 in order to curtail the massive problem of over two million Classic ASP web applications that have become infected by the SQL injection attacks. Here is the post where the frustration with WAFs and their vendors first began:
|
Posted: 24 Jun 2008 11:27 PM CDT Barracuda continues their poker game with Sourcefire today, raising their $7.50 all-cash bid to $8.25. Are Dean and company just bluffing for publicity, or are they willing to keep playing and stay in this game until all the cards are on the table? I don't know for sure, but I find it interesting that Barracuda did say to Sourcefire that they would be willing to explore ways that would show Sourcefire's increased value to Barracuda and, based upon that, increase their offer. Of course $8.25 is still too low, but it is getting closer. If the offer gets near 10 bucks, Sourcefire has some serious decisions to make. In the meantime, Barracuda will again reap the PR bounty from having a seat at the hottest poker game in security. |
CharmSec Infosec Meetup Event - Thursday, 6-26: Normal Meeting [NovaInfosecPortal.com] Posted: 24 Jun 2008 11:20 PM CDT |
CapSecDC Infosec Meetup Event - Wednesday, 6/25: Normal Meeting [NovaInfosecPortal.com] Posted: 24 Jun 2008 10:55 PM CDT |
Burning Both Ends [BumpInTheWire.com] Posted: 24 Jun 2008 10:52 PM CDT This has been quite a week so far. Not only is it the week of the BBQ cookoff I’m in, but work has been a real butt kicker. Between two Exchange outages and an apparent over-saturation of a storage area network causing multiple cluster problems, there has been plenty to do. The XenApp session reliability issue has been resolved. I was lucky enough to have the product manager for XenApp Platinum Edition, Jill Alexander, leave a comment. The resolution was to apply Hotfix Rollup Pack PSE450W2K3R02. As I mentioned earlier, it’s BBQ week. If anyone is going to be in Lenexa, KS at the Great Lenexa BBQ Battle, stop by pit 226 and have a beer with Mr. Bump. This isn’t grilling a few brats in your backyard. This is a two-day gut check. We will check in at 7:00 AM Friday morning, finish getting set up as soon as possible to have the smoker fired up by about noon to start smoking ribs for a 5:30 dinner. Then we party the rest of the night and start cooking the competition meat at about midnight or so. It’s an all-night affair and is going to be a real test of stamina. Mind over matter. If you don’t mind, it doesn’t matter. |
Microsoft: Rise in SQL Injection Attacks [Infosecurity.US] Posted: 24 Jun 2008 07:37 PM CDT Microsoft (NASDAQ: MSFT) has released a new Security Advisory specifically related to a reported rise in SQL injection attacks exploiting unverified user input. Not surprising. Hewlett-Packard has developed a tool, Scrawlr, to assist system administrators, database administrators and others tasked with database, application and OS security in discovering vulnerable code. For further information [...] |
Microsoft Malicious Software Removal Tool Discovers Over 2 Million Infected PCs [Infosecurity.US] Posted: 24 Jun 2008 07:36 PM CDT |
Opinion: When Infosecurity Goes Too Far [Infosecurity.US] Posted: 24 Jun 2008 07:35 PM CDT John Espenschied at Computerworld reports on when information security may go too far, or at least fails to display common sense… several anecdotes that are typical of environments that fail to provide proper, much less prudent, security policy, predicated both on best practice and business requirements. |
Microsoft announces Black Box, White Box, and WAF [Jeremiah Grossman] Posted: 24 Jun 2008 04:19 PM CDT Apparently the mass SQL injection attacks have really woken people up, and they're probably flooding the MS blogs and inboxes with pleas for assistance. No doubt a lot of them use Twitter. :) Site owners are desperate to protect their old legacy ASP classic code. To help the situation, Microsoft has just announced 3 free new toys specifically targeted at SQLi. 1) The Microsoft Source Code Analyzer for SQL Injection (MSCASI) is a static code analysis tool that identifies SQL injection vulnerabilities in ASP code. In order to run MSCASI you will need source code access, and MSCASI will output the areas vulnerable to SQL injection (i.e. the root cause and vulnerable path are identified). Cool. If anyone wants to provide feedback on effectiveness, I'd really like to know! 2) Microsoft worked with the HP Web Security Research group to release the Scrawlr tool. The tool will crawl a website, simultaneously analyzing the parameters of each individual web page for SQL injection vulnerabilities. This is nice of HP to offer, but the product limitations seem somewhat onerous to me...
* Will only crawl up to 1500 pages
* Does not support sites requiring authentication
* Does not perform blind SQL injection
* Cannot retrieve database contents
* Does not support JavaScript or Flash parsing
* Will not test forms for SQL injection (POST parameters)
Hmm, if MSCASI and Scrawlr are used at the same time, can we call this Hybrid Analysis? :) 3) In order to block and mitigate SQL injection attacks (while the root cause is being fixed), you can also deploy SQL filters using a new release of URLScan 3.0. This tool restricts the types of HTTP requests that Internet Information Services (IIS) will process. By blocking specific HTTP requests, URLScan helps prevent potentially harmful requests from being executed on the server. It uses a set of keywords to block certain requests.
If a bad request is detected, the filter will drop the request and it will not be processed by SQL. It's IIS's equivalent to ModSecurity on Apache. Cool stuff; I first used it a LOONG time ago and no doubt solid improvements have been made. From the description it appears to still be using a blacklist, negative-security-model approach to protection. How about that!? :) Looks like the only thing they left out is some kind of DB or system clean-up for those who have already suffered an incident. I'm hearing that the hacked count is up to 2 million sites now. Ouch. |
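URLScan's keyword filtering blocks symptoms while the code gets fixed; the root-cause fix behind all three tools is parameterized queries. Here is a minimal sketch in Python, with sqlite3 standing in for the real database and a made-up `users` table for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

def find_user_unsafe(name):
    # Vulnerable: attacker input is spliced directly into the SQL text.
    return conn.execute(
        "SELECT id FROM users WHERE name = '%s'" % name).fetchall()

def find_user_safe(name):
    # Parameterized: the driver passes the value separately from the
    # statement, so quotes in `name` are treated as data, not SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # [(1,)] -- injection matches every row
print(find_user_safe(payload))    # []     -- payload matches no name
```

The same `?` placeholder idea is what parameterized ADO commands give Classic ASP code; a keyword filter in front of IIS is only a stopgap until the queries are rewritten this way.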
Network Security Is Not Dead [Carnal0wnage Blog] Posted: 24 Jun 2008 12:57 PM CDT There have been a few comments out on the blogosphere about NETSEC being dead. NETSEC is not dead; it's not going to be dead for a LONG time, if ever. If something is dead, I can unplug it, remove it from the rack, and never think about it again. To me, NETSEC is (short list) router ACLs, firewall rules, VLANs, IPSEC, & domain policy. I know that's not everything, but it should be enough to illustrate my point. We could also argue domain policy, but I think that it's a valuable and necessary piece of security in any MS network. Now I agree that NETSEC as a primary defense and entry point is dead (there probably won't be another DCOM), I agree that client-side attacks completely bypass firewall rules (initially--the exploitation piece anyway; the shell is another matter), I agree that the endpoint is now the new border, and I agree that application hacking (webapp, user, browser, etc.) is where security is heading. What I don't agree with is that I don't need my firewall rules and router ACLs anymore. Some examples...
- Without NETSEC, do we still have DMZs?
- With no DMZs and no way to control who can talk to whom on your network with either FW rules or router ACLs, what is going to stop the attacker once they exploit that web app and either get a shell or credentials to log in with?
- How do I stop the attacker once he has that shell with client-side privileges? Do I just let them have free rein?
- How do I stop that outbound connection, which a lot of times can be caught with the right type of proxies (Blue Coat and similar "appliances")? Is my layer-7 FW going to catch that?
All of these people that say network hacking is dead obviously don't have to do anything else in their pentests other than exploit web applications.
Unless you got really friggin lucky and that web application housed the data you were looking for, you are back to the old-school network game of moving around the network, setting up shop on hosts in the LAN, and doing privilege escalation. With no rules or devices in place, what is going to stop the attacker from exfiltrating that data without being seen? Where are your logs if you do catch them, with no NETSEC devices? Thoughts? I'm wrong a lot, so if I'm wrong, do let me know. |
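The argument above is easy to test in miniature. Below is a toy Python sketch of the kind of default-deny egress ACL the post is defending; the networks, ports and rule set are invented for illustration, and real enforcement of course lives in the router or firewall, not in application code:

```python
from ipaddress import ip_address, ip_network

# Hypothetical egress policy: hosts may reach the outbound proxy and the
# DMZ web tier; the final rule is a default deny that catches everything
# else, including a compromised host phoning home.
RULES = [
    ("allow", ip_network("10.0.0.0/24"), 8080),    # via the proxy only
    ("allow", ip_network("192.168.5.0/24"), 443),  # DMZ web tier
    ("deny",  ip_network("0.0.0.0/0"), None),      # default deny (and log it)
]

def egress_allowed(dst_ip, dst_port):
    """First-match evaluation, the way router ACLs are processed."""
    dst = ip_address(dst_ip)
    for action, net, port in RULES:
        if dst in net and (port is None or port == dst_port):
            return action == "allow"
    return False

print(egress_allowed("10.0.0.7", 8080))     # True  -- permitted proxy path
print(egress_allowed("203.0.113.9", 4444))  # False -- reverse shell dropped
```

Even after the web app is popped, the shell's outbound connection still has to traverse rules like these, which is exactly the post's point about NETSEC outliving the exploitation phase.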
Twitter users angry about SQL injection hacks on their websites [Jeremiah Grossman] Posted: 24 Jun 2008 12:11 PM CDT The mass SQL injection attacks have impacted the lives of a lot of Twitter users out there. I did a search for "SQL Injection" and the results are page after page of misery, time wasted cleaning things up, and cursing up a storm. You can really feel their pain, and the worst is probably not yet over. Still gotta fix all that legacy code. Here are some of my favorite tweets… shartley: Cleaning yet another SQL injection attack. I'm F'n sick of cleaning up after lazy programming that took place during my year away. jamesTWIT: To the hacker who designed the SQL injection bot. I hope you die and not a fast death...something slow and painful. Like caught in a fire! chadmonahan: Dearest SQL Injection people, I don't like you. Yours, CM programwitch: F'n SQL injection hacks. Anirask: Damnit. Our main website is down cause of SQL Injection attacks. You figure devs would sanitize their inputs against this shit.. |
Network Security Projects Using Hacked Wireless Routers [Room362.com] Posted: 24 Jun 2008 10:28 AM CDT Just wanted to pimp Paul from PaulDotCom’s class coming up here shortly. Also, to register go to http://www.pauldotcom.com/sans and help their podcast out. |
Security Risk: Your Admin [Birchtree Blog] Posted: 24 Jun 2008 03:51 AM CDT Did you know every third IT admin with a master password has looked at data that he is not supposed to see? Like your paycheck, or the mail about the coming merger? Besides treating your admin nicely, there are not many things you can do:
* change the passwords more often than 'never' or 'every quarter'
* encrypt stuff
* pray
By the way: when IT admins were asked what they would take with them when they leave the company, they said:
* the customer database
* the list with the passwords
Surprise! IT-Admins schnüffeln Mitarbeitern hinterher ("IT admins snoop on employees") - Knowledge Center - Security - computerwoche.de |
A Different Form of JAR Hell [aut disce, aut discede] Posted: 24 Jun 2008 01:11 AM CDT In my last post I used a Java applet to steal password hashes. Part two, covering NTLMv2, is on its way. Today, however, I'm going to discuss SunSolve #233323 - a vulnerability that was fixed in the March updates to the JRE. Anyone who caught my ToorCon talk will have already heard me discuss this issue. Java Web Start has a provision for resources: signed JAR files that contain either Java classes or native libraries and can be cached for use by one or more applications. JARs containing Java classes are extracted as per the usual Java caching mechanism (i.e. written to disk using randomly generated names), whereas native libraries are extracted with their original filenames. Interestingly, filenames can include parent path sequences (e.g. ..\..\..\..\test.txt). This means that "nativelibs" can be written outside the cache folder. But that's OK, because nativelib resources need to be signed and therefore explicitly trusted by the user, right? Not exactly. Take a look at the following code snippet, which resembles the vulnerable Java Web Start code, and see if you can spot the bypass (it's not exactly obvious): try If you spotted the problem, well done! If not, here's a hint courtesy of an IBM article on signed JARs: Each signer of a JAR is represented by a signature file with the extension .SF within the META-INF directory of the JAR file. The format of the file is similar to the manifest file -- a set of RFC 822 headers. As shown below, it consists of a main section, which includes information supplied by the signer but not specific to any particular JAR file entry, followed by a list of individual entries which also must be present in the manifest file. To validate a file from a signed JAR, a digest value in the signature file is compared against a digest calculated against the corresponding entry in the JAR file. What if a file doesn't have a corresponding manifest entry?
It turns out the above code will happily call WriteFileToDisk anyway, and there'll be no exception thrown. We can use this bypass to append a file to a signed resource and have it drop a java.policy file in the user's home directory, allowing applets and Web Start applications to do bad things. Let's take a look at how the Jarsigner tool that ships with the JDK validates signed JARs. Jarsigner correctly detects JARs containing both signed and unsigned content. The code snippet below shows the enumeration of ZipEntrys; it's taken from sun.security.tools.JarSigner: Enumeration e = entriesVec.elements(); The code retrieves the entry's CodeSigners; if there are none, the entry is deemed unsigned. As an aside, it's actually possible to fool Jarsigner. Take a look at the signatureRelated method, which is called above: /** Jarsigner ignores unsigned files that start with the prefix "META-INF/SIG-". Anyway, back to the Web Start issue. Soon after discovering this bug I realised it was effectively moot, for I hadn't seen any security dialogs even when working with fully signed JARs. It turned out there were none. Ever. You could simply use a self-signed JAR! Still, it's a great example of a managed language providing a simple interface (JarFile) that masks a complex implementation; if the contract between the caller and the callee is not clearly defined (however simple the interface), developers can write insecure code without knowing it. So that's it for now. There's also some interesting behaviour when loading applets containing signed and unsigned content, but I'll save that for another day. Cheers John p.s. In case you were wondering, JAR hell is Java's form of DLL hell. |
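The manifest-entry gap is easy to demonstrate outside of Java. The following Python sketch (zipfile standing in for JarFile; all file names invented) flags archive entries that have no per-entry Name: section in MANIFEST.MF, which is the check the vulnerable loader skipped. It deliberately ignores manifest line wrapping and the .SF/digest cross-checks a real verifier performs:

```python
import io
import zipfile

def unsigned_entries(jar_bytes):
    """Names present in the archive but missing from the manifest's
    per-entry sections -- the files a naive loader would trust anyway."""
    with zipfile.ZipFile(io.BytesIO(jar_bytes)) as jar:
        manifest = jar.read("META-INF/MANIFEST.MF").decode("utf-8", "replace")
        listed = {line.split(":", 1)[1].strip()
                  for line in manifest.splitlines()
                  if line.startswith("Name:")}
        return [n for n in jar.namelist()
                if not n.startswith("META-INF/") and not n.endswith("/")
                and n not in listed]

# Build a toy "signed" JAR, then append an entry with no manifest record
# and a parent-path filename, mimicking the attack described above.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    jar.writestr("META-INF/MANIFEST.MF",
                 "Manifest-Version: 1.0\r\n\r\n"
                 "Name: lib.dll\r\nSHA1-Digest: x\r\n\r\n")
    jar.writestr("lib.dll", b"legit")
    jar.writestr("..\\..\\evil.dll", b"smuggled")  # no Name: section

print(unsigned_entries(buf.getvalue()))  # ['..\\..\\evil.dll']
```

The present-but-unlisted entry is exactly the appended file that the Web Start cache extraction wrote to disk unchecked.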
I Just Missed The Heathkit Generation [The Converging Network] Posted: 24 Jun 2008 12:01 AM CDT This weekend I rebuilt two of my computers. One has a power supply problem and the other is my Windows Server 2008 system I use for testing software and trying stuff out. I tinkered for hours rebuilding those two systems. Part of it was verifying the problem was actually the power supply, and the rest was just me rerouting this cable or that, moving drives around or just plain goofing off. I've always been a tinkerer like that. I used to drive my dad completely bonkers because whenever he got some new tool, gadget or electronics doodad, I'd take it apart to see how it worked and then put it back together, er, most of the time I'd put it back together. It was just cool to see how things worked. Though I don't generally build PCs for work, I do in the lab, often trying out some new processor, graphics card, or what have you. It's just something I like to do. I kind of caught the PC building bug when I started building PCs with my son, Phill, who now does PC work and support for his vocation. But I also realize those "building" tendencies go back even further. When I was a kid, I had chemistry sets, microscopes, and breadboard electronics kits, the kind where you could wire up a basic radio by connecting wires to the spring junction pegs. In high school I got into hi-fi stereo systems. I really studied up on all the different manufacturers and models, frequently being able to spout out the wow-and-flutter of this tape deck, the wattage of this amp, or the signal-to-noise ratio of some other gadget. A lot of entirely useless factoids that most people had no idea what I was talking about. One thing I missed out on were Heathkits. Heathkits were those electronics kits for building stereos, AM/FM radios, ham radios, and lots of transistor-based electronics test gear gadgets. As I remember, they didn't have chips but only used transistors...
the analog version of electronics, the stuff you used a soldering iron to put together. Maybe they had computer chips later on, I don't know, but they stopped making kits in 1991. I was a bit young for Heathkits and then skipped from stereos right to my first computer, the Apple II Plus. The Apple II was my Heathkit, like Star Trek TNG is for generations after the classic Star Trek. Like some people wish they had learned the guitar when they were young, for some reason I wish I had put together at least one Heathkit. Sometimes it's okay to have something like that that you never got to do. If I'd made a Heathkit, then there would probably be something else I wished I'd had a chance to put together. |
Posted: 23 Jun 2008 09:54 PM CDT Were I in the UTM business, I'd be engaging the reality distortion field and speed-dialing my patent attorneys at this point. Fortinet has recently had some very interesting patent applications granted by the PTO.
These patents could have some potentially profound impact on vendors who offer "integrated security" by allowing for virtualized application of network security policy. These patents could easily be enforced outside of the typically-defined UTM offerings, also. I'm quite certain Cisco and Juniper are taking note, as should anyone in the business of offering virtualized routing/switching combined with security -- that's certainly a broad swath, eh? On a wider note, I've actually been quite impressed with the IP portfolio that Fortinet has been assembling over the last couple of years. If you've been paying attention, you will notice (for example) that they have scooped up much of the remaining CoSine IP and recently acquired IPlocks' database security portfolio. If I were them, the next thing I'd look for (and would have a while ago) is to scoop up a Web Application Firewall/Proxy vendor... I trust you can figure out why...why not hazard a guess in the comments? /Hoff Updated: It occurred to me that this may be much more far-reaching than just UTM vendors; this could affect folks like Crossbeam, Check Point, StillSecure, Cisco, Juniper, Secure Computing, F5...basically anyone who sells a product that mixes the application of security policy with virtualized routing/switching capabilities... |
links for 2008-06-24 [Raffy's Computer Security Blog] Posted: 23 Jun 2008 09:33 PM CDT |
Week of War on WAF’s: Day 1 — Top ten reasons to wait on WAF’s [tssci security] Posted: 23 Jun 2008 09:13 PM CDT Hello, and welcome to the Week of War on WAF’s, the same week at whose end PCI-DSS Requirement 6.6 goes into effect as a deadline for many merchants. Today is the first day. So far, Marcin has identified some of the problems with web application firewalls. We were able to identify what we would like to see in WAF’s, both commercial and open-source, in the future (since they do not work properly today). In this post, I want to start off the week by listing the top ten reasons to wait on WAF’s. Top ten reasons to wait on WAF’s
We understand and realize that the ideas of a blocking WAF are very popular right now. There are many supporters behind the WAF and VA+WAF movements. While we’d also like to support what the rest of the community sees as the future — we also want to make sure that it is the right thing to do. One of the best ways to move forward with any given technology is to look at its faults. We learn best in IT when things fall apart — when they break. TS/SCI Security has put a lot of thought, practice, and research into WAF technology. Marcin’s most recent post demonstrates our list of requirements (e.g. blocking outbound) and nice-to-haves (e.g. good documentation). Some vendors might already have this sort of outbound blocking functionality, and we’re not even aware of it! Other vendors could have clearly defined “VA+WAF blocking” documentation, which could even be internal engineering or strategy documents that should be out in the open (or at least available to paying customers). Also — if we do end up demonstrating that WAF, VA+WAF, APIDS, ADMP, or another solution is less viable than a new, better idea — let’s move this research into the forefront. |
Maltego Goes Communal [Room362.com] Posted: 23 Jun 2008 02:11 PM CDT Now that everyone and their mother has posted about Back|Track Final being released I feel that I am safe in disclosing that information. But on to the topic, with said release, the folks over at Paterva have released a “Community” edition of Maltego. Straight from the horses mouth, here are the limitations:
Also, directly on the heels of this release is a community forum! It hasn't quite been linked to from the main site, but I HAVE AUTHORIZATION THIS TIME!... not going to make the same mistake twice. Anyways, go check it out. |
VirtSec Not A Market!? Fugghetaboutit! [Rational Survivability] Posted: 23 Jun 2008 12:49 PM CDT Thanks to Alan Shimel and his pre-Blackhat Security Bloggers Network commentary, a bunch of interesting folks are commenting on the topic of virtualization security (VirtSec) which is the focus of my preso at Blackhat this year. Mike Rothman did his part this morning by writing up a thought-provoking piece opining on the lack of a near-term market for VirtSec solutions:
Firstly, almost all markets take a couple of years to fully develop and mature, and VirtSec is no different. Nobody said that VirtSec will violate the laws of physics, but it's also a very hot topic, and consumers/adopters are recognizing that security is a piece of the puzzle that is missing. In many cases this is because virtualization platform providers have simply marketed virtualization as being "as secure" or "more secure" than their physical counterparts. This, combined with the rapid adoption of virtualization, has caused a knee-jerk reaction. By the way, this is completely par for the course in our industry. If you act surprised, you deserve an Emmy ;) Secondly, and most importantly to me, Mike did me a bit of a disservice by intimating that my pushing of the issues regarding VirtSec is focused solely on the technical. Sadly, that's so far off base from my "fair and balanced" perspective on the matter, because along with the technical issues, I constantly drum home the following:
"Nobody Puts Baby In the Corner" Painting only one of the legs of the stool as my sole argument isn't accurate and doesn't portray what I have been talking about for some time -- and agree with Mike about -- that these challenges are more than one-dimensional. The reality is that Mike is right -- the budget, priority and politics will bracket VirtSec's adoption, but only if you think of VirtSec as a technical problem. /Hoff |
Web application firewalls: A slight change of heart [tssci security] Posted: 23 Jun 2008 10:35 AM CDT We’ve been beating the drum for some time now, expressing our opinions of web application firewalls (WAFs). You might have sided with us on this issue, are against us, or are just tired from it all by now. This post is about to change all that, and show that we are not 100% anti-WAF, and that there are some useful applications for them. Why WAFs do not work In a post on why most WAFs do not block, Jeremiah Grossman quoted Dan Geer:
Jeremiah then stated that to implement a default-deny WAF (which would offer the most security, but carries with it the greatest business impact), you need to know everything about your app, at all times — even when it changes. How appealing is this, given the amount of resources you currently have? Who will be responsible for maintaining the WAF? These are all questions you should be asking yourself. Jeremiah then goes on to say that default-permit is necessary in web applications — going against everything we’ve learned in security over the past 40 years. Wait… what?? Some context that can attest to our reasoning Over the last several weeks, I’ve been evaluating several web application firewalls. They all have their own list of cons and more cons. On my first day, having sat down at their consoles, I was a little overwhelmed by all the options present — application profiles, signatures, policies, etc. It all came to me as I worked through it and read the manuals, though frankly, I don’t see how anyone without a web application security background can keep up with it all. I fear these devices will be deployed and forgotten, relying solely on their ability to learn and self-adjust. Let’s talk about the consoles used to monitor and maintain the WAF. One vendor had a fat app, which was a bit clunky, non-intuitive and had multiple usability issues. The number one issue that comes to mind is on the monitoring panel — to watch alerts in real-time, you need to set an automatic refresh rate which updates the entire display, which makes it impossible to analyze HTTP requests/responses during this time. If you’re scrolled down to a certain location of a request, and the console refreshes, you lose your position and are brought back up to the top. I don’t understand why the entire screen had to be updated, rather than a particular frame. Another vendor used a webapp to manage itself, which was in my opinion much nicer and easier to use, albeit slower. 
When on the alert monitoring page, you had to manually click a button to refresh the alerts, and viewing requests/responses was a major pain. The application utilized AJAX on pages that could do without it, but in areas that could benefit from it, they resorted to old web tactics. In the course of my testing, I started by taking RSnake's XSS cheatsheet and creating Selenium test cases for attacking our own vulnerable web application (see our talk, Path X, from ShmooCon). For those unfamiliar with Selenium, it's a browser driver that performs functional testing, though we have shown how it can be used for security testing. We didn't use WebGoat (or any other vulnerable apps), reasoning that the vendors must have tested against those and know them inside out for just such occasions. Renaud Bidou had an excellent presentation on How to test an IPS [PPT] from CanSecWest '06, which I believe can be applied to testing WAFs for those interested. Suffice it to say, the WAFs did not detect all of the XSS from the cheatsheet that was thrown at them, which is pretty sad. I would have expected them to at least get that right. That brings us to second-order, persistent XSS and SQL injection attacks. When a web application strings together data from multiple sources, detection of such attacks can be very hard. The WAF cannot account for this logic, thus allowing an attacker to effectively bypass the WAF by staging his XSS / submitting multiple payloads to various sources. When the application then pieces the data together, an XSS (SQL injection, etc.) condition exists. The problem with this? Your WAF never detected it, and you have no idea your site's been attacked and is now hosting malicious scripts. There are just some attacks a WAF will never detect. HTML / CSS injection through HTML / CSS is just one example. Go on over to http://google.com/search?q=cache%3Atssci-security.com — can you describe what is going on here? Or how about CSRF? Insecure session management?
What can a WAF do to protect against business logic flaws? We can go on and on, and yet vendors still claim protection against the OWASP Top 10, a claim which, if you believe it, shows you know nothing about web application security. How WAFs can help So I lied; we haven't changed our minds about WAFs. But wait! I'll let you know what would change our minds at least a little, which would show that WAFs can have their purposes. Without this, though, I can't recommend any organization spend the money on such devices — especially if they need to meet compliance requirements where other options do exist. The value of WAF Egress, revisited What should a WAF do? Block attacks on the egress / outbound side, while staying out of the inbound flow of traffic. I'm not talking about signature-based blocking either. This is the tough part, because it's almost impossible. One way I see it working, though, is if the application keeps the content (HTML), presentation (CSS), and behavior (JavaScript) separated. The application should not serve any inline scripts, but instead serve script files that alter the content on the client side. This would make, e.g., outbound XSS prevention possible, because a WAF could then detect inline scripts in the content. None of the WAFs I evaluated could detect a client being exploited by a persistent XSS condition. This would also tell me how many users were affected by the XSS attack, which we haven't seen any numbers on apart from the number of friends Samy had when he dropped his pants and took a dump all over our industry.
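As a rough sketch of that egress idea, the snippet below flags inline script bodies and on* event-handler attributes in outbound HTML. It only makes sense under the stated assumption that the application itself never serves inline script; the detector and the sample markup are invented for illustration, and a real WAF would run this over the response stream rather than a string:

```python
from html.parser import HTMLParser

class InlineScriptDetector(HTMLParser):
    """Flags <script> tags without a src attribute (inline bodies) and
    on* event-handler attributes. If the app promises to serve only
    external script files, anything inline on egress is suspect."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.findings = []
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "script" and "src" not in attrs:
            self.in_script = True
        for name in attrs:
            if name.startswith("on"):  # onload=, onerror=, onclick=, ...
                self.findings.append((tag, name))
    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False
    def handle_data(self, data):
        if self.in_script and data.strip():
            self.findings.append(("script", data.strip()))

def scan(html):
    d = InlineScriptDetector()
    d.feed(html)
    return d.findings

print(scan('<p>hi</p><script src="/app.js"></script>'))  # []
print(scan('<img onerror="alert(1)" src=x>'))            # [('img', 'onerror')]
print(scan('<script>document.cookie</script>'))  # [('script', 'document.cookie')]
```

A clean response (external scripts only) produces no findings; a persistent XSS payload injected into stored content would show up as an inline finding on the way out, which is the outbound detection the post is asking vendors for.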
Another way to get this right is to apply the work done by Matias Madou, Edward Lee, Jacob West and Brian Chess of Fortify in a paper titled Watch What You Write: Preventing Cross-Site Scripting by Observing Program Output [PDF]. They talk about capturing the normal behavior of an application during functional testing, then attacking the application as if in a hostile environment, where it is monitored to ensure it does not deviate from normal behavior. Basically, it's all about monitoring your application's output in areas that are known to be dynamic. In depth, the Fortify work uses dynamic taint propagation; "taint propagation" or "taint tracking" is similarly done with static analysis in order to trace misused input data from source to sink. This is also a corollary to the work that Fortify has presented before on countering the faults of web scanners through bytecode injection [PDF]. While web application security scanners only demonstrate 20-29 percent of the overall security picture because of surface and code coverage limits for the inputs of the application under test, dynamic taint tracking goes a long way toward providing more coverage for these kinds of tests because it's done as white-box dynamic analysis instead of functional black-box runtime testing. The value of XHTML My fellow blogger, Andre Gironda, helped out with the praise section for the book Refactoring HTML: Improving the Design of Existing Web Applications, by Elliotte Rusty Harold. It's hard to disagree with the notion that XHTML can help with both quality and security issues, as well as make applications and content easier to refactor and work with. When you're recoding thousands or millions of lines of code, wouldn't well-formedness and validity be the primary requirements for working with such large volumes of code? If anything, well-formedness and content validity make the chores much easier to deal with. Rusty has this to say in his book:
Since web application firewalls today cannot convert HTML on the outbound to XHTML, this is certainly a job for the content writers (sometimes, but often not the developers) to deal with. In the Refactoring HTML book, Rusty also talks about the tools necessary to develop content on the web:
Speaking of properly validated and easy-to-read/use content, what irked me most throughout my evaluation was the documentation. Vendors: do not bundle a ton of HTML files together and call it a manual. If you're going to do that, please use DocBook if you're not going to make a PDF available. Better yet, give us a hard copy. |