Saturday, April 26, 2008

Spliced feed for Security Bloggers Network

Producing Secure Software With Software Security Enhanced Processes [Writing Secure Software]

Posted: 26 Apr 2008 07:57 AM CDT

On behalf of OWASP, and as a bit of PR for the organization, I wrote an article for the April edition of (IN)SECURE Magazine: Producing Secure Software With Software Security Enhanced Processes. Besides evaluating the pros and cons of different software security enhanced process models such as MS SDL, OWASP CLASP, and Cigital's Security Touchpoints, I deal with the basic steps of the process: 1) assessment of the software engineering and information security processes currently used by the organization, 2) implementation of the process models within the software security framework, and 3) software security metrics and measurements.

Regarding the assessment, I deal with maturity levels, and I published a maturity level curve as a reference (herein included). The curve comes from old CLASP documentation dating back to when CLASP was owned by Secure Software. The reason I put that figure in the article is to make the point that security activities can be effectively built into the SDLC if they are tied to reaching capability maturity levels. This is based upon my experience in rolling out software security frameworks for large financial organizations. The concept is very intuitive per se: some activities, like metrics and measurements, require a higher maturity level than others, while others, like training and awareness, are prerequisites for achieving maturity and for adopting further activities such as secure code reviews.

I recently did extensive work on mapping a software security roadmap to the maturity levels assessed for the financial organization I currently work for. This CMM takes into account SDLC methodologies, risk management processes, security tools, and training and awareness levels across all departments within the organization. The maturity exercise helps in setting a roadmap to determine what can be achieved in the short term and in the long term, and at what cost. A reader of my article, Dan Fiedler, suggested mapping CLASP, MS SDL, and the Touchpoints to the maturity curve. I think this could be very useful for comparing which maturity levels can be achieved using the different models, making some assumptions about how widely each is adopted within the organization.
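
The activity-to-maturity idea is easy to sketch in code. Here is a toy illustration (my own; the activities and level numbers are hypothetical placeholders, not the CLASP curve):

    # Toy sketch: gate software security activities on an assessed
    # capability maturity level. The mapping below is hypothetical.
    ACTIVITY_MIN_LEVEL = {
        "training and awareness": 1,    # a prerequisite for everything else
        "secure code reviews": 2,
        "threat modeling": 3,
        "metrics and measurements": 4,  # requires higher maturity
    }

    def achievable_activities(assessed_level):
        """Activities an organization at this maturity level can adopt."""
        return [a for a, lvl in sorted(ACTIVITY_MIN_LEVEL.items(), key=lambda kv: kv[1])
                if lvl <= assessed_level]

    print(achievable_activities(2))
    # ['training and awareness', 'secure code reviews']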

Proposal For Building Anti-Phishing Tool Countermeasure [Writing Secure Software]

Posted: 26 Apr 2008 05:53 AM CDT

I recently responded to a student email asking for ideas for building an anti-phishing tool. My initial recommendation was to look at the types of phishing attacks and the potential countermeasures. This is the approach I undertook when developing a product for ISS (now IBM ISS) called ISS CrossCheck back in 1998, whose purpose was to adaptively change the countermeasure (e.g. the RealSecure policy) depending on whether a known vulnerability was detected via the ISS scanner. In application security we need something similar that adaptively detects the attack, in this case the phishing tool being used, and then triggers the countermeasure. Phishing threats and countermeasures are dealt with very well in the work done by Aaron Emigh @ Radix Labs: http://www.antiphishing.org/Phishing-dhs-report.pdf.

Phishing is in essence a social engineering attack that, even if it has a common delivery mechanism (e.g. a malicious link via email), can take different forms of attack vectors for the phishing "tool" itself. It can be, for example, an unsophisticated phishing site built by registering a fake domain and a look-alike webpage (you can build one yourself via CGI with very little knowledge required), or a tool that links to the legitimate site URL and exploits a vulnerability such as cross frame scripting (XFS), cross site scripting (XSS), weak authentication, or Cross Site Request Forgery (CSRF). Other forms of phishing might include a phishing web proxy that performs a man in the middle attack by connecting to the original site (no copy of the original site needed). Attacks of this nature can break MFA (multi factor authentication) controls such as hardware tokens (One Time Passwords), and they are actually easier to pull off than the old attacks that require rebuilding the web site in CGIs. The risk associated with these phishing attacks is therefore higher (the exploitability risk factor is higher).

Phishing attacks that use botnets and MiTM (Man in The Middle) render the IP geolocation and machine fingerprinting techniques used by risk-based authentication controls (e.g. Cyota RSA) useless from the phishing mitigation perspective. When attacking IP geolocation, the machine perpetrating the attack can be, let's say, in Romania, with a remotely controlled botnet proxy that shares the same geolocation or ISP as the victim's machine in the East Bay, USA. To defeat device fingerprinting, the phisher replays the fingerprinting information from the phished machine. Attacks of this nature have been reported in Europe since 2005 and in the USA since 2006.

Current phishing countermeasures can be built into the web site, such as strong authentication via PKI. Tricipher http://www.tricipher.com/threats/index.html has developed its products with the countermeasure for MiTM phishing in mind. Understanding which current web application authentication controls actually mitigate phishing is critical, as is understanding what the threat is and which countermeasures are effective. For the anti-phishing countermeasure to work, it needs to be capable of identifying different phishing tools, and that is probably the most difficult task, since some of these tools are very sophisticated and complex. Currently, for the most sophisticated forms of phishing tools you can look at Rockphish, whose functionality is not well documented apart from some research found here http://www.avertlabs.com/research/blog/index.php/2007/11/page/2/ and here http://www.avertlabs.com/research/blog/index.php/2007/12/. The tool will create new IPs on the fly, so that taking down the IPs will not prevent the attack.
This represents a challenge from the incident response perspective. In the case of the class of phishing that exploits common web vulnerabilities on the legitimate site, you first need a tool that can scan for CSRF, XFS, XSS, and weak authentication vulnerabilities; that is, a tool that scans the target site for web application vulnerabilities that can be used to phish it. Based on the results of this scan, the countermeasure consists of implementing input validation, transaction codes, and strong authentication. A more sophisticated tool needs to be able to identify the phishing tool itself (the attack), using clues about how it operates and how it is deployed and configured (domain). Part of the identification might require some protocol analysis to identify proxies, botnets, etc. Eventually, out of this phishing attack vector scan, you need to be capable of developing a signature for the phishing tool itself. The tool needs a library (knowledge base) of known phishing sites/tools to create a baseline against which the anti-phishing tool can test the countermeasure. Unknown phishing tools can be learned via a honeypot. Finally, once the phishing site is identified via the countermeasure, you need a tool that can do incident response: in the case of phishing, that means trying to find a way to take down the site via the registrar and the IP address where the site is hosted. Taking this further, it could also mean counter-hacking the phishing site, such as finding and exploiting vulnerabilities on the phishing site itself (see the "Crack the hackers" post: http://www.itp.net/news/516118-i), for example causing a DDoS on the phishing site by exploiting a buffer overflow or other vulnerabilities. Some of these phishing tools are built to attack and to defend themselves, but they are as prone to security bugs as any commercial software tool. Good resources to start a study of phishing are the OWASP phishing web page http://www.owasp.org/index.php/ and the Anti-Phishing Working Group http://www.antiphishing.org/. Maybe this could be a good proposal for an OWASP grant (the next Spring of Code, in 2009) or for some pioneering VC willing to invest in a new security tool. If we take Gartner's figures on the billions of dollars lost to phishing, investing a few hundred thousand in phishing countermeasures is probably worth it, even from a pure risk management perspective: if the cost of building the tool is less than the cost of losing the asset, and no tool is available, then maybe it is worth building one.
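
To give a flavor of the signature-matching piece described above, here is a minimal sketch in Python (my own illustration; the kit names and byte markers are made-up placeholders, and a real knowledge base would be seeded from honeypot captures):

    # Minimal sketch: match a suspect page against a knowledge base of
    # known phishing-kit markers. Kit names and markers are hypothetical.
    import urllib.request

    KIT_SIGNATURES = {
        "generic-cgi-clone": [b'action="login.cgi"', b'post.php?cmd=login'],
        "mitm-proxy-kit":    [b'proxy.php?target=', b'relay_session='],
    }

    def identify_phishing_kit(url):
        """Fetch a suspect page and return the kits whose markers it contains."""
        page = urllib.request.urlopen(url, timeout=10).read()
        return [kit for kit, markers in KIT_SIGNATURES.items()
                if any(marker in page for marker in markers)]

    # e.g. identify_phishing_kit("http://suspect.example/login")

In practice this matching step would sit between the vulnerability scan and the incident response tooling, with the knowledge base growing as new kits are captured.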

Afternoon Delight [BumpInTheWire.com]

Posted: 25 Apr 2008 09:29 PM CDT

Gonna find my baby, gonna hold her tight
gonna grab some afternoon delight.
My motto’s always been; when it’s right, it’s right.

How’s that for the worst intro to a post ever? I did manage to take this afternoon off and partake in a little socialization in the Power and Light district of KC. I hadn’t been down there yet so it was all new to me. I have to admit it's a great step forward for Kansas City. It almost seems weird to have that kind of area now, seeing as downtown has been a ghost town for so long.

Today's reward did not come about unearned.  The crazy DHCP problem I've written about in the past showed up again yesterday.  Fortunately some other odd behavior was reported along with it this time.  In addition to the odd DHCP lease issue there were a lot of disconnects reported.  Yesterday afternoon TB noticed a couple of instances of this problem being reported where ports for a department were in the incorrect VLAN.  I thought nothing of it because it was only a couple, and cables had been moved around so much in the troubleshooting of this problem that a port or two being in the wrong VLAN didn't seem unreasonable.  This morning another piece of the puzzle was added when we got actual visual confirmation of an endpoint getting an IP address for a subnet different than what the port was configured for.  A little "debug dhcp detail" on the switch and the puzzle became a little clearer.  The log and terminal monitor revealed this…

%CDP-4-NATIVE_VLAN_MISMATCH: Native VLAN mismatch discovered on FastEthernet2/20 (210), with CSC06INDWC FastEthernet3/29 (147).

Looking at this it doesn't take a rocket scientist to realize something was amiss.  What was on port 2/20?  What was on port 3/29?  "Show cdp neighbor" put the final piece of the puzzle in place.  Strangely, this switch was a neighbor to itself…twice.  It was connected to itself on ports 2/20 and 3/29.  Ahhh HA!  A loop.  This native VLAN confusion exposed another issue: a misconfigured VLAN interface.  The VLAN interface had "ip helper-address" statements referencing DHCP servers in a different location.  The loop was sending DHCP requests to the incorrect scope at a different location, which in turn was filling up the DHCP scopes.  This explained the inability to obtain a DHCP address that would randomly appear.
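
For what it's worth, the check that cracked this case is easy to script. A rough sketch (mine, using a simplified neighbor-line format rather than real "show cdp neighbor" output):

    # Rough sketch: flag a switch that appears as its own CDP neighbor,
    # i.e. a loop. Assumes a simplified "device_id local_port remote_port"
    # line format, not the real IOS output.
    def find_self_loops(hostname, neighbor_lines):
        loops = []
        for line in neighbor_lines:
            device_id, local_port, remote_port = line.split()
            if device_id == hostname:
                loops.append((local_port, remote_port))
        return loops

    neighbors = ["CSC06INDWC Fa2/20 Fa3/29", "CORE01 Gi1/1 Gi0/2"]
    print(find_self_loops("CSC06INDWC", neighbors))
    # [('Fa2/20', 'Fa3/29')]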

As Tigger would say…this mystery is history!

Holier than marketing people - not! [StillSecure, After All These Years]

Posted: 25 Apr 2008 09:09 PM CDT

So here is one of my pet peeves about the IT world. Too many "technical" people consider themselves (pick one) superior to, smarter than, more ethical than, or just better than their marketing counterparts. Hey people, everybody is selling something all of the time, even if it is themselves. Case in point: a recent "spat" between my bud Mike Rothman and another friend, Misha Govshteyn. Now Rothman and I go back a bit and have had our share of blog bad blood, but all in good spirit. Misha is a good guy too. Anyone who knows where to find a schmaltz herring in Houston, after all, can't be too bad. And my friend Farnum, who serves as the peanut gallery in this story, is solid as well. OK, now that we have the players, let's lay out the story.

It seems that Alert Logic had a webinar titled "Simple & Affordable PCI Compliance w/ Alert Logic." Mike thought that this was very misleading marketing from the slimy, no-ethics, don't-understand-the-real-pain marketing folks at Alert. They are preying on the simpletons who are responsible for security and PCI compliance in the world, and Mike delivers his full venomous wrath (according to Misha anyway; I bet Mike could be worse) on Alert Logic and their marketing team. Misha then responds with his own venomous wrath: that Rothman is literally full of baloney, a shameless self-promoter on par with Michael Savage. To add fuel to this fire comes Michael Farnum, who tells Misha in his comments that while he likes Alert Logic, "many manufacturers use their marketing as fly traps."

OK, here is my take. To Mike Rothman: come on Mike, you never did anything like that when you were a marketing guy? What are you, some kind of reformed smoker? What would you have them name the webinar: "PCI is hard and our stuff can only help a little"? Give it a rest. Also, a little respect for the people they are marketing to; I think they realize what is what and can separate the bull from the cream. To Misha: hey, at least Mike gave you some PR. I understand your frustration, but instead of pointing at everyone else, say you stand by the name and leave it at that. Most of all, to my buddy Farnum: dude, we know what you do, it is just a question of price. If those Venus flytrap marketing people weren't drawing people in, you would have to have a second job to feed the family and might not have the leisure time for blogging.

But seriously folks, marketing people have a hard job too. It is not that they are not technical or don't understand what is involved in PCI compliance or the like. It is their job to make these webinars appealing. I don't think most marketing people think of what they are doing as misleading. They try to make these webinars deliver as advertised, the same way engineers try to make a product work as intended. Let's understand that it "takes a village" to develop, market, sell, and support a product. Everyone has their job to do, and for the most part they do it the best they can, and again for the most part with the highest of professional standards. Thinking that marketing people are slimy fly traps does a disservice to them and to the people they market to, and frankly comes across as self-serving arrogance.

Announcing Winners of Debix Contest [securosis.com]

Posted: 25 Apr 2008 05:10 PM CDT

It took a little longer than expected, thanks to me being totally swamped post-surgery until now, but let’s congratulate our winners of a free year of Debix identity theft protection: myemailisvalid, Jay, and Brett.

Thanks for participating, and you’ll be hearing from us over email.

Not a CISSP ?!?! [Carnal0wnage Blog]

Posted: 25 Apr 2008 03:35 PM CDT

Chris Eng over at Veracode has an interesting post on their blog about Immunity's "not a CISSP" button.

If you've been under a rock, here is the button:


I've got mixed feelings about the button. For one thing, I've seen a couple of CISSPs wearing that button at Defcon/ShmooCon; I guess they were practicing some SE. But secondly, it's easy for people in the top 5% of the security game to say you don't need certifications, because they (most importantly) already have that level of experience and name recognition. Dave Aitel doesn't need to take a test and throw some letters after his name to prove to anyone he knows his stuff; he proved himself long ago. But I can't imagine he came out of the womb with that much fu. Maybe he did, I don't know.

For us mere mortals who are just trying to get a paycheck and get some experience, a lot of places are requiring certifications to be on the contract, or to get the job, or even to get your resume to the hiring manager. For .mil/.gov this is because of 8570. To me, requiring certifications is a step in the right direction. Since no one has come forward with a scalable "hands-on" way to certify people, that paper test (for now) will have to do. At least people are trying to get qualified people into the slots; whether a CISSP or some other cert makes you automatically qualified is another matter.

I'll be the first one to agree with Chris that, "like many security certifications, it's an ineffective measure of a security professional's practical abilities." See my CEH != Competent Pentester post. But the game is the game. If you have to sit for a test to do/get the job, then stop bitching, take your test, and move on with it. If you want to stand your ground and just bitch and not get the job, enjoy your time on the Geek Squad.

Napera at Interop 2008 [Napera Networks]

Posted: 25 Apr 2008 02:39 PM CDT

The Napera team will be at Interop 2008 in Vegas next week. We’ll be demonstrating our products and talking to analysts and press about our approach to securing the small and medium enterprise. If you'd like to meet, please visit us at the Microsoft NAP partner pavilion - Booth 1719. We’ll post further updates from the show floor during the week.

Dissecting the Automatic Patch-Based Exploit Generator [Observations of a digitally enlightened mind]

Posted: 25 Apr 2008 01:41 PM CDT


There has been a lot of recent discussion of the Automatic Patch-Based Exploit Generator paper (here), and although it is compelling, it is far from the mass exploit generating, digital apocalypse one might be led to believe.  It is clear that evolving techniques are automating many aspects of what has been a very manual reverse engineering process. It is also clear that the time to protect is decreasing dramatically. From Code Red, which had a 6-month lead time from patch to exploit, to recent 0-day and targeted attacks, we are quickly entering an era where traditional techniques are becoming too slow, too cumbersome, and too prone to error or service disruption to be effective.

Looking at the OODA loop (observe, orient, decide, and act), it becomes even clearer that an attacker has an advantage: their time to reverse-engineer a patch or other protection mechanism will almost always be shorter than a defender's time to reverse-engineer an attack. Additionally, the cost of time weighs far more heavily on the defense.

If one factors in cost (c), which would include some measure of difficulty (d), expense (e), and time (t), coupled with risk, which is some measure of the penalty (p) and likelihood (l) of being caught, the results leave little doubt that automatic malware generation will not only increase in sophistication and speed, it will also increase in population exposure.
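
One way to make that calculus concrete is a toy cost model (my own sketch; the additive form and the sample numbers are assumptions, not anything from the paper):

    # Toy sketch of the attacker's calculus: total cost is effort plus
    # expected penalty. The additive form and weights are assumptions.
    def attacker_total_cost(difficulty, expense, time, penalty, likelihood):
        effort = difficulty + expense + time  # c = f(d, e, t), simplest possible f
        expected_penalty = penalty * likelihood
        return effort + expected_penalty

    # Automation drives difficulty and time toward zero, while weak
    # attribution keeps the expected penalty low:
    print(attacker_total_cost(difficulty=2, expense=1, time=1,
                              penalty=50, likelihood=0.01))  # 4.5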

Anyway, back to the APEG paper, which states:

However, it is insufficient to simply locate the instructions which have changed between P and P’. In order for APEG to be feasible, one has to solve the harder problem of automatically constructing real inputs which exploit the vulnerability in the original unpatched program.

They go on to describe what looks like vulnerability checking against input validation errors, not exploit generation. Security researchers, especially those who have dealt with developing vulnerability scanning checks, will note the difference:

Our approach to APEG is based on the observation that input-validation bugs are usually fixed by adding the missing sanitization checks. The added checks in P’ identify a) where the vulnerability exists and b) under what conditions an input may exploit the vulnerability. The intuition for our approach is that an input that fails the added check in P’ is likely an exploit in P. Our goal is to 1) identify the checks added in P’, and 2) automatically generate inputs which fail the added checks.

This would have been an extremely useful tool for the vulnerability check writing teams at nCircle, Qualys, and the rest of the VA industry. But as for automatically generating exploit code, well, that is possible only if we bound the statement to automatically generating exploit code against input validation errors.
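
To make the "inputs which fail the added check" idea concrete, here is a toy sketch using the Z3 solver's Python bindings (my illustration, not the APEG toolchain; the added check is a made-up example):

    # Toy sketch (not the APEG tool): model a sanitization check added in
    # P' as a constraint, then ask a solver for an input that fails it,
    # i.e. a candidate vulnerability trigger for the unpatched P.
    # Requires: pip install z3-solver
    from z3 import BitVec, Solver, Not, UGT, sat

    length = BitVec("length", 32)      # attacker-controlled input field

    # Hypothetical check added by the patch: reject if (length > 256)
    check_passes = Not(UGT(length, 256))

    s = Solver()
    s.add(Not(check_passes))           # we want an input that FAILS the check
    if s.check() == sat:
        print("candidate trigger: length =", s.model()[length])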

This is still impressive, and I would welcome the opportunity to better understand what I am missing or what will be done with the next evolutionary leap toward automating malware generation. In the meantime, organizations must continue to move away from the traditional reactive, ad-hoc, firefighting mode of information security and toward more agile and effective processes and technologies that decrease attack vectors and dramatically reduce the time to protect.

For more detailed analysis of the paper and the reverse-engineering process I would suggest you read the following excellent posts:

Robert Graham (here)

This paper promises “automatic patch-based exploit generation”. The paper is a bit overstated, this isn’t possible. By “exploit” the paper does not mean “working exploit”. That’s an important difference. Generating fully functional exploits by reverse engineering a patch takes a lot of steps, this paper automates only one of them, and only in certain cases.

Halvar Flake (here)

Anyhow, long post, short summary: The APEG paper is really good, but it uses confusing terminology (exploit ~= vulnerability trigger) which leads to its impact on patch distribution being significantly overstated. It’s good work, but the sky isn’t falling, and we are far away from generating reliable exploits automatically from arbitrary patches. APEG does generate usable vulnerability triggers for vulnerabilities of a certain form. And STP-style solvers are important.

Update 4/25/2008 AW: Added additional analysis links below

IBM/ISS Frequency X Blog (here)

The paper describes a toolset that produces exploits from patches almost instantly, and goes on to discuss the implications of instant exploit generation from patches, raising the specter of worms propagating in the hours while patch distribution is still taking place.

However, the toolset that is actually described in the technical details of the paper does not provide that sort of capability. The tool requires not only a patch diff, but also either an input that reaches the vulnerable code, or an indication by the tool’s user of the specific locations where the attacker-controlled data that ultimately exercises the vulnerable code is input into the program. From that information the tool produces a set of inputs that would be rejected by the patched version.

QuickTime 0day for Vista and XP [GNUCITIZEN Media Portfolio]

Posted: 25 Apr 2008 12:57 PM CDT

A remote vulnerability exists in the QuickTime player for Windows XP and Vista (latest service packs). Other versions are believed to be affected as well. For now, no details will be released regarding the method of exploitation.

Because we are an information security think tank and because we encounter some very interesting vulnerabilities in our work, we often share our findings with the masses in order to give something back to the community. It is good to take, but it is even better to give. Unfortunately, the situation in the UK is changing, and we, as whitehat hackers, have to adjust to these changes. Therefore, we have been experimenting with a number of disclosure methods over the past couple of months. We’ve tried everything, from full disclosure to partial disclosure, private disclosure, and no disclosure at all. Now it is time to move to something totally different, and if we find it working for us, we will share the secret with you for the benefit of the community. Please bear with us. This is just one of our social experiments.

A remote vulnerability exists in the QuickTime player for Windows XP and Vista (latest service packs). An attacker could exploit the vulnerability by constructing a specially crafted QuickTime supported media file that allows remote code execution if a user visited a malicious Web site, opened a specially crafted attachment in e-mail or opened a maliciously crafted media file from the desktop.

If a user is logged on with administrative privileges, the attacker could take complete control of an affected system. An attacker could then install malicious programs; view, change, or delete sensitive data; or create new accounts with full user rights. Users who are logged on with a less privileged account could be less impacted than users who operate with administrative user rights.

The vulnerability was successfully tested in Windows XP SP2 and Windows Vista SP1 environments. Other versions are believed to be exploitable as well. The vulnerability details are currently being held private. The GNUCITIZEN team is following responsible disclosure practices; therefore, the details will be privately disclosed to the vendor in a short period of time. This advisory is meant to inform the public and raise consumer awareness.

The video above demonstrates the issue on Windows Vista and Windows XP. The Windows Vista demo is rather slow because it runs from a 512MB VMware instance.

Marty Lederman, on a roll [Emergent Chaos]

Posted: 25 Apr 2008 12:57 PM CDT

You see, the CIA apparently uses the less dangerous version of "waterboarding" -- not the Spanish Inquisition method, but the technique popularized by the French in Algeria, and by the Khmer Rouge -- involving the placing of a cloth or plastic wrap over or in the person's mouth, and pouring or dripping water onto the person's head. That's the civilized version of waterboarding -- the benign, anodyne variant of the water treatment, the kind carefully administered by professionals. We would never dream of the barbaric practice of actually forcing the water into the nose and mouth.
Go read "The Underdeveloped Jurisprudence of the Forcing/Pouring Distinction" and wonder how the next President is going to avoid prosecution.

Is Keylogging Legal? [LiveBolt Identity Blog]

Posted: 25 Apr 2008 12:51 PM CDT

In the hierarchy of security, keyloggers were once considered small potatoes. The threat was deemed unlikely, almost fanciful. Not anymore. Keyloggers are everywhere. For those new to the area, a keylogger does just what it sounds like — records keystrokes — which can then be played back. For those who can remember, it’s similar to a typewriter ribbon, which held an imprint of every key pressed, in order: a literal scroll of all the work done on that keyboard. Only digital keyloggers are a heck of a lot easier to read back.

Keylog output is scary — plain text of your IDs and passwords, your instant messaging, your private emails - it’s all there.

There are two types of keyloggers: software programs and hardware. The software programs can (usually) be detected with anti-virus/spam or even specialty keylogger-detecting software. The hardware loggers are much tougher. To catch these, you need to do a visual inspection of your keyboard and check for a dongle like this, but even worse is one of these, where the keylogger is inside the keyboard. (Makes me wonder about trying to ’sniff’ my wireless keyboard/mouse combo… looks like it can be done, thanks to this article…)

Microsoft Security Intelligence Report V4 [Emergent Chaos]

Posted: 25 Apr 2008 12:08 PM CDT

Microsoft Security Intelligence Report (July - December 2007)
This volume of the SIR focuses on the second half of the 2007 calendar year (from July through December) and builds upon the data published in the previously released volumes of the SIR. Using data derived from several hundred million Windows users, and some of the busiest online services on the Internet, this report provides an in-depth perspective on trends in software vulnerability disclosures as well as trends in the malicious and potentially unwanted software landscape, and an update on trends in software vulnerability exploits. The scope of this fourth volume of the report has been expanded to include a focus on privacy and breach notifications, and a look at Microsoft’s work supporting law enforcement agencies worldwide in the fight against cyber criminals. [Emphasis added.]
Emergent Chaos readers are unlikely to learn new details in the analysis. What's important to me is that this helps to establish a new normal baseline around the way we're using information that's disclosed and gathered by folks like Attrition.

Wireless Scanning [Andy, ITGuy]

Posted: 25 Apr 2008 10:56 AM CDT

A couple of days ago I got on the bus to make the trip from Downtown Atlanta to the suburbs where I live. I pulled out my laptop to do some work and was just about to disable my wireless radio when up popped a "Wireless Network Found" message. I closed it and was about to go ahead and disable the radio when I thought it would be interesting to run NetStumbler and see what I could see as we drove through town. It was rather interesting and I decided to do a little categorizing and let y'all know what I found. I decided to do it again the next day and compare it to the first day. Here is a summary and some thoughts.

Disclaimer: Before I get into this I want to make it perfectly clear that I am NOT a wireless guru. I have lots to learn, and some of what I have to say may have perfectly good explanations, or I may be WAY off base. Feel free to give me constructive feedback via comments or direct email.

The first thing I noticed was that all 11 standard 802.11b/g channels were used. Then I noticed that there were some other channels listed: 36, 40, 48, 56, and 157, which are 5 GHz channels from the 802.11a band. Honestly, I wasn't even aware that you could use these other channels. What does that mean and how do you do it? I'd like to learn more about this. I looked to see if there were any common denominators among the devices that reported this but couldn't really find anything useful. The second day I picked up traffic on the same channels plus one that I didn't see on day one: channel 64.

Next I noticed that over the 2 days I saw 696 different devices: 388 on day 1 and 509 on day 2. Since 388 + 509 = 897 sightings against 696 unique devices, 201 devices were seen on both days, and 495 showed up on only one of the two days. That can be explained by several things. They may have been off that day. Maybe the bus was going too fast to pick them up one day and not the next. Or one day I may have had less interference in a given area than the other.

280 had no encryption enabled at all. The rest were reported as having WEP enabled, but I doubt that is correct. I don't know if it's the version of NetStumbler that I'm using or what, but everything is reported as WEP. I checked it against my home system, which is running WPA2, and it showed up as WEP.

42 showed up as ad-hoc, which means they were more than likely other laptop users broadcasting their signals. Looking at the SSIDs shown by these ad-hoc networks, either there are lots of "evil twins" set up, or possibly NetStumbler just didn't get enough of a signal and a read on what was really going on with them. In comparing ad-hoc to AP, I only found 2 that looked like possible "evil twins" based on the SSID reported. Again, if the others were, then I was not able to pick up the "real" AP in my scan due to range or interference.

Speeds ranged from 11 Mbps to 54 Mbps, with 22, 36, and 48 Mbps also reported. The vast majority were 54 Mbps.

There were lots of vendors reported, with the obvious ones present: Cisco, Aruba, Linksys, DLink, Netgear. There were several that I am not familiar with, like Farallon, Eprigram, Sercom, and Compex, and some that I'm only slightly familiar with, like Gemtek, Z-Com, and Airespace. I noticed several Symbol devices; I know Symbol is a popular handheld scanner manufacturer, and I'm not sure if they make APs also, but these did show up as APs. Again, this goes back to me not being overly familiar with the world of wireless and who does what, and especially not the specifics of how and why NetStumbler reports what it reports the way it reports it. :)
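
Incidentally, this kind of categorizing is easy to script. A quick sketch, assuming the scan has been exported to a CSV with ssid, channel, encryption, and mode columns (a layout I made up for illustration; adjust it to whatever NetStumbler actually exports):

    # Quick sketch: summarize a wireless scan exported to CSV. The column
    # names (ssid, channel, encryption, mode) are assumed for illustration.
    import csv
    from collections import Counter

    def summarize(path):
        channels, encryption, modes = Counter(), Counter(), Counter()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                channels[row["channel"]] += 1
                encryption[row["encryption"]] += 1
                modes[row["mode"]] += 1  # e.g. "AP" vs "ad-hoc"
        return channels, encryption, modes

    for counter in summarize("scan-day1.csv"):
        print(counter.most_common(5))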

Just a couple more thoughts and then I'm through. I noticed that a majority of the SSIDs reported gave out too much information: either a company name, or some identifier that makes it easy to figure out who the AP belongs to, such as a building number or something similar. All you had to do was look at the SSID and then at street numbers or business names to put 2 and 2 together and find the owner. Not the wisest choice, but in today's world of wireless hacking it doesn't take much for the bad guys to find out who you are pretty quickly anyway.

The last thing I wanted to share is a few of the funnier or more unique SSIDs that I found. Sad to say, this is as creative as people in this part of town seem to get. Oh, well.

Belkin Sucks
But Why ???
SSID Name
Your Mom
Funkdafied
Hotboysin1205
Smallpoxgirl
Tuffygoestovegas

Live In Concert [Liquidmatrix Security Digest]

Posted: 25 Apr 2008 10:49 AM CDT

Well, we’re no Led Zeppelin, Tragically Hip, or Peter Gabriel. That being said, it should be a great time tonight when the band that I play bass for, “The Shiitake Project”, takes the stage at Clinton’s in Toronto.

We are raising money for prostate cancer research. The show gets rolling around 9 pm and cover is $10 or greater if you feel like donating more.

Have a great weekend all!

I'm limited, it's official [IT Security: The view from here]

Posted: 25 Apr 2008 10:23 AM CDT

As of today I am now operational as a Limited Company in the UK. Robert Newby & Associates (RNA) now has a bank account and a registered trading number. So that means you can hire me as a consultant, and my associates of course.

SecurEMEA is still growing, and thanks to Rich Mogull's little podcast with me at RSA, I've been inundated with requests for help. Talking to people about this, it soon became clear to me that this is not going to be a quick process, however, and rightly so. By the very nature of what I am offering to clients in the UK, there is a lot of checking, double checking, and hesitation to move forwards.

SecurEMEA is being set up to help bring companies into the UK, to address the channel here, and then to grow into the wider European market. I'm going to stick my neck out here and say that the majority of US IT security vendors (successful or otherwise) do not know the UK channel to market very well. There are many who haven't even tried, but there are also a brave few who have. Of these brave few, a smaller percentage still have succeeded.

One company I heard of recently, who will always remain nameless in my presence, spent 2 years and $3 million pushing a team of 15 people into the UK, thinking they had a great product, and indeed they did. After 2 years, they realised that sales were flatlining and had to set about chopping the staff back down. This cost more $$$, and took time they didn't have. Then the process of building the company here had to start again. Ouch.

Many companies do it cheaply now. I was fortunate to work for Vormetric over here 4 years ago; there was 1 RSM, and me with my spanners. Later a UK sales guy joined, sales dived, and the UK operation became too expensive to maintain at that time. They are going to have a much better year this year, mark my words. Ingrian did the same: 1 RSM, me with a spanner set, and a UK sales guy. They were luckier and managed to be acquired, but not for a huge hill of beans, and no-one made much out of it. SafeNet will probably not do much with the technology this year, in my opinion.

All of this costs money and, as can be seen, is not guaranteed to be successful. An RSM costs $100-150k a year, an SE $100-150k a year, and extra sales guys the same again. IF they make a sale, then they pay for themselves, but not for the offices, infrastructure, etc. And you can't guarantee you're getting good sales guys. And you can't manage them effectively from 5,000 miles away. So, this is still cheap, is it?

SecurEMEA is based on the premise that I've done it before and know what mistakes not to make again. I know which sales guys to work with, and who to avoid like the plague (you know who you are!). I know which distributors work well with tricky technologies, and which resellers are more than box shifters. Initially I am planning to do this fairly intensively with only a couple of companies, but in time I would like this to be THE place that people come to in order to break into the UK market.

So, you can see that the people who will be most aware of the benefit that SecurEMEA can bring will be companies who have already tried and been burnt to some degree. Those who haven't tried might be persuaded that they can do without, and indeed, it is possible, especially with products that have a very long sales cycle.

If you can't afford to wait that long, or spend that much cash however, why not drop me a line?

Visio in Ascii [The InfoSec Blog]

Posted: 25 Apr 2008 09:51 AM CDT

http://search.cpan.org/dist/App-Asciio/lib/App/Asciio.pm

This gtk2-perl application allows you to draw ASCII diagrams in a modern (but simple) graphical application. The ASCII graphs can be saved as ASCII or in a format that allows you to modify them later.

So what does this have to do with security?

Well, one of the security risks we face is that Microsoft Office applications (among others) have embedded Visual Basic, often with extensions. These have been susceptible to macro viruses.

Yes, I’m aware that there are mechanisms for defending against this, but they are software, and we know that in the long run errors will be introduced in upgrades or patches and the bad guys will find alternative avenues of attack. The real problem is that VB is embedded in the application.

So this is a solution. We go back to the “data is data” era, when data was not executable. See also all the “why HTML mail is evil” articles - go Google for them.

Happy Friday.

How not to hire a security executive who’s on parole [The InfoSec Blog]

Posted: 25 Apr 2008 08:53 AM CDT

http://www.networkworld.com/news/2008/042308-how-not-to-hire-a.html?page=1

One of the first questions to ask during an audit is “Do you have Policy?” (which is part of the ISMS - see ISO-27001)

Then, after checking that for completeness and sufficiency, start checking whether it's communicated to staff and whether it's followed.

Since policy defines how an organization is to be run, this is the top-down approach. It's why bottom-up things like pen testing are a waste of time. The policy-driven approach ensures that there are processes and procedures in place, and it allows for metrics and for improvement of both compliance and the detailed processes themselves (CMM, etc.).

See also “Who Ya Gonna Call?”

Security Briefing: April 25th [Liquidmatrix Security Digest]

Posted: 25 Apr 2008 07:49 AM CDT

FSA To Banks, Smarten Up [Liquidmatrix Security Digest]

Posted: 25 Apr 2008 07:35 AM CDT

Things don’t look so rosy in the FSA report with respect to how financial institutions handle data security.

From eGov Monitor:

The Financial Services Authority (FSA) has published today its report on Data Security in Financial Services. Whilst it might make for uncomfortable reading, this is a timely report from the FSA, and its relevance extends beyond the firms that the FSA directly regulates. The omissions the FSA identifies and standards it expects are not peculiar to the financial services industry.

The underlying message from the Information Commissioner’s Office (ICO) and the FSA is clear in the report: they are going to get tough on firms that are not taking security breaches seriously enough, and firms ignore the guidance in this report at their peril.

Read on.

Article Link

Chocolate Owns Your Passwords [Darknet - The Darkside]

Posted: 25 Apr 2008 06:44 AM CDT

The same old story: if you ask people for something, they will most likely give it without thinking of the consequences. Even more so if you are a pretty girl, and in this case you offer someone chocolate. Hey, who doesn’t love chocolate? I have to say I don’t love it enough to give out my [...]

Read the full post at darknet.org.uk

Are these “Top 10″ dumb things or not? [The InfoSec Blog]

Posted: 25 Apr 2008 06:44 AM CDT

At “10 dumb things users do that can mess up their computers”, Debra Littlejohn Shinder brings up some interesting common failings. Let's look at her list, because I have a different take.

#1: Plug into the wall without surge protection
#2: Surf the Internet without a firewall
#3: Neglect to run or update antivirus and anti-spyware programs
#4: Install and uninstall lots of programs, especially betas
#5: Keep disks full and fragmented
#6: Open all attachments
#7: Click on everything
#8: Share and share alike
#9: Pick the wrong passwords
#10: Ignore the need for a backup and recovery plan

Well, they seem interesting, but …
The big “but” gets back to one of my favourite phrases:

Context Is Everything

Very simply, in my own context most of this is meaningless. It may well be in yours as well.

Let's first look at the stated and unstated context, which should have been made clear up front.

The author mentions Windows XP a couple of times without making it clear which version, with only a passing reference to other versions of Windows. There is no mention of any other operating systems: Mac OS X, Linux, BSD, OLPC, or even the embedded systems in PDAs. I can surf the net with my trusty old Newton. More on that in a moment.

She also fails to mention the context in which the computer is being used. Is this a home personal system, a home office system, a small business, or a larger commercial enterprise with its own IT and InfoSec departments? This matters not only from the point of view of meeting these points but also because of the legal ramifications.

Many of us in InfoSec use the terms “diligence” and “care”. We usually omit the word “due” so as to avoid the legal meaning and the gunnysack of baggage that gets dragged in. ‘Diligence’ means a constant and earnest effort and application. ‘Care’ means the effort is serious and devoted. Neither of these terms is used in the article. However, one would reasonably expect these to be part of the approach in business of any kind, or even in a home setting where personal assets need to be protected and perhaps children need to be cared for. The author fails to mention this too.

Plug into the wall without surge protection.

I’d rate this as ‘necessary but not sufficient’ for a number of reasons.
First and foremost the author does not make it clear that a UPS and a surge protector are not the same thing. Yes, many UPSs include surge protection, but think about these two things for a moment.

  1. You can have surge protection but still lose data when the power fails.
    This isn't just about the work that you've done since the last 'save', although losing that can be serious. The loss of power may occur at a critical point for the hardware, causing corruption of the file system (disk drive, networked or USB). It is almost certainly going to cause a loss of your train of thought, and that may be very serious.
  2. Surge protection wears out.
    Most people are unaware that surge protectors have a limited life, and it's not measured in time but in how much energy (aka surges) they have to absorb. So one day your surge protector isn't going to protect you any more. FINIS. Game over. The surge gets through and your machine is toasted.
    How do you know when your protector has used up its surge capacity? Generally you don't, though some newer ones do have an indicator.
    What can you do about it? Not a lot, except buy a new one.

That’s why I like using a high-end laptop as a workstation. The power-brick and the battery do protect against surges and the battery acts as UPS. Sort of.

But please note that not all UPSs are created equal. It's not just about battery power. I'll save that for another article.

Surf the Internet without a firewall.

While this is good advice in general, the specifics are the killer.

My firewall is a separate machine, an old HP Vesta P1 with 256MB of RAM, a 30Meg drive, and a CD reader. If you feel so inclined you could probably pick up something like this from the Salvation Army for about $10.
I run the IPCop firewall on it. I've run other firewalls, including the Mandriva MDF with its sophisticated GUI. I loved playing with Shorewall, which is one of the most flexible open source firewalls I've met. But IPCop is small, fast, and reliable. It has plugins for caching and for handling Dynamic DNS, as well as many other functions if you choose to install them.

Why have I chosen to run a separate firewall rather than the software or modem based approach that the author of the article suggests? There are many reasons, but prime among them is the principle of Separation of Duties. I'm a firm believer in the idea that each thing should do just one thing and do it well, and the idea of a 'security appliance' or of running the firewall on the host (i.e. the target) doesn't appeal to me.

Perhaps there should be a “solely” in there.

Neglect to run or update antivirus and anti-spyware programs

This is another “Context is Everything” situation.

At home, even though I have an 'always on' broadband connection, I have a Linux based firewall, and all my servers and laptops run Linux. It's not that Linux is guaranteed 100% protection against all forms of malware, but at least it's not the highly vulnerable situation of Windows that necessitates running AV software.

And let's face it: as Bob Bergener at VMyths points out, AV software is getting less and less effective, and the cycles of malware are getting more capable, more aggressive, and more insidious.

But it's not just me, and it's not just Linux. I have a number of high profile clients who put AV software on their corporate laptops and workstations … but it is disabled. It's there, I'm forced to conclude, to satisfy the auditors. However, these organizations don't suffer from malware attacks, for other reasons, most notably that they have strict control over outside access. For the most part, there is none. Internal users are not allowed to use the Internet except under special conditions. Incoming and outgoing mail is aggressively filtered.

We’re beginning to see this kind of access control with products from IronPort (Cisco) and Proofpoint. These are “appliances” that are more accessible to smaller sites. In all probability most users of these products aren’t going to use their full capability and will still want another layer of protection against malware.

Sadly, the most effective protection is also the weakest and the most easily subverted: user awareness and discipline. Don't open unexpected attachments, don't download and run strange programs, don't visit dubious sites. See below.

Please don’t think that I’m saying having a firewall is an excuse for not keeping your software well maintained. There are many reasons for keeping up to date quite apart from making the software attack-proof. The mantra “If it ain’t broke, don’t fix it” is not a reasonable stance with something as complex as software. It may be broken in ways that you don’t see or haven’t seen yet. This is quite different from choosing not to apply a change because you’ve analyzed it and determined that it is not appropriate.

And let's not forget that a firewall has lots of limitations. Most are designed to protect the internal network from the outside world and assume that the internal network is trustworthy. Hence it's no use at all if an internal machine is infected by some other means.

Install and uninstall lots of programs, especially betas

I was at IT360 and heard David Rice, the author of “Geekonomics”, speak on software quality. One point he made was that the large software vendors treat all users as the “beta testers” for their products. He says:

“Software buyers are literally crash test dummies for an industry that is remarkably insulated against liability, accountability, and responsibility for any harm, damages or loss that should occur because of manufacturing defects or weaknesses that allow cyber attackers to break into and hijack our computer systems.”

So while this point may be a good one, we are all on the roundabout and can’t get off.

Keep disks full and fragmented

This is a meaningless and unhelpful generalization.

Firstly, I see an amazing amount of nonsense published about de-fragmentation. It warrants a posting and discussion in its own right, but please, don’t buy into this myth.

The second thing is that I DO keep a disk full and never run de-fragmentation on it. But then I have my hard drives partitioned. One partition contains the operating system, just what is needed to boot; another contains the system and libraries. These are pretty full, and apart from upgrades and occasional patches (which are less frequent and less extensive with Linux than with Windows) there is very little "churn" on these partitions, so I can leave them almost full. This includes auxiliary programs where I keep on-line documentation ("manual pages") and things like icons, wallpaper, themes, and so on.

Next up is the temporary partition, /tmp in Linux parlance. It's the scratch workspace. It is cleaned out on every reboot and by a script that runs every night, but most programs clean up their temporary files after themselves. This partition looks empty most of the time. There's no point de-fragmenting it and no point backing it up.

Another few partitions deal with what can be termed "archives". These may be PDFs of interest or archived e-mail. Backup of these is important, but they are in effect 'incremental' storage: there is no 'churn', just growth, so de-fragmentation is completely irrelevant.

So what’s left? Partitions that deal with “current stuff”: development, writing, and so forth. These are on fast drives, aggressively backed up, and use journaled file systems for integrity.

But overall I simply don’t do ANY de-fragmentation. I think it’s a waste of time, for a number of reasons.

The first is that it simply makes no sense in any of the contexts above. The second is that given high speed disks and good allocation strategies in the first place, it's not going to help.

The third and most significant is that since I use volume management software it can’t possibly help.

I use LVM on all my Linux platforms to manage disk allocation. If you read up on it you'll see that a contiguous logical volume may not correspond to a contiguous physical allocation on the disk. Since LVM subsumes RAID as well, it may not even be on a single physical drive.

Now, after reading that article, speculate about how I do backups :-)

Open all attachments

Good advice at last! Sadly, human nature seems perverse. People seem to get sucked into reading attachments and visiting dubious web sites (see below), and admonitions don't seem enough to change their behaviour.

Perhaps evolution has failed us; perhaps we need a Darwinian imperative so that people foolish enough to do this can no longer contribute to the gene (or is it meme?) pool.

Click on everything

More good advice, more efforts to overcome human stupidity.

Share and share alike

Context is everything

Oh dear. This doesn’t make sense any more. To be effective in business you do need to share data. I don’t need to go into detail, but I will mention that most businesses need a web site to share information with customers, prospects, and investors.

There are now many web-based businesses based on sharing: Flickr, Facebook, LinkedIn, and the like.

And let's not forget that the whole “Open Source” model is about sharing.

Pick the wrong passwords

There are two things I object to here.
The first is the hang-up with passwords. They are, to coin a phrase, “so twentieth century”.

The problem isn’t dreaming up passwords - we get nonsense like this:

Let's face it, there's no real problem dreaming up passwords.
Certainly not for me. I had to learn by heart poems and passages from famous works, chunks of Shakespeare and that kind of thing, at school. I can always pull out something, take the first letters, and mangle them however.
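
As a toy illustration of that first-letters trick (my own sketch; the substitutions are arbitrary):

    # Toy sketch of the "memorized passage" trick: take the first letter
    # of each word, then apply a couple of arbitrary manglings.
    # Illustrative only; not a recommendation for a password policy.
    def phrase_to_password(phrase):
        initials = "".join(word[0] for word in phrase.split())
        return initials.replace("o", "0").replace("i", "1").capitalize()

    print(phrase_to_password("shall i compare thee to a summer's day"))
    # -> S1cttasd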

But the real problem, whether you have this repertoire or whether you use a software generator tool, is remembering them. Oh, and forgetting them when you have to change them. Oh, and knowing which one applies where.

This is the point that Rick Smith makes in his book, “Authentication”, and it is why people write down passwords, or use passwords that are essentially mnemonics, or use the same password in many situations.

Twenty years ago I only had to deal with a few passwords; now I have to deal with hundreds. Almost every web site I visit demands that I log in.

We have reached a point where using 'strong' password technology is becoming a liability, and using passwords is in and of itself an increasing risk. The likelihood that a new employee will re-use a password he's used on a public web site for his corporate login is high; the load on his memory is just too great. This is why there is a market for software that remembers your passwords. But how portable is it? USB drives, you say? I seem to lose USB drives with alarming frequency.

So, how happy are you doing financial transactions over the Internet using just a password as authentication, even if it is over an SSL connection? I'm not very happy. This is a subject that deserves a long blog article in its own right, but let's just point out that banks in Canada and the US have chosen not to use the more secure "two factor" and "one time pad" authentication systems that are normal for European and Scandinavian banks, and so have put their customers at risk. Not all the risks have to do with the Internet connection.

Some banks have moved to what they call "two factor" authentication. Well, it certainly isn't what the security industry calls "two factor". At best it might be called 'two passwords': instead of asking you just for your password, they will ask for the password and then one of a set of previously agreed questions, like "what was the colour of your first car". It gives the illusion of security, but it's just a double password. Compare it to having a lock on your screen door and your front door: if the thief comes in by breaking a window, or by stealing your keys (or the book you have your passwords written down in, since you have so many of them!), then this doesn't help.

Real “two-factor” authentication uses two different kinds of things. A password is “something you know”. The colour of your first car is also something you know. It's also something other people can know.

A real second factor would be “something you have”, like the bank client card that you use with your personal identification number (PIN), which is “something you know”. Both have to be used together. Someone might know, or guess, your PIN without you knowing about it, but if you lose possession of the card, you do know about it.

Another factor is “something you are”: biometrics, such as recognition of your fingerprint or iris along with a password.

Of course these more secure methods require more technology, which is why most web sites fall back to the only thing they are sure you have - a keyboard.

Rick Smith’s book is “Authentication: From Passwords to Public Keys”, ISBN 0201615991.

See his home page at http://www.smat.us/crypto/index.html
He refers there to:

A companion site, The Center for Password Sanity, examines the
fundamental flaws one finds in typical password security policies
and recommends more sane approaches.
http://www.smat.us/sanity/index.html

See also ‘The Strong password dilemma’ at http://www.smat.us/sanity/pwdilemma.html

And not least of all the cartoon at http://www.smat.us/sanity/index.html

Seriously: go read Rick Smith’s book.

There is a lot of nonsense out there about passwords, and a lot of it is promulgated by auditors and security wannabes.

Ignore the need for a backup and recovery plan

As you can see above, I’ve made things easy for backups.

One reason for this is that the real problem is not having a backup and recovery plan; it is the doing of it, making it a habit, a regular part of operations.

That is one reason most larger organizations use centralized services, so that the IT department takes care of backups. It's a major incentive for "thin clients", where there is no storage at the workstation that needs to be backed up.

It's also one reason that I partition my drives: so I can identify what is 'static' and what is 'dynamic'.

One of my great complaints about Microsoft Windows is that everything is on the C: drive. I very strongly recommend partitioning your drives. Having a D: drive and remapping your desktop and local storage there makes things so much easier. It also helps to have a separate partition for the swap area and for temporary files. Sadly, while this is possible and is documented (search Google for details), it's not straightforward. Which is sad, because it is a very simple and effective way of dealing with many problems, not the least of which is that you can re-install Windows without overwriting all your data.

Spammer, Sharp Like Beach Ball [Liquidmatrix Security Digest]

Posted: 24 Apr 2008 08:15 PM CDT

Wow, how stupid do they think I am?

It’s a rhetorical question, wise guy.

Here’s a phishing email that I received this evening.

——————–
From: Chianelli, Russell R.
Date: Thu, Apr 24, 2008 at 8:05 PM
Subject: UNICEF ORGANISATION DONATION AWARDED PIN NUMBERS U-777-1815, D-01-47 CONTACT INFOS (**********@yahoo.com.hk)
To: undisclosed-recipients

UNICEF ORGANISATION DONATION.
Unicef Organisation

Concern.
The Unicef Orgnasation, Would like to notify you that you have been chosen by the board of trustees as one of the final recipients of a cash Grant/Donation for your own personal, educational, and business development. The Unicef Orgnasation was formed in 1947 after WWII to help children displaced by the war. It was then called the United Nations International Children’s Emergency Fund. The United Nations Organization (UNO) and the European Union (EU) was conceived with the objective of human growth, educational, and community development.
To celebrate the 27th anniversary program, The Unicef Organisation is giving out a yearly donation of One Million Four Hundred and Seventy Thousand United States Dollars. These specific Donations/Grants will be awarded to 70 lucky international recipients worldwide; in different categories for their personal business development and enhancement of their educational plans. At least 17% of the awarded funds should be used by you to develop a part of your environment. This is a yearly program, which is a measure of universal development strategy.
Based on the Continental selection exercise of internet,data base websites and millions of supermarket cash invoices worldwide, you were selected among the lucky recipients to receive the award sum of US$1,470,000.00 (One Million Four Hundred and Seventy Thousand United States Dollars) as charity donations/aid from the Unicef Orgnasation and the UNO in accordance with the enabling act of Parliament. (Note that all beneficiaries email addresses were selected randomly from Various internet Job websites or a shop’s cash invoice around your area in which you might have purchased something from).
You are required to contact the Permanent Secetary below for qualification documentation (ed. note: emphasis added) and processing of your claims. After contacting our office, you will be given your pin number, which you will used in claiming the funds. Please endeavor to quote your Awarded pin numbers (U-777-1815, D-01-47) in all discussions.
Permanent Secetary- Mr. Peter Geroge
Email: *********@yahoo.com.hk
Finally, all funds should be claimed by their respective beneficiaries, no later than14 days after notification. Failure to do so will mean cancellation of that beneficiary and its donation will then be reserved for next year’s recipients. On behalf of the Board kindly, accept our warmest congratulations.
Happy New Year.

Regards.
Sir. williams Charlton
(Online Coordinator)

Happy New Year…riiiight.

Now, call me crazy, but I’m fairly certain that Unicef doesn’t use Yahoo for their email. In all seriousness, if you receive an email like the aforementioned, delete it.

Now, where did I leave that whack-a-mole mallet?

PCI 6.6 clarification - Am I missing something? [Rory.Blog]

Posted: 24 Apr 2008 03:50 PM CDT

Recently there have been some clarifications around a couple of sections of the PCI-DSS, in particular one on section 6.6.

This update has created some comment and articles, but none of the ones I've read has focused on the main point, as far as I can see...

Previously there were two options for satisfying Section 6.6

  • A code review (either manual or tool assisted) of in-scope web applications, or
  • Placement of an appropriately configured Web Application Firewall to protect the application

Now (unless I'm reading this incorrectly) there's an additional one:

Completion of a manual or assisted web application vulnerability review...

The confusing part is that this third option isn't split out but is listed under the "application code review" section.

My feeling is that this will affect a lot of merchants (and vendors) who were planning on spending money on either WAFs or code reviews and will now use a standard web application review instead (which they may already be undertaking as part of other security work).

Another interesting point, which I don't know the answer to, is whether a single review which covered both penetration testing techniques and web application assessment techniques could be used to satisfy both 6.6 and 11.3...

Things to know: remote workers [varie // eventuali // sicurezza informatica]

Posted: 24 Apr 2008 03:15 PM CDT

4 things teleworkers should know about corporate data security


Be aware that almost every data decision has a security implication. Whoever decides to connect to a hotspot, or to some network cable they find lying around, does so at their own risk. Too bad it's a risk for the company as well!

Your children aren't afraid to download. The work PC is the work PC. Period.

Be a responsible gadget geek. The PDA or the smartphone (yours truly carries both) and other gadgets can hold sensitive data. Almost all of them have a lock code.

Don't forget it -- shred it. Sacred words! If you're one of those people who shred things and throw the confetti into two different bins, we have something in common.
