Monday, April 28, 2008

Spliced feed for Security Bloggers Network

Big Brother is *really* Watching You! [/dev/random] [Belgian Security Blognetwork]

Posted: 28 Apr 2008 07:59 AM CDT

Take care if you travel to the United States. It has been a long story, but the authorities now have the right to analyze the data on your hard drive! The US Court of Appeals has confirmed their right to check the files stored on incoming notebooks!

Source: http://www.reseaux-telecoms.net/actualites/lire-les-douaniers-americains-ont-bien-le-droit-de-copier-les-disques-durs-des-visiteurs-18062.html (French link)

Another viruswriting contest ... oh no, not again! [Wavci] [Belgian Security Blognetwork]

Posted: 28 Apr 2008 06:46 AM CDT

There will be a new contest at the Defcon hacker conference this August: Called Race-to-Zero, the contest will invite Defcon hackers to find new ways of beating antivirus software. Contestants will get some sample virus code that they must modify and try to sneak past the antivirus products. Awards will be given for "Most elegant obfuscation", "Dirtiest hack of an obfuscation", "Comedy value" and "Most deserving of beer"... The contest was announced Friday. The contest organizers say that they're trying to help computer users understand just how much effort is required to skirt antivirus products. The Race-to-Zero sponsors hope to present the contest results during Defcon. The contest is not organized by Defcon, but is one of the unofficial events that the show's organizers have encouraged attendees to arrange. Defcon runs Aug. 8 to Aug. 10 at the Riviera Hotel & Casino in Las Vegas.
In my opinion this is very unethical; it's like creating new samples of a biological virus, and that's something you also try not to do, isn't it? And actually, encouraging people to do this as a contest is really over the top. I predict that a lot of AV and security vendors will have plenty of comments on this topic in the coming weeks!

Difference between ITIL v3 and ISO 20000 [Security4all] [Belgian Security Blognetwork]

Posted: 27 Apr 2008 06:58 PM CDT

I know ITIL, but I hadn't heard of ISO 20000 before. So let's have a look at it and begin with the following whitepaper: ITIL® V3 and ISO/IEC 20000, by Jenny Dugmore and Sharon Taylor. This...

Hack.lu 2008 conference coming on the 22nd - 24th of October [Security4all] [Belgian Security Blognetwork]

Posted: 27 Apr 2008 06:32 PM CDT

There were some rumors that there wouldn't be a new Hack.lu. But luckily, the rumors weren't true. I happened to see this "small" announcement on their website last week. Announcement: Yes there...

Targeted attacks using Acrobat's pdf and a little new trick [Security4all] [Belgian Security Blognetwork]

Posted: 27 Apr 2008 05:49 PM CDT

At the beginning of February, a critical security patch for Acrobat Reader was released. The vulnerability it fixed is now being actively used in targeted attacks. Here is an interesting analysis from SANS: Ever since...

Why right brain people will take over the world [Security4all] [Belgian Security Blognetwork]

Posted: 27 Apr 2008 10:40 AM CDT

I used to consider myself a 'leftie': a logical and analytical person with not much creativity (or need for it). Most educational institutions offer either exact-science or creative curricula...

Mass Site Hack Proves no Site is Truly Safe [Sunnet Beskerming Security Advisories]

Posted: 27 Apr 2008 10:33 AM CDT

There has been a lot of coverage of a widespread set of web server attacks (estimated at more than half a million sites) that have been taking place for a number of weeks, using an unfortunately common SQL injection opportunity to take control of back-end databases and of the sites themselves. So much concern and confusion has surrounded what is going on that Microsoft's Security Response Center has released a statement to clarify the nature of the attacks as reported to them. Although a new IIS vulnerability has been disclosed in recent weeks, the attacks are only making use of poor site and database maintenance practices, using SQL injection to exploit sites.

For visitors to an affected site, JavaScript is used to try to download and run malware that then targets a number of commonly used technologies in order to gain full control over the system.

It goes to show that input validation is a critical component of a site's security picture, and it is a problem that is still not being properly addressed by many sites, including plenty that should know better.
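To illustrate the point with a minimal sketch (the table, column, and payload below are invented for illustration and have nothing to do with the actual attack code), compare a query built by string concatenation with a parameterized one:

```python
import sqlite3

# Set up a throwaway in-memory database with one user record.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input is spliced into the SQL text.
    query = "SELECT name FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized: the driver treats the input strictly as data.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # every row comes back: the injection worked
print(find_user_safe(payload))    # []: no user literally has that name
```

The fix is a one-line change, which is exactly why "still not being properly addressed" is so frustrating.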

If anything else is needed to concern site operators, it is research from David Litchfield that demonstrates an almost-generic attack method against Oracle databases.

In one simple set of attacks, previously trustworthy sites can now no longer be considered trustworthy and it is another blow to services that tout their ability to mark a site as being 'Hacker Safe' or otherwise safe for visiting (like SiteAdvisor).

DefCon Competition has Antivirus Vendors Complaining [Sunnet Beskerming Security Advisories]

Posted: 27 Apr 2008 09:57 AM CDT

DefCon is known for a range of 'out there' type activities and presentations and it looks like this year is going to be no different. A contest that is being organised on the sidelines of this year's convention is already raising eyebrows and complaints from around the Information Security industry.

In a nutshell, the aim of the contest is to modify malware samples so that they pass through a number of antivirus scanners without detection, while still retaining their malware capability. It could be seen as a polymorphism competition: how much can you change the code while still retaining the same function?

What the contest is seeking to achieve is nothing more than what is happening continuously on the Internet, where malware developers are continually fine-tuning their software to best avoid detection. It should also show up the antivirus tools that are making use of poor signature detection mechanisms and those that are using weak heuristics to detect previously unknown malware. The big problem for the antivirus developers is that it is possible to effectively drive a truck through the holes in their systems and it isn't going to take much for competitors to bypass most tools. It will be interesting to see how the competition organisers set about increasing the difficulty of each round.
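As a toy illustration of why exact-match signatures are so fragile (the "sample" bytes and the one-hash scanner model below are entirely hypothetical, far simpler than any real AV engine):

```python
import hashlib

def signature(sample: bytes) -> str:
    # A naive "signature": a hash of the full sample, as a weak
    # exact-match scanner might store in its definition database.
    return hashlib.sha256(sample).hexdigest()

original = b"\x90\x90\xeb\x05not-actually-malicious"
known_bad = {signature(original)}          # the scanner's definitions

# One trivial polymorphic step: XOR-encode the body with a one-byte key.
# (A real variant would also need a decoder stub to stay functional.)
key = 0x41
variant = bytes(b ^ key for b in original)

print(signature(original) in known_bad)   # True: detected
print(signature(variant) in known_bad)    # False: new hash, scanner is blind
```

Anything beyond exact-match (emulation, heuristics, behavioral detection) raises the bar, which is presumably what the contest rounds would probe.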

Antivirus developers are complaining about the competition, though most of the complaints sound like the developers are having a hard time keeping their technology within spitting distance of the malware authors. Even with the complaining, it probably won't take long for the competition samples to appear in definition files and in the count of malware types being detected. It is strange, though, how competitions like CTF, or the recent 0-day competition at CanSecWest, do not attract much complaint, but as soon as antivirus or antimalware tools are targeted it is too much for people.

It is the latest in a number of interesting competitions where the practical attack value of what is being done is greater than in other contests. It ranks alongside minuscule-XSS competitions and archives of XSS / SQL injection vulnerable sites.

Preparing for the EICAR conference 2008 in Laval, France [Wavci] [Belgian Security Blognetwork]

Posted: 27 Apr 2008 06:42 AM CDT

I'm preparing to go to the EICAR conference this year; just before it, however, I will make a stop at the AMTSO meeting in Amsterdam (Netherlands). You can find more info about both conferences and organisations at http://www.eicar.org and http://www.amtso.org
Let's hope we get interesting results at the AMTSO meeting, where the industry wants to improve malware tests.
I have also heard a lot of gossip about our nice EICAR conference. Will it go on this year or not, for instance, was one of the questions... well, I can assure you: it will go on, and the venue seems to be more beautiful than everybody thinks at this moment.

The most secure table at the Data News Award Gala 2008. [Wavci] [Belgian Security Blognetwork]

Posted: 27 Apr 2008 06:38 AM CDT

Last Thursday I was at the Data News Awards Gala. About 13 awards were given to the most innovative or interesting companies of the past year. During the breaks we listened to some nice live music from Sophie and Gunther Neefs. Cisco got the award for best security company of the year. It's a pity there was no award for the most secure table; that should have been ours, with us (Kaspersky Lab), Apple, and Guy Kindermans, the security journalist from Data News, seated at it. You can find more at the Data News website.

The originator of “Red Heart China” gets his website hacked!! Europeans responsible? [The Dark Visitor]

Posted: 26 Apr 2008 09:53 PM CDT

I started to wonder why all those hearts were appearing on Chinese blogs, and the answer may just be the Red Heart China MSN campaign:

About 2.3 million Chinese MSN users have added a pattern of “red heart” and the English word “China” in front of their online signatures to show their unity and patriotism.

MSN China spokesman Feng Guangshun released the figure on Thursday. Many more people have opened their MSN accounts to find a message which asked them to add the “red heart” and “China” in front of their signatures.

A bit more on Red Heart from the Wall Street Journal:

When Xingrong Chen logged into MSN Messenger yesterday, she found a message from a friend inviting her to join China's latest Internet craze:

"Please add (L) China after your name on MSN, to show the unity of Chinese people around the world. Please send this message to your friends on MSN."

She followed the instruction and within a second, a red heart icon and the word "China" appeared beside her user name.

"I have no idea who first raised this idea, and it doesn't matter," the 24-year-old Shanghai resident said. "My MSN contact list is red all over now!"

Youku video of people explaining Red Heart China:

Well, apparently not everyone is as excited about this new wave of patriotism sweeping China. According to many news sources in China (24 April 08), the man who originated the Red Heart China signature has had his website 5sai.com hacked.

  1. CEO Chen Huaiyuan said that the day before yesterday, the 5sai.com website came under attack from four foreign IP addresses and as of last night, the attacks still had not stopped
  2. Statistical data from the 5sai.com server showed that the IP addresses were located in Europe
  3. During the high frequency periods of the attack they were receiving two to three attacks every second and during the low peaks it was three to four attacks every minute

Interview with a professional hacker [Robert Penz Blog]

Posted: 26 Apr 2008 03:58 PM CDT

After the DDOS attack against my blog this week, I decided to go to the channel I wrote about in my initial hacker post, as I believed the most likely attacker was the hacker I wrote about. After I joined the channel, the hacker opened a query to identify me, as he thought I was a bot. I wrote to him that I had some questions and that I wanted to talk to him. He agreed, and this post contains the important parts of the discussion and some thoughts of mine. He calls himself xeQt.

The first question I asked was whether he did the DDOS attack against my blog. He said that he doesn't do DDOS attacks and that my blog is no challenge for him; he has other methods to get even. After some discussion he told me that he doesn't even know my blog; you will see in a later post that this is most likely not entirely true. Within that discussion I also posted a link to my initial post. He said that he wouldn't click on it, but later said that he has a VPN for this anyway. As I will write in a second post, he did click on it.

He then wanted to know whether he had gotten one of my servers, which I could deny, as it was a friend's server. This discussion then led to him saying that I should get used to DDOS attacks, since he gets them daily: he writes negative things about other hacker groups, which then attack him.

I then asked whether I could use this discussion on my blog, and he said yes, as he has nothing to hide and only a miracle will get him busted. I asked him if he has nothing to lose. He told me that he has no life, so it doesn't matter anyway, and that he does not have his own internet connection, which is why he believes he is safe. I guess he is using an open WLAN of one of his neighbors.

My next question was what he gains from hacking those servers. He answered: "I sell them to scammers, spammers". This led to the question of how long a hacked server normally stays online. He told me that this can vary from one day to one year, depending on what is done with the server. I can tell that this is quite true, as most of the time I only get called if a machine has unusually high CPU usage, generates too much traffic, or a mail server administrator detects spam mails coming from one of the servers in his network.

He then said that most server administrators don't have much knowledge about Linux and don't secure their systems, so he "secures" the servers for them and sells them to spammers or to people who need root access or botnet clients. By securing he means that he closes the attack vector he used to gain access to the system, so no other hacker can take that machine from him. To give a better picture of the size of his operations: he told me that he hacks 500 servers daily. This means that he does not look for special targets but for the lowest-hanging fruit, to which he can gain automatic or semi-automatic access, in order to make a living.

We also covered some other, more technical, points, but these were the most interesting parts for my readers. I want to thank xeQt for talking with me and allowing me to write about our discussion. I will write a second post with some plausibility checks, as mentioned above, so stay tuned.

Afternoon Delight [BumpInTheWire.com]

Posted: 25 Apr 2008 09:29 PM CDT

Gonna find my baby, gonna hold her tight
gonna grab some afternoon delight.
My motto’s always been; when it’s right, it’s right.

How’s that for the worst intro to a post ever? I did manage to take this afternoon off and partake in a little socialization in the Power and Light district of KC. I hadn’t been down there yet, so it was all new to me. I have to admit it’s a great step forward for Kansas City. It almost seems weird to have that kind of area now, seeing as downtown has been a ghost town for so long.

Today’s reward did not come about unearned.  The crazy DHCP problem I’ve written about in the past showed up again yesterday.  Fortunately, some other odd behavior was reported along with it this time.  In addition to the odd DHCP lease issue, there were a lot of disconnects reported.  Yesterday afternoon TB noticed a couple of instances of this problem being reported for a department whose ports were in the incorrect VLAN.  I thought nothing of it, because it was only a couple, and cables had been moved around so much in the troubleshooting of this problem that a port or two being in the wrong VLAN didn’t seem unreasonable.  This morning another piece of the puzzle was added when we got actual visual confirmation of an endpoint getting an IP address from a subnet different than what the port was configured for.  A little “debug dhcp detail” on the switch and the puzzle became a little clearer.  The log and terminal monitor revealed this…

%CDP-4-NATIVE_VLAN_MISMATCH: Native VLAN mismatch discovered on FastEthernet2/20 (210), with CSC06INDWC FastEthernet3/29 (147).

Looking at this, it doesn’t take a rocket scientist to realize something was amiss.  What was on port 2/20?  What was on port 3/29?  “Show cdp neighbor” put the final piece of the puzzle in place.  Strangely, this switch was a neighbor to itself…twice.  It was connected to itself on ports 2/20 and 3/29.  Ahhh HA!  A loop.  This native VLAN confusion exposed another issue: a misconfigured VLAN interface.  The VLAN interface had “ip helper-address” statements referencing DHCP servers in a different location.  The loop was sending DHCP requests to the incorrect scope at a different location, which in turn was filling up the DHCP scopes there.  This explained the intermittent inability to obtain a DHCP address.
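For what it's worth, a quick way to pull the useful fields out of messages like the one above; the field names are my own guesses based on that single sample line, so treat this as a sketch rather than a parser for every CDP message variant:

```python
import re

# Pattern for the %CDP-4-NATIVE_VLAN_MISMATCH message quoted earlier.
pattern = re.compile(
    r"%CDP-4-NATIVE_VLAN_MISMATCH: Native VLAN mismatch discovered on "
    r"(?P<local_if>\S+) \((?P<local_vlan>\d+)\), with (?P<neighbor>\S+) "
    r"(?P<remote_if>\S+) \((?P<remote_vlan>\d+)\)"
)

line = ("%CDP-4-NATIVE_VLAN_MISMATCH: Native VLAN mismatch discovered on "
        "FastEthernet2/20 (210), with CSC06INDWC FastEthernet3/29 (147).")

m = pattern.search(line)
if m:
    # A neighbor name matching the local hostname would flag a self-loop.
    print(f"{m['local_if']} vlan {m['local_vlan']} <-> "
          f"{m['neighbor']} {m['remote_if']} vlan {m['remote_vlan']}")
```

Feeding a syslog archive through something like this would have surfaced the self-neighbor loop a lot sooner than eyeballing the terminal monitor.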

As Tigger would say…this mystery is history!

Not a CISSP ?!?! [Carnal0wnage Blog]

Posted: 25 Apr 2008 03:35 PM CDT

Chris Eng over at Veracode has an interesting post on their blog about Immunity Inc.'s "Not a CISSP" button.

If you've been under a rock, here is the button:


I've got mixed feelings about the button. For one thing, I've seen a couple of CISSPs wearing that button at Defcon/ShmooCon; I guess they were practicing some SE. But secondly, it's easy for people in the top 5% of the security game to say you don't need certifications, because they (most importantly) already have that level of experience and name recognition. Dave Aitel doesn't need to take a test and throw some letters after his name to prove to anyone he knows his stuff; he proved himself long ago. But I can't imagine he came out of the womb with that much fu. Maybe he did, I don't know.

For us mere mortals who are just trying to get a paycheck and gain some experience, a lot of places require certifications to be on the contract, to get the job, or even to get your resume to the hiring manager. For .mil/.gov this is because of 8570. To me, requiring certifications is a step in the right direction. Since no one has come forward with a scalable "hands-on" way to certify people, that paper test will (for now) have to do. At least people are trying to get qualified people into the slots; saying a CISSP or some other cert makes you automatically qualified is another matter.

I'll be the first one to agree with Chris that, "like many security certifications, it's an ineffective measure of a security professional's practical abilities." See my CEH != Competent Pentester post. But the game is the game. If you have to sit for a test to do or get the job, then stop bitching, take your test, and move on with it. If you want to stand your ground and just bitch and not get the job, enjoy your time on the Geek Squad.

Dissecting the Automatic Patch-Based Exploit Generator [Observations of a digitally enlightened mind]

Posted: 25 Apr 2008 01:41 PM CDT


There has been a lot of recent discussion of the Automatic Patch-Based Exploit Generation paper (here), and although it is compelling, it is far from the mass exploit-generating digital apocalypse one might be led to believe.  It is clear that evolving techniques are automating many aspects of what has been a very manual reverse engineering process. It is also clear that the time to protect is decreasing dramatically. From Code Red, which had a six-month lead time from patch to exploit, to recent 0-day and targeted attacks, we are quickly entering an era where traditional techniques are becoming too slow, too cumbersome, and too prone to error or service disruption to be effective.

Looking at the OODA loop (observe, orient, decide, and act), it becomes even clearer that the attacker has the advantage: their time to reverse-engineer a patch or other protection mechanism will almost always be shorter than a defender's time to reverse-engineer an attack. Additionally, the consequence of time weighs far more heavily on the defense.

If one factors in cost (c), which would include some measure of difficulty (d), expense (e), and time (t), coupled with risk, which is some measure of the penalty (p) and likelihood (l) of being caught, the results leave little doubt that automatic malware generation will not only increase in sophistication and speed, it will also increase in population exposure.
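That relationship can be sketched as a toy expected-value calculation. The additive cost model and every number below are hypothetical, chosen only to show the direction of the effect as automation drives difficulty and time toward zero:

```python
def attacker_payoff(gain, difficulty, expense, time_cost, penalty, likelihood):
    """Toy model using the post's variables: cost c is built from
    difficulty (d), expense (e), and time (t); risk is penalty (p)
    times the likelihood (l) of being caught."""
    cost = difficulty + expense + time_cost   # assume simple additive cost
    risk = penalty * likelihood
    return gain - cost - risk

# Manual reverse engineering: high difficulty and time investment.
print(attacker_payoff(gain=100, difficulty=30, expense=10, time_cost=20,
                      penalty=1000, likelihood=0.01))  # 30.0

# Automated generation: difficulty and time collapse, payoff jumps.
print(attacker_payoff(gain=100, difficulty=5, expense=10, time_cost=1,
                      penalty=1000, likelihood=0.01))  # 74.0
```

With the penalty and likelihood held constant, shrinking d and t is the whole story, which is exactly why automation tilts the economics toward the attacker.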

Anyway, back to the APEG paper, which states:

However, it is insufficient to simply locate the instructions which have changed between P and P’. In order for APEG to be feasible, one has to solve the harder problem of automatically constructing real inputs which exploit the vulnerability in the original unpatched program.

They go on to describe what looks like vulnerability checking against input-validation errors, not exploit generation. Security researchers, especially those who have developed vulnerability scanning checks, will note the difference:

Our approach to APEG is based on the observation that input-validation bugs are usually fixed by adding the missing sanitization checks. The added checks in P’ identify a) where the vulnerability exists and b) under what conditions an input may exploit the vulnerability. The intuition for our approach is that an input that fails the added check in P’ is likely an exploit in P. Our goal is to 1) identify the checks added in P’, and 2) automatically generate inputs which fail the added checks.

This would have been an extremely useful tool for the vulnerability-check writing teams at nCircle, Qualys, and the rest of the VA industry. But as for automatically generating exploit code: well, that is possible only if we bound the statement to automatically generating exploit code against input-validation errors.
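The intuition from the quoted passage can be sketched in a few lines. P, P', the buffer size, and the simulated "overflow" below are hypothetical stand-ins, not anything from the paper's actual test programs:

```python
BUF_SIZE = 8

def p_unpatched(data: bytes) -> str:
    """P: no length check; writing past the buffer simulates the bug."""
    buf = bytearray(BUF_SIZE)
    for i, b in enumerate(data):
        buf[i] = b              # IndexError past the end stands in for the overflow
    return "ok"

def p_patched(data: bytes) -> str:
    """P': identical, except for the added sanitization check."""
    if len(data) > BUF_SIZE:    # the check added by the patch
        return "rejected"
    buf = bytearray(BUF_SIZE)
    for i, b in enumerate(data):
        buf[i] = b
    return "ok"

# APEG's intuition: search for an input that fails the check added in P'...
candidate = b"A" * (BUF_SIZE + 1)
assert p_patched(candidate) == "rejected"
# ...because that same input is likely to trigger the bug in P.
try:
    p_unpatched(candidate)
except IndexError:
    print("vulnerability triggered in P")
```

Note that this yields a vulnerability *trigger*, not a weaponized exploit, which is precisely the distinction the reviewers below draw.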

This is still impressive, and I would welcome the opportunity to better understand what I am missing or what will be done with the next evolutionary leap toward automated malware generation. In the meantime, organizations must continue to move away from the traditional reactive, ad-hoc, firefighting mode of information security and toward more agile and effective processes and technologies that decrease attack vectors and dramatically reduce the time to protect.

For more detailed analysis of the paper and the reverse-engineering process I would suggest you read the following excellent posts:

Robert Graham (here)

This paper promises “automatic patch-based exploit generation”. The paper is a bit overstated, this isn’t possible. By “exploit” the paper does not mean “working exploit”. That’s an important difference. Generating fully functional exploits by reverse engineering a patch takes a lot of steps, this paper automates only one of them, and only in certain cases.

Halvar Flake (here)

Anyhow, long post, short summary: The APEG paper is really good, but it uses confusing terminology (exploit ~= vulnerability trigger) which leads to it’s impact on patch distribution being significantly overstated. It’s good work, but the sky isn’t falling, and we are far away from generating reliable exploits automatically from arbitrary patches. APEG does generate usable vulnerability triggers for vulnerabilities of a certain form. And STP-style solvers are important.

Update 4/25/2008 AW: Added additional analysis links below

IBM/ISS Frequency X Blog (here)

The paper describes a toolset that produces exploits from patches almost instantly, and goes on to discuss the implications of instant exploit generation from patches, raising the specter of worms propagating in the hours while patch distribution is still taking place.

However, the toolset that is actually described in the technical details of the paper does not provide that sort of capability. The tool does not only require a patch diff, but also either an input that reaches the vulnerable code, or an indication by the tool’s user of the specific locations where the attacker controlled data that ultimately exercises the vulnerable code is input into the program. From that information the tool produces a set of inputs that would be rejected by the patched version.

Are these “Top 10” dumb things or not? [The InfoSec Blog]

Posted: 25 Apr 2008 06:44 AM CDT

In “10 dumb things users do that can mess up their computers”, Debra Littlejohn Shinder brings up some interesting common failings. Let's look at her list, because I have a different take.

#1: Plug into the wall without surge protection
#2: Surf the Internet without a firewall
#3: Neglect to run or update antivirus and anti-spyware programs
#4: Install and uninstall lots of programs, especially betas
#5: Keep disks full and fragmented
#6: Open all attachments
#7: Click on everything
#8: Share and share alike
#9: Pick the wrong passwords
#10: Ignore the need for a backup and recovery plan

Well, they seem interesting, but …
The big “but” gets back to one of my favourite phrases:

Context Is Everything

Very simply, in my own context most of this is meaningless. It may well be in yours as well.

Let's first look at the stated and unstated context, which should have been made clear up front.

The author mentions Windows XP a couple of times without making it clear which version, and makes only a passing reference to other versions of Windows. There is no mention of any other operating system: Mac OS X, Linux, BSD, OLPC, or even the embedded systems in PDAs. I can surf the net with my trusty old Newton. More on that in a moment.

She also fails to mention the context in which the computer is being used. Is this a home personal system, a home office system, a small business, or a larger commercial enterprise with its own IT and InfoSec departments? This matters not only from the point of view of meeting these points but also because of the legal ramifications.

Many of us in InfoSec use the terms “diligence” and “care”. We usually omit the word “due” so as to avoid the legal meaning and the gunnysack of baggage that gets dragged in. ‘Diligence’ means a constant and earnest effort and application. ‘Care’ means the effort is serious and devoted. Neither of these terms is used in the article. However, one would reasonably expect them to be part of the approach in a business of any kind, or even in a home setting where personal assets need to be protected and perhaps children to be cared for. The author fails to mention this too.

Plug into the wall without surge protection.

I’d rate this as ‘necessary but not sufficient’ for a number of reasons.
First and foremost, the author does not make it clear that a UPS and a surge protector are not the same thing. Yes, many UPSs include surge protection, but think about these two things for a moment.

  1. You can have surge protection but still lose data when the power fails.
    This isn’t just about the work you’ve done since the last ’save’, although losing that can be serious. The loss of power may occur at a critical point for the hardware, causing corruption of the file system (disk drive, networked or USB). It is almost certainly going to cause a loss of your train of thought, and that may be very serious.
  2. Surge protection wears out.
    Most people are unaware that surge protectors have a limited life, and it’s not measured in time but in how much energy (i.e. how many surges) they have to absorb. So one day your surge protector isn’t going to protect you any more. FINIS. Game over. The surge gets through and your machine is toasted.
    How do you know when your protector has used up its surge capacity? Generally you don’t, though some newer ones do have an indicator.
    What can you do about it? Not a lot, except buy a new one.

That’s why I like using a high-end laptop as a workstation. The power brick and the battery do protect against surges, and the battery acts as a UPS. Sort of.

But please note that not all UPSs are created equal. It’s not just about battery power. I’ll save that for another article.

Surf the Internet without a firewall.

While this is good advice in general, the specifics are the killer.

My firewall is a separate machine, an old HP Vesta P1 with 256Meg of RAM, a 30Meg drive, and a CD reader. If you feel so inclined, you could probably pick up something like this from the Salvation Army for about $10.
I run the IP-COP firewall on it. I’ve run other firewalls, including the Mandriva MDF with its sophisticated GUI. I loved playing with Shorewall, which is one of the most flexible open source firewalls I’ve met. But IP-COP is small, fast and reliable. It has plugins for caching and for handling dynamic DNS, as well as many other functions, if you choose to install them.

Why have I chosen to run a separate firewall rather than the software- or modem-based approach that the author of the article suggests? There are many reasons, but prime among them is the principle of Separation of Duties. I’m a firm believer in the idea that each thing should do just one thing and do it well, and the idea of a ’security appliance’, or of running the firewall on the host (i.e. the target), doesn’t appeal to me.

Perhaps there should be a “solely” in there.

Neglect to run or update antivirus and anti-spyware programs

This is another “Context is Everything” situation.

At home, even though I have an ‘always on’ broadband connection, I have a Linux-based firewall, and all my servers and laptops run Linux. It’s not that Linux guarantees 100% protection against all forms of malware, but at least it’s not the highly vulnerable situation of Windows that necessitates running AV software.

And let’s face it: as Bob Bergener at VMyths points out, AV software is getting less and less effective, while the cycles of malware are getting more capable, more aggressive, and more insidious.

But it’s not just me, and it’s not just Linux. I have a number of high-profile clients who put AV software on their corporate laptops and workstations … but it is disabled. It’s there, I’m forced to conclude, to satisfy the auditors. However, these organizations don’t suffer from malware attacks, for other reasons: most notably, they have strict control over outside access. For the most part, there is none. Internal users are not allowed to use the Internet except under special conditions. Incoming and outgoing mail is aggressively filtered.

We’re beginning to see this kind of access control with products from IronPort (Cisco) and Proofpoint. These are “appliances” that are more accessible to smaller sites. In all probability, most users of these products aren’t going to use their full capability and will still want another layer of protection against malware.

Sadly, the most effective layer of protection is also the weakest and the most easily subverted: user awareness and discipline. Don’t open unexpected attachments, don’t download and run strange programs, don’t visit dubious sites. See below.

Please don’t think I’m saying that having a firewall is an excuse for not keeping your software well maintained. There are many reasons for keeping up to date quite apart from making the software attack-proof. The mantra “If it ain’t broke, don’t fix it” is not a reasonable stance with something as complex as software. It may be broken in ways that you don’t see or haven’t seen yet. This is quite different from choosing not to apply a change because you’ve analyzed it and determined that it is not appropriate.

And let’s not forget that a firewall has lots of limitations: most are designed to protect the internal network from the outside world, and they assume that the internal network is trustworthy. Hence a firewall is no use at all if an internal machine is infected by some other means.

Install and uninstall lots of programs, especially betas

I was at IT360 and heard David Rice, the author of “Geekonomics”, speak on software quality. One point he made was that the large software vendors treat all users as the “beta testers” for their products. He says:

“Software buyers are literally crash test dummies for an industry that is remarkably insulated against liability, accountability, and responsibility for any harm, damages or loss that should occur because of manufacturing defects or weaknesses that allow cyber attackers to break into and hijack our computer systems.”

So while this point may be a good one, we are all on the roundabout and can’t get off.

Keep disks full and fragmented

This is a meaningless and unhelpful generalization.

Firstly, I see an amazing amount of nonsense published about de-fragmentation. It warrants a posting and discussion in its own right, but please don’t buy into this myth.

The second thing is that I DO keep a disk full and never run de-fragmentation on it. But then I have my hard drives partitioned. One partition contains the operating system, just what is needed to boot; another contains the system and libraries. These are pretty full, and apart from upgrades and occasional patches (which are less frequent and less extensive with Linux than Windows) there is very little “churn” on these partitions. I can leave them almost full. This includes auxiliary programs where I keep on-line documentation (“manual pages”) and things like icons, wallpaper, themes and so on.

Next up is the temporary partition - /tmp in Linux parlance. It’s the scratch workspace. It is cleaned out on every reboot and by a script that runs every night, but most programs clean up their temporary files after themselves anyway. This partition looks empty most of the time. There’s no point de-fragmenting it and no point backing it up.

Another few partitions deal with what can be termed “archives”. These may be PDFs of interest or archived e-mail. Backup of these is important but they are in effect ‘incremental’ storage so there is no ‘churn’, just growth, so de-fragmentation is completely irrelevant.

So what’s left? Partitions that deal with “current stuff”: development, writing, and so forth. These are on fast drives, aggressively backed up, and use journaled file systems for integrity.

But overall I simply don’t do ANY de-fragmentation. I think it’s a waste of time for a number of reasons.

The first is that it simply makes no sense in any of the contexts above. The second is that given high-speed disks and heads, and good allocation strategies in the first place, it’s not going to help.

The third and most significant is that since I use volume management software it can’t possibly help.

I use LVM on all my Linux platforms to manage disk allocation. If you read up on it you’ll see that it means that a contiguous logical volume may not correspond to a contiguous physical allocation on the disk. Since LVM subsumes RAID as well, it may not even be on a single physical drive.

Now, after reading that article, speculate about how I do backups :-)

Open all attachments

Good advice at last! Sadly, human nature seems perverse. People seem to be sucked into reading attachments and visiting dubious web sites (see below), and admonitions don’t seem enough to change their behaviour.

Perhaps evolution has failed us; perhaps we need a Darwinian imperative so that people foolish enough to do this can no longer contribute to the gene (or is it meme?) pool.

Click on everything

More good advice, more efforts to overcome human stupidity.

Share and share alike

Context is everything

Oh dear. This doesn’t make sense any more. To be effective in business you do need to share data. I don’t need to go into detail, but I will mention that most businesses need a web site to share information with customers, prospects and investors.

There are now many web-based businesses built on sharing: Flickr, Facebook, LinkedIn and the like.

And let’s not forget that the whole “Open Source” model is about sharing.

Pick the wrong passwords

There are two things I object to here.
The first is the hang-up with passwords. They are, to coin a phrase, “so twentieth century“.

The problem isn’t dreaming up passwords, despite the nonsense we get on that subject.

Let’s face it, there’s no real problem dreaming up passwords.
Certainly not for me. At school I had to learn poems and passages from famous works by heart, chunks of Shakespeare and that kind of thing. I can always pull out something, take the first letters, mangle them however.
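As a sketch of that trick: take the first letter of each word of a memorized passage, then mangle the result. The particular substitutions and casing below are made-up assumptions - the point is that any mangling works, so long as you can reproduce it:

```python
def phrase_to_password(phrase: str) -> str:
    """First letter of each word, leet-style digit substitutions,
    then alternating case. The mangling rules here are arbitrary."""
    subs = {"o": "0", "i": "1", "e": "3", "a": "4", "s": "5"}
    letters = [w[0].lower() for w in phrase.split() if w[:1].isalpha()]
    out = []
    for idx, ch in enumerate(letters):
        ch = subs.get(ch, ch)
        out.append(ch.upper() if idx % 2 else ch)
    return "".join(out)

# "to be or not to be that is the question" -> "tB0NtBt1tQ"
```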

But the real problem, whether you have this repertoire or whether you use password-generator tools, is remembering them. Oh, and forgetting them when you have to change them. Oh, and knowing which one applies where.

This is the point that Rick Smith makes in his book, “Authentication”, and it is why people write down passwords, use passwords that are essentially mnemonics, or use the same password in many situations.

Twenty years ago I only had to deal with a few passwords; now I have to deal with hundreds. Almost every web site I visit demands that I log in.

We have reached a point where using ’strong’ password technology is becoming a liability, and using passwords is in and of itself an increasing risk. The likelihood that a new employee will re-use a password he’s used on a public web site for his corporate login is high. The load on his memory is just too great. This is why there is a market for software that remembers your passwords. But how portable is it? USB drives, you say? I seem to lose USB drives with alarming frequency.
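An alternative some tools take is to derive, rather than store, a distinct password per site from one master secret, so there is nothing to carry around. A minimal sketch - the hash choice, iteration count and output length here are arbitrary assumptions, not any particular product’s scheme:

```python
import base64
import hashlib

def site_password(master: str, site: str, length: int = 12) -> str:
    """Derive a per-site password from one master secret.
    PBKDF2 keeps derivation deliberately slow; each site name acts
    as a salt, so no two sites share a password."""
    digest = hashlib.pbkdf2_hmac(
        "sha256", master.encode(), site.encode(), 100_000
    )
    return base64.b64encode(digest).decode("ascii")[:length]
```

Re-entering the same master and site always regenerates the same password, while a password leaked from one site tells an attacker nothing about the others.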

So, how happy are you with doing financial transactions over the Internet using just a password as authentication, even if it is over an SSL connection? I’m not very happy. This is a subject that deserves a long blog article in its own right, but let’s just point out that banks in Canada and the US have chosen not to use the more secure “two factor” and “one time pad” authentication systems that are normal for European and Scandinavian banks, and so have put their customers at risk. Not all the risks have to do with the Internet connection.

Some banks have moved to what they call “two factor” authentication. Well, it certainly isn’t what the security industry calls “two factor”. At best it might be called ‘two passwords‘ - instead of asking for just your password they ask for the password and then one of a set of previously agreed questions like “what was the colour of your first car“. It gives the illusion of security, but it’s just a double password. Compare it to having a lock on your screen door and your front door. If the thief comes in by breaking a window or by stealing your keys (or the book you have your passwords written down in, since you have so many of them!) then this doesn’t help.

Real “Two-Factor” authentication uses two different kinds of thing. A password is “something you know“. The colour of your first car is also something you know. It’s also something other people can know.

A real second factor would be “something you have“, like the bank Client Card that you use with your personal identification number (P.I.N.), which is “something you know“. Both have to be used together. Someone might know - or guess - your PIN without you knowing about it, but if you lose possession of the card you do know about it.

Another factor is “something you are” - biometrics. Recognition of your fingerprint or iris along with a password.

Of course these more secure methods require more technology which is why most web sites fall back to the only thing they are sure you have - a keyboard.
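For the curious, the “something you have” tokens banks hand out typically just compute an HMAC over a moving counter and truncate it to a short decimal code. A sketch of that scheme, HOTP as standardized in RFC 4226 (the secret below is the RFC’s own test key, not anything real):

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """One-time code per RFC 4226: HMAC-SHA1 over an 8-byte counter,
    dynamically truncated to a short decimal code."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # low nibble picks a window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector: counter 0 with key "12345678901234567890" -> "755224"
```

Because the counter (or, in time-based variants, the clock) moves on every use, a code phished today is worthless tomorrow - which is exactly what a static “second password” cannot offer.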

Rick Smith’s book is “Authentication: From Passwords to Public Keys”, ISBN 0201615991.

See his home page at http://www.smat.us/crypto/index.html
He refers there to ..

A companion site, The Center for Password Sanity, examines the
fundamental flaws one finds in typical password security policies
and recommends more sane approaches.
http://www.smat.us/sanity/index.html

See also ‘The Strong password dilemma’ at http://www.smat.us/sanity/pwdilemma.html

And not least of all the cartoon at http://www.smat.us/sanity/index.html

Seriously: go read Rick Smith’s book.

There is a lot of nonsense out there about passwords and a lot of it is
promulgated by auditors and security-wannabes.

Ignore the need for a backup and recovery plan

As you can see above, I’ve made things easy for backups.

One reason for this is that the real problem is not having a backup and recovery plan; it is the doing of it, making it a habit, a regular part of operations.

That is one reason most larger organizations use centralized services, so that the IT department takes care of backups. It’s a major incentive for “thin clients”, where there is no storage at the workstation that needs to be backed up.

It’s also one reason that I partition my drives, so I can identify what is ’static’ and what is ‘dynamic’.

One of my great complaints about Microsoft Windows is that everything is on the C: drive. I very strongly recommend partitioning your drives. Having a D: drive and remapping your desktop and local storage there makes things so much easier. It also helps to have a separate partition for the swap area and for temporary files. Sadly, while this is possible and is documented (search Google for details), it’s not straightforward. Which is sad, because it is a very simple and effective way of dealing with many problems. Not the least of which is that you can re-install Windows without over-writing all your data.
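To illustrate why separating ’static’ from ‘dynamic’ data pays off, here is a minimal sketch of the kind of incremental pass a backup habit boils down to: walk the dynamic area and copy only files that are new or changed. The comparison rule (size plus modification time) is an illustrative assumption; real tools such as rsync do this far more carefully.

```python
import os
import shutil

def incremental_backup(src: str, dst: str) -> list:
    """Copy files from src to dst only if they are new or changed,
    judged by size and modification time."""
    copied = []
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target = os.path.join(dst, rel)
        os.makedirs(target, exist_ok=True)
        for name in files:
            s, d = os.path.join(root, name), os.path.join(target, name)
            if (not os.path.exists(d)
                    or os.path.getsize(s) != os.path.getsize(d)
                    or os.path.getmtime(s) > os.path.getmtime(d)):
                shutil.copy2(s, d)  # copy2 preserves mtime for the next comparison
                copied.append(os.path.normpath(os.path.join(rel, name)))
    return copied
```

On a near-static partition this pass touches almost nothing, which is exactly why segregating the churn makes backups fast enough to become routine.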

Nearly 1 million people on the US terrorist watch list [belsec] [Belgian Security Blognetwork]

Posted: 25 Apr 2008 05:25 AM CDT

Such lists only make sense if only the most pertinent persons are included. You will never stop all the sleepers and unknown militants without at the same time denying access, travel or privacy to hundreds of thousands of other people who have done nothing wrong except match a mix of characteristics that triggers a formula.

If you want to read more about the list, there is official information available. You can't know if you are on the list, and it is not clear how you can be removed from it.

You can follow the number of persons added to the list here - presumably.

Daily caffeine ‘protects brain’ [/dev/random] [Belgian Security Blognetwork]

Posted: 25 Apr 2008 01:51 AM CDT

Caffeine is often associated with evil. Sometimes it’s good, sometimes it’s not. This time, it’s positive: http://news.bbc.co.uk/2/hi/health/7326839.stm

It’s time for a coffee break! ;-)

Spear Phishing with Better Business Bureau complaints [StillSecure, After All These Years]

Posted: 25 Apr 2008 12:04 AM CDT

I received the following email yesterday purporting to be from the BBB. It looked phishy to me, so of course I did not click the link and did a little investigating. However, I could see how someone would be fooled on this one, thinking someone filed a bogus complaint against them. Almost as good as the subpoena story I heard from a customer last week. Beware of stuff like this!

BBB CASE #841246605

Complaint filed by: Brian Williams
Complaint filed against:
Business Name: StillSecure
Contact: Alan Shimel
BBB Member: YES
Complaint status: -
Category: Contract Issues
Case opened date: 4/20/2008
Case closed date: -

Download a copy of this complaint so you can print it for your records (DON'T CLICK THIS)
On February 23 2008, the consumer provided the following information: (The consumer indicated he/she DID NOT received any response from the business.)
The form you used to register this complaint is designed to improve public access to the Better Business Bureau of Consumer Protection Consumer Response Center, and is voluntary. Through this form, consumers may electronically register a complaint with the BBB.Under the Paperwork Reduction Act, as amended, an agency may not conduct or sponsor, and a person is not required to respond to, a collection of information unless it displays a currently valid OMB control number. That number is 246-967.
© 2008 US.BBB.org, All Rights Reserved.

Hurry Up and Wait [BumpInTheWire.com]

Posted: 24 Apr 2008 10:39 PM CDT

It is really quite comical how things work out.  Today we had a “mini” disaster recovery test for our mainframe.  Like most things, every DR test comes with a couple of meetings prior to the actual work being done.  The part I’m involved in is really pretty simple and only involves ensuring a VPN tunnel to the recovery site comes up.  The simple part is the config for this VPN was created back in 2005 and our end has not changed since.  Every test comes with the same sense of importance stressed on this VPN creating successfully.  I get asked several times if somebody will be available at the time the test starts to ensure the VPN comes up.  I give the same bullshit answer every time…”Of course, if I’m not here El Sidekick is here at 6:00 AM every day so he will be here.”  I give the same answer because I know that if the test is scheduled to start at 8:00 AM they won’t be to a point where they need the VPN until 2 hours later at the earliest. 

Today was no different.  The test was scheduled to start at 8:00 AM.  Shortly after 2:00 PM they were at a point where they needed the VPN.  The VPN was up at 8:15 and sat idle for 6 hours.  The tardiness of the VPN creation wasn’t even an issue on our end.  For the third test in a row the DR site clowns didn’t put a default gateway on their config.  Silly clowns.

This has been a crazy busy week.  The stars must have been aligned this week to cause me to be this busy.  I think I’m going to reward myself by taking Friday afternoon off.  If I can’t blow it out on a Friday when the hell can I blow it out?

The Day I met Bruce Schneier at InfoSecuity Europe ‘08 [IT Security Expert]

Posted: 24 Apr 2008 08:14 PM CDT

No matter the profession or walk of life we are all in, we all have our heroes and mentors; for some it is the likes of Einstein, Winston Churchill, Lance Armstrong, Tiger Woods or Richard Branson, for others it's Elvis or Amy Winehouse. For me it's Bruce Schneier, who first made a name for himself as a pre-eminent cryptography expert in the 1990s and in recent times has evolved into a fresh and forward-thinking security guru. Sure, this proves that I'm a geek, but those who have ever read any of Schneier's recent books or blog entries, or heard him speak, will understand where I'm coming from.

I can't say I agree with absolutely everything Bruce says, but what grabs me is his unique approach, perspective and understanding of security and the information security industry. Bruce takes a large step back, then cuts out all the politics, security company marketing and associated sales hype, at which point you are left with the bare bones and the questions about what security is really supposed to be about: what do you want to protect, what are the risks, how will the security solution mitigate those risks, what risks does the security solution introduce, and finally what are the costs, inconvenience and trade-offs of the security solution that mitigates the original risk.

As a security professional you have to be careful not to fall into the trap of tunnel vision, chasing perfect security and zero risk, because there is simply no such thing as perfect security or zero risk! The other side of this coin is to ensure the security is appropriate to the risk, making sure the security cost and trade-offs are viable against mitigating the actual risk of attack. Let me take a "real world" UK example; I'm sure someone must have raised this one before. In order to reduce the risk of another London Underground bombing, we could impose a security countermeasure of searching all passengers and their bags before they enter the system, as we do at airports. It might reduce the risk of attack, but when you think about the trade-offs - huge passenger inconvenience and the high cost of employing extra staff to carry out all the searches - does this make it a worthwhile security solution in relation to the risk? The rational answer is clearly no: it's just not viable, and so we continue to accept this risk of terrorist attack. OK, let's say we went with that security solution anyway. At the end of the day there would still be a risk of terrorist attack on the London Underground, and the only real way to completely mitigate that is to shut the underground system down entirely!
With business IT security the same approach should apply. Sure, there are areas of law and industry compliance which must always be followed, but when dealing with security problems outside those areas I always try to emulate that great Schneier vision: take that step back, making sure the business trade-offs and costs are balanced against the attack risk. It's not always easy - the real difficulty is in quantifying the elements, especially the attack risk. Fortunately for me, I use some methods and practices of my own, built up over the years, to mitigate typical business risks while causing minimal security trade-offs and cost.

Anyway, yesterday I attended InfoSecurity Europe, and I was chuffed to pieces: not only did I get to listen to Bruce Schneier talk about the security industry, but I got to briefly meet him and got a signed copy of his latest book, Beyond Fear, which is a must-read not only for security professionals but for anyone who wants to understand what security is about without knowing any of the technical jargon. I also recommend signing up to the Crypto-Gram newsletter run by Bruce at http://schneier.com/.

After the doors shut at InfoSecurity, (ISC)2 EMEA held an event which I attended. From my perspective as a CISSP member, I have to say (ISC)2 EMEA is progressing well under the leadership of John Colley; the event itself is evidence of this. Amongst the (ISC)2 bigwigs at this event was former White House cyber security advisor and (ISC)2 security strategist Prof. Howard A. Schmidt, who was also a keynote speaker at InfoSecurity Europe - another guy I could listen to all day. http://www.isc2.org/

Finally I met several guys from the UK chapter of the ISSA (Information Systems Security Association); I promised that I would sign up and get involved after learning that they were planning more events in northern England. http://www.issa-uk.org/
