Wednesday, September 24, 2008

Spliced feed for Security Bloggers Network

I so want to be a Forrester analyst [Security For All]

Posted: 24 Sep 2008 02:14 AM CDT


Now that would be a totally sweet gig. No experience necessary, no research required. Just collect the swag from vendors. Totally sweet deal - sign me up.

Now hang on there, that’s harsh - even for you! Yeah, well what conclusion am I supposed to come to with this report on the state of Network Access Control (NAC)? Actually I should start at the beginning with how I came across this amazing piece of … information.

So I’m browsing the blogosphere, just minding my own business, looking for NAC news. I should mention that in real life I make my living developing a NAC system. So when I come across this article, it totally pegged the old BS-O-meter. I mean nailed it.

Microsoft NAP Leading the NAC Pack

It didn't surprise us when Forrester Research put Microsoft NAP as the frontrunner in the Network Access Control market. "Microsoft's NAP technology is a relative newcomer but has become the de facto standard…," said Rob Whiteley in his report. While Cisco and others might be able to claim more direct revenue from NAC products as of now, I believe Microsoft has the technology and framework that positions it for success.
As Tim Greene pointed out in his NAC newsletter, "the result is interesting because it's not based on how many units were sold or performance tests but rather on evaluation of how well the products would meet the challenges of a set of real-world deployment situations."
Tim hit the nail on the head, as NAP works in the real world, not just in a complex architectural diagram that only exists in a 30-page white paper. I think NAP's success is twofold: One, NAP is built into the operating system on the client and server, making it easier for customers to use and deploy; and, two, NAP is one of those rare examples of Microsoft truly achieving interoperability and playing nice with others.

So at this point, I’m thinking well sure, these Napera guys are NAC vendors who are trying to ride the NAP wave, so I’ll cut them some slack. I mean you do have to dial down the sensitivity on the old BS-O-Meter when dealing with marketing copy. But they reference an article by Tim Greene in his NAC newsletter. So I go there thinking surely they must have taken Tim totally out of context for their own vulgar marketing purposes. But much to my astonishment (after navigating past NetworkWorld’s lame cover ad - which shows up as a nice blank page for those of us who block doubleclick - get a clue guys!), those Napera flaks were pretty much quoting Tim verbatim.

Microsoft comes out on top of the NAC heap in an evaluation of 10 vendors that was published recently by Forrester Research.

The result is interesting because it's not based on how many units were sold or performance tests but rather on evaluation of how well the products would meet the challenges of a set of real-world deployment situations.

Which led me to the original report by Forrester. By now my poor BS-O-Meter is toasted.

In Forrester’s 73-criteria evaluation of network access control (NAC) vendors, we found that Microsoft, Cisco Systems, Bradford Networks, and Juniper Networks lead the pack because of their strong enforcement and policy. Microsoft’s NAP technology is a relative newcomer, but has become the de facto standard and pushes NAC into its near-ubiquitous Windows Server customer base.

So at this point I can no longer remain silent - you guys broke my BS-O-Meter! And it was industrial strength! So NAP “would meet the challenges of a set of real-world deployment situations”? What color is the sky in your real-world?

Here’s the deal guys. Until all enterprises make the switch to Windows Server 2008, there is no real NAP install base. Also, NAP is critically dependent on these nifty little client and server plugin combos - System Health Agents (SHA) and System Health Validators (SHV) - that fill the roles of TNC Integrity Measurement Collectors (IMC) and Integrity Measurement Verifiers (IMV) respectively. It’s not a bad idea, since the SHAs are managed by a single client-side meta agent, and the SHVs are plugins on the server side (the Network Policy Server (NPS) to be exact). But the real strength of this idea is that everyone who has some endpoint component they want to monitor for policy purposes (like say an AV package) just builds an SHA and corresponding SHV to be part of the happy NAP family. As of now there is one, count ‘em, one SHA/SHV set provided to the “near-ubiquitous Windows Server customer base”. And guess who provides it (hint - they build a well known OS). So if your endpoint policies require only the Microsoft Security Center stuff and all of your endpoints are Windows XP SP3 or Vista Business+ and your servers are Windows Server 2008, you are golden! Both of you. Maybe I’m wrong and Napera has partnered with a whole bunch of competing endpoint security vendors to get all the system health gizmos that they have been developing in secret. Hey - they do make this claim:

Napera then builds on the NAP platform to provide a single solution that combines health enforcement for both Windows and Macintosh computers with identity enforcement and guest access.

Whoa - a Mac SHA? I had no idea that OS X had the basic plumbing to support such a beast! Oh wait - I get it - it’s a TNC IMC. So what does the SHV for that bad boy look like? You see, I’ve written an SHV (no, I’m not going to tell you how it works) and I’m pretty sure the Napera guys are blowing marketing smoke. If not, I’d love a demo of an actual working system (not a “30-page white paper”). Preferably in my real-world.
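
To make the SHA/SHV pairing concrete, here is a minimal conceptual sketch in Python. It is not Microsoft's actual NAP API (that is a native COM interface) and the component names and policy values are made up; it just shows the idea: a client-side agent reports a statement of health for one endpoint component, and a matching server-side validator decides whether that statement meets policy. The point of the rant above is that a vendor has to ship both halves of this pair for every component you want NAP to check.

# Conceptual sketch of the SHA/SHV pairing described above.
# This is NOT the real NAP API; names, fields, and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class StatementOfHealth:
    component: str      # e.g. "ExampleAV" (a made-up endpoint component)
    attributes: dict    # whatever facts the SHA chooses to report

class ExampleAVHealthAgent:
    """Client-side 'SHA': collects health facts for one endpoint component."""
    def collect(self) -> StatementOfHealth:
        return StatementOfHealth(
            component="ExampleAV",
            attributes={"enabled": True, "signature_age_days": 2},
        )

class ExampleAVHealthValidator:
    """Server-side 'SHV': evaluates the matching SHA's statement against policy."""
    MAX_SIGNATURE_AGE = 3  # days; an invented policy threshold

    def validate(self, soh: StatementOfHealth) -> bool:
        attrs = soh.attributes
        return attrs.get("enabled", False) and \
               attrs.get("signature_age_days", 999) <= self.MAX_SIGNATURE_AGE

if __name__ == "__main__":
    soh = ExampleAVHealthAgent().collect()
    compliant = ExampleAVHealthValidator().validate(soh)
    print("grant full access" if compliant else "quarantine / remediate")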

So this brings me back to my original point. I want to be a Forrester analyst. I mean, if I can draw conclusions “not based on how many units were sold or performance tests but rather on evaluation of how well the products would meet the challenges of a set of real-world deployment situations”. Dude! Sign me up. Don’t get me wrong - in all likelihood NAP will eventually become a “de facto standard” (well duh, it’s a Microsoft framework) and that’s not a bad thing. It’s just not there yet. In the meantime I need a new BS-O-Meter.

      

Links for 2008-09-23 [del.icio.us] [Anton Chuvakin Blog - "Security Warrior"]

Posted: 24 Sep 2008 12:00 AM CDT

$700 Billion Plus Wall Street Bailout…um…NO. [Vincent Arnold]

Posted: 23 Sep 2008 06:52 PM CDT

You know, I typically stay on the fence and try to remain as neutral as possible when it comes to political issues and postings on my website. On occasion, though, an issue with unfathomable consequences WILL finally entice me to comment.

Well the issue of the century is at hand and I say “Hell NO!” Why on this green earth am I going to have to pay for the stupidity of others? And then be asked, “oh and by the way, we really have no clue, we really, really screwed things up, but we have decided that YOU will have to clean up our mess and, um, can you give the money to us in a week?”

Oh yeah, sure thing, we are always in the habit of handing over $700 Billion off of a 3 page financial bailout plan request with a clause in section 8 of said plan stating “Decisions by the Secretary pursuant to the authority of this Act are non-reviewable and committed to agency discretion, and may not be reviewed by any court of law or any administrative agency”.

ARE YOU FRIGGIN’ NUTS?!?!?

The best quote I’ve seen so far on this:

Rep. Jim McDermott, D-Washington, also asked why Congress should trust the administration to administer the bailout plan properly when, in his view, it has lost its credibility with the American people.

“Trust is something that is already bankrupt. The bank of trust in this administration is absolutely bankrupt,” McDermott said. “They have misled, lied, misrepresented, whatever word you want to use, on issue after issue. And now they give us seven days to come back, take out your wallet and give them everything that’s in it.”

I still can’t believe this is happening.

Source

The Breach Reporting Dilemma [securosis.com]

Posted: 23 Sep 2008 06:36 PM CDT

Over at Emergent Chaos, Adam raises the question of whether we are seeing more data breaches, or just more data breach reporting. His post is inspired by a release from the Identity Theft Resource Center stating that they’ve already matched the 2007 breach numbers this year.

Personally, I think it’s a bit of both, and we’re many years away from any accurate statistics for a few reasons:

  1. Breaches are underreported. As shown in the TJX case, not every company performs a breach notification (TJX reported, other organizations did not). I know of a case where a payment processor was compromised, records were lost for some financial services firms that ran transactions through it, and only 1 of the 3-4 companies involved performed any breach notification. Let’s be clear: they absolutely knew they had a legal requirement to report and that their customer information was breached, but they didn’t.
  2. Breaches are underdetected. I picked on some of the other companies fleeced along with TJX that later failed to report, but it’s reasonable that at least some of them never knew they were breached. I’d say less than 10% of companies with PII even have the means to detect a breach.
  3. Breaches do not correlate with fraud. Something else we’ve discussed here before. In short, there isn’t necessarily any correlation between a “breach” notification and any actual fraud. Thus the value of breach notification statistics is limited. A lost backup tape may contain 10 million records, yet we don’t have a single case that I can find where a lost tape correlated with fraud. My gut is that hacking attacks result in more fraud, but even that is essentially impossible to prove with today’s accounting.
  4. There’s no national standard for a breach, never mind an international standard. Every jurisdiction has its own definition. While many follow the California standard, many others do not.

Crime statistics are some of the most difficult to gather and normalize on the planet. Cybercrime statistics are even worse.

With all that said I need to go call Bank of America since we just got a breach notification letter from them, but it doesn’t reveal which third party lost our information. This is our third letter in the past few years, and we haven’t suffered any losses yet.

-rich

PDF Security Pain: We Told You So [securosis.com]

Posted: 23 Sep 2008 06:03 PM CDT

Thanks to Slashdot, here’s a story on Adobe PDF vulnerabilities:

The Portable Document Format (PDF) is one of the file formats of choice commonly used in today's enterprises, since it's widely deployed across different operating systems. But on the downside, this format also has known vulnerabilities which are exploited in the wild.

I normally ignore stories coming out of vendor labs on new exploits that are coincidentally blocked by said vendor’s products, but on occasion they highlight something of interest.

Back in February I mentioned three applications that are a real pain in our security behinds - IE/ActiveX, QuickTime, and Adobe Acrobat (the entire PDF format, to be honest). It’s nice to see a little validation. Each of these, in its own way, allows expansion of its format.

In the Adobe case they keep shoveling all sorts of media types and scripting into the format. This creates intense complexity that, more often than not, leads to security vulnerabilities. When you manage an open format, content validation/sanitization is an extremely nasty problem. Unless you design your code for it from the ground up, it’s nearly impossible to keep up and lock down a secure format. I suspect Adobe’s only real option at this point is to start failing with grace and focus on anti-exploitation and sandboxing (if that’s even possible - I’ll leave it up to smarter people than me).

Truth is I should have also put Flash on the list. My bad.

-rich

IDS - the beast that just won't die [StillSecure, After All These Years]

Posted: 23 Sep 2008 01:42 PM CDT

Ellen Messmer has an interesting article up in Network World today (I wish Network World would stop that annoying page fold-over ad that forces you to click close to view the page. It is just a pain in the butt and I wouldn't buy anything from anyone using that type of ad, just on principle.), around the latest results of an Infonetics Research survey commissioned by Tipping Point. The respondents were mostly from big companies with about 10k employees. Remembering who commissioned this report, you need to take these numbers with a grain of salt, but there are some interesting findings:

1. Cisco is hands down the market leader in IPS.  It is almost universally agreed, by this report's findings and in other reports, that while the Cisco product is far from the best in usability and functionality, by sheer numbers it dwarfs the other IPS vendors. It continually amazes me that everyone knows the product is not good, yet people still use it.  For me that just reinforces the notion that people put IPS in as a checkbox.  They really don't care if it works or not, is easy or not, or is up to date or not.  They just want to say they have something.  When their local friendly Cisco rep throws it in with the shiny switch, they are happy campers.

2. Most people are finally deploying in line, but not filtering and blocking. Of course the Tipping Point customers overwhelmingly had the box in line; Tipping Point was always an in-line IPS, so that is to be expected.  The Sourcefire boxes, on the other hand, tend to be deployed out of band more often. The IBM/ISS and McAfee IPSs are more in the middle. Regardless of whether they were in line or out of band, though, the number of filters actually being used to block traffic was way low.  Most people are still alerting, not blocking.  IDS is not dead, that is clear.

3. A sizable number of users do not update to the latest filters (Tipping Point lingo for signatures and rules).  This is the one that really blew me away.  With all of the focus on zero-day and all, you would think people would want to be up to date against the latest attacks.  Evidently not.  Even given that some people like to test the filters first, I would think they would find their way into the field pretty quickly, but it looks like I am wrong.  Maybe this is a big company versus mid-market thing, though. I don't think mid-market companies have the time and resources to go through that type of QA check. They expect their IPS vendor to send down signatures that don't break the box.

All in all, despite Richard Stiennon's prediction of the death of IDS, it appears that we are still a long way off from everyone using their IPS as an IPS.


Updated Citect Snort Signature [Digital Bond]

Posted: 23 Sep 2008 01:33 PM CDT

I took some time to circle back to the Citect ODBC vulnerability and the signature we released for it a couple weeks ago.  After talking to some others in the community and taking another look at things, it looks like there was some evasion possible for the previous signature.  The first signature we released should alert on the public exploit script, but this new revision should alert on all attempts to exploit this vulnerability.  That said, evasion may still be possible depending on the underlying IDS/IPS platform that this signature is installed on; make sure that your system can’t be beaten by fragmented packets, overlapping segments, and the like - know your product and/or ask your vendor (and preferably not one of the sales guys).

It seems that the length value is provided in two locations, and the first provided length can be a valid length but the exploit is still possible due to the length provided in the second packet.  As such we’ve adjusted our signature and the new revision is below:

alert tcp $EXTERNAL_NET ANY -> $HOME_NET 20222 (msg:"CitectSCADA ODBC Overflow Attempt"; flow:established,to_server; content:"|02 00 00 00 00|"; depth:5; byte_test:4,>,132,0,relative; reference:cve,2008-2639; sid:1111601; rev:2; priority:1;)
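
If you want to sanity-check that your sensor fires on the new revision, here is a lab-only sketch in Python. The target host name is a placeholder and you need something listening on port 20222 behind the sensor so the flow registers as established; the script just sends the five-byte content prefix followed by a four-byte big-endian length greater than 132, which is what the byte_test in the rule keys on.

# Lab-only sketch: generate traffic that should trip the revised Citect signature.
# "sensor-lab-host" is a placeholder - point it at a test box behind your IDS/IPS,
# never at a production PLC/HMI.
import socket
import struct

TARGET = "sensor-lab-host"   # hypothetical lab target
PORT = 20222                 # CitectSCADA ODBC service port from the rule

# 5-byte prefix matched by content:"|02 00 00 00 00|"; depth:5
payload = b"\x02\x00\x00\x00\x00"
# 4-byte big-endian length right after the match; byte_test:4,>,132,0,relative
payload += struct.pack(">I", 200)   # 200 > 132, so the rule should fire
payload += b"A" * 64                # filler, irrelevant to the signature

with socket.create_connection((TARGET, PORT), timeout=5) as s:
    s.sendall(payload)              # established flow, to_server
print("sent test payload - check the sensor for sid 1111601")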

Any feedback from the community is welcome and encouraged as far as improving this, or any of our other signatures.

One Man’s Frustrations With “Risk Management” [RiskAnalys.is]

Posted: 23 Sep 2008 01:05 PM CDT

Chris, who works in Government C&A, has a blog with a wonderful title: How is that Assurance Evidence?

I’d love to see another blog with an even more specific title - “Ok, that Assurance is Evidence Of What, Exactly?”

Today he has a great article called:

What’s the matter with Risk Management?

And “in short, it’s everything.” It pretty much sums up why I had to re-evaluate how our industry does risk, risk management, and how we approach controls & vulnerability - and find a new way.  A couple of things jump out at me in reading Chris’ article:

1.)  Just because that Deming cycle sucks and is full of unknowns doesn’t mean “risk” doesn’t exist, nor that it isn’t of primary importance. Nor does it mean that in the absence of model & methodology we won’t be “doing” risk analysis anyway - just in an ad hoc way and completely from “the gut”.

Our industry calls these unstructured risk analyses “Best Practices”, as it’s an easy and convenient way of sweeping the unknowns under the rug of bureaucracy and enforcing it via peer pressure.

2.)  What this “suckiness” does mean is that your model and methodology aren’t helping you. As Chris intimates, there is too much uncertainty in the inputs for his model (they are, in the language of Bayesians, too subjective to be useful priors).

Take for example how we might be approaching the “controls” part of our analysis.  Chris writes:

“2. What are the controls that we have to employ?
800-53, ISO 27001, PCI, etc.

Still kinda good, but we basically know that ISO is relatively voluntary and NIST supplies a control catalog and not policies. So here we have to take the control catalog, and mash our policies into it.”

I wouldn’t call this “kinda good” at all :)  These control catalogs only provide a hierarchy within which to look for evidence of  our ability to resist an attacker.  They are incapable of making any claim about the effectiveness of the controls when they are operated at 100% efficiency, or more importantly, what % efficiency our specific organization operates at.

Let’s use Chris Hayes’ Initech as our fictional example.

Initech has a control (a back door on a loading dock).  Now the locks on the door are 100% capable of locking the door.  This is different than saying that they are capable of frustrating all but the top 5% of lockpicking burglars.  It is also different than saying that in a sample of several “walk around audits” the doors are left open 20% of the time (they are not in compliance with policy 100% of the time).  Even worse, for the 80% of the time the door is not propped open? Yeah, tailgating is a known issue.

So we have several different variables here that we need to account for (and it’s just a door).  But the analogy stands that most “risk management” methodologies are “We have a door, yes/no?” And most GRC platforms, when asked for their “opinion”, will simply say “door is needed” or, even worse, “a door policy is needed”.

3.)  Criticality and the Source of Value are all messed up in these Risk Management models.

Chris writes:

Someone wants me to tell them which boxes are more critical than others. This is mainly because of budgetary or operational reasons. To which I usually say “All of them, it is a system after all”.

This literally made me laugh out loud.  And this sort of “rate the firewall as Risk = 500 but rate the actual business application as Risk = 157” thing is also endemic.  Now Chris is very smart here.  He correctly identifies that the value is tied to the business process the systems support, and not to a specific box.  Oh, we scan at the specific box level - but because of the nature of systemic failures, all the boxes in the process are inextricably interrelated.

One of the reasons I really like FAIR is that the losses are quantified (or qualified) based not on some amorphous value of the box or the process itself, but are linked to the actions that the threat will take. Take systems in highly regulated industries as an example.  Usually the most probable losses aren’t due to the system compromise per se, but to the disclosure the compromise causes (regulators are a threat source, after all).  But many “risk management” methodologies will say “online banking is worth $2 billion, the value of the systems is therefore $2 billion”.  And suddenly we’re telling executive management that there’s a 60% probability that they’ll lose $2 billion.
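
To show the difference in spirit, here is a rough, hypothetical sketch in Python - not the official FAIR method, and every frequency and dollar figure below is invented - that ties annualized loss exposure to loss-event frequency and per-event magnitude instead of declaring the whole $2 billion "value of the system" to be the risk:

# Rough sketch only: an illustrative loss-event simulation, NOT the official FAIR model.
# All frequencies and dollar ranges below are made up for illustration.
import random

random.seed(1)

TRIALS = 10_000
LOSS_EVENTS_PER_YEAR = 0.3             # assumed mean frequency of a disclosure event
LOSS_MAGNITUDE = (250_000, 8_000_000)  # assumed per-event loss range (fines, response, churn)

annual_losses = []
for _ in range(TRIALS):
    # Approximate the annual frequency with twelve monthly Bernoulli draws.
    events = sum(1 for _ in range(12) if random.random() < LOSS_EVENTS_PER_YEAR / 12)
    loss = sum(random.uniform(*LOSS_MAGNITUDE) for _ in range(events))
    annual_losses.append(loss)

annual_losses.sort()
mean = sum(annual_losses) / TRIALS
p95 = annual_losses[int(TRIALS * 0.95)]
print(f"simulated mean annual loss:  ${mean:,.0f}")
print(f"95th percentile annual loss: ${p95:,.0f}")
# Compare with the naive claim: "the system is worth $2B, so the risk is $2B."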

4.)  If the primary source of prior information for your “risk management” methodology is a vulnerability scanner - you’re doing it wrong.  Chris writes:

So we ran a scan and now we have a report. A snapshot in time to make all decisions. Where did these vulnerability ratings come from? Do I even know if my system is at risk? What if I spend my time on vulnerabilities that have no threat?

So first, my thoughts are that actual “vulnerability” must be a comparison of the force a threat can apply, and our ability to resist that force (this is a probability statement, btw).

Changing your thinking about vulnerability now helps us understand the problem in several new ways.  First, you can start to divorce yourself from the scanner.  After all, the scanner is simply providing you with current state information that is usually just relevant variance from policy. It doesn’t really tell you about real “weakness in a system” because the system is an interrelated mess of people, processes and IT assets.

5.)  Finally, most “risk management” approaches just *don’t* do a good job of helping us understand the how’s and why’s of managing risk. In the past, I’ve referred to these standards as really being “issue management” because they are, at their heart, an act of discovery - a formal process around gathering prior information.  They are not, in and of themselves, capable of linking the issues discovered to the root cause.  And these root causes?  Yeah, they’re the things that create “risk”.  Not a threat, not a vulnerability, not the existence of an asset - the amount of risk that we have stems from our capability to manage it.

So Chris, I completely agree - but I wouldn’t give up yet.  There actually are a few of us who are focused on what you suggest:

Where to go from here: A fundamental revamp of how to deal with Risk. Where risk professionals focus on treating the sickness and not the symptoms, and come up with some new success/actionable metrics.

Chris, there’s nothing I want to do more than that.

Behavioral Monitoring [securosis.com]

Posted: 23 Sep 2008 11:28 AM CDT

Behavioral Monitoring

A number of months ago when Rich released his paper on Database Activity Monitoring, one of the sections was on Alerting. Basically this is the analysis phase, where the collected data stream is analyzed in the context of the policies to be enforced, and an alert is generated when a policy is violated. In that section he mentioned the common types of analysis, and one other that is not typically available but makes a valuable addition: heuristics. I feel this is an important tool for policy enforcement - not just for DAM, but also for DLP, SIM, and other security platforms - so I wanted to elaborate on this topic.

When you look at DAM, the functional components are pretty simple: collect data from one or more sources, analyze the data in relation to a policy set, and alert when a policy has been violated. Sometimes data is collected from the network, sometimes from audit logs, and sometimes directly from the database’s in-memory data structures. But regardless of the source, the key pieces of information about who did what are culled from the source and mapped to the policies, with alerts via email, log file entries, and/or SNMP traps. All pretty straightforward.

So what are heuristics, or ‘behavioral monitoring’?

Many policies are intended to detect abnormal activity. But in order to quantify what is abnormal, you first have to understand what is normal. And for the purposes of alerting, just how abnormal does something have to be before it warrants attention? As a simplified example, think about it this way: you could watch all cars passing down a road and write down the speed of the cars as they pass by. At the end of the day, you could take the average vehicle speed and reset the speed limit to that average; that would be a form of behavioral benchmarking. If we then started issuing tickets to passing motorists driving 10% over or under that average, that would be a behavior-based policy.

This is how behavioral monitoring helps with Database Activity Monitoring.

Typical policy enforcement in DAM relies on straight comparisons; for example, if user X is performing Y operation, and location is not Z, then generate an alert. Behavioral monitoring builds a profile of activity first, and then compares events not only to the policy, but also to previous events. It is this historical profile that shows what is going on within the database, or what normal network activity against the database looks like, to set the baseline. This can be something as simple as failed login attempts over a 2-hour time period, so we could keep a tally of failed login attempts and then alert if the number is greater than three. But in a more interesting example, we might record the number of rows selected by a specific user on a daily basis for a period of a month, as well as the average number of rows selected by all users over the same month. In this case we can create a policy to alert if a single user account selects more than 40% above the group norm, or 100% more than that user’s own average.
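
A minimal sketch of that row-count example in Python, with invented numbers (a real DAM product would build these baselines from its own event store, not a dictionary): compute per-user and group averages from a month of history, then flag a day's activity that is more than 40% above the group norm or more than double the user's own average.

# Illustrative sketch of the baseline-and-alert logic described above.
# History values are invented; a real DAM product reads these from its event store.

history = {  # rows selected per user per day over the baseline period
    "alice": [120, 140, 110, 130, 125],
    "bob":   [300, 280, 320, 290, 310],
    "carol": [ 90, 100,  95, 105,  98],
}

def user_average(user):
    days = history[user]
    return sum(days) / len(days)

group_average = sum(sum(d) for d in history.values()) / sum(len(d) for d in history.values())

def check(user, rows_today):
    """Return an alert string if today's selection volume breaks the behavioral policy."""
    if rows_today > group_average * 1.40:
        return f"ALERT: {user} selected {rows_today} rows, >40% above group norm ({group_average:.0f})"
    if rows_today > user_average(user) * 2.00:
        return f"ALERT: {user} selected {rows_today} rows, >100% above own average ({user_average(user):.0f})"
    return None

for user, rows in [("alice", 150), ("carol", 400)]:
    result = check(user, rows)
    print(result or f"{user}: normal activity ({rows} rows)")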

Building this profile comes at some expense in terms of processor overhead and storage, and this grows with the number of different behavioral traits to keep track of. However, behavioral policies have an advantage in that they help us learn what is normal and what is not. Another advantage: because building the profile is dynamic and ongoing, the policy itself requires less maintenance; it automatically self-adjusts over time as usage of the database evolves. The triggers adapt to changes without alteration of the policy. As with platforms like IDS, email, and web security, maintenance of policies and review of false positives forms the bulk of the administration time required to keep a product operational and useful. Implemented properly, behavior-based monitoring should both cut down on false positives and ease policy maintenance.

This approach makes more sense, and provides greater value, when applied to application-level activity and analysis. Certain transaction types create specific behaviors, both per-transaction and across a day’s activity. For example, to detect call center employee misuse of customer databases, where the users have permission to review and update records, automatically constructed user profiles are quite effective for distinguishing legitimate from aberrant activity - just make sure you don’t baseline misbehavior as legitimate! You may be able to take advantage of behavioral monitoring to augment Who/What/When/Where policies already in place. There are a number of different products which offer this technology, with varying degrees of effectiveness. And for the more technically inclined, there are many good references. If you are interested, send me an email and I will provide specific references.

-Adrian

2008 Breaches: More or More Reporting? [Emergent Chaos]

Posted: 23 Sep 2008 11:01 AM CDT

Dissent has some good coverage of an announcement from the ID Theft Resource Center, "ITRC: Breaches Blast ’07 Record:"
With slightly more than four months left to go for 2008, the Identity Theft Resource Center (ITRC) has sent out a press release saying that it has already compiled 449 breaches - more than its total for all of 2007.

As they note, the 449 is an underestimate of the actual number of reported breaches, due in part to ITRC’s system of reporting breaches that affect multiple businesses as one incident. This year we have seen a number of such incidents, including Administrative Systems, Inc., two BNY Mellon incidents, SunGard Higher Education, Colt Express Outsourcing, Willis, and the missing GE Money backup tape that reportedly affected 230 companies. Linda Foley, ITRC Founder, informs this site that contractor breaches represent 11% of the 449 breaches reported on their site this year.

I don't have much to add, but I do have a question: are incidents up, or are more organizations deciding that a report is the right thing to do?

IPsec Ideas Applied to Control Systems? [Digital Bond]

Posted: 23 Sep 2008 10:20 AM CDT

Or: “A Few Simple Suggestions for Improving Core Control System Security”

The core precepts of IT security are confidentiality, integrity and authentication - precepts not present in the design of most control systems. There are, however, some simple changes whose implementation would greatly improve the security of control systems, changes which could be readily and easily added to the majority of in-the-field systems.

Currently, in the majority of control systems, if an attacker can penetrate the control system network segment, the lack of these core security principles allows the attacker to “own the system”. These failures are most notably:

*The use of poor authentication schemas, e.g. no passwords, default passwords, and cleartext password exchanges. Authentication is critical in any system as it ensures that a user is who they “appear” to be, and authentication limits the privileges that a user has to modify data and interact with system processes. This inherent weakness in control systems allows attackers a variety of attack pathways, including firmware attacks and simple vectors for “rooting” control system services on both field devices and PC-based servers and workstations.

*No use of data integrity schemas. The lack of integrity controls allows attackers to manipulate the data communicated on the system through man-in-the-middle, IP spoofing and various other attacks.

*No encryption. Though confidentiality of the control system communication may not be of the utmost importance, the encryption of password and key exchanges would serve to greatly improve the security posture of control systems.

The main impetus of this posting is to suggest some simple modifications to control system designs that will correct these faults - changes that could be readily implemented on existing control systems and that would not excessively tax the resources of field devices nor introduce excessive latency into time-critical communications. Many of these suggestions are based on the specifications for the IPsec data authentication (Authentication Header) standard.

Though there has been a lot of discussion of encrypting all communications on control systems, there are various pitfalls to implementing this. Namely, the field devices lack the computational horsepower to perform full packet encryption and decryption. This then requires the use of a bump-in-the-line encryption device, which could be very costly and difficult to deploy on all devices “in the field” of a large installation.

Data confidentiality on control systems via strong encryption is also not essential. Full packet encryption makes debugging communications virtually impossible, and outside of authentication exchanges, what, if any, of the data exchanged needs the protection of encryption? Note that “outside of authentication exchanges” is key.

The IPsec Authentication Header standard describes a robust schema for providing data authenticity and integrity. By following the basis described by IPsec’s Authentication Header we can develop a similar security header for control system protocols. This header would contain:

*Encrypted authentication key, token or password. This is equivalent to the Security Parameters Index (SPI) of the IPsec standard.

*Encrypted packet sequence number.

*Integrity Check Value. Like the IPsec ICV, this is a hash of the packet’s data including the majority of the headers. This hash is recomputed by the receiver, and if the calculated hash does not match the sent hash then the packet has been tampered with.

Each of these fields is a 32 bit value adding 96 bits to the end of a packet.

The addition of this security header to packets in a control system protocol ensures authentication and integrity. If the packet is faked, replayed, or modified by an unauthorized user (attacker), the lack of the proper authentication key will ensure that the packet is rejected. The hash of the packet will also compute incorrectly, doubly assuring that the packet will not be processed and assuring the integrity of the data in the packet. Man-in-the-middle attacks with the specific goal of packet tampering are now moot.
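
Here is a minimal sketch of the idea in Python - an illustration of the proposed 96-bit trailer, not a vetted protocol design. It uses a keyed HMAC truncated to 32 bits as the ICV and sends the key identifier and sequence number in the clear; the suggestion above of encrypting those two fields is omitted for brevity.

# Illustrative sketch of the proposed 96-bit security trailer (key ID, sequence, ICV).
# Not a vetted protocol design; a 32-bit truncated MAC is weak by modern standards.
import hmac
import hashlib
import struct

SHARED_KEY = b"per-device-shared-secret"   # placeholder; real keys need proper management

def append_trailer(payload: bytes, key_id: int, seq: int) -> bytes:
    header = struct.pack(">II", key_id, seq)            # two 32-bit fields
    icv = hmac.new(SHARED_KEY, payload + header, hashlib.sha256).digest()[:4]  # 32-bit ICV
    return payload + header + icv                        # 96 bits appended in total

def verify_trailer(packet: bytes, expected_seq: int) -> bool:
    payload, header, icv = packet[:-12], packet[-12:-4], packet[-4:]
    key_id, seq = struct.unpack(">II", header)
    good_icv = hmac.new(SHARED_KEY, payload + header, hashlib.sha256).digest()[:4]
    # Reject tampered data, wrong keys, and replayed sequence numbers.
    return hmac.compare_digest(icv, good_icv) and seq == expected_seq

if __name__ == "__main__":
    pkt = append_trailer(b"WRITE COIL 17 ON", key_id=7, seq=1042)
    print("accepted" if verify_trailer(pkt, expected_seq=1042) else "rejected")
    tampered = pkt[:3] + b"X" + pkt[4:]
    print("accepted" if verify_trailer(tampered, expected_seq=1042) else "rejected")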

Returning to the topic of authentication: a recent SCADASEC list discussion indicates that the use of no or default passwords is rampant in control systems, a finding already widely known by everyone who has spent any time in these environments. This practice must change. Vendors must provide systems that require new passwords to be chosen during the installation/configuration phase of system deployment, and these passwords must meet some minimum password policy determined by the asset owner.

As asset owners and system maintainers have voiced some concern over operators having sufficient time to type in passwords during critical situations, the use of some type of stored token/key on a fob or USB stick may also be permissible. This would allow for the system to quickly query the key, while still providing for strong authentication.

Strong authentication also allows for better accountability, auditing and forensics as each login and transaction can now be correlated to a specific user.

Passwords exchanged across the network, wherever in the packet they occur, must be encrypted. If the above IPsec-style authentication and integrity schema is employed, then each packet can be signed with its originator’s/user’s password/key.

As this schema only employs 96 bits of encrypted/hashed data, it is a feasible implementation for security on many existing control systems and field devices. This schema removes the ease with which attackers can now infiltrate control systems by using no passwords, default passwords, and sniffing passwords off of the network, and it also provides data integrity by which the tampering and replaying of data can be detected and avoided. This schema also increases the ability to perform real auditing and forensics on control systems.

4 out of 5 choose . . . Safe Access for NAC [StillSecure, After All These Years]

Posted: 23 Sep 2008 08:18 AM CDT

One of the things I am most proud of regarding our Safe Access NAC system is that when major companies in the network infrastructure and endpoint management space look to OEM or go to market with a NAC solution, more often than not they choose Safe Access.  These are companies that have done lots of due diligence on the market and the technology.  When they do a balanced analysis, the StillSecure product wins consistently.  Today we announced that the latest major technology company to choose StillSecure's Safe Access technology to power their own NAC product is Novell.

We are of course very pleased that a company with the distribution footprint of Novell will be entering the NAC market in a strategic partnership with us.  They join some of the biggest names in the network infrastructure and endpoint space in either OEM'ing or co-selling Safe Access.  These types of partnerships and channels are key to our strategy here at StillSecure.

So if you are looking for a great NAC solution, have a look at the Novell NAC product.  You may want to look at some others as well.  But any of the NAC solutions you look at may just have StillSecure technology behind it!


Commtouch Wins Frost & Sullivan Messaging Security Technology Innovation of the Year Award [Commtouch Café]

Posted: 23 Sep 2008 08:03 AM CDT

Please join with me in celebrating the prestigious award that Commtouch was just honored with: The Frost & Sullivan 2008 European Messaging Security Technology Innovation of the Year Award, in recognition of Commtouch’s “superior protection of email inboxes around the globe from unwanted and malicious email.” Award description and document (requires registration) Press Release about the award [...]

Six degrees of separation [StillSecure, After All These Years]

Posted: 23 Sep 2008 07:56 AM CDT

In this age of outsourcing, securing information that gets further and further away from your direct control becomes harder and harder.  The point was driven home again for me today reading a story about a data breach at Grady Memorial Hospital in Atlanta. Unlike other data breaches where a laptop was lost or somebody was able to hack into the hospital's network, this data breach was caused by the simplest, but hardest to stop, method: human error. It seems that some medical information was being transcribed and, instead of being put on a password-protected site (like that is secure, but fodder for another blog post), the confidential information was put on a publicly available web site.

Of course your favorite web spiders indexed the page, and when a doctor did a Google search of his name he was surprised to find this page with confidential notes and information on his patients.  He then notified the hospital, which investigated this apparent HIPAA violation.  What they found, according to the article in the Atlanta Journal-Constitution, was this:

Grady outsourced the job of transcribing the notes to a Marietta firm, Metro Transcribing Inc., which outsourced the work to a Nevada contractor, Renee Lella. Lella, in turn, turned the work over to a firm in India, Primetech Infosystems.

So how is Grady Hospital supposed to have any control over Primetech Infosystems? It is this six degrees of separation that makes outsourcing gone wild a potential security nightmare.  As data gets further away, it gets harder to control.  So next time you are going to outsource, you need to check who your outsourcer outsources to.


Quick Cha Cha Test [Jon's Network]

Posted: 23 Sep 2008 01:17 AM CDT

I heard from Daniel about ChaCha. I’ve never used a mobile search service nor had the slightest clue how they work, but I was impressed enough by the answer they sent Daniel that I figured I would try it out.

I texted “Is the Bald Eagle an endangered species?”. About 5 minutes later I received this:

Yes it is. Have a great day! http://search.chacha.com/u/762jt53n

But the Bald Eagle isn’t an endangered species. ChaCha sent me a false answer.

If you follow the link, you’ll see my conversation along with these two links:

Visit Source Website

View info about your guide Eric W.

I basically outsourced a google search to someone else, who then breezed through a page about the Bald Eagle and somehow figured they were endangered, despite the source website clearly stating:

The Bald Eagle was listed as Endangered in most of the U.S. from 1967 to 1995, when it was slightly upgraded to Threatened in the lower 48 states. The number of nesting pairs of Bald Eagles in the lower 48 states had increased from less than 500 in the early 1960’s to over 10,000 in 2007. They had recovered sufficiently to delist them from Threatened status on June 28, 2007.

I found myself looking for a way to correct my guide’s answer or at least to be able to vote him down or something. I can see the ChaCha service being useful but I would need the research to be accurate. Maybe along with the answer they can indicate the guide’s trust rating. Or perhaps they can referee the answers, sending your question through two guides before they text you the answer. I can definitely imagine outsourcing simple research to ChaCha in that case.

UPDATE: I texted back to ChaCha: “The bald eagle is not an endangered species.”

Melissa C. responded:

Your right! The bald eagle, America’s national symbol, is flying high after spending three decades in recovery. On MORE?

Her source website returns CNN’s 404 page though.

Info Cards Are Awesome; But Are Identifying Parties Really Ready to Do This Right? [Security Provoked]

Posted: 23 Sep 2008 12:43 AM CDT

Perhaps the greatest thing about information cards is that they might finally free us from the purpose-defeating and idiotic practice of using Social Security numbers as a nigh-universal identifier. But it won’t work unless the Identifying Parties find a way to balance security with portability, and can smartly manage distribution, expiration and destruction.

As Information Week reported, Microsoft released a whitepaper calling for all hands on deck to support an “Information Card system that uses an interoperable vendor-neutral framework for identity management and provides end users with direct control of their digital identities.”

Hear, freakin’, hear! Nevertheless, there’s still a ways to go before information cards are really a viable solution - most of the barriers are merely administrative peccadilloes that need to be sorted out by the parties providing and verifying identity data.

A few words about Social Security numbers… As I’ve griped about before, the big problem with using SSNs in identity management is that we tend to use them as though they’re a “something-you-are” - like your fingerprint or your retina, valued for being individual, unchangeable and nigh-impossible to copy - but in actuality an SSN is just a “something-you-know,” like a password….

…Except that an SSN is worse than a password, even worse than useless as an identifier. First of all, it’s easy to brute force: always nine characters (unless it’s truncated, and then you may only need four) and only 10 possibilities for each character - a keyspace of at most 10^9, or one billion combinations. Second of all, unlike a fingerprint (or a driver’s license number, or a bank account number, or really just about any piece of personal data), there is no way to verify that the Social Security Number you’ve provided is actually yours.

Why is it impossible to verify? Because the Social Security Administration will not tell anyone what SSN belongs to whom, apparently because it would jeopardize individuals’ privacy and the security of the numbers.

The Administration won’t even tell Fair Isaac, the company that maintains all those records TransUnion, Equifax and Experian use when calculating your credit score. This is at the root of the whole financial fraud conundrum. Right now, Fair Isaac might very well have credit report data for three different people, with three different names, all using the same Social Security number. Fair Isaac knows that at least two of those are fraudulent, but there’s no way of knowing which one, because the Social Security Administration will not tell them. The only really legitimate way of proving that an SSN is yours is by providing, in person, a hard-copy Social Security card - which by the way never expires, carries none of the anti-spoofing technology now in drivers’ licenses or passports, and isn’t even considered a valid proof of U.S. citizenship when you’re applying for a passport.

The beautiful thing about information cards is that they make this SSN nonsense disappear, by using an entirely different logic (translation: they use logic, period) than that used by the Social Security Administration and those parties that accept/require SSNs as proof of identity.

With an information card, the information itself isn’t very important - in some instances it might actually be entirely unnecessary. The important part is that the information has been verified by a trusted identifying party. An information card from a bank may not need to contain the user’s account number. All it needs to do is provide a digitally signed confirmation from the bank that says “yes I really am the bank, and yes this user really is who he says he is, and yes indeed he does have an account with us [and maybe even] yes he’s got the money to pay for whatever it is he’s purchasing through your Web service.” An information card from the Department of Motor Vehicles might not need to give a gambling Web site your birthdate; it simply needs to say “I’m the DMV and I confirm that this person is at least 18 years old.”

In this way it’s better than the hard copy IDs in your wallet. The relying party gets the information they want, without forcing the user to toss around personal data willy-nilly.
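
As a toy illustration of that “signed claim instead of raw data” idea - not the actual Information Card token format, which is built on signed security tokens rather than the hypothetical JSON blob below - here is a sketch in Python using the third-party cryptography package: the DMV signs a minimal claim, and the relying party verifies the signature without ever seeing a birthdate.

# Toy illustration of a signed claim; NOT the real Information Card / token format.
# Requires the third-party "cryptography" package.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The identifying party (e.g. the DMV) holds a private key; relying parties hold the public key.
dmv_key = Ed25519PrivateKey.generate()
dmv_public = dmv_key.public_key()

# The claim carries only what the relying party needs - no birthdate, no license number.
claim = json.dumps({"issuer": "DMV", "subject": "user-1234", "over_18": True}).encode()
signature = dmv_key.sign(claim)

# Relying party (e.g. the gambling site) checks the signature, then trusts the claim.
try:
    dmv_public.verify(signature, claim)
    print("claim verified:", json.loads(claim))
except InvalidSignature:
    print("claim rejected: signature does not match")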

However, the trouble with information cards is that they’re simply digital files, which means they can be copied by someone other than the identifying party, stored on several devices, and transmitted over insecure channels.

InfoCards, like encryption keys, must be both transmitted and stored securely. (Can one never escape the slings and arrows of key management?) If an attacker gets hold of those InfoCards, then the identifying (verifying) party will, unknowingly, attest to the fraudulent claims of the attacker without the attacker needing to know a single piece of the victim’s personal information.

So first off, the user and the relying party must exchange InfoCards over a secure connection to keep a nefarious man in the middle from snatching a copy of that card. Hopefully encrypted communication of keys is something relying parties are doing already.

Second, those InfoCards must be locked up tight while they’re simply at rest. The “InfoCard” folder residing on a user’s desktop could be encrypted, or the InfoCards could be stored and encrypted using the client machine’s TPM chip (Trusted Platform Module).

The trouble there is that - unlike the cards in your wallet - you’re not willing to strap your machine to your back and carry it with you everywhere you go. (Though some of us - and I’m not naming names - kind of do lug our machines everywhere, even to that concert I went to last week.)

The InfoCards must be portable - so it’s likely they’ll be stored on a smartcard or an encrypted USB stick.

Yet they must also be secure - so the portable device containing the issued information card should be obtained in person, directly from the identifying party. (If you lose it you have to go back in person, just like if you lose your driver’s license or your birth certificate.) Further, privileges to create, copy or edit that information card should be exclusive to the identifying party.

However, you don’t really want to loop a zillion USB sticks alongside your housekeys. So it would be great if you could put them all on one stick. But if you’re not permitted to copy all of those information cards onto one handy USB stick, then you must hope that the identifying parties will agree to place the data onto the USB stick you provide.

It sounds like all this calls for interoperable standards. Hopefully the collaborative work being done by Microsoft and the Information Card Foundation will help us move in that direction.

Infocards are awesome, far better than the current options and entirely worth the effort - but, like many of our most promising security technologies, administrative hurdles must be overcome before the technology can really take off.

(More on this stuff in our Identity 2.0 Summit at CSI 2008: Security Reconsidered.)

Links for 2008-09-22 [del.icio.us] [Anton Chuvakin Blog - "Security Warrior"]

Posted: 23 Sep 2008 12:00 AM CDT

Unplug Your Wall Warts and Save the Planet? [Last In - First Out]

Posted: 22 Sep 2008 06:28 PM CDT

Do wall warts matter?

Let's try something unique. I'll use actual data to see if we can save the planet by unplugging wall transformers.

Step one – Measure wall wart power utilization.

Remember that Volts x Amps = Watts, and Watts are what we care about. Your power company charges you for kilowatt-hours. (One thousand watts for one hour is a kWh).

Start with one clamp-on AC ammeter, one line splitter with a 10x loop (the meter measures 10x actual current) and one wall wart (a standard Nokia charger for an N800).

And we have zero amps on the meter.

OK - That meter is made for measuring big things, so maybe I need a different meter.


Lesson one

Wall warts don't draw much current. They don't show up on the ammeter's scale even when amplified by a factor of 10.

Try again - this time with an in-line multimeter with a 300mA range.


Children - don't try this at home - unless you are holding on to your kid brother and he is properly grounded to a water pipe.

(just kidding.....)

That's better. It looks like we have a couple milliamps current draw.

Try a few more. The ones I have (Motorola, Samsung, Nokia) are all pretty close to the same. Let's use a high estimate of 5mA @ 120v, or about half of a watt.

Similarly, checking various other parasitic transformers, like notebook computer power bricks, yields currents in the low milliamp ranges. When converted to watts, the power draw for each brick is somewhere between one-half and two watts.

To make sure the numbers are rational and that I didn't make a major error somewhere, I did the simplest check of all. I placed my hand on the power bricks. When they are plugged into the wall and nothing is plugged into them, they are not warm. (Warm = watts).

One more sanity check. Plugging three notebook power supplies and three phone power supplies into a power strip shows about 30mA @ 120v for all six bricks, which is under 4 watts, or less than a watt each. My measurements are rough (I don't have a proper milliamp meter), but for estimates for a blog that nobody actually reads, they should be close enough.

So let's pretend that I want to save the planet, and that unplugging power bricks is the way I'm going to do it. I'll need to periodically plug them in to charge whatever they are supposed to charge. Let's assume they'll be plugged in for 4 hours per day and unplugged 20 hours per day. If I have a half dozen power bricks, I'll save around 5 watts x 20 hours = 100 watt-hours per day, or the amount of electricity that one bright light bulb uses in one hour. That would add up to 35kWh (kilowatt-hours) per year. Not bad, right?

Until you put it into perspective.

Perspective

Let's take the other end of the home appliance spectrum. The clothes dryer (clothes tumbler to those on the damp side of the pond). That one is a bit harder to measure. The easiest way is to open up the circuit breaker box and locate the wires that go to the dryer.

Hooking up to the fat red wire while the dryer is running shows a draw of about 24 amps @ 220 volts. I did a bit of poking around (Zzzztttt! Oucha!!.....Damn...!!!) and figured out that the dryer, when running on warm (versus hot or cold), uses about 20 amps for the heating element and about 4 amps for the motor. The motor runs continuously for about an hour per load. The heating element runs at about a 50% duty cycle for the hour that the dryer is running on medium heat.

Assume that we dry a handful of loads per week and that one load takes one hour. If the motor runs 4 hours/week and the heating element runs half the time, or two hours per week, we'll use about a dozen kWh per week, or about 600 kWh per year. That's about the same as 100 wall warts.

How about doing one less load of clothes in the dryer each week? You can still buy clothes lines at the hardware store - they are over in the corner by the rest of the End-of-Life merchandise, and each time that you don't use your clothes dryer, you'll save at least as much power as a wall wart will use for a whole year.

Let's do another quick check. Let's say that I have a small computer that I leave running 24 hours per day. Mine (an old SunBlade 150 that I use as a chat and file server) uses about 60 watts when powered up but not doing anything. That's about 1.4 kWh per day or about 500kWh per year, roughly the same as my clothes dryer example and roughly the same as 100 wall warts. Anyone with a gaming computer is probably using twice as much power. So how about swapping it out for a lower powered home server?

Notebooks, when idling with the screen off, seem to draw somewhere between 15 and 25 watts. (Or at least the three that I have here at home are in that range.) That's about half of what a low end PC draws and about the same as 25 wall warts. So using a notebook as your home server will save you (and the planet) far more than a handful of wall warts. And better yet, the difference between a dimly lit notebook screen and a brightly lit one is about 5 watts. Yep - that's right, dimming your screen saves more energy than unplugging a wall wart.
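
If you want to check the arithmetic yourself, here is a quick Python sketch that simply reruns the estimates from this post (all wattages and hours are the rough figures measured above, not precise data):

# Rerun of the rough estimates above; all inputs are this post's own ballpark figures.
HOURS_PER_YEAR = 24 * 365

# Six wall warts at ~5 W total, unplugged 20 hours per day.
wall_warts_kwh = 5 * 20 * 365 / 1000

# Dryer: 4 A motor for 4 h/week, 20 A heater at 50% duty (2 h/week), both at 220 V.
dryer_kwh = (4 * 220 * 4 + 20 * 220 * 2) / 1000 * 52

# Always-on small server at ~60 W, and a notebook server at ~20 W.
server_kwh = 60 * HOURS_PER_YEAR / 1000
notebook_kwh = 20 * HOURS_PER_YEAR / 1000

print(f"unplugging six wall warts: {wall_warts_kwh:6.0f} kWh/year saved")
print(f"clothes dryer:             {dryer_kwh:6.0f} kWh/year")
print(f"always-on 60 W server:     {server_kwh:6.0f} kWh/year")
print(f"always-on 20 W notebook:   {notebook_kwh:6.0f} kWh/year")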

Make this easier!

How about a quick and dirty way of figuring out what to turn off without spending a whole Sunday with ammeters and spreadsheets? It's not hard.

If it is warm, it is using power.

The warmer it is, the more power it is using. (Your laptop is warm when it is running and cold when it is shut off, right?). And if you can grab onto it without getting a hot hand, like you can do with a wall wart, (and like you can't do with an incandescent light bulb) it isn't using enough electricity to bother with.

The CO2

So why do we care? Oh yeah - that global warming thing. Assuming that it's all about the CO2, we could throw a few more bits into the equation. Using the CO2 calculator at the National Energy Foundation in the UK and some random US Dept of Energy data, and converting wall warts to CO2 at a rate of 6kWh per wall wart per year and 1.5lbs of CO2 per kWh, it looks like you'll generate somewhere around 4kg of CO2 per year for each wall wart, +/- a kg or two, depending on how your electricity was generated.

Compare that to something interesting, like driving your car. According to the above NEF calculator and other sources, you'll use somewhere around a wall-wart-year's worth of CO2 every few miles of driving. (NEF and Sightline show roughly 1kg of CO2 every two miles of driving). So on my vacation this summer I drove 6000 miles and probably used something like 3000kg of CO2. That's about 700 wall-wart-year equivalents (+/- a couple hundred wwy's).

Take a look at a picture. (Or rather, take a look at a cheesy Google chart with the axis labels in the wrong order....)

[Google chart: estimated annual CO2 - wall warts vs. driving]

Can you see where the problem might be? (Hint - It's the long bright blue bar)

Obviously my numbers are nothing more than rough estimates. But they should be adequate to demonstrate that if you care about energy or CO2, wall warts are not the problem and unplugging them is not the solution.

Should you unplug your wall warts?

You can do way better than that!


Disclaimer: No wall warts were harmed in the making of this blog post. Total energy consumed during the making of the post: five 23-watt CFL bulbs for 2 hours = 230 watt-hours; five 25-watt incandescent bulbs for 1/2 hour = 62.5 watt-hours; one 18-watt notebook computer for 3 hours = 54 watt-hours; one 23-watt notebook for 3 hours = 69 watt-hours; total of 415 watt-hours, or 28.8 wall-wart-days. Any relationship between numbers in this blog post and equivalent numbers in the real world is coincidental. See packaging for details.

Our Take On The McAfee Acquisitions [securosis.com]

Posted: 22 Sep 2008 06:23 PM CDT

I’ll be honest- it’s been a bit tough to stay up to date on current events in the security world over the past month or so. There’s something about nonstop travel and tight project deadlines that isn’t very conducive to keeping up with the good old RSS feed, even when said browsing is a major part of your job. Not that I’m complaining about being able to pay the bills.

Thus I missed Google Chrome, and I didn’t even comment on McAfee’s acquisition of Reconnex (the DLP guys). But the acquisition gods are smiling upon me, and with McAfee’s additional acquisition of Secure Computing I have a second shot to impress you with my wit and market acumen.

To start, I mostly agree with Rothman and Shimel. Rather than repeating their coverage, I’ll give you my concise take, and why it matters to you.

  1. McAfee clearly wants to move into network security again. SC didn’t have the best of everything, but there’s enough there they can build on. I do think SC has been a bit rudderless for a while, so keep a close eye on what starts coming out in about 6 months to see if they are able to pull together a product vision. McAfee’s been doing a reasonable job on the endpoint, but to hit the growth they want the network is essential.
  2. Expect Symantec to make some sort of network move. Let’s be honest: Cisco will mostly cream both these guys in pure network security, but that won’t stop them from trying. They (Symantec and McAfee) actually have some good opportunities here - Cisco still can’t figure out DLP or other non-pure network plays, and with virtualization and re-perimeterization the endpoint boys have some opportunities. Netsec is far from dead, but many of the new directions involve more than a straight network box. I expect we’ll see a passable UTM come out of this, but the real growth (if it’s to be had) will be in other areas.
  3. The combination of Reconnex, CipherTrust, and Webwasher will be interesting, but likely take 12-18 months to happen (assuming they decide to move in that direction, which they should). This positions them more directly against Websense, and Symantec will again likely respond with combining DLP with a web gateway since that’s the only bit they are missing. Maybe they’ll snag Palo Alto and some lower-end URL filter.
  4. SC is strong in federal. Could be an interesting channel to leverage the SafeBoot encryption product.

What does this mean to the average security pro? Not much, to be honest. We’ll see McAfee and Symantec moving more into the network again, likely using email, DLP, and mid-market UTM as entry points. DLP will really continue to heat up once the McAfee acquisitions are complete and they start the real product integration (we’ll see products before then, but we all know real integration happens long after the pretty new product packaging and marketing brochures).

I actually have a hard time getting overly excited about the SC deal. It’s good for McAfee, and we’ll see some of those SC products move back into the enterprise market, but there’s nothing truly game changing. The big changes in security will be around data protection/information centric security and virtualization. The Reconnex deal aligns with that, but the SC deal is more product line filler.

But you can bet Webwasher, CipherTrust, and Reconnex will combine. If it doesn’t happen within the next year and a half, someone needs to be fired.

-Rich

Is PCI DSS "Too Prescriptive"? [Anton Chuvakin Blog - "Security Warrior"]

Posted: 22 Sep 2008 05:43 PM CDT

I did this fun panel on PCI compliance at SecureWorld Bay Area the other week. What is interesting is that almost every time there is a discussion about PCI DSS, somebody crawls out of the woodwork and utters the following: "PCI is too prescriptive!", as if it were a bad thing (e.g. I mentioned it before here).

I used to react to this with "Are you stupid?! PCI being prescriptive is the best thing since sliced cake :-) Finally, there is some specific guidance for people to follow and be more secure!" BTW, in many cases end users who have to comply with PCI DSS still think it is "too fuzzy" and "not specific enough" (e.g. see "MUST-DO Logging for PCI"); and they basically ask for  "a compliance TODO list." (also see this and especially this on compliance checklists)

But every time it happens, I can't help but think - why do people even utter such utter heresy? :-) And you know what? I think I got it!

When people say "PCI is too prescriptive," they actually mean that it engenders "checklist mentality" and leads to following the letter of the mandate blindly, without thinking about WHY it was put in place (to protect cardholder data, share risk/responsibility, etc). For example, it says "use a firewall" and so they deploy a shiny firewall with a simple "ALLOW ALL<->ALL" rule (an obvious exaggeration - but you get the point!) Or they have a firewall with a default password unchanged... In addition, the proponents of "PCI is too prescriptive" tend to think that fuzzier guidance (and, especially, prescribing the desired end state AND not the tools to be installed) will lead to people actually thinking about the best way to do it.

So the choices are:

  1. Mandate the tools (e.g. "must use a firewall") - and risk "checklist mentality", resulting in BOTH insecurity and "false sense" of security.
  2. Mandate the results (e.g. "must be secure") -  and risk people saying "eh, but I dunno how" - and then not acting at all, again leading to insecurity.

Pick your poison! Isn't compliance fun? What is the practical solution to this? I personally would take pill #1 over pill #2 (and that is why I like PCI that much), but with some pause to think, for sure. I think organizations with less mature security programs will benefit at least a bit from #1, while those with more mature programs might "enjoy" #2 more...

BTW, this post was originally called "Isn't Compliance Fun?!" I had a few fierce debates with some friends, and all of them piled on to convince me that "compliance is boring, while security is fun!" The above does illustrate that there are worthy and exciting intellectual challenges in the domain of regulatory compliance. It is not [only] a domain of minimalists (who just "want the auditor to go away") and mediocrity, as some think. What makes security fun - the people aspect, the ever-changing threat landscape, cool technology, high uncertainty, even risk - also applies to compliance.

So, need a cool marketing slogan BUT hate "making compliance easy"?  Go for "Making Compliance Fun!" :-)

All posts on PCI - some are fun:-)

SOURCE Boston opens Call for Papers for 2009 event [SOURCE Conference Blog]

Posted: 22 Sep 2008 12:35 PM CDT

SOURCE Boston 2008 featured some fantastic speakers, which made for a compelling event for our attendees and even our board of advisors. In designing the scope and content for SOURCE Boston 2009, we realized that while we collectively have a great team of resources for filling speaking sessions, we owe it to our attendees and our sponsors to cast a wider net and try to bring in the best speaking talent available. To that end, today we’ve opened up a Call for Papers for next year’s event.

We’re still going to hold true to our focus on issues and trends relevant to the business and technical scope of security, and we remain application security focused. CFPs will be reviewed by our advisory board, and speakers will be notified before the end of the year.

The CFP summarizes the details of what we are seeking, but what I will say is that we are not looking for sales-oriented or vendor-pitch presentations. Those won’t even be considered. We want talks that focus on the most pressing business and technology trends in security. Our audience last year included C-level executives, venture capital firms, security researchers and engineers. While there are many people in attendance who make buying decisions, the speaking sessions at SOURCE Boston are not the time to reach them. We believe in keeping the content as pure and educational as possible.

Check out SOURCE Boston 2009’s current speaker list to see examples of what we’re seeking.

– Stacy Thayer

On irresponsible disclosure and cosmic black holes [ImperViews]

Posted: 22 Sep 2008 11:19 AM CDT

Last week a security researcher published a "zero-day vulnerability" in a specific CCTV control server (though you can hardly call something a zero day when it can be found through Google). The expected fire from the CCTV vendor did not fail to arrive shortly after the disclosure.

This is no longer an isolated incident but rather a growing trend this past year: I am seeing more and more "full disclosures" prior to vendor patching. Believing that this is indeed becoming a trend, I had to give it a second thought. The only explanation I came up with is that this is a strong counter-reaction to practices that have been established in recent years.

After a wild "full disclosure" period in the early 2000s, researchers and mainstream mailing-lists have agreed on taking the path of responsible disclosure. However, this evolved into a situation in which hardly any details are given for any vulnerability (even after a vendor patch is released) and vulnerabilities reported to vendors remain undisclosed to users for years! 

Don't get me wrong - I am completely against irresponsible disclosure, but from the point of view of a researcher this is extremely frustrating. Having discovered more than a handful of vulnerabilities and reported them to vendors, I find that some vendors are so devoted to responsible disclosure that they don't even bother to get back to you with status updates on the vulnerability, nor with a notification of when it has been patched (if at all). Even more troublesome is that some (well established) vendors do not even have proper procedures in place for vulnerability management (at least when it comes to some of their products). Without mentioning names - a critical vulnerability we found 2 years ago in a commercial database product has not received any attention beyond some initial acknowledgment. Needless to say, at least two versions released by the vendor since then are still vulnerable.

Actually, the frustration security researchers feel is not even a secret - it is shouted out and written on the walls (or rather, on the mailing lists). In a recent example, Kevin Finisterre disclosed a zero-day vulnerability in SCADA systems, but only after the software vendor downplayed its criticality. I noticed an article a week later reporting that the vendor had actually removed its previous advisory and replaced it with one explaining the severity of the bug!

Speaking of SCADA systems, I'm sure happy that hackers were only able to deface the Large Hadron Collider Web site last week. I would be terrified to think what would happen had they hacked into the control system itself, starting up a control sequence that would eventually generate cosmic black holes that would swallow us all in what would be recorded as the most notable hacking incident in the Universe. It would be the one great leap for humanity the physicists did not consider!

Amichai

A short recap of Black Hat [SOURCE Conference Blog]

Posted: 22 Sep 2008 06:59 AM CDT

Innovative income-generation system [mxlab - all about anti virus and anti spam]

Posted: 22 Sep 2008 04:16 AM CDT


When you receive a message with the subject “Innovative income-generation system which YOU ordered”, referring to a Unique Income Generation Toolkit (UIGT) and with the file Instruction.zip attached, do not fall for it. The virus is known as Worm.Win32.AutoRun.ohz by Kaspersky or Trojan.Kobcka.FR by Bitdefender.

Dear Valued Customer,

Order ID: 74347
Order Total: $59.99

Description: Innovative income-generation system

We are sending you the Unique Income Generation Toolkit (UIGT) developed by the Institute of Innovative Business and Financial Technologies (IIBFT), which you ordered on 9/21/2008.

Your unique UIGT activation code is: DAAAA3E5-B6

Please take a look at the instruction and get acquainted with the activation system, which is strictly confidential.

Please find the list of the company’s addresses and phone numbers along with further information on UIGT in the enclosed instruction.

______________________________

If you believe this message has reached you by mistake, please contact the support service via phone or e-mail provided in the same instruction.

Respectfully,
Manager (IIBFT)
Andrew Long

The malware can be described as a debugger that is injected into the execution sequence of a target application. This 'debugger' is then run every time an application is started on an infected computer.

The file %ProgramFiles%\Microsoft Common\wuauclt.exe is created, the Windows registry is modified, and the virus can connect to servers on the internet such as http://*****.ru/ld.php?v=1&rs=13441600&n=1&uid=1.
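
The "debugger injected into the execution sequence" behavior described above reads like the classic Image File Execution Options (IFEO) hijack, where malware registers itself as the "Debugger" for other executables - that interpretation is my assumption, not something stated in the write-up. Under that assumption, here is a minimal, read-only Python sketch (Windows only) that lists IFEO Debugger entries, so an unexpected one - say, pointing at wuauclt.exe under %ProgramFiles%\Microsoft Common - can be spotted by hand.

import winreg

# Image File Execution Options: a "Debugger" value under a per-executable
# subkey makes Windows launch that program whenever the named executable runs.
IFEO = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, IFEO) as ifeo:
    index = 0
    while True:
        try:
            exe_name = winreg.EnumKey(ifeo, index)   # next per-executable subkey
        except OSError:
            break                                    # no more subkeys
        index += 1
        try:
            with winreg.OpenKey(ifeo, exe_name) as subkey:
                debugger, _ = winreg.QueryValueEx(subkey, "Debugger")
                print(f"{exe_name}: Debugger = {debugger}")
        except OSError:
            continue                                 # no Debugger value set

Most machines will print nothing, or only legitimate developer tools; anything unfamiliar deserves a closer look with a proper antivirus scan.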

MX Lab has intercepted a few samples of this virus but there's no outbreak - at least on our systems at the time of writing. Only 9 of the 36 antivirus engines detect the virus, so it's important not to open the attachment and run the exe.

Virus Total permalink and MD5: 2ddc320f9b9e1302696166e8372072ba.

