Friday, May 9, 2008

Spliced feed for Security Bloggers Network

Render unto Caesar the things which are Caesar's ... [StillSecure, After All These Years]

Posted: 09 May 2008 07:37 AM CDT

. . . and unto security vendors the things that deal with security. So that seems to be what Citrix CTO Simon Crosby is saying in this audio interview on Search Security with Rob Westervelt. I was all set to write an article on the operationalization of security when I noticed that the virtuoso of virtual security, Hoff, beat me to the punch with his call of BS on Simon.

Hoff is right on. We can't afford the same old, same old of letting the OS vendor, the network vendor, or in this case the virtual machine vendor build the product and having a separate security industry bolted on to clean up the mess. People want secure virtualization; they don't want to think about what they have to buy and install to make their virtual machines secure. They want security designed in from the beginning. I am surprised that Simon Crosby would even suggest this; it is frankly so 2001. Let's hope someone over at Citrix takes a cue from the VMsafe program and does a little more thinking about security beforehand. We can't afford any other option.

Microsoft Security Patch Advance Notification - May 2008 [Sunnet Beskerming Security Advisories]

Posted: 09 May 2008 03:49 AM CDT

As the second Tuesday of the month will be with us next week, Microsoft have provided advance notice of the patches that they expect to release on that day.

This month there are four patches scheduled for release, three Critical patches, and one Moderate. The three Critical patches address remote code execution risks in Office (2) and Windows (1), with the Moderate patch addressing a Denial of Service vulnerability affecting Windows Live OneCare, Microsoft Antigen, Microsoft Windows Defender, and Microsoft Forefront Security. It is important to note for OS X users that Microsoft will be issuing Critical updates for Office 2004 and 2008.

What is probably most surprising is the patch to be released for the Microsoft Jet Database Engine, a technology that was widely reported to be receiving no further updates from Microsoft.

Learning an email marketing lesson…the hard way [Commtouch Café]

Posted: 09 May 2008 12:00 AM CDT

It’s serious egg-on-my-face time. Let me explain. To track our interaction with partners and potential partners, we use a well-known CRM system. As I have mentioned in a previous post, we try to be very careful to only email people who have requested to receive our mail. This is [...]

More on Application Security Metrics [Security Retentive]

Posted: 08 May 2008 11:57 PM CDT

Eric Bidstrup of Microsoft has a blog entry up titled "How Secure is Secure?" In it he makes a number of points related, essentially, to measuring the security of software and what the appropriate metrics might be.

I'd been asking the Microsoft guys for a while whether they had any decent metrics to break down the difference between:
  • Architectural/Design Defects
  • Implementation Defects
I hadn't gotten good answers up to this point because measuring those internally during the development process is a constantly moving target. If your testing methodology is always changing, then it's hard to say whether you're seeing more or fewer defects of a given type than before, especially as a percentage. That is, if you weren't catching a certain class of issue with the previous version of a static analysis tool but now you are, it's hard to correlate the results to previous versions of the software.
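To make the baseline problem concrete, here is a toy illustration (the numbers are invented for the example, not anyone's real data) of how a change in detection capability alone can shift the apparent ratio of implementation to design defects between releases:

```python
# Toy illustration: defect counts per class for two releases.
# Release 2 was scanned with a newer static analysis tool that
# catches more implementation bugs, so the raw comparison is skewed.
def class_shares(defects):
    """Return each defect class's share of the total, as a fraction."""
    total = sum(defects.values())
    return {cls: count / total for cls, count in defects.items()}

release_1 = {"design": 20, "implementation": 80}    # scanned with old tool
release_2 = {"design": 20, "implementation": 120}   # new tool finds more

s1 = class_shares(release_1)
s2 = class_shares(release_2)

# Implementation defects appear to grow from 80% to ~86% of all
# findings, even though the code may be no worse: the detection
# capability changed, not (necessarily) the software.
print(f"release 1: {s1['implementation']:.0%} implementation")
print(f"release 2: {s2['implementation']:.0%} implementation")
```

The point of the sketch is only that the percentage moved without the software changing at all, which is exactly why release-over-release defect-class ratios are hard to trust.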

Eric says:
Microsoft has been releasing security bulletins since 1999. Based on some informal analysis that members of our organization have done, we believe well over 50% of *all* security bulletins have resulted from implementation vulnerabilities and by some estimates as high as 70-80%. (Some cases are questionable and we debate if they are truly "implementation issues" vs. "design issues" – hence this metric isn't precise, but still useful). I have also heard similar ratios described in casual discussions with other software developers.
In general I think you're likely to find this trend across the board. Part of the reason, though, is that implementation defects are generally easier to find and exploit. Exploiting input validation failures that result in buffer overflows is a lot easier than complicated business logic attacks, multi-step attacks against distributed systems, etc.

We haven't answered whether there are more architectural/design defects or implementation defects, but from an exploitability standpoint, it's fairly clear that implementation defects are probably the first issues we want to fix.

At the same time, we do need to balance that against the damage that can be done by an architectural flaw, and just how difficult they can be to fix, especially in deployed software. Take as an example Lanman authentication. Even if implemented without defects, the security design isn't nearly good enough to resist exploitation. Completely removing Lanman authentication from Windows and getting everyone switched over to a stronger replacement has taken an extremely long time in most businesses because of legacy deployments, etc. So, as much as implementation defects are the ones generally exploited and that need patching, architectural defects can in some cases cause a lot more damage and be harder to address/remediate once discovered/exploited.

Another defect to throw into this category would be something like WEP. Standard WEP implementations aren't defect ridden. They don't suffer from buffer overflows, race conditions, etc. They suffer from fundamental design defects that can't be corrected without a fundamental rewrite. The number of attacks resulting from WEP probably isn't known. Even throwing out high profile cases such as TJ Maxx and Home Depot, I'm guessing the damage done is substantial.

So far then things aren't looking good for using implementation defects as a measuring stick of how secure a piece of software is. Especially for widely deployed products that have a long lifetime and complicated architecture.

Though I suppose I can come up with counter-examples as well. SQL Slammer, after all, was a worm that exploited a buffer overflow in MS SQL Server via a function that was open by default to the world. It was one of the biggest worms ever (if not the biggest; I stopped paying attention years ago) and it exploited an implementation defect, though one that was exploitable because it was part of the unauthenticated attack surface of the application - a design defect.

All this really proves is that determining which of these types of defects to measure, prioritize, and fix is a tricky business and, as always, your mileage may vary.

As Eric clearly points out, the threat landscape isn't static either. So, what you think is a priority today might change tomorrow. And it's different for different types of software. The appropriate methodology for assessing and prioritizing defects for a desktop application is substantially different from that for a centrally hosted web application, with differences related to exploitability, time-to-fix, etc.

More on that in a post to follow.

File System Audit [Matt Flynn's Identity Management Blog]

Posted: 08 May 2008 09:57 PM CDT

Anton Chuvakin of LogLogic posted today on some of the intricacies of Windows native file system audit. If you have a need for monitoring access or changes to files, beware of the do-it-yourself method. Chuvakin provides insight on some of the challenges.

One of the things that NetVision engineers brought to market long before I joined is a very slick file system monitoring solution. Slick mostly because you have extreme control over which events you want to capture. You can filter on server, folder, file, person acting, event type (read, create, modify, delete, ACL or attribute changes) – you can even specify times of day to activate a particular policy. And you can have different policies for different files or folders. You can also choose what to do when an event occurs. For some events, write it to a database or file. For others, send an email too or kick off another process. None of it relies on system logs and the reports are delivered in a nice web UI running on Crystal Reports. So the business people get relevant results without having to understand the tech stuff.
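The policy model described above can be sketched in a few lines. To be clear, this is a hypothetical representation of that kind of filtering, not NetVision's actual product API; the field names and the sample event are invented for illustration:

```python
from datetime import time

# Hypothetical audit policy: filter on server, path, actor, event
# type, and time of day, then decide what actions fire on a match.
POLICY = {
    "server": "FS01",
    "path_prefix": r"\\FS01\Finance",
    "actors": None,                             # None = any user
    "event_types": {"delete", "acl_change"},
    "active_hours": (time(0, 0), time(6, 0)),   # flag off-hours activity
    "actions": ["log_to_db", "email_admin"],
}

def matches(policy, event):
    """Return True if an audit event falls under the policy's filters."""
    start, end = policy["active_hours"]
    return (
        event["server"] == policy["server"]
        and event["path"].startswith(policy["path_prefix"])
        and (policy["actors"] is None or event["actor"] in policy["actors"])
        and event["type"] in policy["event_types"]
        and start <= event["time"] <= end
    )

event = {"server": "FS01", "path": r"\\FS01\Finance\payroll.xls",
         "actor": "jdoe", "type": "delete", "time": time(2, 30)}
if matches(POLICY, event):
    print("actions:", POLICY["actions"])   # hand off to DB writer, mailer
```

The design point is that filtering happens at policy-evaluation time, before anything is written, which is what keeps the output relevant to business people instead of a raw system-log dump.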

Some of our customers even use our filtering to narrow down the events that are then fed into an enterprise security event management or log management system (like LogLogic). It's File System audit made easy.

It's about the kids, stupid [StillSecure, After All These Years]

Posted: 08 May 2008 09:17 PM CDT

Matt Asay has a blog up on "OLPC's capitulation to Windows...". In it Matt waxes poetic about what a mistake Nicholas Negroponte is making by embracing Windows for the OLPC laptop project. Matt points to Groklaw, Richard Stallman and the rest of the anti-Redmond revolutionaries who want to see Negroponte tarred and feathered and question his vision. Hey, let's face it, the "m" word is toxic to that crowd. But I really think Matt is just plain twisted about this and about what OLPC is really about. Here is what Matt has to say: "OLPC is rather about liberating developing nations from their vassal status that continually keeps them at the mercy of the pricing and licensing of Microsoft and other proprietary vendors." No Matt, that is not what OLPC is all about, and that is what the problem is! OLPC is about getting a laptop in the hands of every kid in the world. It is about giving these kids a chance to learn and grow up to compete in the global economy with the same tools that kids in this country have. It has nothing to do with your views of Microsoft being a 21st century imperialistic empire.

Matt, both of my boys have OLPC laptops; I know what it is like using them. The Sugar interface is tough. As Negroponte says, it is an amorphous blob. The command line structure of the laptop made it hard for me to retrieve and install files. File names are truncated and kept in non-standard directories. When kids are learning Windows in school, this is difficult for them. The laptops are a tool for them to learn; it shouldn't be about learning the tool. It needs to be more mainstream for kids across the world to be able to leverage it. It needs to be more standards based. I don't care if it is open source standards or closed source standards, but it has to be better. Windows will give it that.

But ultimately, Matt, I feel that the OLPC project was hijacked by the open source movement as a "Trojan horse" to overthrow Windows. If that was your intention, great. Me, I was a lot more humble and noble in what I thought it was. I thought it was about getting a computer in kids' hands and having them learn and contribute.

GooglePOPs - Cloud Computing and Clean Pipes: Told Ya So... [Rational Survivability]

Posted: 08 May 2008 08:42 PM CDT

In July of last year, I prognosticated that Google, with its various acquisitions, was entering the security space with the intent to not just include security as a browser feature for search and the odd GoogleApp, but to make it a revenue-generating service delivery differentiator using SaaS via applications and clean pipes delivery transit in the cloud for enterprises.

My position even got picked up elsewhere. By now it probably sounds like old news, but...

Specifically, in my post titled "Tell Me Again How Google Isn't Entering the Security Market? GooglePOPs will Bring Clean Pipes..." I argued (and was ultimately argued with) that Google's $625M purchase of Postini was just the beginning:

This morning's news that Google is acquiring Postini for $625 Million dollars doesn't surprise me at all and I believe it proves the point.

In fact, I reckon that in the long term we'll see the evolution of the Google Toolbar morph into a much more intelligent and rich client-side security application proxy service whereby Google actually utilizes client-side security of the Toolbar paired with the GreenBorder browsing environment and tunnel/proxy all outgoing requests to GooglePOPs.

What's a GooglePOP?

These GooglePOPs (Google Points of Presence) will house large search and caching repositories that will -- in conjunction with services such as those from Postini -- provide a "clean pipes" service to the consumer. Don't forget the utility services that recent acquisitions such as GrandCentral and FeedBurner bring; too bad that eBay snatched up Skype...

Google will, in fact, become a monster ASP.  Note that I said ASP and not ISP.  ISP is a commoditized function.  Serving applications and content as close to the user as possible is fantastic.  So pair all the client side goodness with security functions AND add GoogleApps and you've got what amounts to a thin client version of the Internet.

Here's where we are almost a year later.  From the Ars Technica post titled "Google turns Postini into Google Web Security for Enterprise:"

The company's latest endeavor, Google Web Security for Enterprise, is now available, and promises to provide a consistent level of system security whether an end-user is surfing from the office or working at home halfway across town.

The new service is branded under Google's "Powered by Postini" product line and, according to the company, "provides real-time malware protection and URL filtering with policy enforcement and reporting. An additional feature extends the same protections to users working remotely on laptops in hotels, cafes, and even guest networks." The service is presumably activated by signing in directly to a Google service, as Google explicitly states that workers do not need access to a corporate network.

The race for cloud and secure utility computing continues with a focus on encapsulated browsing and application delivery environments, regardless of transport/ISP, starting to take shape.   

Just think about the traditional model of our enterprise and how we access our resources today turned inside out as a natural progression of re-perimeterization.  It starts to play out on the other end of the information centricity spectrum.

What with the many new companies entering this space and the likes of Google, Microsoft and IBM banging the drum, it's going to be one interesting ride.


Citrix's Crosby & The Mother Of All Cop-Outs [Rational Survivability]

Posted: 08 May 2008 06:51 PM CDT

In a recent article, Simon Crosby, the CTO of Citrix, suggests that "Virtualization vendors [are] not in the security business."

Besides summarizing what is plainly an obvious statement of fact regarding the general omission of integrated security (outside of securing the hypervisor) from most virtualization platforms, Crosby's statement simply underscores the woeful state we're in:

While virtualization vendors will do their role in protecting the hypervisor, they are not in the business of catching bad guys or discovering vulnerabilities, said Simon Crosby, chief technology officer of Citrix Systems.

Independent security vendors will play a critical role in protecting virtual environments, he said. "The industry has already decided a long time ago that third party vendors are required to secure any platform," Crosby said. In this interview, Crosby agrees that using virtual technology introduces new complexities and security issues.

He said the uncertainties will be addressed once the industry matures.

I'm sure it's reasonable to suggest that nobody expects virtualization platform providers to "...catch bad guys," but I do expect that they employ a significant amount of resources and follow an SDLC to discover vulnerabilities -- at least in their software.

Further, I don't expect that the hypervisor should be the place in which all security functionality is delivered, but simply transferring the lack of design and architecture forethought from the hypervisor provider to the consumer by expecting someone else to clean up the mess is just, well, typical.

I love the last line. What a crock of shit. We've seen how well this approach has worked with operating system vendors in the past, so why shouldn't the "next generation" of OS vendors -- virtualization platform providers -- follow suit and not provide for a secure operating environment?

Let's see: Microsoft is investing hugely in security. Cisco is too. Why would the other tip of the trident want to? VMware's at least taking steps to deliver a secure hypervisor as well as APIs to help secure the VMs that run atop it. Where's Citrix in this... I mean, besides late and complaining they weren't first?

So, in trade for the "open framework for security ecosystem partnership" cop-out, we get to wait for the self-perpetuating security industry hamster wheel of pain to come back full circle. 

The fact that the "industry" has "decided" that "third party vendors are required to secure any platform" simply points to the ignorance, arrogance and manifest destiny we endure at the hands of those who are responsible for the computing infrastructure we're all held hostage with. 

Just so I understand the premise: the security industry (or is it the virtualization industry?) has decided that the security industry, instead of the OS/infrastructure (virtualization) vendors, is the one responsible for securing the infrastructure -- and thus our businesses!? What a shocker. Way to push for change, Simon.

I can't even describe how utterly pissed off these statements make me.


Symantec Marketing BS Sucks [Amrit Williams Blog]

Posted: 08 May 2008 05:47 PM CDT

We all know that corporate marketing tends to suck, but this nonsense from Vontu is bordering on the ridiculous. From the Symantec/Vontu website (here):

The Vontu Enforce Platform enables business units to actively participate in their company’s Data Loss Prevention practice and has delivered these results to FORTUNE 100 companies:

* 90% reduction in number of data loss incidents 10 days after deploying automated sender notification
* Over 99.7% reduction in the total number of customer records exposed over a two-year period
* Reduced incident flow “down to a trickle.”

90% reduction in data loss incidents 10 days after deploying - wtf? (a) Isn't the remaining 10% of data loss still really bad, and (b) how were they able to tell what the baseline data loss was? Total, complete and utter BS!

Over 99.7% reduction in the total number of customer records exposed over a two-year period - wtf? Seriously, wtf?

Reduced incident flow “down to a trickle” - wtf is a “trickle”?

All I can say is ‘Wow’ [.:Computer Defense:.]

Posted: 08 May 2008 02:57 PM CDT

I read this today on a local news site and the only thought that went through my head was "wow"... Essentially a malicious individual hacked the Epilepsy Foundation's website and posted hundreds of rapidly flashing images. While I don't condone it... I can understand why people think they should target websites for profit or pride... but this? It's just plain mean... It makes me wonder what the world is coming to.

Update: Apparently this is old news and I'm a little slow finding out about it.

Social networks like Twitter are also a target for spammers [mxlab - all about anti virus and anti spam]

Posted: 08 May 2008 02:23 PM CDT

Popular social networks are facing a difficult time to stop spammers from abusing their networks. Twitter, a micro-blogging network site where you can publish text updates via SMS, instant messaging, email, Twitter's website and third party applications, is one of the many.

They recently started to blacklist people who spam other members and are posting the results on the Twitter Blacklist. At this time there are already 378 blacklisted members on that site, and the number is growing.


Improved Security on the Identity Infrastructure [Matt Flynn's Identity Management Blog]

Posted: 08 May 2008 02:02 PM CDT

I wanted to expand on my earlier post about Extending the ROI of Provisioning. Here's a visual aid to help the discussion:

There's nothing new in this illustration. It simply shows that the provisioning engine connects to multiple identity data stores. As we know, provisioning systems have the potential to do a very good job of providing workflow and business rules around the creation and management of user accounts across multiple systems. They may even have some additional capabilities around Separation of Duties enforcement, user attestation, user self-service password management, reporting on rights (based on their view), and more.

The Gap

What it doesn't do, however, is protect the connected data stores against direct access. For example, the DBA still has direct access to the database and the Directory Administrator still has direct access to the directory. They can create new accounts, view information, and change permissions. The system may be able to see when new user accounts are created during its next scheduled run, but that capability isn't always enough. I'll give an example.

One of these LDAPs is not like the other

I purposely shaded the Network Directory so that it stands out from the others. That's because it is different. Since the market for the Network Directory consists almost entirely of just two vendors (Microsoft and Novell) and one has a much larger percentage of the market (Microsoft), I'll just use Microsoft's Active Directory (AD) as the example.

Now, back to the gaps:

  • Scheduling: When provisioning systems connect to AD, the connection and sync processes are often scheduled. And AD has a time lag in replication (usually 15 minutes). So, if the sync is done hourly against a particular DC, the total time that a new account may be in existence on a different DC without being noticed by the provisioning system is a little more than an hour. Can you do damage in an hour? I could create an account, make it a domain admin, log onto servers, change rights, access files, and remove my trail from the logs within an hour.

  • Coverage Scope: The connection may be made to a particular portion of the AD tree. So, if you created an account in a portion of the tree that isn't monitored by the provisioning system, it wouldn't get picked up.

  • Source: Some provisioning systems use AD as the source. So, in that scenario a new account in AD would potentially create accounts and/or rights across multiple other systems. So, by specifying rights or group memberships, an AD administrator could grant himself rights to other connected systems (perhaps in between attestation cycles).

  • Account Type: Provisioning systems generally only look at user accounts based on object type. So, you could create an inetOrgPerson instead of a User object.

  • Activity Scope: Provisioning systems don't even try to monitor failed logon attempts or failed user creates at the local systems. They also don't watch file open activity or file changes. What if the provisioning system pulls a feed from a text file and someone modifies that file? There's no knowledge of activity other than a particular type of account being created.
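The scheduling gap in the first bullet comes down to simple arithmetic: in the worst case, a rogue account can exist for the full replication lag plus the full sync interval before the provisioning system ever sees it. A quick sketch using the figures from the post:

```python
# Worst-case window during which a new AD account is invisible to a
# provisioning system that syncs on a schedule against a single DC.
AD_REPLICATION_LAG_MIN = 15   # typical replication lag cited in the post
SYNC_INTERVAL_MIN = 60        # hourly sync job

# An account created on a *different* DC just after a sync run must
# first replicate to the monitored DC, then wait for the next sync.
worst_case_min = AD_REPLICATION_LAG_MIN + SYNC_INTERVAL_MIN
print(f"worst-case detection window: {worst_case_min} minutes")
```

That 75-minute figure is the "little more than an hour" mentioned above, and it is plenty of time for the create-account, escalate, act, clean-logs sequence described.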

All of these can be applied to other connected data stores as well. For example, scope is an issue for relational database tables. The provisioning system may only watch specific tables or may completely ignore local accounts in the RDBMS itself. Likewise, if AD is not the source, the HR database is likely the source which yields the same issue for the HR DBA.


My point isn't that provisioning systems are weak. They do what they do very well. But you can improve the overall security posture of the environment by including localized protection on the connected data stores as well. Encrypt the database. Monitor DBA activity and Directory Administrator activity. Watch directories for failed attempts to create or modify accounts. Watch for failed authentication attempts. In a nutshell, verify that accounts and permissions are actually being managed through the provisioning system into which you've built the business rules and workflow, so that rights are being managed effectively.

And if you have to respond to auditors for compliance reasons, you can say you're certain that accounts are only being created according to policy, instead of merely hoping that to be the case.

I've heard the argument that this might be overkill (admittedly an over-simplified characterization of the argument). OK. In some scenarios, maybe you don't need tighter security. You only care about workflow efficiency and cost cutting. Or you're OK with the level of improvement in your security posture that traditional user provisioning systems provide. I'm not saying that anyone should ignore the risk analysis process. But, if compliance is an issue and you want to prove compliance beyond reasonable doubt or just simplify the audit process, solutions that locally monitor the connected systems may provide value.

And if you can demonstrate that 100% of your user and rights management processes are funneled through the provisioning system with appropriate workflows, I think you could justify claiming a much improved ROI on the overall solution with minimal additional investment.

Disclaimer? Yes, NetVision can help with reporting and monitoring on your Network Directories (both major vendors) and related file systems. But that's no reason for me not to talk about it!

Breaking Out of Windows RemoteApps [RioSec]

Posted: 08 May 2008 12:44 PM CDT

Microsoft has included a new feature in Windows Server 2008 to allow sharing individual applications through Terminal Services.  This is not a new concept - Citrix has been offering something similar for a long time.  They also are now offering a Terminal Services Gateway and TS Web Gateway for accessing Terminal Services, and RemoteApps, from the Internet.  What isn't well known, but also isn't new, is the ability to 'break out' of these applications and access other applications and files on the Terminal Server.   It is very easy to break out of GUI apps even for non-technical people.  Below I will highlight a few examples of running other applications from a RemoteApp, and later I will follow with a number of configuration suggestions for securing your server. 


Mailings from - continued [mxlab - all about anti virus and anti spam]

Posted: 08 May 2008 11:23 AM CDT

Remember our posting about FashionShopping? Well, we see in the logs a change of behaviour in their mailings.

Last time it was a lot of trouble getting off that list, and they sent their mailings far too often. To give you an idea: we intercepted emails from FashionShopping on a daily basis from April until yesterday, May 8th. 80% of the emails were sent to the same recipients, meaning that you could have received their mailing daily for more than two weeks if MX Lab hadn't blocked them.

These guys now send from a new domain. As always, a quick visit to the site gives us an ‘under construction’ web site.

The first paragraph under the many images seems to have an unsubscribe link (translated from French): “You have been invited, but in accordance with the law on trust in the digital economy, if you no longer wish to receive offers by email from Emailing-Direct on behalf of this sender, please click the following link: Désinscription [Unsubscribe]”.

However, this link gives us the error “Unsubscribe links do not work inside a preview message. In order to test unsubscribe links you will need to do a campaign” on the website. A visit to the root of that site gives us a login to a control panel for Expedite Simplicity, email & mobile marketing software.

The second paragraph: “We support responsible and ethical email marketing practices. Please know that we respect your right to be purged from this marketing campaign. Removal from this email distribution list is automatically enforced by our email delivery system. Please click here to start the process for email deletion”.

This will lead us to *************.aspx and yes, here we have an unsubscribe form. You can even contact them at “Emailing Direct - 66, Avenue Des Champs Elysées - Paris 75008 FR”. So, they have moved from mailing house EmailVision to this company. Did EmailVision receive too many complaints?

A WHOIS search on Netsol gives us some results. The domain is registered to Emailing Direct in Paris, France. The registrant contact email address is listed. When visiting their site we get a nice web site.

A short contact with this ‘company’ teaches us that you can get mailings for a minimum fee of €500. This includes sending out 500,000 emails at €0.001 per message in a fully managed campaign.

It is clear that pushing your email based campaigns to the limit isn’t always a good thing. Some general tips when you are into email marketing:

  • send from an authorised source; don’t spoof your From address
  • use your own address (no Gmail or others, please) so that subscribers can view the source and contact you directly
  • make sure that your message has unsubscribe links that work and that remove an email address from the list immediately
  • send your campaign weekly, monthly, etc., which is much more accepted by your audience
  • create your content with care (do not only include images; combine them with text)
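The advice about working unsubscribe links has a mechanical counterpart: alongside the in-body link, well-behaved campaign mail can carry the standard List-Unsubscribe header (RFC 2369), which many mail clients surface directly. A minimal sketch using Python's standard library; all addresses and URLs here are placeholders, not real endpoints:

```python
from email.message import EmailMessage

# Sketch of the headers a well-behaved campaign mail might carry:
# a real From address at your own domain (not spoofed, not Gmail)
# and a List-Unsubscribe header in addition to the in-body link.
msg = EmailMessage()
msg["From"] = "newsletter@example.com"      # your own domain
msg["To"] = "subscriber@example.org"
msg["Subject"] = "Monthly newsletter"
msg["List-Unsubscribe"] = (
    "<https://example.com/unsubscribe?id=123>, "
    "<mailto:unsubscribe@example.com>"
)
msg.set_content(
    "This month's news...\n\n"
    "To unsubscribe immediately, visit https://example.com/unsubscribe?id=123"
)
print(msg["List-Unsubscribe"])
```

Offering both an HTTPS and a mailto unsubscribe target gives recipients a one-click path out, which is exactly the opposite of the broken preview-link mess described above.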

Comments on Core Security’s Wonderware advisory [.:Computer Defense:.]

Posted: 08 May 2008 09:04 AM CDT

There were a couple of random things that I wanted to comment on.

The first was a post by Dave Lewis of Liquidmatrix. The post in question is a discussion of a Wonderware advisory released by Core Security and the level of detail that they provided. Dave doesn't agree with the level of detail provided... as they had details on how to exploit the vulnerability and even showed the assembly from the vulnerable function. He also comments that this isn't responsible disclosure. I'm <sarcasm>really glad to see this debate is coming up again</sarcasm>... but really where's the lack of responsible disclosure? Core reported the vulnerability to the vendor (repeatedly) and went out of their way to ensure the vendor was aware, this is more than a lot of people / companies do. They then continually pushed their advisory release date to accommodate the company. These details are being released after the patch as well.

There's absolutely nothing wrong with this... it's really no different from the level of detail provided by other security vendors that release advisories. Once the patch is out there isn't much to stop malicious individuals from obtaining the assembly to the vulnerable function... a copy of IDA Pro and BinDiff is really all they need. Outside of the assembly... the level of detail provided is really the same as most other security vendors that release advisories. I've seen them include some sort of binary analysis in the past... and most of them contain a text write-up... here's an example with enough text to more than locate the vulnerability from TippingPoint / ZDI:

The specific flaw exists in the oninit.exe process that listens by default on TCP port 1526. During authentication, the process does not validate the length of the supplied user password. An attacker can provide an overly long password and overflow a stack-based buffer, resulting in arbitrary code execution.

Part of the problem with the InfoSec battle is that the bad guys have essentially unlimited time, whereas IS employees have families and lives and work a set schedule. The Core advisory has set internal security teams on their way to developing their own exploits should they need to; without it they'd have had a lot more work to do and it would have taken them more time. Core did everything short of releasing the related Python, and you can't really blame them, since then they'd be giving away their product for free. In the end, what they did was, in my opinion, beneficial to all.

It's one thing to simply release details, but as soon as someone works with the vendor you can't really cry foul when they publish the details. At least not on the 'responsible disclosure' front, because they've followed responsible disclosure and in this case Core Security hasn't done anything different than a number of other vendors. Microsoft Tuesday is coming up; watch the mailing lists, and each vendor that has reported a vuln usually sends out some sort of advisory, ranging from brief overviews to full binary analysis and specific details on exploiting the vulnerability. We've seen it before and we'll see it again... but the patch is out, so they aren't helping the malicious individuals, just the good guys who have time constraints.

Flickr is accessible again in China! [Telecom,Security & P2P]

Posted: 08 May 2008 07:27 AM CDT

This evening I was surprised to find that Flickr is accessible again in China. I had almost forgotten when it was blocked. Now it is back!

Click here to check my flickr album.


PHP Updates to 5.2.6 [Sunnet Beskerming Security Advisories]

Posted: 08 May 2008 06:45 AM CDT

The PHP Group released version 5.2.6 of the popular scripting language earlier this month. While more than 100 bugs were fixed with this update, several critical security vulnerabilities were patched that make updating essential for any administrators or users currently on the 5.x branch of PHP (if you're still stuck on 4.x or earlier, you should really consider updating your installation).

Several memory leaks, buffer overflows, safe mode bypasses, and multi-byte character handling flaws are amongst the issues addressed by this update, the first to be released by the PHP Group in six months. Although there are probably many more security vulnerabilities yet to be found or patched (just see Stefan Esser's work, which has been somewhat quiet since the end of last year), the significant number of bugs patched is a continuing good sign from a project that has come under fire in the past for a mixed approach to the security of its main product.
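If you're unsure whether a given install predates the fixed release, a simple version comparison tells you. Here's a minimal sketch in Python; the parsing assumes the usual `php -v` banner format ("PHP x.y.z ..."), and the sample banner string is made up:

```python
import re

FIXED = (5, 2, 6)  # first 5.x release containing these security fixes

def parse_php_version(version_output: str):
    """Extract (major, minor, patch) from the `php -v` banner text."""
    m = re.search(r"PHP (\d+)\.(\d+)\.(\d+)", version_output)
    if not m:
        raise ValueError("could not find a PHP version string")
    return tuple(int(x) for x in m.groups())

def needs_update(version) -> bool:
    # Anything older than 5.2.6 should be updated; 4.x and earlier are
    # past end-of-life and should be migrated regardless.
    return version < FIXED

# e.g. feed in the banner from `php -v`:
print(needs_update(parse_php_version("PHP 5.2.5 (cli) (built: Nov 8 2007)")))  # -> True
```

Tuple comparison handles the major/minor/patch ordering for free, which is why the version is kept as a tuple of ints rather than a string.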

8 Dirty Secrets of The Security Industry [Amrit Williams Blog]

Posted: 08 May 2008 01:12 AM CDT

Joshua Corman from IBM ISS gave a great presentation at Interop, “Unsafe at Any Speed: The 7 Dirty Secrets of the Security Industry,” which has been receiving strong media coverage (here) and (here)… my favorite reference is from Alexander Wolfe at Information Week (here)

An IBM security expert ripped the scab off the dirty little secrets of the security industry in a highly entertaining presentation Wednesday at Interop. Joshua Corman, principal security analyst at IBM Internet Security Systems, highlighted the gaping divide between what customers think they’re buying (safety) versus what security vendors are most intent on selling (stuff that’ll bring in the bucks). Here, in condensed form, is his list.

#0 Vendors do not need to be ahead of the threat, they only need to be ahead of the buyer

The goal of the security industry is not to secure; the goal of the security industry is to make money. I think we all know this conceptually, and even with the best intentions in our capitalistic society we must understand that security companies are motivated by profits. This isn’t necessarily a bad thing, but it should help to dispel the myth that security companies are smarter than hackers. They aren’t; they are just smarter than the buyers.

#1 AV certifications do not test/require trojans

AV certifications are BS; they are essentially the AV industry’s equivalent of duck, duck, goose, as vendors move up and down the rankings from one test to another - who gives a crap if in test 1 AV vendor A detected 98.4%, AV vendor B detected 95.7%, and AV vendor C detected 97.6%, and then in test 2 it all changes, especially when they are not testing the ability to detect the really nasty, stealthy, sophisticated non-replicating malcode that iz in yur bits stealin yor bytes.

BTW - Kurt Wismer is pretty passionate when it comes to Anti-Virus; he is like the Guardian Angel of anyone who would dare to speak ill of the poor, defenseless AV companies. He recently posted on why the AV vendors were NOT falling behind, using a “dog shit” analogy (here) - classy, professional, and uncomfortably hilarious. Honestly I am not sure what the hell he is talking about, but I am sure he will post his thoughts in triplicate soon :-)

#2 There is no perimeter

The endpoint is the perimeter, the user is the perimeter, the business process is the perimeter, the data is the perimeter - the perimeter is not the perimeter. Those who decry securing the endpoint and espouse the virtues of network security obviously do not care about protecting the ever-increasing number of intermittently connected, remote computing devices that move in and out of the corporate network like a transient looking for a warm underpass to sleep in for the night, all the while bypassing perimeter and network security.

So why should we care if we do a really good job of protecting critical assets with the latest network security thingie? Well, ask yourself if confidential data ever makes its way onto mobile devices, smart phones, handhelds and laptops - no, you say - really? Nothing confidential in email? Bob in accounting doesn’t ever download proprietary information to work with over the weekend? Your engineers only access source code from the security of a Ninja-proofed, tempest-shielded, lead-walled closet surrounded by an army of M16-wielding bodyguards?

#3 Risk management threatens vendors

Risk management forces an organization to focus, to move towards policy-driven proactive security and away from reactive, ad-hoc security models that drive knee-jerk security buying. Security vendors love knee-jerk security buying (see dirty secret #0).

#4 There is more to risk than weak software

There is a myth in information security that if all software were secure we would eliminate threats - this would be true only if computers were never turned on, never connected to the internet, and never used by people, which isn’t really what they were designed for. We all know there is no patch for human stupidity, and social engineering is one of the easiest ways to infect a box, so the cycle of vulnerability disclosure -> scan -> patch -> rinse and repeat keeps us locked into a never-ending hamster wheel of misaligned goals and mismatched expectations.

#5 Compliance threatens security

When I was with Gartner we would publish a Cyber Threats Hype Cycle, and for many years we placed Regulatory Distraction on it as a threat to enterprise security. The thinking was that being compliant doesn’t equal improving security, whereas implementing strong security measures would generally make one compliant. Although we have made strides in defining more prescriptive compliance initiatives, many organizations work to pass an audit as opposed to implementing controls that actually benefit the organization’s security program.

#6 Vendor blind spots allowed Storm

Storm eats AV for breakfast, it doesn’t need vulnerabilities, it leverages outstanding social engineering, it is self-defending and resilient…

  • Warning: Pregnant women, the elderly and children under 10 should avoid prolonged exposure to Storm.
  • Caution: Storm may suddenly accelerate to dangerous speeds.
  • Storm contains a liquid core, which, if exposed due to rupture, should not be touched, inhaled, or looked at.
  • Do not use Storm on concrete.
  • Do not taunt Storm

Microsoft did not kill the Storm worm (here); it is still out there lurking in the shadows like a malicious interloper (here), waiting to ridicule your inadequate reactive security measures and laugh at your inability to remove it from the internets.

#7 Security has grown well past “Do it yourself”

The days of dropping in a box and flipping a switch are long gone. We are in an era where the combination of people, process, and technology must be coordinated and well planned, or you risk not only a failed deployment but the loss of business, or worse.

[Chinese] A Metrics and Assessment System for Network Information Security (4) - Reading Andy's "Security Metrics" [Telecom,Security & P2P]

Posted: 07 May 2008 10:09 PM CDT


Andy's (Andrew Jaquith's) book "Security Metrics" is a good one. A few weeks ago I found an English PDF online, and a few days ago I bought the Chinese edition (Publishing House of Electronics Industry, December 2007). I have been in Shenzhen on business these past two days and read the book through on the road. Below are some brief, point-by-point comments.

1. Andy is a very experienced security consultant and enterprise manager, and his writing and insights are engaging. The book shares his industry experience, professional consulting expertise, presentation skills, and management experience very well. Frankly, though, the Chinese translation is not good! Both the technical terminology and the phrasing fall short. It is, unavoidably, a pity.

2. The book is better suited to readers who have worked in information security for quite some time and who have substantial technical knowledge and operational management experience, as well as a solid grounding in ISO 17799/27000, CobiT, ITIL and the like. Such readers will understand and absorb Andy's points much better. Put another way, only fairly senior consultants and managers really get the chance to apply this material; generally speaking, entry-level consultants and ordinary engineers rarely participate in developing these security metrics.

3. In "Security Metrics", Andy questions metrics systems based on ISO 17799 as well as the concept of ALE. He argues that ISO 17799 tends to look at the problem from the auditor's perspective rather than the operator's, so it mostly describes the required outputs while saying little, or nothing, about how to achieve them. Although this is on the roadmap for ISO 27001's follow-on development, Andy still believes managers should build a metrics system suited to their own organization rather than lifting ISO 17799/27001 directly. He also mocks the cycle model many security consultancies use - assess, report, prioritize, remediate, assess - arguing that it can only force security managers into perpetual, reactive self-negation. (This view even indirectly pokes at PDCA.)

4. The book attempts to build a metrics system different from ISO 17799. Andy divides the important areas of security operations management into four sub-domains: perimeter defense, coverage and control, availability and reliability, and application security. As Andy himself notes, this is based more on experience than on any model, so it is not easy to find direct relationships among these four sub-domains, or any proof of their completeness. By comparison, the four sub-domains I used previously were: protect the perimeter, protect the desktop, protect core assets (servers and applications), and compliance and audit.

5. The book borrows CobiT's four processes - PO, AI, DS, ME - to build metrics for the security program (or lifecycle).

6. Chapter 6, on visualization, is a good reference for consultants whether they are just starting out or already quite senior, and a very good reference for many security operations engineers and managers as well. I have seen plenty of reports and presentation decks from consultants and engineers so full of gratuitous color, lines, and images that the important "information" was completely drowned out and the most important "logic" disappeared. In many cases the underlying work was actually decent, which makes the poor result all the more regrettable!

Many technical "experts" may think that "content is what matters" and "presentation" is mere form, so they neither value it nor want to invest effort in presentation and reporting. Not so. Bear in mind that many IT executives are not technical experts; the higher up they are, the less they typically know about the technical details, and the less time they can or will spend on them. Their most important job is understanding the business logic and the overall technology trends. IT spans an enormous amount of technology, and the share of attention that falls to security will only shrink. So the challenge is to present clearer, more essential logic in less time. This is something I have come to appreciate deeply since joining Lenovo - when building executive-level technical reports, you must be very clear about who your reader is and what the report is for (decision oriented, or just information sharing). On this front, I am glad that both I and my team have improved quickly.

I appreciate Andy's view that a good "presentation" is lean, trim and elegant - plain, intuitive, and focused on the underlying data and logic. Andy gives several examples in the book to illustrate this. Readers of typical McKinsey reports will recognize the point. I strongly recommend that readers of the book, and of this blog, think about and practice this. And if you are already an expert at it, please share your experience with everyone too.






links for 2008-05-08 [Raffy's Computer Security Blog]

Posted: 07 May 2008 09:34 PM CDT

Down Under: Where Security Is SO Last Tuesday... [Rational Survivability]

Posted: 07 May 2008 09:01 PM CDT

I read this article from Network World (Australia) where the author relayed the musings of C-levels from Australia and New Zealand by titling his story thusly: "If only reducing costs was as easy as security, say CIOs"

It seems that based upon a recent study, IDC has declared that "...conquering IT security is a breeze for CIOs."

I'm proud of my Kiwi lineage, but I had no idea my peeps were so ahead of the curve when it comes to enlightened advancements in IT security governance.  They must all deploy GRC suites and UTM or something? 

Anton, there must be something in the logs down there!

As per that famous line in "When Harry Met Sally," I respond with "I'll have what [s]he's having..." 

Check this out:

The IDC Annual Forecast for Management report surveyed 363 IT executives from Australia (254 respondents) and New Zealand (109 respondents) across industries including finance, distribution, leisure and the public sector.

Information security was rated last place in the Top 10 challenges for CIOs.

Threats targeting the application layer were cited as the biggest concern (36%), while spyware (16%) was rated as a bigger threat than disgruntled employees, remote access, and mobile devices.

The CIOs top priority for the next 12 months was reducing costs and addressing a lack of resources. This was followed by meeting user expectations and developing effective business cases.

The top four IT investments for the next year will be in collaborative technologies and knowledge management; systems infrastructure; back office applications; and business intelligence.

I'm no analyst, but allow me to suggest that just because security is not the top priority or "challenge" does NOT mean they have the problem licked.   It simply means it's not a priority!

Perhaps it's that these CIOs recognize that they've been spending their budgets on things that aren't making a difference and should instead be focusing on elements that positively impact corporate sustainability and survivability as an ongoing concern?

The most hysterical thing about this article -- besides the re-cockulous premise they overly-hyped and the (likely) incorrect interpretation of results the title suggests -- is that on the same page as this article which suggests the security problem is licked, we see this little blurb for a NWW podcast:


So, there we have it.  A direct tie.  Security is solved and failing, all at the same time!



Of Course Defense-In-Depth, er, Defense-In-Breadth Works! [Rational Survivability]

Posted: 07 May 2008 08:41 PM CDT

I don't know what the hell Ptacek and crew are on about.  Of course defense-in-depth, er, defense-in-breadth is effective.  It's heresy to suggest otherwise.  Myopic, short-sighted, and heretical, I say!

In support, I submit into evidence People's Exhibit #1, from here your honor:


...and I quoteth:

We use layers of security to ensure the security of the traveling public and the Nation's transportation system.

Each one of these layers alone is capable of stopping a terrorist attack. In combination their security value is multiplied, creating a much stronger, formidable system.  A terrorist who has to overcome multiple security layers in order to carry out an attack is more likely to be pre-empted, deterred, or to fail during the attempt.

Yeah!  Get some! It's just like firewalls, IPS, and AV, bitches!  Mo' is betta!

It's patently clear that Ptacek simply doesn't layer enough, is all.  See, Rothman, you don't need to give up!

"Twenty is the number and the number shall be twenty!"

How's that for a metric?

That is all.


7 Dirty Secrets… a Rebuttal [Trey Ford - Security Spin Control]

Posted: 07 May 2008 08:31 PM CDT

Steve Ragan with the Tech Herald has posted a response to Joshua Corman’s “7 Dirty Secrets of the Security Industry” presentation, and I got quoted! <my submitted response> Overall, Tim Greene’s brief of Joshua Corman’s presentation does a solid job of discussing the very real need for “a healthy level of skepticism about what security vendors” communicate. The [...]

OWASP - VA Local Chapter Infosec Meetup Event - Thursday, 5/8: The New Face of CyberCrime / Integrating Security into QA []

Posted: 07 May 2008 03:38 PM CDT

Here is some information regarding this week’s Thursday OWASP - VA Local Chapter infosec meetup event. (more…)

Virtualizing Security Will NOT Save You Money; It Will Cost You More [Rational Survivability]

Posted: 07 May 2008 02:23 PM CDT

In my post titled "The Four Horsemen Of the Virtualization Apocalypse" I brought to light what I think are some nasty performance, resilience, configuration and capacity planning issues related to operationalizing virtualized security, in the context of security solutions delivered as virtual appliances/VMs in hosts.

This point was really intended to be discussed outside of the context of virtualizing security in physical switches, and I'll get to that point and what it means in relation to this topic in a later post.

I wanted to reiterate the point I made when describing the fourth horseman, Famine, summarized by what I called "Spinning VM straw into budgetary gold:"

By this point you probably recognize that you're going to be deploying the same old security  software/agents to each VM and then adding at least one VA to each physical host, and probably more.  Also, you're likely not going to do away with the hardware-based versions of these appliances on the physical networks.

That also means you're going to be adding additional monitoring points on the network and who is going to do that?  The network team?  The security team?  The, gulp, virtual server admin team?

What does this mean?  With all this consolidation, you're going to end up spending MORE on security in a virtualized world instead of less.

This is a really important issue because over the last few weeks, I've seen more and more discussions surrounding virtualization TCO and ROI calculations, but most simply do not take these points into consideration.

We talk about virtualization providing cooling, power and administrative cost-avoidance and savings.  We hear about operational efficiencies, improved service levels and agility, increased resource utilization and reduced carbon footprint. 

That's great, but with all this virtualized and converged functionality now "simplified" into a tab or two in the management console of your favorite virtualization platform provider, the complexity and operational issues related to security have just faded into the background and been thought of as having been absorbed or abstracted away.

I suppose that might point to why many simply think that security ought to be nothing more than a drop-down menu and checkbox because in most virtualization platforms, it is!

When thinking about this, I rationalized the experience and data points against my concern related to security's impact on performance, scale, and resiliency to arrive at what I think explains this behavior:

Most of the virtualization implementations today, regardless of whether they are client, server, production/QA or otherwise, are still internally-facing and internally-located.  There are not, based upon my briefings and research, a lot of externally-facing "classically DMZ'd" virtualized production instances.

This means that given the general lack of segmentation of internal networks (from both a networking and a security perspective), few network security controls are in place.

Following that logic, one can then assume that short of the existing host-based controls which are put in place with every non-virtualized server install, most people continue this operational practice in their virtualized infrastructure; what they did yesterday is what they do today. 

Couple that with the lack of compelling security technologies available for deployment in virtual hosts, and most people have yet to start implementing multiple security virtual appliances on the same host.

Why would people worry about this now?  It's not really a problem for them yet.

When we start to see folks ramp up virtual host-based security solutions to protect against intra-vm threats and vulnerabilities (whether internally or externally-facing) as well as to prevent jail-breaking and leapfrog attacks against the underlying hypervisors, we'll start to see these problems bubble to the surface.

What are your thoughts?  Are you thinking about these issues as you plan your virtualization roll-outs?

