Tuesday, July 8, 2008

Spliced feed for Security Bloggers Network

The Catalyst onTour: Soon We’ll Be Making Another Run [The Security Catalyst]

Posted: 08 Jul 2008 07:18 AM CDT

(after reading that title, are you singing the theme to Love Boat yet? If you are, and you miss the program go watch a full episode now: http://www.cbs.com/classics/the_love_boat/)

The book is being printed (finally!). The preview copies are being mailed out. And we have been in the same spot for a few weeks now. It is time to load up the coach and head back out on the roads! Our “ship” is not as big as the Love Boat, but the adventures never cease, and we’re ready for the next one.

What was initially conceived to be the "Campaign Across America" has evolved into the more appropriate "Catalyst onTour." We have the "tour bus" and a desire to see as much of the country as we can. Unlike a rock band going on tour, we have more of a grass-roots approach and a powerful message: each of us makes a difference when it comes to protecting our information, our identities, our children. As the tour rolls on, we seek to bring that message of optimism and support door-to-door. Seriously.

To better explain the Catalyst onTour concept, approach and benefits to businesses, families and even potential sponsors, we are in the process of setting up the Catalyst onTour website (hopefully before we leave again in July; it’s next after we update the book website). Minimally, this site will allow you to keep in touch and join (if only virtually) our efforts through writing, pictures, audio and video – and ask questions, make suggestions and otherwise get involved and make a difference!

The July/August Route
RVs are fluid. So the final route is a bit up for negotiation right now (and quite frankly, if you're on the way and would like to work with me, you can easily influence the route). We expect to leave near the end of July and may actually start with a brief stop in Hershey, PA (home of Hershey Chocolate and Hershey Park). Then we're heading toward Las Vegas. After our stop in Arizona, we may head up the West Coast into California, or we may head back East across Texas, into Tennessee, Georgia and then back up North to New York. Then again, anything can and does work when in an RV (try doing that in a plane!).

CoT July Route Out

Tour Leg Anchor Events
This tour leg is currently being anchored by two events with fixed dates:

  • Black Hat in Las Vegas for some semi-private events: August 4-7
  • Sierra Vista, AZ (private event) week of August 11 - 15

Potential Cities and Stops Along the Way
While we have traveled the length of Route 80 before (though not on this trip), this will be an exciting opportunity to see some new cities (and welcome the family to some new States). Potential stops include:

  • Des Moines, IA
  • Omaha, NE
  • Denver, CO
  • Phoenix, AZ

On the way home, we have a lot of options - so if you are somewhere between Arizona and Upstate NY - let us know and we will try to work something out. We are currently planning to circle back to Upstate NY during the first week of September. This gives us a few weeks home before setting out on a series of speaking engagements and client working sessions, a potential trip to Orlando and whatever else influences some onTour segments.
In the meantime, if you want to get an advance copy of the book, learn more about how the tour can help you meet your goals (for example, awareness), raise your profile or even energize your team before the fall… give us a call (800.996.8351) or send me an email (securitycatalyst /shift-2/ gmail.com).


NIST: Two New Publications [varie // eventuali // sicurezza informatica]

Posted: 08 Jul 2008 06:21 AM CDT

NIST has just released two publications, and both are very interesting.

The first, "Guide to SSL VPNs", covers the technologies underlying SSL VPNs, an area that keeps gaining popularity over "traditional" IPsec VPNs, then moves on to deployment scenarios and usage recommendations. I obviously haven't had time to read the publication yet, but at first glance it looks very well done (as is NIST tradition).

link - NIST: Guide to SSL VPNs (pdf)

The second publication is still a draft, and can therefore be commented on before it is finalized and published. Its topic could hardly be more timely either: "Guidelines on Cell Phone and PDA Security". With the ever-growing use of mobile devices, PDAs and smartphones, which are expanding their role from a simple calendar or phone into multifunction devices integrating calendar, email, contacts, telephony and often access to corporate resources, it becomes imperative to ask how secure these devices really are and what guidelines should be applied to use them safely. A publication like this is therefore most welcome!

link - NIST: Guidelines on Cell Phone and PDA Security (Draft, pdf)

Provisioning: Security’s First Step to Measuring Organizational Impact [BlogInfoSec.com]

Posted: 08 Jul 2008 06:00 AM CDT

Security is often accused, occasionally with merit, of being an obstacle to an organization's business. While the drumbeat of cyber threats has at least raised the technology risk consciousness of many business managers, security professionals still have the challenge of quantifying how big an insurance policy makes sense for their organization. We will spend some time in a future article exploring effective security metrics, but one place where security can often measure both its impact and its benefit is in the provisioning process.

Several years ago, while working in financial services, we were under strict internal and regulatory pressure to ensure segregation of duties and least-privilege access for all associates who had exposure to investment data (about 4000 people). Unfortunately, the manual processes then in place required significant administrative overhead not only from the access administration team but, more distressingly from management's perspective, from senior staff who were constantly barraged with access approval requests from a global user community. Needless to say, these manual processes were as ineffective as they were burdensome, as an almost constant stream of audit findings indicated.

As with many organizations, both the overhead and ineffectiveness of the access approval process became accepted enterprise costs and there was no organizational mandate to address the challenges strategically. However, one tactical approach after another failed to provide any lasting solution, and served only to increase stress on access administrators and approvers alike.

Security's requests to initiate a strategic solution fell on deaf ears until we were able to use some previous lessons learned to make our case financially. While working a few years earlier in the corporate security function, we had sought to quantify the cost, in terms of lost productivity, of provisioning delays caused by not having a single user identifier and central identity store. While our methodology was pretty raw and

(...)
Read the rest of Provisioning: Security’s First Step to Measuring Organizational Impact (288 words)


© Patrick Foley for BlogInfoSec.com, 2008. | Permalink | No comment

Blizzard offers two-factor authentication, why doesn't your bank? [spylogic.net]

Posted: 08 Jul 2008 06:00 AM CDT

Lots of buzz on the net about Blizzard (creators of World of Warcraft) offering a $6.50 two-factor authentication token for customers who want an extra layer of protection for their accounts. Yes, in case you didn't know, account theft in WoW is on the rise! I commend Blizzard for taking this extra step to help protect their customers. Sure, two-factor authentication isn't perfect, but regardless it's a step in the right direction.

So why don't more banks and financial institutions set this up for their customers? PayPal was able to do it right (not perfectly, but close). It comes down to customer support and cost. One of the many ways a bank or financial institution makes money is by offering products that are user friendly and can be used by just about anyone. For someone with some technical skill, using a two-factor authentication token is a cakewalk. Unfortunately, the average bank user (think of your mom, or the person in your family with the least technical skill... yes, the one who calls you to fix their computer...) will most likely be confused about how to use the device. That means a call to the bank's customer support center (calls cost $$), and let's not forget the back-end infrastructure (servers and IT staff cost $$) and all the additional red tape the institution faces in advertising it and putting a friendly spin on it for customers.

Martin McKeay and Michael Santarcangelo on the Network Security Podcast (Episode 110) had some good discussion about this. In a nut shell the conversation was about how banks offer many different easy to use services and tying a two-factor solution to all of these products is just not worth the cost, time and effort (except for high wealth customers). Also, what happens when you have multiple accounts at multiple banks? Do you carry around multiple tokens? My opinion? Until there is something easier to use and more secure, I don't see most banks or financial institutions going two-factor anytime soon.
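For readers curious about what's actually inside one of these tokens: most of them implement an HMAC-based one-time password scheme along the lines of RFC 4226 (HOTP). Here's a minimal sketch; the secret is just the RFC's published test value, not anything Blizzard or PayPal actually uses:

```python
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a one-time code from a shared secret and a moving counter (RFC 4226)."""
    # The counter is encoded as an 8-byte big-endian value and HMAC'd with the secret.
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # "Dynamic truncation": the low nibble of the last byte picks a 4-byte window.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0
print(hotp(b"12345678901234567890", 0))  # -> "755224"
```

The token and the bank's server share the secret and keep their counters in sync, so a code is only useful once. That synchronization (plus the provisioning of secrets at scale) is exactly the back-end cost discussed above.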

funky javascript [extern blog SensePost;]

Posted: 08 Jul 2008 03:48 AM CDT

found this online last night. try in FF or IE7:

javascript:document.body.contentEditable='true'; document.designMode='on'; void 0

then edit the page in-place, screenshot, and make your scam millions...

at least, it beats editing HTML?

Maryland Breach Notices [Emergent Chaos]

Posted: 08 Jul 2008 02:25 AM CDT

Case Number: 153504
Date Received: 06/09/08
Business Name: Argosy University
Information breached: name, social security number, addresses
How breach occurred: Laptop computer stolen from employee of SunGard Higher Education
Maryland Information Security Breach Notices are put online by the forward-looking attorney general, Douglas F. Gansler.

I'm glad that they list case IDs on there. Between Attrition.org, the Identity Theft Resource Center, the Privacy Rights Clearinghouse, Adam Dodge, Chris Walsh, and probably others I'm forgetting, we're getting to the point where it's chaos out there. We need a 'CBE' just to help us all cross-correlate.

Via "I've Been Mugged."

VeriSign extends free reissuance policy [Tim Callan's SSL Blog]

Posted: 07 Jul 2008 04:51 PM CDT

In recent weeks I've spoken a lot about the Debian flaw that enables the creation of weak SSL keys. One thing you may be aware of is that VeriSign suspended charging for replacement of SSL Certificates through the end of June to facilitate the replacement of these certificates.


I'm happy to state that due to the strongly positive reactions toward this policy that we've received from our customers, VeriSign has extended this free replacement offer through the end of July. While we've seen good progress in the replacement of weak Debian certs, with over a million active certificates to look at, you can imagine it requires a little time to make our way through the whole bunch of them. So to facilitate the continued replacement of the existing weak certificates, we're keeping free reissuance alive for a while longer.

Minimizing the Attack Surface, Part 2 [Zero in a bit]

Posted: 07 Jul 2008 04:10 PM CDT

I’m finally getting around to finishing my post on minimizing attack surfaces. Here’s Part 1, in case you missed it.

First, a quick clarification. I noticed that some of the readers who commented on that first post wanted to talk about improving security through the use of various development methodologies or coding frameworks. Those are interesting tangents (and ones that I may write about in the future), but my intention with this post is to discuss a very specific problem related to how people integrate third-party code — that is, the stuff you import or link in but didn’t write yourself.

As I mentioned previously, developers have a tendency to “bolt on” third-party components to applications without understanding the security implications. Often, these components are glossed over or ignored completely during threat modeling discussions. I attempted to illustrate this with my fictitious WhizBang library example in Part 1.

When integrating a third-party component, developers familiarize themselves with the API but generally don’t care how it’s implemented. Granted, that’s how an API is supposed to work; you don’t have to futz around with code beyond the API boundary, and you can blissfully ignore parts of the library that you don’t need. In past consulting gigs, I’ve sat in threat modeling discussions where nobody knew whether a particular library generated network traffic. “We just use the API,” they say. The fact that it works is good enough; nobody seems to care how it works.

That mindset is ideal for rapid development but problematic for security. Failing to understand the complete application, as opposed to just the part you wrote, prevents you from accurately assessing its security posture.

It’s also no coincidence that web app pen testers love third-party components — we get excited when we see “bolted on” interfaces, because we know that developers tend to leave extraneous functionality exposed. The resulting findings usually generate reactions such as “I didn’t even know that servlet had an upload function.”

An Example

Here’s a close-to-home example related to my post about DWR 2.0.5 from the other day. DWR is an Ajax framework that has a variety of operating modes. In-house, we use a subset of DWR’s full functionality — specifically, we interact with it using the “plaincall” method only, so we made sure that the features we didn’t need were disabled via the configuration file. As it turned out, there were vulnerable code paths prior to the “do you have this thing disabled” check. In hindsight, if we had taken more time to understand the exposed interfaces, we could have reduced the attack surface by filtering out unneeded request patterns before they even touched the third-party code.

But wait, you say. What about maintainability? If I whitelist using a point-in-time application profile, doesn’t this create the same maintenance headache as the reviled WAF? It doesn’t have to. Certainly, one option would be to whitelist each and every unique URL that references the DWR framework, e.g.

/dwr/call/plaincall/myMethod1
/dwr/call/plaincall/myMethod2
/dwr/call/plaincall/myMethod3

But then you’d have to update the whitelist every time you added or removed functionality from your application. Also, don’t lose sight of the security goal, which is to minimize the amount of exposed third-party code. If I add or remove URLs from that list, provided they are still using the “plaincall” method, I’m hitting the same DWR dispatcher every time. So I’ve increased maintenance cost without any security benefit.

A better option is to simply tighten the URL pattern a bit in the J2EE container. Here’s the default configuration:

<servlet-mapping>
  <servlet-name>dwr-invoker</servlet-name>
  <url-pattern>/dwr/*</url-pattern>
</servlet-mapping>

Now, instead of allowing every URL starting with /dwr/ to be processed by the DWR library, you could be a little more restrictive:

<servlet-mapping>
  <servlet-name>dwr-invoker</servlet-name>
  <url-pattern>/dwr/call/plaincall/*</url-pattern>
</servlet-mapping>

In this configuration, you don’t have to worry about /dwr/call/someothercodepath any more. There is less third-party code exposed, thereby reducing the overall attack surface of the application. (NB: DWR also serves up a couple of JavaScript files, so those URL patterns will have to be whitelisted too.)
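To sketch what that NB might look like (the script paths here are assumptions based on common DWR deployments, so verify them against your own application): the servlet specification allows multiple servlet-mapping entries for the same servlet, so you can add narrow patterns for the scripts alongside the tightened call pattern:

```xml
<!-- Tightened call pattern plus explicit mappings for DWR's scripts.
     Paths below are illustrative; check which script URLs your pages reference. -->
<servlet-mapping>
  <servlet-name>dwr-invoker</servlet-name>
  <url-pattern>/dwr/call/plaincall/*</url-pattern>
</servlet-mapping>
<servlet-mapping>
  <servlet-name>dwr-invoker</servlet-name>
  <url-pattern>/dwr/engine.js</url-pattern>
</servlet-mapping>
<servlet-mapping>
  <servlet-name>dwr-invoker</servlet-name>
  <url-pattern>/dwr/interface/*</url-pattern>
</servlet-mapping>
```

The same principle applies: exact or narrowly scoped patterns expose only the code paths you actually use.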

A Logical Extension

Even if you’re not a developer, you should still be thinking about attack surfaces. People download and install blogging platforms such as WordPress, Movable Type, etc. all the time, but how many take additional steps to harden their installations? The concept is the same as the OS hardening analogy I brought up at the very beginning of this discussion.

Similarly, people install third-party WordPress plugins or Joomla components without considering that most of them are written by some random programmer who is a whiz with the plugin API but knows nothing about security.

At the risk of sounding trite, always remember that security is only as strong as the weakest link.

Ignorance, Uncertainty and Doubt [Jon's Network]

Posted: 07 Jul 2008 03:18 PM CDT

Richard Feynman gave this talk on the value of science over 50 years ago. It’s full of wisdom from a brilliant man.

If we take everything into account — not only what the ancients knew, but all of what we know today that they didn’t know — then I think we must frankly admit that we do not know.

All scientific progress came as a result of doubting existing “knowledge”. To make progress, we have to “recognize our ignorance and leave room for doubt”.

via Big Contrarian

Don't use Clickcaster for podcast hosting [StillSecure, After All These Years]

Posted: 07 Jul 2008 01:41 PM CDT


When I find a new product or service that I think is good I am only too happy to let the world know it on my blog. For the past almost 2 years in the notes of every episode of our podcast, I mention and thank ClickCaster for hosting our podcast.

I originally was turned on to ClickCaster by Scott Converse out in Boulder, CO, who was the founder of ClickCaster. When Scott realized that a free model was not going to pay the bills, he instituted a pay model for podcast hosting. I was only too happy to pay for the great service and stats I was receiving. Well, a few months ago Scott and team sold ClickCaster to focus on their new project, Medioh!.

The new owners, Nexplore, promised no changes and the same great service. Since then the stats stopped working, it became harder and harder to post new content, and the site was down more than it was up. Finally, after getting no satisfaction from ClickCaster, I had no choice but to look for another host. Mitchell and I have chosen Pod-o-matic to host the podcast going forward.

Of course we don't have all of the episodes moved over yet, because ClickCaster isn't even up enough for us to grab them all. But most of them are up at Pod-o-matic and we have already repointed the Feedburner/iTunes feed. So from here on you can hear us at Pod-o-matic.

If you are looking to host your podcast, you don't have to use pod-o-matic, but don't use ClickCaster!


Chicago Security Community [Infosec Events]

Posted: 07 Jul 2008 12:29 PM CDT

This post is part of the information security communities project.

Hey everyone!

My name is Steven McGrath, and as a security professional local to the Chicago area, I thought it would be best to share a list of events in the area that I am familiar with:

  • Chicago 2600 - Chicago 2600 is an informal gathering of security professionals, hackers, phreaks, computer enthusiasts, gamers, and the list goes on.   Technical discussions normally happen on a monthly basis as well as a lot of socializing.  The group is very informal and there is a diverse mix of people from young to old, highly technical to still learning.
  • ChiSec - An informal meetup of information security professionals in Chicago. Unlike other meetups, you will not be expected to pay dues, "join up", or present a zero-day exploit to attend.
  • Chicago Snort User’s Group - ChiSNORT is a user group for snort and security professionals.  With a highly technical crowd in everything from big business to non-profits, there is a diverse crowd of people that are more than willing to learn, help, and inform people about information security and SNORT specifically.
  • ISACA Chicago - The Information Systems Audit and Control Association (ISACA) is a professional association of individuals interested in information systems audit, control and security.
  • ISSA Chicago - The mission of the Chicago Chapter is to offer a stimulating combination of discussion forums, hands-on learning, CISSP certification training, conferences, and other events which are designed to enhance understanding and awareness of information security issues for information security professionals.
  • OWASP Chicago - The Open Web Application Security Project (OWASP) is a worldwide free and open community focused on improving the security of application software. Our mission is to make application security "visible," so that people and organizations can make informed decisions about application security risks. Everyone is free to participate in OWASP and all of our materials are available under an open source license.

These are just a few of the security-related events in the area. With many of them recurring on a monthly basis, there are plenty of opportunities to occupy your free time socializing with a large number of security professionals. Every group mentioned also has an open, welcoming atmosphere and a diverse range of security professionals, from government to private sector to nonprofit.

New Meme: "Security Idiot" [Anton Chuvakin Blog - "Security Warrior"]

Posted: 07 Jul 2008 11:59 AM CDT

It is official! :-)

"Security idiot" is now an official security meme, as per SecMeme.com :-)

Freakonomics and Data [Emergent Chaos]

Posted: 07 Jul 2008 10:57 AM CDT

There's a really interesting article in the New Republic, "Freaks and Geeks:"
In 2000, a Harvard professor named Caroline Hoxby discovered that streams had often formed boundaries to nineteenth-century school districts, so that cities with more streams historically had more school districts, even if some districts had later merged. The discovery allowed Hoxby to show that competition between districts improved schools. It also prompted the Harvard students to wrack their brains for more ways in which arbitrary boundaries had placed similar people in different circumstances. ...In retrospect, I have come to see this as the moment I realized economics had a cleverness problem. How was it that these students, who had arrived at the country's premier economics department intending to solve the world's most intractable problems--poverty, inequality, unemployment--had ended up facing off in what sometimes felt like an academic parlor game?
It's a very interesting article on the economics of academic economics, and some of the perverse incentives which exist in the field.

Me, I look forward to the day when we have so much data that we can start looking for arbitrary differences and boundaries. I look forward to the day when security has a cleverness problem. No doubt we'll end up calling it database pharming.

Incite Redux: Day 2 - It's time for an Audit Revolution [Security Incite Rants]

Posted: 07 Jul 2008 10:51 AM CDT

Good Morning:
Some days I get to reflect on how lucky I am. I guess when you are sitting on the beach, watching your kids enjoying life, it's as good a time as any to appreciate all that I have. Of course, a unique "feature" of my personality is to never be satisfied - to always be striving for more. Yet, some days it just makes more sense to forget about all that crap. My goals and aspirations of world domination will be there when I return to the office and my daily rituals.

Until then, I think I'll just enjoy the fact that things could be a lot worse.

Have a great day.

Incite #2: It's time for an audit revolution

Contrary to popular belief (and desire), compliance is far from dead and remains a major buying catalyst (and funding source) for all sorts of information security tools, services and the like. Yet the acrimonious relationship between the auditor and the audited continues to create problems and needlessly burn resources. Forward-thinking security professionals jump on the bleeding edge of innovation, treating the auditor as a peer and viewing the audit as a learning opportunity.

Read the original Days of Incite post on this topic.

6-month grade: B-

I need to come clean. Sometimes I get what's right and what's realistic confused. Now there is no doubt that my ideas about how auditors and auditees can work together are right on the money. I've heard enough feedback from enough people I trust that not treating an audit or an assessment like a 15-round fight is a much more productive way to go about things. This approach is laid out in the Pragmatic CSO.

But then again, what's realistic tends to be constrained by people, and people don't really change readily - if ever. It reminds me of one of the great lines in You Don't Mess With the Zohan: "They've been fighting for 2000 years, it will be over soon." Unfortunately, that seems like the story we tell in the security business. We've always fought with auditors, and not fighting with them is kind of like asking for peace in the Middle East. Except I do think it's possible.

Just keep in mind that we are all fighting for the same thing - and that's to protect the information and assets of the organization. The auditors want to be able to prove that things are happening. Is that all bad? Of course not, it's quite good - but it takes a different kind of security practitioner to realize that.

What about the whole compliance golden goose? It's still alive and well. As we look forward to the end of 2008 and into 2009, it seems the global economy isn't going to be improving much at all. So we will face even more budget tightening and scrutiny of our investments. Since security is still largely an overhead function, it's going to be even more heavily scrutinized. 

So using the compliance card is not a bad thing at all. But do you buy something that is purported to help with compliance? Of course not. After all, a smart guy figures that GRC is dead. Buy what you need to protect your stuff. That hasn't changed at all. You still need to focus on Security FIRST! If you do that well, you'll be in decent shape for your audits and assessments.

In terms of a grade, the long term trend is intact and the approach is solid. But it'll happen more slowly than I anticipated - so I get a B-. Or go hug your auditor and prove me wrong.

Photo credit: "Monster Hug" originally uploaded by Alberto+Cerriteno

Incite Redux: Day 1 - Express Your Inner Bean Counter [Security Incite Rants]

Posted: 07 Jul 2008 10:17 AM CDT

Good Morning:

Just to give you a general overview of the Incites Redux process, I revisit my 2008 Incites (or projections, for those of you not familiar with my lingo). I do this to provide some level of accountability, which still seems to be unique in the technology research business. Folks make ridiculous projections, both on market sizing and industry dynamics, with impunity. If they are wrong, so what? They still collect their checks and no one is the worse for it. Except those poor saps who actually follow their advice.

So hopefully by now you've realized I'm a different kind of analyst and a different type of guy. I not only welcome the scrutiny of my positions, I search it out. So over the next two weeks, I'm going to revisit each of my 2008 Incites and give myself a "grade." Of course, this is self-analysis - but I'm confident that if you strongly disagree with something, you'll let me know. Bashful folks you are not.

Have a great day.

Incite #1: Express Your Inner Bean Counter

Substantiating the value of security continues to plague practitioners, who still can't specifically answer the question: "Are we secure?" Structured security programs (ISO 27001/2, COBIT, Pragmatic CSO) help align programmatic activities, and look for significant advances in the area of security metrics – where the industry begins to gain consensus about what can and should be tracked.

Read the original Days of Incite post on this topic.

6-month grade: D

OK, this is not an auspicious beginning to my 2008 Incites. Some of my buddies told me I was being a bit optimistic to think that we'd see "significant advances" in the area of security metrics. And they were right. But let's not put the cart before the horse. The first part of the Incite deals with security programs, and if anything the industry's desire for a "cookbook" of sorts that provides a set of playbooks to do security remains very high.

I've fielded a lot of inquiries and questions regarding ISO 27001/2 and also COBIT. Throw in a little of NIST's 800-100 and 800-53, and it remains clear that most practitioners still have no idea where to start when embarking on a security program. I expect adoption of these frameworks will accelerate over the short term. I also think that a lot of fairly pragmatic IT professionals will default to following the 12 requirements of the PCI DSS. Notice that I said pragmatic IT professionals, not Pragmatic CSOs.

Most IT professionals facing down the spectre of PCI compliance don't have much of a choice but to move towards the 12 requirements. Truth be told, that isn't a bad place to start - but it's not a comprehensive security program. For security professionals (yes, Pragmatic CSOs), the program needs to be more holistic and more structured. Remember, security is a journey, not a destination, and it's not just about passing the assessment and getting the rubber stamp. It's about actually protecting information.

Which brings us to the metrics discussion. Over the past 9 months, I've gotten fairly deep in the metrics community and lent a hand to start up a consensus group to define what can and should be counted. Basically, I'm still looking for my own answer to "Are we secure?" and at this point, the only answer still is a resounding no! One of the books I'm reading during my summer hibernation is called "The Black Swan" and it's really impacting my view on what security really is and how we should be measuring ourselves.

I've got a lot more thinking to do around the topic, but I think the question "Are we secure?" is the wrong question to ask. To me, "how quickly can we recover from a fixed number of attack scenarios?" seems to be a more appropriate question, especially given that we've never been able to predict where the next major attack will come from - and I suspect we never will. But we may be able to model the type of damage an attack will cause, and figure out how long it would take to recover.

I know that the risk counters out there want and like to build big models to assess the risk and quantify it (and hopefully over time reduce the applicable risk that is being measured), but I'm still not sold on that approach.

To their credit, the Risk Management Insight guys have been very patient with my constant whingeing about their approach, and Jack has put together a number of thoughtful pieces about why actually quantifying your risk is a good thing. Yet I'm still questioning how that kind of analysis yields any kind of return relative to the time and resources required to build the model.

But even that's not the point. The point is that we need to bifurcate the metrics process into (for lack of a better term) an elementary school track and a PhD track. There are some very, very smart people out there who are talking at a PhD level. Their work is impressive, but it's not accessible. The common men and women out there, just trying to get through the day, do not track (nor could they collect) the metrics the PhDs are talking about.

Yet the PhDs tend to be very critical of the "good enough" approach of the rest of the world: the folks that count patches and AV updates and spam blocked. And this is the problem with trying to produce a SINGLE set of metrics for the entire industry. Thus, I now realize the idea of gaining true consensus on security metrics is probably a pipe dream.

We should be looking for AT LEAST two different sets of metrics and data sets. The first is really tracking activity and that is for the unsophisticated practitioner that is just trying to get a handle on what they are doing operationally. We have the data and although it's not perfect, at least it's there. 
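A sketch of that first, operational track: plain activity counts a team already has on hand. (The event names here are hypothetical, chosen to match the patch/AV/spam examples above.)

```python
# "Elementary school track" metrics: tally raw operational events.
# No risk model, no quantification -- just counting what happened.
from collections import Counter

# Hypothetical event log; in practice this would come from ops tooling.
events = [
    "patch_applied", "patch_applied", "av_signature_update",
    "spam_blocked", "spam_blocked", "spam_blocked", "patch_applied",
]

def activity_summary(events):
    """Return a simple count of each operational activity type."""
    return dict(Counter(events))

summary = activity_summary(events)
print(summary)
```

Imperfect, as the post says, but the data exists today and takes minutes to report on.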

Then there is the ongoing research the PhDs are pushing to model and quantify risk and figure out which knobs really impact security outcomes. Optimally, the PhDs will find some things everyone else can use over time. But to think we are going to reach a consensus anytime soon is, well, optimistic. And in this business, hope and optimism get a D.

Photo credit: "censored" originally uploaded by tifotter

Urgent - Action Needed: Call Your Senator Today on FISA! [The Falcon's View]

Posted: 07 Jul 2008 08:29 AM CDT

Hopefully this will be the last time I need to post anything on this issue. Hopefully you will all join me in contacting US Senators today, urging them to vote no on the FISA reforms passed to them by the...

Three Ways to Avoid “Wheel Reinvention” - and Build a Better, Trusted Solution [The Security Catalyst]

Posted: 07 Jul 2008 08:15 AM CDT

The last article in this series explored the top three reasons why groups tend to reinvent the wheel (read it here, or the entire series started here). And now, some solutions:

Beyond the frustration caused by an approach that simply recreates the wheel, the result is often a solution that is not trusted and therefore readily cast aside in favor of the next offering. Putting a stop to this cycle requires taking a different approach. Success has to be based on fundamentals and sound principles.

 

How to do it?

A key part of the solution is to enter into deliberate discourse (note: this is a central theme of Into The Breach and a topic I am passionate about). More voices with an opportunity to review, consider and contribute can lead to a better product, but only when a strong leadership team has enough expertise to guide the effort and the skills to facilitate and negotiate the final result.

Instead of starting with a blank slate, it is a good practice to build on the success of others. When it comes to strategies that protect information, we have plenty of choices – frameworks like ISO 2700x, PCI, FISMA, etc. However, limiting the solution to a narrow set of industry standards may not yield the best results. Sometimes, real progress comes at the intersection of industries (to gain more insight on this approach, consider reading: The Medici Effect) – leveraging how the medical, engineering or other industries have dealt with and handled challenges may bring valuable insight to the effort at hand.

The advantage to building on the validated and transparent work of others is the ability to avoid conjecture and "gut feeling." This is the challenge: there are few shortcuts to spending the time to outline, think, plan, distill, check, cross-reference. This is an area where transparency really provides a benefit.

When the group of professionals is assembled, here are three steps to harnessing the collective power, building on the wheel (instead of building a new wheel) and reaching a point of success:

 

1. Capture and distill frameworks (or solutions)

Start by presenting a model to work from, based on an existing solution. In general, individuals and groups struggle to create but excel at editing and revising. With this in mind, selecting an initial framework or set of solutions to present to the group acts as a strawman [http://en.wikipedia.org/wiki/Strawman]. This has the added benefit of allowing people to beat on the framework(s) instead of each other.

The frameworks or solutions can either be selected in advance or decided by the team. Allowing the team to decide may provide for more diverse results but requires more time and a stronger facilitator (who possesses deep subject matter expertise). Stronger frameworks and solutions are those that have already been publicly validated and are more transparent. This suggests the "heavy lifting" has already been done and the team can focus on refining and tailoring what already exists from multiple sources into the solution required.

More important than just compiling a list of viable frameworks and solutions is how they are captured and processed. As the elements are suggested, reviewed and documented, look not only for the similarities, but also for the distinctions between them. Working to understand why specific elements were either included or excluded may also reveal key insights that aid the development of a stronger solution. Note the intended audience and users of the solution and how it is received. It may be useful to note the level of maturity, too (since that provides some insights).

This process generates a lot of discussion – this is good, and leads to the second point.

 

2. Capture and distill the running dialogue

More important, perhaps, than the solutions selected in the last step is the running dialogue that occurs as part of the process. Yet few organizations take the time or make the effort to capture that solid gold value.

Ultimately, the discussion – the true process of negotiation and coming to a common understanding – is precisely what allows a group to build the final product. While the discussion is natural, here are three important questions to ask, answer and record during this process:

a. What works — and why?

b. What does not work — and why?

c. How is this applied — and why?

Look for specifics. This is an area where people tend to rely on “truthiness” – which, to a certain extent, may be okay. In the overall discussion, however, guide people back to more concrete grounding by asking more questions to ensure everyone shares a common understanding (which is not necessarily the same as a common opinion!). The next segment will explore the benefit of capturing this conversation and making it available in the future.

As the conversation continues, there is one more step to increase the overall value.

3. Capture and distill references

The value of having experts together in a room is their collective knowledge – informed by experience, training and a vast array of resources. Therefore, it is incredibly valuable to regularly ask this group to cite the references they find of value.

As the discussion rages on (if you have been part of a working group, rage is definitely the right word), asking people to take the time to cite the references that support their assertions returns focus to the fundamentals.

Not only does this improve the overall framework, but this also improves how it is applied and verified (as we will explore in the next sections).

 

Bottom Line

Bring together a small, tight team that works well together. Welcome as many voices into the process as reasonable. Take the time to distill and overlay what already works.

 

How this Applies to Trustmark

When Trustmark gets this right, it will essentially be an overlay on the entire industry – explaining where, how and why the different control families and control objectives can be met. This is important, since it allows additional regulations or efforts to be accommodated without prescribing a set way of working. But whether working on Trustmark or a new process to protect information, following these steps leads to a stronger - and more trustworthy - result.

 

Up Next: the second challenge facing Trustmark and similar efforts is in how the solution is applied. We examine this challenge with potential solutions before moving on to the final challenge of how the solution is measured and verified.

 

If you enjoyed reading this article, please take a moment to either subscribe to the RSS feed (www.securitycatalyst.com/feed/) or sign up for free updates by email. Use the buttons below to print this article or share this with friends and colleagues that will benefit from this.

VoIP Exploit Research toolkit [varie // eventuali // sicurezza informatica]

Posted: 07 Jul 2008 07:50 AM CDT

http://sourceforge.net/projects/voiper/
VoIPER is a VoIP security testing toolkit incorporating several VoIP fuzzers and auxiliary tools to assist the auditor. It can currently generate over 200,000 SIP tests, and H.323/IAX modules are in development.
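As a rough illustration of the fuzzing idea (this is my sketch, not VoIPER's actual code or test corpus), a SIP fuzzer can take a template request and substitute oversized or malformed values into individual header fields, producing one test case per mutation:

```python
# Toy SIP test-case generator in the spirit of a fuzzer like VoIPER.
# The template and mutation values are illustrative only.
TEMPLATE = (
    "INVITE sip:bob@example.com SIP/2.0\r\n"
    "Via: SIP/2.0/UDP {via}\r\n"
    "Content-Length: {length}\r\n\r\n"
)

# First value in each list is the benign default; the rest are malformed.
MUTATIONS = {
    "via": ["host.example.com:5060", "A" * 4096, "%s%s%s%n"],
    "length": ["0", "-1", "9" * 64],
}

def generate_cases(template, mutations):
    """Yield one request per (field, value) pair, mutating one field at a time."""
    defaults = {field: values[0] for field, values in mutations.items()}
    for field, values in mutations.items():
        for value in values:
            params = dict(defaults, **{field: value})
            yield template.format(**params)

cases = list(generate_cases(TEMPLATE, MUTATIONS))
print(len(cases))
```

Even this tiny corpus yields six cases; scale the mutation lists across the full SIP header set and six-figure case counts follow quickly.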

@twitterspammers. [NP-Incomplete]

Posted: 07 Jul 2008 12:17 AM CDT

Spammers went after Twitter pretty hard this holiday weekend using the "friend invite" model that was first developed against other social networking services. Briefly, the attack involves creating a large number of spammy profiles and then inviting people to view the spam by performing a friend request, or in Twitter's case, "following" the spam target. I have included screenshots of a few of these attacks.
An individual can remediate this attack in the short term by disabling e-mail notifications of people following you. This is by no means an optimal solution. The only party that can really address the situation is Twitter, through a combination of blacklisting, throttling, CAPTCHAs, and content analysis.
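As a sketch of one of those service-side mitigations, throttling, here is a toy rate-limit check that flags any account following others faster than a threshold. (The threshold, usernames and timestamps are invented for the example; a real system would tune these against actual abuse data.)

```python
# Toy follow-rate throttle: flag accounts that issue more than
# FOLLOW_LIMIT follow requests inside a sliding WINDOW_SECONDS window.
from collections import defaultdict

FOLLOW_LIMIT = 5      # max follows allowed per window (illustrative)
WINDOW_SECONDS = 60   # sliding window length (illustrative)

def flag_spammers(follow_events, limit=FOLLOW_LIMIT, window=WINDOW_SECONDS):
    """follow_events: iterable of (username, unix_timestamp).
    Returns the set of usernames that exceeded the rate limit."""
    flagged = set()
    history = defaultdict(list)
    for user, ts in sorted(follow_events, key=lambda e: e[1]):
        # Keep only this user's follows still inside the window.
        history[user] = [t for t in history[user] if ts - t < window]
        history[user].append(ts)
        if len(history[user]) > limit:
            flagged.add(user)
    return flagged

# A spam profile firing 12 follows in 12 seconds vs. a normal user.
events = [("spam_profile", t) for t in range(12)] + [("normal_user", 0), ("normal_user", 120)]
print(flag_spammers(events))
```

Throttling alone is easy to evade by slowing down, which is why it would be combined with the blacklisting, CAPTCHA and content-analysis measures mentioned above.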

Joint Commission Updating Security Standards and Elements of Performance [Compliance Focus - Blogs]

Posted: 06 Jul 2008 11:00 PM CDT

The Joint Commission, which is a non-profit organization that publishes standards for healthcare organizations and runs an accreditation program, is updating some of their standards for 2009, including some which impact information security.

The Joint Commission, previously known as JCAHO, is updating some of the standards and elements of performance relating to information management, privacy, and security. Many healthcare organizations seem to pay more attention to the Joint Commission standards than they do to HIPAA, because having JCAHO accreditation is very important to the organization's business performance. JCAHO accreditation is an independent measure of healthcare quality and performance across many areas of the business (information management being one). Liability insurers look to JCAHO accreditation as a measure of quality and risk, so this tends to be a big deal.

Joint Commission information management standards which are changing include:
IM 02.01.03, EP 5, which now reads "The hospital protects against unauthorized access, use, and disclosure of health information". The previous language just said "The organization implements the policy".

IM 02.01.03, EP 8, which now reads "The hospital monitors compliance with its policies on the security and integrity of health information".

The language is obviously not overly prescriptive in terms of how healthcare organizations are supposed to achieve these standards. One assumption is that the organizations will turn first to the HIPAA Security and Privacy Rules for guidance. Maybe they will also look at ISO27002 for more specific controls relating to information security.

These and the other changes to the JCAHO information management (security and privacy) standards are important because healthcare organizations now have the Joint Commission accreditation process at risk if they fail to adequately implement their information security program.

Jim

Rafa Wins!!! [The Falcon's View]

Posted: 06 Jul 2008 04:49 PM CDT

Spanish phenom Rafael Nadal has won the 2008 Wimbledon Championship for the first time today, and boy was it a thriller! After Nadal went up 2 sets to love, his competitor, the legendary Roger Federer, fought back to win two straight...

DeepSec 2007 Videos Now Online [Infosec Events]

Posted: 06 Jul 2008 03:06 PM CDT

DeepSec is an in-depth security conference in Vienna, Austria. Last year it was held on November 20th through the 23rd, and from the speaker lineup, it looked like a very good conference. Everyone can now enjoy the presentations, as the DeepSec 2007 videos are online at Google video.

Here are some of the DeepSec 2007 presentations that sound interesting to me:

The two DeepSec 2007 keynotes are online as well:

And here are the rest of the DeepSec 2007 presentations:

I also noticed a few DeepSec 2007 presentations that are not online:

  • Are the vendors listening? - Simon Howard
  • Collecting and Managing Accumulated Malware Automatically - Georg Wicherski
  • Intercepting GSM traffic - Steve
  • Web 2.0 Application Kung-Fu: Securing Ajax & Web Services - Shreeraj Shah

Forget the python vs ruby discussions.. [extern blog SensePost;]

Posted: 05 Jul 2008 04:48 PM CDT

Cause this puts Perl right back in the game!

-snip-

> sudo perl -MCPAN -e shell
cpan> install Acme::LOLCAT
install -- OK

> cat demo.pl
#!/usr/bin/perl
use Acme::LOLCAT;
print translate($ARGV[0]);

> ./demo.pl "Im going to run all emails through this before sending"
IM GOINS 2 RUN ALL EMAILZ THROUGH THIZ BEFORE SENDIN

-snip-

ahhh.. MUH WORK AR DONE HERE

ARDAgent.app Vulnerability Analysis [...And you will know me by the trail of bits]

Posted: 05 Jul 2008 03:37 PM CDT


Apple recently released Mac OS X 10.5.4 with accompanying security updates for 25 vulnerabilities.  Notably absent, however, is a fix for the recently brouhaha’d ARDAgent.app local privilege escalation vulnerability.  The exploit is extremely simple and unfortunately, it seems that the fix is not; otherwise Apple would have fixed it in this batch.  For more information on the exploit including temporary fixes and workarounds to protect yourself until Apple fixes this vulnerability, see the full write-up at MacShadows.com.

In the interest of fully understanding Mac OS X security issues, let’s dive in and see how this vulnerability works.  As a reminder, the vulnerability is that ARDAgent.app, a set-user-id root executable, responds to the “do shell script” Apple Event, effectively running arbitrary commands as root.

Applications must announce that they can receive Apple Events before they may be scripted.  In Cocoa applications, this is done by setting the NSAppleScriptEnabled property to “YES” in the application bundle’s Info.plist file.  In Carbon applications, an application is made scriptable by simply calling the AEInstallEventHandler() function.  AEInstallEventHandler() lets the application define which Apple Events it can handle and supply the handler functions for them.

ARDAgent.app did not do anything special in order to respond to the "do shell script" Apple Event; this event is defined in the StandardAdditions Scripting Addition in /System/Library/ScriptingAdditions. Scripting Additions are dynamic libraries (dylibs) that will be loaded automatically by the Apple Event handler if the application receives an Apple Event that is defined in them. There are several Scripting Additions in /System/Library/ScriptingAdditions, but they may also potentially be found in /Library/ScriptingAdditions or ~/Library/ScriptingAdditions.

Interestingly, ARDAgent.app calls AESetInteractionAllowed() with kAEInteractWithSelf after installing its own Apple Event handlers which is supposed to restrict the processing of Apple Events to only those sent by the process itself.  It obviously does not seem to have its intended effect in this case.

This is a pretty isolated vulnerability, not a massive security hole in AppleScript. Set-user-id executables should not be scriptable, and ARDAgent.app appears to be the only application that violates this.

UPDATE @ 20080704: As mentioned in the MacShadows security forums from the link in the comments below, SecurityAgent is also susceptible to this, but only when SecurityAgent is running with increased privileges (after a sudo, unlock of System Keychain, etc). But this is not because SecurityAgent is setuid, it is because it still receives Apple Events when it runs with increased privileges. It doesn’t, however, appear to call any Apple Events functions. So I am not sure why it is processing Apple Events (handled in a base framework?). If anyone knows why, let me know.

UPDATE @ 20080705: I have poked around at this a bit more this weekend, and it turns out that an application does not need to call any Apple Events APIs in order to receive and process Apple Events.  While Cocoa applications must set the NSAppleScriptEnabled property, any Carbon application automatically handles Apple Events.  At the lowest level, Apple Events are sent over Mach ports looked up from the bootstrap server, so you need port send rights in order to send it Apple Events.  This means that a client will not be able to send events to an application running as a different user unless it is setuid/setgid (ARDAgent) or is run with increased privileges, but still checks its ports in with the bootstrap server (SecurityAgent, which is launched by securityd with gid=0 on Tiger in certain situations).

Oil, Energy Independence Day in 10 Years, Drilling, Becoming OPEC 2.0, and Banana Splits [The Converging Network]

Posted: 05 Jul 2008 11:37 AM CDT

I was just getting ready to close down the laptop for the evening when I began thinking about how much my views have changed on our nation's energy policies. It's the 4th of July and I enjoyed a banana split to celebrate. (Long time since I've had one of those.) I was in high school during the 70's oil crisis and enjoyed those many years of driving 55 mph on the interstate (I'm being very facetious here). I heard on Sirius radio that one of our congressmen proposed bringing back the 55 mph limit. While conservation is a good thing, so is our nation's (and my personal) sanity, and bringing back the 55 mph speed limit is one of those ideas I hope we shoot down with a vengeance. I'm one of the biggest offenders of conservation when it comes to my Suburban, but I love to drive and I've enjoyed having a big vehicle. I hope to change that soon and move to a much more efficient vehicle once I decide what to buy. I tend to keep cars for quite a while, so it's an important decision, one I don't want to make too quickly and then realize I've made a choice that doesn't work for me. I actually am very concerned about energy independence, creating green products, and preserving our environment along with building a vibrant economy. It's one of the reasons I'm an advisor to Sustainable Minds, a company that helps make designing green products easier.

One of the things I've always disliked about politics is its polarizing nature: each side takes sides, making arguments win-lose when a combined solution is really what's needed. Americans are getting hit below the belt right now with the one-two punch of high gas prices (along with the associated rise in food and other prices) and a struggling economy. Rather than take a sensible approach, Obama and McCain are framing the debate as energy alternatives vs. more drilling, turning the argument into yet another polarizing debate.

I'm glad Obama is strongly for creating energy alternatives. I would love to drive a hydrogen vehicle if they were available at a reasonable price with sufficient fueling stations available. I believe our nation's resources should be dedicated to becoming a new economy of alternative energy and green technologies. Just like John Kennedy ignited the American engineering spirit of the space program with his challenge to put a man on the moon before the end of the decade, we should make a current-day challenge of bringing hydrogen cars and fueling stations across the country in less than ten years. Where's our government when we need it? If our government made the same kind of investment in becoming energy independent that we made to get to the moon, we'd be fueling a whole new economy of alternative energy businesses that could solve our energy problems and sell to the rest of the world. I believe in our continued investment in NASA, but I'd delay everything we have on the table for the next 10 years to redirect that money into celebrating an Energy Independence Day in ten years or less. How about it, Obama -- make the challenge: Energy Independence Day in less than 10 years. We do it, not because it is easy, but because it is hard... remember that kind of inspiration? Let's get moving, Washington.

I also believe we could use our oil reserves to help fund the creation of our energy independence. I flippantly said one day, "Let's drill offshore, sell the oil to China, and use the proceeds to fund the creation of hydrogen cars." Not such a crazy idea after all, eh? It would be like selling China the oil equivalent of crack. Let them build up their dependence on oil to an even greater extent, and then sell them our green energy technology and products as even higher oil prices squeeze their economy and slow growth down the road. I do believe we have to drill for more oil using US resources to lessen the impact OPEC has on us. That doesn't mean we have to drill in ANWR, but parts of Colorado, Wyoming, North and South Dakota, and Montana are sitting on sizable oil reserves. Those, along with the oil sitting offshore, could create at least a balancing factor against the current out-of-control oil price situation. Let others buy our expensive oil for a change, or they can buy our alternative energy technology instead. With alternative energy and hydrogen cars created, the USA would be the next-generation OPEC 2.0 of alternative energy and oil. In ten years our problem could do a 180 and become our biggest strength.
