Spliced feed for Security Bloggers Network
The Catalyst onTour: Soon We'll Be Making Another Run [The Security Catalyst] Posted: 08 Jul 2008 07:18 AM CDT (after reading that title, are you singing the theme to Love Boat yet? If you are, and you miss the program, go watch a full episode now: http://www.cbs.com/classics/the_love_boat/) The book is being printed (finally!). The preview copies are being mailed out. And we have been in the same spot for a few weeks now. It is time to load up the coach and head back out on the road! Our "ship" is not as big as the Love Boat, but the adventures never cease, and we're ready for the next one. What was initially conceived as the "Campaign Across America" has evolved into the more appropriate "Catalyst onTour." We have the "tour bus" and a desire to see as much of the country as we can. Unlike a rock band going on tour, we have more of a grass-roots approach and a powerful message: each of us makes a difference when it comes to protecting our information, our identities, our children. As the tour rolls on, we seek to bring that message of optimism and support door-to-door. Seriously. To better explain the Catalyst onTour concept, approach and benefits to businesses, families and even potential sponsors, we are in the process of setting up the Catalyst onTour website (hopefully before we leave again in July; it's next after we update the book website). Minimally, this site will allow you to keep in touch and join (if only virtually) our efforts through writing, pictures, audio and video, and to ask questions, make suggestions and otherwise get involved and make a difference! The July/August Route: Tour Leg Anchor Events
Potential Cities and Stops Along the Way
On the way home, we have a lot of options - so if you are somewhere between Arizona and Upstate NY, let us know and we will try to work something out. We are currently planning to circle back to Upstate NY during the first week of September. This gives us a few weeks home before setting out on a series of speaking engagements and client working sessions, a potential trip to Orlando and whatever else influences some onTour segments. Technorati Tags: catalyst, catalyst onTour, blackhat
NIST: Two New Publications [varie // eventuali // sicurezza informatica] Posted: 08 Jul 2008 06:21 AM CDT NIST has just released two publications, and both are very interesting. The first, "Guide to SSL VPNs", covers the technologies underlying SSL VPNs, an area that is steadily gaining popularity over "traditional" IPsec VPNs, and then moves on to deployment scenarios and usage recommendations. I haven't had time to read the publication yet, but at first glance it is very well done (as is NIST tradition). link - NIST: Guide to SSL VPNs (pdf) The second publication is still a draft, so it can be commented on before it is finalized and published. Its topic is just as timely: "Guidelines on Cell Phone and PDA Security". With the ever-increasing use of mobile devices, PDAs and smartphones that have expanded their scope from a simple calendar or phone into multifunction devices integrating calendar, email, contacts, telephone and often access to corporate resources, it becomes imperative to ask what the actual security level of these devices is and what guidelines should be applied for their safe use. A publication like this is therefore very welcome! link - NIST: Guidelines on Cell Phone and PDA Security (Draft, pdf)
Provisioning: Security's First Step to Measuring Organizational Impact [BlogInfoSec.com] Posted: 08 Jul 2008 06:00 AM CDT Security is often accused, occasionally with merit, of being an obstacle to an organization's business. While the drumbeat of cyber threats has at least raised the technology risk consciousness of many business managers, security professionals still have the challenge of quantifying how big an insurance policy makes sense for their organization. We will spend some time in a future article exploring effective security metrics, but one place where security can often measure both its impact and its benefit is in the provisioning process. Several years ago, while working in financial services, we were under strict internal and regulatory pressure to ensure segregation of duties and least-privilege access for all associates who had exposure to investment data (about 4000 people). Unfortunately, the manual processes then in place required significant administrative overhead not only from the access administration team but, more distressingly from management's perspective, from senior staff who were constantly barraged with access approval requests from a global user community. Needless to say, these manual processes were as ineffective as they were burdensome, as an almost constant stream of audit findings indicated. As with many organizations, both the overhead and ineffectiveness of the access approval process became accepted enterprise costs, and there was no organizational mandate to address the challenges strategically. However, one tactical approach after another failed to provide any lasting solution, and served only to increase stress on access administrators and approvers alike. Security's requests to initiate a strategic solution fell on deaf ears until we were able to use some previous lessons learned to make our case financially.
While working a few years earlier in the corporate security function, we had sought to quantify the cost, in terms of lost productivity, of provisioning delays caused by not having a single user identifier and central identity store. While our methodology was pretty raw and (...) © Patrick Foley for BlogInfoSec.com, 2008.
Blizzard offers two-factor authentication, why doesn't your bank? [spylogic.net] Posted: 08 Jul 2008 06:00 AM CDT Lots of buzz on the net about Blizzard (creators of World of Warcraft) offering a $6.50 two-factor authentication token for customers that want an extra layer of protection for their account. Yes, if you didn't know, account theft in WoW is on the rise! I commend Blizzard for taking this extra step to help protect their customers...sure, two-factor authentication isn't perfect, but regardless it's a step in the right direction. So why don't more banks and financial institutions set this up for their customers? PayPal was able to do it right (not perfectly, but close). It comes down to customer support and cost. One of the many ways a bank or financial institution makes money is by offering products that are user friendly and can be used by just about anyone. For someone with some technical skill, using a two-factor authentication token is a cakewalk...unfortunately, the average bank user (think about your mom or the person in your family with the least amount of technical skill...yes, the one that calls you to fix their computer...) will most likely be confused as to how to use the device, and that will mean a call to the bank's customer support center (calls cost $$). Let's not forget about the back-end infrastructure (servers and IT staff cost $$) and all the additional red tape the institution has in regards to advertising and putting a friendly spin on it for customers. Martin McKeay and Michael Santarcangelo had some good discussion about this on the Network Security Podcast (Episode 110). In a nutshell, the conversation was about how banks offer many different easy-to-use services, and tying a two-factor solution to all of these products is just not worth the cost, time and effort (except for high-wealth customers). Also, what happens when you have multiple accounts at multiple banks? Do you carry around multiple tokens? My opinion?
Until there is something easier to use and more secure, I don't see most banks or financial institutions going two-factor anytime soon.
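The post above treats the token as a black box. As background, here is a minimal sketch of how a time-based one-time-password (TOTP-style) token generates its codes, using only Python's standard library. This illustrates the general scheme (HMAC-SHA1 over a time counter, with dynamic truncation), not Blizzard's or any bank's actual implementation.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, t=None, step: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password (TOTP-style sketch)."""
    # The shared secret and the current 30-second time window are the
    # only inputs; the server computes the same value independently.
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

On the server side, verification just means computing the same code for the current window (typically allowing one step of clock drift in either direction) and comparing it to what the user typed.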
funky javascript [extern blog SensePost;] Posted: 08 Jul 2008 03:48 AM CDT
Maryland Breach Notices [Emergent Chaos] Posted: 08 Jul 2008 02:25 AM CDT Maryland Information Security Breach Notices are put online by the forward-looking attorney general, Douglas F. Gansler. I'm glad that they list case IDs on there. We're getting to the point - what with Attrition.org, the Identity Theft Resource Center, Privacy Rights Clearinghouse, Adam Dodge, Chris Walsh, and probably others I'm forgetting - that it's like chaos out there. We need a 'CBE' just to help us all cross-correlate. Via "I've Been Mugged."
VeriSign extends free reissuance policy [Tim Callan's SSL Blog] Posted: 07 Jul 2008 04:51 PM CDT In recent weeks I've spoken a lot about the Debian flaw that enables the creation of weak SSL keys. One thing you may be aware of is that VeriSign suspended charging for replacement of SSL Certificates through the end of June to facilitate the replacement of these certificates.
Minimizing the Attack Surface, Part 2 [Zero in a bit] Posted: 07 Jul 2008 04:10 PM CDT I'm finally getting around to finishing my post on minimizing attack surfaces. Here's Part 1, in case you missed it. First, a quick clarification. I noticed that some of the readers who commented on that first post wanted to talk about improving security through the use of various development methodologies or coding frameworks. Those are interesting tangents (and ones that I may write about in the future), but my intention with this post is to discuss a very specific problem related to how people integrate third-party code — that is, the stuff you import or link in but didn't write yourself. As I mentioned previously, developers have a tendency to "bolt on" third-party components to applications without understanding the security implications. Often, these components are glossed over or ignored completely during threat modeling discussions. I attempted to illustrate this with my fictitious WhizBang library example in Part 1. When integrating a third-party component, developers familiarize themselves with the API but generally don't care how it's implemented. Granted, that's how an API is supposed to work; you don't have to futz around with code beyond the API boundary, and you can blissfully ignore parts of the library that you don't need. In past consulting gigs, I've sat in threat modeling discussions where nobody knew whether a particular library generated network traffic. "We just use the API," they say. The fact that it works is good enough; nobody seems to care how it works. That mindset is ideal for rapid development but problematic for security. Failing to understand the complete application, as opposed to just the part you wrote, prevents you from accurately assessing its security posture.
It's also no coincidence that web app pen testers love third-party components — we get excited when we see "bolted on" interfaces, because we know that developers tend to leave extraneous functionality exposed. The resulting findings usually generate reactions such as "I didn't even know that servlet had an upload function." An Example: Here's a close-to-home example related to my post about DWR 2.0.5 from the other day. DWR is an Ajax framework that has a variety of operating modes. In-house, we use a subset of DWR's full functionality — specifically, we interact with it using the "plaincall" method only, so we made sure that the features we didn't need were disabled via the configuration file. As it turned out, there were vulnerable code paths prior to the "do you have this thing disabled" check. In hindsight, if we had taken more time to understand the exposed interfaces, we could have reduced the attack surface by filtering out unneeded request patterns before they even touched the third-party code. But wait, you say. What about maintainability? If I whitelist using a point-in-time application profile, doesn't this create the same maintenance headache as the reviled WAF? It doesn't have to. Certainly, one option would be to whitelist each and every unique URL that references the DWR framework, e.g.

/dwr/call/plaincall/myMethod1
/dwr/call/plaincall/myMethod2
/dwr/call/plaincall/myMethod3

But then you'd have to update the whitelist every time you added or removed functionality from your application. Also, don't lose sight of the security goal, which is to minimize the amount of exposed third-party code. If I add or remove URLs from that list, provided they are still using the "plaincall" method, I'm hitting the same DWR dispatcher every time. So I've increased maintenance cost without any security benefit. A better option is to simply tighten the URL pattern a bit in the J2EE container.
Here's the default configuration:

<servlet-mapping>
  <servlet-name>dwr-invoker</servlet-name>
  <url-pattern>/dwr/*</url-pattern>
</servlet-mapping>

Now, instead of allowing every URL starting with /dwr/, we tighten the pattern so that only the "plaincall" dispatcher is reachable:

<servlet-mapping>
  <servlet-name>dwr-invoker</servlet-name>
  <url-pattern>/dwr/call/plaincall/*</url-pattern>
</servlet-mapping>

In this configuration, you don't have to worry about requests ever reaching the other DWR entry points. A Logical Extension: Even if you're not a developer, you should still be thinking about attack surfaces. People download and install blogging platforms such as WordPress, Movable Type, etc. all the time, but how many take additional steps to harden their installations? The concept is the same as the OS hardening analogy I brought up at the very beginning of this discussion. Similarly, people install third-party WordPress plugins or Joomla components without considering that most of them are written by some random programmer who is a whiz with the plugin API but knows nothing about security. At the risk of sounding trite, always remember that security is only as strong as the weakest link.
Ignorance, Uncertainty and Doubt [Jon's Network] Posted: 07 Jul 2008 03:18 PM CDT Richard Feynman gave this talk on the value of science over 50 years ago. It's full of wisdom from a brilliant man.
All scientific progress came as a result of doubting existing "knowledge". To make progress, we have to "recognize our ignorance and leave room for doubt". via Big Contrarian
Don't use Clickcaster for podcast hosting [StillSecure, After All These Years] Posted: 07 Jul 2008 01:41 PM CDT When I find a new product or service that I think is good, I am only too happy to let the world know it on my blog. For the past almost 2 years, in the notes of every episode of our podcast, I have mentioned and thanked ClickCaster for hosting our podcast. I originally was turned on to ClickCaster by Scott Converse out in Boulder, CO, who was the founder of ClickCaster. When Scott realized that a free model was not going to pay the bills, he instituted a pay model for podcast hosting. I was only too happy to pay for the great service and stats I was receiving. Well, a few months ago, Scott and team sold ClickCaster to focus on their new project, Medioh!. The new owners, nexplore, promised no changes and the same great service. Since then the stats stopped working, it became harder and harder to post new content, and the site was down more than it was up. Finally, after getting no satisfaction from ClickCaster, I had no choice but to look for another host. Mitchell and I have chosen Pod-o-matic to host the podcast going forward. Of course we don't have all of the episodes moved over yet, because ClickCaster isn't even up enough for us to grab all the episodes. But most of them are up at pod-o-matic and we have already repointed the FeedBurner/iTunes feed. So from here on you can hear us at pod-o-matic. If you are looking to host your podcast, you don't have to use pod-o-matic, but don't use ClickCaster!
Chicago Security Community [Infosec Events] Posted: 07 Jul 2008 12:29 PM CDT This post is part of the information security communities project. Hey everyone! My name is Steven McGrath, and as a security professional local to the Chicago area, I thought it would be best to share a list of events that I am familiar with in the area:
These are just a few of the security-related events in the area. With many of them recurring on a monthly basis, there are plenty of opportunities to occupy your free time socializing with a large number of security professionals. Every group mentioned also has an open, welcoming atmosphere and a diverse range of security professionals, from government to private sector to nonprofit.
New Meme: "Security Idiot" [Anton Chuvakin Blog - "Security Warrior"] Posted: 07 Jul 2008 11:59 AM CDT
Freakonomics and Data [Emergent Chaos] Posted: 07 Jul 2008 10:57 AM CDT There's a really interesting article in the New Republic, "Freaks and Geeks": In 2000, a Harvard professor named Caroline Hoxby discovered that streams had often formed boundaries to nineteenth-century school districts, so that cities with more streams historically had more school districts, even if some districts had later merged. The discovery allowed Hoxby to show that competition between districts improved schools. It also prompted the Harvard students to wrack their brains for more ways in which arbitrary boundaries had placed similar people in different circumstances. ...In retrospect, I have come to see this as the moment I realized economics had a cleverness problem. How was it that these students, who had arrived at the country's premier economics department intending to solve the world's most intractable problems--poverty, inequality, unemployment--had ended up facing off in what sometimes felt like an academic parlor game? It's a very interesting article on the economics of academic economics, and some of the perverse incentives which exist in the field. Me, I look forward to the day when we have so much data that we can start looking for arbitrary differences and boundaries. I look forward to the day when security has a cleverness problem. No doubt we'll end up calling it database pharming.
Incite Redux: Day 2 - It's time for an Audit Revolution [Security Incite Rants] Posted: 07 Jul 2008 10:51 AM CDT Good Morning: Until then, I think I'll just enjoy the fact that things could be a lot worse. Incite #2: It's time for an audit revolution Contrary to popular belief (and desire), compliance is far from dead and remains a major buying catalyst (and funding source) for all sorts of information security tools, services and the like. Yet, the acrimonious relationship between the auditor and the audited continues to create problems and needlessly burn resources. Forward-thinking security professionals jump on the bleeding edge of innovation, treating the auditor as a peer and viewing the audit as a learning opportunity. So using the compliance card is not a bad thing at all. But do you buy something just because it is purported to help with compliance? Of course not. After all, a smart guy figures that GRC is dead. Buy what you need to protect your stuff. That hasn't changed at all. You still need to focus on Security FIRST! If you do that well, you'll be in decent shape for your audits and assessments. In terms of a grade, the long term trend is intact and the approach is solid. But it'll happen more slowly than I anticipated - so I get a B-. Or go hug your auditor and prove me wrong.
Incite Redux: Day 1 - Express Your Inner Bean Counter [Security Incite Rants] Posted: 07 Jul 2008 10:17 AM CDT Good Morning: Incite #1: Express Your Inner Bean Counter
Urgent - Action Needed: Call Your Senator Today on FISA! [The Falcon's View] Posted: 07 Jul 2008 08:29 AM CDT
Posted: 07 Jul 2008 08:15 AM CDT The last article in this series explored the top three reasons why groups have a tendency to reinvent the wheel (read it here, or the entire series started here). And now, some solutions: Beyond the frustration caused by an approach that simply recreates the wheel, the result is often a solution that is not trusted and therefore readily cast aside in favor of the next offering. To put a stop to this cycle requires taking a different approach. Success has to be based on fundamentals and sound principles.
How to do it? A key part of the solution is to enter into deliberate discourse (note: this is a central theme of Into The Breach and a topic I am passionate about). More voices with an opportunity to review, consider and contribute have the potential to lead to a better product. For this to happen, though, it requires a strong leadership team with enough expertise to guide the effort and the skills to facilitate and negotiate the final result. Instead of starting with a blank slate, it is a good practice to build on the success of others. When it comes to strategies that protect information, we have plenty of choices – frameworks like ISO 2700x, PCI, FISMA, etc. However, limiting the solution to a narrow set of industry standards may not yield the best results. Sometimes, real progress comes at the intersection of industries (to gain more insight on this approach, consider reading: The Medici Effect) – leveraging how the medical, engineering or other industries have dealt with and handled challenges may bring valuable insight to the effort at hand. The advantage of building on the validated and transparent work of others is the ability to avoid conjecture and "gut feeling." This is the challenge: there are few shortcuts to spending the time to outline, think, plan, distill, check, cross-reference. This is an area where transparency really provides a benefit. When the group of professionals is assembled, here are three steps to harnessing the collective power, building on the wheel (instead of building a new wheel) and reaching a point of success:
1. Capture and distill frameworks (or solutions) Start by presenting a model to work from, based on an existing solution. In general, individuals and groups struggle to create but excel at editing and revising. With this in mind, selecting an initial framework or set of solutions to present to the group acts as a strawman [http://en.wikipedia.org/wiki/Strawman]. This has the added benefit of allowing people to beat on the framework(s) instead of each other. The frameworks or solutions can either be selected in advance or decided by the team. Allowing the team to decide may provide for more diverse results but requires more time and a stronger facilitator (who possesses deep subject matter expertise). Stronger frameworks and solutions are those that have already been publicly validated and are more transparent. This suggests the "heavy lifting" has already been done and the team can focus on refining and tailoring what already exists from multiple sources into the solution required. More important than just compiling a list of viable frameworks and solutions is how they are captured and processed. As the elements are suggested, reviewed and documented, look not only for the similarities, but also the distinctions between them. Working to understand why specific elements were either included or excluded may also reveal key insights that aid the development of a stronger solution. Note the intended audience and users of the solution and how it is received. It may be useful to note the level of maturity, too (since that provides some insights). This process generates a lot of discussion – this is good, and leads to the second point.
2. Capture and distill the running dialogue More important, perhaps, than the solutions selected in the last step is the running dialogue that occurs as part of the process. Yet few organizations take the time or make the effort to capture that solid gold value. Ultimately, the discussion – the true process of negotiation and coming to a common understanding – is precisely what allows a group to build the final product. While the discussion is natural, here are three important questions to ask, answer and record during this process: a. What works — and why? b. What does not work — and why? c. How is this applied — and why? Look for specifics. This is an area where people tend to rely on “truthiness” – which, to a certain extent, may be okay. In the overall discussion, however, guide people back to more concrete grounding by asking more questions to ensure everyone shares a common understanding (which is not necessarily the same as a common opinion!). The next segment will explore the benefit of capturing this conversation and making it available in the future. As the conversation continues, there is one more step to increase the overall value. 3. Capture and distill references The value of having experts together in a room is their collective knowledge – informed by experience, training and a vast array of resources. Therefore, it is incredibly valuable to regularly ask this group to cite the references they find of value. As the discussion rages on (if you have been part of a working group, rage is definitely the right word), asking people to take the time to cite the references that support their assertions returns focus to the fundamentals. Not only does this improve the overall framework, but this also improves how it is applied and verified (as we will explore in the next sections).
Bottom Line Bring together a small, tight team that works well together. Welcome as many voices into the process as reasonable. Take the time to distill and overlay what already works.
How this Applies to Trustmark When Trustmark gets this right, it will essentially be an overlay on the entire industry – explaining where, how and why the different control families and control objectives can be met. This is important, since it allows additional regulations or efforts to be accommodated without prescribing a set way of working. But whether working on Trustmark or a new process to protect information, following these steps leads to a stronger - and more trustworthy - result.
Up Next: the second challenge facing Trustmark and similar efforts is in how the solution is applied. We examine this challenge with potential solutions before moving on to the final challenge of how the solution is measured and verified.
If you enjoyed reading this article, please take a moment to either subscribe to the RSS feed (www.securitycatalyst.com/feed/) or sign up for free updates by email.
VoIP Exploit Research toolkit [varie // eventuali // sicurezza informatica] Posted: 07 Jul 2008 07:50 AM CDT http://sourceforge.net/projects/voiper/ VoIPER is a VoIP security testing toolkit incorporating several VoIP fuzzers and auxiliary tools to assist the auditor. It can currently generate over 200,000 SIP tests, and H.323/IAX modules are in development.
@twitterspammers. [NP-Incomplete] Posted: 07 Jul 2008 12:17 AM CDT Spammers went after Twitter pretty hard this holiday weekend using the "friend invite" model that was first developed against other social networking services. Briefly, the attack involves creating a large number of spammy profiles and then inviting people to view the spam by performing a friend request, or in Twitter's case, "following" the spam target. I have included screenshots of a few of these attacks. An individual can remediate this attack in the short term by disabling e-mail notifications of people following you. This is by no means an optimal solution. The only party that can really address the situation is Twitter, through a combination of blacklisting, throttling, CAPTCHAs, and content analysis.
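Two of the countermeasures mentioned (throttling and content analysis) can be sketched in a few lines. The function and names below are hypothetical illustrations, not Twitter's actual logic: a toy keyword blacklist drops obviously spammy profiles, and a per-account cap limits how many follow notifications any one source can trigger per hour.

```python
import time
from collections import defaultdict

FOLLOW_LIMIT = 5            # max notifications per source account per window
WINDOW_SECONDS = 3600       # one-hour throttling window
SPAMMY_WORDS = {"viagra", "casino", "free money"}  # toy blacklist

# source account -> timestamps of recent follow events
_follow_log = defaultdict(list)

def allow_follow_notification(source_account: str, profile_bio: str,
                              now=None) -> bool:
    """Return True if a 'new follower' e-mail should be sent."""
    now = time.time() if now is None else now
    # Content analysis: drop profiles matching the blacklist outright.
    if any(word in profile_bio.lower() for word in SPAMMY_WORDS):
        return False
    # Throttling: keep only events inside the window, then enforce the cap.
    recent = [t for t in _follow_log[source_account]
              if now - t < WINDOW_SECONDS]
    if len(recent) >= FOLLOW_LIMIT:
        _follow_log[source_account] = recent
        return False
    recent.append(now)
    _follow_log[source_account] = recent
    return True
```

A real deployment would of course pair heuristics like these with CAPTCHAs at signup and blacklisting of known-bad accounts, as the post suggests.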
Joint Commission Updating Security Standards and Elements of Performance [Compliance Focus - Blogs] Posted: 06 Jul 2008 11:00 PM CDT The Joint Commission, a non-profit organization that publishes standards for healthcare organizations and runs an accreditation program, is updating some of its standards for 2009, including some which impact information security. The Joint Commission, previously known as JCAHO, is updating some of the standards and elements of performance relating to information management, privacy, and security. Many healthcare organizations seem to pay more attention to the Joint Commission standards than they do to HIPAA, because having JCAHO accreditation is very important to the organization's business performance. JCAHO accreditation is an independent measure of healthcare quality of performance across many areas of their business (information management being one). Liability insurers look to JCAHO accreditation as a measure of quality and risk, so this tends to be a big deal. Joint Commission information management standards which are changing include: IM 02.01.03, EP 5, which now reads "The hospital protects against unauthorized access, use, and disclosure of health information". The previous language just said "The organization implements the policy". IM 02.01.03, EP 8, which now reads "The hospital monitors compliance with its policies on the security and integrity of health information". The language is obviously not overly prescriptive in terms of how healthcare organizations are supposed to achieve these standards. One assumption is that the organizations will turn first to the HIPAA Security and Privacy Rules for guidance. Maybe they will also look at ISO27002 for more specific controls relating to information security.
These and the other changes to the JCAHO information management (security and privacy) standards are important because healthcare organizations now have the Joint Commission accreditation process at risk if they fail to adequately implement their information security program. Jim
Rafa Wins!!! [The Falcon's View] Posted: 06 Jul 2008 04:49 PM CDT
DeepSec 2007 Videos Now Online [Infosec Events] Posted: 06 Jul 2008 03:06 PM CDT DeepSec is an in-depth security conference in Vienna, Austria. Last year it was held from November 20th through the 23rd, and from the speaker lineup, it looked like a very good conference. Everyone can now enjoy the presentations, as the DeepSec 2007 videos are online at Google Video. Here are some of the DeepSec 2007 presentations that sound interesting to me:
The two DeepSec 2007 keynotes are online as well: And here are the rest of the DeepSec 2007 presentations:
I also noticed a few DeepSec 2007 presentations that are not online:
Forget the python vs ruby discussions.. [extern blog SensePost;] Posted: 05 Jul 2008 04:48 PM CDT Cause this puts Perl right back in the game!

-snip-
> sudo perl -MCPAN -e shell
cpan> install Acme::LOLCAT
install -- OK

> cat demo.pl
#!/usr/bin/perl
use Acme::LOLCAT;
print translate($ARGV[0]);

> ./demo.pl "Im going to run all emails through this before sending"
IM GOINS 2 RUN ALL EMAILZ THROUGH THIZ BEFORE SENDIN
-snip-

ahhh.. MUH WORK AR DONE HERE
ARDAgent.app Vulnerability Analysis [...And you will know me by the trail of bits] Posted: 05 Jul 2008 03:37 PM CDT Apple recently released Mac OS X 10.5.4 with accompanying security updates for 25 vulnerabilities. Notably absent, however, is a fix for the recently brouhaha'd ARDAgent.app local privilege escalation vulnerability. The exploit is extremely simple and unfortunately, it seems that the fix is not; otherwise Apple would have fixed it in this batch. For more information on the exploit, including temporary fixes and workarounds to protect yourself until Apple fixes this vulnerability, see the full write-up at MacShadows.com. In the interest of fully understanding Mac OS X security issues, let's dive in and see how this vulnerability works. As a reminder, the vulnerability is that ARDAgent.app, a set-user-id root executable, responds to the "do shell script" Apple Event, effectively running arbitrary commands as root. Applications must announce that they can receive Apple Events before they may be scripted. In Cocoa applications, this is done by setting the NSAppleScriptEnabled property to "YES" in the application bundle's Info.plist file. In Carbon applications, an application is made scriptable by simply calling the AEInstallEventHandler() function. AEInstallEventHandler() lets the application define which Apple Events it can handle and supply the handler functions for them. ARDAgent.app did not do anything special in order to respond to the "do shell script" Apple Event; this event is defined in the StandardAdditions Scripting Addition in /System/Library/ScriptingAdditions. Scripting Additions are dynamic libraries (dylibs) that will be loaded automatically by the Apple Event handler if the application receives an Apple Event that is defined in them. There are several Scripting Additions in /System/Library/ScriptingAdditions, but they may also potentially be found in /Library/ScriptingAdditions or ~/Library/ScriptingAdditions.
Interestingly, ARDAgent.app calls AESetInteractionAllowed() with kAEInteractWithSelf after installing its own Apple Event handlers, which is supposed to restrict the processing of Apple Events to only those sent by the process itself. It obviously does not have its intended effect in this case. This is a pretty isolated vulnerability, not a massive security hole in AppleScript. Set-user-id executables should not be scriptable, and ARDAgent.app appears to be the only application that violates this. UPDATE @ 20080704: As mentioned in the MacShadows security forums from the link in the comments below, SecurityAgent is also susceptible to this, but only when SecurityAgent is running with increased privileges (after a sudo, unlock of the System Keychain, etc). But this is not because SecurityAgent is setuid; it is because it still receives Apple Events when it runs with increased privileges. It doesn’t, however, appear to call any Apple Events functions. So I am not sure why it is processing Apple Events (handled in a base framework?). If anyone knows why, let me know. UPDATE @ 20080705: I have poked around at this a bit more this weekend, and it turns out that an application does not need to call any Apple Events APIs in order to receive and process Apple Events. While Cocoa applications must set the NSAppleScriptEnabled property, any Carbon application automatically handles Apple Events. At the lowest level, Apple Events are sent over Mach ports looked up from the bootstrap server, so you need send rights to an application’s port in order to send it Apple Events. This means that a client will not be able to send events to an application running as a different user unless it is setuid/setgid (ARDAgent) or is run with increased privileges but still checks its ports in with the bootstrap server (SecurityAgent, which is launched by securityd with gid=0 on Tiger in certain situations). |
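Until Apple ships a fix, the temporary workaround circulating (per the MacShadows write-up referenced above) is to strip the setuid bit from the agent binary. A minimal sketch, with the path again assumed to be the standard 10.5 location:

```shell
# Assumed path of the setuid agent binary on Mac OS X 10.5:
ARD=/System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/MacOS/ARDAgent

# Stripping the set-user-id bit (u-s) closes the escalation path;
# run as an admin user, and restore later with "chmod u+s" if needed:
# sudo chmod u-s "$ARD"
```

Note that this may interfere with Apple Remote Desktop functionality until the bit is restored.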
| Posted: 05 Jul 2008 11:37 AM CDT
One of the things that I've always disliked about politics is its polarizing nature: each side takes sides, making arguments win-lose when a combined solution is really what's needed. Americans are getting hit below the belt right now with the one-two punch of high gas prices (along with the associated rise in food and other prices) and a struggling economy. Rather than take a sensible approach, Obama and McCain are framing the debate as energy alternatives vs. more drilling, turning the argument into yet another polarizing one.
I also believe we could use our oil reserves to help fund the creation of our energy independence. I flippantly said one day, "Let's drill offshore, sell the oil to China, and use the proceeds to fund the creation of hydrogen cars." Not such a crazy idea after all, eh? |
| You are subscribed to email updates from Black Hat Security Bloggers Network. To stop receiving these emails, you may unsubscribe now. | Email Delivery powered by FeedBurner |
| If you prefer to unsubscribe via postal mail, write to: Black Hat Security Bloggers Network, c/o FeedBurner, 20 W Kinzie, 9th Floor, Chicago IL USA 60610 | |



But then again, what's realistic tends to be constrained by people, and people don't really change readily - if ever. It reminds me of one of the great lines in
I've fielded a lot of inquiries and questions regarding ISO 27001/2 and also COBIT. Throw in a little of NIST's 800-100 and 800-53, and it remains clear that most practitioners still have no idea where to start when embarking on a security program. I expect adoption of these frameworks will accelerate over the short term. I also think that a lot of fairly pragmatic IT professionals will default to following the 12 requirements of the PCI DSS. Notice that I said pragmatic IT professionals, not Pragmatic CSOs.




I was just getting ready to close down the laptop for the evening when I began thinking about how much my views have changed on our nation's energy policies. It's the 4th of July and I enjoyed a banana split to celebrate. (It's been a long time since I've had one of those.) I was in high school during the '70s oil crisis and enjoyed those many years of driving 55 mph on the interstate (I'm being very facetious here). I heard on Sirius radio that one of our congressmen proposed bringing back the 55 mph limit. While conservation is a good thing, so is our nation's (and my personal) sanity, and bringing back the 55 mph speed limit is one of those ideas I hope we shoot down with a vengeance. I'm one of the biggest conservation offenders when it comes to my Suburban, but I love to drive and I've enjoyed having a big vehicle. I hope to change that soon and move to a much more efficient vehicle once I decide what to buy.
I'm glad Obama is strongly for creating energy alternatives. I would love to drive a hydrogen vehicle if one were available at a reasonable price with sufficient fueling stations. I believe our nation's resources should be dedicated to becoming a new economy of alternative energy and green technologies. Just as John Kennedy ignited the American engineering spirit of the space program with his challenge to put a man on the moon before the end of the decade, we should issue a present-day challenge: hydrogen cars and fueling stations across the country in less than ten years. Where's our government when we need it?
If our government made the same kind of investment in becoming energy independent that we made to get to the moon, we'd be fueling a whole new economy of alternative energy businesses that could solve our energy problems and serve the rest of the world. I believe in our continued investment in NASA, but I'd delay everything we have on the table for the next 10 years to redirect that money toward celebrating an Energy Independence Day in ten years or less. How about it, Obama -- make the challenge: Energy Independence Day in less than 10 years. We do it not because it is easy, but because it is hard... remember that kind of inspiration? Let's get moving, Washington.
It would be like selling China the oil equivalent of crack. Let them build up their dependence on oil to an even greater extent, and then sell them our green energy technology and products as even higher oil prices squeeze their economy and slow growth down the road. I do believe we have to drill for more oil using US resources to lessen the impact OPEC has on us. That doesn't mean we have to drill in ANWR, but parts of Colorado, Wyoming, North and South Dakota, and Montana are sitting on sizable oil reserves. Those, along with the oil sitting offshore, could create at least a balancing factor against the current out-of-control oil prices. Let others buy our expensive oil for a change, or they can buy our alternative energy technology instead. With the alternative energy and hydrogen cars created, the USA would be the next-generation OPEC 2.0 of alternative energy and oil. In ten years our problem could do a 180 and become our biggest strength.