Posted: 09 May 2008 07:37 AM CDT
. . . and unto security vendors the things that deal with security. That, at least, seems to be what Citrix CTO Simon Crosby is saying in this audio interview on SearchSecurity with Rob Westervelt. I was all set to write an article on the operationalization of security when I noticed that that virtuoso of virtual security, Hoff, beat me to the punch with his call of BS on Simon.
Hoff is right on. We can't afford the same old, same old of letting the OS vendor, the network vendor, or in this case the virtual machine vendor build the product while a separate security industry gets bolted on to clean up the mess. People want secure virtualization. They don't want to think about what they have to buy and install to make their virtual machines secure; they want security designed in from the beginning. I am surprised that Simon Crosby would even suggest this; it is frankly so 2001. Let's hope someone over at Citrix takes a cue from the VMsafe program and does a little more thinking about security beforehand. We can't afford any other option.
Posted: 09 May 2008 03:49 AM CDT
This month there are four patches scheduled for release, three Critical patches, and one Moderate. The three Critical patches address remote code execution risks in Office (2) and Windows (1), with the Moderate patch addressing a Denial of Service vulnerability affecting Windows Live OneCare, Microsoft Antigen, Microsoft Windows Defender, and Microsoft Forefront Security. It is important to note for OS X users that Microsoft will be issuing Critical updates for Office 2004 and 2008.
What is probably most surprising is the patch to be released for the Microsoft Jet Database Engine, a technology that was widely reported to be receiving no further updates from Microsoft.
Posted: 09 May 2008 12:00 AM CDT
It’s serious egg on my face time. Let me explain. To track our interaction with partners and potential partners, we use the well-known CRM system, Salesforce.com. As I have mentioned in a previous post, we try to be very careful only to email people to who have requested to receive our mail. This is [...]
Posted: 08 May 2008 11:57 PM CDT
Eric Bidstrup of Microsoft has a blog entry up titled "How Secure is Secure?" In it he makes a number of points related, essentially, to measuring the security of software and what the appropriate metrics might be.
I'd been asking the Microsoft guys for a while whether they had any decent metrics to break down the difference between:
Microsoft has been releasing security bulletins since 1999. Based on some informal analysis that members of our organization have done, we believe well over 50% of *all* security bulletins have resulted from implementation vulnerabilities, and by some estimates as high as 70-80%. (Some cases are questionable and we debate whether they are truly "implementation issues" vs. "design issues" – hence this metric isn't precise, but still useful). I have also heard similar ratios described in casual discussions with other software developers.

In general I think you're likely to find this trend across the board. Part of the reason, though, is that implementation defects are generally easier to find and exploit. Exploiting input validation failures that result in buffer overflows is a lot easier than complicated business logic attacks, multi-step attacks against distributed systems, etc.
We haven't answered whether there are more architectural/design defects or implementation defects, but from an exploitability standpoint, it's fairly clear that implementation defects are the first issues we want to fix.
At the same time, we do need to balance that against the damage that can be done by an architectural flaw, and just how difficult they can be to fix, especially in deployed software. Take as an example Lanman authentication. Even if implemented without defects, the security design isn't nearly good enough to resist exploit. Completely removing Lanman authentication from Windows and getting everyone switched over to it has taken an extremely long time in most businesses because of legacy deployment, etc. So, as much as implementation defects are the ones generally exploited and that need patching, architectural defects can in some cases cause a lot more damage and be harder to address/remediate once discovered/exploited.
Another defect to throw into this category would be something like WEP. Standard WEP implementations aren't defect-ridden. They don't suffer from buffer overflows, race conditions, etc. They suffer from fundamental design defects that can't be corrected without a fundamental rewrite. The number of attacks resulting from WEP probably isn't known, but even throwing out high-profile cases such as TJ Maxx and Home Depot, I'm guessing the damage done is substantial.
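The WEP failure is easy to demonstrate concretely. The toy sketch below (plain Python; the key and messages are my own invention, and this is the design flaw in miniature, not a full WEP implementation) shows the core problem: WEP builds its RC4 key by prepending a short 24-bit IV to the shared secret, so once an IV repeats, the keystream repeats, and XORing two ciphertexts cancels the keystream entirely:

```python
def rc4(key: bytes, n: int) -> bytes:
    """Generate n bytes of RC4 keystream for the given key (KSA + PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

iv = b"\x01\x02\x03"      # WEP's 24-bit IV, prepended to the secret key
secret = b"root-key"      # hypothetical shared secret
ks = rc4(iv + secret, 16)

p1 = b"attack at dawn!!"
p2 = b"retreat at noon!"
c1 = bytes(a ^ b for a, b in zip(p1, ks))
c2 = bytes(a ^ b for a, b in zip(p2, ks))

# With a repeated IV the keystreams cancel: c1 XOR c2 == p1 XOR p2,
# so known plaintext in one frame reveals the other -- no key required.
leaked = bytes(a ^ b for a, b in zip(c1, c2))
assert leaked == bytes(a ^ b for a, b in zip(p1, p2))
```

Note that there is no buffer overflow or coding slip anywhere in that flow; every line can be implemented perfectly and the protocol still leaks.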
So far then things aren't looking good for using implementation defects as a measuring stick of how secure a piece of software is. Especially for widely deployed products that have a long lifetime and complicated architecture.
Though I suppose I can come up with counter-examples as well. SQL Slammer, after all, was a worm that exploited a buffer overflow in MS SQL Server via a function that was open by default to the world. It was one of the biggest worms ever (if not the biggest; I stopped paying attention years ago), and it exploited an implementation defect, though one that was exploitable because it was part of the unauthenticated attack surface of the application - a design defect.
All this really proves is that determining which of these types of defects to measure, prioritize, and fix is a tricky business and, as always, your mileage may vary.
As Eric clearly points out, the threat landscape isn't static either, so what you think is a priority today might change tomorrow. And it's different for different types of software: the appropriate methodology for assessing and prioritizing defects for a desktop application is substantially different from that for a centrally hosted web application, with differences related to exploitability, time-to-fix, etc.
More on that in a post to follow.
Posted: 08 May 2008 09:57 PM CDT
Anton Chuvakin of LogLogic posted today on some of the intricacies of Windows native file system audit. If you have a need for monitoring access or changes to files, beware of the do-it-yourself method. Chuvakin provides insight on some of the challenges.
One of the things that NetVision engineers brought to market long before I joined is a very slick file system monitoring solution. Slick mostly because you have extreme control over which events you want to capture. You can filter on server, folder, file, person acting, event type (read, create, modify, delete, ACL or attribute changes) – you can even specify times of day to activate a particular policy. And you can have different policies for different files or folders. You can also choose what to do when an event occurs. For some events, write it to a database or file. For others, send an email too or kick off another process. None of it relies on system logs and the reports are delivered in a nice web UI running on Crystal Reports. So the business people get relevant results without having to understand the tech stuff.
Some of our customers even use our filtering to narrow down the events that are then fed into an enterprise security event management or log management system (like LogLogic). It's File System audit made easy.
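NetVision's engine is proprietary, but the filtering idea described above is simple to sketch. The toy Python below (the policy paths, event names, and actions are invented for illustration, not NetVision's actual model) shows the shape of a policy-driven event filter: each policy names a path pattern, the event types it cares about, and an action to fire when both match:

```python
import fnmatch
from dataclasses import dataclass


@dataclass
class Policy:
    path_glob: str        # which files/folders this policy watches
    event_types: set      # e.g. {"read", "create", "modify", "delete"}
    action: callable      # what to do when an event matches


def dispatch(event_path, event_type, policies):
    """Route a raw file-system event to every policy whose filter matches."""
    for p in policies:
        if event_type in p.event_types and fnmatch.fnmatch(event_path, p.path_glob):
            p.action(event_path, event_type)


alerts = []
policies = [
    # Alert on destructive events against finance spreadsheets.
    Policy("/finance/*.xls", {"modify", "delete"}, lambda f, e: alerts.append((f, e))),
    # Capture temp-file creation but do nothing with it.
    Policy("/tmp/*", {"create"}, lambda f, e: None),
]

dispatch("/finance/q2.xls", "delete", policies)   # matches the first policy
dispatch("/finance/q2.xls", "read", policies)     # filtered out: wrong event type
assert alerts == [("/finance/q2.xls", "delete")]
```

The filtering-before-forwarding step is exactly what makes the downstream SIEM or log management feed manageable: only events a policy explicitly cares about ever leave the box.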
Posted: 08 May 2008 09:17 PM CDT
Matt Asay has a blog post up on "OLPC's capitulation to Windows...". In it Matt waxes poetic about what a mistake Nicholas Negroponte is making by embracing Windows for the OLPC laptop project. Matt points to Groklaw, Richard Stallman and the rest of the anti-Redmond revolutionaries who want to see Negroponte tarred and feathered and question his vision. Hey, let's face it, the "M" word is toxic to that crowd. But I really think Matt is just plain twisted about this and about what OLPC is really about. Here is what Matt has to say: "OLPC is rather about liberating developing nations from their vassal status that continually keeps them at the mercy of the pricing and licensing of Microsoft and other proprietary vendors." No, Matt, that is not what OLPC is all about, and that is the problem! OLPC is about getting a laptop into the hands of every kid in the world. It is about giving these kids a chance to learn and grow up to compete in the global economy with the same tools that kids in this country have. It has nothing to do with your views of Microsoft being a 21st-century imperialistic empire.
Posted: 08 May 2008 08:42 PM CDT
In July of last year, I prognosticated that Google, with its various acquisitions, was entering the security space with the intent not just to include security as a browser feature for search and the odd GoogleApp, but as a revenue-generating service delivery differentiator: SaaS via applications and clean-pipes delivery transit in the cloud for enterprises.
My position even got picked up by thestreet.com. By now it probably sounds like old news, but...
Specifically, in my post titled "Tell Me Again How Google Isn't Entering the Security Market? GooglePOPs will Bring Clean Pipes..." I argued (and was ultimately argued with) that Google's $625M purchase of Postini was just the beginning:
Here's where we are almost a year later. From the Ars Technica post titled "Google turns Postini into Google Web Security for Enterprise:"
The race for cloud and secure utility computing continues with a focus on encapsulated browsing and application delivery environments, regardless of transport/ISP, starting to take shape.
Just think about the traditional model of our enterprise and how we access our resources today turned inside out as a natural progression of re-perimeterization. It starts to play out on the other end of the information centricity spectrum.
What with the many new companies entering this space and the likes of Google, Microsoft and IBM banging the drum, it's going to be one interesting ride.
Posted: 08 May 2008 06:51 PM CDT
In an article over at SearchSecurity.com, Simon Crosby, the CTO of Citrix, suggests that "Virtualization vendors [are] not in the security business."
Besides summarizing what is plainly an obvious statement of fact regarding the general omission of integrated security (outside of securing the hypervisor) from most virtualization platforms, Crosby's statement simply underscores the woeful state we're in:
I'm sure it's reasonable to suggest that nobody expects virtualization platform providers to "...catch bad guys," but I do expect that they employ a significant amount of resources and follow an SDLC to discover vulnerabilities -- at least in their software.
Further, I don't expect that the hypervisor should be the place in which all security functionality is delivered, but simply transferring the lack of design and architecture forethought from the hypervisor provider to the consumer by expecting someone else to clean up the mess is just, well, typical.
I love the last line. What a crock of shit. We've seen how well this approach has worked with operating system vendors in the past, so why shouldn't the "next generation" of OS vendors -- virtualization platform providers -- follow suit and not provide for a secure operating environment?
Let's see: Microsoft is investing hugely in security. Cisco is too. Why would the other tip of the trident want to? VMware is at least taking steps to deliver a secure hypervisor as well as APIs to help secure the VMs that run atop it. Where's Citrix in this...I mean, besides late and complaining they weren't first?
So, in trade for the "open framework for security ecosystem partnership" cop-out, we get to wait for the self-perpetuating security industry hamster wheel of pain to come back full circle.
The fact that the "industry" has "decided" that "third party vendors are required to secure any platform" simply points to the ignorance, arrogance and manifest destiny we endure at the hands of those who are responsible for the computing infrastructure we're all held hostage with.
Just so I understand the premise: the security industry (or is it the virtualization industry?) has decided that the security industry, instead of the OS/infrastructure (virtualization) vendors, is the one responsible for securing the infrastructure -- and thus our businesses!? What a shocker. Way to push for change, Simon.
I can't even describe how utterly pissed off these statements make me.
Posted: 08 May 2008 05:47 PM CDT
We all know that corporate marketing tends to suck, but this nonsense from Vontu is bordering on ridiculous. From the Symantec/Vontu website (here):
90% reduction in data loss incidents 10 days after deploying - wtf? (a) isn’t 10% of data loss still really bad, and (b) how were they able to tell what the baseline data loss was? Total, complete and utter BS!
Over 99.7% reduction in the total number of customer records exposed over a two-year period - wtf? Seriously, wtf?
Reduced incident flow “down to a trickle” - wtf is a “trickle”?
Posted: 08 May 2008 02:57 PM CDT
I read this today on a local news site and the only thought that went through my head was "wow"... Essentially a malicious individual hacked the Epilepsy Foundation's website and posted hundreds of rapidly flashing images. While I don't condone it... I can understand why people think they should target websites for profit or pride... but this? It's just plain mean... It makes me wonder what the world is coming to.
Update: Apparently this is old news and I'm a little slow finding out about it.
Posted: 08 May 2008 02:23 PM CDT
Popular social networks are having a difficult time stopping spammers from abusing their networks. Twitter, a micro-blogging site where you can publish text updates via SMS, instant messaging, email, Twitter's website and third-party applications, is one of many.
They recently started to blacklist people who spam other members and are posting the results on the Twitter Blacklist. At this time they already have 378 blacklisted members, and the list is growing.
Posted: 08 May 2008 02:02 PM CDT
I wanted to expand on my earlier post about Extending the ROI of Provisioning. Here's a visual aid to help the discussion:
There's nothing new in this illustration. It simply shows that the provisioning engine connects to multiple identity data stores. As we know, provisioning systems have the potential to do a very good job at providing work flow and business rules around creation and management of user accounts across multiple systems. They may even have some additional capabilities around Separation of Duties enforcement, user attestation, user self-service password management, reporting on rights (based on its view), and more.
What it doesn't do, however, is protect the connected data stores against direct access. For example, the DBA still has direct access to the database and the Directory Administrator still has direct access to the directory. They can create new accounts, view information, and change permissions. The system may be able to see when new user accounts are created during its next scheduled run, but that capability isn't always enough. I'll give an example.
One of these LDAPs is not like the other
I purposely shaded the Network Directory so that it stands out from the others. That's because it is different. Since the market for the Network Directory consists almost entirely of just two vendors (Microsoft and Novell) and one has a much larger percentage of the market (Microsoft), I'll just use Microsoft's Active Directory (AD) as the example.
Now, back to the gaps:
All of these can be applied to other connected data stores as well. For example, scope is an issue for relational database tables. The provisioning system may only watch specific tables, or may completely ignore local accounts in the RDBMS itself. Likewise, if AD is not the source, the HR database likely is, which yields the same issue for the HR DBA.
My point isn't that provisioning systems are weak. They do what they do very well. But, you can improve the overall security posture of the environment by including localized protection on the connected data stores as well. Encrypt the database. Monitor DBA activity and Directory Administrator activity. Watch directories for failed attempts to create or modify accounts. Watch for failed authentication attempts. In a nutshell, ensure that accounts and permissions are being managed through the provisioning system into which you've built the business rules and work flow to ensure that rights are being managed effectively.
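To make that reconciliation idea concrete, here is a minimal Python sketch (the account names are invented for illustration) of the check implied above: diff what the directory actually contains against what the provisioning engine has on record, and anything left over was created out of band, i.e. by a DBA or directory administrator going around the workflow:

```python
def out_of_band_accounts(directory_snapshot, provisioned):
    """Return accounts present in the directory but never created through
    the provisioning system -- candidates for direct-admin creation that
    bypassed the workflow and business rules."""
    return sorted(set(directory_snapshot) - set(provisioned))


# Hypothetical data: what AD reports vs. what the provisioning engine logged.
ad_accounts = {"alice", "bob", "svc_backup", "shadow_admin"}
provisioned = {"alice", "bob", "svc_backup"}

# "shadow_admin" exists in the directory but has no provisioning record.
assert out_of_band_accounts(ad_accounts, provisioned) == ["shadow_admin"]
```

In practice the snapshot side would come from an LDAP query or a monitoring agent rather than a hard-coded set, but the audit argument is exactly this set difference: it turns "we hope accounts follow policy" into a checkable claim.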
And if you have to respond to auditors for compliance reasons, you can say you're certain that accounts are only being created according to policy, instead of merely hoping that's the case.
I've heard the argument that this might be overkill (admittedly an over-simplified characterization of the argument). OK. In some scenarios, maybe you don't need tighter security. You only care about work flow efficiency and cost cutting. Or you're OK with the level of improvement in your security posture that traditional user provisioning systems provide. I'm not saying that anyone should ignore the risk analysis process. But, if compliance is an issue and you want to prove compliance beyond reasonable doubt or just simplify the audit process, solutions that locally monitor the connected systems may provide value.
And if you can demonstrate that 100% of your user and rights management processes are funneled through the provisioning system with appropriate work flows, I think you could justify claiming a much improved ROI on the overall solution with minimal additional investment.
Disclaimer? Yes, NetVision can help with reporting and monitoring on your Network Directories (both major vendors) and related file systems. But that's no reason for me not to talk about it!
Posted: 08 May 2008 12:44 PM CDT
Microsoft has included a new feature in Windows Server 2008 that allows sharing individual applications through Terminal Services. This is not a new concept - Citrix has been offering something similar for a long time. Microsoft is also now offering a Terminal Services Gateway and TS Web Gateway for accessing Terminal Services, and RemoteApps, from the Internet. What isn't well known, but also isn't new, is the ability to 'break out' of these applications and access other applications and files on the Terminal Server. It is very easy to break out of GUI apps, even for non-technical people. Below I will highlight a few examples of running other applications from a RemoteApp, and later I will follow with a number of configuration suggestions for securing your server.
Posted: 08 May 2008 11:23 AM CDT
Remember our posting about FashionShopping.com? Well we see in the logs a change of behaviour regarding the mailings of FashionShopping.com.
Last time it was a lot of trouble getting off that list, and they sent their mailings far too often. To give you an idea: we intercepted emails from FashionShopping on a daily basis from April until yesterday, May 8th. 80% of the emails were sent to the same recipients, meaning you could have received their mailing daily for more than two weeks if MX Lab hadn't blocked them.
These guys now send from a new domain, emailing-direct.org. As always, a quick visit to the site gives us an ‘under construction’ web site.
The first paragraph under the many images seems to have an unsubscribe link: “Vous avez été invité, mais conformement à la loi sur la confiance dans l’économie numérique, si vous ne souhaitez plus recevoir des propositions par email de la part de Emailing-Direct pour le compte de FashionShopping.com veuillez cliquer sur le lien suivant : Désinscription” (roughly: “You have been invited, but in accordance with the law on confidence in the digital economy, if you no longer wish to receive email offers from Emailing-Direct on behalf of FashionShopping.com, please click the following link: Unsubscribe”).
However, this link gives us the error “Unsubscribe links do not work inside a preview message. In order to test unsubscribe links you will need to do a campaign” on the website http://www.my-login.net/z_oocode_129168_oocode_z.php. A visit to the root of this site gives us a login to a control panel for Expedite Simplicity, an email & mobile marketing software package.
The second paragraph: “We support responsible and ethical email marketing practices. Please know that we respect your right to be purged from this marketing campaign. Removal from this email distribution list is automatically enforced by our email delivery system. Please click here to start the process for email deletion”.
This leads us to http://emailing-direct.org/index.aspx?aspxerrorpath=/*************.aspx and yes, here we have an unsubscribe form. You can even contact them at “Emailing Direct - 66, Avenue Des Champs Elysées - Paris, FR 75008 FR”. So, they have moved from mailing house EmailVision to this company. Did EmailVision receive too many complaints?
A WHOIS search on Netsol for emailing-direct.org gives us some results. The domain is registered to Emailing Direct in Paris, France. The registrant contact email address is firstname.lastname@example.org. Visiting their site, we find a nice-looking web site.
A short contact with this ‘company’ tells us that you can get mailings for a minimum fee of € 500. This buys 500.000 emails at € 0,001 per message in a fully managed campaign.
It is clear that pushing your email based campaigns to the limit isn’t always a good thing. Some general tips when you are into email marketing:
Posted: 08 May 2008 09:04 AM CDT
There were a couple of random things that I wanted to comment on.
The first was a post by Dave Lewis of Liquidmatrix. The post in question discusses a Wonderware advisory released by Core Security and the level of detail it provided. Dave doesn't agree with the level of detail... as the advisory had details on how to exploit the vulnerability and even showed the assembly from the vulnerable function. He also comments that this isn't responsible disclosure. I'm <sarcasm>really glad to see this debate is coming up again</sarcasm>... but really, where's the lack of responsible disclosure? Core reported the vulnerability to the vendor (repeatedly) and went out of their way to ensure the vendor was aware - more than a lot of people and companies do. They then repeatedly pushed back their advisory release date to accommodate the company. And the details were released after the patch as well.
There's absolutely nothing wrong with this... the level of detail is really no different from what other security vendors provide when they release advisories. Once the patch is out, there isn't much to stop malicious individuals from obtaining the assembly of the vulnerable function... a copy of IDA Pro and BinDiff is really all they need. I've seen advisories include some sort of binary analysis in the past... and most of them contain a text write-up... here's an example from TippingPoint / ZDI with enough text to more than locate the vulnerability:
The specific flaw exists in the oninit.exe process that listens by default on TCP port 1526. During authentication, the process does not validate the length of the supplied user password. An attacker can provide a overly long password and overflow a stack based buffer resulting in arbitrary code execution.
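That missing length check is the whole bug. Here is a memory-safe Python sketch of the pattern the quoted advisory describes (the packet format, names, and buffer size are my own invention, not oninit.exe's actual protocol): validate the attacker-controlled length against the buffer before any copy, because in the vulnerable C code the equivalent copy was an unbounded write into a fixed-size stack buffer:

```python
import struct

BUF_SIZE = 16  # size of the fixed buffer in the vulnerable pattern


def parse_login(packet: bytes) -> bytes:
    """Toy parser for a login packet: 2-byte big-endian length + password.
    The length check below is the one the advisory says was absent; without
    it, the C equivalent copies an arbitrarily long password onto the stack.
    """
    (n,) = struct.unpack_from(">H", packet, 0)
    password = packet[2:2 + n]
    if len(password) >= BUF_SIZE:   # reject before any copy happens
        raise ValueError("oversized password rejected")
    return password


# A well-formed login is accepted...
pkt_ok = struct.pack(">H", 6) + b"s3cret"
assert parse_login(pkt_ok) == b"s3cret"

# ...while the overlong "password" that would smash the stack is refused.
pkt_evil = struct.pack(">H", 200) + b"A" * 200
rejected = False
try:
    parse_login(pkt_evil)
except ValueError:
    rejected = True
assert rejected
```

It also illustrates the design point from earlier in this digest: the check matters so much precisely because the listener sits on the unauthenticated attack surface, reachable by anyone who can hit the port.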
Part of the problem with the InfoSec battle is that the bad guys have essentially unlimited time, whereas IS employees have families and lives and work a set schedule. The Core advisory has set internal security teams on their way to developing their own exploits should they need to; without it they'd have had a lot more work to do and it would have taken them more time. Core did everything short of releasing the related Python, and you can't really blame them, since then they'd be giving away their product for free. In the end, what they did was, in my opinion, beneficial to all.
It's one thing to simply release details, but as soon as someone works with the vendor you can't really cry foul when they publish them. At least not on the 'responsible disclosure' front... because they've followed responsible disclosure, and in this case Core Security hasn't done anything different than a number of other vendors. Microsoft's Patch Tuesday is coming up; watch the mailing lists - each vendor that has reported a vuln usually sends out some sort of advisory, and these range from brief overviews to full binary analysis and specific details on exploiting the vulnerability. We've seen it before and we'll see it again... but the patch is out, so they aren't helping the malicious individuals... just the good guys who have time constraints.
Posted: 08 May 2008 07:27 AM CDT
Posted: 08 May 2008 06:45 AM CDT
The PHP Group released version 5.2.6 of the popular scripting language earlier this month. While there were more than 100 bugs fixed with this update, there were several critical security vulnerabilities patched that make updating essential for any administrators or users currently using the 5.x branch of PHP (if you're still stuck using 4.x or earlier you should really consider updating your installation).
Several memory leaks, buffer overflows, safe mode bypasses, and multi-byte character handling issues are amongst those addressed by this update, the first to be released by the PHP Group in six months. Although there are probably many more security vulnerabilities yet to be found or patched (just see Stefan Esser's work, which has been somewhat quiet since the end of last year), the significant number of bugs patched is a continuing good sign from a project that has come under fire in the past for a mixed approach to the security of its main product.
Posted: 08 May 2008 01:12 AM CDT
Joshua Corman of IBM ISS gave a great presentation at Interop, “Unsafe at Any Speed: The 7 Dirty Secrets of the Security Industry,” which has been receiving strong media coverage (here) and (here)… my favorite reference is from Alexander Wolfe at Information Week (here).
#0 Vendors do not need to be ahead of the threat, they only need to be ahead of the buyer
The goal of the security industry is not to secure, the goal of the security industry is to make money. I think we all know this conceptually, and even with the best intentions in our capitalistic society we must understand that security companies are motivated by profits. This isn’t necessarily a bad thing, but it should help to dispel the myth that security companies are smarter than hackers, they aren’t, they are just smarter than the buyers.
#1 AV certifications do not test/require trojans
AV certifications are BS, essentially the AV industry’s equivalent of duck, duck, goose as vendors move up and down the rankings from one test to another - who gives a crap if in test 1 AV vendor A detected 98.4%, AV vendor B detected 95.7% and AV vendor C detected 97.6%, and then in test 2 it all changes - especially when they are not testing their ability to detect the really nasty, stealthy, sophisticated non-replicating malcode that iz in yur bits stealin yor bytes.
BTW - Kurt Wismer is pretty passionate when it comes to anti-virus; he is like the guardian angel of the poor, defenseless AV companies, descending on anyone who would dare to speak ill of them. He recently posted on why the AV vendors were NOT falling behind, using a “dog shit” analogy (here) - classy, professional, and uncomfortably hilarious. Honestly I am not sure what the hell he is talking about, but I am sure he will post his thoughts in triplicate soon.
#2 There is no perimeter
The endpoint is the perimeter, the user is the perimeter, the business process is the perimeter, the data is the perimeter - the perimeter is not the perimeter. Those who decry securing the endpoint and espouse the virtues of network security obviously do not care about the importance of protecting the ever-increasing number of intermittently connected, remote computing devices that move in and out of the corporate network like a transient looking for a warm underpass to sleep in for the night, all the while bypassing perimeter and network security.
So why should we care if we do a really good job of protecting critical assets with the latest network security thingie? Well, ask yourself if confidential data ever makes its way onto mobile devices, smart phones, handhelds and laptops - no, you say - really? Nothing confidential in email? Bob in accounting doesn’t ever download proprietary information to work with over the weekend? Your engineers only access source code from the security of a Ninja-proofed, tempest-shielded, lead-walled closet surrounded by an army of M16-wielding bodyguards?
#3 Risk management threatens vendors
Risk management forces an organization to focus, to move towards policy-driven, proactive security and away from the reactive, ad-hoc security models that drive knee-jerk security buying. Security vendors love knee-jerk security buying (see dirty secret #0).
#4 There is more to risk than weak software
There is a myth in information security that if all software were secure we would eliminate threats - this would be true only if computers were never turned on, never connected to the internet, and nobody were allowed to use them, but that isn’t really what they were designed for. We all know there is no patch for human stupidity, and social engineering is one of the easiest ways to infect a box, so the never-ending cycle of vulnerability disclosure -> scan -> patch -> rinse and repeat keeps us locked into a hamster wheel of misaligned goals and mismatched expectations.
#5 Compliance threatens security
When I was with Gartner we published a Cyber Threats Hype Cycle, and for many years we placed Regulatory Distraction on it as a threat to enterprise security. The thinking was that being compliant doesn’t equal improving security, whereas implementing strong security measures would generally make one compliant. Although we have made strides in defining more prescriptive compliance initiatives, many organizations work to pass an audit as opposed to implementing controls that actually benefit the organization’s security program.
#6 Vendor blind spots allowed Storm
Storm eats AV for breakfast, it doesn’t need vulnerabilities, it leverages outstanding social engineering, it is self-defending and resilient…
Microsoft did not kill the Storm worm (here); it is still out there lurking in the shadows like a malicious interloper (here), waiting to ridicule your inadequate reactive security measures and laugh at your inability to remove it from the internets.
#7 Security has grown well past “Do it yourself”
The days of dropping in a box and flipping a switch are long gone. We are in an era where the combination of people, process, and technology must be coordinated and well planned, or you risk not only a failed deployment but the loss of business, or worse.
Posted: 07 May 2008 10:09 PM CDT
2. This book is better suited to readers who have worked in information security for quite a long time, who have considerable technical expertise and operations-management experience, and who already have a solid grounding in ISO 17799/27000, CobiT, ITIL and the like. Only then will they really understand and absorb Andy's points. From another angle, only fairly senior consultants and management-level practitioners get much opportunity to apply this knowledge; generally speaking, entry-level consultants and ordinary engineers rarely get the chance to take part in developing these security metrics.
6. Chapter six of the book, on visualization, is a good reference whether you are a beginner or an experienced consultant, and it is also an excellent reference for many security operations engineers and managers. I have seen many consultants and engineers produce reports or presentations so full of gratuitous color, lines and pictures that the important "information" is completely drowned out and, worse, the essential "logic" disappears. In many of these cases the underlying work was actually quite good, which makes the poor result all the more a pity!
Perhaps many technical "experts" believe that "content is what matters" and that "presentation" is mere form, so they neither value presentation nor are willing to put effort into it. In fact, that is wrong. Keep in mind that many IT executives are not technical experts; the higher up the IT management chain, the less technical depth they typically have, and the less time they can or will spend on technical specifics. Their most important job is to understand the business logic and the overall direction of the technology. IT spans an enormous amount of technology, and the share of attention that security gets will naturally keep shrinking. So the challenge is to present clearer, more essential logic in less time. This is something I have come to appreciate deeply since joining Lenovo - how to build executive-level technical reports. You must be very clear about who your readers are and what the report is for (decision oriented, or just information sharing). On this point I am pleased that both I and my team have improved quickly.
I appreciate Andy's view that a good "presentation" is lean, trim and elegant: plain, intuitive, and focused on the underlying data and logic. Andy gives several examples in the book to illustrate this. If you read typical McKinsey reports you will feel this point strongly. I strongly recommend that readers of the book, and of this blog, think about and practice this. And if you are already an expert at it, please share your experience with everyone.
Posted: 07 May 2008 09:34 PM CDT
Posted: 07 May 2008 09:01 PM CDT
I read this article from Network World (Australia) in which the author relayed the opinions of C-levels from Australia and New Zealand, titling his story thusly: "If only reducing costs was as easy as security, say CIOs"
It seems that based upon a recent study, IDC has declared that "...conquering IT security is a breeze for CIOs."
I'm proud of my Kiwi lineage, but I had no idea my peeps were so ahead of the curve when it comes to enlightened advancements in IT security governance. They must all deploy GRC suites and UTM or something?
Anton, there must be something in the logs down there!
As per that famous line in "When Harry Met Sally," I respond with "I'll have what [s]he's having..."
Check this out:
I'm no analyst, but allow me to suggest that just because security is not the top priority or "challenge" does NOT mean they have the problem licked. It simply means it's not a priority!
Perhaps it's that these CIOs recognize that they've been spending their budgets on things that aren't making a difference and should instead be focusing on elements that positively impact corporate sustainability and survivability as a going concern?
The most hysterical thing about this article -- besides the re-cockulous premise they overly-hyped and the (likely) incorrect interpretation of results the title suggests -- is that on the same page as this article which suggests the security problem is licked, we see this little blurb for a NWW podcast:
So, there we have it. A direct tie. Security is solved and failing, all at the same time!
Posted: 07 May 2008 08:41 PM CDT
I don't know what the hell Ptacek and crew are on about. Of course
In support, I submit into evidence People's Exhibit #1, from here your honor:
...and I quoteth:
Yeah! Get some! It's just like firewalls, IPS, and AV, bitches! Mo' is betta!
It's patently clear that Ptacek simply doesn't layer enough, is all. See, Rothman, you don't need to give up!
How's that for a metric?
That is all.
Posted: 07 May 2008 08:31 PM CDT
Steve Ragan with the Tech Herald has posted a response to Joshua Corman’s “7 Dirty Secrets of the Security Industry” presentation, and I got quoted! <my submitted response> Overall, Tim Greene’s brief of Joshua Corman’s presentation does a solid job of discussing the very real need for “a healthy level of skepticism about what security vendors” communicate. The [...]
Posted: 07 May 2008 03:38 PM CDT
Posted: 07 May 2008 02:23 PM CDT
In my post titled "The Four Horsemen Of the Virtualization Apocalypse" I brought to light what I think are some nasty performance, resilience, configuration, and capacity planning issues related to operationalizing virtualized security, in the context of security solutions deployed as virtual appliances/VMs in hosts.
This point was really intended to be discussed outside of the context of virtualizing security in physical switches, and I'll get to that point and what it means in relation to this topic in a later post.
I wanted to reiterate the point I made when describing the fourth horseman, Famine, summarized by what I called "Spinning VM straw into budgetary gold:"
This is a really important issue because over the last few weeks, I've seen more and more discussions surrounding virtualization TCO and ROI calculations, but most simply do not take these points into consideration.
We talk about virtualization providing cooling, power and administrative cost-avoidance and savings. We hear about operational efficiencies, improved service levels and agility, increased resource utilization and reduced carbon footprint.
That's great, but with all this virtualized and converged functionality now "simplified" into a tab or two in the management console of your favorite virtualization platform provider, the complexity and operational issues related to security have simply faded into the background, assumed to have been absorbed or abstracted away.
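To make the point concrete, here is a hedged back-of-the-envelope sketch of the kind of math those TCO/ROI calculations tend to skip. Every number below is a hypothetical assumption for illustration, not measured data: it models security virtual appliances consuming host slots and CPU, and shows how the "naive" host count understates what you actually need.

```python
# Hypothetical consolidation model: all figures are illustrative assumptions.
workload_vms = 600               # total workload VMs to host (assumed)
vms_per_host = 20                # workload VMs per host, ignoring security (assumed)
security_vms_per_host = 2        # e.g., a virtual firewall + virtual IPS per host (assumed)
security_cpu_overhead = 0.15     # fraction of host CPU consumed by security VMs (assumed)

# Naive TCO model: every host slot carries a workload VM.
hosts_naive = workload_vms / vms_per_host

# Adjusted model: security appliances occupy slots AND tax remaining capacity.
effective_capacity = (vms_per_host - security_vms_per_host) * (1 - security_cpu_overhead)
hosts_real = workload_vms / effective_capacity

print(f"Hosts in naive TCO model:  {hosts_naive:.0f}")
print(f"Hosts actually needed:     {hosts_real:.1f}")
print(f"Extra hosts hidden by the model: {hosts_real - hosts_naive:.1f}")
```

Under these made-up numbers the naive model budgets 30 hosts while the adjusted model needs roughly 39, a ~30% swing that never shows up in a cooling-and-power savings slide.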
I suppose that might point to why many simply think that security ought to be nothing more than a drop-down menu and checkbox because in most virtualization platforms, it is!
When thinking about this, I rationalized the experience and data points against my concern related to security's impact on performance, scale, and resiliency to arrive at what I think explains this behavior:
What are your thoughts? Are you thinking about these issues as you plan your virtualization roll-outs?
You are subscribed to email updates from Security Bloggers Network.