Spliced feed for Security Bloggers Network |
The Recent History of the Future of Cash [Emergent Chaos] Posted: 11 Jul 2008 01:45 AM CDT Dave Birch has a really interesting post about The future of the future of cash: The report also identifies three key attributes of cash that make it -- still -- the dominant payment system. Universality, trust and anonymity. I'm curious about the location of anonymity in the customer mindset and I'm going to post some more about this shortly, so I'm only looking at the first two here. I want to extend Dave's assessment of what makes "trust" interesting: Trust, on the other hand, may not be such a big barrier. It's not clear to me how to disentangle trust in the medium of exchange from trust in the store of value, since people clearly use cash for both, but it is clear that a great many other tradable items can easily usurp cash once technology has acted to shift them from being a store of value into a viable medium of exchange (remember the tally sticks!) for their age. A couple of months ago we were discussing Nick Szabo's classification of commodity derivatives as a kind of near-money, but there are plenty of extant near-monies already in use around the world, including mobile phone minutes in a great many developing countries. If I lived in Zimbabwe, it would take me years to learn to trust cash more than Vodafone minutes. I think there's an important element of trust missing, which is finality. With almost all computer-based systems, payments are conditional on some complex bureaucracy deciding to credit them. For example, see Gary Leff on some deal for frequent flyer miles: Second, print everything and I mean everything. I printed the offer itself. I printed the page where I enter all the information about the rental (including my Skymiles number, etc). I printed the confirmation page. I'm saving all of those, and will save my rental receipt as well. Why does he do this? Because he doesn't trust the system. He's prepping himself to go fight its decisions. 
In contrast, if they handed him a bearer certificate for 9,999 miles, or $200 cash (the rough value of the miles at $.02 per) he'd be done. He'd trust those things. People used to sell things for cash on the barrelhead. When that cash was cold, hard cash, rather than fiat, print-it-yourself money, the deal was done when the money changed hands. You can't lose any more than you have in your pocket (or under your mattress). Electronic systems don't have that property, and that makes them harder to trust. You don't just have to disentangle value-store from medium of exchange. You have to estimate the value of finality. |
You want the truth, you can't handle the truth! [StillSecure, After All These Years] Posted: 10 Jul 2008 10:35 PM CDT I am not sure what it is with Richard Stiennon. Maybe his mom beat him with a NAC stick when he was young. Hence his Jack Nicholson looks (more like the Joker in Batman than Col. Jessep in A Few Good Men) and his total disdain for NAC. In any event, Richard never seems to miss a chance to take a pot shot at NAC. I have fired back and debated him many times on this. In fact I am convinced that Richard's problem with NAC is that, like Uncle Joe, he is just moving a little slow. Richard still thinks of NAC as Cisco's network admission control, circa Dec '03. He has not gotten up to speed on anything happening with NAC since. Richard is going to debate NAC with Joel Snyder, according to this article by Tim Greene today. My prediction is Snyder by a knockout in 3 rounds or less. Richard's latest NAC knock comes in a comment on an excellent article by the Hoff. Chris takes a bold stand for someone working for a vendor and calls BS on the whole analyst thing (I will write more about that later in this article). Richard, being an ex-analyst himself (let's face it, with Richard you can take the man out of the analyst job, but you can't take the analyst out of the man), takes exception to Hoff's "whining" (Richard's words, not mine) and tries to tell Hoff that giving up is not the answer and the way to show up analysts is to prove them wrong. Great, Richard: you try to prove them wrong when, because of what they report, you don't have a market, can't get any capital and have no visibility. I guess that is when it is time to move on to the next gig, right? Then Richard has a bad NAC deja vu and feels it necessary to write this:
I assume Richard is referring to Updata recently leading the Bradford Networks VC round. But more importantly, Richard, it is time to call a code red on you and give you the cold hard truth. Richard, the fact is that the edu market is not the only viable market for NAC. In fact, one of the biggest customers of NAC is the DoD. That is right, Richard: at least 3 of the 4 armed forces use NAC in helping to secure their networks. To paraphrase my friend Col. Jessep - Richard, you want the truth, you can't handle the truth! You sleep securely under the blanket of protection that NAC provides. If it is good enough to help "clean the sand" out of laptops coming home from SWA (that is SouthWest Asia, as in Iraq and Afghanistan, in case you don't know, Richard), it should be good enough for you. Think about that next time you are about to bad mouth NAC. Let me give you some other truths you may not like, Richard. Why do you think every switch vendor (many of which we partner with) is lining up and bringing out NAC solutions? Why has Microsoft put such a big push on NAP? Why, despite Luddites like you, does NAC still draw crowds at conferences like Interop (ask Joel about that)? Richard, we are still signing new major OEM partners. I am afraid you are the one sadly out of touch on this one, Richard. Just as you are out of touch in missing Hoff's point in his article. As to Hoff's article, as I said, I give Chris credit for speaking his mind. I spend an ungodly amount of my time speaking with analysts and trying to "learn" from them while at the same time trying to educate them. I am constantly amazed that so many analysts (and press, for that matter) just take a vendor's word as gospel. I have seen research reports from analysts big and small that I am sure did not have any more research done than calling a handful of vendors and listening to their spiel. 
Too many of these analysts, if they do speak to customers, base their findings on such a small sample that it is impossible to have an accurate picture. Personally, like Hoff says, "who watches the watchers?" is the real question. I would like to see a code of conduct among analysts. I would start by dictating that vendors cannot pay analysts. Take the payola out of the equation the way they did in the DJ/radio business in the late 50s. Next, analyst reports have to come with metrics to back up the findings. I want to know how many customers they spoke to, how big they were, how they were found, etc. A vendor giving an analyst a real live "pet" customer is not real research. I want to know if the customer pays the analyst. It is a dirty business. Hey, let me be clear: I play the game as well as the next guy. But I agree with Hoff that we need to clean up the rules to make the whole analyst thing more fair, viable and valuable. |
links for 2008-07-11 [Raffy's Computer Security Blog] Posted: 10 Jul 2008 09:32 PM CDT |
ATM Vulnerabilities Plague Citibank [The IT Security Guy] Posted: 10 Jul 2008 09:14 PM CDT Citibank's ATMs are in the news again, but this time -- though details are still scarce -- for some basic security issues. So far, the speculation is that there were at least two security issues, among others: unencrypted PINs transmitted from the ATMs to back-end servers, and insecure servers themselves. What makes this interesting, if it's due to these two causes, is that the ATMs themselves weren't tampered with, as has happened in the past. Citibank's ATMs were also the subject of a recent Wired article. |
ADMP and Assessment [securosis.com] Posted: 10 Jul 2008 08:17 PM CDT Application and Database Monitoring and Protection. ADMP for short. In Rich's previous post, under "Enter ADMP", he discussed coordination of security applications to help address security issues. They may gather data in different ways, from different segments within the IT infrastructure, and cooperate with other applications based upon the information they have gathered or gleaned from analysis. What is being described is not shoving every service into an appliance for one-stop shopping; that is decidedly not what we are getting at. Conceptually it is far closer to DLP 'suites' that offer endpoint and network security, with consolidated policy management. Rich has been driving this discussion for some time, but the concept is not yet fully evolved. We are both advocates and see this as a natural evolution of application security products. Oddly, Rich and I very seldom discuss the details prior to posting, and this topic is no exception. I wanted to discuss a couple items I believe should be included under the ADMP umbrella, namely Assessment and Discovery. Assessment and Discovery can automatically seed monitoring products with what to monitor, and cooperate with their policy set. Thus far the focus through a majority of our posts has been monitoring and protection, as in active protection, for ADMP. It reflects a primary area of interest for us as well as what we perceive as the core value for customers. The cooperation between monitored points within the infrastructure, both for collected data and the resulting data analysis, represents a step forward and can increase the effectiveness of each monitoring point. Vendors such as Imperva are taking steps into this type of strategy, specifically for tracking how a user's web activity maps to the back end infrastructure. I imagine they will come up with more creative uses for this deployment topology in the future. 
Here I am driving at the cooperation between preventative (assessment and discovery in this context) and detective (monitoring) controls. Or more precisely, how monitoring and various types of assessment and discovery can cooperate to make the entire offering more efficient and effective. And when I talk about assessment, I am not talking about a network port scan to guess what applications and versions are running, but rather active interrogation and/or inspection of the application. And for discovery, not just the location of servers and applications, but a more thorough investigation of content, configuration and functions. Over the last four years I have advocated discovery, assessment and then monitoring, in that order. Discover what assets I have, assess what my known weaknesses are, and then fix what I can. I would then turn on monitoring for generic threats that concern me, but also tune my monitoring policies to accommodate weaknesses in my configuration. My assumption is that there will always be vulnerabilities which monitoring will assist with controlling. But with application platforms, particularly databases, most firms are not and cannot be fully compliant with best practices and still offer the business processing functions the database is intended for. Typically, the security weaknesses that are going to remain part of the daily operation of applications and databases involve some specific setting or module that is just not that secure. I know that there are some who disagree with this; Bruce Schneier has advocated for a long time that "Monitor First" is the correct approach. My feeling is that IT is a little different, and (adapting his analogy) I may not know where all of the valuables are stored, and I may not know what type of alarm is needed to protect the safe. I can discover a lot from monitoring, and it allows me to witness both behavior and method during an attack, and use that to my advantage in the future. 
Assessment can provide tremendous value in terms of knowing what and how to protect, and it can do so prior to an attack. Most assessment and discovery tools are run periodically; while they are neither continuous nor designed to find threats in real time, they are still not a "set and forget" part of security. They are best run on a schedule to account for the fluid nature of IT systems. I would add assessment of web applications, databases, and traditional enterprise applications into this equation. Some of the web application assessment vendors have announced their ability to cooperate with WAF solutions, as WhiteHat Security has done with F5. Augmenting monitoring/WAF is a very good idea IMO, both for coping with the limitations inherent in assessing live web applications without causing disaster, and for coping with the impossibility of getting complete coverage of all possible generated content. Being able to shield known limitations of the application, due either to design or patching delay, is a good example of the value here. In the same way, many back-end application platforms provide functionality that is relied upon for business processing but is less than secure. These might be things like database links or insecure network 'listener' configurations, which cannot be immediately resolved, due either to business continuity or timing constraints. An assessment platform (or even a policy management tool, but more on that later) can rummage through database tables looking for personally identifiable information, which is then fed to a database monitoring solution to help deal with such difficult situations. Interrogation of the database reveals the weakness or sensitive information, and the result set is fed to the monitoring tool to check for inappropriate use of the feature or access to the data. I have covered many of these business drivers in a previous post on Database Vulnerability Assessment. 
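To make the discovery-seeds-monitoring idea concrete, here is a minimal sketch in Python. The pattern names, policy fields, and the `alert_on_bulk_select` action are all hypothetical illustrations, not any vendor's actual format; a real assessment tool would use far richer interrogation than two regexes over a sample of rows.

```python
import re
import sqlite3

# Hypothetical PII patterns; a real discovery tool would use many more.
PII_PATTERNS = {
    "ssn": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "credit_card": re.compile(r"^\d{13,16}$"),
}

def discover_pii(conn, table, sample_rows=100):
    """Interrogate a table and report which columns look like PII."""
    cur = conn.execute(f"SELECT * FROM {table} LIMIT {sample_rows}")
    cols = [d[0] for d in cur.description]
    hits = {}
    for row in cur:
        for col, val in zip(cols, row):
            for label, pat in PII_PATTERNS.items():
                if isinstance(val, str) and pat.match(val):
                    hits.setdefault(col, label)
    return hits

def seed_monitoring_policies(hits, table):
    """Turn discovery results into (hypothetical) monitoring rules."""
    return [
        {"object": f"{table}.{col}", "classification": label,
         "action": "alert_on_bulk_select"}
        for col, label in hits.items()
    ]

# Toy data standing in for a production database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, ssn TEXT, note TEXT)")
conn.execute("INSERT INTO customers VALUES ('Ann', '123-45-6789', 'vip')")
hits = discover_pii(conn, "customers")
policies = seed_monitoring_policies(hits, "customers")
print(policies)
```

The point is the hand-off: interrogation produces a result set, and that result set becomes the monitoring tool's watch list rather than a human re-entering the same knowledge twice.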
And it is very much because of drivers like PCI that I believe the coupling of assessment with monitoring and auditing is so powerful: the applications compensate for one another, enabling each to do what it is best at and passing off coverage of areas where they are less effective. Next up, I want to talk about policy formats, the ability to construct policies that apply to multiple platforms, and how to include result handling. |
What Dan’s DNS Checker Doesn’t Do [Zero in a bit] Posted: 10 Jul 2008 06:03 PM CDT Despite what various commenters around the blogosphere think (I’ve read a few but can’t find the links now), Dan Kaminsky’s online “Check My Dns” utility doesn’t:
What it does is check whether your ISP’s DNS server is patched. Plain and simple. It looks for one thing — source port randomization. This does not give away the exploit, it checks for the existence of the sledgehammer fix that prevents the exploit from working. More specifically, there’s some Javascript code that generates a random hex string which is used to create a URL, e.g. http://6313d97e498e.toorrr.com. Your OS then does a DNS lookup for that unique hostname. Your ISP’s DNS server asks toorrr.com’s DNS server (a server Dan controls) to resolve that funky DNS name to an IP address. It sends a few packets in the process. Dan’s server makes a note of the source port of each request and sends back the webserver’s IP address to your DNS server, which sends it back to you. Now that you have the IP address, your browser can fetch the results page. The web page is generated dynamically by parsing the hex string out of the URL you requested, using Ajax to fetch the relevant port and TXID data stored on Dan’s server, and printing out a “safe” or “vulnerable” message such as:
That’s all. Nothing tricky. This particular DNS server is deemed safe because the source port varies from one request to the next. Come to think of it, those source ports don’t really look that random, do they? For anybody “in the know”, is that amount of randomness sufficient to protect against the attack? |
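The server-side verdict the post describes can be caricatured in a few lines of Python. This is a toy stand-in, not Dan's actual code, and the thresholds are my own guesses; it only illustrates the idea that a patched resolver should use a different, hard-to-predict source port for every query.

```python
import statistics

def judge_resolver(observed_ports):
    """Toy verdict logic: classify a resolver by the source ports
    it used across several queries to the test server."""
    if len(set(observed_ports)) <= 1:
        return "vulnerable"  # fixed source port: pre-patch behavior
    # Distinct but tightly clustered (e.g. incrementing) ports are
    # barely better than a fixed one, so flag them too.
    spread = statistics.pstdev(observed_ports)
    return "safe" if spread > 1000 else "suspicious"

print(judge_resolver([32768, 32768, 32768]))       # fixed port
print(judge_resolver([32768, 32769, 32770]))       # incrementing ports
print(judge_resolver([51273, 4122, 60981, 17705])) # randomized ports
```

This also hints at an answer to the closing question above: mere distinctness isn't enough; what matters is how unpredictable the next port is to an attacker.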
Lotta discussion around DNS flaws right now [Tim Callan's SSL Blog] Posted: 10 Jul 2008 05:21 PM CDT You may have noticed a lot of waves in the press and the blogosphere about the broad DNS security flaw just announced by security researcher Dan Kaminsky. We're getting questions about how this affects the root DNS as well as major TLDs, and my fellow VeriSign blogger Philip Hallam-Baker has addressed this question. (Short story: It doesn't.)
|
What doesn’t work… [Ascension Blog] Posted: 10 Jul 2008 03:30 PM CDT A friend of mine passed along an email he received from an information security vendor today. The vendor had inquired as to whether or not my friend was still interested in his company's product. When my friend thanked him for his time and explained that they had chosen to go in a different direction, the vendor responded: (again paraphrased to protect the guilty) I thought we didn't have a chance since I wasn't able to speak to anyone other than you. Based upon common experience, I think you will find the direction you've chosen to be extremely expensive with relatively little return. But best of luck… Translation: Yeah, I knew you weren't all that intelligent when you didn't agree that my product was the best thing since sliced bread. If you'd have let me speak to someone with half a brain I'd have probably gotten the sale. You just don't realize that you don't really need what you decided to buy but like I said, you're a moron so go figure. Now I'm no rocket scientist, but giving a potential client backhanded "advice" isn't going to endear you to that client. Even if the potential client is lacking in reasoning ability, you never tell them that. The sale that fell through today may very well be the sale that comes through tomorrow. This is the problem with product vendors. Many of them are lifelong salesmen and do not really have a clue as to what goes on in an information security department. These are the guys from the snake-oil school of sales: they tailor your problem to fit their solution as opposed to trying to really understand the problem. They don't want to solve your problem, they want to sell their product. Now I know that this is a general statement and that there are some good sales guys out there. I've actually had the pleasure of meeting a few. There is one guy that I wouldn't hesitate to call even though his territory is on the West Coast. (I'm on the East Coast.) 
The problem is that the good ones are few and far between. |
Security Analyst Sausage Machine Firms Quash Innovation [Rational Survivability] Posted: 10 Jul 2008 03:24 PM CDT Quis custodiet ipsos custodes? Who will watch the watchers? Short and sweet and perhaps a grumpy statement of the obvious: Security Analyst Sausage Machine Firms quash innovation in vendors' development cycles and in many cases prevent the consumer -- their customers -- from receiving actual solutions to real problems because of the stranglehold they maintain on what defines and categorizes a "solution." What do I mean? If you're a vendor -- emerging or established -- and create a solution that is fantastic and solves real business problems but doesn't fit neatly within an existing "quadrant," "cycle," "scope," or "square," you're SCREWED. You may sell a handful of your widgets to early adopters, but your product isn't real unless an analyst says it is and you still have money in the bank after a few years to deliver it. If you're a customer, you may never see that product develop and see the light of day -- and you're the ones who pay your membership dues to the same analyst firms to advise you on what to do! I know that we've all basically dropped trow and given in to the fact that we've got to follow the analyst hazing rituals, but that doesn't make it right. It really sucks monkey balls. What's funny to me is that we have these huge lawsuits filed against corporations for anti-trust and unfair business practices, and there's nobody who contests this oligopoly from the sausage machine analysts -- except for other former analysts who form their own analyst firms to do battle with their former employers...but in a kinder, gentler, "advisory" capacity, of course... Speaking of which, some of these folks who lead these practices often have never used, deployed, tested, or sometimes even seen the products they take money for and advise their clients on. Oh, and objectivity? Yeah, right. 
If an analyst doesn't like your idea, your product, your philosophy, your choice in clothing or you, you're done. This crappy system stifles innovation; it grinds real solutions into the dirt, such that small startups that really could be "the next big thing" are now often forced to be born as seed technology starters for larger companies to buy for M&A pennies so they can slow-roll the IP into the roadmaps over a long time and smooth the curve once markets are "mature." Guess who defines them as being "mature?" Right. Crossing the chasm? Reaching the tipping point? How much of that even matters anymore? Ah, the innovator's dilemma... If you have a product that well and truly does X, Y and Z, where X is a feature that conforms and fits into a defined category but Y and Z -- while truly differentiating and powerful -- do not, you're forced to focus on, develop around and hype X, label your product as being X, and not invest as much in Y and Z. If you miss the market timing and can't afford to schmooze effectively and don't look forward enough with a business model that allows for flexibility, you may make the world's best X, but when X commoditizes and Y and Z are now the hottest "new" square, chances are you won't matter anymore, even if you've had it for years. The product managers, marketing directors and salesfolk are forced to fit a product within an analyst's arbitrary product definition or risk not getting traction, being left out of competitive analyses/comparisons, or even missing out on funding; ever try to convince a VC that they should fund you when you're the "only one" in the space and there's no analyst recognition of a "market?" Yech. A vendor's excellent solution can simply wither and die on the vine in a battle of market definition attrition because the vendor is forced to conform and neuter a product in order to make a buck and can't actually differentiate or focus on the things that truly make it a better solution. Who wins here? Not the vendors. Not the customers. 
The analysts do. The vendor pays them a shitload of kowtowing and money for the privilege to show up in a box so they get recognized -- and not necessarily for the things that truly matter -- until the same analyst changes his/her mind and recognizes that perhaps Y and Z are "real" or creates category W, and the vicious cycle starts anew. So while you're a vendor struggling to make a great solution or a customer trying to solve real business problems, who watches the watchers? /Hoff |
SaaS mobile data Encryption product [An Information Security Place] Posted: 10 Jul 2008 02:53 PM CDT I have been evaluating a new SaaS mobile data encryption solution from a company called HyBlue. The product is called IceLock, and essentially they put all the management of the encryption in the cloud without storing the keys in the cloud. They offer some other services as well, but this one is what they asked me to review. While I cannot get into a full review right now, I can say that it looks pretty good. It uses a virtual drive for encryption instead of a full disk or file encryption solution. So once you install it and start the service, it creates a new drive letter. If you want something to be encrypted, you just pull it into the drive. The typical install they see targets the My Documents folder, which makes sense, but it is flexible and allows other directories to be encrypted as well. It uses a combination of the motherboard serial number, a password, and multiple other factors to create an ephemeral key for encryption. So basically, you can't walk out with the disk and expect it to work on another system. They also say that "all keys are deleted from RAM and overwritten with random data" during hibernation, screen saver activation, power-off, log-off, etc. (I think they generate a key every time your system comes out of the screen saver or hibernation state because I have to enter my password every time - that can get annoying). The install process and management are still kinda kludgey. However, they are nothing if not flexible and willing to take criticism (they made a change based on a question I had within just a few days of my asking), so I expect this to change fairly quickly. Anyway, take a look. I am putting it on a VM (which they say will work fine) since it is fairly new, but I haven't experienced any issues. Vet |
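HyBlue hasn't published its derivation scheme, but the general idea of mixing a password with a machine-bound factor into an ephemeral key can be sketched generically. Everything below (the serial number, the KDF choice, the iteration count) is illustrative, not IceLock's actual design:

```python
import hashlib
import os

def derive_ephemeral_key(password: str, board_serial: str, salt: bytes) -> bytes:
    """Mix a user secret with a machine-bound factor so the derived key
    cannot be recomputed if the disk moves to different hardware.
    (Generic sketch only; the real product reportedly uses more factors.)"""
    material = password.encode() + b"\x00" + board_serial.encode()
    return hashlib.pbkdf2_hmac("sha256", material, salt, 200_000, dklen=32)

salt = os.urandom(16)  # would be stored with the virtual drive's metadata
key = derive_ephemeral_key("hunter2", "MB-SN-0042", salt)

# Same inputs reproduce the key; a different motherboard serial yields
# a different key, so the volume is unreadable on other hardware.
assert key == derive_ephemeral_key("hunter2", "MB-SN-0042", salt)
assert key != derive_ephemeral_key("hunter2", "MB-SN-9999", salt)

# On hibernate/screen-saver/log-off, overwrite the in-memory copy and
# re-derive it the next time the user re-enters the password -- which
# is why the product prompts for the password after every wake-up.
key = bytes(len(key))
```

The re-derivation step also explains the annoyance noted above: if the key is never persisted, the password is the only way to get it back.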
Posted: 10 Jul 2008 02:41 PM CDT I was recently involved in a discussion over the different terms used to describe what we do: Information Security (IS), Information Assurance (IA), or Information Risk Management (IRM). Some very interesting points and observations came out of that discussion so I thought I'd echo them here. A colleague of mine, who also works as an Information Assurance (IA) professional (DoD specialty), argues that the CISSP certification has "absolutely nothing to do with IA." He is of the opinion that Information Security is not Information Assurance and sees "no similarities" at all (ummm… none?). Anyway, from a DoD 8570 perspective, the IAM (managerial) level II and III are required to have the certification while none of the IAT (technical) levels are required to have the CISSP. This started a discussion that went in two main directions - whether the CISSP certification is useful in both the technical and managerial realms (I won't touch on this here) and whether or not Information Security (IS) is a subset of or has anything to do with Information Assurance (IA). There were several great posts to this discussion. One was by John Graham (MSIA ‘04): Although the DoD has ‘coined’ the phrase Information Assurance, in my opinion the concept certainly is broader than information security, and information security is a subset of the information assurance space…knowing and understanding the concepts required for the CISSP only strengthen the Information Assurance professional's tool kit… And I have always found it interesting that most organizations tend to initially focus on technical controls, then gain the understanding of the required linkages to process and governance needed to actually implement and maintain the controls. Sharon Mudd (MSIA ‘08) took the concept even further. I agree with John. To expand on that, in my view the entire space is going through a maturation process. 
What was once Information Security (focused strictly on IT) evolved into Information Risk Management (allowing it to broaden a bit) and is now heading towards Information Assurance. Each evolution incorporates what was there before and enhances the importance of getting out of the InfoSec silo and into the other areas where it runs into business processes/needs (or government, or whatever other entity you're working with). and What was the original foundation of InfoSec seems to be what we've been referring to as Security Operations: the day-to-day, hands-on firewalls/IPSs/etc. work that must be done; even monitoring and incident detection can fall into this category. Where I think it goes over the wall to a risk management activity is when you start trying to understand what the alerts mean in the context of your business functions and managing the issues from a cost/benefit or risk/reward perspective. I believe the term "evolution" in this context is key. One of the things that I have enjoyed about being a consultant over the years is the variety of environments and networks that I've been privileged to become acquainted with. Most of the time the issues that I've come across had very little to do with the technology. Most technology issues were symptomatic of deeper alignment issues. Many of the highly specialized (or more tactical) activities that IS "grew up with" have now begun to be relegated back to the network and infrastructure departments. Our role has evolved into a strategic role that bridges business units. IT, and by extension IS/IA/IRM, has been the one department that is typically siloed off from the rest of the company in terms of being fully integrated into business operations. This is most likely due to the fact that IT basically began as a support function not much above the mail room in importance. 
The companies that have seen the advantage of integrating IT and IS into their strategic planning process have gained a commanding advantage in the marketplace. Once you achieve an alignment with the business objectives, IS/IA/IRM projects are easier to sort out and prioritize in terms of their overall value. As we all know, this requires that both the business units and IT cooperate in achieving the common goal. One key aspect of this is the use of a common taxonomy. In the end, whether we call it information security, information assurance, risk management or "skippity do" doesn't really matter all that much as long as we achieve the ultimate goal of bringing value to our employers. The terminology may be determined by the sector in which we're working, such as the DoD example, or it may be something that we can influence. I believe that we must learn the language of business. Business won't learn the language of information security - that is what they hire us for. The approach that requires management to learn "our language" is doomed to failure. Whatever the case, I'm more of an advocate of using the terms that most clearly convey the concept to my audience. |
Time to update your JRE again [aut disce, aut discede] Posted: 10 Jul 2008 01:52 PM CDT [ Edit: Brian Krebs of the Washington Post's Security Fix blog spoke to me about Java security. You can read his column here. ] Sun have just released JRE Version 6 Update 7... which means 90% of desktops are currently at risk until they are upgraded!*. If you have the Java Update Scheduler enabled you should get prompted to update soon (depending on the update frequency you selected). If you want to be proactive, fire up the Java Control Panel, click on the Update tab then click on the Update Now button or head to http://www.java.com and download the binary directly. According to Sun's Security Blog the latest update fixes 8 issues. I'll be releasing advisories and blogging on the issues that I had a hand in, namely:
If you're thinking the first two issues sound all too familiar, you'd be right. I previously discussed this font issue that led to execution of arbitrary code. And the JNLP parsing code has had a number of similar buffer overflows (details here, here and here) ... not so much "same bug, different app" (the theme of this Brett Moore presentation) as "same bug, same app!" So perhaps the most interesting vulnerability is 238905, the JRE family version support issue. You may have noticed that JRE updates typically install alongside older versions so a given machine is likely to have several versions installed as noted by Brian Krebs. Prior to the introduction of Secure Static Versioning in JRE Version 5 update 6, it was possible for an applet to select the version of the JRE with which to run. Of course, a malicious applet could purposefully select an older vulnerable version in order to exploit known security flaws. Secure Static Versioning fixed this, however during my tests I was able to circumvent it and downgrade the browser's JRE. More on this in a future post. I'll also be blogging on a couple of other issues Sun have recently fixed. There are no SunSolve entries for these since they turned out to be flaws in the interaction between Java Web Start and Sun's web servers when installing alternate versions of the JRE as specified in JNLP files. They ultimately allowed us to present bogus security dialogs to the user, duping them into installing an older version of the JRE. These issues serve as a reminder of the dangers of including user-supplied input in security-related dialog boxes. So a few things in the pipeline; I haven't forgotten about part II of Stealing Password Hashes with Java and IE but it's a busy time and Black Hat is almost upon us so be patient :) Cheers John * JavaOne Keynote, 2008 - 90% of desktops run Java SE. This is unsurprisingly slightly higher than Adobe's reckoning. |
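The versioning flaw is easier to follow with a sketch of what Secure Static Versioning is supposed to enforce: refusing any applet request for a JRE older than the installed fix baseline, so a malicious page can't downgrade the browser to a known-vulnerable runtime. The baseline and comparison logic here are illustrative only, not Sun's internal implementation:

```python
def parse_jre_version(v: str) -> tuple:
    """Normalize strings like '1.5.0_06' or '1.6.0_07' into comparable tuples."""
    return tuple(int(part) for part in v.replace("_", ".").split("."))

def allow_requested_jre(requested: str, baseline: str = "1.5.0_06") -> bool:
    """Secure Static Versioning in miniature: an applet may not select
    a JRE older than the baseline, blocking downgrade attacks."""
    return parse_jre_version(requested) >= parse_jre_version(baseline)

print(allow_requested_jre("1.6.0_07"))  # current update: allowed
print(allow_requested_jre("1.4.2_03"))  # known-vulnerable: refused
```

The bug described above amounts to finding a path around this check so the older, exploitable runtime gets launched anyway.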
How magic might finally fix your computer - [The InfoSec Blog] Posted: 10 Jul 2008 07:33 AM CDT http://redtape.msnbc.com/2008/07/cambridge-mass.html#posts Charlatans don’t bother creating detailed schemes for deception. They just have a feel for what fools people. It's not about technology… Bad guys have better people skills. Criminals usually don’t bother learning all the ins and outs of the technology they exploit — they simply learn enough to be dangerous. But they spend endless hours understanding the people they plan to [...] |
Could SPAM Sway the US Presidential Election? [BlogInfoSec.com] Posted: 10 Jul 2008 06:00 AM CDT Could the power of SPAM change the course of US political elections? Could a SPAM disinformation campaign sway voters? This scenario occurred to me after I received SPAM with the following headlines:
The process to create disinformation would be fairly easy. First, create a fake news website. The goal is to mix a small amount of fake news with a large amount of legitimate news. On the fake news website, the real news stories should come directly from the AP wires as the primary source. The fake news would be properly embedded in the site to make it seem legitimate. Next, the site should also be branded with a local small-town flavor so as not to arouse suspicion. Finally, the disinformation (i.e., the fake news story buried in the AP stories) is SPAMmed to millions of people. This process would continue unabated until the election ends. It would be a genuine disinformation campaign aimed at the opposite party. If one of these stories were accidentally picked up by the mainstream media and reported as accurate, it would multiply the effect. One assumes that the US presidential campaigns would be the most likely to engage in this sort of activity, though foreign governments or rogue groups might do so as well. It would be easy to do for any organized group with a vested political interest and a small amount of capital. To be fair to the skeptics: it’s a question of numbers. How many people would these stories reach? How many people would be duped? And would it be enough to significantly change the outcome of the election? While I cannot definitively answer these questions, it’s an interesting scenario. © Kenneth F. Belva for BlogInfoSec.com, 2008. |
Poetic Security Review [Rational Survivability] Posted: 10 Jul 2008 12:21 AM CDT The InterWeb's broken!
As a wrap-up this time |
DNS flaw could possibly leave you vulnerable to evil hackers! [Security-Protocols] Posted: 09 Jul 2008 10:40 PM CDT |
Are people Still using DMZ’s? [An Information Security Place] Posted: 09 Jul 2008 09:54 PM CDT The simple answer to the title is "yes". However, that is not really the exact question I am asking here. The question is really "are DMZs actually still DMZs?" Let me ’splain. I had a client ask me the other day if I was seeing a drop in the use of DMZs out there. We had a quick discussion about it, but it got me thinking more about the concept of a DMZ, the various implementations of DMZs (the point of this post is not to get into the security or various benefits of the different models, so I won’t discuss that or make any judgements on which is the best), the progression of the DMZ concept into the zone concept (and even a little further), and whether the term DMZ is even really applicable anymore in the larger scope (or if it even matters). Oh, and as a note, you might need to take some of my terminology with a grain of salt if you started your firewall experience with Checkpoint. I started out with Netscreen, so that affects the way I think about networks, DMZs, and zones. It all comes out in the wash since they are all doing the same thing, but I just wanted to give a warning there. Also, be prepared for rambling as I flesh this out. It’s my style. So anyway, to dig in a bit, let’s briefly define what a DMZ is and look at some of the more common implementations. The rationale behind a DMZ is to place externally accessible servers (such as HTTP, SMTP, etc.) on a segment where potentially dangerous traffic can be isolated. Simply put, you don’t want direct external access to your internal servers. Makes complete sense. So, how is this implemented? Some people implement a DMZ by squashing a bunch of servers in between two firewalls: the external firewall controls access from the Internet to your DMZ, and the internal firewall controls access from your DMZ to your internal network. 
Traffic may need to flow between your DMZ and internal network, but at least you can control that to a larger degree than just opening up the world to your servers as you must in the DMZ. This is analogous to the military term DMZ, the space between two opposing army lines, where each army controls access to its side of the line. But this two-firewall design generally costs a bit more because of the extra hardware. So in comes the concept of using three interfaces. Basically, with a three-interface box, you let the firewall become the single point through which all inside and outside traffic flows. This gives you control in a single box, which keeps cost down. The DMZ is virtual in this case, since it is created and controlled by routing and policies, but the benefit is the same. But many smart people outside and inside firewall manufacturers started looking at this and started saying, "Hey, why can’t we put even more interfaces on this?" Basically, they started allowing for more than one DMZ. So if you had some externally accessible boxes that you wanted to keep isolated from your internal network AND each other, this allowed you to do so without building more firewalls and adding complexity to your network design. This was the precursor to the concept of zones, where you could create multiple areas in which to segment off traffic with your firewall. So if you had multiple server farms, you could have a zone for each one. That is the point where I think the term DMZ becomes somewhat less effective, but it is still realistic if only used in the sense of segmenting potentially dangerous traffic coming from the Internet. It is still not a DMZ in the physical sense (just like the three-interface box), but it still serves the same purpose. But what about those people who put a firewall between internal segments or between two nodes on their private WAN? 
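As a sketch of the three-interface model, here is roughly what a minimal ruleset might look like on a Linux box using iptables (the interface names, addresses, and the choice of iptables itself are assumptions for illustration, not something from the discussion above):

```shell
# Hypothetical interfaces: eth0 = Internet, eth1 = DMZ, eth2 = internal LAN
iptables -P FORWARD DROP                      # default: nothing crosses zones

# The Internet may reach the DMZ web server, but only on HTTP/HTTPS
iptables -A FORWARD -i eth0 -o eth1 -p tcp -m multiport --dports 80,443 \
         -d 192.0.2.10 -j ACCEPT

# Internal hosts may manage the DMZ box over SSH
iptables -A FORWARD -i eth2 -o eth1 -p tcp --dport 22 -j ACCEPT

# Allow reply traffic for connections already established
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

# Note: no rule allows eth0 -> eth2, so the Internet never reaches the
# internal network directly -- which is the whole point of the DMZ
```

The policy, not the hardware, is what makes the DMZ "virtual" here: each interface pair gets its own rules, which is exactly the idea that generalizes into multiple zones.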
As an example, if you work at ABCCorp and your company bought XYZCorp, you might put in a firewall between the companies when you set up a WAN link. In that case, you would probably rename the zones to something more representative, like "ABCCorp_Network" and "XYZCorp_Network". Here you are not really isolating traffic in the traditional sense; you are creating a wall between the units. There is likely not an area where you have some isolated servers. You are simply controlling access between the two areas. So there is really no trusted or untrusted side (well, I guess that depends on which side of the firewall you are on and who implemented the firewall, but you know what I mean). This is more like the concept of a checkpoint in modern urban warfare scenarios: there is no real DMZ, just checkpoints as you move from one hostile area to another. The analogy isn't exact, since there are no distinctive lines in modern urban warfare, but I think it's a decent fit. Either way, the term DMZ does not fit. Now you can go the next step by creating virtual firewalls, with each FW treated as a separate entity with its own policy set, routing table, etc. But that is generally used in more of a carrier type of environment or a very large enterprise that needs to maintain total separation between units. Though this setup can be utilized to perform the same function as a DMZ or a zone, it is generally too complicated for that. But saying all of this also brings me back to how I view many issues such as these, meaning what terms make sense or don’t make sense, which terms have become outdated, etc. Though I thoroughly believe that accuracy is needed when defining terms, I also think that in this case the term is not terribly important. I think the term is still very valid, even if the DMZ is virtual. So basically, use the term if you want (aren’t you glad I gave you permission?) 
But I think "zone" is really more accurate to how most DMZs are implemented today, both in the hardware and in the actual production installs. Vet |
links for 2008-07-10 [Raffy's Computer Security Blog] Posted: 09 Jul 2008 09:32 PM CDT |
NIST 800-41 Draft - Logging is a Step Child [Raffy's Computer Security Blog] Posted: 09 Jul 2008 05:56 PM CDT I just finished reading the NIST 800-41 draft, “Guidelines on Firewalls and Firewall Policy”. The guideline does a great job of outlining the different types of firewalls that exist and how to correctly set up a firewall architecture. The topic that falls fairly short is logging:
I sent this blog post to the authors of the guidelines. Hopefully they are going to address some of this. And again, the general structure and contents of NIST 800-41 are great! |
Stop arguing and patch your DNS [Articles by MIKE FRATTO] Posted: 09 Jul 2008 05:07 PM CDT |
Security is a Process, not a Product [IT Security Expert] Posted: 09 Jul 2008 03:21 PM CDT Back in the year 2000, I remember reading an article by Bruce Schneier (a security hero of mine) in which he said "Security is a Process, not a Product". Bruce talked about whether this would ever be understood. It really struck a chord with me at the time, and I've been quoting Bruce saying it ever since in my own presentations. Well, 8 years have gone by since I first read it, and Information Security has certainly come to the fore in that time, but Bruce's statement rings truer than ever. http://www.schneier.com/crypto-gram-0005.html I have to say part of the problem is that the punters going out impulse-buying "off the peg" security products tend not to understand what information security is about in the first place. Often they are looking to the security industry, and those pesky sales guys, for security advice. In fact, the sales tactic is often to host a "free security advice/awareness" session to draw in the punters. I show up to some of these events to gauge where the market is going and how threats are perceived to be moving, but it really makes me cringe at times, especially as the message is increasingly "buy this and you will be secure!" And it gets worse, as some companies are clearly jumping on the security bandwagon to make a quick buck. At InfoSec Europe this year, I heard one (so-called) security organisation openly presenting about the PCI Data Security Standard to a bunch of folk who, gauging from their questions, really didn't know anything about the standard other than that it affected their business. This company were out-and-out misleading those listening, and it was clear to me the presenter didn't even know the proper facts about PCI DSS. 
In fact, I was so outraged by what I overheard that I stopped, blended in with the punters, and at the right moment asked a question about requirement 6.6 to deliberately trip them up: "So which is best for requirement 6.6 in your expert opinion, a code review or an application firewall? And why?" They didn't have a clue; anyone knowing and working with PCI DSS would instantly know and understand the issues around Req. 6.6 in mid-2008. I think the answer for the "punters", namely the organisations which, let's face it, are in many cases only just waking up to the issue of information security, is to train and invest in a security department and personnel, so that they are correctly advised on the proper solutions and processes from the ground up, and understand when and where they should buy products off the shelf to help reduce security risk along the way. |
Trends in Identity and Access Management [The IT Security Guy] Posted: 09 Jul 2008 12:21 PM CDT This must be my lucky day at TechTarget. My second article of the day came out, this one on trends in identity and access management. It's part of a new newsletter they're launching about IAM. |
Richard Feynman and The Connection Machine [Emergent Chaos] Posted: 09 Jul 2008 12:19 PM CDT There's a fascinating article at The Long Now Foundation, "Richard Feynman and The Connection Machine," by Danny Hillis. It's a fun look into the interactions of two of the most interesting scientist/engineers of the last 40 years. |