Posted: 02 Oct 2008 08:54 PM CDT
Posted: 02 Oct 2008 07:52 PM CDT
Posted: 02 Oct 2008 04:59 PM CDT
XRumer was recently released putting another nail in the CAPTCHA Coffin. "The decline in CAPTCHA efficacy has been an ongoing story in 2008, as hackers and malware authors have steadily found ways to chip away at the protection these security practices were once thought to offer. Now, new findings indicate that...
Posted: 02 Oct 2008 03:26 PM CDT
"Although PHP 5.3 is still in alpha stage and certain features like the PHAR extension or the whole namespace support are still topics of endless discussions it already contains smaller changes that could improve the security of PHP applications a lot. One of these small changes is the introduction of a...
Posted: 02 Oct 2008 01:44 PM CDT
Posted by Cody Pierce
It is my belief that reverse engineering is one part patience, one part experience, and a whole lot of organization. OK, maybe that is a bit of an exaggeration, but organization is essential to reversing. Having a decent naming convention you stick to not only helps you in the short term, but also six months down the line when you or your co-workers look at your IDB. There is no "right" naming convention, but everyone should at least have one they use regularly. So today in MindshaRE we will cover what I use to name functions, variables, and other information you might find in a binary.
MindshaRE is our weekly look at some simple reverse engineering tips and tricks. The goal is to keep things small and discuss every day aspects of reversing. You can view previous entries here by going through our blog history.
There are several reasons to actually use a naming convention. It makes your IDBs easier to read (for everyone), helps you organize functions, variables, and basic blocks, and in general makes your IDBs more professional-looking, among other things. If you've ever read this blog, you know I try really hard to make everything as simple and clear as possible.
Naming convention standards have been debated ad nauseam by developers since programming languages were created. For some reason everyone insists that their opinion is obviously the best. As long as you, and anyone you interact with, can read your labels and glean the intended information from them, you win. Besides readability, one of the most important aspects of a good naming convention is simplicity. If it's too complex you won't use it. A naming convention must become a natural extension of the way you reverse.
My personal naming convention is a mixture of Hungarian, UpperCamelCase, and traditional C style. I do this because I need type information, readability, and flexibility. I tend to make my labels long and descriptive because they're easier to understand.
I have broken down my naming conventions into their respective categories. Let's jump right in.
I label my functions more than anything else. There are three views of functions you'll look at almost every day: the top of a function, calls to it, and the function window. All of them display the names you have given. IDA starts you out with the ambiguous "sub_xxxxxxxx" moniker. This is fine, but hardly a description of what the function does.
When I have reversed a function I will give it an UpperCamelCase name, trying to be as descriptive as possible. For instance, "sub_7E4D5E88" might become "ReadFromFile". One drawback of this method is that you have to be mindful of any import names that may conflict. IDA will let you use a conflicting name, but might assign the import's prototype information to the function. If I wanted to call this function "ReadFile" I might just call it "MyReadFile" instead.
Another common practice is to simply append "Wrapper" to functions. In the above example the caller might be renamed "ReadFromFileWrapper". This can become a little cumbersome when you get four wrappers deep; "ReadFromFileWrapperWrapperWrapperWrapper" just doesn't have the same ring to it. In that case I shorten it to "ReadFromFileWWWW".
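The collapsing rule above is mechanical enough to script. Here is a minimal Python sketch; the helper name and the two-level cutoff are my own choices for illustration, not part of the article's convention:

```python
def wrapper_name(base, depth):
    """Build a label for a function `depth` wrapper levels above `base`.

    Spell out "Wrapper" while the label stays readable (up to two
    levels, an arbitrary cutoff), then collapse each level to a
    single "W", so four levels deep becomes base + "WWWW".
    """
    if depth <= 2:
        return base + "Wrapper" * depth
    return base + "W" * depth

# wrapper_name("ReadFromFile", 1) -> "ReadFromFileWrapper"
# wrapper_name("ReadFromFile", 4) -> "ReadFromFileWWWW"
```

A renaming script inside IDA could apply this each time it walks up a chain of thin call-through functions.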
For arguments and locals I use Hungarian notation for its data type definitions. This seems to be the most descriptive method for associating needed type information with a variable name.
In general, arguments and locals are named in similar fashion. The only difference is that I prepend "arg_" to an argument's name in a function. This lets me easily differentiate between the two. If you need position information as well, you can append it to the prefix, making a name like "arg0_" or "arg_4_", whichever is more natural to you.
Let's pretend we have a local integer that contains a count. Using Hungarian notation I would call it "dwCount". To me this specifies its size (I'm assuming dword ints, of course) and its purpose. If it were a pointer I'd prepend a "p" to get "pdwCount". I realize people may groan at how this looks. That's fine, but looking at this label I can instantly tell we have a pointer to a 32-bit integer being used as a count. If this were an argument we would use "arg_dwCount" or "arg0_dwCount". To satisfy those who may not always be on 32-bit platforms, you could also label by size, e.g. "i64Count".
If we also need signedness information for the data type, we can add that as well. Sometimes the signed distinction is unnecessary, but I favor more information rather than less. Our dword integer above would become "udwCount" or "sdwCount", and, admittedly, the ugliest name, "pudwCount", denotes a pointer to an unsigned dword.
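The qualifier order implied by "pudwCount" (pointer first, then signedness, then size) can be captured in a tiny helper. A sketch under my own naming, assuming Python:

```python
def hungarian(base, size="dw", pointer=False, signed=None):
    """Compose a Hungarian-style label: [p][u|s]<size><Base>.

    `signed` is None when the distinction doesn't matter,
    True for signed ("s"), False for unsigned ("u").
    """
    prefix = "p" if pointer else ""
    if signed is True:
        prefix += "s"
    elif signed is False:
        prefix += "u"
    return prefix + size + base

# hungarian("Count")                            -> "dwCount"
# hungarian("Count", pointer=True, signed=False) -> "pudwCount"
```

Encoding the order in one place keeps labels consistent across an IDB, which is most of what a convention buys you.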
Here is a list of the data types I often encounter.
b    Byte      bCount
w    Word      wCount
dw   Dword     dwCount
p    Pointer   pCount, pwCount, psdwCount
sz   String    szName
a    Array     aNames
s    Struct    sNames
Alternatively, you could also use the C identifiers char, short, or long if you want. Whatever works for you.
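The prefix table lends itself to a simple lookup, with one wrinkle: "sz" must be tried before "s", or "szName" would decode as a struct. A hedged sketch with names of my own invention (the "p"/"u"/"s" qualifiers are deliberately left out; they would be stripped before the size lookup):

```python
# Size prefixes from the table above, mapped to their type names.
SIZE_PREFIXES = {
    "dw": "Dword",
    "sz": "String",
    "b": "Byte",
    "w": "Word",
    "a": "Array",
    "s": "Struct",
}

def size_of(label):
    """Return the data type encoded in a label's size prefix, or None.

    Longest prefixes are tried first so "szName" matches "sz" (String)
    instead of "s" (Struct). The base name is expected to start with an
    uppercase letter, per the UpperCamelCase convention.
    """
    for prefix in sorted(SIZE_PREFIXES, key=len, reverse=True):
        rest = label[len(prefix):]
        if label.startswith(prefix) and rest[:1].isupper():
            return SIZE_PREFIXES[prefix]
    return None
```

For example, size_of("dwCount") yields "Dword" while size_of("sNames") yields "Struct"; an unprefixed label returns None.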
Global data varies. It could be a handle, a jump table, a global variable, or hundreds of other things, so you may need to work on a case-by-case basis. Normally, however, I use the C-style ALL_UPPERCASE_GLOBAL nomenclature. Since I am used to this for global variables, it works well for me. If we had a jump table that handled packet processing, we could name it "PACKET_HANDLER".
Branches are your intraprocedural jumps to other basic blocks in the function. IDA names these "loc_xxxxxxxx". Oftentimes we want to rename them, for instance when we know the branch target is a basic block that returns from the function.
For these instances I stick to the old C syntax of lower_case_underscore names. It helps me differentiate between functions and basic blocks easily. It also seems to be more readable in certain cases and stands out less. Let's pretend the basic block currently named "loc_7E4D5F56" returns true: I would label it "return_true". If it returns false, I'd go with "return_false". Some other common labels are "check_null", "check_counter", "begin_for_loop", and "throw_exception". These labels are useful for explaining basic blocks at a single glance.
Bookmarks are used to save a particularly interesting location. In general these can be free-form and as descriptive as possible. A good label will tell you why the location is important. Examples I often use are "read tcp socket data" or "read from configuration file", keeping everything lowercase and omitting punctuation. I also have the ubiquitous "im here" or "here" mark indicating my last position in the IDB.
Comments should be readable and generally a single line. To me it's strange to see multiple lines of comments on a single address. You should insert any data you may have, or references to other addresses if need be. Remember, any address you put in a comment that IDA has a reference for can be followed in the IDA GUI.
Creating names using a convention you are comfortable with helps everyone. Try to find something you feel is beneficial and it will become second nature. I don't know how many times I've gone back to an IDB and not known what was going on because I didn't name things properly. Forget trying to open someone else's IDB.
I would be very interested in hearing about the conventions you personally use. I certainly do not think my way is bulletproof or the absolute best. Everything can be improved and expanded. Please leave a comment if you have some suggestions, maybe one day everyone will use a similar style! In the very unlikely case people actually agree on a naming standard I'll draft up a document with more detail that can be used by everyone.
Hope you enjoyed this week's MindshaRE.
Posted: 02 Oct 2008 11:13 AM CDT
Posted: 02 Oct 2008 11:05 AM CDT
What do you think our new financial law will be? What piece of legislation will be enacted by our government to protect us from the greed that caused this current financial crisis? Last time it was Sarbanes-Oxley. Who will be the poster child for our current financial crisis? Who will be the “Keating 5” this time around? You know it is coming. It has every other time greed has torpedoed our economy. And it is an easy target for any politician when there is only one side to an issue. I mean, how many voters are pro-financial crisis?
I am actually asking this as a serious question. I am really at a loss for a plan of action that would be effective in stopping financial institutions from making bad loans, or how the government could effectively regulate and enforce. The typical downsides to bad business practices (falling stock value & bankruptcy) have been nullified with mergers and government funding. In this case the blind greed seemed to be evident from top to bottom, and not just within a company or region, but the entire industry. From financial institutions, to buyers, and most of the parties in between. Yes, lenders skirted process and sanity checks to be competitive, but it took more than one party to create this mess. Buyers wanted more than they could afford, and eagerly took loans that led to financial ruin. Real estate agents wrote the deals as quickly as they could. Mortgage brokers looked for any angle to get a loan or re-fi done. Underwriters in absentia. Appraisers “made value” to keep business flowing their way. You name it, everyone was bending the rules.
So that really is the question on my mind: what will the new regulation comprise? How do you get businesses to say ‘no’ to new business? How do you keep competitive forces at bay so that this type of activity doesn't happen again? My guess (and why I am blogging about it) is that enforcement of this yet-to-be-named law will become an IT issue. Like Sarbanes-Oxley, much of the enforcement, controls, and systems, along with the separation of duties necessary to help with fraud deterrence and detection, will be automated. Auditors will play a part, but the document control and workflow systems in place today will be augmented to accommodate it.
Let’s play a game of ‘trifecta’ with this … put down the name of the company you think will be the poster child for this debacle, the name of the politician who will sponsor the bill, and the law that will be proposed. I’ll go first:
Poster Child: CountryWide
Politician: John McCain
Law: 3rd party credit-worthiness verifications and audit of buyers
If you win I will get you a Starbucks gift card or drinks at RSA 2010, or something.
Posted: 02 Oct 2008 10:55 AM CDT
Posted: 02 Oct 2008 10:53 AM CDT
From the ODNI: “A groundbreaking new policy from the Office of the Director of National Intelligence changes how the intelligence community and, by influence, the entire federal government will build, validate and approve information technology systems. The policy requires common security controls and risk-management procedures – a unified approach to enhance collaboration.
Intelligence Community Directive 503 covers a lot of ground, but two key details stand out: There will be a single certification and accreditation process, which means all systems must follow the same authorized security requirements. Systems managers, the policy adds, should accept security risks when necessary to yield a decision advantage from timely and accurate intelligence.
Those measures will make it easier for the IC to adopt cutting-edge technology. They also foster reciprocity as well as information sharing. If one IC element certifies a system or major application, then others in the community can trust that it is secure without spending more time and money to duplicate tests.
Director of National Intelligence Mike McConnell signed the directive on Sept. 15. It cancels and replaces Director of Central Intelligence Directive 6/3 and an accompanying implementation manual, which governed the management of intelligence information systems for the past nine years.
ODNI officials said the new policy transforms the way that secure systems will be brought on line, so that greater trust will be sown across the IT enterprise. Plus, the changes will add efficiency to the entire process, they noted.
The effort, begun roughly two and a half years ago, is the outgrowth of an intense push led by the ODNI, with critical support from several IC partners – especially the federal Committee on National Security Systems, the National Institute of Standards and Technology, and the Office of the Assistant Secretary of Defense for Networks and Information Integration/Chief Information Officer in the Department of Defense.
The ODNI was represented by its Office of the Associate Director of National Intelligence and Chief Information Officer.
Buy-in from the committee, NIST and DoD was invaluable, giving the policy impact beyond the intelligence community. Furthermore, the policy's status as an official standard of the Committee on National Security Systems ensures alignment of certification and accreditation processes across all federal entities.
Intelligence Community Directive 503 is available online at http://www.dni.gov/electronic_reading_room/ICD_503.pdf.
The Director of National Intelligence oversees 16 federal organizations that make up the intelligence community. The DNI also manages the implementation of the National Intelligence Program. Additionally, the DNI serves as the principal adviser to the president, the National Security Council and the Homeland Security Council on intelligence issues related to national security.”
Posted: 02 Oct 2008 10:48 AM CDT
I just read an article by Phil Muncaster in computing.co.uk. It details a keynote speech by Neil MacDonald, VP of Gartner research, at this week's Gartner Security Summit 2008. I was not at this event, so I can't report first hand on it, but taking Phil's article at face value, it seems that Neil was blaming security vendors for security professionals not being able to keep pace with the changing face of security threats. To me this is like blaming Smith & Wesson for not making better guns for police officers. The fact that the bad guys are doing bad things somehow doesn't enter the equation. IT security progress is being held back because the threats we are facing are growing more complex and sophisticated! Let's not confuse the people trying to help with the solution with the people causing the problem.
On top of this, there are a lot of security vendor products out there that are not being used. I have yet to speak to an IT security professional who has the budget to get all of the security tools, training, and services they need. Overall, the security industry is constantly trying to make 30 cents out of a quarter. In an environment where the bad guys are making lots of money, resource-starved security professionals are waging this war with one hand tied behind their back. It is not a lack of security tools; it is a lack of resources and money to buy and deploy them. Don't underestimate the "deploy them" part of it. How many times have we seen hard-won budget dollars spent buying security products that never get fully deployed?
That is not to say that security vendors are without blame. Security products are too hard to use, don't play nicely with each other and we don't do a good job of arming security professionals with compelling value propositions to sell the solutions up the chain.
Posted: 02 Oct 2008 10:31 AM CDT
Amazon.com, Inc. (NasdaqGS: AMZN) has announced the availability (later this autumn) of Amazon EC2 (Elastic Compute Cloud) with Microsoft Corp. (NasdaqGS: MSFT) Windows Server 2008 and SQL Server 2008 databases. The full announcement after the jump.
From the Amazon Web Services announcement: “We are excited to let you know that Amazon Elastic Compute Cloud (Amazon EC2) will offer you the ability to run Microsoft Windows Server or Microsoft SQL Server starting later this Fall. Today, you can choose from a variety of Unix-based operating systems, and soon you will be able to configure your instances to run the Windows Server operating system. In addition, you will be able to use SQL Server as another option within Amazon EC2 for running relational databases.
Amazon EC2 running Windows Server or SQL Server provides an ideal environment for deploying ASP.NET web sites, high performance computing clusters, media transcoding solutions, and many other Windows-based applications. By choosing Amazon EC2 as the deployment environment for your Windows-based applications, you will be able to take advantage of Amazon's proven scalability and reliability, as well as the cost-effective, pay-as-you-go pricing model offered by Amazon Web Services.
Our goal is to support any and all of the programming models, operating systems and database servers that you need for building applications on our cloud computing platform. The ability to run a Windows environment within Amazon EC2 has been one of our most requested features, and we are excited to be able to provide this capability. We are currently operating a private beta of Amazon EC2 running Windows Server and SQL Server. Please go to aws.amazon.com/windows if you are interested in being notified later this Fall when the offering is released broadly.”
Posted: 02 Oct 2008 09:13 AM CDT
The malware challenge contest began yesterday, and from what we can tell it's very popular. According to our logs, we had over 100 downloads of the malware for the challenge, from over a dozen countries.
For those who don't know yet, the malware challenge is a contest to analyze a piece of malware and find out what it does. The contest runs from October 1 to October 26 and the results will be presented at the Ohio Information Security Summit. Of course, we have lots of cool prizes to give away!
We have designed the contest so that if you are new to malware analysis you'll still have a great shot at winning prizes. We're going to be looking more at the way people analyze the malware than at whether they get the right answers. In other words, if you're unsure about it, still participate. The worst that can happen is that you learn something in the process, and you might even win a cool prize!
Also, thanks to all who have been helping advertise it! Without you no one would know about the contest.
I look forward to seeing everyone's submission!
Posted: 02 Oct 2008 08:11 AM CDT
It probably isn’t surprising to anyone that Tom-Skype is being monitored. “Breaching Trust” details the process by which conversations with matching keywords are uploaded to a webserver. The surprising bit is that the server is pretty much accessible to anyone. From the report:
Update (2 Oct 08, 1617GMT): Some other news organizations have picked this up:
Posted: 01 Oct 2008 09:30 PM CDT
Today, another vulnerability has been making the headlines, with various industry security professionals predicting apocalypse, genocide, and famine, along with everything in between. It first started earlier this summer, when Dan Kaminsky, in a multi-vendor coordinated effort, told the world of his DNS vulnerability. Then came the BGP hijacking, disclosed by Tony Kapela and Alex Pilosov at Defcon. Granted, these were serious issues and, not to discredit their research, the vulnerabilities themselves were nothing truly groundbreaking. Both DNS poisoning and BGP hijacking are possible by design in the RFCs; it was all just a matter of enumerating the various ways of doing it.
Following that came RSnake’s and Jeremiah Grossman’s browser clickjacking bug, which, once disclosed to Adobe, Adobe took it upon themselves to fix within Flash, asking both researchers to cancel their OWASP presentation at AppSec NYC 2008 last week. Today (or rather this week) came Robert E. Lee’s and Jack Louis’ SYN cookie DoS vulnerability, affecting almost every TCP/IP stack implementation. (Why people are even using SYN cookies is beyond my comprehension; they’re a hack and do not mitigate DoS attacks, though that’s a separate discussion on its own.) [Edit (10/02/2008 11:30): I misread the original post and it is not a vulnerability with SYN cookies. Robert was using SYN cookies as an analogy. See Outpost24’s TCP DOS Attack Explained]
The common thread among these vulnerabilities? They were all touted as super-critical vulnerabilities that could bring down the internet and pwn every being in existence. In addition, the researchers behind them opted not to disclose details of the vulnerability. What this created was an incentive, or challenge, for others to discover the vulnerability before the original researchers decided to fully disclose. It took about two weeks before Halvar figured out Dan K’s bug, and only another couple of hours for Arshan to figure out the Flash/clickjacking vulnerability.
I read a Slashdot comment earlier today, which I found hysterical, that poked fun at RSnake’s “Robert and Jack are smart dudes.” I know RSnake is a smart dude too, but really, at the end of the day, you’re taking our word for it. And to quote Bruce Potter, “Don’t believe anything I say.”
But seriously though, I think the blogosphere is doing a disservice hyping these vulnerabilities to no end, and researchers doing a disservice to themselves when they disclose this way. Don’t tell the world until you’re ready. If you’re not ready, stay home. The security industry needs to stop crying wolf, because not everybody holds security to the same level of attention as we do. People are getting tired of the fear, uncertainty and doubt we spread.
Instead, let’s focus on fixing the problems and providing lessons learned so these vulnerabilities don’t crop up again. That’s what people truly want to see. If you discover a vulnerability and want to report it to the vendor, that’s great! Continue to work with the vendor until a patch has been released before going public — even to announce you have something. Just please, don’t come out and ask us to pick a hand when you know both are empty.
Posted: 01 Oct 2008 09:04 PM CDT
Posted: 01 Oct 2008 01:32 PM CDT
Posted: 01 Oct 2008 01:29 PM CDT
CNET reports that Kevin Mitnick (founder of Kevin Mitnick Consulting, LLC, released from incarceration on January 21st, 2001, after serving 5 years) was recently detained at the Hartsfield-Jackson Atlanta International Airport port of entry by CBP, and subsequently released, after a journey to Colombia. CNET: Kevin Mitnick detained, released after Colombia trip
 CNET: Hacking Caller ID: unblocking blocked phone numbers
 Wired.com: Kevin Mitnick Tells All in Upcoming Book — Promises No Whining
 Forbes: Reformed Hacker Looks Back
 United States Customs and Border Protection
Posted: 01 Oct 2008 01:27 PM CDT
Posted: 01 Oct 2008 12:51 PM CDT
Today, the PCI Council released the final version of PCI DSS 1.2. We wrote about this upcoming release several times in the past and even had a pretty successful webinar.
In my opinion, adding the testing procedures and emphasizing some of the tests and steps that should be taken deflates one of the main areas of controversy and simplifies section 6.6. As you know, we believe that choosing between code review and a Web Application Firewall is a false dilemma. Amichai and I spearheaded industry efforts evangelizing this idea, and it looks like it has resonated well. PCI 1.2 takes the air out of the code review option and holds up a WAF as the optimal choice for addressing section 6.6.
Posted: 01 Oct 2008 11:35 AM CDT
Posted: 01 Oct 2008 11:11 AM CDT
Yesterday, following up after recording the podcast on clickjacking, I was talking with Robert Hansen about the TCP flaw some contacts of his found over in Sweden. He wrote it up in his column on Dark Reading, and Dennis Fisher over at TechTarget also has some information up.
Basically, it’s a massive unpatched denial-of-service attack that can take down nearly anything that uses TCP, in some cases forcing remote systems to reboot or potentially causing local damage. Codified in a tool called “Sockstress”, by Robert E. Lee and Jack C. Louis, who seem to be having trouble getting the infrastructure vendors to pay attention. I can’t help but think it’s because they are with a smaller company in Sweden; had this fallen into the hands of one of the major US vendors/labs, methinks the alarm bells would be ringing a tad louder.
From what Robert told me, supported by the articles, this tool allows an attacker to basically take down anything they want from nearly anywhere (like a home connection).
Robert and Jack are trying to report and disclose responsibly, and I sure as heck hope the vendors are listening. Now might be the time for you big end users to start asking them questions about this. It’s hard to block an attack when it takes down your firewall, IPS, and the routers connecting everything.
One interesting tidbit- since this is in TCP, it also affects IPv6.
Posted: 01 Oct 2008 11:00 AM CDT
The following picture was taken when we had to rush a laptop to see the doctor. If you look carefully, you can see that the Windows operating system is booting in safe mode.
(Click on the image to see a larger picture)
I hope that I do not sound annoying when I ask you to think how this poor laptop's journey could have been prevented. Think Risk Management....
Posted: 01 Oct 2008 10:54 AM CDT
Greg Young over at Gartner has a humorous post on possibly the best way to make money in network security- the “Security Silly Jar”. Just drop in a quarter anytime someone says something stupid from the list. My favorite is number 9:
If you don’t know Greg, he’s the lead for network security over at Gartner and someone definitely worth reading…
Posted: 01 Oct 2008 10:39 AM CDT
Excerpted from an ISC(2) announcement yesterday: In support of the month, ISC(2) has launched Cyber Exchange where you can download original cyber security awareness materials at https://cyberexchange.isc2.org. The Cyber Exchange houses free security awareness tools from around the world, designed...
Posted: 01 Oct 2008 12:00 AM CDT
Posted: 30 Sep 2008 11:15 PM CDT
This is a very interesting article by Robert Westervelt over at Tech Target, and I wanted to make a couple follow-on comments.
Way back when, as a DBA, my morning ritual was to get into the office, grab a cup of coffee, and review the database and web app logs, just to make sure the databases were running smoothly and there was nothing unusual going on. I had a single web app and 5 or so databases. It took about 30 minutes. But that was pre-tech-collapse, when DBAs only had 10 or so databases to manage. If you are managing 100 or more databases, you are not reviewing logs on a regular basis without automation. Whether it be security, systems management, or configuration management, you have to have help. And today, you are buying a tool for each, and of those, 2 of the 3 are not typically supplied by the database vendor or the tools vendor. We talk a lot about security products for databases here at Securosis, but few of them operate the way that DBAs and IT operations personnel want them to work. Yes, I understand separation of duties, and I understand that the DBA is not the best person to provide security analysis, but still, a single platform providing all these operational aspects would make sense.
I used to love going to the IOUG events around the country. I gave presentations at some, but I wanted to go because I always learned something from the lectures or presentations. There was such a wealth of knowledge, and when you have hundreds of DBAs with unique problems who are willing to experiment, they often run across very cool solutions. I ran across some Perl scripts once for data discovery that were really amazing, and I borrowed from this source as much as I could. It dawned on me that Oracle has an amazing resource here and does not leverage it enough for either their own benefit or their users’. The model I am thinking of is Firefox and its community plug-ins. It would be nice to have the ability to browse and download utilities from the community at large and try them out. OEMs could really use that kind of lightweight option. And, yes, this means I have my doubts that Fusion middleware is going to be leveraged by the people who manage Oracle platforms and databases.
Posted: 30 Sep 2008 08:47 PM CDT
From one of my favorite blogs, GNU Citizen, comes this simple and elegant proposal for authentication. It is only suitable for lower value transactions, but it could form the basis for stronger authentication, and it sure beats complicated registration processes. I have come to regret some of the heavier processes I've put on some sites I maintain, and this might do the trick.
Posted: 30 Sep 2008 08:05 PM CDT
In the write-up, the authors list crimes ranging in time from 1989 (The WANK Worm) when most people had never even heard of this odd thing called the Internet, to more recent times (2008). Some of the mysterious crimes discussed I had never even heard about, which either says a lot about me, or is a serious compliment to the authors.
The list of mysteries is made up of: