Tuesday, November 4, 2008

Spliced feed for Security Bloggers Network

Network Security Podcast, Episode 126 [Network Security Blog]

Posted: 04 Nov 2008 07:19 AM CST

This is a special Get Out and Vote episode.  Rich is in Russia of all places and Martin is on the road most of today, so this episode was recorded on October 31, 2008, Halloween.  And there isn’t much scarier today than Direct Recording Electronic (DRE) voting machines.  That might make a good costume next year.  In any case, exercise your right and responsibility to vote today!

Network Security Podcast, Episode 126, November 4, 2008

Show Notes:

PS. We took great pains to make sure the audio quality was a lot better this week. Thanks for listening.



Is Twitter numbing our emotions? [Srcasm]

Posted: 04 Nov 2008 07:00 AM CST

When I am ready to head out for the night to the local watering hole or the amazing place that reminds me of my fraternity house basement, I send a quick message to Twitter to let people know where I’m going.  I have the policy that if I post it to Twitter, it’s open to anyone and that includes coming to hang out and grab a beer or dinner with friends.

One thing I’ve begun to realize, though, is that some people won’t come out unless I directly invite them. Whether they feel unwelcome or simply want a personal, “Hey, I’m thinking about you and I would enjoy your company,” is up in the air. I guess it differs for each person.

One thing is clear: we may be leaving people out of our lives by using services like Twitter, FriendFeed, or Facebook to blast out invites instead of more traditional methods like a phone call, an email, or an SMS. What do you think? Are we losing touch with our humanity? Do we feel that we no longer need to communicate one-on-one in a world of micro-blogging and one-to-many services? Should we be focusing on those closest to us by sending them personal invitations or notes instead of expecting them to keep up with the flood of information on the internet?

Help a Hacker [Andy, ITGuy]

Posted: 04 Nov 2008 04:51 AM CST

A year or so ago I became a fan of the work that Johnny Long was doing. Not only his Google Hacking, No Tech Hacking, and other cool things, but also his Hackers for Charity work. Back in April I had the pleasure of seeing Johnny give his No Tech Hacking talk, and I met him afterwards. We spent a few minutes talking about Hackers for Charity. At that time I encouraged all of you to check out the hackersforcharity.org site and do what you could to help with this endeavor. Today I'm renewing that call to action. There are several things you can do that are easy, enjoyable, and in some cases free:

  • Buy the book No Tech Hacking by clicking through to the Amazon site directly from Johnny's site. When you do this, all the proceeds go directly to Hackers for Charity.
  • Buy an "I Hack Charities" vinyl label for your laptop from here. Again, all the proceeds go to Hackers for Charity.
  • Donate time, money, or equipment to the cause.
  • If you blog or podcast, tell your readers and/or listeners about the work that is going on at Hackers for Charity.

Now there is something new that you can do. Peter Giannoulis, founder of The Academy web site, is offering to donate $1 for every new member that joins www.theacademy.ca during the month of November. So not only do you get to make a charitable donation that costs you nothing, but you also become a member of a very cool site aimed at making your job as an information security practitioner easier.

So I encourage all of you to take a look at the work that hackers for charity is doing and think about how you can help out and then do what you can.

Is Nothing Sacred? Data Breach at Texas Lottery [Digital Soapbox - Security, Risk & Data Protection Blog]

Posted: 04 Nov 2008 12:47 AM CST

[1] http://www.chron.com/disp/story.mpl/headline/metro/6089177.html

Apparently you're not even safe playing the Lottery lately.

In this article from October 31st (a little dated, I know, but I'm just getting around to reading it) it's apparent that lax data security policies and poor judgment were the cause of this breach. What's astonishing is the complete and utter disregard this employee had for the super-sensitive data (including social security numbers) he "copied and burned to DVD"... what's even more disturbing is his motive:
"I indiscriminately copied all the files from the My DOC folder to a CD/DVD which I carried (to subsequent jobs)"... The employee added he wanted the information "for possible future reference as a programmer at other state agencies."
What possible future reference could he have had from this live data about real people? It continues to amaze me how people just haven't paid attention to the news media and other information outlets discussing how dangerous information like social security numbers is. Did this guy crawl out from under a rock?

links for 2008-11-03 [Srcasm]

Posted: 03 Nov 2008 11:03 PM CST

I’m Sick Of Losing! [BumpInTheWire.com]

Posted: 03 Nov 2008 10:55 PM CST

I’m so sick and tired of cheering for losers. The Royals suck, the Chiefs suck, Nebraska sucks, my alma mater (Iowa State) sucks. I’m tired of watching my teams lose. Plain and simple. I seem to have a profound negative effect on teams I support. This does not bode well for Team Obama. Don’t forget to vote!

In nerdier news… it’s amazing how much easier things can be when you take the time to plan and test, test and plan, plan and test before making a major network change. I spent a few days going over the design in a lab, drew everything out, and typed out the config in advance. When it came time to do the work yesterday, all that had to be done was copy and paste, along with a couple of cable moves. I’ve spent more time updating documentation today than I actually spent doing the work yesterday. An ounce of preparation is worth a pound of cure.

In other Bump news, I attempted to sign up for that 50 minutes of hell again tonight. 50 minutes of hell is better known as a spin class. Divine intervention struck, though, and the class was already full, with a four-person waiting list. I wish I could say I shed a tear over that.

Came across this press release today [StillSecure, After All These Years]

Posted: 03 Nov 2008 10:16 PM CST

RENOWNED SECURITY BLOGGER MIA SINCE TAKING JOB

The Pragmatic, Inciteful Mike Rothman Has Gone Missing From His Blogging Since Taking a "Real Job"

(Alpharetta, GA. – November 2, 2008) – The mouth of the south, renowned security blogger, Mike Rothman has turned up missing in action shortly after announcing his acceptance of a full time position as a vendor puke with eIQ. Several inquiries have been made, but even "the boss" has been mum on his whereabouts. Several prominent security experts are already suspecting foul play and some even whisper of some sort of left wing conspiracy.

Rothman originally sounded optimistic about continuing his blogging workload and not abandoning his legion of fans in the RSS feed world. However, it appears that a "real job" has proven more than he bargained for. Could it be that, after so long making fun of others who blogged in addition to their full-time jobs, the task is more daunting than Mike can handle? Could the Security Twits have kidnapped him? Where is Mike Rothman?

Other rumors flying around the blogosphere have reports of Rothman sightings. One report had him canvassing door-to-door on behalf of Ron Paul in Montana. Still others say that Rothman has been in an "undisclosed location" (the same undisclosed location Dick Cheney uses) working on Barack Obama's cybersecurity plans. Rothman's name has been floated as a possible Czar in an Obama administration. Some are saying Mike was holding out to be the Sheik of cybersecurity, not the Czar. Others say Mike was far too pragmatic to get mixed up in politics.

Several other well known security bloggers were asked to comment on Rothman's whereabouts:

Chris Hoff of Rational Survivability said, "I hope and pray for the best for Mike. Unfortunately my suspicion is that he has been virtualized and sucked up into the cloud. We all know how insecure that can be."

Martin McKeay of Network Security Blog said, "You know, Mike always made fun of my privacy views, but for once I wish we had a way to get past privacy laws and find out what really happened to Mike. I may have to don my purple tights and Captain Privacy suit to lead the search for Mike."

Rich Mogull of Securosis had this to say: "Mike did ask me for a hazmat suit that I used for the Democratic convention. I hope something did not go terribly wrong and Mike winds up as a green, muscular superhero."

Amrit Williams of Techbuddha had nothing to say at all about Mike. In fact he said he never really liked Mike anyway.

JJ of Security Uncorked said, "I think Mike is just holed up somewhere in the Deep South working on the next set of 802.1X standards. But if I don't start blogging more, they may be putting out MIA releases on me next."

Richard Stiennon (sorry Rich, couldn't find your blog URL) said, "Though I am sorry to see Mike's disappearance, it does leave a real vacuum for blogging security analysts, and Stiennon's first law is 'blogging abhors a vacuum.'"

Alan Shimel of StillSecure, After All These Years, put perhaps the finishing touch on the Rothman situation, saying, "You know, Mike was a fast-talking NY guy who always spoke his mind. His up-front, in-your-face style might have just rubbed someone the wrong way. He could very well be the security industry's Jimmy Hoffa. But you know, being the huge Giants fan he is, I am sure he would not mind being buried in the end zone of the new Giants Stadium."

In the meantime a Ten ($10.00) Dollar reward has been offered by the Security Bloggers Network for any information leading to the whereabouts of Rothman. Anyone with information regarding this mystery can email podcast@stillsecure.com. All information will be kept confidential, as well as HIPAA and PCI compliant.

**All names and quotes are purely fictitious. Who knows where Rothman really is?**

Database Activity Monitoring & Event Collection Options [securosis.com]

Posted: 03 Nov 2008 04:32 PM CST

During several recent briefings, chats with customers, and discussions with existing clients, the topic of data collection methods for Database Activity Monitoring has come up. While Rich provided a good overview for the general buyer of DAM products in his white paper, he did not go into great depth. I was nonetheless surprised that some of the people I was discussing the pros and cons of various platforms with were unaware of the breadth of data collection options available. More shocking was a technical briefing with a vendor in the DAM space who did not appear to be aware of the limitations of their own technology choices… or at least would not admit to them. Regardless, I thought it might be beneficial to examine the available options in a little more detail, and to talk about some of the pros and cons here.

Database Audit Logs

Summary: Database Audit Logs are, much like they sound, a log of database events that have already happened. The stream of data is typically sent to one or more files created by the database platform, and may reside at the operating system level or may be contained within the database itself. These audit logs contain a mixture of system resource recordings, transactional events, user events, system events, and other data definitions that are not available from other sources. The audit logs are a superset of activity. Logging can be implemented through an agent, or can be queried from the database using normal communication protocols.

Strengths: Best source for accurate data, and the best at ascertaining the state of both data and the database. Breadth of information captured is by far the most complete: all statements are captured, along with trigger and stored procedure execution, batch jobs, system events, and other resource based data. Logs can capture many system events and DML statements that are not always visible through other collection methods. This should be considered one of the two essential methods of data collection for any DAM solution.

Weaknesses: On some platforms the bind variables are not available, meaning that some of the query parameters are not stored with the original query, limiting the value of statement collection. This can be overcome by cross-referencing the transaction logs or, in some cases, the system tables, but at a cost. Select statements are not available, and from a security standpoint this is a major problem. Performance of the logging function itself can be prohibitive: older versions of all the database platforms that offered native auditing did so at a very high cost in disk and CPU utilization, upwards of 50% on some platforms. While this has been mitigated to a more manageable percentage, overhead can still creep over 15% if auditing is not set up properly or too much information is requested from high-transaction-rate machines. Not all system events are available.
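
As a concrete illustration of collection by remote query, here is a minimal sketch assuming an Oracle database with traditional auditing enabled (AUDIT_TRAIL=DB) and the cx_Oracle driver; the account, DSN, and polling interval are hypothetical, and a real DAM product would add filtering, checkpointing, and failover:

    import datetime
    import time

    import cx_Oracle  # assumes the cx_Oracle driver is installed

    POLL_INTERVAL = 60  # seconds; a hypothetical tuning value

    def poll_audit_trail(cursor, since):
        # DBA_AUDIT_TRAIL is Oracle's traditional audit view; it records
        # completed activity, so this is after-the-fact collection.
        cursor.execute(
            """SELECT username, timestamp, action_name, obj_name, returncode
                 FROM sys.dba_audit_trail
                WHERE timestamp > :since
                ORDER BY timestamp""",
            since=since,
        )
        return cursor.fetchall()

    # Connection details are hypothetical; a read-only monitoring account
    # with SELECT privileges on the audit views is all that is required.
    conn = cx_Oracle.connect("dam_reader", "secret", "prod_db")
    cursor = conn.cursor()
    last_seen = datetime.datetime.now() - datetime.timedelta(seconds=POLL_INTERVAL)

    while True:
        for user, ts, action, obj, rc in poll_audit_trail(cursor, last_seen):
            print(f"{ts} {user} {action} on {obj} (status={rc})")
            last_seen = max(last_seen, ts)
        time.sleep(POLL_INTERVAL)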

Network Monitoring

Summary: This type of monitoring offers a way to collect SQL statements sent to the database. By watching the subnet through network mirror ports or taps, statements intended for a database platform can be 'sniffed' directly from the wire. This method captures the original statement, the parameters, and the returned status code, as well as any data returned as part of the query operation. It is typically an appliance-based solution.

Strengths: No performance impact on the database host, combined with the ability to collect SQL statements. On legacy hardware, or where service level agreements prohibit placing any additional load on the database server, this is an excellent option. It is also a simple and efficient method of collecting failed login activity. Solid, albeit niche, applicability.

Weaknesses: Misses console activity, specifically privileged user activity, against the database. As this is almost always a security and compliance requirement, this is a fundamental failing of this data collection method. Sniffers are typically blind to encrypted sessions, although session encryption is still a seldom-used feature within most enterprises, and not typically a limiting factor. Misses scheduled jobs that originate in the database. To save disk space, most do not collect the returned data, and some products do a poor job of matching failed status codes to the triggering SQL statements. "You don't know what you don't know": in cases where network traffic is missed, misread, or dropped, there is no record of the activity. This contrasts with native database auditing, where some of the information may be missing, but the activity itself will always be recorded.
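
To make the approach concrete, the sketch below (a toy, not any vendor's implementation) uses scapy to pull SQL text out of mirrored MySQL traffic; everything it misses, drops, or fails to reassemble is simply gone, which is the "you don't know what you don't know" problem in action:

    # A rough sketch: watch a span/mirror port for MySQL traffic. Assumes
    # unencrypted sessions on TCP 3306 and skips the TCP stream reassembly
    # that a real product would need.
    from scapy.all import Raw, sniff

    def extract_sql(pkt):
        payload = bytes(pkt[Raw].load)
        # A MySQL COM_QUERY packet is a 3-byte length, a 1-byte sequence
        # number, a 0x03 command byte, and then the SQL text itself.
        if len(payload) > 5 and payload[4] == 0x03:
            print(payload[5:].decode("utf-8", errors="replace"))

    # Requires root privileges and an interface attached to the mirror port.
    sniff(filter="tcp dst port 3306",
          lfilter=lambda p: p.haslayer(Raw),
          prn=extract_sql, store=False)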

OS / Protocol Stack Monitoring

Summary: This is available via agent software that captures statements sent to the databases, and the corresponding responses. The agents are deployed either in the network protocol stack, or embedded into the operating system to capture communications to and from the database. They see an external SQL query sent to the database, along with the associated parameters. These implementations tend to be reliable, and low-overhead, with good visibility into database activity. This should be considered a basic requirement for any DAM solution.

Strengths: This is a low-impact way of capturing SQL statements and parameters sent to the database. What’s more, depending upon how they are implemented, agents may also see all console activity, thus addressing the primary weakness of network monitoring and a typical compliance requirement. They tend to, but do not always, see encrypted sessions as they are ‘above’ the encryption layer.

Weaknesses: In rare cases, activity that occurs through management or OS interfaces is not collected, as the port and/or communication protocol varies and may not be monitored or understood by the agent.

System Tables

Summary: All database platforms store their configuration and state information within database structures. These structures are rich in information about who is using the database, permissions, resource usage, and other metadata. This monitoring can be implemented as an agent, or the information can be collected by a remote query.

Strengths: Useful for assessment, and for cross-referencing status and user information in conjunction with other forms of monitoring.

Weaknesses: Lacks much of the transactional information typically needed. Full query data not available. The information tends to be volatile, and offers little in the way of transactional understanding, or the specific operations that are being performed against the database. Not effective for monitoring directly, but rather useful in a supporting role.
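
Here is a minimal sketch of this style of collection, assuming PostgreSQL and the psycopg2 driver with hypothetical credentials; note how little it sees beyond the current instant, which is why this source belongs in a supporting role:

    import psycopg2  # assumes the psycopg2 driver is installed

    # The DSN and account are hypothetical; a read-only monitor role works.
    conn = psycopg2.connect("dbname=prod user=dam_reader")
    cur = conn.cursor()
    # pg_stat_activity is a volatile snapshot: a statement disappears from
    # it the moment it completes, so fast queries can be missed entirely.
    cur.execute(
        "SELECT usename, client_addr, state, query "
        "FROM pg_stat_activity WHERE state IS NOT NULL"
    )
    for user, addr, state, query in cur.fetchall():
        print(f"{user}@{addr} [{state}]: {query}")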

Stored Procedures & Triggers

Summary: This is the original method for database monitoring. Using the database’s native stored procedures and triggers to capture activity and even enforce policies.

Strengths: Non-transactional event monitoring and policy enforcement. Even today, triggers for some types of policy enforcement can be implemented at very low cost to database performance, and offer preventative controls for security and compliance. For example, triggers that make rudimentary checks during the login process to enforce policies about which applications and users can access the database, at which time of day, can be highly effective. And as login events are generally infrequent, the overhead is inconsequential.

Weaknesses: Triggers, especially those that attempt to alter transactional processes, carry a huge performance cost when placed in line with transaction processing. Stored procedure and trigger execution in line with routine business processing not only increases latency and processing overhead, but can destabilize the applications that use the database as well. The more policies are enforced, the worse performance gets. For general monitoring, this method of data collection has been all but abandoned.
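
As a sketch of the login-time enforcement just described, the snippet below (assuming Oracle and the cx_Oracle driver; the account names and policy are hypothetical) installs an AFTER LOGON trigger that rejects one account outside business hours:

    import cx_Oracle  # assumes the cx_Oracle driver

    # Hypothetical policy: the APP_BATCH account may only connect between
    # 08:00 and 18:59. Logons are infrequent, so the check is nearly free.
    LOGON_POLICY = """
    CREATE OR REPLACE TRIGGER restrict_batch_logon
    AFTER LOGON ON DATABASE
    BEGIN
      IF SYS_CONTEXT('USERENV', 'SESSION_USER') = 'APP_BATCH'
         AND TO_NUMBER(TO_CHAR(SYSDATE, 'HH24')) NOT BETWEEN 8 AND 18 THEN
        RAISE_APPLICATION_ERROR(-20001, 'APP_BATCH: logon outside business hours');
      END IF;
    END;
    """

    # Credentials are hypothetical; installing a DATABASE trigger requires
    # the ADMINISTER DATABASE TRIGGER privilege.
    conn = cx_Oracle.connect("sec_admin", "secret", "prod_db")
    conn.cursor().execute(LOGON_POLICY)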

Database Transaction Logs

Summary: Database transaction logs are often confused with audit logs, but they are very different things used for different reasons. The transaction logs, sometimes called 'redo' logs, are intended to be used by the database to ensure transactional consistency and data accuracy in the event of a hardware or power failure. For example, on the Oracle database platform, the transaction log records the statement first and, when instructed by a 'commit' or similar statement, writes the intended alterations into the database. Once this operation has completed successfully, the completed operation is recorded in the audit log.

Should something happen before this data is fully committed to the database, the transaction log contains sufficient information to roll the database backward and/or forward to a consistent state. For this reason, the transaction log records database state before and after the statement was executed. And due to the nature of their role in database operation, these log files are highly volatile. Monitoring is typically accomplished by a software agent, and requires that the data be offloaded to an external processor for policy enforcement, consolidation and reporting.

Strengths: Before and after values are highly desirable, especially in terms of compliance.

Weaknesses: The format of these files is not always published, and is not guaranteed. The burden of reading them accurately and consistently falls on the vendor, which is why this method of data collection is usually only available for Oracle and SQL Server. Transaction logs can generate an incredible quantity of data, which needs to be filtered by policies to be manageable. Despite being designed to ensure the consistency of the database, transaction logs are not the best way to understand database state. For example, if a user session is terminated unexpectedly, the data is rolled back, meaning previously collected statements are now invalid and do not represent true database state. Also, on some platforms there are offline and online copies, and which copies are read impacts the quality and completeness of the analysis.

Memory Scanning

Summary: Memory scanners are software agents that read the memory structures of the database engine. At some pre-determined interval, the agent examines all statements that have been sent to the database.

Strengths: Memory scanning is good for collecting SQL statements from the database. It can collect the original SQL statements as well as all the variables associated with the statement. They can also understand complex queries and, in some cases, resource usage associated with the queries. In some products, they can also examine those statements, comparing the activity with security policy, and alert if appropriate. This may be desirable when near-real-time notification is needed, and the delay introduced by network latency when sending the collected information to an appliance or external application is unacceptable. Memory scanning is also highly advantageous on older database platforms where native auditing introduces a substantial performance penalty, providing much the same data more efficiently.

Weaknesses: The collection of statements is performed on a periodic scan of memory. The agent "wakes up" according to a timer, with typical intervals of 5-500 milliseconds; the shorter the interval, of course, the more CPU intensive the scanning. As the memory structure is conceptually a circular buffer, the database routinely overwrites older statements that have completed. This means that machines under heavy load can overwrite statements before the memory scanner copies them. The higher the execution rate, the more likely this is to be the case. Scanning more frequently reduces the likelihood of missing statements, but raises the performance cost, so this must be carefully considered and tuned.

Memory scanning can also be somewhat fragile: if the vendor changes the memory structures when updating the database, the scanner may break. In that case it might miss statements, or it could find garbage rather than SQL statements.
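
The overwrite risk is easy to see with a back-of-the-envelope model. This self-contained sketch, using purely hypothetical numbers, treats the statement cache as a fixed-size ring buffer and estimates what each scan interval misses:

    def missed_fraction(arrivals_per_scan, buffer_slots):
        """Fraction of statements overwritten before the scanner reads them,
        assuming uniform arrivals into the ring buffer between scans."""
        if arrivals_per_scan <= buffer_slots:
            return 0.0
        return 1.0 - buffer_slots / arrivals_per_scan

    RATE = 10_000   # statements per second; a hypothetical heavy load
    SLOTS = 4_096   # ring-buffer capacity; also hypothetical

    for interval_ms in (5, 50, 500):
        arrivals = RATE * interval_ms / 1000.0
        print(f"{interval_ms:>3} ms scan interval: "
              f"{missed_fraction(arrivals, SLOTS):.1%} of statements missed")

Under this toy model the 500 ms scan silently drops nearly a fifth of the workload, while the 5 ms scan loses nothing but wakes up a hundred times more often: exactly the tuning trade-off described above.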

Vendor APIs

Summary: Most of the database vendors have unpublished codes and interfaces that turn on various functions and make data available. In the past, these options were intended for debugging, and performance was poor enough that they could not be used in production database environments. As compliance and audit become big drivers in the database security and DAM space, the database vendors are making these features 'production' options. The audit data is more complete, the performance is better, and the collection and storage of the data is more efficient. Today, these data streams look a lot like enhanced auditing capabilities, and can be tuned to provide a filtered subset while offering both better performance and lower storage requirements. There are vendors who offer this today, and some who have in the past, but availability remains spotty. Additionally, many of the options remain unpublished, but I expect to see more made public over time and used by DAM and systems management vendors.

Strengths: The data streams tend to be complete, and the data collected is the most accurate source of information on database transactions and the state of the database.

Weaknesses: Not fully public and not fully documented; or in some cases, only available to select development partners.

I may have left one or two out, but these are the important ones to consider. Now, what you do with this data is another long discussion. How to process it, how fast to process it, how to check for security and compliance violations, how to store it, and how to generate alerts provide many more ways to differentiate between vendors than simply the data they make available. Those discussions are for another time.

-Adrian

FREE Pass: CSI 2008 (DC Area) [Matt Flynn's Identity Management Blog]

Posted: 03 Nov 2008 03:16 PM CST

The CSI 2008 Security Conference will happen two weeks from now in the D.C. area. It actually starts on Sat., 11/15 and runs through Fri., 11/21, but the main conference runs three days, 11/17 through 11/19. It will be held at the Gaylord National Resort and will cover Identity 2.0, NAC, anti-virus, and virtualization, to name just a few topics.

I have been authorized to give away a FULL 3-Day Conference Pass FREE (an $1895 value).

I only have one to give, so I'll have a small contest. Here's how to enter:

CONTEST DETAILS

You must enter by Thurs. 11/6. I will contact the winner on Fri. 11/7.

To enter, send me the most creative, interesting, unusual, funny, or exciting thing that you've seen, heard-of, done, or would-like-to-do with Active Directory.

Be sure to include email, phone, company name and title in your response.

If you want to win but can't think of anything, try something like "use it to store network credentials." You never know; that might be enough to win ;)

Those of you who don't win can still take advantage of a 25% discount!
The 25% Discount code is: BLOG25

You can also go directly to the site for a FREE Exhibition-Only pass.

I look forward to reading your entries!

Cloud computing: Swarm Intelligence and Security in a Distributed World [Amrit Williams Blog]

Posted: 03 Nov 2008 12:43 PM CST


Reading through my blog feeds, I came across something Hoff wrote in response to Reuven Cohen's "Elastic Vapor: Life In the Cloud" blog; in particular I wanted to respond to the following comment (here):

This basically means that we should distribute the sampling, detection and prevention functions across the entire networked ecosystem, not just to dedicated security appliances; each of the end nodes should communicate using a standard signaling and telemetry protocol so that common threat, vulnerability and effective disposition can be communicated up and downstream to one another and one or more management facilities.

I also wrote about this concept in a series of posts on swarm intelligence…

Evolving Information Security Part 1: The Herd Collective vs. Swarm Intelligence (here)

The only viable option for collective intelligence in the future is through the use of intelligent agents, which can perform some base level of analysis against internal and environmental variables and communicate that information to the collective without the need for centralized processing and distribution. Essentially the intelligent agents would support cognition, cooperation, and coordination among themselves built on a foundation of dynamic policy instantiation. Without the use of distributed computing, parallel processing and intelligent agents there is little hope for moving beyond the brittle and highly ineffective defenses currently deployed.

Evolving Information Security Part 2: Developing Collective Intelligence (here)

Once the agent is fully aware of the state of the device it resides on, physical or virtual, it will need to expand its knowledge of the environment it resides in and its relative positioning to others. Knowledge of self, combined with knowledge of the environment, expands the context in which agents could effect change. In communication with other agents, the response to threats or other problems would be identified more efficiently, regardless of location.

As knowledge of self moves to communication with others, there is the foundation for inter-device cooperation. Communication and cooperation between seemingly disparate devices, or device clusters, creates collective intelligence. This simple model creates an extremely powerful precedent for dealing with a wide range of information technology and security problems.

Driving the intelligent agents would be a lightweight and adaptable policy language that would be easily interpreted by the agent's policy engine. New policies would be created and shared between the agents, and the system would move from simply responding to changes to adapting on its own. The collective and the infrastructure will learn. This would enable a base level of cognition where seemingly benign events or state changes, coupled with similarly insignificant data, could be used to lessen the impact of disruptions or incidents, sometimes before they even occur.

The concept of distributed intelligence and self-healing infrastructure will have a major impact on a highly mobile world of distributed computing devices; it will also form the foundation for how we deal with the loss of visibility and control of the “in the cloud” virtual storage and data centers that service them.
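
To make the pattern less abstract, here is a toy, self-contained sketch (all names invented) of agents doing base-level local analysis and gossiping findings to their peers, so knowledge spreads through the collective without any centralized processing:

    class Agent:
        """A toy intelligent agent: local analysis plus gossip to peers."""

        def __init__(self, name):
            self.name = name
            self.peers = []            # other agents it can signal
            self.known_threats = set()

        def observe(self, event):
            # Base-level local analysis; a stand-in for real detection logic.
            if "attack" in event:
                self.share(event)

        def share(self, threat):
            if threat in self.known_threats:
                return                 # already known, so the gossip stops here
            self.known_threats.add(threat)
            for peer in self.peers:    # signal up/downstream to the collective
                peer.share(threat)

    a, b, c = Agent("edge-1"), Agent("edge-2"), Agent("dc-1")
    a.peers, b.peers, c.peers = [b], [c], [a]   # a ring with no central hub

    a.observe("ssh brute-force attack from 203.0.113.9")
    print(c.known_threats)  # the finding reached every node without a hub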


Apple serviced me well [Srcasm]

Posted: 03 Nov 2008 11:59 AM CST

It was a dark and gloomy day in the Middleton-Kozak household when the MacBook’s power adapter ceased to function. Magda screamed out in agony over the fact that her battery would no longer charge and she would have no way of getting online. Oh wait, we had 4 computers, 3 Blackberrys, and a G1 in the household, so that wasn’t a problem, but the laptop not charging was an inconvenience to say the least.

Fast forward a day or two: we decided to take it to the Apple store at the King of Prussia mall on Halloween, the same day as the Phillies parade in Philadelphia. This meant there was almost no one at the mall for this time of year. We got into the Apple store and went to the Genius Bar (the place where the Apple geeks live to fix Apple gear).

After signing in, they told us it would be 5-10 minutes before we were seen. It was 2. After Tim, the friendly tech, took a look at the machine, he said that it wasn’t the battery or the power adapter, so it had to be the DC board in the laptop. Something was screwy with it: the machine would get power to run but no power to charge. He did, however, explain that it could be fixed by Apple for free (under the AppleCare plan). Another win for Apple.

Tim said that we should leave it with the store for 3-5 days so that they could replace the hardware and all would be right in the world again.  We made sure they had the proper information to get ahold of us when it was fixed and left to go shopping for clothing (which I despise).

About 2 hours later, Magda got a call on her cell phone from Tim. He left a voicemail saying that the laptop was finished and we could pick it up. The time to fix went from 3-5 days down to 2 hours. That was excellent, as we were still at the mall; a quick swing back over to the Apple store and we had the laptop back, and it took a charge. They get a huge thank-you from me for the speedy recovery of the beloved device.

While I am not an Apple or Mac fanboy, and I own many PCs as well as a few Apple devices, I will say that their customer service and support is among the best in the country. I have never had such a pleasant experience getting a piece of hardware fixed, and with that speed they won a new spot in my heart. I would go so far as to say that if Apple wanted to give me a MacBook Air for my new consulting job, I would be proud to use it. (Hint, hint.)

You talk like a delinquent [Emergent Chaos]

Posted: 03 Nov 2008 10:48 AM CST

This is interesting. I'm not sure how robust the finding is, but according to an analysis of LendingClub data on all past loans, including the descriptions of what the money was for, applicants using certain words in their descriptions are much more likely to default.

For our purposes define a Delinquency as either being late in your payments or having defaulted completely. The 10 words with the greatest p-values are below. [...]


Word       Loans With   P(Delinquency|No word)   P(Delinquency|Word)   p-value
also           215              0.067                  0.140            0.0004
need           608              0.062                  0.105            0.0015
business       233              0.069                  0.116            0.0038
live            91              0.070                  0.154            0.0057
already         64              0.071                  0.156            0.0059
other          285              0.068                  0.112            0.0081
bills          223              0.067                  0.135            0.0082
bill           279              0.066                  0.125            0.0117
interest       660              0.081                  0.053            0.0136
"Words and Credit Scores", Social Science Statistics Blog

Not something I've studied, but I wonder whether a neural network could successfully classify these loans.
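
For what it's worth, a toy version of that experiment takes only a few lines. This sketch assumes scikit-learn and a tiny made-up corpus of loan descriptions (1 = delinquent); a real test would run against the full LendingClub data:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.neural_network import MLPClassifier

    # Made-up stand-ins for real LendingClub descriptions (1 = delinquent).
    descriptions = [
        "need to pay other bills and am already late",
        "consolidating debt at a lower interest rate",
        "need money to keep my business alive",
        "refinancing high interest credit cards",
    ]
    delinquent = [1, 0, 1, 0]

    vec = CountVectorizer()                       # bag-of-words features
    X = vec.fit_transform(descriptions)
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(X, delinquent)

    print(clf.predict(vec.transform(["need help with my bills"])))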

Thoughts about Democracy in America [Emergent Chaos]

Posted: 03 Nov 2008 09:44 AM CST

There's a place in de Tocqueville where he talks about America's civic strength coming from the way we organize: those voluntary organizations which come together to solve a problem as a community. He pointed out that what we got from that was not merely that particular problem solved, but a sense of community and a willingness to solve problems without the heavy hand of government.

I am tremendously inspired by stories like "Daughter of slave votes for Obama." There's real progress for our country, within the course of a lifetime.

I've watched as a number of my friends have gone all out for Obama, some traveling on their own dime to knock on doors in states less blue than their own. I'm glad to see that level of enthusiasm: a politics of petty attacks is very likely to lose tomorrow, where a McCain who had been "the McCain of 2000" might well have won.

I worry about Obama's views on national service, including his goal of 50 hours of community service from every middle and high school student, and his goal of federalizing non-profits. I think that the value of non-profits comes from their volunteer nature and from their diverse goals. Federal dollars will be alluring for their sheer scale. They will also be distracting for many non-profits, forced, like many churches, to strangely bifurcate their activity to allow federal dollars to flow in. As de Tocqueville understood, much of the value of volunteerism, including volunteering for a political candidate, is that it brings us together as a civic society.

As I watch the outpouring of enthusiasm and of hope, I am hopeful that Obama is smart enough to understand that the real strength of our nation is not in Washington, and it's not in directives from Washington. It's in hundreds of millions of people pursuing their hopes and dreams. America is a diverse set of people with different hopes and different dreams, and the value of our democracy is that it has embraced and promoted the freedom of each of us to pursue our own dreams, chaotic though that may be.

Congrats to New Work City! [Srcasm]

Posted: 03 Nov 2008 07:59 AM CST

I spent the weekend down in Baltimore for Social Dev Camp East (pictures coming soon) and up in New York for the opening party for New Work City. It was a long and tiring weekend but it was so worth it!

Tony and the rest of the crew up there are incredible. They have the passion and the drive that's necessary to succeed, and they have friends from other coworking spots to help guide them along (like IndyHall).

As far as the space goes, it’s bangin’. It’s a large open area perfect for a lot of desks and a lot of collaboration. They have a conference room and even a phone booth so you can make those secret business calls without the whole world knowing what’s up.

All in all, they are set up for success. Their minds are in the right place and their passions are driving them. If you get the chance, stop by while you're in NY and say hi. You won't regret it.

The French crack down on illegal downloads [Domdingelom on security, fun and life]

Posted: 02 Nov 2008 04:14 PM CST

On Friday, the EUObserver published an interesting article on a new French law (http://euobserver.com/9/27026) that will cut off internet access for people caught illegally downloading copyrighted content three times.

To me, it's mind-boggling how the recording industry lobby has been able to push the French into accepting such a law. There was an amendment requesting to replace the cut-off with a fine, but it was not accepted because "The principle of a financial penalty changes the philosophy [of the bill], from instructive to repressive." And that in times when e-government is becoming more and more of a reality. Would we really allow a citizen or a family to be cut off from the intertubez for a year (yes, 365 jours!!)? Is making them pay XXX euros less repressive?

That's what you get when your prez marries a recording artist (* I'll leave the interpretation of the word "artist" to the reader's discretion).


Day 4 with the G1 [Srcasm]

Posted: 02 Nov 2008 03:48 PM CST

I decided to go ahead and dive in and try the new Android phone from HTC. The only thing I had to get over was that it was on T-Mobile only, and from what I remembered, their coverage around the country was horrible. It turns out, after 4 days of using, testing, beating on, and sharing the device with other people, that the service isn't bad at all, and the software and hardware rock!

First off, the hardware. The G1 is a solid phone with a slide-out keyboard. It's a little bit narrower than an iPhone and a bit taller as well. It's a tad heavier, but it fits very comfortably in your hand and next to your face while chatting with your lover, or a business associate.

As far as the software goes, the OS is bangin’!  It’s fast, simple to learn (almost as simple as the iPhone) and very robust.  Unlike the iPhone, you can run applications in the background and the software is open source.  This means that developers don’t have to get approval from Google or anyone else to sell their sweet wares.

I’ve been downloading, chatting, emailing and snapping pictures constantly.  Below are a few applications that I think many people would love to have on their phones:

  • Locale - This app tracks where you are and the time of day and changes your profile to suit.  So I tell it where work is and when I’m in the office, my phone automatically goes to vibrate.  The movie theatre, it sets to silent.  Cool?
  • PixelPipe - For a photo nut, this app is a must-have.  It can upload your photos to a whole lot of photo sharing services in one click (sort of).  TwitPic, Picasa, Flickr and many more are supported.
  • Twidroid - A great tool for those of us addicted to Twitter.  Post, follow, read and reply right from the comfort of a neat little Android app.
  • Compare Everywhere - Magda loves this application because it scans bar codes of products with the camera and looks up local and web deals to tell you where to shop.  For those ladies (and gents) out there that are always looking for the best deal, this is for you.

While I am loving the phone so far, I have a few gripes, all of which should be solvable with some additional software or Google software updates (which are pushed down to the phone over the air).

  • Battery life - Granted, I'm guilty of over-using technology during the honeymoon period, but the battery life (with 3G and WiFi enabled) has been pretty bad. I had to recharge about three-quarters of the way through my day. Without 3G or WiFi on, I can run it for at least 24 hours straight.
  • No on-screen keyboard - While I love the slide out keyboard for writing emails and love-notes to the fiancee, I would like to be able to send quick little messages with an on-screen, one-handed keyboard.  Having to slide it out all the time to say “Yes” can be quite a pain.

So far, so good.  I have about 10 days left on my TMobile trial and I have had very few dropped calls and very good luck with reception from Baltimore to New York City (and everywhere in between).  That’s what I have for now.  There will be another review as I learn a bit more about the phone and feel free to let me know if you have any questions.

PCI Compliance in the Cloud: Get it in writing! [Network Security Blog]

Posted: 02 Nov 2008 10:25 AM CST

A few days ago my friend Chris Hoff asked a very interesting question: Can I be PCI compliant if I'm using some form of cloud computing? Now Chris is a virtualization guru, and I have no intention of ever arguing virtualization issues with him (it's not healthy for the ego to get beat down that badly), but when it comes to PCI I've got a leg up on him. So I made several comments on the post, most of which boil down to referencing PCI requirement 12.8: if you're sharing cardholder information, i.e. credit card numbers, with a third-party service provider, you need a clause in your contract that makes the service provider responsible for the PCI compliance of their systems. With the example given, Amazon's EC2, the chances of getting such a clause into your contract are almost non-existent. Therefore, if you're using Amazon's EC2, you aren't going to be PCI compliant until such time as Amazon builds a compliant infrastructure. The same needs to be said of any other cloud vendor; it's not just EC2.

Afterward, Chris appended the post to say that he got exactly the response he expected. But he doesn't feel this is a good enough answer: virtualization and cloud computing are the next wave of computing fashion, therefore they need a deeper review by the PCI Security Standards Council to clarify PCI's stance on these topics. His rationale is that cloud computing is going to happen, is happening, and will happen whether we want it to or not. He believes that the definition of 'service provider' needs to be re-examined and updated to reflect the changes these technologies will bring about.

Point blank:  Chris is wrong.  The definition of ’service provider’ is fine, here it is directly from the PCI Council’s site:

Service Provider
Business entity that is not a payment card brand member or a merchant directly involved in the processing, storage, transmission, and switching of transaction data and cardholder information, or both. This also includes companies that provide services to merchants, service providers, or members that control or could impact the security of cardholder data. Examples include managed service providers that provide managed firewalls, IDS, and other services, as well as hosting providers and other entities. Entities such as telecommunications companies that only provide communication links without access to the application layer of the communication link are excluded.

By this definition, Amazon EC2 would be a service provider, pure and simple. It doesn't matter that the service they're providing is virtualized. In the eyes of PCI, a virtualized system is really no different from a physical system. Why should a rack of servers in a data center be treated differently than the same services provided on one server with multiple VMs on it? The service provider is still responsible for the physical security of the systems, and they're still responsible for the patching and security of the underlying operating systems. Even when we talk about virtualization on your own network, the same PCI requirements apply. In fact, virtualization can often be a negative from the PCI perspective, since every system in the virtualized environment is now in scope for a PCI assessment.

I would agree with Chris in saying this is a topic that needs more discussion, but to educate businesses and help them realize that cloud computing is no more a panacea for their PCI woes than any other form of virtualization. You're taking the same problems you had with a service provider and adding a whole new layer of abstraction to them. You're sharing hardware with an unknown number of other clients, you have less visibility into what's going on lower in the stack, and you have a new set of patching and vulnerability concerns to worry about. Rather than reducing your stress levels and your potential to be compromised, cloud computing will probably raise them to a new level.

I’d be willing to bet becoming a PCI compliant service provider wouldn’t be all that difficult for Amazon and EC2.  The security is probably already in place, all it would take is having an assessment every year to prove that Amazon meets the standards.  It’s the transfer of liability that’s going to be the big sticking point; I can’t imagine Amazon’s lawyers being in a big hurry to take on this responsibility, no matter how much the marketing department might want ‘PCI Compliant’ in their brochures.  And until you can put a clause in your contracts making your service provider responsible for a portion of your compliance, you aren’t going to be able to use EC2 and be compliant.

Just because a technology is new and exciting doesn't mean we need to redefine the rules. The definition of service provider works just fine when we're talking about cloud computing. They're providing a service, and they need to be compliant if you're going to be compliant. There are PCI compliant service providers out there now, and there are folks working on PCI compliance in the cloud. Being a new and sexy technology shouldn't exempt you from having to meet the same compliance standards as everyone else, should it?

One last point: PCI requirement 12.8 is about transference of risk to the business closest to the cardholder data. That's it. If your service provider isn't willing to accept the risk associated with transferring, storing, and manipulating cardholder data, you need a different service provider.


Happy Morris Worm Day! [HiR Information Report]

Posted: 02 Nov 2008 07:28 AM CST

Twenty years ago to the day, Robert Morris, a Cornell student at the time, unleashed his worm (from the MIT campus!) on what little of the Internet existed in 1988.

To this day, no one really knows for sure how many computers were affected by the Morris Worm, which exploited any of several different vulnerabilities in order to replicate itself. Supposedly designed to be "harmless", it caused a large-scale denial-of-service attack, partly because of an error in the routine to check if it had already infected a given host.

Robert was the first person tried under the 1986 Computer Fraud & Abuse Act, and DARPA formed the CERT Coordination Center in response to this incident. I found some interesting commentary about this via the Security Bloggers Network (of which HiR Information Report is a member). There's also more information on the Wikipedia page: [Morris worm].

Creative Commons Photo Credit: Go Card USA on Flickr

Links for 2008-11-01 [del.icio.us] [HiR Information Report]

Posted: 02 Nov 2008 12:00 AM CDT

The Geek 100: How'd you do? (Poll) [HiR Information Report]

Posted: 01 Nov 2008 10:11 AM CDT

See the whole series: The Geek 100



This post contains a poll within an iFrame. If you don't see it in your RSS reader, visit HiR Information Report and take the poll, located on the right side of the site.

See the whole series: The Geek 100
