Wednesday, October 8, 2008

The DNA Network

deCODE Launches deCODE BreastCancer™, a Genetic Test to Screen for Risk of the Most Common Forms of Breast Cancer [deCODE You]

Posted: 08 Oct 2008 08:23 PM CDT

deCODE BreastCancer™ enables women to understand whether they may benefit from more intensive screening, monitoring or preventive drug therapy.

Reykjavik, ICELAND, October 8, 2008 – deCODE genetics today announced the launch of deCODE BreastCancer™, a new tool for assessing risk of the common forms of breast cancer. For the first time, a woman concerned about breast cancer can speak with her physician about a genetic test to better understand her lifetime risk of developing the common forms of the disease.

The common forms of breast cancer result from the interplay of genetic as well as environmental and lifestyle factors and represent 95 percent of all breast cancers. These are distinct from the rare and essentially purely inherited forms of the disease due to mutations in the BRCA1 and BRCA2 genes, which cause between 1 and 3 percent of breast cancers. deCODE BreastCancer™ is a DNA-based reference laboratory test performed using a simple blood sample or cheek swab, ordered by physicians on behalf of their patients.

"This test is simple and compelling because it provides a woman and her doctor a means of understanding her personal risk of developing the common forms of breast cancer. This information is well-validated, relevant to the vast majority of women, and independent of family history and other known risk factors. Combined with the high public awareness of the importance of screening, advances in magnetic resonance imaging (MRI) technology and the availability of preventive drugs targeting estrogen receptors, I believe this test will help to save lives," said Dr. Kari Stefansson, M.D., Dr. Med., CEO of deCODE.

"DNA-based breast cancer risk assessment has to date been focused on detecting rare mutations that confer very high risk of early onset breast cancer. These are very valuable tests, but they do not measure genetic risk of the common forms of the disease.  The DNA markers identified recently by deCODE represent an important step toward filling current gaps in our understanding of breast cancer risk.  Ultimately, the goal is to deliver more personalized prevention and treatment for a much greater number of women," said Rebecca Sutphen, M.D., Clinical Geneticist at Moffitt Cancer Center and Advisory Board member at Informed Medical Decisions, Inc., a network of genetic counselors who provide support to physicians and patients using deCODE's tests.
"We speak to many people who are concerned about breast cancer through our 24/7 YourShoes Breast Cancer Support Center," said Margaret C. Kirk, CEO, Breast Cancer Network of Strength (formerly known as YME National Breast Cancer Organization). "We are very interested in all advances that could empower people to take charge of their health care and better understand their risk for developing breast cancer."

Owen Winsett, M.D., founder and director of the Breast Center of Austin, Texas, commented: "I have followed closely the recent scientific discoveries that are incorporated into this test. I am excited to be able to extend my screening and prevention practice, because this test applies to so many more women than the BRCA1 and BRCA2 tests. My patients are eager for this type of risk information and appreciate that the test can be done with a painless inner-cheek swab. I have ordered several tests on an early-access basis and plan to make this test a standard tool for helping me to decide which of my patients may benefit from screening at an earlier age, breast MRIs, and other risk reduction measures. This test helps define individual prevention, which is what so many of my patients want."

The deCODE BreastCancer™ test measures seven widely replicated single-letter variations (SNPs) in the human genome that deCODE and others have linked to risk of breast cancer. These SNPs contribute to the incidence of an estimated 60 percent of all breast cancers. The test integrates data from discovery and replication studies published in major peer-reviewed journals and involving nearly 100,000 breast cancer patients and healthy volunteers from many populations, principally of European descent. deCODE and other organizations are conducting replication studies to validate these markers in populations of other continental ancestries.

Women taking the deCODE BreastCancer™ test will receive a numerical score representing their relative risk of developing breast cancer in their lifetime compared to that of the general population as well as their personal lifetime risk. According to the American Cancer Society, average lifetime risk for women of European descent is 12 percent. Test scores range from 4.0 times average lifetime risk to less than half, or 0.4-times. The risk assessed by deCODE BreastCancer™ is independent of conventional risk factors such as family history of breast cancer in close relatives, age at first menstrual period, pregnancy history, and breast density. Therefore, this genetic risk should be viewed in the context of other risk factors assessed by a woman's physician.
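Scores of this kind are typically built by combining per-SNP relative risks multiplicatively and scaling the population-average lifetime risk. The sketch below illustrates that general approach with invented SNP names and weights; it is an illustration of the common multiplicative model, not deCODE's published algorithm.

```python
# Illustrative only: combining per-SNP relative risks multiplicatively,
# as is common in the polygenic-risk literature. The SNP names and
# per-genotype relative risks below are hypothetical placeholders,
# NOT deCODE's actual marker panel or weights.

AVERAGE_LIFETIME_RISK = 0.12  # ~12% for women of European descent (ACS)

# genotype -> relative risk vs. population average, per hypothetical SNP
snp_relative_risks = {
    "rs_example_1": {"AA": 0.85, "AG": 1.00, "GG": 1.25},
    "rs_example_2": {"CC": 0.90, "CT": 1.05, "TT": 1.30},
}

def combined_relative_risk(genotypes):
    """Multiply per-SNP relative risks (assumes independent loci)."""
    risk = 1.0
    for snp, genotype in genotypes.items():
        risk *= snp_relative_risks[snp][genotype]
    return risk

def lifetime_risk(genotypes):
    """Scale the population-average lifetime risk by the combined score."""
    return combined_relative_risk(genotypes) * AVERAGE_LIFETIME_RISK

patient = {"rs_example_1": "GG", "rs_example_2": "TT"}
print(round(combined_relative_risk(patient), 3))  # 1.25 * 1.30 = 1.625
print(round(lifetime_risk(patient), 3))           # 1.625 * 0.12 = 0.195
```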

deCODE BreastCancer™ can identify the roughly 5 percent of women who are at a greater than 20 percent lifetime risk of the common forms of breast cancer (about twice the average risk in the general population), and the 1 percent of women whose lifetime risk is roughly 36 percent (about three times average). According to ACS guidelines, women with a lifetime risk of 20 percent or greater should receive annual MRI breast screenings in addition to mammograms, and women at 15 to 20 percent lifetime risk should talk with their doctors about the benefits and limitations of adding MRI screening to their yearly mammogram. With the information provided by the deCODE BreastCancer™ test, an additional 15 percent of women may fall within this range of moderately increased risk.

The test also predicts which women are more likely to develop ER-positive breast cancer if they develop cancer at all. This is important because these women may be more likely to respond to prevention strategies with drugs like tamoxifen that target estrogen receptors. The American Society of Clinical Oncology (ASCO) recommends that women with a five-year risk of 1.66 percent or greater should be considered for preventive treatment with tamoxifen.

deCODE BreastCancer™ may also be used to modulate the risk profile of the early onset inherited forms of breast cancer in women who have tested positive for risk variants in the BRCA1 or BRCA2 genes.

How to order deCODE BreastCancer™
Additional information and physician order forms for deCODE BreastCancer™ can be found on deCODE's website. The test costs $1,625, and deCODE facilitates filing for reimbursement with commercial insurers. Testing is performed in deCODE's CLIA-registered laboratory, which has analyzed the genomes of hundreds of thousands of people from around the globe.

About Breast Cancer
Breast cancer is the most common cancer and the second leading cause of cancer deaths among women, according to the World Health Organization. The ACS estimates that 182,400 new cases of invasive breast cancer will be diagnosed in the United States in 2008, resulting in more than 40,000 deaths.

Breast cancers are classified as ER-positive or ER-negative according to whether tumors are found to contain estrogen receptors. In women of European descent, approximately three-quarters of breast cancers are ER-positive, and in women of African descent, approximately 50 percent are ER-positive.

Although a substantial portion of the risk of breast cancer is inherited, it has taken painstaking research to find genetic variants predisposing to the disease's common forms. The mutations in the BRCA1 and BRCA2 genes conferring very high risk have a frequency of less than 0.5 percent in the general population in the United States and Europe, accounting for only 1 to 3 percent of all breast cancers.

Identifying and enabling the detection of a substantial proportion of the genetic risk for the common forms of breast cancer is the goal of deCODE's gene discovery work in breast cancer and the deCODE BreastCancer™ test. Women who know they are at a higher than average risk of breast cancer can also make proactive lifestyle changes to lower their lifetime risk, according to ACS. These include staying physically active, maintaining a healthy weight, eating healthy foods, and limiting alcohol intake and smoking.

About deCODE
deCODE is a biopharmaceutical company applying its discoveries in human genetics to the development of diagnostics and drugs for common diseases. deCODE is a global leader in gene discovery — our population approach and resources have enabled us to isolate key genes contributing to major public health challenges from cardiovascular disease to cancer, genes that are providing us with drug targets rooted in the basic biology of disease. Through its CLIA-registered laboratory, deCODE is offering a growing range of DNA-based tests for gauging risk and empowering prevention of common diseases, including deCODE T2™ for type 2 diabetes; deCODE AF™ for atrial fibrillation and stroke; deCODE MI™ for heart attack; deCODE ProCa™ for prostate cancer; deCODE Glaucoma™ for a major type of glaucoma; and deCODE BreastCancer™ for the common forms of breast cancer. deCODE is delivering on the promise of the new genetics.

Adipose tissue expandability explains onset of metabolic syndrome? [Yann Klimentidis' Weblog]

Posted: 08 Oct 2008 05:30 PM CDT

When studying disease traits it is always preferable to understand all elements of the etiological pathway.
A recent paper in PLoS Biology titled "It's Not How Fat You Are, It's What You Do with It That Counts" describes the "Adipose Expandability Hypothesis" as a way of explaining why some individuals develop metabolic syndrome and others don't, regardless of how obese they are. What matters is your ability to store fat: if you're good at it, you can stave off further health complications; if you're not, you start storing fat in organ tissue, leading to health complications even if you have no excess body fat.

John Derbyshire Misunderstands Race and Genetics [adaptivecomplexity's column]

Posted: 08 Oct 2008 05:00 PM CDT

There's been some comment recently about pundit John Derbyshire's belief that Obama will try to shut down biology because it has validated racism.

Needless to say, Derbyshire is full of it, and he has a poor grasp of what recent genetics has actually demonstrated regarding nature, nurture, and race.


Why and Why Not You May Need A Genetic Counselor [Think Gene]

Posted: 08 Oct 2008 04:55 PM CDT

You sometimes add a new dimension to blog ramblings, Andrew :-)

Here’s why you need a genetic counselor:
- and here’s why you might not need one:

Originally posted as a comment by Sciphu on Think Gene using Disqus.

Story: genetic counselors are people you pay to console you about the results of a genetic test report, to make you feel better and help you make rational decisions about your health when your reasoning may be distorted by emotional distress. If that’s a service you want, buy a couple of hours from a genetic counselor.

Computers can never provide human consolation, no matter how excellent and rational their reports. Doctors and scientists tend to feel that counseling work is beneath them; they are not typically any good at it, nor do they realistically have the capacity to do it. However, scientists tend to be much better at analyzing complex information, and medical doctors can use reports to better synthesize a general understanding of your health and recommend medical action not already prescribed by obvious standard practice.

For example, I would probably make an excellent systems biologist, but the world’s worst genetic counselor. “Beep-bop-boop, you have Huntington’s. Don’t have children.” See? That’s a perfectly rational statement, but it illustrates why genetic counselors are important.

But how many people in health care today can read the following expression? Or write it? Or tell me what it means not just clinically, but biologically? ('(CAG){36,}', genome[4][3046205:3215485])
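The expression above appears truncated (the function call is missing its name), but it reads like a regular-expression search for an expanded CAG repeat in a slice of chromosome 4, the Huntington's disease locus. A minimal Python sketch of the idea, with a toy genome and illustrative coordinates:

```python
import re

# Toy stand-in for a genome: chromosome number -> sequence string.
# A run of 36 or more consecutive CAG repeats in the HTT gene on
# chromosome 4 is the classic hallmark of Huntington's disease;
# this toy chromosome carries 40 repeats.
genome = {4: "GATTACA" + "CAG" * 40 + "TTAGGC"}

def has_expanded_cag(seq, threshold=36):
    """True if seq contains a run of `threshold` or more CAG repeats."""
    return re.search(r"(CAG){%d,}" % threshold, seq) is not None

print(has_expanded_cag(genome[4]))   # True: 40 repeats >= 36
print(has_expanded_cag("CAG" * 10))  # False: only 10 repeats
```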

Strategic voting and the environment [Bayblab]

Posted: 08 Oct 2008 03:27 PM CDT

I can't say I'm a fan of strategic voting; the spirit of it defeats the purpose of voting in a democracy. And I want to make sure we continue having a multi-party system, even if it means voting for a party that is not realistically going to take power in a particular election. However, many people embrace strategic voting as a means to achieve certain political ends. One of those is ensuring we elect an environmentally-friendly party. So if this is your intention, you might as well do it in a calculated manner using the "Vote for Environment" website. Plug in your postal code and it will tell you whom to vote for. And if the seat is not hotly contested, it will even let you choose.

Community College Biotechnology programs and K12 outreach [Discovering Biology in a Digital World]

Posted: 08 Oct 2008 02:41 PM CDT

Over the years, I've seen many biotechnology education programs at community colleges embrace outreach to high schools as part of their mission. This kind of enthusiasm for outreach seems unique to biotech. No other kind of science or engineering program seems to do this sort of thing, at least not on the nationwide basis that I've seen demonstrated in biotechnology.

And yet, even though I've always admired and often participated in these efforts, some aspects are a little puzzling.

How do the colleges reconcile the energy spent in outreach efforts with the energy spent towards educating their own students?


Some thoughts on grid computing… [Mailund on the Internet]

Posted: 08 Oct 2008 02:30 PM CDT

Earlier this week, the LHC Computing Grid went online.  A description of the system can be found here, and blog posts about it here, here and here.

This got me thinking about grid computing for small scale scientists like myself.

I’ve had some experience with grid computing (see an old post about it here) but mostly I have found it too much trouble to be worth the effort.

Our typical computer use

For large projects that require years of CPU time, it is well worth the effort to set up the infrastructure to run computations on grids.  You really need the grid to get the computations done, and the overhead is very small in comparison with the actual computation time.

Most of my projects — and most of the projects we do at BiRC — are a bit different.

We do need the computation power, but we are usually tinkering with our programs for most of a project — since we rarely know exactly how to analyse our data until we are mostly done with it — so we cannot just distribute a fixed version of our software and then start distributing the computations.

The typical work flow is that we write a program for our analysis, run the analysis, and when we look at the output we find some strange results here and there. Then we extend the software to either extract more information from the data, or to fix a bug that caused the weird results.

We then need to run the analysis again, and repeat the process.

The analysis might take a few CPU days to a few CPU months — so it is small scale for grid applications — but between each analysis we spend a week or so modifying and testing our software.

We have a small cluster of Linux computers for this, and it is always in one of two states: completely overloaded or burning idle cycles.

This is the situation grid computing could fix.  Theoretically, we should be able to get CPU cycles from the grid when we need them, and sell cycles to the grid when we are not running computations ourselves.

In practice, our work pattern makes this difficult.

The problems with small scale grid computing

If you are changing your software all the time, you need to distribute it together with the data you analyse.

This means you either send compiled binaries with the job submissions, or you compile the software as part of the job.

The former is fine if you have a program you can compile — and you’d better link it statically, because there are no guarantees about the libraries you will find on the resources that run it.

If you have a bunch of scripts, you are not so lucky.

There are no guarantees that the computer that will run the computations has the script interpreter — or if it does that it is a version that can run your script — and even if it does, what about the modules you need?

You don’t want to have to compile BioPython or SciPy on a grid machine just to run your scripts.  The overhead in CPU time is going to be several percent of your actual run (at least if you parallelise your computations to a high enough degree to make the grid worthwhile in the first place), and how can you even know that there is a compiler at the other end?  You can’t, and there probably isn’t unless you are very lucky.

It is a major pain to see your jobs aborted after slowly making their way through the job queue, just because the host computer cannot even set up the environment you need for your computations.

What can we do about it?

If we want to use the grid for even smaller scale computations, at the very least we need an easier way to distribute new versions of our programs.

I have an idea for this.

Some grids, at least, are already dealing with “runtime environments” where you can specify that your job needs to run in a certain runtime environment, and the scheduler will only send your jobs to resources that can provide that environment.

This sounds like just the thing, but the catch is that it is up to the resource administrators to set up these environments and to tell the grid system that they provide them.

For something like LHC, it is probably not a problem to convince administrators to provide the right environment, but for Thomas Mailund it is.

What we need is a way for the grid users to be able to install environments on the resources!

So how about this: we introduce the concept of “runtime environment packages” that we can upload to the grid system.  Each package consists of a setup script (configure; make) and a test suite, for example.

When a resource is idle, it checks whether there are new environments available in the queue, downloads them, and tries to build and test them.  If it succeeds, it informs the grid system that it can run the new type of environment.  The scheduler only sends jobs to resources that have the right environments, so if your environment tests are working properly, you never end up on a resource that cannot run your jobs.

We could even add environment requirements on the environment packages, so they don’t have to be self-contained.  E.g. to install SciPy, you don’t want to have to install Python itself, and there is no reason for resources without Python to try to install it only to give up.

To prevent resources from filling up with old environments, we can add a time-out period to environments, so they are deleted when they haven’t been used for a couple of days/weeks/months.
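A minimal sketch of the resource-side logic described above. The package format, field names, and build step are invented for illustration; a real implementation would run the setup script and test suite as subprocesses:

```python
# Sketch of the proposed "runtime environment package" idea.
# Field names (name, requires, build_ok) are hypothetical; build_ok
# stands in for actually running `configure; make` plus the test suite.

packages = [
    {"name": "python2.5",  "requires": [],            "build_ok": True},
    {"name": "scipy",      "requires": ["python2.5"], "build_ok": True},
    {"name": "broken-lib", "requires": [],            "build_ok": False},
]

def process_queue(queue, installed=None):
    """Resource-side loop: attempt each package whose requirements are
    already met; on a successful build-and-test, advertise the new
    environment to the grid by adding it to the installed set."""
    installed = set(installed or [])
    for pkg in queue:
        if not all(req in installed for req in pkg["requires"]):
            continue  # skip: a required environment is missing here
        if pkg["build_ok"]:
            installed.add(pkg["name"])  # environment now available
    return installed

print(sorted(process_queue(packages)))  # ['python2.5', 'scipy']
```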

It shouldn’t be that hard to implement.  I am sure I could do it, but I don’t have my own grid infrastructure to work with, so I guess I’ll have to persuade someone else to do it…

Personalized Genetics: On the train again [ScienceRoll]

Posted: 08 Oct 2008 01:50 PM CDT

I’ve got a huge backlog now, but will try to keep sharing interesting genetic articles and posts with you regularly. So here is this week’s collection:

  • One of the main issues in the blogosphere is the $1,000 genome: the aim is to let everyone access their genomic data for $1,000. We thought we could only reach that goal in the next couple of years, but according to Blaine Bettinger’s post, it might be done by the end of 2009.
  • The Genomic Revolution and the Future of Medicine and Health: A nice lecture about an essential subject


Webicina: Free E-Lessons [ScienceRoll]

Posted: 08 Oct 2008 01:19 PM CDT

Yesterday, we launched Webicina, an online service that can help medical professionals and patients enter the web 2.0 era. Now I would like to share some of the e-lessons we provide for free. Note that you only see the lessons you have access to.

The first lessons are freely available after logging into your account:

In each of the above lessons, I described what kind of content you can get when purchasing a full e-course. I hope you will find these useful.


StatAlign: a new statistical alignment tool [Mailund on the Internet]

Posted: 08 Oct 2008 12:24 PM CDT

There’s an application note in the current issue of Bioinformatics that describes a new tool for statistical alignment, StatAlign, developed in my old group in Oxford.

StatAlign: an extendable software package for joint Bayesian estimation of alignments and evolutionary trees

Ádám Novák, István Miklós, Rune Lyngsø and Jotun Hein

Bioinformatics 2008 24(20):2403-2404

Motivation: Bayesian analysis is one of the most popular methods in phylogenetic inference. The most commonly used methods fix a single multiple alignment and consider only substitutions as phylogenetically informative mutations, though alignments and phylogenies should be inferred jointly as insertions and deletions also carry informative signals. Methods addressing these issues have been developed only recently and there has not been so far a user-friendly program with a graphical interface that implements these methods.

Results: We have developed an extendable software package in the Java programming language that samples from the joint posterior distribution of phylogenies, alignments and evolutionary parameters by applying the Markov chain Monte Carlo method. The package also offers tools for efficient on-the-fly summarization of the results. It has a graphical interface to configure, start and supervise the analysis, to track the status of the Markov chain and to save the results. The background model for insertions and deletions can be combined with any substitution model. It is easy to add new substitution models to the software package as plugins. The samples from the Markov chain can be summarized in several ways, and new postprocessing plugins may also be installed.

I am personally a firm believer in statistical alignment.  I think it is the way to go, to deal with the uncertainty in inferred alignments and to avoid the artefacts they can create.

For a good introduction to the problems (and how statistical approaches to alignment can help), you should read Lunter et al. Uncertainty in homology inferences: Assessing and improving genomic sequence alignment Genome Res. 18:298-309, 2008 (or my summary of it here).

StatAlign, the tool in the application note, looks like a nice way to attack alignments. Unlike previous approaches I’ve blogged about — and unlike my own small work in statistical alignment — it deals with multiple sequences (where MCMC is needed besides just HMMs).

It samples over both alignments and phylogenies, which is nice if there is any uncertainty in the phylogeny inference (which is typically based on alignments in the first place).

I can imagine that integrating over the phylogenies in the MCMC is the main time-killer, though, so it could be nice if you can turn that part of the state space exploration off in case you have a reasonable idea about the phylogeny but you are uncertain about some parts of the alignment…

A. Novak, I. Miklos, R. Lyngso, J. Hein (2008). StatAlign: an extendable software package for joint Bayesian estimation of alignments and evolutionary trees. Bioinformatics, 24(20), 2403-2404. DOI: 10.1093/bioinformatics/btn457

GFP wins Nobel Prize for Chemistry [Bayblab]

Posted: 08 Oct 2008 12:06 PM CDT

This year's Nobel Prize for Chemistry has been awarded "for the discovery and development of the green fluorescent protein, GFP". This is a protein every biochemist is familiar with. The award was shared between Osamu Shimomura who first isolated GFP, Martin Chalfie who pioneered its use as a visible genetic tag and Roger Y. Tsien who built our understanding of how it fluoresces and expanded the colour options. The image shown is taken from the Tsien lab website and shows an agar plate with bacteria expressing different fluorescent proteins. From the Nobel foundation press release:
This year's Nobel Prize in Chemistry rewards the initial discovery of GFP and a series of important developments which have led to its use as a tagging tool in bioscience. By using DNA technology, researchers can now connect GFP to other interesting, but otherwise invisible, proteins. This glowing marker allows them to watch the movements, positions and interactions of the tagged proteins.
The next time you use it to track viral infection, protein expression or cell motility you have these guys to thank.

Python 2.6 is out [Mailund on the Internet]

Posted: 08 Oct 2008 10:53 AM CDT

I just saw that Python version 2.6 came out a few days ago.  See the list of changes here.

I haven’t upgraded yet, and I don’t think I am going to right now.  I didn’t spot any new features I just have to have.  Not like 2.5 where generator expressions were something I’ve missed. And still miss on our cluster at BiRC where we are still running 2.4 :-(

Seminar Answer Flow Chart [evolgen]

Posted: 08 Oct 2008 10:30 AM CDT

Inspired by the Sarah Palin Debate Flow Chart, here's the Seminar Answer Flow Chart:



Microscopists of the World Celebrate - The Nobel Prize Awarded for GFP [The Daily Transcript]

Posted: 08 Oct 2008 09:06 AM CDT

From the Nobel site:

8 October 2008

The Royal Swedish Academy of Sciences has decided to award the Nobel Prize in Chemistry for 2008 jointly to

Osamu Shimomura, Marine Biological Laboratory (MBL), Woods Hole, MA, USA and Boston University Medical School, MA, USA,
Martin Chalfie, Columbia University, New York, NY, USA and
Roger Y. Tsien, University of California, San Diego, La Jolla, CA, USA
"for the discovery and development of the green fluorescent protein, GFP".

Well I certainly nailed this one. In fact I got up this morning thinking, "let's find out if Tsien got the Nobel".

This is a well deserved prize. Flip open any biomedical journal and you'll see why - Green Fluorescent Protein (aka GFP) is probably the most used gene in the world.

It is safe to say that you can clearly divide microscopy into two phases, before GFP and after.

GFP is truly a wonder protein. If you excite the molecule with blue light, it converts this to green light, which it emits. By monitoring the fluorescence you can pinpoint where GFP is located. Attach GFP to your favorite protein, for example one involved in cell duplication, and the location of your fusion protein can now be monitored inside of a cell. You might, for example, find that your protein localizes to the chromosomes during cell division and thus probably has a role in how duplicated chromosomes are pulled apart during mitosis.

Before GFP, we could only deduce how molecules and proteins were organized within cells after biological samples were fixed, extracted and stained with some sort of probe, usually a fluorescently conjugated antibody that recognizes the protein of interest. Wherever you detected fluorescence, you could assume that there was antibody bound to your fixed protein. Since the cells are fixed, these pictures were static. You knew where your protein might be, but had no clue how it moved around the cell.

After GFP, we can now see how molecules and proteins are organized within biological samples in LIVE CELLS. This has three advantages.

1) The samples do not have to be fixed and you don't need antibodies. Sometimes fixation disrupts cellular organization, sometimes antibodies cross react to other proteins. These problems do not exist with GFP technology.

2) We can now deduce the BEHAVIOR of proteins. Think of it this way: before, you only had still photographs; now you have movies. GFP technology literally provides us with an extra dimension of information (in this case, time). Understanding how proteins move around inside a cell gives us a tremendous amount of information about how cell organization is achieved.

3) GFP has allowed us to develop all sorts of tricks.

Through the use of a technology called two-photon microscopy, biological samples can now be observed from a distance. We can now observe GFP-labeled cells deep within a tumor or deep within the brain.

Destroy all the GFP fluorescence in one area of the cell, and you can then watch how GFP-tagged molecules from outside this area diffuse back in; you have just measured the diffusion rate of your protein of interest. This is called FRAP (Fluorescence Recovery After Photobleaching).
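The quantitative step behind FRAP can be illustrated with a toy fit: if recovery follows I(t) = A(1 - exp(-t/tau)), then tau follows from a log-linear fit to the recovery data. The model and numbers below are illustrative, not taken from any specific FRAP analysis package:

```python
import math

def simulate_recovery(tau, amplitude=1.0, times=None):
    """Toy FRAP recovery curve: I(t) = A * (1 - exp(-t / tau))."""
    times = times or [0.5 * i for i in range(1, 20)]
    return [(t, amplitude * (1 - math.exp(-t / tau))) for t in times]

def fit_tau(data, amplitude=1.0):
    """Estimate tau by log-linear least squares:
    ln(1 - I/A) = -t / tau, so tau = -1 / slope."""
    xs = [t for t, _ in data]
    ys = [math.log(1 - intensity / amplitude) for _, intensity in data]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    return -1.0 / slope

data = simulate_recovery(tau=2.0)
print(round(fit_tau(data), 3))  # recovers tau = 2.0 from noiseless data
```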

Here's another trick: tag protein X with yellow fluorescent protein (YFP) and protein Y with cyan fluorescent protein (CFP). If proteins X and Y interact inside the cell, energy from the excited CFP transfers to the YFP. You now have a technique called FRET (Fluorescence Resonance Energy Transfer) that allows you to monitor if, how and when two different proteins interact within live cells.

And there's more. GFP has been modified by researchers such as Jennifer Lippincott-Schwartz so that it can be activated and turned off. This has allowed researchers to monitor the half-life of proteins, and to see how proteins redistribute from one area to another. Other GFPs have been modified so that they fluoresce differently depending on pH. In one of the most amazing papers ever, Roger Tsien took another fluorescent protein, called DsRed, and literally evolved it into fluorescent proteins that cover the entire spectrum of colors.


I'll post something later this week on this remarkable paper.

Here's a link to a lecture given by Tsien at the Nobel Symposia


Project Planning Made Easy [Bitesize Bio]

Posted: 08 Oct 2008 08:42 AM CDT

Whatever field you work in, effective project planning can make your work much more efficient and your life much easier.

Liquid Planner is a unique, online project planning interface that is free as long as your project has three or fewer team members.

Liquid Planner has a nice interface for defining tasks and constructing Gantt charts, and great features like the ability to attach documents, comments and even mini-discussion forums to each task. But my favorite thing about it is the unique method it uses for determining the likely completion time of tasks.

Project planning is a frustrating business because, from the moment you make the plan, you know it is unlikely that your project will pan out like that. The delays will mount up, and your actual timeline will look nothing like the original.

When planning a scientific research project, this is especially true because so much can go wrong with science.

For example, if cloning a gene is part of your project plan, how do you deal with the fact that the cloning might not work the first time? If you use Microsoft Project or another traditional planner, you have no choice but to plump for a single estimated completion time and leave it at that.

Liquid Planner, on the other hand, allows you to enter best- and worst-case scenarios for the completion of a task; e.g. you could say that at best it will take you 1 week to clone your gene, and at worst 4 weeks.

From this, Liquid Planner calculates various scenarios: an earliest finish date, then 10% through to 98% likely completion dates.

The power of this is that it gives a realistic view of your project timeline. When you have multiple dependent tasks, these likely completion dates are carried through the chain, so your projected timeline is much more believable.
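Ranged estimates like these are usually turned into percentile completion dates by simulation. The sketch below shows the general idea with a simple Monte Carlo over uniformly distributed task durations; it illustrates the concept only and is not Liquid Planner's actual algorithm:

```python
import random

# Hypothetical task list: (best case, worst case) durations in weeks.
tasks = [(1, 4),   # clone the gene
         (2, 3),   # express and purify the protein
         (1, 2)]   # run the assay

def simulate_project(tasks, trials=20000, seed=42):
    """Total duration of sequentially dependent tasks, estimated by
    drawing each task's duration uniformly between best and worst case.
    Returns a percentile function over the simulated totals."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.uniform(best, worst) for best, worst in tasks)
        for _ in range(trials)
    )
    def percentile(p):
        return totals[int(p / 100.0 * (len(totals) - 1))]
    return percentile

p = simulate_project(tasks)
print(round(p(50), 1))  # median finish, around 6.5 weeks
print(round(p(90), 1))  # 90%-likely finish, later than the median
```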

Whatever your project planning needs - whether you are planning a detailed project with multiple users, or just going through the thought process of planning your own research project - I think that Liquid Planner is a useful tool that is well worth trying out.

If you give it a go, be sure to let us know your thoughts here!

Tree EKGs [Bayblab]

Posted: 08 Oct 2008 08:33 AM CDT

Ran into an MIT link about getting small amounts of electricity from trees and using this electricity to power monitors. It looks as if the article is based upon just a tiny mention of potential applications by the authors of a PLoS ONE paper on the method of getting electricity from trees. The idea is to have monitors powered by the trees distributed throughout a forest to detect forest fires and monitor the environment. Could you hook these up in series and generate a usable amount of electricity from a wild forest, thus placing a value on a standing forest?

Should all food containing genetically engineered ingredients be labeled as "GM" to indicate that they contain genetically modified organisms? [Tomorrow's Table]

Posted: 08 Oct 2008 06:28 AM CDT

Last night at 6 pm, the debate began. The first team began with a clear overview of the problem. The second team then attacked through cross-examination, attempting to place their sharpest comments where they could most easily deflate their opponent's argument. Each team threw a few zingers and each reached for an emotional connection with the audience. The "Pro" side spoke of a person's right to know. The "Con" side spoke of a person's right to live. Each debater appeared so well prepared that I think they could have taken anyone on. It was lucky for McCain and Obama that they were tied up that night. They would have been crushed like government bureaucrats in this crowd of young, talented scientists and sociologists.

Some comments here from the "Yes, let's label" side:
"If it is labeled, you can get it and go home"
"Ignorance is not bliss"
"If it is labeled, vegans and those with philosophical opposition will know not to buy it"
"People who care about reducing pesticides will buy it"
"They just need to have it explained to them"
"It is more risky to eat"

From the "No, let's not label" side:
"But most people are not aware of the vast reductions in pesticides, so instead, if there is a label, they will think something is wrong with it and a stigma would be attached. No one would buy it then, and consumers would lose the overall environmental benefit"
"It would be costly to isolate each food processing step to be sure that GE and non-GE don't mix."
"There is no evidence of increased risk; in fact, studies show that you are healthier if you grow GE cotton because you are not exposed to pesticides"
"Who will do the explaining? We can't necessarily rely on the media"

Last night, the students in the Genetics and Society class shone. The 90 people in the audience called the debate a draw.

Trying out Boost.Test [Mailund on the Internet]

Posted: 08 Oct 2008 04:28 AM CDT

I’ve just started a new programming project — a library for dense HMMs that uses parallel hardware for its computations, if you want to know — and I decided to use Boost.Test for my unit testing.

Normally, I just write my unit tests with asserts and maybe a few home-made macros, but since I am going to use Boost heavily in the code anyway (I do more and more these days) I figured I might as well try out its unit testing framework.

Problems with the documentation

To my great surprise, I had some problems with the documentation of the framework.

Usually, the documentation for the boost libraries is excellent — at least compared to most libraries I use — and if you just read the documentation for Boost.Test it looks great.

There is a lot of it, with detailed descriptions of this and that and with tutorials to get you started.

It’s just that the examples there do not work.

Take for example this program from the tutorial:

#define BOOST_TEST_MODULE MyTest
#include <boost/test/unit_test.hpp>

int add( int i, int j ) { return i+j; }

BOOST_AUTO_TEST_CASE( my_test )
{
    // seven ways to detect and report the same error:
    BOOST_CHECK( add( 2,2 ) == 4 );        // #1 continues on error

    BOOST_REQUIRE( add( 2,2 ) == 4 );      // #2 throws on error

    if( add( 2,2 ) != 4 )
      BOOST_ERROR( "Ouch..." );            // #3 continues on error

    if( add( 2,2 ) != 4 )
      BOOST_FAIL( "Ouch..." );             // #4 throws on error

    if( add( 2,2 ) != 4 ) throw "Ouch..."; // #5 throws on error

    BOOST_CHECK_MESSAGE( add( 2,2 ) == 4,  // #6 continues on error
                         "add(..) result: " << add( 2,2 ) );

    BOOST_CHECK_EQUAL( add( 2,2 ), 4 );    // #7 continues on error
}

The BOOST_AUTO_TEST_CASE() macro should create a test function and plug it into the framework, and after compiling the file (and linking with -lboost_unit_test_framework) you should have a test program.

Well, you can compile the program, but you cannot link it.  There is no main() function.

Oh well, if you read the header file boost/test/unit_test.hpp you find these lines:

#if defined(BOOST_TEST_DYN_LINK) && defined(BOOST_TEST_MAIN) && !defined(BOOST_TEST_NO_MAIN)
int BOOST_TEST_CALL_DECL
main( int argc, char* argv[] )
{
    return ::boost::unit_test::unit_test_main( &init_unit_test, argc, argv );
}

so it seems that the framework will define main() for you, if only you have defined the right symbols. And indeed, adding

#define BOOST_TEST_DYN_LINK

to the top of the program (before the include) makes it link and run.

There were a few other cases like this, where I couldn’t figure out how to use the framework, such as testing template functions with a list of different template parameters, but I just worked around that problem with my own macros.

Using the framework

There’s a lot of different things you can do with Boost.Test, but so far I’ve just used the very basic functionality.

I use the BOOST_AUTO_TEST_CASE() macro for my test functions.  The different cases are automatically grouped into test suites — one per file (compilation unit) — so I don’t worry about the larger framework.  I write a few test cases per code unit I need to test and then rely on the default behaviour of the framework.

The actual testing is done through various BOOST_CHECK_* macros like in the program above.

Among the macros are tests for floating point numbers that let you check that two numbers are equal up to a given accuracy.  This is usually what you want, since testing floating point numbers for exact equality is rarely a good idea.
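If you have never run into this, the underlying idea is easy to demonstrate outside of Boost; here is the same tolerance-based comparison sketched in Python with the standard library’s math.isclose (Boost’s macros do the analogous check in C++):

```python
import math

# Exact equality fails for values that are "the same" mathematically,
# because 0.1 and 0.2 have no exact binary representation.
a = 0.1 + 0.2
b = 0.3
print(a == b)                             # False: exact equality fails

# Comparing within a relative tolerance is what a unit test should do.
print(math.isclose(a, b, rel_tol=1e-4))   # True: equal to within 0.01%
```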

So far I’m happy with Boost.Test, and I’m going to try out some of the more advanced features as my project progresses, I think.

Nobel Prize for Chemistry 2008 [Sciencebase Science Blog]

Posted: 08 Oct 2008 03:40 AM CDT

The Nobel Prize for Chemistry 2008 was awarded to Osamu Shimomura (b. 1928) of the Marine Biological Laboratory (MBL), at Woods Hole, Massachusetts and Boston University Medical School, Martin Chalfie (b. 1947) of Columbia University, New York, and Roger Tsien (b. 1952) of the University of California, San Diego, La Jolla, “for the discovery (1962 by Shimomura) and development of the green fluorescent protein, GFP”. Important, of course, and congratulations to all three…but I just knew it would be bio again!

The Nobel org press release for the Chemistry Prize can be found here.

The remarkable brightly glowing green fluorescent protein, GFP, was first observed in the beautiful jellyfish, Aequorea victoria in 1962. Since then, this protein has become one of the most important tools used in contemporary bioscience. With the aid of GFP, researchers have developed ways to watch processes that were previously invisible, such as the development of nerve cells in the brain or how cancer cells spread. Of course, the things that the public know about GFP are the green-glowing mice and pigs that have hit the tabloid headlines over the years.

I’ve written about green fluorescent proteins (although not green flourescent proteins) on several occasions over the years. Briefly in an item on artificial cells in December 2004. In Reactive Reports in September 2005. In New Scientist (”Genetic Weeding and Feeding for Tobacco Plants”, Jan. 4, 1992, p. 11). In SpectroscopyNOW in January 2008. And, more substantially, in American Scientist (January 1996) on the use of a green-glowing jellyfish protein to create a night-time warning signal for crop farmers. Plants under stress would activate their GFP genes and start glowing, revealing which areas of which fields were affected by disease or pests and so tell the farmer where to spray. Of course, the idea of green-glowing cereals would have any tabloid headline writer spluttering into their cornflakes of a morning.

As I said earlier in the week, on the post for the Nobel Prize for Medicine 2008 and on the Nobel Prize for Physics announcement yesterday, the Nobel press team has employed various social media gizmos to disseminate the news faster than ever before, including SMS, RSS, widgets (see left), and twitter.

You can check back here later this week and next week for the Literature, Economics, and Peace Prizes; the widget at the top left of this post will provide the details as soon as they are released. It’s almost as exciting as sniping your bids on eBay.

Nobel Prize for Chemistry 2008

Gene Sherpa Reports Systemic Medical Insurance Fraud [Think Gene]

Posted: 08 Oct 2008 01:33 AM CDT

Steve at Gene Sherpas reports systemic medical insurance fraud at many top institutions providing genetic health care. Clinics are billing as if doctors are seeing patients, but only genetic counselors ever see the patient. “But I have a solution,” Steve exclaims. “The answer: Nurse Geneticists.” He continues, “Call your insurer. Demand to be seen by a physician or physician extender… Why should you demand this service? It will put stress on the broken system. To repair that system, we must first rebuild the foundation.”

Be careful what you wish for, Steve. Systemic abuses like this, which risk high liability for meager reimbursements, suggest a deeper problem than petty greed. Either the system has collectively concluded that genetic counselors are sufficient to perform the service and that they are following procedure “in spirit,” or they are desperate to keep unsustainably high margins and must resort to abuse to protect them. I suspect some of both.

The problem with genetic counselors is political. Genetics used to be called “eugenics,” literally, “the science of the well born.” After World War II and the civil rights movement, it became taboo to socially direct population growth, and the practice of human genetics was legally and institutionally castrated. Today’s “non-directive” genetic counselors are the progeny of this purposefully impotent profession.

However, science progressed. We discovered DNA. We learned molecular biology. We sequenced the human genome. Suddenly, the members of this small, marginalized profession were both keepers of the hereditary taboo and keepers of the code of life.

So, that’s what genetic counselors are: specialized nurses who aren’t supposed to touch you or tell you to do anything, but who have become the hands, eyes, voices, and smiles of this newest, most vital paradigm of medicine. So long as people need a warm consult and an authoritative opinion, there will be a place for you.

That’s probably the nicest thing I’ve ever said on this website.

Old medicine was dead the day you adopted “evidence-based medicine.”

There, I feel better.

What medicine? Scientific medicine. Mathematical medicine. Mechanized medicine. Terabytes of cited hyper-linked studies compiled into statistically weighted results medicine. Input symptoms and test results, output diagnosis and CPT medical billing code medicine.

Beep-bop-boop you’re dead, have a nice day. (To the industry, not the patient, who for this ethnicity and phenotypic profile can expect 55.2% efficacy and 82.1% efficiency over controls who received traditional care.)

And no other medical discipline is as scientifically mechanical as genetics. How mechanized? Watch, I’ll make a free clinical diagnosis machine prototype right here on my blog. Yes, not a risk report, a real diagnosis. Just input your genome sequencing service account login, and my machine will download your genome securely from your sequencing service and practice medicine.

Your Sequence Account OpenID URL:

Your Password:

#!/usr/bin/env python
import re
import genome  # hypothetical module exposing the customer's sequence

# Huntington's disease: diagnosis depends on CAG-repeat length in HTT
HTT = genome.autosome[4][3046205:3215485]

# check the longest repeat ranges first; a shorter pattern would
# also match inside a longer CAG run
if re.search(r'(CAG){41,}', HTT):
    print "positive"
elif re.search(r'(CAG){36,40}', HTT):
    print "reduced penetrance"
elif re.search(r'(CAG){27,35}', HTT):
    print "mutable"
else:
    print "negative"

Would you like to subscribe to our genetic counseling service? Now, only $89.95 a month!
Don’t have a sequence? Sequence with us! Now only $999.
Enter your credit card and your Gmail account and password below.

The real question is: are you dead like “Ma Bell” AT&T, the century-old institution that suddenly crumbled and consolidated? Or are you dead like the newspaper, in a slow and humiliating decline of consolidation and falling standards, with only a few prestigious survivors?

So, OK, some genetic counselors will go back to school for additional nursing or physician assistant credentials, and some genetic centers may begin staffing appropriately, if they can afford to despite the gross waste in health care already. But realistically, I’m supposed to believe that genetically trained nurses are the solution? A new, more expensive kind of medical profession for which academic and institutional support will take decades to mature? When America is straining under grossly inflated health care costs during an economic depression? Meanwhile, I can hypothetically write a Python script to practice medicine and hire a perky and well-educated Indian genetic counselor to answer phones now for cheap?

Have fun breaking the system, Dr. Murphy. It needs to be broken, sure, but I’ll meet you down here at the bottom.

Your personal health: Free means education [business|bytes|genes|molecules]

Posted: 08 Oct 2008 01:20 AM CDT

One more post today. Russ Altman talks about how the cost of genotyping is asymptoting to free (eerie parallels to Chris Anderson’s Free). The post comes on the heels of the launch of Complete Genomics and its announcement that it will enable a $5000 genome in 2010.

But that’s not the part that I want to talk about. It’s another part of the post

We must work to educate every day people about how to interpret their genome and how to use the information beneficially. Trying to protect them in a paternalistic manner is going to fail, I fear–because it will be too easy to get the information cheaply. So solutions that rely on government regulations or mandatory insertion of a physician in the process of ordering genetic sequencing (don't get me wrong, I love physicians–I am one) are not going to work.

Given past writings on this subject, no surprise, I agree wholeheartedly. We need to educate, and educate fast.


Your personal health: My fair PHR [business|bytes|genes|molecules]

Posted: 08 Oct 2008 12:48 AM CDT

Recent visits to various medical facilities have been accompanied by filling out a lot of long forms, each asking for essentially the same information. It is highly inefficient, and frankly a waste of time, especially if you have to fill out a form even “if anything has changed since your last visit”. Talk about not following the principle of DRY. This is why I think the Personal Health Record is not just important, but essential. In other words, an electronic record, which follows some accepted standards, and one that you can provide to a physician or laboratory when required (like before an initial visit). Any changes to your record, e.g. allergies or a change of address, can be broadcast automatically to all approved physicians.

So there, I want a PHR, and I want every physician I see to accept a PHR-ingestion system that allows me to broadcast my PHR to them (with appropriate authentication and identity systems in place), one they can then annotate appropriately. Results/records can then be broadcast back into my record (should this be done upon request or automatically?), so that I can share them with another physician if required.
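As a toy sketch of that broadcast model (all class and field names here are hypothetical; a real PHR would also need the standards compliance, authentication, and identity systems mentioned above):

```python
from dataclasses import dataclass, field

@dataclass
class PersonalHealthRecord:
    """Minimal PHR that pushes every change to approved subscribers."""
    fields: dict = field(default_factory=dict)
    approved: list = field(default_factory=list)  # physician inboxes

    def approve(self, physician_inbox):
        """Grant a physician access to future updates."""
        self.approved.append(physician_inbox)

    def update(self, key, value):
        """Change the record and broadcast the change to subscribers."""
        self.fields[key] = value
        for inbox in self.approved:
            inbox.append((key, value))

inbox_a, inbox_b = [], []
phr = PersonalHealthRecord()
phr.approve(inbox_a)
phr.approve(inbox_b)
phr.update("allergies", ["penicillin"])
print(inbox_a)  # [('allergies', ['penicillin'])]
```

The design choice worth noting is that the patient's record is the single source of truth and physicians are subscribers, which is exactly the inversion of today's clipboard-per-clinic arrangement.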

It’s time we moved our health into the 21st century.


A DNA Chip Pioneer Talks About His Latest Invention [adaptivecomplexity's column]

Posted: 07 Oct 2008 09:39 PM CDT

Joseph DeRisi is a young scientist who, at age 39, has already racked up some amazing career achievements. Here he tells the story of his new DNA chip for detecting viruses:

My colleague Dave Wang and I were sitting around the office one day in 2000 asking, "How were viruses discovered in the past?"

We knew that it had always been a laborious and time-consuming effort. When an epidemic struck, what researchers generally did was go to electron microscopes and try to figure out what they were seeing. Sometimes, it took 10, 20 years to find a virus they knew had to be in there.

Earlier, when I was a Stanford graduate student, I'd worked on developing DNA microarrays, which are often called DNA chips. They allow a researcher to do many biological tests at once. The chips are now widely used in gene discovery, cancer detection, drug discovery and toxicology. So Dave and I reasoned that these DNA microarrays would be perfect for viral discovery. I said, "We can build a similar device representing every virus ever discovered, and it could simultaneously look for them."

