Thursday, July 31, 2008

The DNA Network

Water on Mars [Bayblab]

Posted: 31 Jul 2008 07:46 PM CDT


Straight from NASA. Water has been found on Mars.

Manhattan Institute ponders personal genomics regulation (wonks on wonks) [biomarker-driven mental health 2.0]

Posted: 31 Jul 2008 06:01 PM CDT

The Manhattan Institute policy think tank posts some commentary (including a piece by yours truly) in its Medical Progress Today section on the recent regulatory steps (backward).

Transgenic Aircraft Composites []

Posted: 31 Jul 2008 04:56 PM CDT

Here’s something exciting: Biologists studying the marine ragworm were chasing the question of why its tiny fangs contain such a high concentration of zinc. At first they thought it was the little critter’s way of dumping excess zinc, but nature was way ahead of them: it turns out that zinc plus a few special proteins yields an outrageously strong, light material – way stronger and lighter than carbon fiber composites. It doesn’t take much imagination to see the transgenic angle: create your genetically modified E. coli to ooze super-light aircraft protein material. Wonder how good the guys at Boeing would be at tending a fermenter?

DNA Direct Is Confirmed To Be In Compliance With State Law [DNA Direct Talk]

Posted: 31 Jul 2008 04:49 PM CDT

It’s official: DNA Direct has received a formal letter from the California Department of Public Health (CDPH) stating that we are operating in compliance with state laboratory law. Specifically, the letter states that DNA Direct's tests are performed only with a physician order and are conducted at licensed laboratories, and that DNA Direct gives validated [...]

We will pay for you to die, but not for you to live [Mary Meets Dolly]

Posted: 31 Jul 2008 04:13 PM CDT


That is the message Barbara Wagner got in a letter from the Oregon state health plan, which stated it would not cover the chemotherapy her doctor prescribed but would cover the cost of physician-assisted suicide.

Please watch this piece by KVAL news, where Barbara is rightly outraged. Interestingly enough, it is the pharmaceutical company Genentech that has come to Barbara's rescue by providing her chemo drugs for free. So much for those super-evil Big Pharma companies and the benevolent state-sponsored health care.

For those of you who live in Washington state, take notice. You will be voting on an Oregon-like assisted suicide initiative in November. If you want your insurance plan to write you a similar letter when you get a life-threatening disease, then vote for I-1000. If you would rather have your insurance pay for the expensive doctor-recommended treatment instead of the cheap overdose that will kill you early, then vote NO on I-1000.

The slogan for the I-1000 campaign is "It's my decision!" Looks to me like, if you choose to live, it is really up to your insurance company.

Dial 'C' for Cancer [Bayblab]

Posted: 31 Jul 2008 01:56 PM CDT

I'm not a cell phone user. Even my grandfather has made the leap, while I remain in the technological dark ages with a phone attached to the wall with a wire! But despite being completely unreachable the second I walk out the door, there may be a bright side. Last week, the director of the University of Pittsburgh Cancer Institute issued a warning about cell phone use and a potential link to cancer:
The head of a prominent cancer research institute issued an unprecedented warning to his faculty and staff Wednesday: Limit cell phone use because of the possible risk of cancer.
The warning from Dr. Ronald B. Herberman, director of the University of Pittsburgh Cancer Institute, is contrary to numerous studies that don't find a link between cancer and cell phone use, and a public lack of worry by the U.S. Food and Drug Administration.
Now this is something many of us have heard before - cell phones cause cancer - but how true is it? Is this warning warranted, or is Dr. Herberman yelling fire in the proverbial theatre? Since the data this is based on are unpublished, it's impossible for me to comment on them. However, studies on cell phones and cancer have been done, so we can look at the data we have so far.

Several years ago, the US Food and Drug Administration's consumer magazine had a piece explaining some of the difficulties in getting an answer to the phone-cancer question. The article included a very brief summary of studies that had been done (as well as some commentary by cancer researchers). The conclusion there was that there is very little to no risk of brain cancer from cell phone use. However, the intervening 8 years have been ample time for further studies to be done. A quick PubMed search turns up numerous animal studies showing no increase in cell death or proliferation in mice exposed to non-ionizing radiation of the same frequency used by mobile phones. Many of the human studies (retrospective studies, for example here and here) are also negative, but they aren't exclusively so. While meta-analysis is not ideal for answering this type of question (see here for a brief discussion, particularly Bayman's commentary there), so far this year at least two meta-analyses have been done on the issue. This study found no overall increase in brain tumors (though the authors note a potential elevated risk with long-term cell phone use in a subset of the cases included in the analysis). A second analysis is more assertive in its conclusions, claiming "a consistent pattern of an association between mobile phone use and ipsilateral glioma and acoustic neuroma using ≥10-years latency period" (while its data show that shorter-term use is not associated with increased brain cancers).

So what's the answer? Is there a risk to cell phone use? The literature seems to indicate that in the short term, no, probably not. Long-term (>10 years) usage starts to tilt things towards the positive, although not in a huge way (the studies mentioned above indicate that the increased risk is relatively small). As is usually the case with these situations, the answer will probably bounce back and forth for a while until a definitive study is done to put the question to rest.

This brings us back to the renewed warnings issued last week. Given that the potential risks are small (but not negligible) and associated only with long-term exposure, the question becomes: what was the proper way to deal with the unpublished data from the University of Pittsburgh Cancer Institute? Again, it's difficult to say without seeing the data in question, but given the information we have so far it's hard to imagine a definitive, home-run conclusion (Herberman, who issued the warning, says as much in the news article). Should he have waited for peer review before sounding the alarm (would an extra few months before releasing the data make a difference when talking about a 10-year latency)? Is it better to err on the side of caution? After all, he's not calling for an end to cellphone use, but rather for certain precautions, particularly with children. Dr. Herberman defends himself thusly:
Herberman is basing his alarm on early unpublished data. He says it takes too long to get answers from science and he believes people should take action now - especially when it comes to children.
"Really at the heart of my concern is that we shouldn't wait for a definitive study to come out, but err on the side of being safe rather than sorry later," Herberman said.
Without seeing the unreleased data, is it enough to change your cellphone habits? For the University of Pittsburgh statement and the recommended precautions, go here.

Charlie Rose Interviews Thoughtleaders in Personal Genomics [The Personal Genome]

Posted: 31 Jul 2008 01:55 PM CDT

Interviews with David Agus (co-founder, Navigenics), Dean Ornish (president, Preventive Medicine Research Institute), and George Church of Harvard Medical School’s Personal Genome Project.

Previously on Charlie Rose, Esther Dyson discusses her participation in the PGP.

Waiting for updates ... [The Daily Transcript]

Posted: 31 Jul 2008 12:28 PM CDT

My iPod has been a great addition to my life. I use it to listen to podcasts and audiobooks on my half-hour walks, to and from work. But recently two of my favorite items have gone into suspended animation.

Read the rest of this post... | Read the comments on this post...

Questions from our mailbag: How do I cite FinchTV? [FinchTalk]

Posted: 31 Jul 2008 11:55 AM CDT

One of the questions that appears in our mailbox from time to time concerns citing FinchTV or other Geospiza products. A quick search with Google Scholar for "FinchTV" finds 42 examples where...

Diabetes and African admixture [Yann Klimentidis' Weblog]

Posted: 31 Jul 2008 11:21 AM CDT

Several interesting things in this paper:
- comparison of three admixture estimation methods using various numbers of markers
- the effect of admixture on associations between diabetes and variants in diabetes candidate genes

Probably the most important finding is the high proportion of African ancestry in females with diabetes, compared to female controls and to male cases and controls.
Their interpretation:
If AA females with T2DM have genes of "African descent" that are involved in energy storage and expenditure, but also influence or are influenced by gender-specific pathways, when exposed to a westernized lifestyle, these individuals will be at an increased risk of obesity and subsequently developing T2DM.
also, their finding makes sense in terms of:
the unexplained increase in risk for having a family history of ESRD that is present among African American women, relative to men (Freedman et al. 2005; McClellan et al. 2007)
Exploration of the utility of ancestry informative markers for genetic association studies of African Americans with type 2 diabetes and end stage renal disease.
Keene KL, Mychaleckyj JC, Leak TS, Smith SG, Perlegas PS, Divers J, Langefeld CD, Freedman BI, Bowden DW, Sale MM.
Hum Genet. Online First, 2008 Jul 25.
Abstract: Admixture and population stratification are major concerns in genetic association studies. We wished to evaluate the impact of admixture using empirically derived data from genetic association studies of African Americans (AA) with type 2 diabetes (T2DM) and end-stage renal disease (ESRD). Seventy ancestry informative markers (AIMs) were genotyped in 577 AA with T2DM-ESRD, 596 AA controls, 44 Yoruba Nigerian (YRI) and 39 European American (EA) controls. Genotypic data and association results for eight T2DM candidate gene studies in our AA population were included. Ancestral estimates were calculated using FRAPPE, ADMIXMAP and STRUCTURE for all AA samples, using varying numbers of AIMs (25, 50, and 70). Ancestry estimates varied significantly across all three programs with the highest estimates obtained using STRUCTURE, followed by ADMIXMAP; while FRAPPE estimates were the lowest. FRAPPE estimates were similar using varying numbers of AIMs, while STRUCTURE estimates using 25 AIMs differed from estimates using 50 and 70 AIMs. Female T2DM-ESRD cases showed higher mean African proportions as compared to female controls, male cases, and male controls. Age showed a weak but significant correlation with individual ancestral estimates in AA cases (r² = 0.101; P = 0.019) and in the combined set (r² = 0.131; P = 3.57 × 10⁻⁵). The absolute difference between frequencies in parental populations, absolute delta, was correlated with admixture impact for dominant, additive, and recessive genotypic models of association. This study presents exploratory analyses of the impact of admixture on studies of AA with T2DM-ESRD and supports the use of ancestral proportions as a means of reducing confounding effects due to admixture.

Mapping Connections in the Human Brain [Highlight HEALTH]

Posted: 31 Jul 2008 11:07 AM CDT

The first high-resolution structural connection map of the human cerebral cortex was published earlier this month in the journal PLoS Biology. The study reveals regions that are highly connected and central, forming a structural core network [1]. Intriguingly, this core network consists of many areas that are more active when we’re at rest than when we’re engaged in a task that requires concentration.

In the human brain, a complex network of fiber pathways links all regions of the cerebral cortex. This “brain wiring” is responsible for shaping neural activation patterns. To better understand the structural basis of functional connectivity patterns in the human brain, researchers at Indiana University and Harvard Medical School in the U.S., and at the University Hospital Center and the University of Lausanne in Lausanne, Switzerland, used a brain imaging technique called diffusion spectrum imaging to map out axonal pathways in cortical white matter.

The outermost layer of the cerebrum is termed gray matter. Below the gray matter of the cortex, white matter, consisting of myelinated axons, interconnects different regions of the central nervous system. Axons, or nerve fibers, are long filaments that project from a nerve cell (called a neuron) and carry electrical impulses away from the cell body to other neurons. Myelinated axons are axons surrounded by an electrically insulating phospholipid layer (a fat-based molecule; think insulated brain wiring). The end of the axon comes close to but does not contact the next nerve cell; this gap between neurons is referred to as a synapse or synaptic junction.

Diffusion spectrum imaging is a type of MRI that identifies water molecules and monitors how they move. Because water molecules move along the length of nerve fibers, they can be used to discern brain structural connectivity. The noninvasive technique offers researchers the ability to compare neural connection variability between participants and to relate it to differences in individual functional connectivity and behavior.

The scientists used network theory, usually applied in social network analysis, to measure network properties such as degree and strength (i.e. the extent to which a node is connected to the rest of the network), centrality and efficiency (i.e. how many short paths between other parts of the network pass through the node), and betweenness. Using measures from five healthy people (participants A-E), they identified brain regions that were highly connected and contained numerous connector hubs.
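
For readers unfamiliar with these graph measures, here is a minimal sketch of how they can be computed on a toy weighted network with the Python networkx library. This is purely illustrative and not the authors' pipeline; the region names, edge weights, and the choice of networkx are all my own assumptions.

import networkx as nx

# Toy network: nodes stand in for cortical regions, edge weights for
# connection density (all values invented for illustration).
G = nx.Graph()
G.add_weighted_edges_from([
    ("precuneus", "posterior_cingulate", 0.9),
    ("precuneus", "superior_parietal", 0.7),
    ("posterior_cingulate", "superior_parietal", 0.5),
    ("superior_parietal", "lateral_temporal", 0.2),
])

degree = dict(G.degree())                    # number of connections per node
strength = dict(G.degree(weight="weight"))   # sum of edge weights per node
betweenness = nx.betweenness_centrality(G)   # fraction of shortest paths passing through each node
efficiency = nx.global_efficiency(G)         # average inverse shortest-path length across node pairs

for node in G.nodes():
    print(node, degree[node], strength[node], round(betweenness[node], 2))
print("global efficiency:", round(efficiency, 3))

Nodes that score high on degree, strength, and betweenness in such a graph are the "hubs" the study describes; on the real data the graph would of course be far larger than this four-node toy.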

The researchers found evidence for the existence of a structural core composed of densely connected regions at the top and back of the brain, straddling both hemispheres (figure). An earlier imaging study measured blood flow to different parts of the brain [2]; blood flow positively correlated with the level of white-matter axonal connectivity in individual participants.

Interestingly, the most highly connected brain regions have been shown in various studies to have high levels of energy utilization and activation at rest [2], and significant deactivation during goal-directed tasks [2-4]. Termed the brain’s “default network”, this set of regions has been suggested to be involved in stimulus-independent thought (daydreaming) [5] and other internally focused thoughts and cognitions, such as remembering the past, envisioning future events and considering the thoughts and perspectives of other people [6]. These tasks all activate multiple regions within the default network.

According to Dr. Olaf Sporns, co-author of the study and a neuroscientist at Indiana University [7]:

This is one of the first steps necessary for building large-scale computational models of the human brain to help us understand processes that are difficult to observe, such as disease states and recovery processes to injuries.

Indeed, the default network is disrupted in several human disorders, including autism, schizophrenia and Alzheimer’s disease [6]. Using this technique to map out axonal pathways in the cortical network of patients with such diseases should help scientists identify default network regions that are disrupted. Researchers hope that they can better understand these disorders based on changes in the brain’s connectivity map.

For additional information on the human cortex and neurons, YouTube has a great video excerpt from the Discovery Channel on Neurons and How They Work.

References

  1. Hagmann et al. Mapping the Structural Core of Human Cerebral Cortex. PLoS Biol. 2008 Jul 1;6(7):e159. [Epub ahead of print] DOI: 10.1371/journal.pbio.0060159
  2. Raichle et al. A default mode of brain function. Proc Natl Acad Sci U S A. 2001 Jan 16;98(2):676-82.
  3. Gordon et al. Common Blood Flow Changes across Visual Tasks: II. Decreases in Cerebral Cortex. J Cogn Neurosci. 1997;9(5):648-663.
  4. Fox et al. The human brain is intrinsically organized into dynamic, anticorrelated functional networks. Proc Natl Acad Sci U S A. 2005 Jul 5;102(27):9673-8. Epub 2005 Jun 23.
  5. Mason et al. Wandering minds: the default network and stimulus-independent thought. Science. 2007 Jan 19;315(5810):393-5.
  6. Buckner et al. The brain’s default network: anatomy, function, and relevance to disease. Ann N Y Acad Sci. 2008 Mar;1124:1-38.
  7. New map IDs the core of the human brain. Indiana University Press Release. 2008 July 1.

Happy Birthday Rob! [Bayblab]

Posted: 31 Jul 2008 10:14 AM CDT

Today is Bayblabber Rob's birthday. If you can't be here to celebrate, you can wish Rob a happy birthday as you watch him blow out the candles on his cake below. Rob is a man of many talents indeed!

Genetic Genealogy Patents - A Brief Review [The Genetic Genealogist]

Posted: 31 Jul 2008 09:22 AM CDT

Yesterday, DNA Heritage issued a press release (reproduced below) regarding an opinion issued by the UK Intellectual Property Office (UK IPO). The opinion (available here) was the result of an inquiry into whether claims 4-7 of a 2004 UK patent are valid. The patent, held by Bryan Sykes of Oxford Ancestors, is directed at creating and using a database of Y-DNA haplotype information to examine surname relationships and determine the likelihood of common ancestry between individuals. The UK IPO's opinion holds that the claims are invalid because they either lack novelty or do not involve an inventive step (i.e., they are obvious). Most intellectual property offices, such as those in the UK and the US, require that an invention at least be novel and nonobvious.

Interestingly, Sykes obtained a similar patent from the U.S. Patent Office on July 24, 2007 (US 7,248,970). During prosecution (i.e. the process of obtaining the patent), the patent examiner stated that the patent was not novel in view of certain references such as scientific articles by Paoli et al. and Jobling et al., but the applicant was able to argue around that point. The issue of obviousness was never raised by the examiner.

While the UK opinion doesn't affect the validity of the U.S. patent, the same references that were used to argue that the UK patent lacks novelty or an inventive step could potentially be used to argue invalidity of the US patent if someone were to challenge it.

As many of you know, I'm currently studying to be a patent attorney, so this is right up my alley. I wondered how many other patents related to genetic genealogy exist, so I did a quick patent search and came up with the following list. Only the first item is an issued patent; the others are published applications, meaning that they are or were pending before the patent office but have not yet been issued as final patents. Note that all of this information is publicly available through the PAIR system at the United States Patent Office.

  • 7,248,970 – Forensic and Genealogical Test – Discussed above. The inventor is Bryan Sykes.
  • 2003/0172065 – System and Method for Molecular Genealogical Research – A method for identifying commonalities in haplotypes and other genetic characteristics of two or more individual members of a biological sample. The inventors are James L. Sorenson, Scott R. Woodward, Joel Myres, and Natalie Myres.
  • 2004/0029133 – Mitochondrial DNA Polymorphisms – A method of using SNPs to detect diseases, determine haplogroups, or establish genetic relationships. The inventor is Corinna Hernstadt. This application is currently marked as abandoned.
  • 2004/0229231 – Compositions and Method for Inferring Ancestry – A method using Ancestry informative markers to draw inferences about traits. The inventors are Tony N Frudakis and Mark D. Shriver. See also 2007/0037182, a related application from the same applicants.
  • 2006/0025929 – Method of Determining a Genetic Relationship to at Least One Individual in a Group of Famous Individuals Using a Combination of Genetic Markers – the title describes the method. The inventor is Chris Eglington. This application is currently marked as abandoned.
  • 2007/0042369 – Methods of Selection, Reporting and Analysis of Genetic Markers Using Broad-Based Genetic Profiling Applications – A method for determining whether an individual has an enhanced, diminished, or average probability of exhibiting a phenotype, or for determining the genomic ethnicity of an individual. The inventors are Martin G. Reese and Charles White.
  • 2007/0178500 – Methods of Determining Relative Genetic Likelihoods of an Individual Matching a Population – the title describes the method. The inventors are Lucas Martin and Eduardas Valaitis.

Now, before you get up in arms, let me make a point. Intellectual property is perhaps the most valuable asset most companies possess. Additionally, our patent system has been the model for hundreds of systems around the world and arguably is responsible for much of the success of our country in the last 200 years. If the patents in the list above contain inventions that are useful, novel, and nonobvious, then they are valid and should be respected.  If not, then they should be challenged and invalidated.

The text of the DNA Heritage press release is as follows:

DNA Heritage success in patent battle helps keep genealogy DNA test prices low.

DNA Heritage has recently overcome patent claims held by a competitor which would have severely restricted the use of DNA testing and databases that allow families around the world to match and connect up through their DNA. The patent covers the use of surnames and Y-chromosomes to establish a family connection. The UK Intellectual Property Office (UK IPO) has now rendered a formal opinion stating that the relevant claims are all invalid.

Alastair Greenshields, principal of DNA Heritage said “Patents are often needed to provide an incentive for innovative work, but in this case the academic work lacked inventiveness as other researchers had already shown the connection between surnames, Y-chromosomes and family history. This is a great outcome and allows DNA Heritage and other test companies to continue offering these tests and database services to the ever-growing genetic genealogy community without having to raise prices due to royalty payments.”

An infringement challenge from patent assignees Oxford Ancestors, which had been running for over two years, was thwarted when DNA Heritage asked the UK IPO to re-evaluate the patent claims in the light of work by previous researchers. After considering submissions from both parties, an Opinion provided on the 8th April 2008 (with a 3 month review period) found that of the four claims contested, all four were invalid for lack of an inventive step, and one was additionally lacking novelty.

Results from DNA Heritage’s testing, which uses at-home cheek swabs and their accredited laboratory, can be fed into both an in-house database and the public-access Ybase database. These have become invaluable tools for those researching their direct male lineage and surname.

About the company:

Established in 2002, DNA Heritage provides advanced genetic tests to the genealogy community and those tracing their roots. An innovator in providing multi-lab compatible tests and the open-access Ybase database, DNA Heritage enables many genealogists around the world to enhance their families’ research.  For more information, visit http://www.dnaheritage.com.

Sir Paul Nurse on Information in Biology [Bitesize Bio]

Posted: 31 Jul 2008 05:20 AM CDT

A bit of a rant today: Nobel laureate Paul Nurse has an article in last week’s issue of Nature that strikes me as having a bunch of buzzwords but a misguided message. In Life, Logic, and Information, Nurse becomes the latest in an endless line of prognosticators to claim that a major transition is near, this time in biology.

He also uses the popular buzzword in its emergence/complexity sense (see Stuart Kauffman), which brings all sorts of holistic connotations along with it.

Living organisms are complex systems made up of many interacting components, the behaviour of which is often difficult to predict and so is prone to unexpected outcomes. Systems analyses of living organisms have used a variety of biochemical and genetic interaction traps with the emphasis on identifying the components and describing how these interact with each other. These approaches are essential but need to be supplemented by more investigation into how living systems gather, process, store and use information, as was emphasized at the birth of molecular biology.

Information processing? I don’t remember that in any of my biochemistry courses. It sounds like something someone from my IT department might say if he tried to grossly oversimplify molecular biology to the point that it wasn’t molecular biology any more.

The lac operon, for instance, isn’t *really* an electronic switch. Skipping over the biochemical interaction coefficients is fatal to one’s understanding if he or she uses the lac operon model to explain gene and protein regulation in the eukaryotic cell.

At a more fundamental level, at least Nurse acknowledges that reductionist approaches are necessary to understanding emergent behavior in cell biology. But he suggests that you really can get an additional form of understanding or knowledge about a system in ways other than breaking it down into its components.

Reductionism versus holism (or synthesis) as modes of building understanding is not a new argument, however. E.O. Wilson expertly covers this topic in chapter five of his book Consilience:

To dissect a phenomenon into its elements, [...] is consilience by reduction. To reconstitute it, and especially to predict with knowledge gained by reduction how nature assembled it in the first place, is consilience by synthesis. That is the two-step procedure by which natural scientists generally work: top down across two or three levels of organization at a time by analysis, then bottom up across the same levels by synthesis.

And Paul Nurse, as well as others who mention information processing in biological systems, reminds me of people who have forgotten that synthesis is possible only *after* reductionist explanations are sufficiently complete. Clearly, we’re not there yet in understanding the complexity of the cell, or of the brain, well enough to begin synthesizing artificial versions.

Darwin’s Evolution at Calouste Gulbenkian Foundation [My Biotech Life]

Posted: 30 Jul 2008 11:19 PM CDT

Although I’m certain I’ll be writing about this again, let me take this opportunity to show off a cool site for an exhibition that will be taking place at the Calouste Gulbenkian Foundation in Lisbon, Portugal celebrating Charles Darwin’s 200th birthday.

Darwin's Evolution @ Calouste Gulbenkian Foundation

The website is available in English and Portuguese and full of great content. Even a blog!


Instead of Blogging, I was Overloaded on Science in Toronto [adaptivecomplexity's column]

Posted: 30 Jul 2008 11:02 PM CDT

I spent last week at the 2008 Yeast Genetics and Molecular Biology meeting at the University of Toronto. Don't be fooled by the name: this conference isn't about yeast in and of itself; it's about tackling basic problems in biology.

Read More...

The confusion over data rights [business|bytes|genes|molecules]

Posted: 30 Jul 2008 10:58 PM CDT

As a side note, I talked to a colleague who got harassed at the Ichs and Herps meeting for… gasp… downloading sequences from GenBank and using them without asking the author’s permission! Good lord, what is the world coming to? I’m surprised to hear of such active resistance to public availability of information.

Paulo Nuin pointed to a blog post on Phylota on FriendFeed earlier today. The post is interesting in itself, but the paragraph above, which was an aside in the post, blew my mind. There was a time when I had the naive opinion that academics were all about the open dissemination of science, especially the sharing of basic scientific data. Alas, it turns out that for some the public domain is not exactly that. I suppose that this is a minority opinion, but it is clear that the confusion about scientific data and ownership needs to be resolved, and fast. It should be obvious, but it isn't, and even those of us who should know better get confused. In the above case, a complaint that a paper had not cited the data source properly would be understandable, but harassing someone for downloading and using sequences? Yowza!!!

There is a distinction between data and content/information. Too many people have trouble making the distinction, and as a result there is confusion about the ownership rights around the two. Anyway, this issue isn't going anywhere soon, it seems.


Evolution: Forget Random Mutation - Variation is the Real Issue [adaptivecomplexity's column]

Posted: 30 Jul 2008 10:40 PM CDT

Part 1 on The Plausibility of Life

Darwin is famous for convincingly arguing that natural selection can explain why living things have features that are well-matched to the environment they live in. In the popular consciousness, evolution is often thought of as natural selection acting on random mutations to produce the amazing tricks and traits found in the living world. But "random mutation" isn't quite right - when we describe evolution like this, we pass over a key problem that Darwin was unable to solve, a problem which today is one of the most important questions in biology. This key problem is the issue of variation, which is what biologists really mean when they talk about natural selection acting on random mutations. Variation and mutation are not the same thing, but they are connected. How they are connected is the most important issue covered in Kirschner and Gerhart's The Plausibility of Life. It is an issue Darwin recognized, but couldn't solve in those days before genetics really took off as a science.

Natural selection really works on organisms, not directly on mutations: a particular cheetah survives better than other cheetahs because it can run faster, not because it has a DNA base 'G' in a particular muscle gene. A domesticated yeast can survive in a wine barrel because of how it metabolizes sugar, not because of the DNA sequence of a metabolism gene. I know what you're thinking: this is just a semantic game over proximal causes. But this is not just semantics, it is a real scientific problem: what is the causal chain that leads from genotype to phenotype, that is, from an individual organism's DNA sequence, mutations included, to the actual physical or physiological traits of the complete organism?

Read More...

Paring pair frequencies pares virus aggressiveness [Omics! Omics!]

Posted: 30 Jul 2008 10:25 PM CDT

Okay, a bit late with this as it came out in Science about a month ago, but it's a cool paper & illustrates a number of issues I've dealt with at my current shop. Also, in the small world department, one of the co-authors was my eldest brother's roommate at one point.

The genetic code uses 64 codons to code for 21 different symbols -- 20 amino acids plus stop. Early on this was recognized as implying that either (a) some codons are simply not used or (b) many symbols have multiple, synonymous codons, which turns out to be the case (except in a few species, such as Micrococcus luteus, which have lost the ability to translate certain codons).

Early on in the sequencing era (certainly before I jumped in) it was noted that not all synonymous codons are used equally. These patterns, or codon biases, are specific to particular taxa. The codon usage of Escherichia is different from that of Streptomyces. Furthermore, it was noted that there is a signal in pairs of successive codons; that is, the frequency of a given codon pair is often not simply the product of the two codons' individual frequencies. This was (and is) one of the key signals which gene-finding programs use to hunt for coding regions in novel DNA sequences.
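
To make that codon-pair signal concrete, here is a rough Python sketch (my own illustration, not taken from any actual gene-finding program) that compares each codon pair's observed frequency in a coding sequence with the frequency expected from the product of the two codons' individual frequencies; the example sequence is made up.

from collections import Counter

def codon_pair_bias(cds):
    """Observed/expected ratio for each codon pair in an in-frame coding sequence."""
    codons = [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]
    pairs = list(zip(codons, codons[1:]))
    codon_freq = Counter(codons)
    pair_freq = Counter(pairs)
    n_codons, n_pairs = len(codons), len(pairs)
    bias = {}
    for (a, b), count in pair_freq.items():
        observed = count / n_pairs
        expected = (codon_freq[a] / n_codons) * (codon_freq[b] / n_codons)
        bias[(a, b)] = observed / expected  # >1: pair over-represented; <1: under-represented
    return bias

# Made-up example sequence, purely for illustration
print(codon_pair_bias("ATGGCTGCTAAAGCTGGTAAAGCTTAA"))

In a real gene finder these ratios would be estimated from large training sets and typically combined as log-odds scores across a candidate reading frame.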

Codon bias can be mild or it can be severe. Earlier this year I found myself staring at a starkly simple codon usage pattern: C or G in the 3rd position. In many cases the C+G codons for an amino acid had >95% of the usage. For both building & sequencing genes this has a nasty side-effect: the genes are very GC rich, which is not good (higher melting temp, all sorts of secondary structure options, etc).

Another key discovery is that codon usage often correlates with protein abundance; the most abundant proteins show the greatest hewing to the species-specific codon bias pattern. It further turned out that the tRNAs matching highly used codons tend to be the most abundant in the cell, suggesting that frequent codons optimize expression. Furthermore, it could be shown that in many cases rare codons could interfere with translation. Hence, if you take a gene from organism X and try to express it in E. coli, it would frequently translate poorly unless you recoded the rare codons out of it. Alternatively, expressing additional copies of the tRNAs matching rare codons could also boost expression.

Now, in the highly competitive world of gene synthesis this was (and is) viewed as a selling point: building a gene is better than copying it as it can be optimized for expression. Various algorithms for optimization exist. For example, one company optimizes for dicodons. Many favor the most common codons and use the remainder only to avoid undesired sequences. Locally we use codons with a probability proportional to their usage (after zeroing out the 'rare' codons). Which algorithm is best? Of course, I'm not impartial, but the real truth is there isn't any systematic comparison out there, nor is there likely to be one given the difficulty of doing the experiment well and the lack of excitement in the subject.
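
As a concrete illustration of the "codons with a probability proportional to their usage, after zeroing out the rare codons" strategy mentioned above, here is a short Python sketch. The usage fractions and the rarity cutoff are invented numbers for two amino acids, not a real codon usage table, and this is not our production algorithm.

import random

# Hypothetical relative codon usage for two amino acids (invented numbers;
# a real table would come from the target organism's genes).
usage = {
    "L": {"CTG": 0.50, "CTC": 0.20, "TTG": 0.13, "CTT": 0.12, "TTA": 0.03, "CTA": 0.02},
    "K": {"AAA": 0.42, "AAG": 0.58},
}
RARE_CUTOFF = 0.10  # codons used less often than this are never chosen

def pick_codon(aa):
    # Zero out rare codons, then sample the rest in proportion to their usage.
    allowed = {codon: freq for codon, freq in usage[aa].items() if freq >= RARE_CUTOFF}
    codons, weights = zip(*allowed.items())
    return random.choices(codons, weights=weights, k=1)[0]

def back_translate(protein):
    return "".join(pick_codon(aa) for aa in protein)

print(back_translate("LKLK"))  # stochastic output, e.g. CTGAAGCTCAAA

One appeal of sampling rather than always choosing the single most common codon is that it avoids long runs of identical codons and the extreme GC skew described above.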

Besides the rarity of codons affecting translation levels, how else might synonymous codons not be synonymous? The most obvious possibility is that synonymous codons may sometimes have other signals layered on them -- that 'free' nucleotide may be fixed for some other reason. A more striking possibility, oft postulated but difficult to prove, is that rare codons (especially clusters of them) may be important for slowing the ribosome down and giving the protein a chance to fold. In one striking example, changing a synonymous codon can change the substrate specificity of a protein.

What came out in Science uses codon rewriting, enabled by synthetic biology, on a grand scale. Live virus vaccines are just that: live, but attenuated, versions of the real thing. They have a number of advantages (such as being able to jump from one vaccinated person to an unvaccinated one), but the catch is that attenuation is due to a small number of mutations. Should these mutations revert, pathogenicity is restored. So, if there were a way to make a large number of mutations of small effect in a virus, then the probability of reversion would be low but the sum of all those small changes would still attenuate the virus. And that's what the Science authors have done.

Taking poliovirus, they recoded the protein coding regions to emphasize rare (in human) codon pairs (PV-Min). They did this while preserving certain other known key features, such as secondary structures and overall folding energy. A second mutant was made that emphasized very common codon pairs (PV-Max). In both cases, more than 500 synonymous mutations were made relative to wild-type polio. Two further viruses were built by subcloning pieces of the synthetic viruses into a wild-type background.

Did this really do anything? Well, their PV-Max had similar in vitro characteristics to wild virus, whereas PV-Min was quite docile, failing to make plaques or kill cells. Indeed, it couldn't be cultured in cells.

The part-Min, part-wild-type chimaeras also showed severe defects, and some also couldn't be propagated as viruses. However, one containing two segments of engineered low-frequency codon pairs, called PV-MinXY, could be propagated but was greatly attenuated. While its ability to make virions was only slightly reduced (to perhaps one tenth the number), more strikingly, about 100X as many virions were required for a successful infection. Repeated passaging of PV-MinXY and another chimaera failed to alter the infectivity of the viruses; the strategy of stabilizing attenuation through a plethora of small mutations appears to work.

When my company was trying to sell customers on the value of codon optimization, one frustration for me as a scientist was the paucity of really good studies showing how big an effect it could have. Most studies in the field are poorly done, with too few controls and only a protein or two. Clearly there is a signal, but it was always hard to really say "yes, it can have huge effects". In this study of codon optimization writ large, however, codon choice clearly has enormous effects.