Thursday, May 22, 2008

The DNA Network

FriendFeed takes an interesting step [business|bytes|genes|molecules]

Posted: 22 May 2008 07:32 PM CDT

FriendFeed is one of my favorite places on the web. Not only does it aggregate the various content I produce on the web, it is also beautifully designed and has spawned one of the best little biogeek communities on the web (case in point: this wonderful discussion on code repositories for science).

Today, FriendFeed added a new feature: rooms. Will rooms be useful? The jury is still out, since they’ve only been around for a couple of hours, but that didn’t stop me from creating a room for our community there. My hope for The Life Scientists is that it will become the kind of resource for me that Digg or Reddit never became: a place for us to share our miscellaneous links and items of interest.

At a more fundamental level, I’ve often talked about trust (channeling Jon Udell). That’s one of the reasons I really like Lijit (which needs to add FriendFeed post haste). The rooms in FriendFeed could become the custom Reddit I always wanted, a trusted source of information. Of course, this being the web, I am cautiously optimistic, but given the quality of FriendFeed thus far, I think that optimism is somewhat justified. The challenge is going to be correctly balancing my own feed with the room.

readings, readers and blogrolling [the skeptical alchemist]

Posted: 22 May 2008 07:23 PM CDT

First of all, I would like to welcome the new readers of the skeptical alchemist. Maybe you aren't necessarily new, but you sure added my feed to your reader/bookmarks, because today this blog registered its highest number of subscribers since it started. I can't believe that there are 82 people out there reading my ramblings. Amazing.

In case you are totally new around here, there are a few good posts you can read; you can find them in the sidebar under the heading "the BEST of". These are some of the best and most-searched posts on the blog, organized by topic.

Last but not least, you could also pay a visit to blogs in my blogroll, and to these new additions:

Sandwalk
Scienceroll
Eye on DNA

I hope you will enjoy your time spent reading the ramblings of the skeptical alchemist.

Human Genetics Disorders.com [ScienceRoll]

Posted: 22 May 2008 04:24 PM CDT


There are only a few sites on the web dedicated to human genetic conditions that provide quality content, so I’m quite happy to have recently come across HumanGeneticsDisorders.com, a unique site maintained by Chavonne Jones:

Welcome to humangeneticsdisorders.com. This website is committed to genetics education and awareness. To obtain a better understanding of the genetic revolution, we provide the history of genetics. The latest information on all human genetic disorders, from Achondroplasia to Wilson’s disease, is available. Other areas of focus include: DNA and RNA sequencing, stem cell research and therapy, genetic testing and screening, genetic selection, genes and behavior, mental health, gene therapy, cytogenetics, pharmacogenetics, xenotransplantation, cloning, and ethical, legal, and social issues in medical genetics.

You can also follow her on Twitter.

Just one recent example from the blog is this video about muscular dystrophy:

We need more similarly useful resources.

Is Blogging Healthy? [ScienceRoll]

Posted: 22 May 2008 04:10 PM CDT


I thought blogging could be dangerous for our health, but now here is another opinion. As Scientific American reports:

Self-medication may be the reason the blogosphere has taken off. Scientists (and writers) have long known about the therapeutic benefits of writing about personal experiences, thoughts and feelings. But besides serving as a stress-coping mechanism, expressive writing produces many physiological benefits. Research shows that it improves memory and sleep, boosts immune cell activity and reduces viral load in AIDS patients, and even speeds healing after surgery. A study in the February issue of the Oncologist reports that cancer patients who engaged in expressive writing just before treatment felt markedly better, mentally and physically, as compared with patients who did not.

Scientists now hope to explore the neurological underpinnings at play, especially considering the explosion of blogs. According to Alice Flaherty, a neuroscientist at Harvard University and Massachusetts General Hospital, the placebo theory of suffering is one window through which to view blogging. As social creatures, humans have a range of pain-related behaviors, such as complaining, which acts as a "placebo for getting satisfied," Flaherty says. Blogging about stressful experiences might work similarly.

Also, blogging might trigger dopamine release, similar to stimulants like music, running and looking at art.

Case of Triplets with Different Fathers Finally Closed [The DNA Testing Blog]

Posted: 22 May 2008 03:39 PM CDT

According to a May 17 article in The Times (UK) online, a rare paternity case involving a set of triplets fathered by two different men has finally been settled after a 10-year legal fight. After the birth of his mistress's triplets in 1997, the man—who allegedly had a brief affair with the triplets' mother—began to doubt [...]

Medicine 2.0 Carnival: How are web 2.0 technologies changing the practice of medicine? [Discovering Biology in a Digital World]

Posted: 22 May 2008 03:18 PM CDT

On June 1st, I'll be hosting the next edition of Medicine 2.0, a carnival devoted to exploring the impacts of web 2.0 technologies on medicine and medical practice.

All topics that consider the impacts of web 2.0 on medicine and healthcare are fair game.

  • Are you talking with doctors about sexually transmitted diseases in Second Life?
  • Have you had your genome sequenced? Do your doctors send you e-mail?
  • Are you using web technologies to measure your food consumption and calorie burning?

If you have an article that you think fits the description, feel free to submit it to me, either via e-mail digitalbio at gmail.com or through the fancy blog carnival submission form.

Herb-Drug Interactions [A Forum for Improving Drug Safety]

Posted: 22 May 2008 01:23 PM CDT


COMMENT:
H. canadensis (goldenseal or Golden Seal, a perennial of the buttercup family), otherwise known as yellow root, orange root, puccoon, ground raspberry or wild curcuma, has been an important herbal medicine since settlers learned of it from Native Americans, who used it for many purposes, including mucosal infections, colds and flu, dyspepsia and vomiting, as a laxative, and for the treatment of hemorrhoids, among many other uses. In 1905 the US Department of Agriculture published a bulletin highlighting its increasing use as an herbal in America. Nowadays it is still a popular herbal; it has recently been listed as one of the top five selling herbals. It contains many alkaloids, especially hydrastine, berberine and canadine. Interestingly, it has been used to mask urine drug screening for marijuana. Past clinical studies have shown it to be a mild ABCB1 inhibitor (inhibiting digoxin by 14%). It is also known to inhibit midazolam but not indinavir, raising the possibility that it is an intestinal CYP3A4 inhibitor but not a hepatic CYP3A4 inhibitor. Recent clinical studies now show that it is also a CYP2D6 and CYP2E1 inhibitor. Clinicians need to be aware of possible drug interactions involving this herbal.

ABSTRACTS:

Mol Nutr Food Res. 2008 Jan 23. Clinical assessment of CYP2D6-mediated herb-drug interactions in humans: Effects of milk thistle, black cohosh, goldenseal, kava kava, St. John’s wort, and Echinacea. Gurley BJ, Swain A, Hubbard MA, Williams DK, Barone G, Hartsfield F, Tong Y, Carrier DJ, Cheboyina S, Battu SK.

Cytochrome P450 2D6 (CYP2D6), an important CYP isoform with regard to drug-drug interactions, accounts for the metabolism of approximately 30% of all medications. To date, few studies have assessed the effects of botanical supplementation on human CYP2D6 activity in vivo. Six botanical extracts were evaluated in three separate studies (two extracts per study), each incorporating 16 healthy volunteers (eight females). Subjects were randomized to receive a standardized botanical extract for 14 days on separate occasions. A 30-day washout period was interposed between each supplementation phase. In study 1, subjects received milk thistle (Silybum marianum) and black cohosh (Cimicifuga racemosa). In study 2, kava kava (Piper methysticum) and goldenseal (Hydrastis canadensis) extracts were administered, and in study 3 subjects received St. John’s wort (Hypericum perforatum) and Echinacea (Echinacea purpurea). The CYP2D6 substrate, debrisoquine (5 mg), was administered before and at the end of supplementation. Pre- and post-supplementation phenotypic trait measurements were determined for CYP2D6 using 8-h debrisoquine urinary recovery ratios (DURR). Comparisons of pre- and post-supplementation DURR revealed significant inhibition (approximately 50%) of CYP2D6 activity for goldenseal, but not for the other extracts. Accordingly, adverse herb-drug interactions may result with concomitant ingestion of goldenseal supplements and drugs that are CYP2D6 substrates.


Food Chem Toxicol. 2007 Dec;45(12):2359-65. Epub 2007 Jun 15. Effects of herbal products and their constituents on human cytochrome P450(2E1) activity. Raner GM, Cornelious S, Moulick K, Wang Y, Mortenson A, Cech NB.

Ethanolic extracts from fresh Echinacea purpurea and Spilanthes acmella and dried Hydrastis canadensis were examined with regard to their ability to inhibit cytochrome P450(2E1)-mediated oxidation of p-nitrophenol in vitro. In addition, individual constituents of these extracts, including alkylamides from E. purpurea and S. acmella, caffeic acid derivatives from E. purpurea, and several of the major alkaloids from H. canadensis, were tested for inhibition using the same assay. H. canadensis (goldenseal) was a strong inhibitor of the P450(2E1), and the inhibition appeared to be related to the presence of the alkaloids berberine, hydrastine and canadine in the extract. These compounds inhibited 2E1 with K(I) values ranging from 2.8 microM for hydrastine to 18 microM for berberine. The alkylamides present in E. purpurea and S. acmella also showed significant inhibition at concentrations as low as 25 microM, whereas the caffeic acid derivatives had no effect. Commercial green tea preparations, along with four of the individual tea catechins, were also examined and were found to have no effect on the activity of P450(2E1).

Science on TV: A Question [Bayblab]

Posted: 22 May 2008 01:09 PM CDT


Recently, I caught a TV program called "Brainiac: Science Abuse". It's a British program (or is that programme?). The episode I saw involved various segments, including what happens to toothpaste when you put it in liquid nitrogen (it freezes), what happens if you put the ingredients for a parfait in the microwave with a fluorescent lightbulb and a bowl of flammable liquid (it explodes), whether an aerobics teacher can do her job while being randomly given electric shocks (she can't), and what happens if you blow up a shed full of fireworks (they go off). It also included more scientific segments: an attempt to replicate Galileo's Leaning Tower of Pisa experiment, and an experiment intended to determine whether a person performs better when starving or when over-stuffed. (There were many other segments of both types.) In all cases, the background science wasn't well explained and the experiments were poorly designed. The show has even been accused of forging results. 'Science Abuse' is an accurate subtitle.

This is a particularly bad example, but there are other science shows that aren't exactly rigorous with the scientific method (Mythbusters, for example, is a great show, but the experimental design is sometimes lacking). I understand that these are television programs and the goal is to entertain, but it seems to me that those kinds of fixes needn't get in the way of watchability. In the feast-or-famine performance example above, adding a person who had neither over-eaten nor been starved for a day wouldn't be difficult (forgetting that it's still an n=1 experiment). Or breaking up the tasks and assessing them individually rather than as one mega-challenge of both physical and mental events.

So my questions are these:

Is it more important to have an accurate portrayal of the scientific method, or an entertaining program that attracts kids to science even if it mostly portrays it as blowing things up?

Is it impossible to make a more rigorous science program fun?

[Comic credit: xkcd]

The Probability of Winning the NBA Draft Lottery [evolgen]

Posted: 22 May 2008 12:00 PM CDT

On Tuesday night, the National Basketball Association (NBA) held their annual draft lottery. In the draft, each team is given the opportunity to select a few players who have declared themselves eligible for the draft (either after completing at least one year of college in the United States or by being from another country and over 18 years old). The order of picks in the NBA draft is determined with a goal of awarding earlier picks to the teams that performed worst the previous season. However, rather than simply giving the worst team the first pick, the second worst the second, etc., the NBA takes the teams that failed to make the playoffs and assigns them a probability of earning the first pick based on their record the previous season. They have been using a probability system based on the previous season's performance since 1990, and those probabilities can be found here.

The draft lottery was instituted to prevent the worst team from automatically earning the first pick -- which allowed the NBA to assuage any fears that a team would intentionally "tank" the end of the season to draft a young stud. From 1985-89, the NBA gave each non-playoff team an equal chance of winning the draft lottery and earning the first pick in the draft. This process began with the New York Knicks earning the first pick in 1985, which allowed them to draft Patrick Ewing, regarded by all the experts as the best available player. Many people accused the NBA of rigging the lottery so the young superstar would be paired with its marquee franchise. But the Knicks never won a championship with Ewing, and he earned a notorious reputation as a guy whose teams improved without him in the lineup (see the Ewing Theory).

From 1990-93, the non-playoff teams were ranked by their records, and the team with the worst record had 11 chances of winning the lottery, the second worst had 10 chances, the third worst had 9 chances, and so on, all the way down to the eleventh worst team, which had one chance. This ended after the 1993 draft lottery, when the Orlando Magic captured the first pick despite having the best record of all the non-playoff teams, and a mere 1.5% chance of winning the lottery. From 1994 to the present, the probability of earning the first pick has been weighted even more toward the teams with the worst records. The worst team has a one in four chance of winning the first pick, and the second worst team has a one in five chance (a full list of the current probabilities can be found here). After the first pick is awarded, the teams with the second and third picks in the draft are also determined using the lottery. The draft order of the remaining teams -- those that were not selected in the lottery -- is based on their records (the worst teams draft earlier, and the better teams later).

Because the lottery is weighted in favor of the worst teams, sports journalists and commentators are often surprised when the worst teams do not win the first three picks in the draft. Here is how the Associated Press reported it:

Only twice have teams with the worst record won the lottery since the current format began in 1994. Though the lottery is weighted to give teams with the poorest records the best chance to win, the longshots keep finding a way.

That's 15 draft lotteries, and only two times has the number one pick been awarded to the worst team. Usually, the worst team has a 25% chance of winning the lottery. However, when the Toronto Raptors and Vancouver Grizzlies (now in Memphis) joined the NBA in 1996, they were denied the right to win the first pick until 1999. In 1996 and 1997, the Grizzlies had the worst record in the NBA, so we should really only consider the 13 other seasons. But in 1998, because the Grizzlies and Raptors could not win the lottery, the worst team, the Denver Nuggets, had over a one in three chance of winning (they did not win, and ended up with the third pick in the draft).
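
As a rough back-of-the-envelope check (and not the full calculation the rest of the post works through), you can treat those 13 lotteries as independent trials in which the worst team wins with probability 0.25; the Python sketch below does exactly that, ignoring the 1998 season, where the worst team's chance was closer to one in three.

```python
from math import comb

# Probability of at most 2 wins by the worst team in 13 independent
# lotteries, each won by the worst team with probability 0.25
# (rough approximation: 1998 is treated like the other years).
p, n = 0.25, 13
prob_at_most_two = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(3))
print(round(prob_at_most_two, 3))  # ~0.33, i.e. about a one in three chance
```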

Recombination and substitution rates [Mailund on the Internet]

Posted: 22 May 2008 11:51 AM CDT

In a paper from PLoS Genetics earlier this month, Laurent Duret and Peter F. Arndt did a genome-wide analysis of the correlation between recombination rate and substitution rate (and bias).

The Impact of Recombination on Nucleotide Substitutions in the Human Genome

Duret, L., Arndt, P.F. PLoS Genetics, 4(5) 2008

Abstract

Unraveling the evolutionary forces responsible for variations of neutral substitution patterns among taxa or along genomes is a major issue for detecting selection within sequences. Mammalian genomes show large-scale regional variations of GC-content (the isochores), but the substitution processes at the origin of this structure are poorly understood. We analyzed the pattern of neutral substitutions in 1 Gb of primate non-coding regions. We show that the GC-content toward which sequences are evolving is strongly negatively correlated to the distance to telomeres and positively correlated to the rate of crossovers (R2 = 47%). This demonstrates that recombination has a major impact on substitution patterns in human, driving the evolution of GC-content. The evolution of GC-content correlates much more strongly with male than with female crossover rate, which rules out selectionist models for the evolution of isochores. This effect of recombination is most probably a consequence of the neutral process of biased gene conversion (BGC) occurring within recombination hotspots. We show that the predictions of this model fit very well with the observed substitution patterns in the human genome. This model notably explains the positive correlation between substitution rate and recombination rate. Theoretical calculations indicate that variations in population size or density in recombination hotspots can have a very strong impact on the evolution of base composition. Furthermore, recombination hotspots can create strong substitution hotspots. This molecular drive affects both coding and non-coding regions. We therefore conclude that along with mutation, selection and drift, BGC is one of the major factors driving genome evolution. Our results also shed light on variations in the rate of crossover relative to non-crossover events, along chromosomes and according to sex, and also on the conservation of hotspot density between human and chimp.

The main point of this paper is the evolution of the GC content of the human genome, which varies significantly between regions of the genome — the so-called isochore structure.

The evolution of isochores

The content of GC nucleotides varies along the genome, with some regions having very high fractions of GC and others very low, and this variation is not what we would expect the sequence to look like if the entire genome were evolving under the same neutral process.

Why the genome has this structure has been debated (at times heatedly) over the last two decades. Different explanations have been suggested, including:

  1. The mutation rate is biased and varies along the genome.
  2. Selection prefers high GC content in some regions and not in others.
  3. Gene conversion is biased, preferring to replace AT alleles with GC alleles.

where the latter is a theory developed, among others, by the authors of this new paper.

Biased mutation rates are of course a possibility, but they don’t explain the correlation with the recombination rate, unless recombination is mutagenic or causes the bias.

Selection is the explanation of Bernardi, the discoverer of the isochore structure.

Biased gene conversion is a neutral process that looks a lot like selection. The idea is as follows: there is no particular need for a bias in the mutation process — AT to GC and GC to AT mutations need not occur at different rates in GC-rich and GC-poor regions — but once a polymorphism exists, gene conversion between a GC allele and an AT allele will replace the AT allele with the GC allele more often than the other way around.

A consequence of this is that although the mutation rate might not vary along the genome, the substitution rate will, and this substitution rate will be correlated with the recombination rate.
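
To make the "looks a lot like selection" point concrete, here is a toy haploid Wright-Fisher simulation of my own (not the paper's model), in which transmission of a new GC allele is tilted by a small conversion bias; its fixation probability rises above the neutral 1/N, just as it would under weak positive selection.

```python
import numpy as np

def fixation_probability(N=1000, bias=0.001, replicates=20000, seed=1):
    """Fraction of replicates in which a single new 'GC' allele fixes in a
    haploid Wright-Fisher population of size N, when reproduction is tilted
    toward GC by `bias` (a stand-in for biased gene conversion).
    With bias=0 this is the neutral case, where the answer is ~1/N."""
    rng = np.random.default_rng(seed)
    fixed = 0
    for _ in range(replicates):
        freq = 1.0 / N                     # one new GC allele
        while 0.0 < freq < 1.0:
            # conversion bias acts like weak selection in favour of GC
            p = freq * (1 + bias) / (freq * (1 + bias) + (1 - freq))
            freq = rng.binomial(N, p) / N
        fixed += freq == 1.0
    return fixed / replicates

if __name__ == "__main__":
    print("neutral:", fixation_probability(bias=0.0))    # about 1/N = 0.001
    print("biased :", fixation_probability(bias=0.001))  # noticeably higher
```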

Eyre-Walker and Hurst (2001) give more details on the three theories above.

The case for biased gene conversion

In the PLoS Genetics paper they argue for the biased gene conversion explanation (not surprisingly), and reasonably convincingly, in my opinion, but I am not an expert…

First, they construct a model of sequence evolution that does not assume time-reversibility or that the current sequences are at stationarity (both of which are usually assumed, but might not be true).

From this model, they estimate the rates of the various types of substitutions, and they estimate the equilibrium GC content (called GC* in the paper). In the model, the equilibrium GC content can differ from the current GC content, since stationarity is not assumed, and in general GC* < GC, meaning that the GC content in our genome — especially in GC-rich areas — is decreasing. Very slowly, though.
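
As a minimal illustration of what an equilibrium GC content means (using the simple two-state approximation rather than the paper's full non-stationary, non-reversible model), GC* is just the value at which the flux of substitutions toward GC balances the flux toward AT:

```python
def equilibrium_gc(rate_at_to_gc: float, rate_gc_to_at: float) -> float:
    """Two-state approximation: at equilibrium the flux of AT->GC
    substitutions, (1 - GC*) * u, equals the flux of GC->AT substitutions,
    GC* * v, which gives GC* = u / (u + v)."""
    u, v = rate_at_to_gc, rate_gc_to_at
    return u / (u + v)

# Illustrative rates only (not estimates from the paper): if substitutions
# toward GC run at 60% of the rate of substitutions toward AT, the sequence
# is drifting toward GC* = 0.375, even if its current GC content is higher.
print(equilibrium_gc(0.6, 1.0))  # 0.375
```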

This could suggest that whatever mechanism created the GC-rich areas of our genome is either no longer in effect, or at least is weaker than it was when the GC-rich areas were created.

They then consider the correlation between recombination rates and GC / GC* and notice a significant correlation, with a stronger correlation between recombination rate and GC* than between recombination and GC.

This is taken as evidence that it is recombination that drives the direction of substitutions toward GC, rather than base pair composition that determines the recombination rate; if the recombination rate were determined by the base pair composition, then the present-day GC content should be more strongly correlated with the rate than some far-future stationary GC content.

The biased gene conversion model suggests a preference for AT to GC substitutions in regions with high recombination rates, where the strength of this preference depends on the effective population size.

The positive correlation between GC* and the recombination rate supports this, and the present-day effective population size (or the present-day recombination rate) can explain why the GC structure in the genome is eroding towards a higher AT content in the present-day GC-rich regions. The GC-rich regions of today could have appeared in an ancestor with either a larger effective population size or larger regional recombination rates, and after this reduction the effective population size in present-day humans is just not large enough for the biased gene conversion mechanism to keep the GC content at a high level.

The case against biased mutation and against selection

The biased mutation explanation is argued against based on the frequency patterns of polymorphisms. If the mutations are biased, but the resulting polymorphisms are selectively neutral, then the frequencies of GC-derived and AT-derived polymorphisms should be the same. However, GC alleles segregate at higher frequencies than AT alleles.

The first argument against selection is less convincing, I feel, but essentially says: it is hard to imagine why selection should prefer the occasional GC in Mbp-long regions with plenty of genes under selection, and even if it did, it probably wouldn’t be strong enough to drive the changes in GC content. Well…

The second argument is that selection does not explain why GC content, and especially GC*, should be correlated with the recombination rate. One possible explanation is the Hill-Robertson effect, but then the correlation should be with the population recombination rate, whereas GC* is more strongly correlated with the male recombination rate than with the female recombination rate, something Hill-Robertson does not explain.

Conclusion

I read this paper because I was reading up on the correlation between effective population size and recombination rate for a project I’m working on.  I knew about the debate about isochores — I’ve chatted with some of the biased gene conversion proponents who have visited BiRC — but I never really read up on it.

It turns out that several of my colleagues at BiRC are interested in this, so we’ve discussed the paper over the last two days, and I’ve had a lot of fun reading my way through some of the references in the paper.

I would recommend it as an introduction to the topic, though it is of course not a neutral discussion of the three theories.


Duret, L., Arndt, P.F. (2008). The Impact of Recombination on Nucleotide Substitutions in the Human Genome. PLoS Genetics, 4(5), e1000071. DOI: 10.1371/journal.pgen.1000071

Eyre-Walker, A., Hurst, L.D. (2001). The evolution of isochores. Nature Reviews Genetics, 2(7), 549-555. DOI: 10.1038/35080577

Software decay and software repositories [Mailund on the Internet]

Posted: 22 May 2008 10:29 AM CDT

bbgm suggests:

In essence this expands on the issue that I have been raising lately: that academics should use code repositories like Google Code, Sourceforge or Github. That not only moves some of the issues with code maintenance infrastructure and utilities out onto the cloud, it also brings in a bigger user base, the ability to access code more easily, etc.

Will this solve the problem of URL decay mentioned in the latest issue of Bioinformatics?

URL decay in MEDLINE — a 4-year follow-up study

Jonathan D. Wren Bioinformatics 2008 24(11):1381-1385; doi:10.1093/bioinformatics/btn127

Abstract

Motivation: Internet-based electronic resources, as given by Uniform Resource Locators (URLs), are being increasingly used in scientific publications but are also becoming inaccessible in a time-dependant manner, a phenomenon documented across disciplines. Initial reports brought attention to the problem, spawning methods of effectively preserving URL content while some journals adopted policies regarding URL publication and begun storing supplementary information on journal websites. Thus, a reexamination of URL growth and decay in the literature is merited to see if the problem has grown or been mitigated by any of these changes.

Results: After the 2003 study, three follow-up studies were conducted in 2004, 2005 and 2007. Unfortunately, no significant change was found in the rate of URL decay among any of the studies. However, only 5% of URLs cited more than twice have decayed versus 20% of URLs cited once or twice. The most common types of lost content were computer programs (43%), followed by scholarly content (38%) and databases (19%). Compared to URLs still available, no lost content type was significantly over- or underrepresented. Searching for 30 of these websites using Google, 11 (37%) were found relocated to different URLs.

Conclusions: URL decay continues unabated, but URLs published by organizations tend to be more stable. Repeated citation of URLs suggests calculation of an electronic impact factor (eIF) would be an objective, quantitative way to measure the impact of Internet-based resources on scientific research.

It certainly seems like we are losing our data and programs, so some larger repositories might be the way to go…

Statistics for Dummies [Bayblab]

Posted: 22 May 2008 09:43 AM CDT


For those of you who can't remember the difference between standard deviation and standard error of the mean, or who never bothered to learn, this paper is a good primer on when to use which, what independent replicates really means, and how to interpret the error bars on the graph presented at yesterday's seminar. It's the kind of thing we should all know but many of us don't bother with.
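
For the impatient, here is the distinction in a few lines of Python (with made-up numbers, not anything from the paper): the standard deviation describes the spread of the data, while the standard error of the mean describes the uncertainty of the estimated mean and shrinks as independent replicates are added.

```python
import math

def sd_and_sem(values):
    """Sample standard deviation (spread of the data) and standard error of
    the mean (uncertainty of the estimated mean) for a list of replicates."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))
    sem = sd / math.sqrt(n)
    return sd, sem

# Hypothetical replicate measurements: with more replicates the SD stays
# roughly the same, but the SEM shrinks like 1/sqrt(n).
print(sd_and_sem([9.8, 10.4, 10.1, 9.6, 10.3]))
```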

[h/t: juniorprof]

Google Code as a science repository [business|bytes|genes|molecules]

Posted: 22 May 2008 09:34 AM CDT

iPhylo is now available on Google Code. As Rod Page explains on his blog, inspired by Pedro, the goal is to use Google Code as a project management system, i.e. a place to store not just code but docs and data as well (why not? you have good version control).

In essence this expands on the issue that I have been raising lately: that academics should use code repositories like Google Code, Sourceforge or Github. That not only moves some of the issues with code maintenance infrastructure and utilities out onto the cloud, it also brings in a bigger user base, the ability to access code more easily, etc.

Will this be a broader trend? There are already a number of packages available on Sourceforge, but still too few and many are just thrown out there.

Don’t know about all of you, but I find moves like these significant, especially if this becomes a trend.

Nuts! [Mailund on the Internet]

Posted: 22 May 2008 03:23 AM CDT

In this letter to Nature, Raghavendra Gadagkar argues that the open access model — which typically means “pay to publish, but read for free” — is doing more harm to research in the developing world than the traditional “publish for free, but pay to read” model.

The reasoning is that having to pay to publish means that publications are not just a result of the quality of one’s research but also of one’s funding, and in developing countries there is less funding.

This is, of course, a valid point, but to conclude from this that the open access model — even if it means you have to pay to publish — is doing more harm than good is, well, just nuts!

First of all, many top journals charge you both for publishing and for reading the articles. With open access, at least, you can read for free.

Secondly, even if the publishing charge is much higher than the reading charge, you only pay when you have a result worth publishing. I don’t know about you, but I personally read a lot more papers than I publish, and most papers I read are never cited in my own work, because they turn out not to be relevant to it.

Gadagkar ends his letter with:

A ‘publish for free, read for free’ model may one day prove to be viable. Meanwhile, if I have to choose between the two evils, I prefer the ‘publish for free and pay to read’ model over the ‘pay to publish and read for free’ one. Because if I must choose between publishing or reading, I would choose to publish. Who would not?

Of course we all prefer to publish our own papers, but you cannot, and should not, publish worthwhile research if you are not familiar with the work of other researchers and have not read the literature. You cannot choose publishing over reading!

I’m not saying there isn’t a problem with publication charges, but I strongly disagree with the claim that it is worse than the charge for access to papers (and I remind you, once more, that in many cases you get both of the two evils…)

“Lexomics” - Breaking the language barrier [HENRY » genetics]

Posted: 22 May 2008 03:12 AM CDT

Emma Marris in today’s Nature reviews my field of research, and chats to a number of my friends and colleagues:

In the past five to ten years, more and more non-linguists such as Pagel have used the computational tools with which they model evolution to take a crack at languages. And one can see why. Like biological species, languages slowly change and sometimes split over time. Darwin’s Galapagos finches evolved either large beaks or small; Latin amor became French amour and Italian amore. Darwin himself noted the ‘curious parallel’ between the evolution of languages and species in The Descent of Man, and Selection in Relation to Sex.

The advent of molecular genetics provided a new depth to the analogy. Just as the four nucleotides of DNA can produce a staggering variety of creatures, the alphabets of the world’s languages can generate an infinite number of sentences. These alphabets, the words they make, and the sounds and grammar rules that frame them are passed down from parent to child in a process that, at least superficially, resembles the inheritance of DNA.

Even some complications are the same. Just as species can shade off into a maddening continuum of subspecies, populations and hybrids, languages dissolve into an untidy collection of dialects and intermediate forms. And the rampant borrowing of words between languages resembles, graphically at least, the promiscuous horizontal gene transfer that microbes engage in.

The full story is here, and I’ve written about some of the (our) research here before.

Watch Detects Alien DNA [Eye on DNA]

Posted: 22 May 2008 03:06 AM CDT

Are you aware of the dangers alien DNA can wreak? Probably not unless you’re wearing the Tokyoflash Biohazard watch.

biohazard watch 2

With the threat of Alien Invasion growing ever closer & the distinct possibility that “they” are already here, it’s about time we had a device to detect the humans from the human-oids. The Biohazard wrist scanner probes the immediate vicinity for Alien DNA & displays the results so that you may assess the threat level.

Available for just $179.29 plus free shipping.

HT: Boing Boing

RNA toxicity contributes to neurodegenerative disease, University of Pennsylvania scientists say [Think Gene]

Posted: 22 May 2008 01:40 AM CDT

Expanding on prior research performed at the University of Pennsylvania, Penn biologists have determined that faulty RNA, the blueprint that creates mutated, toxic proteins, contributes to a family of neurodegenerative disorders in humans.

Nancy Bonini, professor in the Department of Biology at Penn and an investigator of the Howard Hughes Medical Institute, and her team previously showed that the gene that codes for the ataxin-3 protein, responsible for the inherited neurodegenerative disorder Spinocerebellar ataxia type 3, or SCA3, can cause the disease in the model organism Drosophila. SCA3 is one of a class of human diseases known as polyglutamine repeat diseases, which includes Huntington’s disease. Previous studies had suggested that the disease is caused largely by the toxic polyglutamine protein encoded by the gene.

The current study, which appears in the journal Nature, demonstrates that faulty RNA, the blueprint for the toxic polyglutamine protein, also assists in the onset and progression of disease in fruit fly models.

"The challenge for many researchers is coupling the power of a simple genetic model, in this case the fruit fly, to the enormous problem of human neurodegenerative disease," Bonini said. "By recreating in the fly various human diseases, we have found that, while the mutated protein is a toxic entity, toxicity is also going on at the RNA level to contribute to the disease."

To identify potential contributors to ataxin-3 pathogenesis, Bonini and her team performed a genetic screen with the fruit fly model of ataxin-3 to find genes that could change the toxicity. The study produced one new gene that dramatically enhanced neurodegeneration. Molecular analysis showed that the gene affected was muscleblind, a gene previously implicated as a modifier of toxicity in a different class of human disease due to a toxic RNA. These results suggested the possibility that RNA toxicity may also occur in the polyglutamine disease situation.

The findings indicated that an RNA containing a long CAG repeat, which encodes the polyglutamine stretch in the toxic polyglutamine protein, may contribute to neurodegeneration beyond being the blueprint for that protein. This raised the possibility that expression of the RNA alone may be damaging.

Long CAG repeat sequences can bind together to form hairpins, dangerous molecular shapes. The researchers therefore tested the role of the RNA by altering the CAG repeat sequence to be an interrupted CAACAG repeat that could no longer form a hairpin. Such an RNA strand, however, would still be a blueprint for an identical protein. The researchers found that this altered gene caused dramatically reduced neurodegeneration, indicating that altering the RNA structure mitigated toxicity. To further implicate the RNA in the disease progression, the researchers then expressed just a toxic RNA alone, one that was unable to code for a protein at all. This also caused neuronal degeneration. These findings revealed a toxic role for the RNA in polyglutamine disease, highlighting common components between different types of human triplet repeat expansion diseases. Such diseases include not only the polyglutamine diseases but also diseases like myotonic dystrophy and fragile X.
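
The trick behind the interrupted repeat is simply that CAA and CAG are both glutamine codons, so a (CAACAG)n repeat encodes exactly the same polyglutamine stretch as a pure CAG repeat while breaking up the sequence that lets the RNA fold back into a hairpin. A toy illustration (my own sketch, not the authors' code):

```python
# CAA and CAG both encode glutamine (Q), so the interrupted repeat is a
# blueprint for an identical protein while changing the RNA sequence.
GLN_CODONS = {"CAG": "Q", "CAA": "Q"}

def translate(rna: str) -> str:
    """Translate an RNA made only of glutamine codons (toy example)."""
    return "".join(GLN_CODONS[rna[i:i + 3]] for i in range(0, len(rna), 3))

pure_repeat = "CAG" * 10       # hairpin-prone, uninterrupted repeat
interrupted = "CAACAG" * 5     # interrupted repeat used in the study

assert translate(pure_repeat) == translate(interrupted) == "Q" * 10
print(translate(interrupted))  # QQQQQQQQQQ
```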

The family of diseases called polyglutamine repeat disorders arises when the genetic code of a CAG repeat for the amino acid glutamine stutters like a broken record within the gene, becoming very long. This leads to an RNA — the blueprint for the protein — with a similarly long run of CAG. During protein synthesis, the long run of CAG repeats is translated into a long uninterrupted run of glutamine residues, forming what is known as a polyglutamine tract. The expanded polyglutamine tract causes the errant protein to fold improperly, leading to a glut of misfolded protein collecting in cells of the nervous system, much like what occurs in Alzheimer’s and Parkinson’s diseases.

Polyglutamine disorders are genetically inherited ataxias, neurodegenerative disorders marked by a gradual decay of muscle coordination, typically appearing in adulthood. They are progressive diseases, with a correlation between the number of CAG repeats within the gene, the severity of disease and age at onset.

Source: University of Pennsylvania

RNA toxicity is a component of ataxin-3 degeneration in Drosophila. Ling-Bo Li, Zhenming Yu, Xiuyin Teng & Nancy M. Bonini. Nature (2008). doi:10.1038/nature06909

Josh says:

I don’t think anyone has looked at this before or considered that RNA could have such an effect. I wonder how much this type of thing affects other diseases, both neurological and non-neurological.

Andrew says:

So because RNA is a molecule, not just an “abstract informational template” as we often think, the physical manifestation of RNA’s information can have a significant impact on the biological system — not just the information itself! In this case, the issue is the “hairpin” shape of a CAG string in RNA. So again, the metaphor “genes as code” fails. Fascinating.

USC stem cell study sheds new light on cell mechanism [Think Gene]

Posted: 22 May 2008 01:30 AM CDT

Researchers at the University of Southern California (USC) have discovered a new mechanism that allows embryonic stem cells to divide indefinitely and remain undifferentiated. The study, which will be published in the May 22 issue of the journal Nature, also reveals how embryonic stem cell multiplication is regulated, which may be important in understanding how to control tumor cell growth.

"Our study suggests that what we believe about how embryonic stem cell self-renewal is controlled is wrong," says Qi-Long Ying, Ph.D., assistant professor of Cell and Neurobiology at the Keck School of Medicine of USC, researcher at the Eli and Edythe Broad Center for Regenerative Medicine and Stem Cell Research at USC, and lead author of the paper. "Our findings will likely change the research direction of many stem cell laboratories."

Contrary to the current understanding of stem cell self-renewal and differentiation, the findings suggest that embryonic stem cells will remain undifferentiated if they are shielded from differentiation signals. By applying small molecules that block the chemicals from activating the differentiation process, the natural default of the cell is to self-renew, or multiply, as generic stem cells.

"This study presents a completely new paradigm for understanding how to grow embryonic stem cells in the laboratory," says Martin Pera, Ph.D., director of the Eli and Edythe Broad Center for Regenerative Medicine and Stem Cell Research at USC. "The discovery has major implications for large scale production of specialized cells, such as brain, heart muscle and insulin producing cells, for future therapeutic use."

Embryonic stem cells have only been derived from a very small number of species.

"We believe the process we discovered in mice may facilitate the derivation of embryonic stem cells from species like pigs, cows or other large animals, which has not been done before," continues Ying. "If deriving embryonic stem cells from cows, for instance, is possible, then perhaps in the future cows might be able to produce milk containing medicines."

With a better understanding of the multiplication process of embryonic stem cells, researchers gain additional insight into tumor cell growth, as these cells share similar qualities. "Our study reveals part of the little-known process of how embryonic stem cell multiplication is regulated. This is important for us in understanding how to control tumor cell growth moving forward in cancer research," says Ying.

Source: University of Southern California

Qi-Long Ying, Jason Wray, Jennifer Nichols, Laura Batlle-Morera, Bradley Doble, James Woodgett, Philip Cohen and Austin Smith. "The Ground State of Embryonic Stem Cell Self-Renewal," Nature (2008). Doi: 10.1038/nature06968.

Josh says:

I’m a bit skeptical of this. I haven’t really done any cell culture work, but if the stem cells are grown in a serum free media that doesn’t have any “differentiation signals”, they still differentiate. What they may be doing here is blocking transcription factors that are normally present and cause the cells to differentiate, thereby keeping them undifferentiated.

Web as platform: Why I like web services #223 [business|bytes|genes|molecules]

Posted: 21 May 2008 10:33 PM CDT

Earlier today I pointed to nmrdb.org on FriendFeed. A few minutes ago I saw a post by Antony Williams that uses the web services provided by nmrdb.org to provide NMR functionality on ChemSpider.

One of the salient features of today’s web is a philosophy of services, not sites: the ability to remix and repurpose functionality. The idea is to reuse content or functionality that people have developed on different parts of the web. I feel that in the sciences we do not, by and large, do a very good job of that, preferring local copies or our own versions of well-known algorithms. As a physicist once told me (and I paraphrase), “I am a physicist, I always implement my own version.” Recently, that approach seems to be changing, and this is an excellent example. Of course, we need more web-friendly APIs, but that’s also changing, so I remain hopeful of a distributed science web, with content and algorithms available via RESTful APIs for various mashups and distributed pipelines.
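
As a purely hypothetical sketch of the "services, not sites" idea (the endpoint and parameter names below are invented for illustration and are not nmrdb.org's or ChemSpider's actual API), consuming a RESTful service is just an HTTP call whose result any other site or pipeline can repurpose:

```python
import json
import urllib.parse
import urllib.request

# Hypothetical endpoint: a service that predicts a spectrum for a molecule.
BASE_URL = "https://example.org/api/predict"   # placeholder, not a real API

def predict_spectrum(smiles: str) -> dict:
    """Call the (hypothetical) prediction service and return its JSON reply."""
    query = urllib.parse.urlencode({"molecule": smiles, "format": "json"})
    with urllib.request.urlopen(f"{BASE_URL}?{query}") as response:
        return json.loads(response.read().decode("utf-8"))

# A mashup elsewhere on the web would simply call the service and reuse the
# result, rather than reimplementing the algorithm locally, e.g.:
# spectrum = predict_spectrum("c1ccccc1")
```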
