Wednesday, May 7, 2008

The DNA Network

Breast cancer tumors grow faster in younger women [Think Gene]

Posted: 07 May 2008 06:43 PM CDT

A new approach to estimating tumour growth, based on breast screening results from almost 400,000 women, is published today in BioMed Central's open access journal Breast Cancer Research. The new model can also estimate the proportion of breast cancers detected at screening (screen test sensitivity), providing a way to simultaneously estimate the growth rate of breast cancer and the ability of mammography screening to detect tumours.

The results of the study show that tumour growth rates vary considerably among patients, with generally slower growth rates with increasing age at diagnosis. Understanding how tumours grow is important in the planning and evaluation of screening programs, clinical trials, and epidemiological studies. However, studies of tumour growth rates in people have so far been based mainly on small and selected samples. Now, Harald Weedon-Fekjær of the Department of Etiological Research, Cancer Registry of Norway and colleagues have developed a new estimating procedure to follow tumour growth in a very large population of breast cancer patients included in the Norwegian Breast Cancer Screening Program.

The researchers applied their model to cancer incidence and tumour measurement data from 395,188 women aged between 50 and 69 years. They found that tumour growth varied considerably between subjects: about one in twenty tumours doubled in diameter, from 10 to 20 mm, in just over a month, while a similar number took more than six years to grow by the same amount. They estimated the mean time for a tumour to grow from 10 to 20 mm in diameter at 1.7 years.
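As a back-of-the-envelope illustration (mine, not the paper's), the 1.7-year mean time for the diameter to double can be converted into a volume doubling time, under the simplifying assumptions of a roughly spherical tumour and exponential volume growth:

```python
def volume_doubling_time(diameter_doubling_time_years: float) -> float:
    """Doubling the diameter of a sphere multiplies its volume by
    2**3 = 8, i.e. three volume doublings. Under exponential volume
    growth, the volume doubling time is therefore one third of the
    diameter doubling time."""
    return diameter_doubling_time_years / 3.0

# The study's mean time to grow from 10 to 20 mm in diameter:
mean_diameter_doubling_years = 1.7
print(volume_doubling_time(mean_diameter_doubling_years))  # about 0.57 years
```

This is only meant to show why "doubling in size" is ambiguous between diameter and volume; the paper's own growth model is more sophisticated than this sketch.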

"There are enormous implications for the sensitivity of breast cancer screening programs" Weedon-Fekjær explains. "We found that mammography screen test sensitivity (STS) increases sharply with increased tumour size, as one might expect. Detection rates are just 26% for a 5 mm tumour but increase to 91% once a tumour is 10 mm in size." The team compared their model with the previously used Markov model for tumour progression, and found its predictive power to be almost twice as accurate as the Markov model, in addition to providing new estimates directly linked to tumour size.

Source: BioMed Central

First analysis of platypus genome may impact disease prevention [Think Gene]

Posted: 07 May 2008 05:35 PM CDT

There's no doubt about it … the platypus is one odd duck-billed, egg-laying, lactating mammal. With adaptations like webbed feet to fit its aquatic lifestyle and the poison spurs that decorate males, the platypus represents for many a patchwork of evolutionary development. But LSU's Mark Batzer, along with an international consortium of scientists led by Wes Warren at Washington University in Saint Louis, Mo., has taken this theory to an entirely new level, proving that platypus looks aren't only skin-deep – their DNA is an equally cobbled-together array of bird, reptile and mammalian lineages.

The consortium conducted the first analysis of platypus DNA in what was the largest platypus population genetics study to date.

"Their genomic organization was strange and a little unexpected," said Batzer, Andrew C. Pereboom Alumni Departmental Professor of Biological Sciences at LSU and one of the principle investigators of the project. "It appeared much more bird- and reptile-like than mammalian, even though it is indeed classified as a mammal. It's an ancient animal, too, and it has remained relatively primitive and unchanged, both in physical appearance and genetically."

What does this discovery mean for the public? The very real potential for advances in human disease prevention and a better understanding of mammalian evolution.

"This is a huge genetic step," said Batzer. "Understanding is key. We're learning a lot about mammalian gene regulation and immune systems, which has huge implications for disease susceptibility research. We hope to, in time, identify the underlying causes and methods of disease prevention in humans."

The platypus was chosen as the subject of this study in large part due to its strange appearance, but other selection factors include the species' endangered status in its only indigenous habitat, Australia. Platypuses are extremely shy by nature and there has been little success in breeding the animals while in captivity. Researchers hope that some of the clues unearthed in platypus DNA might lead to new directions in conservation efforts.

The international effort to decode the platypus genome was an extremely large-scale undertaking, featuring contributions from dozens of prestigious researchers. Batzer and his team at LSU's Batzer Laboratories – along with Arian Smit from the Institute for Systems Biology in Seattle and David Ray from West Virginia University in Morgantown, W. Va. – developed a very specific aspect of the project – decoding mobile DNA elements, often called "jumping genes" or even "junk DNA."

"These mobile elements were once thought to be so small that they had no function," said Batzer. "But, in reality, they cause insertions and deletions, which can lead to genetic diseases in humans as well as the creation of new genes and gene families in the genome, which can lead to genetic disease in humans." Because of this, understanding the impact of mobile elements on genome structure is paramount to understanding the function of the genome. In the platypus study, Batzer's group was able to pinpoint specific instances of mobile element insertions within the species and determine the timing of each genetic event.

Batzer and various international consortia have sequenced the genomes of several species in the past, most recently the rhesus macaque and the first marsupial, a small South American opossum species.

"Each one gives us another piece of the puzzle, which brings us that much closer to answering some of the more pressing questions about gene and genome organization and evolution," Batzer said.

Source: Louisiana State University

Undergrad has sweet success with invention of artificial Golgi [Think Gene]

Posted: 07 May 2008 05:34 PM CDT

An undergraduate student at Rensselaer Polytechnic Institute has learned very quickly that a spoonful of sugar really does help the medicine go down. In fact, with his invention, the sugar may actually be the medicine.

Among the most important and complex molecules in the human body, sugars control not just metabolism but also how cells communicate with one another. Graduating senior Jeffery Martin has put his basic knowledge of sugars to exceptional use by creating a lab-on-a-chip device that builds complex, highly specialized sugar molecules, mimicking one of the most important cellular structures in the human body — the Golgi Apparatus.

"Almost completely independently he has been able to come closer than researchers with decades more experience to creating an artificial Golgi," said Robert Linhardt, the Ann and John H. Broadbent Jr. '59 Senior Constellation Professor of Biocatalysis and Metabolic Engineering at Rensselaer and Martin's adviser. "He saw a problem in the drug discovery process and almost instantly devised a way to solve it."

Cells build sugars in a cellular organelle known as the Golgi Apparatus. Under a microscope, the Golgi looks similar to a stack of pancakes. The strange-looking organelle finishes the process of protein synthesis by decorating the proteins with highly specialized arrangements of sugars. The final sugar-coated molecule is then sent out into the cell to aid in cell communication and to help determine the cell's function in the body.

Martin's artificial Golgi functions in a surprisingly similar way to the natural Golgi, but gives the ancient organelle a high-tech makeover. His chip looks like a miniature checkerboard on which sugars, enzymes, and other basic cell materials are suspended in water and can be transported and mixed by applying electric currents to destination squares. In this way sugars can be built in an automated fashion, exposed step by step to a variety of enzymes found in the natural Golgi. The resulting sugars can then be tested on living cells, either on the chip or in the lab, to determine their effects. With the chip's ability to process many combinations of sugars and enzymes, it could help researchers quickly uncover new sugar-based drugs, according to Martin.

Scientists have known for years that certain sugars can serve as extremely beneficial therapeutics for humans. One well-known example is heparin, which is among the most widely used drugs in the world. Heparin is formed naturally in the Golgi organelle in cells of the human body as well as in other animals like pigs. Heparin acts as an anticoagulant preventing blood clots, which makes it a good therapeutic for heart, stroke, and dialysis patients.

The main source of heparin is currently the intestines of foreign livestock and, as recent news reports highlight, the risk of contamination from such sources is high. So researchers are working around the clock to develop a safer, man-made alternative to the drug. A synthetic alternative would build the sugar from scratch, helping eliminate the possibility of outside contamination, he explained.

"I am very grateful to have the privilege of working with Dr. Linhardt who has discovered the recipe to make fully synthetic heparin," Martin said. "Because we know the recipe, I am going to use it as a model to test the device. If our artificial Golgi can build fully functional heparin, we can then use the artificial organelle to produce many different sugar variants by altering the combination of enzymes used to synthesize them. Another great thing about these devices is that they are of microscale size, so that if needed we could fill an entire room with them to increase throughput for drug discovery."

There are millions of possible sugar combinations, and scientists currently know the function of only a very few, such as heparin. "Since it is known that these types of sugars play a part in many important biological processes such as cell growth, cell differentiation, blood coagulation, and viral defense mechanisms, we feel that this artificial Golgi will help our team to develop a next generation of sugar-based drugs, known as glycotherapeutics," Martin said. "We are going to start making new combinations and we simply don't know what we are going to find. We could find a sugar whose signal blocks the spread of cancer cells or initiates the differentiation of stem cells. We just don't know."

Source: Rensselaer Polytechnic Institute

Prions show their good side [Think Gene]

Posted: 07 May 2008 05:23 PM CDT

Prions, the infamous agents behind mad cow disease and its human variation, Creutzfeldt-Jakob Disease, also have a helpful side. According to new findings from Gerald Zamponi and colleagues, normally functioning prions prevent neurons from working themselves to death. The findings appear in the May 5th issue of the Journal of Cell Biology.

Diseases such as mad cow result when the prion protein adopts an abnormal conformation. This infectious form creates a template that induces normal copies of the protein to misfold as well. Scientists have long assumed that prions must also have a beneficial side but have been unable to pinpoint any such favorable traits.

In the new work, the authors found that mice lacking the prion protein had overactive brain cells. Their neurons responded longer and more vigorously to electrical or drug-induced stimulation than did neurons that had normal prion protein. This hyperactivity eventually led to the neurons' death. The results might help explain why misfolded prions cause dementia: in the wrong conformation, the prion can no longer protect brain cells from deadly overexcitement.

Source: Rockefeller University

MicroRNAs appear essential for retinal health [Think Gene]

Posted: 07 May 2008 05:23 PM CDT

Retinas in newborn mice appear perfectly fine without any help from tiny bits of genetic material called microRNAs except for one thing — the retinas do not work.

In the first-ever study of the effects of the absence of microRNAs in the mammalian eye, an international team of researchers directed by the University of Florida and the Italian National Research Council describes a gradual structural decline in retinas that lack microRNAs — a sharp contrast to the immediate devastation that occurs in limbs, lungs and other tissues that develop without microRNAs.

The discovery, reported in today's (May 7) issue of the Journal of Neuroscience, may lead to new understanding of some blinding diseases and further penetrates the cryptic nature of microRNAs — important gene regulators that a decade ago were considered to be little more than scraps floating around the cell's working genetic machinery.

"MicroRNAs are behaving differently in the nervous system than they are in other bodily tissues," said Brian Harfe, Ph.D., an assistant professor of molecular genetics and microbiology at the University of Florida College of Medicine. "Judging by our previous studies in limb development, I was expecting to see lots of immediate cell death in the retina. I was not expecting a normal-looking retina in terms of its form. It would be something like finding a perfectly formed arm at birth that just did not work."

Production of microRNAs is dependent on Dicer, an enzyme widely used by living things to kick-start the process of silencing unwanted genetic messages. By breeding mice that lack one or both of the forms — or alleles — of the gene that produces Dicer in the retina, scientists were able to observe retinal development when Dicer levels were half of normal or completely eliminated.

Electrical activity in retinas devoid of Dicer was abnormally low at the time of eye opening and became progressively worse at 1-, 3- and 5-month stages. Structurally, the retinas initially appeared normal, but the cells progressively became disorganized, followed by widespread degeneration.

Retinas in animals equipped with a single form of the Dicer gene never underwent the inexorable structural decline that occurs in total absence of Dicer, but they also never functioned normally, according to electroretinograms.

"We have removed Dicer from about 30 different tissues," said Harfe, a member of the UF Genetics Institute. "In all of those cases with half the amount of Dicer, you still had a normal animal. In the retina, there were functional abnormalities. This is the first indication that the dose of Dicer is important for normal retinal health."

Inherited forms of retinal degeneration affect about 100,000 people in the United States, according to the National Eye Institute. The problems typically occur with the destruction of photoreceptor cells called rods and cones in the back of the eye. More than 140 genes have been linked to these diseases, yet these genes account for only a fraction of the cases.

"We have many types of retinal degeneration and not enough mutations to explain them," said Enrica Strettoi, a senior researcher at the Institute of Neurosciences of the Italian National Research Council in Pisa, Italy. "Finding that ablation of Dicer causes retinal degeneration might be helpful in discovering candidate disease genes. What we've done is target virtually all microRNAs in the retina by ablating Dicer, the core enzyme regulating their synthesis. The next step is to try to address each one separately, and find the role of specific microRNAs. Removal of Dicer from other areas of the central nervous system has also produced functional and structural abnormalities, confirming the fundamental role of this enzyme in neurons."

More than 400 microRNAs have been identified in both mice and humans, and each one has the potential to regulate hundreds of target genes. They have also been linked to human diseases such as diabetes, hepatitis C, leukemia, lymphoma, Kaposi's sarcoma and breast cancer.

"This interesting study, together with recent findings reported from three other labs in the United States, provide strong evidence that the microRNA pathway is involved in the health and sickness of many parts of the mammalian nervous system," said Fen-Biao Gao, an investigator at the Gladstone Institute of Neurological Disease at the University of California-San Francisco, who did not participate in the research. "Additional in-depth studies in the future will likely help develop new therapeutic approaches for many neurodegenerative diseases."

Source: University of Florida

Immune system pathway identified to fight allergens, asthma [Think Gene]

Posted: 07 May 2008 05:22 PM CDT

For the first time, researchers from the University of Pittsburgh School of Medicine have identified genetic components of dendritic cells that are key to asthma and allergy-related immune response malfunction. Targeting these elements could result in more effective drugs to treat allergic disorders and asthma, according to a study reported in the May edition of the journal Nature Medicine.

Dendritic cells are vital to immune response in that they recognize, capture and introduce threatening organisms to T lymphocytes - other immune cells that secrete potent proteins called cytokines that surround and destroy the invaders. However, the Pittsburgh team's study goes further to illuminate a pathway that allergens use to act directly on dendritic cells to propel differentiation into the T lymphocytes that fight back.

"We now have identified a molecule, c-Kit, that is central to the process of allergic response," said Anuradha Ray, Ph.D., co-corresponding author and professor of medicine and immunology in the Division of Pulmonary, Allergy and Critical Care Medicine, University of Pittsburgh School of Medicine. "We show that genes encoding for c-Kit and the cytokine interleukin 6 (IL-6) are significantly activated when allergens are present, but c-Kit is the very first molecule that gets triggered."

Interactions between viruses and bacteria and the molecular steps that initiate the immune defense have remained largely unknown. Using cells cultured from c-Kit mutant mice, Dr. Ray, her husband and co-corresponding author Prabir Ray, Ph.D., and their colleagues studied molecular reactions to assaults by cholera toxin and a standard allergen, house dust mites. In addition to c-Kit and IL-6, they found effects on stem cell factor and Jagged-2, immune system molecules that are part of the activation process.

"We have known the T-cell side of the story for many years, and we know that dendritic cells are important, but what we did not know was how the dendritic cell does what it does," said Dr. Prabir Ray. "Therapy directed against c-Kit specifically on dendritic cells using compounds coupled to c-Kit inhibitors such as Gleevec, a drug that is already FDA-approved and used in cancer treatment, may alleviate allergic diseases and, potentially, inflammatory bowel disease."

The Pittsburgh team incubated dendritic cells with cholera toxin and house dust mite allergens, finding that both substances induced significant upregulation of c-Kit and secretion of IL-6, initial steps in a cascade resulting in the activation of T helper cells.

"Dual upregulation of c-Kit and stem cell factor has been noted in some cancers, such as small cell lung cancer. IL-6 has been associated with cancers such as multiple myeloma," said Dr. Anuradha Ray. "Collectively, similar approaches to inhibit c-Kit, in addition to Gleevec or other inhibiting compounds could alleviate multiple cancers."

Source: University of Pittsburgh Schools of the Health Sciences

Molecular espionage shows a single HIV enzyme’s many tasks [Think Gene]

Posted: 07 May 2008 05:21 PM CDT

Using ingenious molecular espionage, scientists have found how a single key enzyme, seemingly the Swiss army knife in HIV’s toolbox, differentiates and dynamically binds both DNA and RNA as part of the virus’ fierce attack on host cells. The work is described this week in the journal Nature.

The enzyme, reverse transcriptase (RT), is already the target of two of the three major classes of existing anti-HIV drugs. The new work, using single-molecule fluorescent imaging to trace RT’s activity in real time, not only reveals novel insights into how this critical viral enzyme functions, but also clarifies how some of the anti-HIV pharmaceuticals work.

The research team, at Harvard University and the National Cancer Institute, was led by Xiaowei Zhuang at Harvard and Stuart Le Grice at NCI. Elio A. Abbondanzieri at Harvard and Gregory Bokinsky, formerly at Harvard and now at the Lawrence Berkeley National Laboratory, are lead authors.

“Our experiments allowed us, for the first time, a peek at how individual RT molecules interact with the HIV genome,” says Zhuang, professor of chemistry and chemical biology and of physics in Harvard’s Faculty of Arts and Sciences, as well as an investigator with the Howard Hughes Medical Institute. “We found that RT binds RNA and DNA primers with opposite orientations and that RT’s function is dictated by this binding orientation.”

HIV begins its assault by injecting its single-stranded RNA into a host cell. Three subsequent steps are all mediated by RT: The viral RNA is converted into single-stranded DNA, the single-stranded DNA is replicated into double-stranded DNA, and the original viral RNA is degraded. Another enzyme mediates the final step of the genome conversion, where the viral double-stranded DNA is inserted into the host’s DNA, allowing it to take advantage of the host’s genetic machinery to replicate and propagate itself.

Using their molecular probe to spy on this process, Abbondanzieri and colleagues traced RT’s multitasking skill to its dynamic active sites, which allow it to bind and process RNA as well as single- or double-stranded DNA.

“Remarkably, RT can spontaneously flip between these two opposite orientations on DNA and RNA to facilitate two distinct catalytic activities,” says Abbondanzieri, a postdoctoral researcher in Harvard’s Department of Chemistry and Chemical Biology. “These flipping motions, which have never before been seen in a protein-nucleic acid complex, can be likened to a nanoscale version of a gymnastics routine on a pommel horse.”

The 180-degree flipping of RT is regulated by nonnucleoside RT inhibitors (NNRTIs), a major class of anti-HIV drugs. Abbondanzieri and coworkers observed NNRTIs inhibiting HIV activity by accelerating RT’s flipping between its two active sites, hindering the enzyme’s ability to convert single-stranded DNA to double-stranded DNA.

Source: Harvard University

On Codon Usage [Bayblab]

Posted: 07 May 2008 04:53 PM CDT

Each amino acid in a protein sequence is represented by a 3-letter 'word' (codon) in the genetic code. Since there are 4 'letters' (A,C,G,T), there are 64 potential words to represent 20 amino acids, plus stop codons. The code is unambiguous - each codon represents only a single amino acid. It also has redundancies - most amino acids are represented by multiple codons (glycine, for example, can be represented 4 different ways). One might think that the diversity of life on the planet would come with comparable diversity in the genetic code. This is not the case. There are differences in codon preference, both within and across species, but the meaning of each codon is almost universal. For example, in humans the triplet ATC (20.8 codons/1000 codons) is preferred over the triplet ATA (7.5 codons/1000 codons), and in the yeast S. cerevisiae ATT is preferred to both of those (30.1 codons/1000 codons). However, in each of those cases - and in virtually every other species - all three of those triplets code for isoleucine. Codon preference is related to the abundance of the corresponding transfer RNA. (Larry Moran touches upon codon bias and why mutations that change the codon but not the amino acid may not be neutral in an article here.)

There is experimental evidence for a universal genetic code:
mRNAs can be correctly translated by the protein synthesizing machinery of very different species. For example, human hemoglobin mRNA is correctly translated by a wheat-germ extract [...] bacteria efficiently express recombinant DNA molecules encoding human proteins such as insulin.
(Stryer, L. Biochemistry 3rd Ed. p 108)
A universal code is the basis of many techniques (and headaches) in the lab. For example, in vitro protein synthesis can involve rabbit reticulocyte lysates (or wheat germ, as above) translating non-rabbit proteins. Non-mouse sequences can be used to introduce genes into mice. E. coli is often used for recombinant protein production. In this latter case, the difference in codon preference between E. coli and other species is a common problem for high-level recombinant expression (e.g. if a codon is preferred in humans - CCC for proline - but not in E. coli, the limiting tRNA could hinder protein production).
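As an illustration of the rare-codon headache (a sketch, not a real expression-optimization tool; the rare-codon shortlist below is a commonly cited but non-exhaustive set, and any real tool would use full frequency tables), one could scan a coding sequence for codons that E. coli uses infrequently:

```python
# Codons commonly listed as rare in E. coli (illustrative shortlist only).
RARE_IN_ECOLI = {"AGG", "AGA", "CGA", "CTA", "ATA", "CCC", "GGA"}

def rare_codons(cds: str) -> list:
    """Return (position, codon) pairs for codons rare in E. coli.
    Assumes cds is an in-frame coding sequence (length divisible by 3)."""
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    return [(i, c) for i, c in enumerate(codons) if c in RARE_IN_ECOLI]

# A human-preferred proline codon (CCC) gets flagged at position 1:
print(rare_codons("ATGCCCAAA"))  # [(1, 'CCC')]
```

A sequence peppered with such codons is the kind of candidate one might recode ("codon-optimize") or express in a strain supplying extra rare tRNAs.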

That the genetic code is universal is not entirely true; some inter-species differences are being discovered. Some species, such as ciliated protozoa, have slight variations (in ciliates, TAA and TAG encode glutamine rather than acting as stop codons). Mitochondria are another important exception.

Mitochondria carry their own circular DNA, which encodes, among other things, a set of 22 tRNAs. Because mitochondria don't use the set of nuclear-encoded tRNAs, they aren't restricted to the standard code. In fact, human mitochondrial codon use differs from nuclear codon use in 4 places. For example, in the isoleucine example above, the codon AUA codes for methionine in mitochondria (see table, reproduced from Stryer). This isn't news, but it is something I failed to appreciate before. A difference in codon usage between species might not be surprising (in fact the consistency in usage among species is surprising - until you consider the far-reaching effects a change in codon use would have: every protein would be affected). A difference in usage within a single cell is more striking, unless you're familiar with endosymbiotic theory.
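The four differences between the standard nuclear code and the vertebrate mitochondrial code can be laid out in a small lookup table (a sketch covering only the differing codons, written in DNA letters; all other codons are shared between the two codes):

```python
# Only the codons whose meaning differs between the standard code
# and the vertebrate mitochondrial code; everything else is identical.
STANDARD = {"ATA": "Ile", "TGA": "Stop", "AGA": "Arg", "AGG": "Arg"}
MITOCHONDRIAL = {"ATA": "Met", "TGA": "Trp", "AGA": "Stop", "AGG": "Stop"}

for codon in sorted(STANDARD):
    print(f"{codon}: nuclear={STANDARD[codon]}, mito={MITOCHONDRIAL[codon]}")
```

So the same physical triplet is read two different ways inside one cell, depending on whether nuclear or mitochondrial machinery translates it.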

Endosymbiotic theory, popularized by Lynn Margulis, describes the origins of eukaryotic organelles: mitochondria and chloroplasts. These organelles were once autonomous organisms that were taken up by other cells in a symbiotic relationship. Both organelles bear strong resemblances to the proposed parent prokaryotes, as detailed in the above link. Codon use separate from nuclear DNA can be added to that list.

Read more about the different codon usage sets here.
Codon preference numbers from here.

Intelligent Design: Coming To A State Legislature Near You [adaptivecomplexity's column]

Posted: 07 May 2008 03:28 PM CDT

Would you recognize a legislative push for Creationism if you saw one? After decades of failed legal strategies to overtly ban evolution or mandate equal time for Creationism in public schools, the latest tack used by the opponents of evolution is to push 'academic freedom' bills that encourage school teachers to include supposed evidence against evolution, or to present 'both sides' of a controversial issue in science class. If you support the integrity of science education, you should oppose bills like this, both because they are redundant when it comes to good science (teachers already can teach both sides of a legitimate scientific debate), and because the Creationist legislators pushing them are up to no good. But are we reaching a point where Creationism is defining itself out of existence? Are they creating a legal loophole too small for their anti-evolutionary propaganda to fit through?

read more

If you could use next-gen sequencing technology to answer any question.... []

Posted: 07 May 2008 03:22 PM CDT

...what small study would you do? This is intended as a bit of fantasy, but also a serious question, as I've often wondered: if you could run one small set of focused samples, what would they be and what platform would you use to answer the question? Cost/labor/your ability to analyze the data is not to be considered...we're talking pure scientific curiosity. Let's limit it to say one run on a GA/SOLiD, or 2-3 runs on 454, or a combination of short/long reads. But no more...

Read more and join the community...

Platypus Genome Complete [Bayblab]

Posted: 07 May 2008 02:00 PM CDT

We've talked before about the platypus and all its strangeness. The platypus genome has just been sequenced, revealing some things we already suspected: the platypus shares features with birds, reptiles and mammals. Follow the link for the Nature News story, podcast and video interview with the authors behind the project.

Biobanks: Consent or Re-Consent? [PredictER Blog]

Posted: 07 May 2008 01:35 PM CDT

Participants donating samples to research biobanks are, often, contributing to a research resource that may be used for unanticipated research purposes in the future. For example, a participant may donate to a cardiovascular research biobank, but this donation might also be of value for future diabetes research. If a secondary use for this donation is discovered, should researchers be required to re-contact participants to secure consent for the previously unspecified research? Many researchers consider the labor of re-contact and re-consent to be a burden that will inhibit future research. Acquiring one-time, general consent for research, therefore, would seem to be the best and most efficient way to encourage the pace of medical research.

Many ethicists, and the World Health Organization (WHO), however, argue that one-time consent violates a research participant's autonomy. If a participant does not have the opportunity to evaluate these possible future uses and to decide whether their sample and information can be used for them, is the importance of informed consent being undermined?

Wednesday, at a noon seminar hosted by the IU School of Medicine's Department of Medical and Molecular Genetics, PredictER's Peter Schwartz critically evaluated some of the most prominent ethical arguments against one-time consent and described the complexity of deciding the role of autonomy in this realm. In his presentation, "Changing the Rules? Consent and Re-Consent in Predictive Health Research", Schwartz argued that it is not clear that a carefully constructed policy of one-time consent violates autonomy of subjects. While it would be inappropriate to justify such "one-time consent" simply on the basis of the social value of the research involved, or public support for such a policy, a careful reconsideration of autonomy may allow certain kinds of "blanket consent" policies. In his assessment, the path forward for consent for research involving biobanks is far from clear, but a possibility like one-time consent cannot be dismissed simply by appealing to a simple notion of autonomy. The crafting of responsible policies in this area will require more careful reflection on the relevant ethical notions.

Additional Reading:

45 CFR §46.116 - General requirements for informed consent. Department of Health and Human Services.
Caulfield T, Upshur RE, Daar A. DNA databanks and consent: a suggested policy option involving an authorization model. BMC Med Ethics. 2003 Jan 3;4:E1. PMID: 12513704

Genetic databases. Assessing the benefits and the impact on human and patient rights. World Health Organization, 2003. [PDF]

Disclosing Risk: Good Communication or "Doctor-Knows-Best"? [PredictER Blog]

Posted: 07 May 2008 01:32 PM CDT

A newly published paper from PredictER's Peter H. Schwartz and Eric M. Meslin, examines the challenges of balancing beneficence and the respect for autonomy in preventive and predictive medicine. In "The ethics of information: absolute risk reduction and patient understanding of screening" (J Gen Intern Med. 2008 Apr 18; [Epub ahead of print] | PMID: 18421509) the authors question whether providing absolute probabilities of risk based, for example, on genetic screening for breast cancer, is always in the best interest of the patient's health. While many argue the respect for the patient's autonomy demands that risk is communicated numerically or graphically, Schwartz and Meslin argue that the disclosures should be made "in the light of careful consideration of patient understanding and possible impacts on uptake and well-being".

Harvard Law faculty votes for ‘open access’ to scholarly articles [business|bytes|genes|molecules]

Posted: 07 May 2008 01:19 PM CDT

From an email I received earlier today. I would normally not pay this much attention, but this is the Berkman Center, and Open Access is always a good thing:

Good afternoon,

The Berkman Center for Internet & Society is pleased to announce that the faculty of Harvard Law School has unanimously approved a motion for open access: articles will be made freely available in an online repository. With the success of this motion, Harvard Law becomes the first law school to make an institutional commitment to open access to its faculty’s scholarly publications.

In February, Harvard University’s Faculty of Arts and Sciences unanimously passed an open access motion spearheaded by computer science professor and Berkman faculty co-director Stuart Shieber. Professor Shieber’s work and leadership, along with that of Harvard library director Robert Darnton, paved the way for Berkman faculty director William Fisher and executive director John Palfrey to bring an open access proposal to Harvard Law School.

The Berkman community is tremendously proud and excited about the success of these important initiatives.

The full release from Harvard Law School can be found online at
Contact: Harvard Law School Office of Communications

The Berkman Center’s announcement, including a link to the full text of the open access motion, can be found online at



Lethal Injection [Bayblab]

Posted: 07 May 2008 12:59 PM CDT

I have had to euthanize mice for experiments before. The animal care people here are very adamant that the animals experience as little discomfort as possible; as they should, it's their job. One acceptable way to euthanize a mouse is an intraperitoneal injection of Euthasol, a barbiturate, a class of CNS depressants commonly used as anesthetics.
If this is the most humane way to euthanize, then lethal injection, as practiced in jurisdictions that execute convicted criminals as part of capital punishment, should presumably work the same way. That is, assuming the point of lethal injection is to be more humane than previous methods of execution. I was surprised to learn that lethal injection is not so simple.
Lethal injection in the United States consists of three sequentially administered drugs: sodium thiopental, pancuronium bromide, and potassium chloride, given intravenously in that order. This is similar to the procedure described on Wikipedia for physician-assisted suicide.
Sodium thiopental, also used as a 'truth serum' because it induces relaxation and release from social inhibitions, is a short-acting barbiturate that, at the doses used for lethal injection, rapidly induces a coma. It inhibits CNS activity by activating GABA-A receptors. Alcohol acts on these same receptors; barbiturates, however, are no longer much used recreationally, having been largely replaced by the apparently safer (and more fun) benzodiazepines, or 'benzos'. Relevant to lethal injection is that sodium thiopental has no analgesic effect, i.e., it is not a painkiller.
Then comes the pancuronium bromide. This blocks acetylcholine at neuromuscular junctions, paralyzing the convicted person, including the breathing muscles. Hopefully that thiopental coma is going well, or it would feel like drowning.
Then, just to ensure death, potassium chloride. How can such a simple salt of two ions that are very common in your body be deadly? The reason is that a large, high-dose IV bolus disrupts the electrochemical gradient needed for proper cardiac muscle function. The heart stops.
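The effect of that bolus on the cardiac membrane potential can be sketched with the Nernst equation, E = (RT/zF) ln([K+]out/[K+]in). The concentrations below are illustrative textbook-style values, not clinical data, but they show how raising extracellular potassium depolarizes the membrane:

```python
from math import log

def nernst_mV(c_out, c_in, temp_c=37.0, z=1):
    """Nernst equilibrium potential, E = (RT/zF) * ln(c_out/c_in), in mV."""
    R, F = 8.314, 96485.0            # gas constant J/(mol*K), Faraday constant C/mol
    T = temp_c + 273.15              # body temperature in kelvin
    return 1000.0 * R * T / (z * F) * log(c_out / c_in)

# Illustrative potassium concentrations in mM
e_rest = nernst_mV(4.5, 140.0)       # normal extracellular K+: strongly negative
e_bolus = nernst_mV(40.0, 140.0)     # grossly elevated K+: depolarized
```

With these numbers the potassium equilibrium potential shifts from roughly -92 mV toward about -33 mV, so excitable cardiac cells can no longer repolarize properly.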
I wonder how much this combination is about ensuring death and making it less traumatic for the witnesses, rather than about being humane. A good way to go might be an overdose of an opiate or something similar, but seeing a convict get high and say strange things might be too much for observers.

Pepsi selling Food containing Genetically Modified Products [Microarray and bioinformatics]

Posted: 07 May 2008 12:51 PM CDT

Tests commissioned by Greenpeace have shown that PepsiCo's Doritos corn chips contain ingredients from the genetically modified corn varieties MON 863 and NK 603.

Both MON 863 and NK 603 are Monsanto genetically modified corn varieties. MON 863 carries a bacterial gene conferring pest resistance, while NK 603 carries a bacterial gene for herbicide tolerance. An independent analysis last year, by the Committee for Independent Research and Information on Genetic Engineering and the French National Committee for Risk Assessment of GMOs, concluded that both MON 863 and NK 603 could pose serious health risks.

The finding comes from India, where Greenpeace had an independent laboratory in Germany test products picked up at random from a supermarket in New Delhi.

Under existing Indian law, this practice is illegal. Every importer is required to label products containing any GM content and to get prior approval from the Genetic Engineering Approval Committee, which falls under the environment ministry.

Surprisingly, in 2007 the United Arab Emirates confirmed that 40% of the food in the UAE is genetically modified, yet it is sold to end users without proper labelling.

In Europe, by contrast, any item containing more than 0.9 per cent GM content is required to carry a label.

There is growing concern among many developing countries that import products from the US, where Monsanto dominates the food chain with its GM seeds. More controversy is likely, as Monsanto has aggressive plans in milk production. It already sells a version of bovine somatotropin, a natural protein produced in the pituitary glands of all cattle that helps adult cows produce milk; Monsanto's version is a leading dairy animal health product in the United States and many other countries.

Navigenics Enters Personal Genomics Game ... Meanwhile: "What's a SNP?" [PredictER Blog]

Posted: 07 May 2008 12:27 PM CDT

On April 8th, Navigenics announced it will provide genomic testing services to the general public, creating additional competition among genetic health startups such as deCODEme and 23andMe. These businesses are drawing attention by allowing ordinary people to see their genetic makeup and by providing services to help them understand their risk for common conditions.

For an initial fee of $2500, Navigenics' personalized medicine package includes genotyping for 18 listed medical conditions such as Alzheimer's disease, glaucoma, colon cancer, lupus, breast cancer, prostate cancer, and Crohn's disease. Saliva, instead of blood, is collected for the genome scan as a less invasive and less hazardous approach. Within three weeks, Navigenics promises to deliver your risk assessment report electronically and provides genetic counseling over the telephone to educate customers on their genetic predispositions and to encourage them to take preventive measures.

The personal genomics industry is growing and potential consumers have choices. For example, 23andMe lets customers see their entire genetic profile of more than 500,000 single nucleotide polymorphisms (SNPs) while Navigenics limits customers to 18 selected conditions, even though it uses a 1 million SNP chip. On the other hand, Navigenics promises the customer access to future technology for an annual fee of $250. Customers' spit samples are frozen, stored, and re-tested as new associations with SNPs are found.

Hoping to set industry standards, Navigenics proposed 10 criteria for performance, quality, and service for personal genomic services:

1. Validity
2. Accuracy and quality
3. Clinical relevance
4. Actionability
5. Access to genetic counseling
6. Security and Privacy
7. Ownership of genetic information
8. Physician education and engagement
9. Transparency
10. Measurement

With the evolution of personalized medicine and genetic profiling, consumers have more information in their hands. New research initiatives are underway to understand how consumers act upon this information (e.g., ignoring health risks or needlessly worrying about slight risks). Navigenics plans to support future health outcome studies and has recently joined forces with the Mayo Clinic to measure the impact genetic information has on behavior.

It will be interesting to see whether The Personalized Medicine Coalition adopts or modifies Navigenics standards. Also interesting will be the response from the medical community to risk assessment reports generated by personal genomic businesses such as Navigenics, 23andMe, and deCODEme.

What could be better than knowing your own DNA? This genomic revolution sounds almost too good to be true. Dr. Eric Topol, a cardiologist at the Scripps Clinic (ironically, a collaborator with Navigenics), shared his reservations in a December 2007 editorial for The Wall Street Journal. Topol suggests it is too soon to tell whether having your genome scanned can be good for your health, because so many genes associated with disease risk remain unidentified. He also wonders, as do I, how personal genomics will impact the medical community. His example: "When a consumer arrives in his or her doctor's office to get help in interpreting the genomic data, the doctor is likely to respond: What's a SNP?" – Katie Carr

[Katie Carr is a graduate student in public health at Indiana University-Purdue University, Indianapolis (IUPUI). In addition to taking classes in bioethics at the IU Center for Bioethics, Katie is working with us to develop an ethical plan for pandemic influenza response.]

One Year of PredictER Blog [PredictER Blog]

Posted: 07 May 2008 12:26 PM CDT

PredictER Blog turns one today! To mark the date, here is a list of the 10 most popular posts:

1. HIPSA: The Health Information Privacy and Security Act of 2007 - Thursday, July 19, 2007

2. Texas: Your Boss, Your Medical Records and the Information Economy - Saturday, March 22, 2008

3. Get Your Genetic Test Results Online and Who Needs a Physician? – Katherine Drabiak, Thursday, November 15, 2007

4. Predictive Health Legislative Update: GINA, HIPSA and more ... - Monday, December 17, 2007

5. Smith-Lemli-Opitz Syndrome and a Florida "Wrongful Birth" Case - Wednesday, July 25, 2007

6. Biomedical Research Ethics 2.0: MySpace and Pediatrics - Tuesday, January 8, 2008

7. Minnesota and Genetic Privacy: Why the Rule of Law is Good for Research – Katherine Drabiak, Wednesday, September 26, 2007

8. Navigenics Enters Personal Genomics Game ... Meanwhile: "What's a SNP?" – Katie Carr, Wednesday, April 16, 2008

9. Health Risk Assessments in the Workplace: Clarian Health, Indianapolis. - Sunday, August 19, 2007

10. Newborn Screening: An Update on Minnesota – Katherine Drabiak, Friday, November 16, 2007

Combining alignment inference and phylogenetic footprinting [Mailund on the Internet]

Posted: 07 May 2008 11:01 AM CDT

Since constructing an alignment is really a statistical inference problem, we shouldn't pretend that we can infer it flawlessly and then completely ignore the uncertainty in the inference, especially since doing so can bias the downstream inference. See my previous posts on the subject for an idea of the problems this causes.

At the very least, we should take into account the uncertainty associated with alignments. In many cases, however, we are not really interested in the alignment in the first place. It is a nuisance parameter in our model, necessary for extracting whatever information we are really interested in.

For example, if we want to annotate a sequence and want to use related sequences for this annotation, we are not really interested in alignments but in the annotation. It is just that we need alignments to get the annotation.

Or do we? Couldn’t we just integrate / sum over all alignments, with some suitable prior, and get rid of the nuisance parameter that way?

At least in theory you can, and in many cases you can in practice as well. I have a paper under review doing exactly this myself, but some colleagues in Oxford beat me to it (despite starting their project long after we had the results for our paper; we have been a bit slow on this one):

Combining statistical alignment and phylogenetic footprinting to detect regulatory elements
Rahul Satija, Lior Pachter and Jotun Hein

Bioinformatics 2008 24(10):1236-1242; doi:10.1093/bioinformatics/btn104


Motivation: Traditional alignment-based phylogenetic footprinting approaches make predictions on the basis of a single assumed alignment. The predictions are therefore highly sensitive to alignment errors or regions of alignment uncertainty. Alternatively, statistical alignment methods provide a framework for performing phylogenetic analyses by examining a distribution of alignments.

Results: We developed a novel algorithm for predicting functional elements by combining statistical alignment and phylogenetic footprinting (SAPF). SAPF simultaneously performs both alignment and annotation by combining phylogenetic footprinting techniques with a hidden Markov model (HMM) transducer-based multiple alignment model, and can analyze sequence data from multiple sequences. We assessed SAPF's predictive performance on two simulated datasets and three well-annotated cis-regulatory modules from newly sequenced Drosophila genomes. The results demonstrate that removing the traditional dependence on a single alignment can significantly augment the predictive performance, especially when there is uncertainty in the alignment of functional regions.

The idea, quite simply, is this: you can express the alignment problem as a hidden Markov model (HMM), you can express the footprinting problem as an HMM, and it is rather straightforward to combine two HMMs into a joint model (at least as long as you do not worry too much about the probabilities and only consider the topology). Running the combined HMM, you can look at the marginal states and consider them solutions to the two original problems. Again, there are some problems with interpreting the marginal states as runs of the marginal HMMs, because of the structure imposed on them by the joint HMM, but in most cases it doesn't matter much.
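The topology-combination step can be sketched in a few lines. This is a deliberately naive toy (not the paper's transducer model): it assumes the two chains step independently, so the joint state is just a pair of states and transition probabilities multiply:

```python
def combine_topologies(trans_a, trans_b):
    """Cross-product of two HMM transition tables (dicts of dicts).
    Naively assumes the two chains step independently, so the combined
    state is a pair (a, b) and transition probabilities multiply."""
    return {(a, b): {(a2, b2): pa * pb
                     for a2, pa in trans_a[a].items()
                     for b2, pb in trans_b[b].items()}
            for a in trans_a for b in trans_b}

# Toy alignment HMM (Match/Insert) crossed with a toy
# footprinting HMM (Functional/Neutral); numbers are made up.
align = {"M": {"M": 0.8, "I": 0.2}, "I": {"M": 0.5, "I": 0.5}}
foot  = {"F": {"F": 0.9, "N": 0.1}, "N": {"F": 0.05, "N": 0.95}}
joint = combine_topologies(align, foot)   # 4 joint states, e.g. ("M", "F")
```

A real combined model would of course couple the chains (functional state should influence substitution and indel probabilities), but the cross-product state space is the structural starting point.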

In this paper, they happily just combine the alignment HMM with the footprinting HMM and thereby get footprinting annotations while summing away the alignments as a nuisance parameter.

In my opinion, this is the way to go.

There is one complication, though, and that is the running time. The way the HMM is constructed here, the number of sequences in the data is a problem. This is a general problem, regardless of whether you consider statistical or "plain old" alignment: the running time to find optimal alignments (or sum over them all) grows exponentially with the number of sequences.

The way out of this is sampling, and there is an obvious Gibbs sampler solution. Using the forward and backward algorithms from HMM theory, you can sample the alignment of two sequences with the correct probability. So, in a multiple sequence alignment, if you fix all but one sequence, you can use this trick to sample the conditional alignment of one sequence at a time. That is all you need for a Gibbs sampler.
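The core trick, sampling a state path from the exact posterior by running the forward algorithm and then stochastically backtracing, can be sketched for a generic discrete HMM. The two-state match/gap model below is a toy with made-up numbers, not an actual pairwise alignment HMM:

```python
import random

def forward(obs, states, trans, emit, init):
    """Forward algorithm: alpha[t][s] = P(obs[0..t], state_t = s)."""
    alpha = [{s: init[s] * emit[s][obs[0]] for s in states}]
    for t in range(1, len(obs)):
        prev = alpha[-1]
        alpha.append({s: emit[s][obs[t]] * sum(prev[r] * trans[r][s] for r in states)
                      for s in states})
    return alpha

def _draw(weights, rng):
    """Draw a key with probability proportional to its weight."""
    r = rng.random() * sum(weights.values())
    acc = 0.0
    for key, w in weights.items():
        acc += w
        if r <= acc:
            return key
    return key  # guard against floating-point rounding

def sample_path(obs, states, trans, emit, init, rng=random):
    """Sample a state path from the posterior P(path | obs) by
    stochastic backtracing through the forward table."""
    alpha = forward(obs, states, trans, emit, init)
    path = [_draw(alpha[-1], rng)]                 # last state ~ alpha[T-1]
    for t in range(len(obs) - 2, -1, -1):          # walk backwards in time
        nxt = path[-1]
        path.append(_draw({s: alpha[t][s] * trans[s][nxt] for s in states}, rng))
    path.reverse()
    return path

# Toy two-state model: 'match' emits residues, 'gap' mostly emits '-'
states = ("match", "gap")
init   = {"match": 0.5, "gap": 0.5}
trans  = {"match": {"match": 0.9, "gap": 0.1},
          "gap":   {"match": 0.4, "gap": 0.6}}
emit   = {"match": {"A": 0.7, "-": 0.3},
          "gap":   {"A": 0.1, "-": 0.9}}
path = sample_path("AA-A", states, trans, emit, init)
```

In the Gibbs setting, each sweep would rebuild `trans` and `emit` from the fixed sequences and resample one sequence's alignment path this way.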

Unfortunately, the Gibbs approach is left for future work, but I guess we will see something soon …

Satija, R., Pachter, L., Hein, J. (2008). Combining statistical alignment and phylogenetic footprinting to detect regulatory elements. Bioinformatics, 24(10), 1236-1242. DOI: 10.1093/bioinformatics/btn104

Accounting for Research [Sciencebase Science Blog]

Posted: 07 May 2008 07:00 AM CDT

Accounting for scientists

How does one measure the worth of the science base? From the scientists’ perspective it is their bread and butter, or low-fat spread and rye biscuit, perhaps, in some cases. From industry’s standpoint, it is occasionally a source of interesting and potentially money-spinning ideas. Sometimes, it sits in its ivory tower and, to the public, it is the root of all those media scare stories. At worst, the science base is perceived as a huge drain on taxpayers’ money, especially when the popular press gets hold of ’spiders on cannabis’ and the ’scum on your tea’ as the lead science stories for the year!

For the government though, which more often than not is providing the funds for basic research, the science base is crucial to all kinds of endeavours: wealth creation, the development of fundamental science into practical technology, the demolition of those ivory towers and the mixing of scientists with the great industrial unwashed through collaboration. As such, governments try to ensure that the science they fund is accountable - to government, to its sponsors and to society and the public as a whole.

But, I come back to my first question. How does one measure the impact of basic research on society? If one went begging for funding for a new area in chemistry with no practical applications anywhere in sight, funding would likely be meagre. It can be dressed up, of course, natural product chemistry almost always has the potential for novel medicinally active compounds while even the most esoteric supramolecular chemistry could be the nanotechnology breakthrough we have been waiting for. You’ve seen and maybe even written the applications yourself. On the other hand, take any piece of genetics with the potential to cure some likely disease and the cash will usually roll in, at least relatively speaking.

So, what does quality mean when applied to scientific research? Was the discovery of the fullerenes quality science? Well, yes, it obviously was, in that it stirred up the chemistry and other communities and generated mass appeal for a subject that gets rather less of an airing in a positive light than certain other sciences. Fullerenes also provided some of the scientists involved with a Nobel Prize, so someone in Sweden must have liked it.

But if we were to apply any standard criterion of usefulness to society, we would be hard pushed to give it the highest score, except as a demonstration that fundamental science can still excite. After all, have you seen any real applications yet? I touched on the potential for medicinal fullerenes early in the fullerene's rising star, and it is probably unfair to single them out for accountability, especially as they ultimately inspired carbon nanotubes. You might say that they are simply one of many examples of science as art. They allow us to visualise the world in a new way; they are beautiful, chemically, mathematically, physically.

The pressure is now on scientists to face up to some imposing questions as government-mandated requirements begin to come into effect. [This has become a moot point in the UK since this article was first aired, given funding cuts for big, esoteric science projects]. Efforts to make science accountable come with a massive burden of controversy and are hindered by the almost impossible task of measuring creative activities such as research. Added to this, accountability requires increasing levels of administration especially at times of formal assessment for the scientists themselves.

The careers of most scientists hinge on these assessments, in more ways than one, as the pressure on faculty pushes them in directions they may not naturally go: producing research papers just to satisfy the assessment process, for instance. This, coupled with a general drive to bring science to the public through media initiatives and so demonstrate why science is important and why people's money should be spent on it, just adds to the pressure.

However, despite the marketing-style talk of stakeholders, and the close industrial analogues, the shareholders, basic scientific research is not about customers and churning out identical components on a production line. There are usually no targets and no truly viable and encompassing methods to assess the quality of any part of the scientific endeavour. Ironically, this means the end-of-year bonus is something on which most scientists miss out, regardless of their successes. Science is the art, technology makes it pay, but some art is fundamental or avant garde and some finds its way on to advertising hoardings. Which do you prefer, fine art or glossy brochure?

By forcing basic science to become accountable in terms of product and efficiency, there is the possibility that creativity and autonomy will be stifled. If done right, however, accountability can strengthen the relationship between research and society.

Measuring the socioeconomic benefits from specific scientific investments is tough. Basic research gets embodied in society’s collective skills, putatively taking us many more directions than we would otherwise have headed. As such, it can have a future impact on society at entirely unpredictable points in time. Who knows where that pioneering fullerene chemistry will have taken us by the end of this century?

Sir Harry Kroto, co-discoverer of the fullerenes told me in an interview once that, “Scientists are undervalued by a society that does not understand how outstanding someone has to be to become a full-time researcher.” Maybe the measure of science is in its beauty rather than its assessment scores.

A post from David Bradley Science Writer


DNA link found between frozen Aboriginal man and 17 living people [Next Generation Sequencing]

Posted: 07 May 2008 06:05 AM CDT

According to CBC, scientists have found 17 living relatives of a centuries-old “iceman” whose remains were discovered in a melting glacier in northern British Columbia, Canada, nine years ago. Chief Diane Strand of the Champagne and Ashihik First Nations led a project to search for the young man’s living relatives. She said 241 native people from [...]

The secret behind fast assembly of Next Generation Sequencing data [Next Generation Sequencing]

Posted: 07 May 2008 05:58 AM CDT

With CLC Genomics Workbench we're introducing a new assembly pipeline, which is not only sophisticated and highly scalable but also very fast. In benchmark tests we have assembled half a million 454 reads against the full E. coli reference genome in around 2 minutes on a dual-core laptop with 1 gigabyte of RAM, which is very [...]

Personalized Genetics: Privacy and the Virtual Gene [ScienceRoll]

Posted: 07 May 2008 03:35 AM CDT

Here is the regular post about recent improvements in individualized medicine. This week, T. Ryan Gregory at Genomicron attempted to define the term genome, while The New York Times tried to redefine disease, genes and all.

What if an unauthorized person got access to the 23andMe database? They would have a lot of information about many people. Sure, strong encryption algorithms can be used, but we know there is no 100% secure system. Perhaps by providing an anonymous service, as Keyose does, this problem could be nearly totally prevented.

But this is not the only problem. Not at all! What if I just took some spit from my new partner or my employee and sent it to 23andMe, pretending it was my own? Then I could access the genomic information of a third person without his or her permission. That is not funny at all!

Google and Microsoft together on medical health records:

And if you’re fed up with these news about the personalized genetic companies, send a virtual gene to your friend in Facebook.

More here.

What does DNA mean to you? #4 [Eye on DNA]

Posted: 07 May 2008 03:30 AM CDT


What does DNA mean to you?

Here’s what Eric of Seqanswers says:

To me, DNA is the single most important biological molecule, in fact to everyone it is! That fact is motivation enough for me to study and try to understand it completely. It is also extremely important for fueling intense amounts of blogging. ;)

Beyond the Blue Horizon: how ancient voyagers settled the far-flung islands of the Pacific [HENRY » genetics]

Posted: 07 May 2008 03:14 AM CDT

National Geographic has a fantastic special edition on the Lapita culture and the settlement of Polynesia:

Much of the thrill of venturing to the far side of the world rests on the romance of difference. So one feels a certain sympathy for Captain James Cook on the day in 1778 that he “discovered” Hawaii. Then on his third expedition to the Pacific, the British navigator had explored scores of islands across the breadth of the sea, from lush New Zealand to the lonely wastes of Easter Island. This latest voyage had taken him thousands of miles north from the Society Islands to an archipelago so remote that even the old Polynesians back on Tahiti knew nothing about it.

Imagine Cook’s surprise, then, when the natives of Hawaii came paddling out in their canoes and greeted him in a familiar tongue, one he had heard on virtually every mote of inhabited land he had visited. Marveling at the ubiquity of this Pacific language and culture, he later wondered in his journal: “How shall we account for this Nation spreading it self so far over this Vast ocean?”

Continued at: How ancient voyagers settled the far-flung islands of the Pacific. Don’t miss the videos of Jared Diamond and Pat Kirch either.

Medicine 2.0 carnival at Canadian Medicine [ScienceRoll]

Posted: 07 May 2008 02:00 AM CDT

Discussion on business models around Open Data is building up [business|bytes|genes|molecules]

Posted: 07 May 2008 12:57 AM CDT

This post got deleted during a blog snafu. Reposting.

Many months ago, I started talking about the monetization of biological data, a theme that has been present throughout the history of bbgm. In general, I have maintained that for the most part the value lies not in the raw data but in what we can do with the data. It looks like there is an interesting discussion brewing on the web around some of these ideas. Here are a few posts, I think in chronological order:

One is from Peter Murray-Rust; the comment there from Rich Apodaca is a must-read. There is a follow-up post from Antony Williams as well.

I will just reiterate a generalization, because I am only peripherally familiar with the specifics. On the web, data should be available as an addressable resource. The fact that some data is available as RDF is great (and I wish more data were). However, my personal preference is that data, especially open data, be accompanied by APIs that allow it to be accessed in a number of formats (not as a dump per se). I think the acceptable formats will be established over time. The key aspect here is the business models. Is the business in providing a service on top of the data? For example, beyond X number of API calls there could be an associated fee.

These business models are going to be the key. Just as Open Source and some web services have found viable business models, the models that allow people to build upon Open Data will be decisive.



Loss of posts and various wordpress issues [business|bytes|genes|molecules]

Posted: 07 May 2008 12:57 AM CDT

I have lost some recent posts and will try to resurrect them from Google Reader. Once again, I apologize for all these WordPress issues.

Please email me if you continue seeing problems … mndoci AT mndoci — DOT — com


Don’t Overdo The Multi-tasking [Bitesize Bio]

Posted: 07 May 2008 12:56 AM CDT

Multi-tasking used to be my favourite way to get ahead.

During my PhD I saw others around me working extremely long hours in the lab and not really having much of a personal life and quite early on I made the decision that this was not for me.

Although I enjoy my work, having a good life outside the lab is also very important. I also found that if I worked very long hours, I tended to be far less efficient overall.

But, I still wanted to get through as many experiments as possible. So I also made the decision that my approach would be to work a regular 8 hour day and be as efficient as possible during that time. My basic recipe for an efficient working day was:

1. Good time planning

2. Multi-tasking

3. Efficient and accurate record keeping

4. Short breaks (!)

Most of this worked quite well, but in retrospect the multi-tasking part was not a good idea, at least to the extent I took it to.

My aim in multi-tasking was to fill every part of the working day with something useful, so I would normally have several experiments going on at once and flit between them on a packed schedule. Gaps in the schedule were filled with the other necessities: writing up, reading, making solutions.

I definitely got through a lot of experiments that way. But that was the only good thing that such a packed schedule delivered.

Multi-tasking with few gaps in the day made me stressed and error-prone, and it took the enjoyment out of the job. That is not surprising looking at it now, but at the time I thought it was the best way to work.

A quick Google search on multi-tasking brings up a whole host of articles highlighting the inefficiency of multi-tasking (see here and here for example - both contain links to other useful articles including primary literature). Apparently the brain takes a certain amount of time to switch between tasks, so constant switching means constantly losing time and train-of-thought.

These days I still multi-task - although your brain is not designed for it, there’s no way around it. But I pay close attention to the number of tasks I am juggling. Where possible I perform only one or two experiments in parallel, and guess what? I have a much higher rate of success, and much less stress than before.

What do you think of multi-tasking?

The Next Gen Sequencing Lab [FinchTalk]

Posted: 06 May 2008 10:13 PM CDT

Illumina's "Genome Center in a mailroom" message really captures the impact of next generation sequencing technology. Each Illumina Genome Analyzer, AB SOLiD instrument, or Roche Genome Sequencer (454)...
