Thursday, May 15, 2008

The DNA Network

What’s on the web (2008 May 15) [ScienceRoll]

Posted: 15 May 2008 03:26 PM CDT


finally an excuse for my sweet tooth! [the skeptical alchemist]

Posted: 15 May 2008 03:03 PM CDT

ResearchBlogging.org

Finally I can blame this (supposedly, as I have not had my genome sequenced) on a genetic variant of the glucose transporter GLUT2. Yes, it does remind you of gluttony.

A glucose transporter is a channel that allows cells to import glucose, and apparently the variant of GLUT2 you possess will affect your intake of sugary foods and drinks. Or, at least, there is a positive correlation between possession of one variant and increased consumption of sugary foods. The study documenting this was published in Physiological Genomics.

The study looked at two cohorts of patients: older obese people, and young lean people. They then compared sugar consumption, as well as protein consumption, within the two groups, according to the GLUT2 variants. In this way, they found that age and sex seem not to affect sugar consumption, but a GLUT2 variant significantly correlated with increased "sweet toothedness".

These were the main findings of the study:


  • those individuals with the GLUT2 variation consistently consumed more sugars, namely sucrose (table sugar), fructose (a simple sugar such as in corn syrup) and glucose (from carbohydrates), regardless of age or sex.
  • the two sets of food records from the older group showed that the older individuals with the variation consumed more sugars than their non-variant older counterparts (112±9 vs. 86±4 grams of sugar per day and 111±8 vs. 82±4 grams per day).
  • the individuals in the younger population who carried the variant were found to consume more sweetened beverages (0.49±0.05 vs. 0.34±0.02 servings per day) and more sweets (1.45±0.10 vs. 1.08±0.05 servings per day) than their non-variant counterparts.
  • there were no differences in the amount of protein, fat, starch or alcohol that was consumed by those either with or without the variant.
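For the group means reported with standard errors above, a quick normal-approximation check of the older-cohort difference can be sketched like this (a back-of-the-envelope calculation, not the paper's actual analysis):

```python
import math

# Older-cohort summary statistics from the post (grams of sugar per day,
# mean and standard error): variant carriers vs. non-carriers.
variant = (112.0, 9.0)
non_variant = (86.0, 4.0)

def z_from_summary(a, b):
    """Approximate z statistic for a difference of two independent means,
    given (mean, standard error) for each group."""
    (m_a, se_a), (m_b, se_b) = a, b
    return (m_a - m_b) / math.sqrt(se_a ** 2 + se_b ** 2)

z = z_from_summary(variant, non_variant)
print(f"z = {z:.2f}")  # |z| > 1.96 roughly corresponds to p < 0.05
```

Even this crude check puts the group difference well past the usual significance threshold, consistent with the study's reported result.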


What I find very interesting, and what suggests to me that the study might be onto something, is that people with the "sweet channel" variant consumed more sugar regardless of sex and age. This still does not imply causation, but it definitely seems to suggest that there is a link.

Now, I would be interested in knowing whether people who consume more sugars also express higher levels of GLUT2 in general (regardless of the variant). You would expect that to be the case, as all this sugar has to be taken out of the circulation.

This seems to be a case of predisposition. But don't start being gluttonous now and blame it all on GLUT2.

Sources

American Physiological Society (2008, May 14). Genetic Variation Linked To Preference For Sugary Food. ScienceDaily. Retrieved May 15, 2008, from http://www.sciencedaily.com/releases/2008/05/080514064928.htm

Eny, K.M., Wolever, T.M., Fontaine-Bisson, B., El-Sohemy, A. (2008). Genetic variant in the glucose transporter type 2 is associated with higher intakes of sugars in two distinct populations. Physiological Genomics, 33(3), 355-360. DOI: 10.1152/physiolgenomics.00148.2007

P.S.: We had a problem with the MCB Carnival, as the next host could not log in to BlogCarnival. If you are still interested in being included and you have not submitted your post yet, you can contact him directly at his e-mail address, or you can just send it in to me. We will have the Carnival up soon!



23andMe Collaborates on Study of Parkinson’s Disease Genetics [Eye on DNA]

Posted: 15 May 2008 12:50 PM CDT

Researchers are rarely study participants. Up until last week, I’d only had a hand in designing and conducting epidemiologic studies and no experience at all participating in one. While I was waiting for my prenatal check-up, a master’s degree student at Imperial College recruited me for her thesis study on stress and comorbidity in pregnant women. I was happy to help out since I know firsthand how hard recruiting can be; she told me she’d had to reduce her sample size from 200 to 100 because it was such tough going.

All I had to do was check the boxes on about 10 pages of questions and collect a total of six saliva samples over two days. Two samples were taken upon waking and a third around 9 pm. I’m sorry to say that on the first day, I forgot that I was supposed to sit still while the cotton plug was under my tongue for two minutes absorbing saliva, and I also forgot to take my night-time sample at 9 and did it at 10:30 instead. I was also supposed to mail my frozen samples in by regular mail this past Monday or Tuesday but got distracted, so it will have to wait until next week.

What a lousy study participant I am especially given that I know what it’s like to be on the other side!

Perhaps 23andMe and the Parkinson’s Institute and Clinical Center will have more compliant study participants. They are planning a Web-based study of Parkinson’s disease that will ask 150 people to donate their saliva for genetic analysis like any 23andMe customer as well as submit personal data via the Web. In addition, the study will validate the online data collection method with face-to-face or phone interviews.

The San Francisco Chronicle reports that 23andMe also hopes that pharmaceutical companies will pay them for access to personal genomics customers who have specific conditions or disorders. Linda Avey is quoted as saying the pharmaceutical companies would be contacting customers to offer them the opportunity to participate in clinical trials (where 23andMe may also be able to offer database services) but I can easily see this extending into personalized ads for personalized medicine. Could be both good and bad.

From my recent and previous personal experiences, interviewer-led data collection will always be the gold standard because participants can ask for clarification on sample collection instructions and questions. However, being able to complete questionnaires in the comfort of one’s own home without time pressure (I was in a rush in case I was called in for my appointment) may increase the accuracy of the data being collected. The 23andMe Parkinson’s disease study will be valuable not only for its potential genetic discoveries, but also for its insights into the implementation of the Web in scientific research. Unfortunately, there’s no easy way for me to spit my sample into them series of Internets tubes, so I guess I’ll still have to remember to schlep them to the mailbox on Monday.

Photo credit: David Wilmot on Flickr

First SOLiD Publication: High Res Nucleosome Mapping in Worm [SEQanswers.com]

Posted: 15 May 2008 10:59 AM CDT

Looks like a milestone in the life of SOLiD: the first SOLiD publication (that I know of) has been released to PubMed:

Read more and join the community...

Remembering Lunch Can Help Reduce the Desire to Snack [Highlight HEALTH]

Posted: 15 May 2008 10:45 AM CDT

ResearchBlogging.org

Mind over matter may really work when it comes to managing appetite. Researchers at the University of Birmingham, U.K., have found that recalling foods eaten at lunch has an inhibitory effect on subsequent snacking later the same day. The study is currently in press and will be published in the journal Physiology & Behavior [1]. The effect was observed regardless of the type of snack eaten or its palatability. The study also found that meal recall was only effective in decreasing the amount eaten if participants did not have a tendency to overeat.

Looking in the Refridgerator
Creative Commons License photo credit: Perfecto Insecto

The study conclusions are based on the results of three experiments described below. Participant eating behavior was determined in these experiments using a questionnaire that included scales assessing dietary restraint (meaning the conscious determination and effort to restrict food intake and calories to control body weight) and tendency toward dietary disinhibition (meaning the tendency to overeat in certain situations). The studies evaluated the inhibitory effects of lunch recall and were conducted to examine the influence of (1) snack palatability, (2) individual dietary traits and (3) the time elapsed since lunch.

Experiment 1: Recall of today’s lunch and subsequent intake of popcorn

The first experiment investigated whether recall of the last meal decreased intake of popcorn differing in added salt and participant-rated pleasantness. Participants took part in one of two experimental conditions:

  • The Lunch Today condition, in which participants were asked to recall what they had eaten for lunch that day immediately before tasting and rating the afternoon snack.
  • The Lunch Yesterday condition, in which participants were asked to recall what they had eaten for lunch the previous day immediately before tasting and rating the afternoon snack.

Participants (14 young healthy male students) were tested between 2:30 and 4:30 in the afternoon. Upon arrival, those in the Lunch Today condition were asked to write down in as much detail as possible what they ate for lunch that day. Participants in the Lunch Yesterday condition were asked to do the same for lunch the previous day. Following an assessment of hunger, fullness, desire to eat and mood, participants completed a popcorn taste test. Three large bowls of popcorn were offered with varying amounts of salt (no salt, low salt and high salt) to provide a scale of palatability. After tasting and rating each of the popcorn types, participants were told they could help themselves to any popcorn left over.

The researchers found that when participants were asked to recall lunch eaten earlier that day, intake of all three popcorn types (measured by weighing the bowls of popcorn consumed by each participant before and after each test) was reduced compared to when participants were asked to recall lunch eaten the previous day.
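The intake measure and the between-condition comparison reduce to simple arithmetic; here is a sketch with made-up numbers (the individual weights are not reported in the post):

```python
# Bowl weights (grams) before and after one participant's taste test,
# one bowl per popcorn type. All numbers here are illustrative.
pre  = [250.0, 250.0, 250.0]
post = [235.0, 220.0, 230.0]

# Intake per popcorn type is simply weight before minus weight after.
intake = [before - after for before, after in zip(pre, post)]

# Total intake per participant under each recall condition (grams, made up):
lunch_today     = [38.0, 42.5, 30.0, 45.0]
lunch_yesterday = [55.0, 60.5, 48.0, 62.0]

def mean(xs):
    return sum(xs) / len(xs)

# The reported effect is the difference in mean intake between conditions.
reduction = mean(lunch_yesterday) - mean(lunch_today)
print(f"{reduction:.1f} g less eaten after recalling today's lunch")
```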

Experiment 2: Recall of today’s lunch and subsequent snack intake: effects of dietary restraint and disinhibition

The second experiment investigated whether the effect of meal recall on snacking depends on dietary traits, specifically the tendency to consciously restrict food intake (restraint) and the tendency to overeat when tempting food is present or other people are eating (disinhibition). Participants (75 young healthy female students) were separated into four groups based on dietary restraint and tendency toward disinhibition:

  • Low restraint/Low disinhibition
  • High restraint/Low disinhibition
  • Low restraint/High disinhibition
  • High restraint/High disinhibition

Two participants were excluded because they reported having diabetes or food allergies, were smokers or had a body mass index (BMI) outside the normal range.

Participants again took part in one of the two experimental conditions, Lunch Today or Lunch Yesterday. However, prior to the main test, they attended an introductory session to taste and rate the popcorn, providing the researchers with a baseline measurement of snack intake for each of the four groups. Participants ate their lunch at least 2 hours prior to the test session.

The researchers found no evidence that dietary restraint affected the response to meal recall. However, only participants scoring low in the tendency toward disinhibition decreased their snack intake after recalling lunch. Participants scoring high actually ate as much or more of all three popcorn types! The researchers hypothesized that participants with a tendency towards disinhibition (overeating) may have impairments in working memory related to preoccupying thoughts of food and body shape, as has been shown previously [2]. These impairments would thus interfere with memory encoding or retrieval of the recent meal.

Experiment 3: Recall of today’s lunch and subsequent snack intake: effect of time elapsed since the lunch

The aim of Experiment 3 was to test the hypothesis that the effect of meal recall on snack intake is dependent on memory. Introduction of a delay between a prior event and a later task is typical in memory testing. Thus, researchers examined whether the effect of meal recall is time dependent. Researchers again excluded participants if they reported having diabetes or food allergies, were smokers, had a body mass index (BMI) outside the normal range, or were not habitual breakfast eaters (to control for food intake prior to the controlled lunch). The sample comprised 47 healthy young female students who were asked to remember either their lunch eaten earlier that day or their trip to the test session. Instead of popcorn, the effect of meal recall on cookie intake was measured after a short delay (1-hour post lunch) or a longer delay (3-hour post lunch).

Based on the results of Experiment 2, researchers also assessed whether the effect of meal recall is moderated by dietary traits. Prior to the main test, participants attended an introductory session to taste and rate the cookies that would be eaten in the main experiment, providing the researchers with a baseline measurement of snack intake.

Participants were divided into two groups: half were asked to recall the lunch they had eaten earlier that day (Lunch Today condition) and half were asked to recall their journey to the University campus (Journey Control condition). Within each group, participants attended two test days, each comprising two sessions: a lunch session, which took place between 12:00 and 1:30 p.m., and a snack tasting session, which took place either 1 hour or 3 hours after lunch. Approximately half the participants completed the 1-hour post lunch test followed by the 3-hour post lunch test and the remaining participants experienced the tests in the reverse order.

Replicating the results of Experiment 2, researchers again found an effect of lunch recall on snacking for participants who scored low in tendency toward disinhibition. They also found that the effect on snack intake was time dependent, with participants significantly decreasing their snack intake after recalling lunch in the 3-hour delay condition, but not the 1-hour delay condition. According to the lead author of the study, psychologist Dr. Suzanne Higgs [3]:

The women who had been asked to recall their lunches and who took the taste test after three hours showed significantly reduced appetites compared to those who had detailed their journeys. This may be because after just one hour, the memory of eating lunch was still vivid enough to affect all the women's appetites.

Taken together, the results of these experiments show that recall of a recent meal before eating a snack can decrease the amount of snack eaten. The results also suggest that this effect is likely to be related to memory of the meal. The effect of recent meal recall is delay-dependent; snacking was reduced when testing took place 3 hours post lunch but not 1 hour post lunch. While these results identify the phenomenon, they don’t address how recollection of a recent meal affects subsequent intake. The authors speculate that changes in participants’ “feeling full” are cognitively mediated and depend on a number of factors, including food-related sensory cues, current internal state cues, how participants “felt” after the recent meal, and how they anticipate “feeling” following a snack.

As paradoxical as it sounds, these results suggest that by concentrating on a recent meal, you can reduce your desire to snack. If you’re up to the challenge, give it a try and let me know how you do!

References

  1. Higgs et al. Recall of recent lunch and its effect on subsequent snack intake. Physiol Behav. 2008 Mar 4 [Epub ahead of print].
    View abstract
  2. Kemps and Tiggemann. Working memory performance and preoccupying thoughts in female dieters: evidence for a selective central executive impairment. Br J Clin Psychol. 2005 Sep;44(Pt 3):357-66.
    View abstract
  3. Focus on Food — Thinking About Your Last Meal Could Reduce Snacking. University of Birmingham News and Events. 2008 Apr 24.

This article was published on Highlight HEALTH.


Bayesian Graphical Models for Genomewide Association Studies [Mailund on the Internet]

Posted: 15 May 2008 10:41 AM CDT

ResearchBlogging.org
Lately I have been interested in Bayesian approaches to genome-wide association mapping. These have the benefit that they can often consider all the data in a single model and, from that, score markers by their probability of being related to the disease, without the usual multiple-testing problems.

So for our journal club today, I picked the following paper for us:

Bayesian Graphical Models for Genomewide Association Studies
Verzilli, Stallard and Whittaker
American Journal of Human Genetics 79(1) 100-112

Abstract

As the extent of human genetic variation becomes more fully characterized, the research community is faced with the challenging task of using this information to dissect the heritable components of complex traits. Genomewide association studies offer great promise in this respect, but their analysis poses formidable difficulties. In this article, we describe a computationally efficient approach to mining genotype-phenotype associations that scales to the size of the data sets currently being collected in such studies. We use discrete graphical models as a data-mining tool, searching for single- or multilocus patterns of association around a causative site. The approach is fully Bayesian, allowing us to incorporate prior knowledge on the spatial dependencies around each marker due to linkage disequilibrium, which reduces considerably the number of possible graphical structures. A Markov chain–Monte Carlo scheme is developed that yields samples from the posterior distribution of graphs conditional on the data from which probabilistic statements about the strength of any genotype-phenotype association can be made. Using data simulated under scenarios that vary in marker density, genotype relative risk of a causative allele, and mode of inheritance, we show that the proposed approach has better localization properties and leads to lower false-positive rates than do single-locus analyses. Finally, we present an application of our method to a quasi-synthetic data set in which data from the CYP2D6 region are embedded within simulated data on 100K single-nucleotide polymorphisms. Analysis is quick (<5 min), and we are able to localize the causative site to a very short interval.

The idea in the paper is to capture the joint distribution of all genotypes and phenotypes in a graphical model that explicitly captures which model variables are independent and which are not, and from this read off the probability that each genotype is independent of the phenotype.

By putting a few restrictions on the topology of the graphs they consider, it is possible to calculate the likelihood of any given graph by, essentially, running through the graph and calculating conditional probabilities of all cliques in the graph, where each clique is modeled as a multinomial distribution over genotypes, possibly together with phenotypes. With a small rewrite, this becomes a product of independent probabilities divided by another product of independent probabilities:

[p(C1) x p(C2) x … x p(CL)] / [p(S1) x p(S2) x … x p(SR)]

Using Dirichlet priors, the parameters of the multinomial distributions can be integrated out, and the resulting expression is very fast to compute, making it feasible to sample over graphs in an MCMC.

This integral, though, I don’t think is correct. It comes down to those products of independent probabilities. These probabilities depend on the multinomial parameters, and while the terms in the numerator depend on disjoint parameters (and the same holds within the denominator), there is an overlap between the parameters used in the numerator and those used in the denominator, and that overlap introduces dependencies.

[p(C1|θ) x p(C2|θ) x … x p(CL|θ)] / [p(S1|θ) x p(S2|θ) x … x p(SR|θ)]

= [p(C1|θ1) x p(C2|θ2) x … x p(CL|θL)] / [p(S1|θ1) x p(S2|θ2) x … x p(SR|θR)]

When integrating over the set of parameters, θ, if these could be split into disjoint parameters for each of the independent probabilities, the integral could be solved for each term individually — which is easy with a conjugate prior. When the parameters cannot be split in disjoint sets — and here they cannot — then that trick doesn’t work.

In the paper they do it anyway, though.

This just means that their model introduces more independence than it was really supposed to, and that the model they describe is not quite the model they run the MCMC on. But all that really matters is how well the method identifies genotype-phenotype associations, and at that it seems to be pretty good.


VERZILLI, C. (2006). Bayesian Graphical Models for Genomewide Association Studies. The American Journal of Human Genetics, 79(1), 100-112. DOI: 10.1086/505313

Keeping the technology in the background [business|bytes|genes|molecules]

Posted: 15 May 2008 10:04 AM CDT

Companies, especially those with an advanced technology infrastructure, often struggle with this question: should we push our applications or the underlying technology? That’s one of the questions that Paul Miller takes on in a summary of a panel on commercializing the semantic web. The following section summarizes my own feelings:

Neither users nor investors are particularly interested in being pitched with 'the Semantic Web' or 'RDF' or 'triples'; they want applications and solutions. The fact that the Semantic Web is at work behind the scenes to make those applications and solutions 'better', cheaper, more scalable or whatever is clearly important, but shouldn't be the opening gambit in conversation.

If you don’t understand the applications you are always leaving yourself open to the “so what” question. This philosophy is not exclusive to the Semantic Web, but in cases where the technology gets overhyped, you have to be even more cautious. However, depending on your audience, don’t forget the technology either, especially if that is your core competency, just remember that technology solves problems and without a problem the coolest tech is not worth much.


Correcting machine learning hand-ins [Mailund on the Internet]

Posted: 15 May 2008 02:56 AM CDT

I’ve been correcting the hand-ins for the first project in my machine learning class. It is a very simple exercise where the students are given five data sets with predictor variables and target values, and from them they need to train a model (just using linear regression) and then predict targets for new predictor values.

They can transform the predictor variables in any way they want to come up with a good set of basis function to then use in the linear model, and this is really the only tricky part in the exercise. After that, it is a simple programming task to fit the model.
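The fitting step can be sketched like this (toy quadratic data and a polynomial basis of my own choosing, not the actual class data sets):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data in the spirit of the exercise: targets are a noisy
# quadratic function of a single predictor.
x = rng.uniform(-3.0, 3.0, size=200)
t = 1.0 + 2.0 * x + 0.5 * x ** 2 + rng.normal(scale=0.3, size=x.shape)

def design(x, degree):
    """Design matrix of polynomial basis functions phi_j(x) = x**j."""
    return np.vander(x, degree + 1, increasing=True)

# Fitting the linear model t ~ Phi @ w is ordinary least squares.
Phi = design(x, degree=2)
w, *_ = np.linalg.lstsq(Phi, t, rcond=None)

# Predicting targets for new predictor values reuses the same basis.
x_new = np.array([0.0, 1.0, 2.0])
t_pred = design(x_new, degree=2) @ w
print(w)  # close to the generating coefficients [1.0, 2.0, 0.5]
```

With the right basis, least squares recovers the generating coefficients; the hard part of the exercise is choosing the basis, not the fit.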

Some of the data sets are easy enough to work with, like the first one that is a simple line with gaussian error around it. This is the typical linear regression setup. Other data sets are harder to figure out, but the take-home message is that it doesn’t really matter so much if you can work out the true model specification, as long as you can make predictions better than mere guessing (although, in one of the data sets the predictors and targets are independent, just to show that that is also always a possibility).

The only measure that matters is the prediction accuracy on the new data, and that is what I have been looking at for the hand-ins. I want to reduce this to a single score so I can pick a “winner” and give him a little prize.

For each data set I’ve reduced the prediction to a single score by taking the square root of the sum of squared errors and then dividing it by the mean of the true target values. This scales the errors to “standard errors” so I can compare the individual models.

Still, some models are much harder to make predictions about, so to take that into account, I’ve taken the mean of the errors in the hand-ins for each model and divided the individual errors by that. That way, the difficult models count for less than the easy ones. The weighted sum of errors is then the final score.
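A sketch of the scoring scheme as I read it (function names and numbers are mine):

```python
import math

def dataset_score(predictions, truth):
    """Square root of the sum of squared errors, divided by the mean of
    the true targets (the per-data-set score described above)."""
    sse = sum((p - t) ** 2 for p, t in zip(predictions, truth))
    return math.sqrt(sse) / (sum(truth) / len(truth))

def final_scores(per_student):
    """per_student maps student -> list of data-set scores. Each score is
    divided by the mean score on that data set across all students, so
    hard data sets count for less; the weighted sum is the final score."""
    students = list(per_student)
    n_sets = len(per_student[students[0]])
    set_means = [
        sum(per_student[s][i] for s in students) / len(students)
        for i in range(n_sets)
    ]
    return {
        s: sum(per_student[s][i] / set_means[i] for i in range(n_sets))
        for s in students
    }

# Two hypothetical students on two data sets:
scores = final_scores({"alice": [1.0, 4.0], "bob": [3.0, 2.0]})
print(scores)
```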

Gene therapy slows progression of fatal neurodegenerative disease in children [Think Gene]

Posted: 15 May 2008 02:33 AM CDT

Gene therapy to replace the faulty CLN2 gene, which causes a neurodegenerative disease that is fatal by age 8-12 years, was able to slow significantly the rate of neurologic decline in treated children, according to a paper published online ahead of print in the May 2008 issue (Vol. 19 No. 5) of Human Gene Therapy, a peer-reviewed journal published by Mary Ann Liebert, Inc. The paper is available free online at www.liebertpub.com/hum.

Late Infantile Neuronal Ceroid Lipofuscinosis (LINCL) is an autosomal recessive genetic disorder that causes degeneration of the central nervous system. It is a form of Batten disease, a group of lysosomal storage diseases in which a lipofuscin-like material is not broken down and accumulates in neurons, causing cognitive impairment, visual failure, seizures, and progressive deterioration of motor function.

Ronald Crystal and colleagues from Weill Cornell Medical College (New York, NY) describe a study conducted in 10 children with LINCL who received gene therapy to replace the defective CLN2 gene via administration of human CLN2 carried in an adeno-associated virus (AAV). In the paper entitled "Treatment of Late Infantile Neuronal Ceroid Lipofuscinosis with CNS Administration of a Serotype 2 Adeno-associated virus expressing the CLN2 cDNA," the authors report that over an 18-month period, assessment using a neurologic rating scale demonstrated significant slowing of disease progression in the treated children compared to the untreated children. On the basis of these findings, the authors proposed that additional studies to assess the safety and efficacy of AAV-mediated gene therapy for LINCL be pursued.

Although the treatment was associated with some serious adverse events in some patients, these were not unequivocally attributable to the gene therapy vector.

"This clinical trial is an important step toward the development of treatments for this group of underserved inherited neurodegenerative disorders," says James M. Wilson, MD, PhD, Editor-in-Chief and Head of the Gene Therapy Program, Department of Pathology and Laboratory Medicine, at the University of Pennsylvania School of Medicine, in Philadelphia.

Source: Mary Ann Liebert, Inc./Genetic Engineering News

Josh says:

Hopefully we’ll see more applications of gene therapy from a clinical perspective, now that the FDA may be open to the idea again after the mishaps with cancer in the first round of gene therapy testing.

Chemical compound prevents cancer in lab [Think Gene]

Posted: 15 May 2008 02:33 AM CDT

While researching new ways to stop the progression of cancer, researchers at the University of Oklahoma Health Sciences Center have discovered a compound that has been shown to prevent cancer in the laboratory. The research appears in the journal Gene Regulation and Systems Biology.

The compound, which still faces several rounds of clinical trials, successfully stopped normal cells from turning into cancer cells and inhibited the ability of tumors to grow and form blood vessels. If successful tests continue, researchers plan to create a daily pill that would be taken as a cancer preventive.

"This compound was effective against the 12 types of cancers that it was tested on," said Doris Benbrook, Ph.D., principal investigator and researcher at the OU Cancer Institute. "Even more promising for health care is that it prevents the transformation of normal cells into cancer cells and is therefore now being developed by the National Cancer Institute as a cancer prevention drug."

The synthetic compound, SHetA2, a Flex-Het drug, directly targets abnormalities in cancer cell components without damaging normal cells. The disruption causes cancer cells to die and keeps tumors from forming.

Flex-Hets or flexible heteroarotinoids are synthetic compounds that can change certain parts of a cell and affect its growth. Among the diseases and conditions being studied for treatment with Flex-Hets are polycystic kidney disease, kidney cancer and ovarian cancer.

Benbrook and her research team have patented the Flex-Het discovery and hope to start clinical trials for the compound within 5 years. If the compound is found to be safe, it would be developed into a pill to be taken daily like a multi-vitamin to prevent cancer.

The compound also could be used to prevent cancer from returning after traditional radiation and chemotherapy treatments, especially in cancers that are caught in later stages such as ovarian cancer where life expectancy can be as short as 6 months after treatment.

"It would be a significant advancement in health care if this pill is effective in preventing cancer, and we could avoid the severe toxicity and suffering that late stage cancer patients have to experience," Benbrook said.

Source: University of Oklahoma

Josh says:

I really hope this is for real, but I am still a little skeptical. All cancers are different; they just tend to be lumped together under the general term “cancer”, so to think that there is one compound that prevents them all seems unlikely. I would be very curious, though, to see the exact mode of action of this compound from a biochemical perspective.

Families shed light on likely causative gene for Alzheimer’s [Think Gene]

Posted: 15 May 2008 02:32 AM CDT

The genetic profile of two large Georgia families with high rates of late-onset Alzheimer’s disease points to a gene that may cause the disease, researchers say.

Genetic variations called single nucleotide polymorphisms, or SNPs, are common in DNA, but this pattern of SNPs shows up in nine out of 10 affected family members, says Dr. Shirley E. Poduslo, neuroscientist in the Medical College of Georgia Schools of Medicine and Graduate Studies and the Charlie Norwood Veterans Affairs Medical Center in Augusta.

The 10th family member had half the distinctive pattern. The SNPs also were found in the DNA of 36 percent of 200 other late-onset patients whose samples are stored in the Alzheimer’s DNA Bank.

“We were shocked; we had never seen anything like this before,” Dr. Poduslo says of findings published online in the American Journal of Medical Genetics. “If we looked at unaffected spouses, their SNPs were all different. The variants consistently found in affected siblings are suggesting there is something in this gene. Now we have to go back and find what is in this gene that is making it so unique for Alzheimer’s patients.”

The variation was in the TRPC4AP gene, part of a large family of genes that is not well-studied but is believed to regulate calcium. Calcium is needed throughout the body but its dysregulation can result in inflammation, nerve cell death and possibly plaque formation as well, she says.

The finding provides new directions for research and possibly new treatment targets, Dr. Poduslo says. It also shows the important role large families affected by a disease can have in determining the cause of the disease.

The specific genetic mutation responsible still must be identified and will require sequencing the very large gene, or determining the order of the base pairs that form the rungs of the ladder-like DNA, Dr. Poduslo says. An SNP represents a change in either side of a rung. “The mutation could be a deletion of some of the nucleotides, could be an insertion, or something in the promoter gene that turns the gene off so it’s never transcribed. It could be a wide variety of things, and that is what our next step is to identify the mutation.” She’ll work with The Scripps Research Institute in Jupiter, Fla., to expedite the required high-throughput analysis.

One of the families that provided the sentinel genetic information had 15 members, including five with Alzheimer's disease that began in their 60s and 70s; the second family had 14 members, six of whom had the disease. The disease incidence itself was notable and the incidence of the pattern of SNPs was equally so. “This to me is very, very striking. Either genetics or lifestyle could be the biggest risk factor,” says Dr. Poduslo. “We looked at these families very, very carefully to see what in their background may make them different and we couldn't come up with anything. They were farmers living fairly healthy lifestyles.”

Families donate to the Alzheimer’s DNA bank to help others at this point and don’t get feedback from findings generated by their genetic material, Dr. Poduslo says. “Right now, we have no way of treating it, so it’s not going to help them. When we come up with new drugs so we can treat this disease, then it becomes important to identify it early and get treatment.”

Source: Medical College of Georgia

Josh says:

This is really interesting. I'm sure more research will be done on other variations within this gene to see if they also correlate strongly with Alzheimer's. The next question to answer is why these SNPs seem to be linked to the disease.

Compound has potential for new class of AIDS drugs [Think Gene]

Posted: 15 May 2008 02:31 AM CDT

Researchers have developed what they believe is the first new mechanism in nearly 20 years for inhibiting a well-established drug target common to all HIV treatment, which could eventually lead to a new class of AIDS drugs.

Researchers at the University of Michigan used computer models to develop the inhibiting compound, and then confirmed in the lab that the compound does indeed inhibit HIV protease, which is an established target for AIDS treatment. The protease is necessary to replicate the virus, says Heather Carlson, U-M professor of medicinal chemistry in the College of Pharmacy, and principal investigator of the study.

Carlson stresses this is a preliminary step, but still significant.

“It’s very easy to make an inhibitor, (but) it’s very hard to make a drug,” said Carlson, who also has an appointment in chemistry. “This compound is too weak to work in the human body. The key is to find more compounds that will work by the same mechanism.”

What's so exciting is how differently this mechanism works from the current drugs used to keep HIV from maturing and replicating, she says. Current drugs called protease inhibitors work by debilitating the HIV-1 protease. The new compound does the same, but in a different way, Carlson says.

A protease is an enzyme that clips apart proteins, and in the case of HIV drugs, when the HIV-1 protease is inhibited it cannot process the proteins required to assemble an active virus. In existing treatments, a larger molecule binds to the center of the protease, freezing it closed.

The new mechanism targets a different area of the HIV-1 protease, called the flap recognition pocket, and actually holds the protease open. Scientists knew the flaps opened and closed, but didn’t know how to target that as a mechanism, Carlson says.

Carlson’s group discovered that this flap, when held open by a very small molecule—half the size of the ones used in current drug treatments—also inhibits the protease.

Beyond opening the door to a new class of drugs, the compound is notable because smaller molecules have better drug-like properties and are absorbed much more easily.

“This new class of smaller molecules could have better drug properties (and) could get around current side effects,” Carlson said. “HIV dosing regimens are really difficult. You have to take medicine several times in the day. Maybe you wouldn't have to do that with these smaller molecules because they would be absorbed differently.”

Kelly Damm, a former student and now at Johnson & Johnson, initially had the idea to target the flaps in this new way, Carlson says.

“In a way, this works like a door jam. If you looked only at the door when it’s shut, you’d not know you could put a jam in it,” she said. “We saw a spot where we could block the closing event, but because everyone else was working with the closed form, they couldn’t see it.”

Source: University of Michigan

Inhibiting HIV-1 protease through its flap-recognition pocket. Kelly L. Damm, Peter M. U. Ung, Jerome J. Quintero, Jason E. Gestwicki, Heather A. Carlson. Biopolymers 89(8): 643-652.

Andrew says:

The more mechanisms an HIV treatment can attack at once, the less likely the virus will be able to mutate its way around all of them. While a drug targeting this mechanism does not yet exist, one may well follow, and together with other drugs it would be a small step toward an effective treatment for HIV.
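The intuition in the comment above can be made concrete with a back-of-the-envelope calculation: if resistance to each drug requires an independent mutation arising with probability p per replication, resistance to n drugs at once scales roughly as p**n. The rate used here is an assumed illustrative number, not a figure from the article.

```python
# Assumed per-drug resistance-mutation probability per replication
# (illustrative value, not from the article or any study).
p = 1e-4

# Probability that a single replication event yields resistance to
# all n drugs simultaneously, assuming independent mutations.
for n in (1, 2, 3):
    print(f"{n} drug(s): ~{p ** n:.0e}")
```

Under these assumptions, adding a third independently-acting drug drops the per-replication escape probability from one in ten thousand to one in a trillion, which is why combination therapy is so much harder for the virus to evade.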

New Videos for Genetic Genealogists [The Genetic Genealogist]

Posted: 15 May 2008 02:00 AM CDT

While conducting some online research the other day, I discovered a series of videos about genetic genealogy by Alastair Greenshields, founder of DNA Heritage. The main page contains 6 videos (listed below), each broken down into 2 to 8 chapters. Since the videos are divided into chapters, you can easily skip to the topics that are most relevant to you.

  1. Genetic Genealogy Terminology
  2. Genetic Genealogy Defined
  3. Tracing My Genetic Heritage
  4. My Past
  5. Giving DNA
  6. Genetic Genealogy Results

There are many other places to find videos about genetic genealogy. Last April I wrote “Ten Videos For Genetic Genealogists”, although only 8 of them are still available. You can also watch videos about DNA here at TGG's DNA Channel, courtesy of Roots Television. And lastly, Family Tree DNA has videos available on its website.

To give you a preview of the DNA Heritage videos, the first is embedded below:

Matt Wood on the distributed web of data [business|bytes|genes|molecules]

Posted: 15 May 2008 01:55 AM CDT

Matt Wood has a great screencast of a talk on some very relevant problems (things that touch my interests as a blogger and in my day job as well).

No surprise that I agree with him on the importance of distributed data and how we can start leveraging the web for a field that gets more and more data intensive by the day.

Further reading
Do we need life science CDNs?


Lost a finger nail and taking a break [Mailund on the Internet]

Posted: 15 May 2008 12:38 AM CDT

You might be wondering why I've just been posting video clips the last couple of days rather than my usual posts. Well, over the weekend I fell on a chair and tore off a fingernail, so typing is a bit slow for me these days. I'm essentially using only my left hand, although I'm slowly getting used to using just three fingers on my right hand (the nail on my ring finger is missing and the finger is wrapped in bandages, so using it and the pinky is a problem).

It's like learning to type all over again, and rather slow, so I've been taking a break for a few days. I haven't gotten much work done either, but I have managed to prepare a talk I'm giving tomorrow. I don't have any problems using a mouse, so preparing slides was not a problem.

Anyway, expect shorter posts in the coming week…

How to Refold 653 Insoluble Proteins [Bitesize Bio]

Posted: 15 May 2008 12:22 AM CDT

Bacteria are good hosts for expressing recombinant proteins, mainly because they are easy to manipulate and grow. But their relatively simple expression systems can't cope with every gene you throw at them, so proteins will often fail to express properly.

Sometimes the protein is fully expressed but cannot fold properly. This is particularly common when the evolutionary distance between the host and the source organism makes the bacterial cytoplasm an unsuitable environment for folding. Common culprits include too high a rate of protein expression, lack of a chaperone or partner protein, and inability to form disulphide bonds.

These unfolded proteins aggregate to form inclusion bodies (insoluble protein granules) in the cytoplasm. But if this happens, all is not lost: it is possible to recover the inclusion bodies, denature them and refold the proteins in vitro to recover functional protein that can be used for further study.

The problem is that every protein requires different conditions and methodologies for optimal refolding, so a lot of time-consuming experiments are required to optimise the protocol for each protein.

To alleviate this problem, researchers at Australia's Monash University have set up the REFOLD database, which contains protocols, developed by contributors to the database, for refolding hundreds* of different proteins.

If you are faced with the task of refolding an insoluble protein, this should be your first stop. If you find a protocol for your protein, you will save yourself a lot of time. And if you don't, uploading your own protocol once you have developed it will save someone else time in the future, earning you some good research karma.

*Protocols for 653 proteins were in the database at the time of writing

Photo: bousinka
