The DNA Network |
FriendFeed takes an interesting step [business|bytes|genes|molecules] Posted: 22 May 2008 07:32 PM CDT FriendFeed is one of my favorite places on the web. Not only does it aggregate the various content I produce on the web, it is beautifully designed and has spawned one of the best little biogeek communities on the web (case in point: this wonderful discussion on code repositories for science). Today, FriendFeed added a new feature: rooms. Will rooms be useful? The jury is still out, since they’ve been around for all of a couple of hours, but that didn’t stop me from creating a room for our community there. My hope for The Life Scientists is that it will become the kind of resource for me that Digg or Reddit have never become: a place for us to share our miscellaneous links and items of interest. At a more fundamental level, I’ve often talked about trust (channeling Jon Udell). That’s one of the reasons I really like Lijit (which needs to add FriendFeed post haste). The rooms in FriendFeed could become the custom Reddit I always wanted, a trusted source of information. Of course, this being the web, I am cautiously optimistic, but given the quality of FriendFeed thus far, I think that optimism is somewhat justified. The challenge is going to be correctly balancing my own feed with the room. Technorati Tags: FriendFeed, Reddit, Digg, Lifestream, Social Networks |
readings, readers and blogrolling [the skeptical alchemist] Posted: 22 May 2008 07:23 PM CDT First of all, I would like to welcome the new readers of the skeptical alchemist. Maybe you aren't necessarily new, but you sure added my feed to your reader/bookmarks, because today this blog registered the highest number of subscribers since its start. I can't believe that there are 82 people out there reading my ramblings. Amazing. In case you are totally new around here, there are a few good posts you can read - you can see them in the sidebar under the heading "the BEST of". These are some of the best/most searched posts on the blog, organized by topic. Last but not least, you could also pay a visit to the blogs in my blogroll, and to these new additions: Sandwalk Scienceroll Eye on DNA I hope you will enjoy your time spent reading the ramblings of the skeptical alchemist. |
Human Genetics Disorders.com [ScienceRoll] Posted: 22 May 2008 04:24 PM CDT There are only a few sites on the web dedicated to human genetic conditions that provide quality content, so I’m quite happy to have recently come across HumanGeneticsDisorders.com, a unique site maintained by Chavonne Jones:
You can also follow her on Twitter. Just one recent example from the blog: a video about muscular dystrophy. We need more similarly useful resources. |
Is Blogging Healthy? [ScienceRoll] Posted: 22 May 2008 04:10 PM CDT I thought blogging could be dangerous for our health, but now here is another opinion. As Scientific American reports:
|
Case of Triplets with Different Fathers Finally Closed [The DNA Testing Blog] Posted: 22 May 2008 03:39 PM CDT According to a May 17 article in The Times (UK) online, a rare paternity case involving a set of triplets fathered by two different men has finally been settled after a 10-year legal fight. After the birth of his mistress's triplets in 1997, the man—who allegedly had a brief affair with the triplets' mother—began to doubt [...] |
Posted: 22 May 2008 03:18 PM CDT On June 1st, I'll be hosting the next edition of Medicine 2.0, a carnival devoted to exploring the impacts of web 2.0 technologies on medicine and medical practice. All topics that consider the impacts of web 2.0 on medicine and healthcare are fair game.
If you have an article that you think fits the description, feel free to submit it to me, either via e-mail digitalbio at gmail.com or through the fancy blog carnival submission form. Read the comments on this post... |
Herb-Drug Interactions [A Forum for Improving Drug Safety] Posted: 22 May 2008 01:23 PM CDT COMMENT: ABSTRACTS: Mol Nutr Food Res. 2008 Jan 23. Clinical assessment of CYP2D6-mediated herb-drug interactions in humans: Effects of milk thistle, black cohosh, goldenseal, kava kava, St. John’s wort, and Echinacea. Gurley BJ, Swain A, Hubbard MA, Williams DK, Barone G, Hartsfield F, Tong Y, Carrier DJ, Cheboyina S, Battu SK.
Cytochrome P450 2D6 (CYP2D6), an important CYP isoform with regard to drug-drug interactions, accounts for the metabolism of approximately 30% of all medications. To date, few studies have assessed the effects of botanical supplementation on human CYP2D6 activity in vivo. Six botanical extracts were evaluated in three separate studies (two extracts per study), each incorporating 16 healthy volunteers (eight females). Subjects were randomized to receive a standardized botanical extract for 14 days on separate occasions. A 30-day washout period was interposed between each supplementation phase. In study 1, subjects received milk thistle (Silybum marianum) and black cohosh (Cimicifuga racemosa). In study 2, kava kava (Piper methysticum) and goldenseal (Hydrastis canadensis) extracts were administered, and in study 3 subjects received St. John’s wort (Hypericum perforatum) and Echinacea (Echinacea purpurea). The CYP2D6 substrate, debrisoquine (5 mg), was administered before and at the end of supplementation. Pre- and post-supplementation phenotypic trait measurements were determined for CYP2D6 using 8-h debrisoquine urinary recovery ratios (DURR). Comparisons of pre- and post-supplementation DURR revealed significant inhibition (approximately 50%) of CYP2D6 activity for goldenseal, but not for the other extracts. Accordingly, adverse herb-drug interactions may result with concomitant ingestion of goldenseal supplements and drugs that are CYP2D6 substrates.
Ethanolic extracts from fresh Echinacea purpurea and Spilanthes acmella and dried Hydrastis canadensis were examined with regard to their ability to inhibit cytochrome P450(2E1) mediated oxidation of p-nitrophenol in vitro. In addition, individual constituents of these extracts, including alkylamides from E. purpurea and S. acmella, caffeic acid derivatives from E. purpurea, and several of the major alkaloids from H. canadensis, were tested for inhibition using the same assay. H. canadensis (goldenseal) was a strong inhibitor of the P450(2E1), and the inhibition appeared to be related to the presence of the alkaloids berberine, hydrastine and canadine in the extract. These compounds inhibited 2E1 with K(I) values ranging from 2.8 microM for hydrastine to 18 microM for berberine. The alkylamides present in E. purpurea and S. acmella also showed significant inhibition at concentrations as low as 25 microM, whereas the caffeic acid derivatives had no effect. Commercial green tea preparations, along with four of the individual tea catechins, were also examined and were found to have no effect on the activity of P450(2E1).
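The phenotyping arithmetic behind the first abstract is simple enough to sketch. Below is a toy illustration (my own, not data from the study), assuming the 8-h debrisoquine urinary recovery ratio scales with CYP2D6 activity, so the percent inhibition is just the relative drop from the pre- to the post-supplementation measurement.

    # Toy illustration only: these DURR values are invented, not the study's
    # data, and we assume the recovery ratio is proportional to CYP2D6 activity.
    durr_pre = 0.40    # hypothetical pre-supplementation 8-h recovery ratio
    durr_post = 0.20   # hypothetical post-supplementation ratio

    percent_inhibition = (1 - durr_post / durr_pre) * 100
    print(f"apparent CYP2D6 inhibition: {percent_inhibition:.0f}%")
    # -> 50%, the magnitude the abstract reports for goldenseal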
|
Science on TV: A Question [Bayblab] Posted: 22 May 2008 01:09 PM CDT Recently, I caught a TV program called "Brainiac: Science Abuse". It's a British program (or is that programme?). The episode I saw involved various segments, including what happens to toothpaste when you put it in liquid nitrogen (it freezes), what happens if you put the ingredients for a parfait in the microwave with a fluorescent lightbulb and a bowl of flammable liquid (it explodes), can an aerobics teacher do her job while being randomly given an electric shock (she can't) and what happens if you blow up a shed full of fireworks (they go off). It also included more scientific segments: an attempt to replicate Galileo's Leaning Tower of Pisa experiment, and an experiment intending to determine whether a person performs better when starving or when over-stuffed. (There were many other segments of both types.) In all cases, the background science wasn't well explained and the experiments were poorly designed. The show has even been accused of forging results. 'Science Abuse' is an accurate subtitle. This is a particularly bad example, but there are other science shows that aren't exactly rigorous with the scientific method (Mythbusters, for example, is a great show but the experimental design is sometimes lacking). I understand that these are television programs and the goal is to entertain, but it seems to me that improving the rigor needn't get in the way of watchability. In the feast-or-famine performance example above, adding a person who had neither over-eaten nor been starved for a day wouldn't be difficult (forgetting that it's still an n=1 experiment). Or breaking up the tasks and assessing them individually rather than one mega-challenge of both physical and mental events. So my questions are these: Is it more important to have an accurate portrayal of the scientific method, or an entertaining program that attracts kids to science even if it mostly portrays it as blowing things up? Is it impossible to make a more rigorous science program fun? [Comic credit: xkcd] |
The Probability of Winning the NBA Draft Lottery [evolgen] Posted: 22 May 2008 12:00 PM CDT On Tuesday night, the National Basketball Association (NBA) held their annual draft lottery. In the draft, each team is given the opportunity to select a few players that have declared themselves eligible for the draft (either after completing at least one year of college in the United States or being from another country and over 18 years old). The order of picks in the NBA draft is determined with a goal of awarding earlier picks to teams that performed the worst the previous season. However, rather than simply giving the worst team the first pick, the second worst the second, etc., the NBA takes the teams that failed to make the playoffs and assigns them a probability of earning the first pick based on their record the previous season. They have been using a probability system based on the previous season's performance since 1990, and those probabilities can be found here. The draft lottery was instituted to prevent the worst team from automatically earning the first pick -- which allowed the NBA to assuage any fears that a team would intentionally "tank" the end of the season to draft a young stud.

From 1985-89, the NBA gave each non-playoff team an equal chance of winning the draft lottery and earning the first pick in the draft. This process began with the New York Knicks earning the first pick in 1985, which allowed them to draft Patrick Ewing, regarded by all the experts as the best available player. Many people accused the NBA of rigging the lottery so the young superstar would be paired with its marquee franchise. But the Knicks never won a championship with Ewing, and he earned a notorious reputation as a guy whose teams improved without him in the lineup (see the Ewing Theory).

From 1990-93, the non-playoff teams were ranked by their records, and the team with the worst record had 11 chances of winning the lottery, the second worst had 10 chances, the third worst had 9 chances, and so on, all the way to the eleventh worst team, which had one chance. This ended after the 1993 draft lottery, when the Orlando Magic captured the first pick despite having the best record of all the non-playoff teams, and a mere 1.5% chance of winning the lottery.

From 1994 to the present, the probability of earning the first pick has been weighted even more toward the teams with the worst records. The worst team has a one in four chance of winning the first pick, and the second worst team has a one in five chance (a full list of the current probabilities can be found here). After the first pick is awarded, the teams with the second and third picks in the draft are also determined using the lottery. The draft order of the remaining teams -- those that were not selected in the lottery -- is based on their records (the worst teams draft earlier, and the better teams later).

Because the lottery is weighted in favor of the worst teams, sports journalists and commentators are often surprised when the worst teams do not win the first three picks in the draft. Here is how the Associated Press reported it:
That's 15 draft lotteries, and only two times has the number one pick been awarded to the worst team. Usually, the worst team has a 25% chance of winning the lottery. However, when the Toronto Raptors and Vancouver Grizzlies (now in Memphis) joined the NBA in 1996, they were denied the right to win the first pick until 1999. In 1996 and 1997, the Grizzlies had the worst record in the NBA, so we should really only consider the 13 other seasons. But in 1998, because the Grizzlies and Raptors could not win the lottery, the worst team, the Denver Nuggets, had over a one in three chance of winning (they did not win, and ended up with the third pick in the draft). Read the rest of this post... | Read the comments on this post... |
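As a quick sanity check of the surprise in the post above, here is a back-of-the-envelope calculation (my own, not from the post): if we simplify by giving the worst team a constant 25% chance in each of 13 independent lotteries, winning only twice is not surprising at all.

    # Probability the worst team wins the lottery at most twice in 13 tries,
    # assuming a constant 25% chance each year (a simplification; the real
    # odds varied by season and lottery format).
    from math import comb

    p, n = 0.25, 13
    prob_at_most_two = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(3))
    print(f"P(at most 2 wins in {n} lotteries) = {prob_at_most_two:.2f}")  # ~0.33

Roughly a one-in-three chance, so the journalists' surprise says more about intuition than about the lottery.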
Recombination and substitution rates [Mailund on the Internet] Posted: 22 May 2008 11:51 AM CDT In a paper from PLoS Genetics earlier this month, Laurent Duret and Peter F. Arndt did a genome-wide analysis of the correlation between recombination rate and substitution rate (and bias).
The main point of this paper is the evolution of the GC content of the human genome, which varies significantly between regions of the genome — the so-called isochore structure.

The evolution of isochores

The content of GC nucleotides varies along the genome, with some regions having very high fractions of GC and some having very low, and this variation is not what we would expect the sequence to look like if the entire genome were evolving under the same neutral process. Why the genome has this structure has been debated (at times heatedly) for the last two decades. Different explanations have been suggested, including:

- biased mutation rates
- selection on GC content
- biased gene conversion
where the latter is a theory developed, among others, by the authors of this new paper. Biased mutation rates are of course a possibility, but do not explain the correlation with the recombination rate, unless the latter is mutagenic or causes this bias. Selection is the explanation of Bernardi, the discoverer of the isochore structure. Biased gene conversion is a neutral process that looks a lot like selection. The idea is as follows: there is no particular need for a bias in the mutation process — the AT to GC and GC to AT substitutions are not necessarily occurring at different rates in GC rich and GC poor regions — but once a polymorphism exists, gene conversion between a GC allele and an AT allele will replace the AT allele with the GC allele more often than the other way around. A consequence of this is that, although the mutation rate might not vary along the genome, the substitution rate will, and this substitution rate will be correlated with the recombination rate. Eyre-Walker and Hurst (2001) give more details on the three theories above.

The case for biased gene conversion

In the PLoS Genetics paper they argue for the biased gene conversion explanation (not surprisingly), and reasonably convincingly, in my opinion, but I am not an expert… First, they construct a model of sequence evolution that does not assume time-reversibility or that the current sequences are at stationarity (which is usually assumed, but might not be true). From this model, they estimate the rates of the various types of substitutions, and they estimate the equilibrium GC content (called GC* in the paper). In the model, the equilibrium GC content can differ from the current GC content, since stationarity is not assumed, and in general GC* < GC, meaning that the GC content in our genome — especially in GC rich areas — is decreasing. Very slowly, though. This could suggest that whatever mechanism created the GC rich areas of our genome is either no longer in effect, or at least is weaker than it was when the GC rich areas were created. They then consider the correlation between recombination rates and GC / GC* and notice a significant correlation, with a stronger correlation between recombination rate and GC* than between recombination and GC. This is taken as evidence that it is recombination that drives the direction of mutations toward GC content, rather than base pair composition that determines the recombination rate; if the recombination rate were determined by the base pair composition, then the present day GC content should be more correlated with the rate than some far future stationary GC content. The biased gene conversion model suggests a preference for AT to GC substitutions in regions with high recombination rates, where the strength of this preference depends on the effective population size. The positive correlation between GC* and the recombination rate supports this, and the present day effective population size (or the present day recombination rate) can explain why the GC structure in the genome is eroding towards a higher AT content in the present day GC rich regions. The GC rich regions of today could have appeared in an ancestor with either a larger effective population size or regionally larger recombination rates, and the reduction in the effective population size in present day humans means the biased gene conversion mechanism is just not strong enough to keep the GC content at a high level.
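To make the GC* idea concrete, here is a minimal sketch (my own, with made-up rates, not the paper's estimates) of the flux-balance calculation behind an equilibrium GC content: at stationarity, the AT-to-GC flux must equal the GC-to-AT flux.

    # Equilibrium GC content from a pair of substitution rates. At stationarity,
    #   (1 - GC*) * r_at_to_gc = GC* * r_gc_to_at
    # so GC* = r_at_to_gc / (r_at_to_gc + r_gc_to_at).
    # The rates below are illustrative, not estimates from Duret & Arndt.

    def equilibrium_gc(r_at_to_gc: float, r_gc_to_at: float) -> float:
        return r_at_to_gc / (r_at_to_gc + r_gc_to_at)

    gc_star = equilibrium_gc(r_at_to_gc=0.8, r_gc_to_at=1.2)
    print(f"GC* = {gc_star:.2f}")  # 0.40; if the current GC were, say, 0.48,
                                   # the region would be slowly eroding toward AT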
The case against biased mutation and against selection

The biased mutation explanation is argued against based on the frequency patterns of polymorphisms. If the mutations are biased, but the resulting polymorphisms are selectively neutral, then the frequencies of GC and AT derived polymorphisms should be the same. However, GC alleles segregate at higher frequencies than AT alleles. The first argument against selection is less convincing, I feel, but essentially says: it is hard to imagine why selection should prefer the occasional GC in Mbp long regions with plenty of genes under selection, and even if it did, it probably wouldn’t be strong enough to drive the changes in GC content. Well… The second argument is that selection does not explain why GC content, and especially GC*, should be correlated with the recombination rate. One possible explanation is the Hill-Robertson effect, but then the correlation should be with the population recombination rate, whereas GC* is more strongly correlated with the male recombination rate than with the female recombination rate, something Hill-Robertson does not explain.

Conclusion

I read this paper because I was reading up on the correlation between effective population size and recombination rate for a project I’m working on. I knew about the debate about isochores — I’ve chatted with some of the biased gene conversion proponents who have visited BiRC — but I never really read up on it. It turns out that several of my colleagues at BiRC are interested in this, so we’ve discussed the paper over the last two days, and I’ve had a lot of fun reading my way through some of the references in the paper. I would recommend it as an introduction to the topic, though of course it is not a neutral discussion of the three theories.

Duret, L., Arndt, P.F. (2008). The Impact of Recombination on Nucleotide Substitutions in the Human Genome. PLoS Genetics, 4(5), e1000071. DOI: 10.1371/journal.pgen.1000071

Eyre-Walker, A., Hurst, L.D. (2001). The evolution of isochores. Nature Reviews Genetics, 2(7), 549-555. DOI: 10.1038/35080577 |
Software decay and software repositories [Mailund on the Internet] Posted: 22 May 2008 10:29 AM CDT
Will this solve the problem of URL decay mentioned in the latest issue of Bioinformatics?
It certainly seems like we are losing our data and programs, so some larger repositories might be the way to go… |
Statistics for Dummies [Bayblab] Posted: 22 May 2008 09:43 AM CDT For those of you who can't remember the difference between standard deviation and standard error of the mean, or who never bothered to learn, this paper is a good primer on when to use which, what independent replicates really means, and how to interpret the error bars on the graph presented at yesterday's seminar. It's the kind of thing we should all know but many of us don't bother with. [h/t: juniorprof] |
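As a one-glance reminder of the distinction in the post above (my own illustration, not from the paper): the standard deviation describes the spread of individual observations, while the standard error of the mean describes how precisely the mean is estimated, and it shrinks with the number of independent replicates.

    # SD vs SEM on a handful of made-up replicate measurements.
    import statistics as stats
    from math import sqrt

    replicates = [4.1, 3.8, 4.5, 4.0, 4.2, 3.9]  # hypothetical data

    sd = stats.stdev(replicates)      # spread of the individual observations
    sem = sd / sqrt(len(replicates))  # uncertainty of the estimated mean

    print(f"mean={stats.mean(replicates):.2f} SD={sd:.2f} SEM={sem:.2f}")
    # SD error bars show variability; SEM error bars show confidence in the mean.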
Google Code as a science repository [business|bytes|genes|molecules] Posted: 22 May 2008 09:34 AM CDT iPhylo is now available on Google Code. As Rod Page explains on his blog, inspired by Pedro, the goal is to use Google Code as a project management system, i.e. a place to store not just code but docs and data as well (why not? you have good version control). In essence this expands on the issue that I have been raising lately: that academics should use code repositories like Google Code, Sourceforge or Github. That not only moves some of the issues with code maintenance infrastructure and utilities out onto the cloud, it also brings in a bigger potential user base, the ability to access code more easily, etc. Will this be a broader trend? There are already a number of packages available on Sourceforge, but still too few, and many are just thrown out there. Don’t know about all of you, but I find moves like these significant, especially if this becomes a trend. Technorati Tags: iPhylo, Roderic Page, Google Code |
Nuts! [Mailund on the Internet] Posted: 22 May 2008 03:23 AM CDT In this letter to Nature, Raghavendra Gadagkar argues that the open access model — which typically means “pay to publish, but read for free” — is doing more harm to research in the developing world than the traditional “publish for free, but pay to read” model. The reasoning is that having to pay to publish means that publications are not just a result of the quality of one’s research, but just as much a result of one’s funding, and in developing countries there is less funding. This is, of course, a valid point, but to conclude from this that the open access model — even if it means you have to pay to publish — is doing more harm than good is, well, just nuts! First of all, many top journals charge you both for publishing and for reading the articles. With open access, at least, you can read for free. Secondly, even if the publishing charge is much higher than the reading charge, you only pay when you have a result worth publishing. I don’t know about you, but I personally read a lot more papers than I publish, and most papers I read are never cited in my own work, because they turn out not to be relevant for my own work. Gadagkar ends his letter with:
Of course we all prefer to publish our own papers, but you cannot, and should not, publish worthwhile research if you are not familiar with the work of other researchers and have not read the literature. You cannot choose publishing over reading! I’m not saying there isn’t a problem with publication charges, but I strongly disagree with the claim that it is worse than the charge for access to papers (and I remind you, once more, that in many cases you get both of the two evils…) |
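To make the reading-versus-publishing trade-off above concrete, here are toy numbers (entirely hypothetical figures, my own, not Gadagkar's or Mailund's):

    # Toy comparison of the two charging models; all numbers are made up.
    papers_read = 150           # per year; most of us read far more than we publish
    papers_published = 2        # per year
    fee_per_read = 30           # hypothetical pay-per-view charge
    fee_per_publication = 1500  # hypothetical author-side open access charge

    print("reader-pays model:", papers_read * fee_per_read)               # 4500
    print("author-pays model:", papers_published * fee_per_publication)   # 3000

Under assumptions like these, author-side charges cost the active reader less, and they scale with output one can seek funding for, rather than with the reading one cannot skip.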
“Lexomics” - Breaking the language barrier [HENRY » genetics] Posted: 22 May 2008 03:12 AM CDT Emma Marris in today’s Nature reviews my field of research, and chats to a number of my friends and colleagues:
The full story is here, and I’ve written about some of the (our) research here before. |
Watch Detects Alien DNA [Eye on DNA] Posted: 22 May 2008 03:06 AM CDT Are you aware of the havoc alien DNA can wreak? Probably not, unless you’re wearing the Tokyoflash Biohazard watch.
Available for just $179.29 plus free shipping. HT: Boing Boing |
Posted: 22 May 2008 01:40 AM CDT Expanding on prior research performed at the University of Pennsylvania, Penn biologists have determined that faulty RNA, the blueprint that creates mutated, toxic proteins, contributes to a family of neurodegenerative disorders in humans.

Nancy Bonini, professor in the Department of Biology at Penn and an investigator of the Howard Hughes Medical Institute, and her team previously showed that the gene that codes for the ataxin-3 protein, responsible for the inherited neurodegenerative disorder Spinocerebellar ataxia type 3, or SCA3, can cause the disease in the model organism Drosophila. SCA3 is one of a class of human diseases known as polyglutamine repeat diseases, which includes Huntington’s disease. Previous studies had suggested that the disease is caused largely by the toxic polyglutamine protein encoded by the gene. The current study, which appears in the journal Nature, demonstrates that faulty RNA, the blueprint for the toxic polyglutamine protein, also assists in the onset and progression of disease in fruit fly models.

"The challenge for many researchers is coupling the power of a simple genetic model, in this case the fruit fly, to the enormous problem of human neurodegenerative disease," Bonini said. "By recreating in the fly various human diseases, we have found that, while the mutated protein is a toxic entity, toxicity is also going on at the RNA level to contribute to the disease."

To identify potential contributors to ataxin-3 pathogenesis, Bonini and her team performed a genetic screen with the fruit fly model of ataxin-3 to find genes that could change the toxicity. The study produced one new gene that dramatically enhanced neurodegeneration. Molecular analysis showed that the gene affected was muscleblind, a gene previously implicated as a modifier of toxicity in a different class of human disease due to a toxic RNA. These results suggested the possibility that RNA toxicity may also occur in the polyglutamine disease situation.

The findings indicated that an RNA containing a long CAG repeat, which encodes the polyglutamine stretch in the toxic polyglutamine protein, may contribute to neurodegeneration beyond being the blueprint for that protein. This raised the possibility that expression of the RNA alone may be damaging. Long CAG repeat sequences can bind together to form hairpins, dangerous molecular shapes. The researchers therefore tested the role of the RNA by altering the CAG repeat sequence to be an interrupted CAACAG repeat that could no longer form a hairpin. Such an RNA strand, however, would still be a blueprint for an identical protein. The researchers found that this altered gene caused dramatically reduced neurodegeneration, indicating that altering the RNA structure mitigated toxicity.

To further implicate the RNA in the disease progression, the researchers then expressed just a toxic RNA alone, one that was unable to code for a protein at all. This also caused neuronal degeneration. These findings revealed a toxic role for the RNA in polyglutamine disease, highlighting common components between different types of human triplet repeat expansion diseases. Such diseases include not only the polyglutamine diseases but also diseases like myotonic dystrophy and fragile X.

The family of diseases called polyglutamine repeat disorders arises when the genetic code of a CAG repeat for the amino acid glutamine stutters like a broken record within the gene, becoming very long.
This leads to an RNA — the blueprint for the protein — with a similarly long run of CAG. During protein synthesis, the long run of CAG repeats is translated into a long uninterrupted run of glutamine residues, forming what is known as a polyglutamine tract. The expanded polyglutamine tract causes the errant protein to fold improperly, leading to a glut of misfolded protein collecting in cells of the nervous system, much like what occurs in Alzheimer’s and Parkinson’s diseases. Polyglutamine disorders are genetically inherited ataxias, neurodegenerative disorders marked by a gradual decay of muscle coordination, typically appearing in adulthood. They are progressive diseases, with a correlation between the number of CAG repeats within the gene, the severity of disease and the age at onset.

Source: University of Pennsylvania

Josh says: I don’t think anyone has looked at this before or considered that RNA could have such an effect. I wonder how much this type of thing affects other diseases, both neurological and non-neurological.

Andrew says: So because RNA is a molecule, not just an “abstract informational template” as we often think, the physical manifestation of RNA’s information can have a significant impact on the biological system — not just the information itself! In this case, the issue is the “hairpin” shape of a CAG string in RNA. So again, the metaphor “genes as code” fails. Fascinating. |
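Andrew's point above is easy to verify in a few lines (my own sketch, not from the study): CAA and CAG both encode glutamine, so an interrupted CAACAG repeat is translated into exactly the same polyglutamine tract as a pure CAG run, even though it can no longer self-pair into a long hairpin.

    # Both codons below encode glutamine (Q), so the two RNAs yield the same
    # protein; only the pure CAG run can fold back on itself into a hairpin.
    CODON_TO_AA = {"CAG": "Q", "CAA": "Q"}

    def translate(rna: str) -> str:
        return "".join(CODON_TO_AA[rna[i:i + 3]] for i in range(0, len(rna), 3))

    pure = "CAG" * 10           # hairpin-forming repeat
    interrupted = "CAACAG" * 5  # same length, hairpin-disrupting

    assert translate(pure) == translate(interrupted) == "Q" * 10
    print("identical polyglutamine tract:", translate(pure))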
USC stem cell study sheds new light on cell mechanism [Think Gene] Posted: 22 May 2008 01:30 AM CDT Researchers at the University of Southern California (USC) have discovered a new mechanism that allows embryonic stem cells to divide indefinitely and remain undifferentiated. The study, which will be published in the May 22 issue of the journal Nature, also reveals how embryonic stem cell multiplication is regulated, which may be important in understanding how to control tumor cell growth.

"Our study suggests that what we believe about how embryonic stem cell self-renewal is controlled is wrong," says Qi-Long Ying, Ph.D., assistant professor of Cell and Neurobiology at the Keck School of Medicine of USC, researcher at the Eli and Edythe Broad Center for Regenerative Medicine and Stem Cell Research at USC, and lead author of the paper. "Our findings will likely change the research direction of many stem cell laboratories."

Contrary to the current understanding of stem cell self-renewal and differentiation, the findings suggest that embryonic stem cells will remain undifferentiated if they are shielded from differentiation signals. By applying small molecules that block the chemicals from activating the differentiation process, the natural default of the cell is to self-renew, or multiply, as generic stem cells.

"This study presents a completely new paradigm for understanding how to grow embryonic stem cells in the laboratory," says Martin Pera, Ph.D., director of the Eli and Edythe Broad Center for Regenerative Medicine and Stem Cell Research at USC. "The discovery has major implications for large scale production of specialized cells, such as brain, heart muscle and insulin producing cells, for future therapeutic use."

Embryonic stem cells have only been derived from a very small number of species. "We believe the process we discovered in mice may facilitate the derivation of embryonic stem cells from species like pigs, cows or other large animals, which has not been done before," continues Ying. "If deriving embryonic stem cells from cows, for instance, is possible, then perhaps in the future cows might be able to produce milk containing medicines."

With a better understanding of the multiplication process of embryonic stem cells, researchers have additional insight into tumor cell growth, as these cells share similar qualities. "Our study reveals part of the little known process of how embryonic stem cell multiplication is regulated. This is important for us in understanding how to control tumor cell growth moving forward in cancer research," says Ying.

Source: University of Southern California

Josh says: I’m a bit skeptical of this. I haven’t really done any cell culture work, but if stem cells are grown in a serum-free medium that doesn’t have any "differentiation signals", they still differentiate. What they may be doing here is blocking transcription factors that are normally present and cause the cells to differentiate, thereby keeping them undifferentiated. |
Web as platform: Why I like web services #223 [business|bytes|genes|molecules] Posted: 21 May 2008 10:33 PM CDT Earlier today I pointed to nmrdb.org on Friendfeed. A few minutes ago I saw a post by Antony Williams that uses the web services provided by nmrdb.org to provide NMR functionality on Chemspider. One of the salient features of today’s web is a philosophy of services, not sites: the ability to remix and repurpose functionality. The idea is to repurpose content or functionality that people have developed on different parts of the web. I feel that in the sciences we do not do a very good job of that by and large, preferring local copies or our own versions of well-known algorithms. As a physicist once told me (and I paraphrase), “I am a physicist, I always implement my own version”. Recently, that approach seems to be changing, and this is an excellent example. Of course, we need more web-friendly APIs, but that’s also changing, so I remain hopeful of a distributed science web, with content and algorithms available via RESTful APIs for various mashups and distributed pipelines. Technorati Tags: Web Services, nmrdb.org, Chemspider |
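The remix pattern described above is simple enough to sketch: fetch a result from one service over HTTP and hand it to the next. Note that the endpoint URL and JSON fields below are hypothetical placeholders invented for illustration, not nmrdb.org's actual API.

    # Minimal remix sketch: call one web service, feed the result onward.
    # The URL and response fields are invented, not a real API.
    import json
    from urllib.parse import urlencode
    from urllib.request import urlopen

    def predict_spectrum(smiles: str) -> dict:
        query = urlencode({"smiles": smiles})
        with urlopen(f"https://example.org/nmr/predict?{query}") as resp:
            return json.load(resp)

    peaks = predict_spectrum("CCO")  # ethanol as a sample input
    print(peaks.get("shifts_ppm"))   # a downstream service could consume this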
You are subscribed to email updates from The DNA Network. To stop receiving these emails, you may unsubscribe now. | Email Delivery powered by FeedBurner |
Inbox too full? Subscribe to the feed version of The DNA Network in a feed reader. | |
If you prefer to unsubscribe via postal mail, write to: The DNA Network, c/o FeedBurner, 20 W Kinzie, 9th Floor, Chicago IL USA 60610 |