Friday, August 1, 2008

The DNA Network

Open Metagenomics Highlight: Comparative Analysis of Human Gut Microbiota by Barcoded Pyrosequencing [The Tree of Life]

Posted: 01 Aug 2008 06:15 PM CDT

OK - it is not quite metagenomics, but there is a new paper in PLoS One worth looking at if you study uncultured organisms. This paper (Comparative Analysis of Human Gut Microbiota by Barcoded Pyrosequencing) reports on a slightly new twist in carrying out deep rRNA surveys of uncultured microbes using one of the "next" generation sequencing methods.

Open Metagenomics Highlight - PloS Biology paper reporting more from Banfield lab on the Acid Mine Drainage [The Tree of Life]

Posted: 01 Aug 2008 03:04 PM CDT

Just a quick "Open Metagenomics" posting here. There is a very interesting paper in PLoS Biology that just came out reporting more detail from Jill Banfield's lab on their studies of an Acid Mine Drainage (AMD) site. This paper is mostly a population genomic study of the microbes living in the AMD. See the paper at PLoS Biology - Population Genomic Analysis of Strain Variation in Leptospirillum Group II Bacteria Involved in Acid Mine Drainage Formation.

Is There Too Much Hypothesis Testing? [evolgen]

Posted: 01 Aug 2008 01:00 PM CDT

About a month ago, we were told that theory is dead. That was the thesis of Chris Anderson's article in Wired. Rather than testing hypotheses using the scientific method, Anderson argues that scientists are merely generating loads of data and looking for correlations and stuff. The article was a bit muddled, but that's Anderson's main point . . . I think.

Well, now the Times Higher Education has published an article by Tim Birkhead in which he argues the opposite (In praise of fishing trips). Birkhead says that the scientific establishment is too attached to hypothesis testing. This means funding agencies are reluctant to approve projects that are designed to collect data without testing a priori hypotheses. Here's the thrust of his argument:

The scientific research councils seem to be obsessed by hypothesis testing. Many times I have heard it said by referees rejecting a proposal: "But there was no hypothesis." The research council modus operandi now seems to be the h-d method, but taken to an almost ludicrous extreme - researchers basically have to know what they are going to find before putting in a research application (in truth, they need to have done the research, even though they will not actually say so, to write a convincing account of what they intend to do). That's hardly the best way of doing science. The other problem, and it is a much more serious one, is that by knowing what you will find out, science becomes trivially confirmatory and inherently unlikely to discover anything truly novel.

So, which one is it? Have we abandoned hypothesis testing? Or are we so attached to hypothesis testing that we overlook novel projects that would explore underdeveloped areas of research?

The price of a human life [Mary Meets Dolly]

Posted: 01 Aug 2008 12:04 PM CDT

I have said many times that we abandon the human embryo to the whims of researchers at our own peril. By protecting the human embryo, we protect all of us that are of the same species.

The uproar in Oregon about the state offering coverage for assisted suicide but not life-extending chemotherapy continues. Barbara Wagner was not the only one that was told that Oregon would pay for her to die, but not for her to live. Randy Stroup also received that same message.

So what does the Oregon assisted suicide debacle have to do with the protection of the human embryo? Well, I cannot lace it together better than Cal Thomas in his piece "The Price is (not) Right" in the Jamestown Sun:

It is difficult to pinpoint the precise beginning of the cultural tsunami that has devalued human life. Did it begin with the subjugation of women? Did it begin with slavery? The Nazis made their contribution with the Holocaust and Josef Mengele's hideous human experiments. Surely unrestricted abortion added to the growing list of inhumanities.

Now we have the next wave. Randy Stroup is a 53-year-old Oregon man who has prostate cancer, but no insurance to cover his medical treatment. The state pays for treatment in some cases, but it has denied help to Stroup. State officials have determined that chemotherapy would be too expensive and so they have offered him an alternative: death...

How much is a human life worth? Body parts and bone marrow can fetch some pretty high prices, but a human life is more than the sum of its body parts. The reason this is important is that the federal government is now placing a price tag on individual lives and if government ever gets to run health care from Washington, bureaucrats will start making decisions similar to the one made for Randy Stroup.

"Body parts and bone marrow can fetch some pretty high prices" indeed. Once we start seeing one member of the human species as a valuable commodity that can be priced and torn apart for parts, we all look that way. Especially to the government:

According to The Washington Post, several federal agencies have come up with figures for the dollar value of a human life to analyze the costs and benefits of new programs they believe will save lives.

Saving lives is the announced intention, but if government gains the power to determine when a life is no longer "worth" saving and orders the plug to be pulled or the death pill to be administered, then what? This is the future of the socialized medicine that Hillary Clinton, Barack Obama and the Democratic Party wish to impose on us.

Let us never forget that it was Oregon's state-sponsored health care that told Barbara Wagner she wasn't worth the price of the medication she needed, and it was the private Big Pharma company Genentech that came to her aid by giving Barbara her chemo drugs at no charge.

Thomas leaves us with the same message I have been screaming at the top of my lungs for nearly 3 years:

When pro-lifers warned about the "slippery slope" more than three decades ago, they were dismissed as alarmists. Not anymore. Their prophecy is now being fulfilled.

Please! Let us put a stop to this out-of-control slide and see the human embryo for what it is: a member of the human species that has no price tag and cannot be destroyed for parts.

PredictER Year-End Review [PredictER Blog]

Posted: 01 Aug 2008 11:22 AM CDT

Join us Monday afternoon (August 4th, 3-4pm) at the Center for Bioethics for a year-end review of PredictER's progress. PredictER's director, Eric M. Meslin, will share his assessment of the program's accomplishments and goals for the coming year. This is an excellent opportunity to learn about the services we provide to our constituents with an interest in predictive health research, medicine, and the associated ethical and legal issues. Refreshments will be served.

IU Center for Bioethics, 410 W 10th Street, Suite 3100. Indianapolis, IN 46202 | 317-278-4034

Cancer Carnival #12 [Bayblab]

Posted: 01 Aug 2008 10:20 AM CDT

The 12th Edition of the Cancer Research Blog Carnival has arrived! It's hard to believe it's been almost a year since the first edition. The carnival keeps getting bigger and better, and the latest edition is no exception. Click the link and check it out. Thanks to Ben for logo design, and for the awesome hosting job. Read it. Now.

As always, if you want to host a future edition, send us an email: bayblab[at]

Finding Experts [Sciencebase Science Blog]

Posted: 01 Aug 2008 07:00 AM CDT

One of the main tasks in my day-to-day work as a science writer is tracking down experts. The web makes this much easier than it ever was for journalists in decades past. There are times when a contact in a highly specialist area does not surface quickly, but there are also times when I know for a fact that I’ve already been in touch with an expert in a particular area but for whatever reason cannot bring their name to mind. Google Desktop Search, with its ability to trawl my Thunderbird email archives for any given keyword, is a boon in finally “remembering” the contact.

However, finding just a handful of contacts from web searches, email archives and the good-old-fashioned address book pales into insignificance when compared to the kind of industrial data mining companies and organisations require of their “knowledge workers”.

According to Sharman Lichtenstein of the School of Information Systems at Deakin University, in Burwood, Australia, and Sara Tedmori and Thomas Jackson of Loughborough University, Leicestershire, UK: “In today’s highly competitive globalised business environment, knowledge workers frequently lack sufficient expertise to perform their work effectively.” The same concern might be applied to those working in any organisation handling vast amounts of data. “Corporate trends such as regular restructures, retirement of the baby boomer generation and high employee mobility have contributed to the displacement and obfuscation of internal expertise,” the researchers explain.

The team explains how knowledge is increasingly distributed across firms and that when staff need to seek out additional expertise they often seek an internal expert to acquire the missing expertise. Indeed, previous studies have shown that employees prefer to ask other people for advice rather than searching documents or databases. Finding an expert quickly can boost company performance and as such locating experts has become a part of the formal Knowledge Management strategy of many organisations.

Such strategies do not necessarily help knowledge workers themselves, who may lack the search expertise and time required to find the right person for the job. So, Jackson developed an initial expertise locator system, later further developed with Tedmori, to address this issue in an automated way. The researchers discuss an automated key-phrase search system that can identify experts from the archives of the organisation’s email system.

Immediately on hearing such an intention, the civil liberties radar pings! There are sociological and ethical issues associated with such easy access and searchability of an email system, surely? More than that, an expert system for finding experts could become wide open to misuse - finding the wrong expert - and abuse - employees and employers unearthing the peculiar personal interests of colleagues for instance.

The first generation of systems designed to find experts used helpdesks as the formal sources of knowledge, and consisted simply of knowledge directories and expert databases. Microsoft’s SPUD project, Hewlett-Packard’s CONNEX KM system, and the SAGE expert finder are key examples of this genre, the researchers point out. Such systems are akin to Yellow Pages and are essentially electronic directories of experts that must be maintained on a continual basis. They allow anyone with access to tap into expertise, but unless the experts keep their profiles up to date, they can quickly lose relevancy and accuracy.

Overall, when large numbers of employees are registered and profiles are inaccurate, credibility is rapidly lost in such systems, which are then increasingly ignored by knowledge seekers.

Second generation expertise locators were based on organisations offering their staff a personal web space within which they could advertise their expertise internally or externally. This was convenient for those searching, but again relied on the experts in question to keep their web pages up to date. Moreover, simple keyword matching when searching for an expert would not necessarily find the best expert, because the results would depend on how well the expert had set up their web pages and whether and how well they had included keywords in those pages. In addition, keyword searching can produce many hits that must then be scanned manually, which takes time.
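To make the keyword-matching weakness concrete, here is a minimal Python sketch of such a directory-style expert finder (all names, keywords and dates are hypothetical): search is plain keyword matching against self-declared profiles, so a stale profile simply hides an expert.

```python
# A toy "Yellow Pages" style expert directory. All entries are invented
# for illustration; real systems (SPUD, CONNEX, SAGE) are far richer.
directory = {
    "alice": {"keywords": {"mass spectrometry", "proteomics"},
              "last_updated": "2008-07-01"},
    "bob":   {"keywords": {"perl"},  # stale profile: Bob moved on long ago
              "last_updated": "2005-01-15"},
}

def find_expert(keyword):
    """Return names whose self-declared profile mentions the keyword."""
    return sorted(name for name, profile in directory.items()
                  if keyword in profile["keywords"])

print(find_expert("proteomics"))  # ['alice']
print(find_expert("python"))      # [] -- Bob's stale profile hides him
```

The failure mode is exactly the one the researchers describe: the system only knows what each expert last bothered to write about themselves.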

The third generation of expert searching relies on secondary sources, such as tracking the browsing patterns and activities of employees to identify individual experts. Such an approach raises massive privacy concerns, even for companies with a strict web access policy. Activity on forums, bulletin boards, and social networks falls into this third generation approach.

The fourth generation approach mashes up the first three and perhaps adds natural language searching, again with various efficiency and privacy concerns. Again, it does not necessarily find the best expert, but often just the person whose data, profile, and web pages are optimised (deliberately or by chance) to reach the top slot in the search results.

An approach based on key-phrase identification in e-mail messages could, however, address all requirements but throws up a new wave of privacy concerns, which Lichtenstein and colleagues discuss.

There are several features of email that make it popular and valuable for organisational knowledge work, and relevant to finding an expert:

  • It attracts worker attention
  • It is integrated with everyday work
  • It provides a context for sense-making about ideas, projects and other types of business knowledge
  • It enables the referencing of work objects (such as digital documents), and provides a history via quoted messages
  • It has high levels of personalised messages which are appealing, meaningful and easily understood
  • It encourages commitment and accountability by automatically documenting exchanges
  • It can be archived, so providing valuable individual, collective and organisational memories that may be mined
  • It facilitates the resolution of multiple conflicting perspectives which can stimulate an idea for a new or improved process, product or service.

All these factors mean that email could become a very useful tool for finding experts. Already many people use their personal email archives to seek out knowledge and experts, but widen that to the organisational level and the possibilities become enormous.

The researchers have developed an Email Knowledge Extraction (EKE) system that uses the Natural Language ToolKit (NLTK) to build a key-phrase extraction “engine”. The system is applied in two stages, the first of which “teaches” the system how to tag the parts of speech in an email, so that headers and other extraneous information become non-searched “stop words” within the email repository. The second stage extracts key-phrases from the searchable sections of an email once it is sent. This extraction process is transparent to the sender and takes just milliseconds per email. A final stage involves the sender being asked to rank each identified key-phrase to indicate their level of expertise in that area. A database of experts and their areas of expertise is gradually developed by this approach; later, employees searching for experts can simply consult this database.
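As a rough illustration of that pipeline, here is a Python sketch. It is not the actual EKE implementation: naive token filtering stands in for NLTK's part-of-speech tagging, and all names, phrases and ranks are hypothetical.

```python
import re
from collections import defaultdict

# Stand-in for the tagging stage: drop header lines and common stop
# words, then treat remaining longer tokens as candidate key-phrases.
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "for", "on", "is"}
HEADER_PREFIXES = ("From:", "To:", "Subject:", "Date:")

def extract_key_phrases(email_text):
    """Ignore header lines, return candidate key-phrases from the body."""
    body_lines = [ln for ln in email_text.splitlines()
                  if not ln.startswith(HEADER_PREFIXES)]
    words = re.findall(r"[A-Za-z-]+", " ".join(body_lines))
    return {w.lower() for w in words
            if w.lower() not in STOP_WORDS and len(w) > 3}

class ExpertDatabase:
    """Maps key-phrases to (sender, self-ranked expertise) pairs."""
    def __init__(self):
        self.index = defaultdict(list)

    def record(self, sender, phrase, rank):
        # Final stage: the sender ranks their own expertise per phrase.
        self.index[phrase].append((sender, rank))

    def find_experts(self, phrase):
        return sorted(self.index.get(phrase, []),
                      key=lambda pair: pair[1], reverse=True)

db = ExpertDatabase()
mail = "From: alice@example.org\nSubject: proteomics\nOur proteomics pipeline..."
for phrase in extract_key_phrases(mail):
    db.record("alice@example.org", phrase, rank=4)  # e.g. 4 out of 5

print(db.find_experts("proteomics"))  # [('alice@example.org', 4)]
```

Knowledge seekers then query the accumulated database rather than the raw mail archive, which is what keeps the per-message extraction cost low.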

The EKE system has been implemented in trials at Loughborough University and at AstraZeneca, and was found to capture employees’ knowledge of their own expertise and to allow knowledge workers to correctly identify suitable experts given specific requirements. The researchers, however, highlight the social and ethical issues that arise with the use of such a system:

  • Employee justice and rights, and how these might conflict with employer rights
  • Privacy and monitoring, as there is more than a small element of “Big Brother” inherent in such a system
  • Motivational issues for sharing knowledge, as not all those with expertise may wish to be data mined in this way, having enough work of their own to fill their 9-to-5, for instance
  • Relationships, as not everyone will be able to work well together regardless of expertise
  • Ethical implications of expert or non-expert classification, as the system could ultimately flag as experts those employees with little or no expertise
  • Deliberate misclassification of experts, as all systems are open to abuse and malpractice
  • Expert database disclosure, as such a comprehensive database, if accessed illicitly by an organisation’s rivals, could wreak havoc in terms of stealing competitive advantage, headhunting or other related activities

Lichtenstein, S., Tedmori, S., Jackson, T. (2008). Socio-ethical issues for expertise location from electronic mail. International Journal of Knowledge and Learning, 4(1), 58. DOI: 10.1504/IJKL.2008.019737


Closed Access Award #2: Andrey Rzhetsky, Michael Seringhaus and Mark Gerstein [The Tree of Life]

Posted: 01 Aug 2008 05:03 AM CDT

Just got pointed to a new paper by someone near and dear to me. In this paper (Seeking a New Biology through Text Mining), Andrey Rzhetsky, Michael Seringhaus and Mark Gerstein seem to argue for the importance of text mining for the future of biology research. Text mining is indeed an important new tool in biology. Of course, it works best if you have access to the text. Alas, I would tell you more about their paper, but I have been out sick and stuck at home, and I do not have access to their paper, which was published in Cell. And thus, even without seeing their paper, I am giving them my second "Closed Access Award" for apparently outlining a path for a new biology that will be only available to some, not all.

Fact or Fiction: Eclipse Blindness [Bayblab]

Posted: 01 Aug 2008 05:00 AM CDT

Today, August 1, 2008, a total solar eclipse will be visible in parts of northern Canada, Greenland, Russia, China and Mongolia, with a partial eclipse visible in other parts of the country.

I remember in elementary school, a big deal was made of not looking directly at a solar eclipse. A very big deal. Even a quick, unprotected glimpse could cause blindness. At the time, it didn't make sense to me - why would looking at an eclipse be any worse than looking at an uneclipsed sun? Was the seemingly heightened paranoia because it's an event that people are likely to want to look at? Or is there something else about an eclipse that warrants the extra attention? Even NASA acknowledges that eclipse warnings may be a bit hyperbolic:
In the days and weeks preceding a solar eclipse, there are often news stories and announcements in the media, warning about the dangers of looking at the eclipse. Unfortunately, despite the good intentions behind these messages, they frequently contain misinformation, and may be designed to scare people from seeing the eclipse at all.
The fact of the matter is: yes, you can do serious damage to your eyesight by looking directly at a partial solar eclipse. During the period of total eclipse - when the moon completely obscures the sun - it's safe to observe with the naked eye (see also the NASA link above). However, this isn't without risk, since the shadow will continue to pass, and without proper timing you could find yourself looking directly at a partial eclipse. Even when 99% of the sun is obscured, the remaining 1% is intense enough to cause serious retinal damage.

Is looking at a partially eclipsed sun more hazardous than an unobscured sun? In a way, yes. Even a mostly blocked sun can be damaging, and the surrounding illumination in these conditions can actually be quite dim. In this case the eye's natural defense - constriction of the pupil - doesn't kick in right away. The pupil is exposed to the sun in a dilated state, allowing more of the damaging light to enter the eye. (Of course, an unobscured sun will be sending that much more light to your eye, so this isn't a proper comparison.)

The bottom line is that there's nothing magical about a solar eclipse, and nothing special about the light it gives off, but after every eclipse there's a spike in cases of retinal injury and eyesight damage. Take proper precautions when viewing an eclipse and don't stare directly at the sun - eclipsed or not.

[Image source: Wikipedia]

Open Anthrax: Open access publications for those writing about B. anthracis [The Tree of Life]

Posted: 01 Aug 2008 03:25 AM CDT

Well, I am sure there are going to be a million news stories and blogs over the next few days about the anthrax letters. That is because of the recent death of someone who appears to have been the latest suspect in the anthrax mailings (see for example CNN Report: Anthrax suspect kills self as FBI closes in). (Also see the LA Times, which broke the story). I must say, after the disastrous handling of some of the previous suspects, I think we should reserve judgement on this case until some of the evidence is made public.

In the interest of helping out some of those interested in the science of studying the organism that causes anthrax, I am posting here some links to fully open access articles on B. anthracis.

PLoS One papers
PLoS Biology papers
BioMed Central journals

Inherited form of hearing loss stems from gene mutation [Think Gene]

Posted: 01 Aug 2008 02:21 AM CDT

Josh: More people need to recognize the role that genetics plays in their health and what happens to them. Most people attribute hearing loss to environmental factors, and there are definitely many cases where this is true, but if a family all begin losing their hearing, especially early in their lives, the cause should be recognized as genetic. I’m glad that, at least in this case, someone investigated it.

Pat Phalin learned she had hearing loss at 30, when she volunteered to give hearing tests at her local school. The pupils heard sounds she could not hear.

Her husband Larry, a genealogy enthusiast, saw a pattern in his wife’s family history. Her mother, grandfather and great-grandfather had severe hearing loss as adults. One of the Phalins’ children had hearing problems before he reached school age. (more…)

Electronic notebooks are cool, and so is RDF [business|bytes|genes|molecules]

Posted: 01 Aug 2008 12:06 AM CDT

Had a conversation earlier today, all about RDF and linked data. I am a big believer, which is why posts like this one by Cameron Neylon on A new way of looking at science? bring a smile.

Andrew Milsted, a PhD student, enabled an RDF dump of the content in the lab notebook used by Cameron’s group (and others, I suspect). The result: a graph that shows each post in the notebook as a node and links between posts as edges. It is a universe of the work going on in the lab, and of how that work interacts. It would be interesting to see the dynamics of this graph evolve, along with various other ways of visualizing the underlying data and relationships. It would also be cool to put this up on the web as linked data and link it to data outside Cameron’s lab. That might even lead to some very interesting observations and relationships.

This is a simple example, but highlights why it is so important to be able to put data into machine readable formats. RDF is a naturally good model, since it highlights relationships within the underlying data.
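To illustrate the idea, here is a minimal Python sketch (with hypothetical post IDs and predicate names, and plain tuples standing in for a real RDF library) of turning link triples from such a dump into a posts-as-nodes, links-as-edges graph:

```python
from collections import defaultdict

# Hypothetical subject-predicate-object triples from a notebook dump.
# "lab:linksTo" triples are the edges; every subject is a post node.
triples = [
    ("post/1", "dc:title", "Prepare buffer"),
    ("post/2", "dc:title", "Run gel"),
    ("post/3", "dc:title", "Analyse gel image"),
    ("post/2", "lab:linksTo", "post/1"),
    ("post/3", "lab:linksTo", "post/2"),
]

def build_graph(triples, edge_predicate="lab:linksTo"):
    """Collect the set of nodes and an adjacency map from link triples."""
    edges = defaultdict(set)
    nodes = set()
    for s, p, o in triples:
        nodes.add(s)
        if p == edge_predicate:
            edges[s].add(o)
            nodes.add(o)
    return nodes, edges

nodes, edges = build_graph(triples)
print(len(nodes))               # 3
print(sorted(edges["post/3"]))  # ['post/2']
```

Because the relationships are explicit triples rather than free text, the same data can be re-queried with different predicates, or merged with triples published by other labs, without re-parsing the notebook.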


Industry watching: The serendipitous (and lean) future of pharma [business|bytes|genes|molecules]

Posted: 31 Jul 2008 11:36 PM CDT

[Image: a statue of Asclepius, The Glyptotek, Copenhagen; via Wikipedia]

My graduate advisor’s favorite word, or at least one of the more popular ones, was serendipity. He was a firm believer in the role of serendipity in science, and personally I believe that serendipity plays a big role in discovery of any kind. So when Richard Jones pointed to a story in the Financial Times entitled Drug Research Needs Serendipity, it naturally piqued my interest.

The article has an ominous start:

The molecular revolution was supposed to enable drug discovery to evolve from chance observation into rational design, yet dwindling pipelines threaten the survival of the pharmaceutical industry. What went wrong?

Lest you think that this is some journalist writing a story, look at who wrote the piece. David Shaywitz (a well-known writer covering health and medicine, and a physician) and Nassim Nicholas Taleb are not lightweights. The high-level answer comes right away:

The answer, we suggest, is the mismeasure of uncertainty, as academic researchers underestimated the fragility of their scientific knowledge while pharmaceuticals executives overestimated their ability to domesticate scientific research.

Much as I criticize Singularitarians and the like for underestimating how limited our knowledge of human biology is, the same is true of the scientific community, although I will add that a big chunk of scientists, perhaps most of them, realize how little we really know. Unfortunately, in our eagerness to get grants, venture capital and improved stock prices, many tend to look past a reality that I suspect we are all well aware of.

I forget who it was, either Pedro or Jonathan Eisen, who noted once that we have done a great job developing analytic technologies that generate data, but need to do a better job making sense of all that data. I’ll add that we not only need to do a better job making sense of the data, but also of converting all that data into actionable information.

And this is where I take umbrage at some of the language in the article and call out the authors for the fragility of their own understanding. They write (emphasis mine):

Medical research is particularly hampered by the scarcity of good animal models for most human disease, as well as by the tendency of academic science to focus on the "bits and pieces" of life – DNA, proteins, cultured cells – rather than on the integrative analysis of entire organisms, which can be more difficult to study.

The authors note that the “integrative analysis of entire organisms” is difficult. What they fail to note is that academic scientists are not simply focused on the bits and pieces, and haven’t been for a while. A good chunk of the community is trying to understand biology at a much more holistic level, but it is HARD. What is the level at which we need to study entire organisms when there are so many gaps in our knowledge? I would argue that people do “integrative analysis” just for the sake of it from time to time, without really understanding why they want to or need to do it. Contrary to what Chris Anderson might think, a lot of data does not lead to better science.

I also take umbrage at the slighting of scientists in the pharma industry. I have met many. Are pharma companies perfect? Hardly. Are they too slow to change? Absolutely. But there are some brilliant scientists working there, and many companies are trying their level best to figure out the best way to utilize this knowledge. I would remind the authors of what they say themselves: it is hard. The risks and costs associated with drug development make the management of these companies hesitant to throw the kitchen sink at developing drugs exclusively using pharmacogenomics or other techniques enabled by the glut of data and the scientific advances of the past decade. I will agree on one thing, though: pharma has to move away from rigid planning faster and towards the proof-of-concept/first-in-man approaches that many are now beginning to take.

I really liked the second part of the article, where, among other things, the authors advise the industry to embrace serendipity. I completely agree that a number of companies have taken the wrong approach. The declining productivity will not be solved by increased efficiency, because that implies inefficiency is the major cause of the decline. It is a cause, a symptom of bloat, but not the cause. Better drugs will be the result of better science and of changes to the models by which the industry, academia and the drug development ecosystem function.

The pharma industry is ripe for disruptive innovation. I continue to believe that the role of big pharma will increasingly become akin to that of a system integrator, with small biotechs, service providers and external, distributed research forming the backbone of the system. Perhaps we need more thinking along the lines of Robin Spencer.

While I might not agree with some of the statements made by Shaywitz and Taleb, I agree with the overall message and some of the directions the biopharma industry needs to take. Will it happen before the industry collapses under its own weight?


Farewell to some bits of olde Cambridge [Omics! Omics!]

Posted: 31 Jul 2008 10:36 PM CDT

We sent off one of our departing colleagues in style yesterday, taking him to the finest cuisine in Cambridge: the MIT Food Trucks. These institutions are various privately run trucks serving hot foods from around the world to long lines of students. While private, the trucks are sanctioned: not only do they have specially reserved parking spots, but they are also listed on the MIT Food Service website.

However, first we had to find them. Their previous locale is now a major hole in the ground, to be filled in with the new Koch Cancer Institute (or some such name). With the MIT web site's help, we were able to find the new location.

During the year I (and others) have discovered two other institutions which were not so lucky.

It was a bit of a shock one day to discover the Quantum Books location cleared out, though less of one upon reflection. Quantum was a bookstore specializing in technical books -- particularly computing books. It was a handy place to browse such books before investing; I've spent far too much on books that looked good but were awful. The shock faded on thinking about it: not only was Quantum getting hammered by the usual Amazon internet tide, but they were in a perfectly awful location. While there might be a lot of commuter traffic, otherwise they were in a nearly retail-free zone that is one of the many crimes against urban design inflicted on Kendall Square in the 60's/70's. They tried a children's section and other experiments, but it was hard to see much hope of success. Quantum isn't kaput, but has gone to an almost totally Internet model; unless their fans are super-loyal, it's hard to see that lasting long.

Cambridge was once a center of conventional industry. For example, a huge fraction (I forget the amount; it's on a plaque in the park on Sidney Street) of the undersea telegraph cable used in WW2 was made in Cambridge. But fewer and fewer such businesses remain. Even in my short tenure at least two candy factories have closed, leaving only one (tootsie rolls!). A prominent paint company moved out a few years ago. Sometimes it's hard to tell what's still active and what is only an empty shell. But not in this case.

There will be no more "goo goo g'joob" in Cambridge; Siegal egg company has not only cleared out but been cleared out -- the building is gone. A distributor of eggs, they were across the street from one MLNM building and adjacent to Alkermes. Indeed, it was that proximity to MLNM that forced me to notice them: their egg trucks would sometimes block Albany Street while backing into the loading dock, trapping the MLNM shuttle van (always with me late for a meeting!). I think the demolition was part of the adjacent MIT dorm construction, but perhaps a new biotech building will go in. By chance, the Google street view catches the building being prepared for demolition.

Will some future writer remark wistfully on the disappearance of biotech buildings from Cambridge? It's difficult to imagine -- but who a century ago could have imagined Cambridge getting out of the business of supplying everyday things.
