The DNA Network |
| Posted: 03 Jul 2008 07:35 PM CDT This is such a massively broad topic, but I ran across a review of the literature on why we sleep in PLoS Biology. Check out the review's table of the leading reasons why we need to sleep and the pros and cons of each argument. Previously I had thought it was known that sleep was necessary for learning and thus just plain necessary, but it turns out that this is just one of three leading theories. The most convincing to me, according to the review, is the theory that we require sleep to restore some key macromolecules. The article also covers many other interesting aspects of sleep research, including some conjecture on why evolution favoured sleep as such a prevalent behavior, but I didn't understand much of it. |
| DNA Testing for Members of the Military [The DNA Testing Blog] Posted: 03 Jul 2008 03:33 PM CDT As July 4 approaches, we remember and thank the service men and women serving in the U.S. Armed Forces stateside and overseas. At DNA Diagnostics Center, we are honored to serve those who serve our country. We are pleased to offer special accommodations to service men and women who may find themselves in need of DNA [...] |
| Posted: 03 Jul 2008 03:16 PM CDT This is how respectable scientists resolve disputes and establish dominance. Your argument carries so much more weight when you demolish your opponent at Pubmedfight! If only it could take impact factors into account, then we would be set.... [found on scienceroll] |
| Foreign Accent Syndrome [Bayblab] Posted: 03 Jul 2008 02:02 PM CDT Foreign Accent Syndrome is a rare condition that can occur after brain injury. With this condition, a patient speaks the same language, but with a different regional accent (for example, a person from the American Midwest may adopt a British accent). A Canadian case was recently reported by researchers at McMaster University and published in the Canadian Journal of Neurological Sciences [press release]. In this instance, a woman from Southern Ontario suffered a stroke and began speaking with a Newfoundland accent, which continues even two years after the original brain injury: "Rosemary's speech is perfectly clear, unlike most stroke victims who have damage to speech-motor areas of the brain," says Humphreys. "You wouldn't guess that the speech changes are the result of a stroke. Most people meeting her for the first time assume she is from out East. What we are seeing in this case is a change in some of the very precise mechanisms of speech-motor planning in the brain's circuitry." |
| Pfizer cuts on Education.....What about Us? [The Gene Sherpa: Personalized Medicine and You] Posted: 03 Jul 2008 12:07 PM CDT |
| Posted: 03 Jul 2008 09:39 AM CDT We've all come across certain people who seem to have a particularly high ability to play and/or appreciate music. As a geneticist, my assumption has always been that this is inherent and heritable to some degree. Nevertheless, it could certainly be argued that it is environmental. The literature provides some support for the concept that musical ability is genetic. For example, musical talent has been noted to cluster in some families. Additionally, the ability to identify pitch in the absence of a reference pitch clusters in families, as well. Conversely, tone deafness, also known as congenital amusia, seems to be genetic as well, on the basis of strong familial clustering. Lastly, in a formal study of pitch recognition in twins, the heritability of scores on the so-called Distorted Tunes Test was estimated to be more than 70%. Now, a Finnish research group has demonstrated that it is highly likely that a gene on chromosome 4 (located in the vicinity of chromosome band 4q22) influences musical aptitude. They utilized three different measures of musical aptitude in coming to this conclusion and performed a "genome-wide linkage test." This study has narrowed the region containing the gene to a segment of the chromosome containing ~50 genes, so further studies will be necessary to find the precise genetic change influencing musical aptitude in these families. The authors also noted other regions of the genome in which there was suggestive linkage, suggesting that musical aptitude is likely to be affected by multiple genes. It will be interesting to watch as these are hopefully identified in future studies. |
| How much data is a human genome? It depends how you store it. [Genetic Future] Posted: 03 Jul 2008 07:04 AM CDT Andrew from Think Gene has finally prompted me to write a post I've been working on sporadically for a month or so. The question is pretty simple: in the not-too-distant future you and I will have had our entire genomes sequenced (except perhaps those of you in California) - so how much hard drive space will our genomes take up? Andrew calculates that a genome will take up about two CDs' worth of data, but that's only if it's stored in one possible format (a text file storing one copy of each and every DNA letter in your sequence). There are other ways you might want to keep your genome depending on what your purpose is.

The executive summary

For those who don't want to read through the tedious details that follow, here's the take-home message: if you want to store the data in a raw format for later re-analysis, you're looking at between 2 and 30 terabytes (one terabyte = 1,000 gigabytes). A much more user-friendly format, though, would be a file containing each and every DNA letter in your genome, which would take up around 1.5 gigabytes (small enough for three genomes to fit on a standard data DVD). Finally, if you have very accurate sequence data and access to a high-quality reference genome you can squeeze your sequence down to around 20 megabytes.

The details

For the first two formats I'll assume that someone is having their genome sequenced using one of today's cutting-edge sequencing technologies, the Illumina 1G platform. The 1G platform and its rivals, Roche's 454 and Applied Biosystems' SOLiD, are the instruments that are currently being used to sequence over 1,000 individuals for the international 1000 Genomes Project; if you were to have your genome sequenced right now it would almost certainly be using one of these platforms. The Illumina technology basically sequences DNA as a huge number of short (36-letter) fragments, called reads. Because read lengths are so short and the system has a fairly high error rate, assembling an entire genome would require what's called 30x coverage - which basically means each base in the genome is sequenced an average of 30 times. Once the reads have been generated, they are assembled into a complete genome with the help of a universal reference genome, the sequence created by the Human Genome Project from a mixture of DNA from several individuals. Even with this high level of coverage there is still considerable uncertainty involved in the process of re-assembling the genome from very short fragments, and both the algorithms used to perform this assembly and the reference genome are being constantly improved. Thus, for the moment there may be some advantage in storing your data in a raw format, so that in a few months' time you can take advantage of better software and a more complete reference genome to reconstruct your own sequence in a more complete fashion. For the third and fourth formats, I've moved into the future: basically, I'm assuming that we now have access to affordable sequencing technology that can generate extremely long and accurate reads from a single molecule of DNA. That would allow you to reconstruct your entire genome - both sets of chromosomes, one from your mother and one from your father - with very high confidence. In that case you no longer need to store your raw data, and we can instead start thinking about the most efficient possible way to keep your entire genome on disk.
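As a quick illustration of where those take-home numbers come from, here is a minimal back-of-envelope sketch in Python. The per-base byte counts and the coverage figure are the rough assumptions quoted in the details below, not measurements from any particular instrument.

```python
# Back-of-envelope storage estimates for one human genome, using the rough
# per-base figures quoted in the post (assumptions, not measurements).

HAPLOID_BASES = 3e9                      # ~3 billion bases per chromosome set
DIPLOID_BASES = 2 * HAPLOID_BASES        # both parental copies
COVERAGE = 30                            # 30x short-read coverage
BASES_SEQUENCED = HAPLOID_BASES * COVERAGE

def as_gb(n_bytes):
    """Express a byte count in gigabytes (1 GB = 1e9 bytes)."""
    return n_bytes / 1e9

raw_images = BASES_SEQUENCED * 320       # ~320 bytes of image data per sequenced base
srf_low    = BASES_SEQUENCED * 1         # reads + confidence values, ~1 byte/base
srf_high   = BASES_SEQUENCED * 22        # reads + raw trace data, ~22 bytes/base
assembled  = DIPLOID_BASES * 2 / 8       # 2 bits per base for the full assembled genome
diff_only  = 20e6                        # differences from a reference, ~20 MB (format 4)

print(f"1. raw images:        {as_gb(raw_images):,.0f} GB")                    # ~28,800 GB (28.8 TB)
print(f"2. read files (SRF):  {as_gb(srf_low):,.0f}-{as_gb(srf_high):,.0f} GB")  # ~90-1,980 GB
print(f"3. assembled genome:  {as_gb(assembled):,.1f} GB")                     # ~1.5 GB
print(f"4. diff vs reference: {as_gb(diff_only):,.2f} GB")                     # 0.02 GB (20 MB)
```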
Note that in what follows, for the sake of simplicity I am ignoring the effects of data compression algorithms. It's likely that you could shrink down these data-sets (especially the image files) by quite a bit using even straightforward compression. Anyway, enough background. Let's get started.

1. For hard-core data junkies only: raw image files

To put it very simply, the Illumina 1G platform sequences your DNA by first smashing it up into millions of fragments, binding those fragments to a surface, and then feeding in a series of As, Cs, Gs and Ts. As these bases are incorporated into the DNA fragments they set off flashes of light that are captured by a very high-resolution camera, resulting in a series of pretty coloured images such as the one on the left (which is actually a montage of four images, one for each base). Each of those spots represents a separate fragment of DNA, captured at the moment that a single base (A, C, G or T, each labelled with a different colour) is read from that fragment. By building up a series of these images the machine accumulates the sequence of the first ~36 bases of those fragments in the image, after which the sequence quality starts to drop off. Almost as soon as these images are generated they are fed into an algorithm that processes them, creating a set of text files containing the sequence of each of the fragments. The image files are then almost always discarded. Why are they discarded? Because, as you will see in a minute, storing the raw image data from each run in even a moderate-scale sequencing facility quickly becomes prohibitively expensive - in fact, several people have suggested to me that it would be cheaper to just repeat the sequencing than to store these data long-term.

How much data?

Each tile of an Illumina machine will give you accurate sequence information for around 25,000 DNA fragments. A separate image is obtained for each of the four bases, with each "snap-shot" comprising around 2 MB of data. That comes to a total of 320 bytes/base. For an entire genome with 30x coverage, that comes out as around 28.8 terabytes of data. That's almost 30,000 gigabytes!

Why store your genome like this?

Well, either you believe that image-processing algorithms are likely to improve in the near future, thus allowing you to squeeze a few more bases out of your data; or you have a huge bunch of data servers lying idle that you want to do something with; or you're just a data junkie. However, your actual sequence data is not readily accessible in this format, so you'd also want to be keeping at least a roughly assembled version of your genome around to examine as new information about risk variants becomes available.

2. For DIY assemblers: storing individual reads

I mentioned above that those monstrous image files are rapidly converted into text files containing the sequence of each of your ~36-base reads. The files that are generally used here are called Sequence Read Format (SRF) files, which are used to store the most likely base at each position in the read along with other associated data (such as quality scores).

How much data?

It depends what sort of quality information you keep: at the high end you'd be looking at around 22 bytes/base (1.98 terabytes total) to store raw trace data, while at the low end you could just store sequence plus confidence values for around 1 byte/base (90 gigabytes total). That's starting to become feasible - you could now store your genome data on an affordable portable hard drive.

Why store your genome like this?
This is a pretty efficient way to store your raw read data while you wait for improvements in both the reference human genome sequence (which is still far from complete) and assembly algorithms. As with the previous format, though, you'd also want to store your sequence in a more readily accessible assembled form so you could actually use it.

3. Your genome, your whole genome, and nothing but your genome

OK, now let's gaze a few years into the future, and assume (fairly safely) that new technologies for generating accurate, long reads of single DNA molecules have become available. This means you can stitch your entire genome together very easily, allowing you to store the whole 6 billion bases of it in a text file - this is the type of data storage approach that Andrew discussed in his post. In essence, you're storing every single base in your genome as a separate character in a massive, 6-billion-letter-long text file.

How much data?

Each DNA base can be stored in two bits of data, so your complete genome (both sets of chromosomes) tallies up to around 1.5 gigabytes of data. If you wanted to store some associated confidence scores for each base (indicating how likely it is that you sequenced that section of your DNA correctly) that might take you up to 1 byte/base, or a total of around 6 gigabytes. Either way, you could now fit your genome on a cheap USB thumb drive.

Why store your genome like this?

This is probably the easiest possible format to store your genome in - it contains all the information you need to compare your sequence with someone else's, or to find out if you have that rare mutation in your GABRA3 gene that you saw on the news last night. It's everything you need and nothing you don't. Now, most sensible people will probably be content with their 1.5 GB genome, especially as data storage becomes ever cheaper. But a few will want to squash it down further, particularly if they're storing lots of genomes (like a large sequencing facility, or your insurance company). In that case they can go one step further by taking advantage of the fact that at the DNA level all of us are very much alike.

4. The minimal genome: exploiting the universal reference sequence

I don't know who you are, but I do know that if you lined up our genomes you would find that we have a lot in common - almost all of the bases in our genomes are absolutely identical. Indeed, for any two randomly selected humans you will find, on average, that around 99.5% of their DNA is precisely the same (although the precise pieces that are different will of course differ from person to person). We can use this commonality to compress our genomes further using a clever trick: if we have a very good universal human reference sequence, we can ignore all the parts of our genome that match it, and only store the differences. In practice, then, your personal genome sequence will comprise (1) a header, stating which reference sequence to compare to (this would ideally be a reference sequence from your own ethnic group), and (2) a set of instructions providing all the information required to transform that reference sequence into your own 6 billion base genome. For convenience, I'll assume that the reference is stored as a single contiguous text file containing all 46 chromosomes joined together.
To make your genome, a software package will start at one end of the reference sequence; each instruction will tell it to move a certain number of bases through the genome, and then change the sequence at that position in a specific way (it could either change that base to something else, insert new sequence, or delete the base entirely). In this way, sequentially running your personal instruction set will convert the reference sequence into your own genome, base by base.

How much data?

This one is tricky because we still don't have a great idea of exactly how many differences exist between people. I'm going to make some rough guesses using this paper, which compares the genome of Craig Venter to the sequence generated by the Human Genome Project, and assuming that this paper under-represents the total number of variable sites by around 30% (due to missed heterozygotes and poor coverage of repetitive areas). For a diploid genome (i.e. one containing two copies of each chromosome, one from each parent) this gives an estimate of around 6 million single base polymorphisms, about 90,000 polymorphisms changing multiple bases, and about 1.8 million insertions, deletions and inversions. Now, the instructions for each of the single base polymorphisms can be stored in 1.5 bytes each on average (enough space to store both the distance from the previous polymorphism, and a new base). Multiple-base polymorphisms will be perhaps 2 bytes each, allowing for the storage of a few additional changed bases. Deletions might be around 3 bytes to store the length of the deleted region. Insertions will be more complicated: if they simply duplicate existing material they might only take up 3 or 4 bytes, but if they involve the insertion of brand new material they will be much larger (3 bytes, plus 1 byte for every 4 new bases inserted). From the Venter genome the average insertion size is 11.3 bases, so let's say insertions take up 7 bytes on average. Making some more assumptions and tallying everything up I get a total data-set on the order of 20 megabytes. In other words, you could fit your genome and the sequences of about 34 of your friends onto a single CD.

Why store your genome like this?

If you have a fetish for efficiency, or if you have a whole lot of genomes you need to store, this is the system to use. Of course, it relies on having access to a universally accessible reference sequence of high quality - and you would probably want to recalculate your instruction set whenever a new and better reference became available.

Squeezing your genome even further

Want to get even more genomes per gigabyte? Here's one efficiency measure you might want to consider: use databases of genetic variation. These might be especially useful for large, common insertions (rather than storing the entire sequence of the insertion, you can simply have a pointer to a database entry that stores this sequence).

Acknowledgments: the raw numbers and calculations in this post owe a lot to David Carter, Tom Skelly and James Bonfield from the Sanger Institute and Zamin Iqbal from the European Bioinformatics Institute, UK. Thanks guys! |
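To make the "instruction set" idea in format 4 concrete, here is a minimal sketch of how a program could rebuild a personal genome from a reference plus a list of differences. The three-field instruction format (bases to skip, operation, payload) is invented purely for illustration; it is not a real variant-file format.

```python
def apply_instructions(reference, instructions):
    """Rebuild a personal sequence by walking the reference left to right and
    applying each instruction at a given offset from the previous one.
    Hypothetical instruction format: (bases_to_skip, operation, payload)."""
    out = []
    pos = 0                                    # current position in the reference
    for skip, op, payload in instructions:
        out.append(reference[pos:pos + skip])  # copy the unchanged stretch
        pos += skip
        if op == "sub":                        # substitute one base
            out.append(payload)
            pos += 1
        elif op == "ins":                      # insert new sequence, consume nothing
            out.append(payload)
        elif op == "del":                      # delete `payload` bases
            pos += payload
    out.append(reference[pos:])                # copy the rest unchanged
    return "".join(out)

reference = "ACGTACGTACGT"
instructions = [
    (2, "sub", "T"),   # skip 2 bases, then replace the next base with T
    (3, "ins", "GG"),  # skip 3 more, then insert GG
    (1, "del", 2),     # skip 1 more, then delete 2 bases
]
print(apply_instructions(reference, instructions))  # ACTTACGGGCGT
```

A real implementation would also need to handle the header (which reference to diff against), quality information, and the two parental copies of each chromosome, but this walk-and-patch logic is the core of what lets a 6-billion-base genome shrink to tens of megabytes.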
| Leech blog! [T Ryan Gregory's column] Posted: 03 Jul 2008 06:48 AM CDT Sometimes, when I think a little extra motivation is required, I (jokingly) threaten my students that I may switch to working on leeches. |
| Genetic Companies in Trouble in California [ScienceRoll] Posted: 03 Jul 2008 04:56 AM CDT Months ago, Steve and I both said it would happen. And now look at this announcement:
Just an example from one of the letters:
Genetic tests are not funny or interesting products, but tools in the hands of qualified physicians who know what genomic medicine is about.
|
| Posted: 03 Jul 2008 03:47 AM CDT You probably know Googlefight, where you can compare the number of search results returned by Google for two terms or expressions (Wikipedia). What about a similar tool in health science? Here is Pubmedfight, a French tool with which you can compare authors by their number of publications in PubMed. It’s more than just funny… |
| Another Transgenic Bug That Turns Useless Dross Into Fuel [] Posted: 03 Jul 2008 02:41 AM CDT
Our Google Alert on the words “proprietary organism” turns up new stuff every week. This week it picked up a story about a collaboration between the Department of Energy and DuPont. DuPont, with partner Genencor, has cooked up a new transgenic bug to efficiently convert agricultural waste (corn husks, sugar cane bagasse) into ethanol. Wouldn’t it be nice if the government’s ethanol program didn’t cut into the food supply? We’re rooting for the little transgenic critters. They’re our kind of Frankenstein. |
| Posted: 03 Jul 2008 02:08 AM CDT The Wall Street Journal Health Blog has an interesting story on a brewing turf war between breast surgeons and plastic surgeons over breast reconstruction procedures. Some women with breast cancer - including some with BRCA1 or BRCA2 mutations undergoing bilateral mastectomy - decide to have breast reconstruction performed. This procedure has generally been performed by a plastic surgeon in a separate operation from the mastectomy itself. Now, as noted by the WSJ, a growing number of breast surgeons (without plastics training) are learning breast reconstruction procedures and beginning to offer them at the same time as the mastectomy itself. Undoubtedly, this will lead to some turf wars. If you are shopping around for a breast reconstruction surgeon, the most important question to ask is how many breast reconstruction procedures they have done... |
| Poetry and Art as Outlets for Cancer Patients [Cancer and Your Genes] Posted: 03 Jul 2008 12:41 AM CDT |
| Synthetic molecules emulate enzyme behavior for the first time [Think Gene] Posted: 03 Jul 2008 12:40 AM CDT When chemists want to produce a lot of a substance — such as a newly designed drug — they often turn to catalysts, molecules that speed chemical reactions. Many jobs require highly specialized catalysts, and finding one in just the right shape to connect with certain molecules can be difficult. Natural catalysts, such as enzymes in the human body that help us digest food, get around this problem by shape-shifting to suit the task at hand. Chemists have made little progress in getting synthetic molecules to mimic this shape shifting behavior — until now. Ohio State University chemists have created a synthetic catalyst that can fold its molecular structure into a specific shape for a specific job, similar to natural catalysts. In laboratory tests, researchers were able to cause a synthetic catalyst — an enzyme-like molecule that enables hydrogenation, a reaction used to transform fats in the food industry — to fold itself into a specific shape, or into its mirror image. The study appears in the June 25 issue of the Journal of the American Chemical Society. Being able to quickly produce a catalyst of a particular shape would be a boon for the pharmaceutical and chemical industries, said Jonathan Parquette, professor of chemistry at Ohio State. The nature of the fold in a molecule determines its shape and function, he explained. Natural catalysts reconfigure themselves over and over again in response to different chemical cues — as enzymes do in the body, for example. When scientists need a catalyst of a particular shape or function, they synthesize it through a process that involves a lot of trial and error. “It’s not uncommon to have to synthesize dozens of different catalysts before you get the shape you’re looking for,” Parquette said. “Probably the most important contribution this research makes is that it might give scientists a quick and easy way to get the catalyst that they want.” The catalyst in this study is just a prototype for all the other molecules that the chemists hope to make, said co-author and professor of chemistry T.V. RajanBabu. “Eventually, we want to make catalysts for many other reactions using the fundamental principles we unearthed here,” RajanBabu said. For this study, Parquette, RajanBabu, and postdoctoral researcher Jianfeng Yu synthesized batches of a hydrogenation catalyst in the lab and coaxed the molecules to change shape. The technique that the chemists developed amounts to nudging certain atoms on the periphery of the catalyst molecule in just the right way to initiate a change in shape. The change propagates to a key chemical bond in the middle of the molecule. That bond swings like a hinge, to initiate a twist in one particular direction that spreads throughout the rest of the molecule. Parquette offered a concrete analogy for the effect. “Think of the Radio City Rockettes dance line. The first Rockette kicks her leg in one direction, and the rest of them kick the same leg in the same direction — all the way down the line. A change in shape that starts at one end of a molecule will propagate smoothly all the way to the other end.” In tests, the chemists caused the catalysts to twist one way or the other, either to form one chemical product or its mirror image. They confirmed the shape of the molecules at each step using techniques such as nuclear magnetic resonance spectroscopy. 
That’s what the Ohio State chemists find most exciting: the molecule does not maintain only one shape. Depending on its surroundings — the chemical “nudges” that it receives on the outside — it will adjust. “For many chemical reactions to work, molecules must be able to fit a catalyst like a hand fits a glove,” RajanBabu said. “Our synthetic molecules are special because they’re flexible. It doesn’t matter if the hand is a small hand or a big hand, the ‘glove’ will change its shape to fit it, as long as there is even a slight chemical preference for one of the hands. The ‘flexible glove’ will find a way to make a better fit, and so it will assist in specifically making one of the mirror image forms.” Despite decades of research, scientists aren’t sure exactly how this kind of propagation works. It may have something to do with the polarity of different parts of the molecule, or the chemical environment around the edges of the molecule. But Parquette says the new study demonstrates that propagation can be used to make synthetic catalysts change shape quickly and efficiently — an idea that wasn’t apparent before. The use of adaptable synthetic molecules may even speed the discovery of new catalysts. Source: Ohio State University Josh: Finally, it seems that we may have a way to efficiently create only one stereoisomer, as biological systems do. One of the triumphs of modern organic chemistry was the synthesis of taxol, because of all the stereocenters. If this technique is as good as they claim it is, it will let us synthesize complex molecules such as taxol much more easily. |
| Gene directs stem cells to build the heart [Think Gene] Posted: 03 Jul 2008 12:22 AM CDT Researchers have shown that they can put mouse embryonic stem cells to work building the heart, potentially moving medical science a significant step closer to a new generation of heart disease treatments that use human stem cells. Scientists at Washington University School of Medicine in St. Louis report in Cell Stem Cell that the Mesp1 gene locks mouse embryonic stem cells into becoming heart parts and gets them moving to the area where the heart forms. Researchers are now testing if stem cells exposed to Mesp1 can help fix damaged mouse hearts. “This isn’t the only gene we’ll need to get stem cells to repair damaged hearts, but it’s a key piece of the puzzle,” says senior author Kenneth Murphy, M.D., Ph.D., professor of pathology and immunology and a Howard Hughes Medical Institute investigator. “This gene is like the first domino in a chain: the Mesp1 protein activates genes that make other important proteins, and these in turn activate other genes and so on. The end result of these falling genetic dominoes is your whole cardiovascular system.” Embryonic stem cells have created considerable excitement because of their potential to become almost any specialized cell type. Scientists hope to use stem cells to create new tissue for treatment of a wide range of diseases and injuries. But first they have to learn how to coax them into becoming specialized tissue types such as nerve cells, skin cells or heart cells. “That’s the challenge to realizing the potential of stem cells,” says Murphy. “We know some things about how the early embryo develops, but we need to learn a great deal more about how factors like Mesp1 control the roles that stem cells assume.” Mesp1 was identified several years ago by other researchers, who found that it was essential for the development of the cardiovascular system but did not describe how the gene works in embryonic stem cells. Using mouse embryonic stem cells, Murphy’s lab showed that Mesp1 starts the development of the cardiovascular system. They learned the gene’s protein helps generate an embryonic cell layer known as the mesoderm, from which the heart, blood and other tissues develop. In addition, Mesp1 triggers the creation of a type of cell embryologists recently recognized as the heart’s precursor. They also found that stem cells exposed to the Mesp1 protein are locked into becoming one of three cardiovascular cell types: endothelial cells, which line the interior of blood vessels; smooth muscle cells, which are part of the walls of arteries and veins; or cardiac cells, which make up the heart. “After they are exposed to Mesp1, the stem cells don’t make any decisions for several days as to which of the three cell types they’re going to become,” Murphy notes. “The cues that cause them to make those commitments come later, in the form of proteins from other genes.” Researchers already know a number of the genes that shape the heart later in its development. Murphy plans to start tracing Mesp1’s effects from gene to gene—following the falling genetic dominoes, which branch out into the pathways that form the three cardiac cell types. “If we can find gene combinations that only make endothelium or cardiac or smooth muscle, then that could be applied to tailoring embryonic stem cells for therapies later on,” he says. Source: Washington University School of Medicine Josh: It’s only a matter of time before we can regenerate organs… |
| Wanted: Disruptor [business|bytes|genes|molecules] Posted: 03 Jul 2008 12:04 AM CDT
We talk a lot about Health 2.0 in the social health or crowdsourced context, but what about the future of medicine, the therapies that will guide our future? For the most part, the real challenges posed by all the data we have today (the inherent noise, the lack of clear success stories, the complex standards, the lack of standardization of statistical methods, and so on) are not being discussed as they should be. We are in need of some serious technological innovation on the informatics and biosimulation side of things, as well as in developing appropriate strategies and best practices for trial design. It’s fascinating to see some of the approaches companies are taking in evolving their research strategies, with an emphasis on getting human samples early, and ideally doing some proof-of-concept studies. There are all kinds of efforts going on as well: in developing clinical data standards, in translational medicine, in building biobanks and population selection, and so on, but from what I can tell they tend to (with exceptions) fall into two buckets. Either they are overly complex, the kind of efforts doomed to sit in committee for years, or haphazard, doomed to stay proof of concept for a long time. Where am I going with this? We have conferences about the next generation of health information technology, blogs that discuss the underlying topics to death, but we seem to have very little public discussion, even of the conceptual, geeky variety, about how we can change the way drugs are developed, and little open innovation, the kind that comes from a disruptive player or trend. There have been disruptive technologies and disruptive drugs, but as far as I can tell, no company that turns the game on its head, at least not in a long time. Why is that? Is it the heavily regulated nature of the biopharma industry? For sure. Is it the inability of the “post-genomic” era to bear any obvious fruit? Partly. But I think there’s something more systemic at work. No system-busting mavericks that challenge the very structure and business models that have been around for so long. No process changes that allow a company to develop a drug relatively fast, with minimal attrition, and with a companion diagnostic to boot. Whatever it is, we need that disruptor, perhaps a Black Swan event. The “2.0” label is an overused, and often misused, cliche today, but I would love to see a medicine 2.0 conference, one where we talk about innovative business models and technologies that will revolutionize not just data production, but the very results that we derive from the plethora of data that we have. |
| How to clone a sheep in 10 simple steps [Mary Meets Dolly] Posted: 02 Jul 2008 11:07 PM CDT As funny as this video is, it is also a good tutorial in somatic cell nuclear transfer (SCNT), better known as cloning. These wild and crazy guys are talking about sheep, but the process is the same in humans. I love the final step: PUBLISH, PUBLISH, PUBLISH! |
| Badass scientist of the week [Bayblab] Posted: 02 Jul 2008 09:43 PM CDT I couldn't pass on the opportunity for a new installment in this series, and to share this news of a biologist who jumped into the Gulf of Mexico to save a 375-pound bear from drowning. I mean, look at the picture; this may be the most badass scientist yet... "Mr Warwick kept one arm underneath the bear and gripped the scruff of the bear's neck with the other to keep its head above water as he dragged the animal back to shore." |









