Thursday, November 6, 2008

The DNA Network

Mister Doctor Prof. [Genomicron]

Posted: 06 Nov 2008 08:43 PM CST

I don't get too concerned about things such as titles, but I have noticed that this year a more substantial number of students have been sending emails addressed to "Mr. Gregory". I don't know if the students this year are unaware that most professors hold a Ph.D. and therefore are "Dr." and not "Mr." (or "Ms." as the case may be), but this seems to be much more common lately. Has anyone else noticed this?

I know this can get a bit confusing, so let me try to explain it, at least as the terms are used in North America.

The title "Doctor" and the abbreviated prefix "Dr." come from the Latin for "teacher", and are traditionally bestowed on those who have earned the highest academic degree attainable. The suffix Ph.D. is an abbreviation for Philosophiæ Doctor (L. "Teacher of philosophy"), with "philosophy" from the Greek for "love or pursuit of wisdom". The Ph.D. is awarded in most academic disciplines, including science. Medical professionals may also hold the title "Doctor" even though they may do little or no teaching, with common degrees being M.D. (Medicinae Doctor, or Doctor of Medicine), D.V.M. (Doctor of Veterinary Medicine), D.D.S. (Doctor of Dental Surgery), and so on.

As a noun rather than a prefix, "Doctor" is usually reserved for medical doctors ("I'm not a doctor but I play one on TV"). Usually, the person teaching your course at a major university is a "Professor" and not a "Doctor" (noun) ("Are you really a mad scientist, professor?"). He or she does, however, use the prefix "Dr.". Get it?

To make it more complex and Monty Python-ish, the prefix "Prof." is not used by all professors. "Professor" (noun) is the position, but there are also ranks. In North America, these would be "Assistant Professor", "Associate Professor", and "Professor" (or "Full Professor"). In many cases, only full professors use the prefix "Prof." in situations outside the university. I don't use "Prof. Gregory" in non-university settings because I am not a full professor. However, I am a professor, not a doctor, although I use Dr. Gregory instead of Prof. Gregory. Right.

Perhaps that's all too complicated to bother about. Here is the short version: When addressing a professor, just go with "Dear Dr. So-and-so" unless he or she asks you to call him or her something different.



'Coons Get the 'Fluenza...Who Knew? [Bayblab]

Posted: 06 Nov 2008 08:20 PM CST

The Globe and Mail reports on the TOTALLY GROUNDBREAKING discovery that wild raccoons get infected with influenza. Thanks guys. Now I know not to eat my garbage after the coons have been into it. As if the fear of rabies wasn't already enough.

It's always interesting to see which science stories newspapers choose to report on. There are lots of fascinating questions in virology. This is not one of them. Is it really surprising to learn that raccoons, who will pretty much stick their noses into anything, get infected by a virus that will stick its genome into pretty much any available cell? As the author of the study says, "More diseases have been found in raccoons than pretty much any other wild animals,...You name it, raccoons get it. But they're tough as nails." Ok, this is getting a bit more interesting. Apparently infected coons can shed and transmit the flu, but they don't get sick.

Maybe someone needs to establish a coon model for immunology. What's so gosh-darn special about the raccoon immune system? That would be interesting. Warm fuzzy mammals for press releases AND some novelty...

Scuppering the Program Pirates [Sciencebase Science Blog]

Posted: 06 Nov 2008 08:00 PM CST

Professors the world over are worried about plagiarism: students simply lifting huge chunks from web pages and passing the thoughts and arguments off as their own. Then there are the Professors who steal from each other, publish the work in supposedly novel research papers and books, and present it at conferences as original. This kind of plagiarism seems to be on the increase. No one knows the true extent to which it is being undertaken, but a few high-profile cases have increased awareness in the academic community of the paper pirates who could scupper your research career plans with a few well-stolen words.

It could be that a whole generation of students and unscrupulous Professors are creating an information black market. In the long term, it is the students’ education, the research community, and the future of progress that will suffer. After all, student assessment is based on the assumption that their work is original; similarly, the advancement of any particular area of endeavour relies on originality and on credit where it is due, otherwise the whole system collapses into nothing more than noise.

For instance, in the world of computer science, students’ programming submissions have an important effect on the whole educational process. “It is of a great importance to evaluate the programming skills of each student,” explain Ameera Jadalla and Ashraf Elnagar of the Department of Computer Science at the University of Sharjah, in the United Arab Emirates, “but the evaluation results become misleading and unreal due to the problem of plagiarism.”

The researchers point out that since the late 1970s, concerns about source code plagiarism have risen significantly. Various surveys have shown that up to 85 percent of a representative sample of students had engaged in some sort of academic dishonesty and almost 40 percent in one survey confessed to engaging in at least one instance of cut and paste plagiarism using the internet in the preceding year. Companies that offer to do the plagiarism for you, for a fee, are rife. Studies have shown that male students are commonly more dishonest than their female peers in this regard and science students more than health or educational students. Mature students are less likely to engage in such practices.

The researchers suggest that there are a few fundamental non-technical steps that can be taken to reduce plagiarism.

  • Increasing the number of in-class assignments.
  • Doing more group work, which makes it harder to cheat if just one student is honest.
  • Explaining from an early age that plagiarism is unethical and that citation is important.
  • Expecting an oral presentation to show understanding.
  • Giving students different specifications for the same assignment.
  • Improving coursework in terms of time, pressure and difficulty to preclude the need to plagiarise.
  • Having flexible deadlines where the alternative to completing on time is plagiarism.
  • Using honesty policies and punishment systems.
  • Recognising different plagiarism techniques.

It is easy to see why some students might plagiarise the efforts of others: getting a better grade, laziness or poor time management, easy access to the internet and not understanding the rules. Students are encouraged to use the internet, but there is often no emphasis on the importance of citation or acknowledgement.

Indeed, say Jadalla and Elnagar, the focus of society on end results, the “final certificate”, means that students are under immense pressure to perform while the opportunities for cheating have gone far beyond the simple sharing of notes among themselves and the copying out of textbook paragraphs that were well known in the previous generation. However, no amount of top tips for persuading students not to plagiarise will solve the problem.

There are various programs available that a hard-pressed Professor might employ to spot plagiarism in the work of their academic offspring, but these are usually tailored towards essays and papers. Now, Jadalla and Elnagar have developed PDE4Java, a new Java-based, platform-independent Plagiarism Detection Engine that can detect plagiarism in computer science code.

Plagiarism in software was defined as “a program which has been produced from another program with a small number of routine transformations.”

PDE4Java uses data mining techniques to spot content that has been copied from other sources in a given set of programs, usually without attribution. The system “tokenises” the suspect program and then uses data mining, akin to a search engine algorithm, to carry out fast similarity searching of the tokenised index. It can then display side-by-side views of similar programming code and so display clusters of code that look suspiciously similar. These clusters allow the instructors or graders to quickly spot programming routines that the students lifted from each other.
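To make the tokenise-and-compare idea concrete, here is a minimal Python sketch of that general approach. It is not the authors' PDE4Java implementation (which is Java-based and uses its own tokeniser, indexing and clustering); the token classes, n-gram size and similarity threshold below are illustrative assumptions.

```python
import re
from itertools import combinations

# Illustrative token classes; the real PDE4Java tokeniser is not reproduced here.
TOKEN_RE = re.compile(r"[A-Za-z_]\w*|\d+|\S")
KEYWORDS = {"if", "else", "for", "while", "return", "class", "int", "void", "new"}

def tokenise(source):
    """Map source code to a coarse token stream so that renamed
    identifiers and changed literals still produce matching tokens."""
    tokens = []
    for tok in TOKEN_RE.findall(source):
        if tok in KEYWORDS:
            tokens.append(tok)
        elif tok[0].isalpha() or tok[0] == "_":
            tokens.append("ID")    # collapse all identifiers
        elif tok[0].isdigit():
            tokens.append("NUM")   # collapse all numeric literals
        else:
            tokens.append(tok)     # operators and punctuation kept as-is
    return tokens

def fingerprint(tokens, n=4):
    """Set of hashed token n-grams; the unit of similarity comparison."""
    return {hash(tuple(tokens[i:i + n])) for i in range(len(tokens) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a or b) else 0.0

def suspicious_pairs(submissions, threshold=0.5):
    """Score every pair of submissions and keep those above the threshold."""
    prints = {name: fingerprint(tokenise(src)) for name, src in submissions.items()}
    flagged = []
    for x, y in combinations(sorted(prints), 2):
        score = jaccard(prints[x], prints[y])
        if score >= threshold:
            flagged.append((x, y, round(score, 2)))
    return flagged

# Toy example: bob has renamed alice's variables; carol's code is unrelated.
subs = {
    "alice.java": "int sum(int[] a){int s=0;for(int i=0;i<a.length;i++)s+=a[i];return s;}",
    "bob.java":   "int total(int[] xs){int t=0;for(int j=0;j<xs.length;j++)t+=xs[j];return t;}",
    "carol.java": "void hello(){System.out.println(\"hi\");}",
}
print(suspicious_pairs(subs))   # alice/bob are flagged despite the renaming
```

Because identifiers and literals are collapsed before hashing, cosmetic renaming does not hide the copy; a real engine would add clustering over these pairwise scores and side-by-side display, as PDE4Java does.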

The researchers point out that although modern technology makes it easier for students to plagiarise the work of others, programs such as theirs are allowing Professors to catch up with the cheats and plagiarism sinners.

Search Engine Journal has a nice side-by-side comparison of currently available anti-plagiarism systems including Copyscape, DocCop, Plagiarism Detect, Reprint Writer’s Tool, and Copyright Spot. Plagiarism Today also has an interesting post on how to find plagiarism.

Ameera Jadalla, Ashraf Elnagar (2008). PDE4Java: Plagiarism Detection Engine for Java source code: a clustering approach. International Journal of Business Intelligence and Data Mining, 3(2). DOI: 10.1504/IJBIDM.2008.020514


America Cannot Afford More “Cost Saving” Medical Initiatives [Think Gene]

Posted: 06 Nov 2008 07:37 PM CST

Incremental Agglomeration, Creative Destruction, and the Impending Forest Fire of the American Medical Establishment

Part 1: Introduction

The great promise of every modern medical initiative has been “cost savings.” Indeed, what magnanimous medical aspiration in print or PowerPoint could omit a fast-fact stat about the alarming overspending in American healthcare? Yet as America continues to realize these health initiatives, ostensibly in the spirit of “cost savings,” health spending inexorably grows despite typically marginal improvements in care.

Electronic records, evidence-based medicine, patient-centric care, personalized medicine, universal coverage, and genomics —sure, they all sound good, but can America afford another revolutionary “cost saving” medical industry initiative? No, because revolution is destructive, and what institution willfully destroys itself? Not institutions that still exist. So like the non-directive flaws of biological group selection theory, the drastic improvements promised by revolution are beyond the incremental and self-sustaining mechanisms of institutionalism —as intellectually unaesthetic as that may seem.

Every new medical initiative promises revolution through some combination of better education, savings, or standards enabled by some new knowledge, product, or idea. But there’s no grand medical conspiracy inhibiting the potential of these ideas; the mundane simplicity is that change is hard work, people are busy, and everyone has a stake in the status quo.  Only outside competitors are willing and able to metabolize the medical establishment, and while these well-intentioned initiatives may snuff out immediate needs, rather than sparking significant change they continue to agglomerate like a thicket. Thus the longer the revolutionary fire is procrastinated, the more dead wood accumulates, and the greater the inevitable conflagration in healthcare will be.

But today, two decades of information technology have made healthcare revolution imaginable. First, medicine has matured from scholarly practice to information science, and science needs no priesthood. Next, decades of accumulated initiatives choke American healthcare with waste information, a problem information technology solves well… too well. From problems like “pager tag” to communicate simplistic information like blood pressure and on-calls, to “governance by bureaucratic harassment” like insurance coding and liability threats outside strict procedure, the revolutionary solution destroys the existing systems and their vested interests. But destroying useful systems is locally suboptimal, and institutions can act for their own immediate growth and survival. Thus, revolution is fundamentally unachievable by insider initiatives despite theoretical feasibility and best intentions. So finally, a willing and able outsider is necessary to imagine revolution, and the elite groups behind information technology giants like Google have announced their revolutionary intentions with ventures like 23andMe and Google Health. However, revolution is unpredictable, and by the power of the Internet, the eventual leading health revolutionaries could be anybody.

So manifests the quintessential American pragmatic hypocrisy: all tout the wealth-building virtues of creative destruction until the torch of change is under you. Then, change is “unethical.” Only law is more artificially sustained by its own ethical sophistry and willfully abstruse erudition than is the gross establishment of American health. America hates its healthcare, revolution is coming, and no insider medical initiative can stop it forever.

This is a four part serialized essay which will discuss what a revolution in health will and will not look like, why revolution is possible today, and why direct-to-consumer genomics is sparking our imaginations. Subscribe to Think Gene to be notified when new sections are published.

[1] Introduction

[2] Picture of Revolution (coming soon)

[3] Why Revolution Today (coming soon)

[4] The Spark of Direct-to-Consumer (coming soon)

carnivals coming & going [the skeptical alchemist]

Posted: 06 Nov 2008 05:16 PM CST

Just a short post today -- I am too bogged down by Proposition 8 to write much. You can follow future developments and other miscellaneous news by joining me on Twitter.

The new Skeptics' Circle is up at Ferret's Cage, and the Molecular and Cell Biology Carnival (still in need of a host) will take place on November 9. If I can't find a host by then, I will post it myself this time.


Lots and Lots of Genotyping in Today's Nature Issue [adaptivecomplexity's blog]

Posted: 06 Nov 2008 04:48 PM CST

If you're looking for genome news, the Nov. 6 issue of Nature is chock full. A news feature (subscription only) goes after why genome-wide association studies are failing to find genetic variants that explain obviously heritable traits like height or autism:

But between those variants that stick out like a sore thumb, and those common enough to be dredged up by the wide net of GWAS, there is a potential middle ground of variants that are moderately penetrant but are rare enough that they are missed by the net. There's also the possibility that there are many more-frequent variants that have such a low penetrance that GWAS can't statistically link them to a disease.
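To get a feel for why both kinds of variant slip through the net, here is a rough simulation sketch. It is not from the Nature feature; the sample sizes, allele frequencies, relative risks and baseline risk are invented for illustration, and the test is a plain 2x2 allelic chi-square at genome-wide significance.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

def gwas_power(freq, rel_risk, n_cases=3000, n_controls=3000,
               base_risk=0.01, alpha=5e-8, n_sims=100, n_pop=400_000):
    """Fraction of simulated case-control studies in which a single risk
    allele of the given frequency and per-allele relative risk reaches
    genome-wide significance with an allelic chi-square test."""
    hits = 0
    for _ in range(n_sims):
        geno = rng.binomial(2, freq, n_pop)              # 0/1/2 risk alleles per person
        sick = rng.random(n_pop) < base_risk * rel_risk ** geno
        cases = geno[sick][:n_cases]
        controls = geno[~sick][:n_controls]
        table = np.array([
            [cases.sum(),    2 * len(cases) - cases.sum()],
            [controls.sum(), 2 * len(controls) - controls.sum()],
        ])
        if table.min() == 0:
            continue                                     # degenerate table: count as a miss
        _, p, _, _ = chi2_contingency(table)
        hits += p < alpha
    return hits / n_sims

# Three toy scenarios with made-up parameters:
print("common allele, moderate effect  :", gwas_power(0.30, 1.3))   # usually reaches significance
print("common allele, very weak effect :", gwas_power(0.30, 1.05))  # essentially never detected
print("rare allele, moderate penetrance:", gwas_power(0.002, 3.0))  # also mostly missed
```

The common-and-moderate variant is exactly what GWAS nets; the weak common variant and the rare moderately penetrant one both fall below the statistical threshold at these sample sizes, which is the "middle ground" the feature describes.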


Art, Genomics and Daily Life [Retail Genomics: The Science and Business of Consumer Genomics & Consumer Bioinformatics]

Posted: 06 Nov 2008 02:32 PM CST

A couple of months ago, I started investigating the interaction between visual arts and genomics. The goal is to stimulate public debate on the possibilities and impacts of genomics on everybody's daily life.


In collaboration with my colleagues, I have created two digital frames in the first series of works. They were displayed at the ISMB 2008 conference in Toronto and recognized with the ISCB Visual Reflections On Science award. They are currently on display at the ECCB conference in Europe.



Work #1:



Portrait of James D. Watson in his own Word, 2008


Jared Flatow, Brian Chamberlain, and Simon Lin






According to Wikipedia, "a portrait is a painting, photograph, sculpture, or other artistic representation of a person". Instead of simply using color pigments, we use unique portions of Dr. James Watson's DNA sequence to portray him. Dr. Watson co-discovered the structure of DNA and helped to establish the Human Genome Project. DNA, as the primary genetic material, defines the molecular signature of an individual. Dr. Watson's DNA was fully sequenced and made public in 2007 by The Baylor College of Medicine Genome Sequencing Center, 454 Life Sciences Technology, and The Rothberg Institute. We used the SNPs, which define the small differences in DNA from person to person, to uniquely represent Dr. Watson. In order to do this, we took the variant allele base pairs from Dr. Watson's genome (Cold Spring Harbor Laboratory distribution, 6/6/2007) which had a sequence observation count greater than 12, and generated a portrait capturing his phenotype.
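The selection step described above (keep variant alleles observed more than 12 times, then render them) is, in effect, a simple filter-and-colour pipeline. A minimal sketch of that kind of processing might look like the following; the file layout, column names and colour mapping are assumptions for illustration, not the actual distribution format or the artists' code.

```python
import csv
from PIL import Image

# Assumed colour coding for the four bases; the artists' actual palette is unknown.
BASE_COLOURS = {"A": (0, 200, 0), "C": (0, 0, 200), "G": (200, 160, 0), "T": (200, 0, 0)}

def portrait_pixels(snp_file, min_observations=12):
    """Yield one RGB pixel per variant allele seen more than `min_observations`
    times, in file order.  Column names here are hypothetical."""
    with open(snp_file, newline="") as fh:
        for row in csv.DictReader(fh, delimiter="\t"):
            if int(row["observation_count"]) > min_observations:
                yield BASE_COLOURS[row["variant_allele"]]

def render_portrait(snp_file, width=400):
    """Pack the pixel stream into a rectangle; masking or weighting it with a
    photograph of the subject would then give the portrait effect."""
    pixels = list(portrait_pixels(snp_file))
    height = len(pixels) // width
    img = Image.new("RGB", (width, height))
    img.putdata(pixels[: width * height])
    return img

# e.g. render_portrait("watson_snps_cshl_2007.tsv").save("watson_portrait.png")
```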




Work #2:


DNA and Community, 2008

Simon Lin and Jared Flatow





Artists constantly explore the interactions between science and society. We looked into the public understanding of DNA in the Web 2.0 era by retrieving Creative Commons (CC)-licensed photos from the Flickr (photo-sharing) website. We retrieved 899 images using the topics of DNA and myself on April 6, 2008. We rearranged these images using a mosaic algorithm to reveal the hidden message of "DNA and Community". Traditional art uses oil and brush; we are using Python and the internet to experiment with new building blocks of CC-licensed photos. By integrating the photos through the lens of 899 individuals, we are investigating how people share their life stories (Flickr) and how people share their creative responsibility (CC license). It is interesting to note that our work is also licensed under CC and thus has 899 lines of acknowledgements.
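The post does not give the mosaic algorithm itself, but a common approach is to divide the target image into cells and, for each cell, paste the source photo whose average colour is closest. A minimal Python sketch along those lines follows; the file paths and tile size are hypothetical, and this is not the artists' actual code.

```python
from pathlib import Path
from PIL import Image

TILE = 20  # size of each mosaic cell in pixels

def average_colour(img):
    """Mean RGB of an image, used to match tiles to target cells."""
    return img.convert("RGB").resize((1, 1)).getpixel((0, 0))

def build_mosaic(target_path, photo_dir, tile=TILE):
    """Rebuild the target image out of the thumbnails in photo_dir,
    choosing for each cell the photo whose average colour is closest."""
    target = Image.open(target_path).convert("RGB")
    photos = [Image.open(p).convert("RGB").resize((tile, tile))
              for p in Path(photo_dir).glob("*.jpg")]
    colours = [average_colour(p) for p in photos]

    cols, rows = target.width // tile, target.height // tile
    mosaic = Image.new("RGB", (cols * tile, rows * tile))
    for r in range(rows):
        for c in range(cols):
            cell = target.crop((c * tile, r * tile, (c + 1) * tile, (r + 1) * tile))
            want = average_colour(cell)
            # nearest photo by squared RGB distance
            best = min(range(len(photos)),
                       key=lambda i: sum((a - b) ** 2 for a, b in zip(colours[i], want)))
            mosaic.paste(photos[best], (c * tile, r * tile))
    return mosaic

# e.g. build_mosaic("dna_text.png", "flickr_cc_photos/").save("dna_and_community.png")
```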

It's an RNA world after all [Discovering Biology in a Digital World]

Posted: 06 Nov 2008 12:19 PM CST

Before mammals, before dinosaurs, before bacteria or plants, there was something else: a protocell containing RNA.

The Exploring Origins Project has excellent animations of protocells, a timeline of life's evolution, and, best of all, fantastic animations of the RNA world.

You can see how RNA folds, learn about ribozymes (RNA molecules that catalyze chemical reactions), and explore the role of RNA when the Earth was young.

BTW, I made this ribozyme image with Cn3D. The RNA is synthetic (made by humans with machines, that is), and this molecule can cut chemical bonds.


Patents and Graduate Students [Bayblab]

Posted: 06 Nov 2008 09:40 AM CST

I'm about to sign over the rights to a patent application to the institution where I am doing my graduate studies. Does anyone have experience with this? Advice?
I'm pretty sure Bayman has experience with this. (?)
I fully understand that the institution will be the full and exclusive rights holder and financial beneficiary; however, I would still like my name on the patent as the inventor so that I can refer to it in future job applications. Is that the default? The paperwork I have in front of me seems pretty vague about such details.

Nature is so Wrong! [The Gene Sherpa: Personalized Medicine and You]

Posted: 06 Nov 2008 09:32 AM CST

Nils over at SciPhu has decided to stop posting about DTC and his stance. Why would such a great scientist/blogger decide enough is enough? Nature made him do it...Huh? Yes Nature and its eloquent...


Preimplantation Genetic Diagnosis (PGD): A Discussion [Eye on DNA]

Posted: 06 Nov 2008 09:24 AM CST

This past April, I participated in a vodcast with Dr. Chris Korey of the College of Charleston and students in his Molecular Biology Lab. We talked about the science and ethics behind direct-to-consumer genetic testing. While I’m not too pleased with the way I looked while heavily pregnant (eek!), we had a great conversation and I was impressed with the students’ enthusiasm and thoughtfulness.

This month, Dr. Misha Angrist is the guest participant and he’s hosting a discussion on preimplantation genetic diagnosis (PGD), selection and disability on his blog, Genomeboy. I hope you’ll join the students in conversation there.

For more information on PGD, please see these previous posts here at Eye on DNA:

Japanese scientists clone dead mice [Discovering Biology in a Digital World]

Posted: 06 Nov 2008 08:00 AM CST

What kind of dead animals are in your freezer? I used to be skeptical about the whole notion of cloning wooly mammoths. But this recent article in PNAS (1) makes the whole idea seem less far-fetched.

Wakayama et al. describe an amazing technical advance in which scientists in Japan were able to derive clones from mice that had been frozen for 16 years at -20°C.

I'm guessing that this wasn't one of the freezers with an automatic defrost cycle.



First week of applied programming [Mailund on the Internet]

Posted: 06 Nov 2008 02:22 AM CST

We’ve just completed the first week of our new programming class, Applied Programming.  We started last Thursday with a two-hour lecture, then the students had lab exercises with the TAs Friday and Monday, and Tuesday we had a follow-up lecture.  Now, this whole thing repeats.

I wrote down my thoughts on teaching programming while planning the course; see my earlier posts on that.

Now, after the first week, I have some additional thoughts on the class…

We don’t really know our students

We are two lecturers and three TAs, and all of us have a computer science and bioinformatics background. Personally, I’ve taught classes in computer science, bioinformatics and statistics, but almost exclusively to students on the computer science or the bioinformatics masters or bachelor program.  The background for the other teachers is roughly the same.

We have modelled this class loosely on a similar class we used to teach on the bioinformatics program, but removed all the advanced material and instead added a few weeks on very basic programming.  On the bioinformatics course, all students had already had a seven week introduction to programming, but on this class we are not making that assumption.

About half the programs at our faculty of science require the basic programming class, that is, an introduction to Java taught at the department of computer science.  The bioinformatics and computer science students will have had this class, of course.

Personally, I don’t think an introduction to Java programming is much use for bioinformatics, where script languages are more often used, and in any case I think Python is a better first language.

Anyway, since the students we get now have not had any introduction to programming, we obviously cannot expect as much from them in this class.  In the old class we still had to teach the students Python, but at least we could expect them to know about control structures, functions, variables, etc.  Now we cannot, so we try to teach them those concepts as we teach them Python.

This is new to us, but we will have to see how it goes, and try to correct the course as we go along.

A more serious issue is that the students are no longer computer science or bioinformatics students; those students take their programming classes at the department of computer science, so they don't need this one.

Instead, 90%+ are from a new study program in molecular medicine, where this class is a mandatory class.  We also have a few exchange students and PhD students from other backgrounds, who’ve found that they need to learn programming and decided to take this class to learn it, but it is only a few compared to the molecular medicine students.

We don’t really know anything about molecular medicine and certainly nothing about the backgrounds or academic interests of the students following the program.

Motivating the students is pretty hard

So when they ask us, “why do we need to learn to program” or “are we ever going to need what we learn here”, it is hard to answer.

Personally, I cannot imagine doing science in the 21st century without at least some computer skills, and I think the need for computer skills for scientists will only increase.  You can do a lot with a spreadsheet, but there is a limit, and some basic programming skills for manipulating your data or simulating models will be needed.
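As a small illustration of what I mean by manipulating your data, here is a toy sketch of the kind of task that quickly outgrows a spreadsheet: computing a per-gene mean from a long table of measurements. The file and its columns are made up for the example.

```python
import csv
from collections import defaultdict

# Hypothetical input: one row per measurement, with columns "gene" and "expression".
def mean_expression(path):
    totals, counts = defaultdict(float), defaultdict(int)
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            totals[row["gene"]] += float(row["expression"])
            counts[row["gene"]] += 1
    return {gene: totals[gene] / counts[gene] for gene in totals}

for gene, mean in sorted(mean_expression("measurements.csv").items()):
    print(f"{gene}\t{mean:.2f}")
```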

That is just my opinion, though, and it is biased since I have a computer science background, so I am not particularly convincing when I tell the students this.

The students do not really want to follow this course.  Most of them, anyway. If they study molecular medicine, they have to, and they don't see the point.

I can totally relate to this.  I felt the same way when I had statistics classes while studying computer science.  I couldn't for the life of me imagine how I would ever need to know this stuff. Ironic, considering that I am doing more statistics than computer science these days.

To motivate students, you need to convince them either that the material is fun or that it is relevant. I am getting the clear impression that the “fun” strategy is not going to work here, and I know too little about the future skills needed by these people to be able to pull off the “relevance” strategy.

I hope I can get some help from their other teachers.  If the teachers in the classes that they really do find interesting tell them that they, themselves, need programming in their daily work — to varying degrees, of course — then maybe that will convince the students of the relevance.

Installing Python is much harder than we ever imagined

We have told the students that they need to spend a lot of time programming on their own.  It really is the only way they are going to learn it.  There is no way of learning it by listening to lectures.

So, we suggested that they all install Python on their own machines.  They do have access to the Dept. of Computer Science's machines, but it is probably a lot nicer to have Python on their own machines.

We told them that they could bring their laptops to the lab sessions with their TAs and get help with this, and we didn't expect anyone to have any problems with this.

Boy were we wrong!

Apparently, installing Python on Vista is a major problem.  Something to do with Administrator access; I don't know, I've never run Vista.

We never expected that installing Python could be a problem.  On all machines I have ever used, it has been extremely easy.  On Linux “yum” or “apt-get” just does it, if it isn’t installed by default.  On OS X or XP you click the installer icon and then you are on your way.

On Vista, it would be just as easy, if you have the right permissions, but apparently by default you do not.  I don’t know the details, so I cannot tell exactly what the problem is.

Anyway, the focus for the first week completely shifted from learning Python to problems with installing it, leaving everyone frustrated.

Habla Ingles?

If you wanted to run a marathon, would you train every day for three years, then rest completely for two, and then run the marathon?

No?  It wouldn’t be my training strategy either, but when it comes to speaking English, it apparently is Aarhus University’s strategy.

Oh, and a preemptive apology if any of the below turns sarcastic or a bit ranting…

The situation is this: Aarhus University wants to attract more foreign students.  Exchange students and especially PhD students.  At the Faculty of Sciences, they want to double the number of PhD students but without lowering the “quality” of the students, so we have to recruit abroad.

The uni has introduced a number of policies to do this, such as the ECTS point system and, importantly for this post, a policy on the language used for teaching.

If at all possible, we are to teach in English on all non-mandatory classes, if there is at least one non-Danish speaker in class.

A little anecdote: a guy teaching in the German department told me that he had been asked to teach in English because there was a German student in his class.  Everyone there was better at German than at English, but the uni's policy is clear…

I don’t know how true the story is, but I honestly have no problem believing it.  It sounds exactly like how my university would react in a situation like this.

With that in mind, notice that it is only the non-mandatory classes that we are asked to teach in English.  And we usually do, as we actually do get all those exchange students the university is trying to get.

See, the mandatory classes are to be taught in Danish.

This means that when the students show up at the university — with the English skills they have from high school classes — they are kept safe from English the first two years and then almost certainly will be taught in English for the majority of the rest of their classes.

It is a great mystery to me how the students, who apparently do not have the language skills to be taught in English for the first two years of their studies, will magically obtain those skills in their third year.

To me, it seems that not practising would have the opposite effect.

The Studies Office does not share that view.

Why is this rant relevant to my programming class, you ask?

Well, this particular class is an optional class for people with statistics or biology as their main subject and mandatory for molecular medicine.  So obviously we have to teach in English if there are any non-Danish speakers (which there are) and just as obviously we must teach in Danish because it is a mandatory class.

This is pure doublethink and so doubleplusungood.

Since it will be impossible for non-Danish speakers to follow the class if it is in Danish, while at worst it will be an annoyance for the Danes to follow the class in English, we have to teach in English.

Still, we are getting complaints about this, so we have to find a solution.

The easy solution is to stick with English, which I am sure we can get away with.  When we gave the Studies Office all the details of the class, we made it clear that the language could be English, and when they signed up foreign students they wrote to us that we had to teach in English.

It is the same people who now tell us that we must teach in Danish, but that is because they fucked up when accepting the course description, and that is not my problem.  It can be fixed next time the class is taught, but we cannot switch to Danish when we have already accepted foreign students into the class.

Our problem now is that the molecular medicine students really do see it as a problem that the teaching is in English.  That we have to deal with, but I don’t exactly know how.

We will have a talk with them today at the lecture and see what we can work out…

Junk DNA and Transposon Driven Evolution [Bayblab]

Posted: 05 Nov 2008 11:09 PM CST

A press release out of the Genome Institute of Singapore is claiming a function for junk DNA. The researchers show that a number of transcription factors bind repeats found in transposable elements.
More than 50 percent of human DNA has been referred to as "junk" because it consists of copies of nearly identical sequences. A major source of these repeats is internal viruses that have inserted themselves throughout the genome at various times during mammalian evolution. [...] The researchers showed that from 18 to 33% of the binding sites of five key transcription factors with important roles in cancer and stem cell biology are embedded in distinctive repeat families.
Now I could be wrong, but I thought we already knew that portions of these transposable elements acted as transcription factor binding sites (for example see here) so I don't think Larry over at Sandwalk is going to have to revise his estimates of junk DNA percentage yet. But what about the "deflated ego problem"? Is this an answer to Larry's excuse #5: that humans are more complex than other organisms in spite of similar gene numbers because of more complex regulatory mechanisms? Unfortunately I don't have access to the paper, but the press release and abstract seem to think so.
Over evolutionary time, these repeats were dispersed within different species, creating new regulatory sites throughout these genomes. Thus, the set of genes controlled by these transcription factors is likely to significantly differ from species to species and may be a major driver for evolution.
The argument here isn't for more complex gene regulation in humans, but rather different regulation depending on where these elements landed in early evolutionary history.
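The 18 to 33% figures presumably come from intersecting the transcription factors' binding sites with repeat annotations. As a rough illustration of that kind of calculation, here is a minimal interval-overlap sketch; the BED-like inputs and file names are hypothetical, and it assumes the repeat intervals on each chromosome do not overlap one another.

```python
import bisect

def load_intervals(path):
    """Read (chrom, start, end) triples from a BED-like, tab-separated file."""
    intervals = []
    with open(path) as fh:
        for line in fh:
            chrom, start, end = line.split()[:3]
            intervals.append((chrom, int(start), int(end)))
    return intervals

def fraction_in_repeats(binding_sites, repeats):
    """Fraction of binding sites whose midpoint falls inside a repeat element.
    Assumes the repeat intervals on each chromosome are non-overlapping."""
    by_chrom = {}
    for chrom, start, end in repeats:
        by_chrom.setdefault(chrom, []).append((start, end))
    for spans in by_chrom.values():
        spans.sort()

    hits = 0
    for chrom, start, end in binding_sites:
        mid = (start + end) // 2
        spans = by_chrom.get(chrom, [])
        i = bisect.bisect_right(spans, (mid, float("inf"))) - 1
        if i >= 0 and spans[i][0] <= mid < spans[i][1]:
            hits += 1
    return hits / len(binding_sites) if binding_sites else 0.0

# e.g. fraction_in_repeats(load_intervals("oct4_chip_sites.bed"),
#                          load_intervals("repeatmasker_repeats.bed"))
```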

Nature Blogs: Another good step [ScienceRoll]

Posted: 05 Nov 2008 04:07 PM CST


Barack Obama and Nature Publishing Group have at least one thing in common. They try to use the best tools of web 2.0 to reach people. I’ve written several times about why Nature is the best in science 2.0. And I also invited the Nature guys to present in one of the Scifoo Lives On sessions. Now they have made another wise move. How do you reach the numerous medical and scientific blogs out there?

Of course, help them connect to each other by constructing a community for them. That’s what the idea of Nature Blogs is about.

Ensure that the results of science are rapidly disseminated to the public throughout the world, in a fashion that conveys their significance for knowledge, culture and daily life.

We think blogs are a good way of doing this. To make it easier for scientists to find interesting blogs and to help developers build blog related services Nature maintains a public, curated index of science blogs.


You can browse among the many blogs; check the top stories or follow the top cited biomedical papers.

      

VideoMD.com [ScienceRoll]

Posted: 05 Nov 2008 03:29 PM CST


I’ve been updating a list of medical video sites for a long time now, and here is the newest addition, VideoMD.com.

VideoMD was created by physicians, for physicians and their patients. Our mission is to strengthen the relationship between doctors and their patients. Using contemporary technology to help physicians fully educate patients on their specific healthcare concerns, the bond between doctor and patient will be changed forever. Not only will patients be more satisfied by having their own doctor educate them about their ailment, physicians can better educate patients without adding more work to an already busy schedule. VideoMD proves that with modern video capabilities on the internet, combined with an array of searchable content features, video is the best resource to give patients information that they need in an easy and understandable format.

There are featured videos and video blogging as well. Undoubtedly, it is quite a useful addition to the squad.


      

Is IT ready for the Dreaded DNA Data Deluge? [ScienceRoll]

Posted: 05 Nov 2008 02:33 PM CST


Here is a recent presentation from the Google Tech Talks series. The speaker is Dr. Andras Pellionisz (also of JunkDNA.com) and the topic:

Is “Big IT” ready for the avalanche of data, to be obtained and processed e.g. while the patient is still on the operating table, to be diagnosed, and how the genomics glitch, that caused a benign or malign tumor, could be compensated for?

Algorithmic approaches are needed to better understand genome regulation, even for the simple reason to deploy most effective data retrieval, data storage and computational means, via both parallel hardware and software, but more importantly for opening entirely new perspectives.


      
