
Saturday, February 17, 2007

What is junk DNA, and what is it worth?

by A. Khajavinia

Wojciech Makalowski, a Pennsylvania State University biology professor and researcher in computational evolutionary genomics, answers this query.

Our genetic blueprint consists of 3.42 billion nucleotides packaged in 23 pairs of linear chromosomes. Most mammalian genomes are of comparable size—the mouse script is 3.45 billion nucleotides, the rat's is 2.90 billion, the cow's is 3.65 billion—and code for a similar number of genes: about 35,000. Of course, extremes exist: the bent-winged bat (Miniopterus schreibersi) has a relatively small 1.69-billion-nucleotide genome; the red viscacha rat (Tympanoctomys barrerae) has a genome that is 8.21 billion nucleotides long. Among vertebrates, the highest variability in genome size exists in fish: the green puffer fish (Chelonodon fluviatilis) genome contains only 0.34 billion nucleotides, while the marbled lungfish (Protopterus aethiopicus) genome is gigantic, with almost 130 billion. Interestingly, all animals have a large excess of DNA that does not code for the proteins used to build bodies and catalyze chemical reactions within cells. In humans, for example, only about 2 percent of DNA actually codes for proteins.

For decades, scientists were puzzled by this phenomenon. With no obvious function, the noncoding portion of a genome was declared useless or sometimes called "selfish DNA," existing only for itself without contributing to an organism's fitness. In 1972 the late geneticist Susumu Ohno coined the term "junk DNA" to describe all noncoding sections of a genome, most of which consist of repeated segments scattered randomly throughout the genome.

Typically these sections of junk DNA arise through transposition, the movement of segments of DNA to different positions in the genome. As a result, most of these regions contain multiple copies of transposons, sequences that either copy themselves or cut themselves out of one part of the genome and reinsert themselves somewhere else.

Elements that move by a copying mechanism increase the amount of genetic material in the genome. For "cut and paste" elements, the process is slower and more complicated, and involves the DNA repair machinery. Either way, if transposition happens in cells that give rise to eggs or sperm, the new copies have a good chance of spreading through a population and increasing the size of the host genome.

Although very catchy, the term "junk DNA" repelled mainstream researchers from studying noncoding genetic material for many years. After all, who would want to dig through genomic garbage? Thankfully, though, there are some scavengers who, at the risk of being ridiculed, explore unpopular territories. And it is because of them that, in the early 1990s, the view of junk DNA, especially repetitive elements, began to change. In fact, more and more biologists now regard repetitive elements as genomic treasures. It appears that these transposable elements are not useless DNA. Instead, they interact with the surrounding genomic environment and increase the ability of the organism to evolve by serving as hot spots for genetic recombination and by providing new and important signals for regulating gene expression.

Genomes are dynamic entities: new functional elements appear and old ones become extinct. And so, junk DNA can evolve into functional DNA. The late evolutionary biologist Stephen Jay Gould and paleontologist Elisabeth Vrba, now at Yale University, employed the term "exaptation" to explain how different genomic entities may take on new roles regardless of their original function—even if they originally served no purpose at all. With the wealth of genomic sequence information at our disposal, we are slowly uncovering the importance of non-protein-coding DNA.

In fact, new genomic elements are being discovered even in the human genome, five years after the deciphering of the full sequence. Last summer developmental biologist Gill Bejerano, then a postdoctoral fellow at the University of California, Santa Cruz, and now a professor at Stanford University, and his colleagues discovered that during vertebrate evolution, a novel retroposon—a DNA fragment, reverse-transcribed from RNA, that can insert itself anywhere in the genome—was exapted as an enhancer, a signal that increases a gene's transcription. On the other hand, anonymous sequences that are nonfunctional in one species may, in another organism, become an exon—a section of DNA that is eventually transcribed to messenger RNA. Izabela Makalowska of Pennsylvania State University recently showed that this mechanism quite often leads to another interesting feature in the vertebrate genomes, namely overlapping genes—that is, genes that share some of their nucleotides.

These and countless other examples demonstrate that repetitive elements are hardly "junk" but rather are important, integral components of eukaryotic genomes. Risking the personification of biological processes, we can say that evolution is too wise to waste this valuable information.


reposted from: SciAm
my highlights / emphasis / comments

Tuesday, January 30, 2007

Human metabolism recreated in lab

[Image: cells in dishes. Scientists can use the virtual model instead of working on real cells.]
US researchers say they have created a "virtual" model of all the biochemical reactions that occur in human cells.

They hope the computer model will allow scientists to tinker with metabolic processes to find new treatments for conditions such as high cholesterol.

It could also be used to tailor diets to individuals for weight control, the University of California team claimed.

Their development is reported in the journal Proceedings of the National Academy of Sciences.

A team of six bioengineering researchers at the University of California analysed the human genome to see what genes corresponded to metabolic processes, such as those responsible for the production of enzymes.

They spent a year manually going through 1,500 books, review papers and scientific reports from the past 50 years before constructing a database of 3,300 metabolic reactions.

The information was then used to create a network of metabolic processes in the cell, similar to a traffic network.


Study leader Professor Bernhard Palsson said the network could be used to see what would happen if a drug was used to target a specific metabolic reaction, such as the synthesis of cholesterol.

Or it could be used to predict what would happen if you interfere with a metabolic reaction in a specific type of cell, such as a blood or heart cell.

And eventually it could even be used to create an individual network for a person.

"The new tool we've created allows scientists to tinker with a virtual metabolic system in ways that were, until now, impossible, and to test the modelling predictions in real cells," said Palsson, who is professor of bioengineering and medicine.

"You can take a drug target and you can make the flow through that reaction more and more restrictive or you can calculate all the reactions that you have to go through to make a certain product."
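The BBC piece does not name the mathematical method behind the model, but Palsson's group is widely associated with constraint-based, flux-balance-style analysis of metabolic networks. As an illustration only, here is a minimal sketch of the idea he describes above: represent reactions as a stoichiometric matrix, require steady state, and see how tightening the allowed flux through one "drug-targeted" reaction limits the output the network can make. The three-reaction toy network here is hypothetical, not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake -> A, conversion A -> B, export of B.
# Rows are metabolites (A, B); columns are the three reaction fluxes.
S = np.array([
    [1, -1,  0],   # A: made by uptake, consumed by conversion
    [0,  1, -1],   # B: made by conversion, consumed by export
])

def max_output(conversion_cap):
    """Maximise flux through the export reaction at steady state (S v = 0),
    with the 'drug-targeted' conversion step capped at conversion_cap."""
    c = [0.0, 0.0, -1.0]                       # linprog minimises, so negate
    bounds = [(0, 10), (0, conversion_cap), (0, 10)]
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    return -res.fun

print(max_output(10))   # unconstrained: the network can export 10 units
print(max_output(3))    # restricting the targeted reaction cuts output to 3
```

Making the bound on the conversion step "more and more restrictive", as Palsson puts it, directly lowers the maximum achievable output, which is the kind of what-if question such a network model can answer before any wet-lab experiment.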

Metabolism

Metabolic reactions in cells include those that convert food sources, such as fats, proteins and carbohydrates, into energy, and those that make other molecules used by the body.

There are hundreds of human disorders which are a result of problems with metabolism.

One example is haemolytic anaemia, a condition where red blood cells are broken down too rapidly.

To test the computer model, the team ran 288 different simulations, such as the synthesis of the hormones testosterone and oestrogen, and the metabolism of fat from the diet.

"We all have natural variation in the capacity of these pathways, for example in our ability to make cholesterol, so you could make a metabolic model for an individual person, which is a tantalising prospect," said Professor Palsson.

Keith Frayn, professor of human metabolism at the University of Oxford, said the model would allow scientists to spot potential problems with targeting certain reactions early on in their research.

"It's increasingly recognised that there are these networks of metabolism, and we need to know, if we target something, how that will spread out. This is potentially a way of dealing with that."

Dr Anthony Wierzbicki, consultant in specialist laboratory medicine at St Thomas's hospital, has done a lot of work on the role of cholesterol in heart disease.

"This is a potentially interesting tool for investigating metabolism of which cholesterol biochemistry forms a part," he said.

But he added that the model would have to be "sophisticated" enough to predict what happens in the production and breakdown of cholesterol, as well as how it is absorbed from the gut, as the two are closely linked.

reposted from: http://news.bbc.co.uk/1/hi/health/6310075.stm
my highlights / edits