4/07/2012

Shannon's Mathematical Theory of Communication Applied to DNA Sequencing

If we could have James Joyce and Robert Anton Wilson in the mix, we might get close to something very like 'the tale of the tribe', with a focus on RAW's book 'Coincidance', in which he frames a DNA-based information theory through a Joycean measure of the redundancy of information: poetry as high in information, political speeches as low. love, fly.

 

Shannon's Mathematical Theory of Communication Applied to DNA Sequencing

Nobody knows which sequencing technology is fastest because there has never been a fair way to compare the rate at which they extract information from DNA. Until now.
kfc 04/02/2012

One of the great unsung heroes of 20th-century science is Claude Shannon, an engineer at the famous Bell Laboratories during its heyday in the mid-20th century. Shannon's most enduring contribution to science is information theory, which underpins all digital communication.
In a famous paper dating from the late 1940s, Shannon set out the fundamental problem of communication: to reproduce, at one point in space, a message that has been created at another. The message is first encoded in some way, transmitted, and then decoded.

Shannon showed that a message can always be reproduced at another point in space with arbitrary precision, provided noise is below some threshold level. He went on to work out how much information could be sent in this way, a property known as the capacity of the information channel.
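
For the record, the two formulas usually quoted can be stated compactly (standard textbook notation, nothing specific to the DNA paper): the capacity C is the largest mutual information the channel allows between input X and output Y, and for a band-limited Gaussian channel with bandwidth B and signal-to-noise ratio S/N it has a familiar closed form.

C = \max_{p(x)} I(X;Y), \qquad C_{\mathrm{AWGN}} = B \log_2\!\left(1 + \frac{S}{N}\right)

Shannon's coding theorem says that any rate below C can be achieved with an arbitrarily small probability of error, and no rate above C can.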

Shannon's ideas have been applied widely to all forms of information transmission with much success. One particularly interesting avenue has been the application of information theory to biology--the idea that life itself is the transmission of information from one generation to the next.

That type of thinking is ongoing, revolutionary, and still in its early stages. There's much to come.
Today, we look at an interesting corollary in the area of biological information transmission. Abolfazl Motahari and pals at the University of California, Berkeley, use Shannon's approach to examine how rapidly information can be extracted from DNA using the process of shotgun sequencing.

The problem here is to determine the sequence of nucleotides (A, G, C, and T) in a genome. That's time-consuming because genomes tend to be long--for instance, the human genome consists of some 3 billion nucleotides or base pairs. This would take forever to sequence in series.
So the shotgun approach involves cutting the genome into random pieces, consisting of between 100 and 1,000 base pairs, and sequencing them in parallel. The information is then glued back together in silico by a so-called reassembly algorithm.
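
As a concrete (and deliberately tiny) illustration, here is a Python sketch of the whole loop: generate a made-up genome, cut it into random reads, and glue the reads back together with a naive greedy overlap merge. Every number in it -- genome length, read length, read count, minimum overlap -- is an invented toy value, and the greedy merge is only a simple stand-in for the far more sophisticated reassembly algorithms used in practice.

# Toy sketch of shotgun sequencing and greedy overlap reassembly.
# All parameters are invented toy values; this is not the paper's algorithm.
import random

random.seed(42)

def make_genome(length=300):
    """Generate a random toy 'genome' as a string of A, C, G, T."""
    return "".join(random.choice("ACGT") for _ in range(length))

def shotgun_reads(genome, read_len=50, num_reads=60):
    """Sample reads uniformly at random, mimicking many shotgun passes."""
    starts = [random.randrange(0, len(genome) - read_len + 1) for _ in range(num_reads)]
    return [genome[s:s + read_len] for s in starts]

def overlap(a, b, min_len=10):
    """Length of the longest suffix of a that matches a prefix of b (>= min_len)."""
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def greedy_assemble(reads, min_len=10):
    """Repeatedly merge the pair of contigs with the largest overlap."""
    contigs = list(dict.fromkeys(reads))          # drop exact duplicate reads
    while len(contigs) > 1:
        best = (0, None, None)
        for i, a in enumerate(contigs):
            for j, b in enumerate(contigs):
                if i != j:
                    k = overlap(a, b, min_len)
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        if k == 0:                                # no overlaps left: stop merging
            break
        merged = contigs[i] + contigs[j][k:]
        contigs = [c for idx, c in enumerate(contigs) if idx not in (i, j)] + [merged]
    return contigs

genome = make_genome()
reads = shotgun_reads(genome)
contigs = greedy_assemble(reads)
longest = max(contigs, key=len)
print("true genome length :", len(genome))
print("contigs after merge:", len(contigs))
print("longest contig     :", len(longest), "bp, substring of genome:", longest in genome)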

Of course, there's no way of knowing how to reassemble the information from a single "read" of the genome. So in the shotgun approach, this process is repeated many times. Because each read divides up the genome in a different way, pieces inevitably overlap with segments from a previous run. These areas of overlap make it possible to reassemble the entire genome, like a jigsaw puzzle.
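
How many times the process has to be repeated is a coverage question, and a rough feel for the numbers comes from the classic Lander-Waterman estimate (an old back-of-envelope model, not the capacity bound derived in the new paper): with N reads of length L drawn from a genome of length G, each base is read about NL/G times on average, and the chance a base is never read falls off exponentially with that coverage. A quick sketch with an invented read count:

# Back-of-envelope coverage arithmetic in the Lander-Waterman spirit.
# N is an invented read count, and the formula is the classic Poisson
# approximation, not the capacity bound from Motahari and co's paper.
import math

G = 3_000_000_000   # genome length in base pairs (human-scale)
L = 100             # read length in base pairs
N = 600_000_000     # number of reads (toy choice: 20x coverage)

coverage = N * L / G                        # average number of reads covering a base
p_uncovered = math.exp(-coverage)           # Poisson chance a given base is never read
expected_islands = N * math.exp(-coverage)  # rough expected number of disjoint contigs

print(f"coverage           : {coverage:.1f}x")
print(f"P(base never read) : {p_uncovered:.2e}")
print(f"expected contigs   : {expected_islands:.1f}")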

That smells like a classic problem of information theory, and indeed various people have thought about it in this way. However, Motahari and co go a step further by restating it more or less exactly as an analogue of Shannon's famous approach.

They say the problem of genome sequencing is essentially that of reproducing a message written in DNA in a digital electronic format. In this approach, the original message is the DNA itself; it is encoded for transmission by the process of reading and then decoded by a reassembly algorithm to produce an electronic version.

What they prove is that there is a channel capacity that defines a maximum rate of information flow during the process of sequencing. "It gives the maximum number of DNA base pairs that can be resolved per read, by any assembly algorithm, without regard to computational limitations," they say.

That is a significant result for anybody interested in sequencing genomes. An important question is how quickly any particular sequencing technology can do its job and whether it is faster or slower than other approaches.

That's not possible to work out at the moment because many of the algorithms used for assembly are designed for specific technologies and approaches to reading. Motahari and co say there are at least 20 different reassembly algorithms, for example. "This makes it difficult to compare different algorithms," they say.

Consequently, nobody really knows which is quickest or even which has the potential to be quickest.

The new work changes this. For the first time, it should be possible to work out how close a given sequencing technology gets to the theoretical limit.

That could well force a clear-out of dead wood from this area and stimulate a period of rapid innovation in sequencing technology.

Ref: arxiv.org/abs/1203.6233: Information Theory of DNA Sequencing

http://www.technologyreview.com/blog/arxiv/27689/

1 comment:

James said...

The paper you have explained above is of no use. It is junk, like so many other useless papers. None of its results are ever practical with real genomes, which consist of long repeat regions. The authors of the paper are also not aware of the fundamentals of genomics and are not competent enough to work in this area. I suggest that you at least find better papers instead of wasting your time.