Metagenomics versus Moore’s law

Daniel Evanko
Aug 28, 2009

Moore’s law refers to the trend observed in computing hardware that the number of transistors on a computer chip doubles about every two years, effectively doubling computing power. This has long been considered a remarkably rapid pace of improvement.

However, this increase pales in comparison to recent and continuing advances in the throughput of DNA sequencing technology, which have produced an astonishing rise in the amount of DNA sequence data generated by biologists. This is certainly true in the field of metagenomics, which involves shotgun sequencing of the genomes (or transcriptomes) of all the organisms in an environmental sample. Biologists are adopting this technology at a rate that was completely unanticipated by most people in the field. As a result, comprehensive analysis of the resulting sequences, which is far more complex than for single-genome data, is becoming computationally intractable with existing resources and pipelines. The Joint Genome Institute’s call for large-scale (terabase) “Grand Challenge” metagenomic projects highlights the scale of the datasets that people are now discussing.
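The gap between the two growth curves comes down to compound doubling: a shorter doubling time yields an enormously larger fold increase over the same period. A minimal sketch of this arithmetic, using an illustrative (not measured) doubling time for sequencing throughput:

```python
# Why sequencing output can outpace Moore's-law growth in compute.
# The 6-month sequencing doubling time below is a hypothetical
# illustration, not a measured figure.

def growth_factor(years: float, doubling_time_years: float) -> float:
    """Fold increase after `years`, given a fixed doubling time."""
    return 2 ** (years / doubling_time_years)

# Moore's law: transistor counts double roughly every two years.
compute = growth_factor(10, 2.0)        # 2**5  = 32x over a decade

# Hypothetical sequencing trend: doubling every six months.
sequencing = growth_factor(10, 0.5)     # 2**20 = ~1,000,000x over a decade

print(f"Compute growth over 10 years:    {compute:,.0f}x")
print(f"Sequencing growth over 10 years: {sequencing:,.0f}x")
```

Even with generous assumptions for hardware, data production growing on a much shorter doubling time quickly outstrips the compute available to analyze it.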

The editorial in the September issue of Nature Methods discusses this situation and calls for concerted efforts to ameliorate the metagenome-analysis gridlock that appears imminent. The recently formed M5 (metagenomics, metadata, metaanalysis, multiscale-models and metainfrastructure) Consortium will propose a promising solution, the ‘M5 Platform’, later this year. We hope these efforts will find support and succeed in ensuring that this deluge of valuable data is analyzed efficiently and productively.

Daniel Evanko

Editor, Springer Nature