TABLE OF CONTENTS

BACK TO ENTIRE TABLE OF CONTENTS (2008-1972)

July, 2008
(Jul 31) Admission upon Resignation - End of an Era

(Jul 30) deCODE and SGENE Consortium Discover Deletions in the Human Genome Linked to Risk of Schizophrenia [Junk DNA diseases "beyond SNPs"]
(Jul 26) Rewriting Darwin: The new non-genetic inheritance
(Jul 22) How the Personal Genome Project [of Church at Broad Inst.] Could Unlock the Mysteries of Life
(Jul 20) What You Should Know Before You Spit Into That Test Tube
(Jul 16) Principle of Recursive Genome Function Supersedes Dogmas; By Andras Pellionisz, Online Ahead of Print; (Scientific Visionary Vindicated)
(Jul 14) Intel, Others Back New DNA Sequencer
(Jul 07) New Targets For RNAs That Regulate Genes Identified
(Jul 05) Science is being held back by outdated laws
June, 2008
(Jun 25) When's a Gene Test Not a Gene Test?
(Jun 20) First Anniversary of ENCODE: The Principle of Recursive Genome Function
(Jun 19) Genetics Became Information Science - The Holistic View of Genome Structure and Function
(Jun 18) BIOCOM 2008: Turning personalized medicine into reality
(Jun 17) First Anniversary of ENCODE - video interviews with leading Genomics experts in Cold Spring Harbor
(Jun 16) Calif. cracks down on 13 genetic testing startups
(Jun 14) First ENCODE Anniversary
(Jun 13) Applied Biosystems Joins 1000 Genomes Project
(Jun 10) Apple in Parallel: Turning the PC World Upside Down?
(Jun 02) Third Wave to Acquire Stratagene's Full Velocity Patents for $3.9M
May, 2008
(May 28) Genome 'trailblazer' Francis Collins departing research institute

(May 28) Government's gene guru to resign [Francis Collins to go ... where?]
(May 26) Dutch scientists first to unravel a woman’s DNA
(May 24) President Bush Signs Genetic Nondiscrimination Legislation Into Law
(May 15) Genetics firm to build online health database [of Parkinson's disease]
(May 13) Agilent Technologies Announces Licensing Agreement with Broad Institute to Develop Genome-Partitioning Kits to Streamline Next-Generation Sequencing
(May 12) Merck's Informatics Mission [What should be the result of long-predicted "Genomics-IT confluence"?]
(May 08) Research firm bases FPGA on fractal-like structure
(May 03) Intel Seeks Partners to Develop FPGA-Based Solution for Next-Gen Sequencing Analysis
(May 03) Unlocking the human genome: Pioneering researcher joins Buck Institute
(May 03) Slaughter's Anti-Genetic Discrimination Bill Becomes Law
April, 2008
(Apr 28) Mitrionics FPGA-Accelerated Computing Platform for Bio and Genome Informatics Demonstrated at Bio-IT World Conference
(Apr 28) Baylor College of Medicine to Use Applied Biosystems Genetic Analysis Technology as Part of 1000 Genomes Project
(Apr 24) Beijing Genomics Institute Signs Multi-Million Dollar Order for 11 Additional Illumina Genome Analyzers
(Apr 24) Genetic Discrimination Law Passes Senate with Compromises

(Apr 18) The $100 Genome
(Apr 18) GNS: Building a SNPs-to-Outcomes Engine
(Apr 17) New technologies speed development of personal genomes ["Jim Watson project"]
(Apr 16) Sydney Brenner Urges Cancer Researchers to Consider 'Bedside to Bench' Approach

(Apr 15) The [Dreaded] DNA Data Deluge
(Apr 14) New DNA sequencing strategy could be vital during disease outbreak
(Apr 14) May We Scan your Genome? [Newsweek skips on Genome Revolution to herald DTC business model]
(Apr 13) Global Biotech Competition Heats Up
(Apr 10) Al Gore Helps Navigenics Launch Personal Genomics Service
(Apr 09) Navigenics Debuts Gene Dx Service, Allies With Mayo to Study How Patients Use Data
(Apr 08) Enter Navigenics, Where Personal Genomics Gets More Medical

(Apr 07) Gene Regulation Video in Science website
(Apr 06) Gene Regulation in the Third Dimension
(Apr 04) BioNanomatrix Lands $5.1M in Venture Financing
(Apr 02) Roche NimbleGen Launches NimbleGen Sequence Capture Technology for Targeted Genome Resequencing
(Apr 02) NanoLabs Raises £10M to Aid Race for a Cheaper Genome
March, 2008
(Mar 29) New Research Provides Greater Insight into Gene Regulation

(Mar 28) New Software Aids Researchers Analyzing Millions of DNA Sequences
(Mar 26) BGI to Ramp up Sequencing Abilities with Illumina, Roche Tools
(Mar 24) Genetic Testing Gets Personal [and now there are more than 20 companies]
(Mar 21) Navigenics will Launch [its Personalized Genomics web service] April 8th in NYC
(Mar 17) Tapping miRNA-Regulated Pathways
(Mar 17) All Connected
(Mar 12) Applied Biosystems Surpasses Industry Milestone in Lowering the Cost of Sequencing Human Genome
(Mar 11) GATC opens up shop in Sweden
(Mar 07) New Partnership with Helicos Puts Expression Analysis at the Forefront of Genomic Research
(Mar 06) Helicos BioSciences declares first shipment of single molecule DNA sequencer
(Mar 04) Gene Map Becomes a Luxury Item [an American got it on taxpayers' money, another by non-profit funds, a Chinese (don't ask the source) - and now a Romanian entrepreneur -AJP]
(Mar 02) Illumina technology can lead researchers to new drug therapies, links to disease
February, 2008
(Feb 29) Google Backs Harvard Scientist's 100,000-Genome Quest
(Feb 29) Basically, DNA is a computing problem [Ballistics, in WWII, was also a computing problem]
(Feb 27) Upheaval in Genomics - news censored, exodus of academics to lucrative industry, proprietary IP abound...
(Feb 14) How One Protein Binds To Genes And Regulates Human Genome
(Feb 12) PacBio Plans to Sell First DNA Sequencers in 2010; Aims for 100 Gigabases Per Hour
(Feb 11) California company claims faster, cheaper gene map
(Feb 08) Race is on to produce a personal - and cheap - genome readout
(Feb 07) On the front lines of the genomic revolution
(Feb 07) Illumina says inexpensive genome testing here
(Feb 07) RNA-associated introns guide nerve-cell channel production
(Feb 03) The Final Meltdown of JunkDNA Myth
(Feb 01) Florida Gives Miami Genetic Institute $80M
January, 2008
(Jan 31) Faith in the landscape of modernity [Francis Collins at Stanford, 5th Feb, 7:30pm]
(Jan 30) Reinventing the Sequencer
(Jan 29) SeqWright Announces Personal Genomics Service
(Jan 28) Fueling The Future: The oil well of tomorrow may be in a California lab full of genetically modified, diesel-spewing bacteria
(Jan 25) Life: A Tech-Centric View
(Jan 24) Venter Institute Scientists Create First Synthetic Bacterial Genome
(Jan 24) NIH Announces New Initiative In Epigenomics
(Jan 23) Navigating the Genome for Autism Clues
(Jan 22) International effort to catalog complete DNA of 1,000 people
(Jan 21) Supergene Labs Design Microbes to Change Sun to Fuel, Eat Waste
(Jan 18) Scientist hopeful about future of medicine, but funds needed
(Jan 17) Invitrogen Enters Non-Coding RNA Licensing Agreement with IMBcom [Mattick goes commercial]
(Jan 16) IPGS Founders meet in Silicon Valley this week for the "Next Big Thing" in "Genomics beyond Genes"
(Jan 16) Body Weight Influenced By Thousands Of Genes [6,000 genes - 25 percent of the genome]
(Jan 11) Is most of the human genome functional? ["Wait and see" is professionally negligent -AJP]
(Jan 10) Knome and the Beijing Genomics Institute Enter into Exclusive Strategic Alliance
(Jan 06) China makes 1st volunteer genome atlas
(Jan 05) Nutrigenomics: The Genome - Food Interface [Protein to DNA interaction "never" happens under Central Dogma - watch for upcoming release]
(Jan 04) New Route For Heredity Bypasses DNA [Post-ENCODE Genomics puts new light on Lamarckian concepts - watch for upcoming release]
(Jan 03) Seeking God in the Brain - Efforts to Localize Higher Brain Functions [Mathematical identification of brain functions in the genome - watch for upcoming release]
(Jan 01) Seattle research to map disease with U.S. grant

December, 2007

(Dec 30) Study Maps Life In Extreme Environments
(Dec 25) Beijing Genomics Institute to Offer Sequencing Services [How big nations do in a Global Competition?]
(Dec 23) Hereditary diseases taking toll on Arab region [How big nations (will) do in a Global Competition]
(Dec 21) Breakthrough of the year - Human genetic variation [Misnomer: 2007 was the year of BreakDown]
(Dec 14) 4th International Greek Biotechnology Forum [How small nations (should) do in a Global Competition]
(Dec 13) Forget mistletoe - what about DNA? A new dating service matches singles using major histocompatibility complex genes
(Dec 12) More “Functional” DNA in Genome than Previously Thought
(Dec 10) Humans Evolving More Rapidly Than Ever, Say Scientists
(Dec 06) Craig Venter is the future
(Dec 06) Gene Security Network Raises $4M Series A Funding [Sunnyvale - Portola Valley]
(Dec 06) Swiss Government Launches $354M Systems Biology Initiative [Switzerland takes the lead]
(Dec 05) Mathematicians to decipher secrets of immune system [The Indian community in Thailand...]
(Dec 04) Danish Hotspot in Personalized Medicine [in San Francisco...]
(Dec 03) Chinese DNA. How China Uses Genome Projects to construct Chineseness [China is the next superpower]
(Dec 02) A Changing Portrait Of DNA [A Changing Story of Newsweek on Genetics Beyond Genes for USA Consumption]

November, 2007
(Nov 30) Knome Kicks Off Whole-Genome Sequencing Service
(Nov 29) MicroRNAs go from stop to start: MicroRNAs can both dampen and activate gene expression
(Nov 27) How low can a genome go? [The question is no longer the data - AJP]
(Nov 25) Decoding DNA fast and cheap
(Nov 24) European honour for Queensland scientist [Mattick]
(Nov 23) New Consumer Genomic Services Raise Questions About FDA Regulation
(Nov 20) Retail genomics: Within spitting distance
(Nov 18) The DNA Age - My Genome, Myself: Seeking Clues in DNA [Personalized PostGenetics?]
(Nov 15) Ancient retroviruses spurred evolution of gene regulatory networks in primates
(Nov 13) Bush Vetoes Bill that Would Raise NIH Funding Around 4 Percent over White House Request
(Nov 13) Are there rearrangement hot spots in the human genome? [Ohno proven wrong, again...]
(Nov 09) Characterizing the cancer genome in lung adenocarcinoma
(Nov 06) Dandruff is leftovers from meal on your head
(Nov 03) Cracking the code to life [how Venter's science and PR outsmarted the rest of the World]
(Nov 02) President Bush names Christian scientist [Francis Collins] a recipient of the Presidential Medal of Freedom
(Nov 01) Bioinformatics - New Potion for Biotech & Pharma Firms
(Nov 01) Germany's Merkel pushes big business in India
(Nov 01) Shaking up scientific research [Venter shattering the 500-year assumption that businesses cannot do fundamental research]
October, 2007
(Oct 29) [Francis Collins] Presidential Medal of Freedom honorees named
(Oct 28) Richard Gibbs and the Baylor genome center are developing next-generation technology for sequencing DNA
(Oct 27) Where are we today with the real agenda of "re-thinking" "Post-ENCODE Genomics" (PostGenetics)?
(Oct 26) Watson, DNA Discoverer, Retires After Race Remark
(Oct 25) [How to avoid] The Brewing Biggest Embarrassment of the American R&D Since Sputnik
(Oct 24) How much [experimental] genomics is too much?
(Oct 23) Google Execs Really Do Hate Evil [Data format, Algorithm, Access]
(Oct 22) [Poitras' Gift] MIT Lab Wins $20M Gift to Study Genomics of Mental Illnesses
(Oct 22) James Watson: To question genetic intelligence is not racism [Where did he go wrong?]
(Oct 19) [Venter at] Web 2.0 Summit: Google The DNA Of Prospective Mates?
(Oct 19) Race row Nobel scientist James Watson scraps tour after being suspended
(Oct 18) The genetic map maker
(Oct 15) New Method Of Selecting DNA For Resequencing Accelerates Discovery Of Subtle DNA Variations [Differential Sequencing]
(Oct 15) The Year of Miracles [2007? PostGenetics was founded 2005, IPGS formally abandoned "Junk DNA" in 2006]
(Oct 15) The Ten Hottest Nerds [The main theme is fighting "junk DNA diseases"]
(Oct 12) Scientists map out first Asian genome
(Oct 10) [San Francisco] Bay Area Biotechs Poised to Win '07 VC Honors, But Living Costs Unnerve New Blood [where is Texas?]
(Oct 09) MIT Wins $100M [Koch] Gift to Create Cancer Research Institute That Melds Genomics, Cell Bio, Engineering
(Oct 09) NHGRI Unveils Second Phase of ENCODE Project With $80M in Grants
(Oct 08) The Human Genome: RNA Machine
(Oct 06) Gene genie [will it be out of the Petri dish by the 18th of October?]
(Oct 05) Emory Lands $3M Grant for Autism Research; [From Simons Foundation]
(Oct 05) Evolution Transforms "Junk" DNA into Genetic Machinery
(Oct 04) Sigma-Aldrich and the Universite de Montreal Establish Collaboration for RNA Interference Studies
(Oct 03) USC's Keck School Lands $10M Gift for Epigenomics Center
(Oct 02) The facts never speak for themselves ... scientists need to "frame" their messages to the public
(Oct 02) Beyond A 'Speed Limit' On Mutations, Species Risk Extinction
(Oct 01) [Inside the $100 Genome] BioNanomatrix Announces Issuance of Key Nanofluidics Patent Enabling Single Molecule Whole Genome Analysis
September, 2007
(Sep 29) Companies striving to sequence genome in a jiffy [$100 Genome! What to do with the avalanche of data?]
(Sep 28) DNA pioneer Watson far from elementary [a remarkable interview minus some cardinal questions]
(Sep 27) DNA repeats - a great chunk of "Junk" - or "Master Switches"? [The death of the "Junk DNA" numbers game]
(Sep 27) Gene Regulation In Humans Is Closer Than Expected To Simple Organisms [Let's go for the simplest]
(Sep 27) DNA Collection Project in South Africa ["Everyone is a settler"]
(Sep 26) 2007: The Year of the Personalized [PostGenetics; PPG]
(Sep 25) Slater Fund Invests in DNA sequencing “factory” of the future ["Put your money where your DNA is"]
(Sep 24) A 'scientific revolution' is taking place, as researchers explore the genomic jungle. DNA unraveled.
(Sep 24) He trolls genome for cancer clues
(Sep 21) Peer-reviewed publication in science journal to provide quantitative and experimentally testable predictions for "biological complexity"
(Sep 21) Human Genetic Ancestry uncovered by Algorithm [Now it is Microsoft, Google and Yahoo]
(Sep 21) Oxford scientist keen to set up subsidiary in India
(Sep 21) [Post]Genes in rheumatoid arthritis
(Sep 21) Personal Genomes: Mainstream In Five Years, But Who Should Have Access?
(Sep 20) Google Wants to Track Your Medical History -- And Your Genome
(Sep 19) Nova Scotia biotech sector has recipe for success [Biotech is a Global Race]
(Sep 18) Dutch Cabinet awards NGI € 271 million ($376 M) to Second phase Netherlands Genomics Initiative (NGI)
(Sep 17) Chinese scientists will soon complete the first genome atlas of the yellow race
(Sep 17) Perhaps They Should've Called It "One Man's Treasure" DNA
(Sep 14) Stop Calling it Junk!
(Sep 13) Google's Genetic Start-Up [Finally, the "Big One" starts shaking the IT World]
(Sep 11) European Science Foundation Proposes 'Massive' Systems Biology Program; Aebersold's on Board
(Sep 07) Genome 2.0 [Another misnomer. The "Genome" did not change. Genetics (1.0) changed to PostGenetics (2.0)]

(Sep 06) Genomic, Proteomic Consortia Join $210M NIH Project Exhorting Interdisciplinary Research [NIH paradigm-shift]
(Sep 05) Are “Ultraconserved” Genetic Elements Really Indispensable? [Or even Darwinism is in Doubt?]
(Sep 03) In the Genome Race, the Sequel Is Personal [Venter presents Personal PostGenetics]
(Sep 01) ENCODE wrapped up early - the 2 years young IPGS calls for PostGenetics 1st World Congress

August, 2007
(31 Aug) A Genome within a Genome: The surprising finding suggests a new mechanism of evolution
(30 Aug) Barnsley's "SuperFractals" - Fractal Transformations; one fractal governing the growth of another fractal
(30 Aug) Gene regulation in humans is closer than expected to simple organisms
(24 Aug) What is a gene, post-ENCODE? History and updated definition [PostGenetics; "Genomics beyond ENCODE"]
(16 Aug) Human Genome Ultraconserved Elements Are Ultraselected
(15 Aug) A neurodegenerative disease is the second disease caused by mutant RNA
(15 Aug) Identification of hundreds of conserved and nonconserved human microRNAs [by computers]
(15 Aug) MIT's MicroRNA 'sponges' could aid cancer studies
(14 Aug) Rosetta Genomics to Develop Microrna-Based Therapy for Infectious Diseases Based on Viral MicroRNA
(10 Aug) Readying to Deploy Dx Products, Rosetta Genomics' CEO Moves to US
(10 Aug) Alnylam Seals $330M RNAi IP Deal, $42.5M Investment with Roche
(03 Aug) Biotechnology - Planet’s Next Big Opportunity
(03 Aug) [Big Pharma] Intellectual Property Rights' Changes in China

July, 2007
(31 Jul) Duke to Create Systems Biology Center Through $14.5M NIGMS Grant
(26 Jul) Brain Cells Need MicroRNA To Survive [more precisely, to develop]
(22 Jul) Is most of our DNA really rubbish?... ["Oh, no", dear Dr. Ohno]
(19 Jul) MicroRNA Regulates Psoriasis [PostGenetic Medicine will improve your quality of life]
(14 Jul) Punctuating the book of life
(10 Jul) Roche Acquires Alnylam Lab to Build RNAi Excellence Center; Buys Stake and Forges Alliance
(10 Jul) Framing the debate about Junk DNA
(09 Jul) Junk and genomes in The Scientist [What's the new frame of Genomics? - AJP]
(08 Jul) From Genes to Energy [skipping "Junk"?]
(08 Jul) Genetic Engineers Who Don’t Just Tinker
(05 Jul) Junk Worth Keeping [Editor of "The Scientist" on Ohno and PostGenetics]
(03 Jul) A Challenge to Gene Theory, a Tougher Look at Biotech
(01 Jul) Bike guy at the controls ["How will 'beyond the genes' play in Peoria?"]

June, 2007
(30 Jun) Obituary of Junk DNA as a scientific term (1972-2007)

(30 Jun) Few genes underlie most microRNAs [microRNA self-similarity]
(29 Jun) New genome transplantation technique works in bacteria, and could ultimately enable synthetic biology
(28 Jun) A vision for the convergence of synthetic biology and nanotechnology
(24 Jun) DNA barcodes suggest fractal nature of genome
(22 Jun) Genes, move over: DNA Study forces rethink of what it means to be a Gene
(21 Jun) Really New Advances (RNA)

(18 Jun) Non-Gene Diseases
(13 Jun) "Junk DNA" isn't Junk [NIH becomes second after IPGS to formally abandon this "scientific term"]
(13 Jun) Blogging ENCODE [The Scientist]
(08 Jun) Venter Institute Claims Patent on Synthetic Life
(07 Jun) [Junk] DNA In Sperm Altered By Cigarette Smoke, Genetic Damage Could Pass To Offspring
(01 Jun) Genome of DNA Pioneer Is Deciphered ["Deciphered" - NO. Personal PostGenetics - YES]

========== NEWS IN DETAIL ==========

Admission upon Resignation of Dr. Collins. The beginning of a New Era

Francis Collins, in a video interview with Charlie Rose on July 29, 2008, said (at 7:40 of the clip):

"That's the "function" question. How would we figure that out? Ah - so much of the genome - after all only about 1.5% of it is coding for protein. The rest of it is probably involved in this regulatory stuff, and for a long time were were a bit dismissive about that 98.5% of it and said that a lot of it was kind of a junk. I don't think people are using the word "Junk" any more when they are talking about the genome, because the more we study, the more functions we find in that "filler" - which is not a "filler" at all."

[It is symbolic and historical that on one of his last days as head of NHGRI, Francis Collins fessed up that the government R&D structure was inappropriate to address (let alone solve) "the function question" of the genome. True, upon publishing the Pilot Results of ENCODE a year ago, he alluded to the need for "the scientific community to re-think long-held beliefs"; upon the conclusion of his tenure, he admitted that the government genome research structure he headed had failed to deliver on the "function question". When asked by Charlie Rose whether massive IQ could meet this challenge, Dr. Collins' answer was positive - but he fretted about the shrinking budget of NIH. Ever since the entire field asserted that DNA equals information, it has even been a question whether medical doctors (at NIH or elsewhere) should spearhead Genome Informatics. Bringing in information and computer scientists, physicists, etc., for instance from the National Science Foundation, could be a constructive change - and, similar to the "Genome Programs" of other leading nations, the time has arrived for the International HoloGenomics Society to propose a "USA Genome Program Administration". Outside of government, Dr. Collins, who openly declared that DNA and mathematics are "divine languages", will probably work in the future with Information Theorists and Big IT to get ahead in personal genomics as supported by genome information technology. The precedent of a theoretical breakthrough coming from an unsupported (thus, unbiased) intellectual resource is not new: one physicist, working as a clerk in the Patent Office, produced his theoretical breakthrough free from the constraints of the establishment, since paradigm shifts hardly ever originate from entities whose best interest is maintaining the status quo. Fact is, "The Principle of Recursive Genome Function" received not a penny of taxpayers' money. Dr. Francis Collins, entering Personal Genomics, will find himself increasingly relying on such theoretical workshops to establish replacements for "long-held beliefs" in the new era of HoloGenomics, and working closely with Information Technology to provide the means of Personal Genomics. pellionisz_at_junkdna.com, July 31, 2008]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

deCODE and SGENE Consortium Discover Deletions in the Human Genome Linked to Risk of Schizophrenia

Findings may provide the foundation for a test to complement standard clinical diagnosis, potentially enabling earlier intervention and treatment

Last update: 1:25 p.m. EDT July 30, 2008

REYKJAVIK, Iceland, July 30, 2008 /PRNewswire-FirstCall via COMTEX/ -- In a major paper published today in the online edition of the journal Nature, scientists from deCODE genetics (Nasdaq: DCGN) and the University of Iceland, along with academic colleagues from the deCODE-led European SGENE consortium, China and the United States, report the discovery of three rare deletions in the human genome that confer risk of schizophrenia. Such deletions are gaps in the normal sequence of the genome that can arise spontaneously during the recombination or reshuffling of the genome that takes place in the creation of sperm and eggs. The deletions reported in today's study are located on chromosomes 1q21, 15q11 and 15q13, and confer, respectively, 3, 15 and 12 times greater than average risk of schizophrenia. These are the first such deletions to be associated with risk of mental illness using large sample sizes and validated across many populations. The substantial increase in risk they confer makes them a valuable basis upon which to develop molecular diagnostic tests to complement standard clinical diagnosis. The study, 'Large recurrent microdeletions associated with schizophrenia,' will appear online today at www.nature.com.

"Schizophrenia is a disorder affecting thoughts and emotions. It is therefore a quintessentially human disease, but one that is little understood biologically and which is difficult to diagnose. These findings are important because they shed light on its causes and provide a first component to a molecular test to aid in clinical diagnosis and intervention. These discoveries also demonstrate one way in which we can use SNP-chips to find rarer genetic factors conferring risk of disease. In many disease areas we have had great success of late in identifying what these chips are best suited to find: common variants conferring relatively modest increases in risk. But we know that individuals with certain mental disorders such as schizophrenia tend to have few children, and thus that we may have to identify a larger number of rare but high risk variants to understand the genetic contribution to susceptibility. It is encouraging that our efforts to use SNP chips to detect rarer variations such as spontaneous deletions and duplications is now bearing fruit," said Kari Stefansson, CEO of deCODE.

In the recent wave of discoveries of risk variants for common diseases, those associated with mental disorders such as schizophrenia, autism and others have been conspicuously absent. This phenomenon, and the fact that people with these disorders tend to have few children, suggest that rarer and perhaps spontaneously generated variants may account for a greater proportion of the disease burden in these conditions than in others. SNP-chips are not well suited to finding rare SNPs but can, with sufficiently large sample sizes, be used to identify deletions and duplications -- known as copy number variations, or CNVs -- which can also be carried by healthy individuals in one generation and contribute to risk of disease in the next.

In order to identify novel CNVs, deCODE first analyzed the genomes of a total of approximately 15,000 parents and offspring taking part in deCODE gene discovery programs who had been genotyped with the more than 300,000 SNPs on the HumanHap300 chip. The deCODE team discovered 66 de novo CNVs, that is, CNVs present in the genomic DNA of the offspring but not in that of their parents. deCODE then tested these variants for association with schizophrenia in more than 1,400 schizophrenics and 33,000 control subjects. The deletions on chromosomes 1q21, 15q11 and 15q13 were suggestively associated with schizophrenia in this first phase, and then validated in 3,300 cases and 8,000 controls. The SGENE consortium comprises deCODE genetics, the National University Hospital in Reykjavik, the University of Aberdeen, the Ravenscraig Hospital in Greenock, the Institute of Psychiatry at King's College London, the National Public Health Institute in Helsinki, the Ludwig Maximilians University and GlaxoSmithKline's Genetic Research Center in Munich. The SGENE affiliated groups taking part in the second phase of the project were the University of Copenhagen, the University of Oslo, the University of Heidelberg, the University of Bonn, the University Medical Center of Utrecht, Nijmegen Medical Center, the University of Verona, the Duke University Center for Population Genomics and Pharmacogenetics and the University of Sichuan, China.
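To make the two-phase design concrete, here is a minimal sketch (in Python with SciPy) of the kind of per-deletion case/control test such a study runs. The carrier counts below are hypothetical placeholders, not the paper's data; only the 3,300/8,000 replication-phase sample sizes are taken from the text above.

```python
# Minimal sketch of the case/control association step.
# Carrier counts are hypothetical placeholders, NOT the paper's actual data.
from scipy.stats import fisher_exact

def deletion_association(carriers_cases, n_cases, carriers_controls, n_controls):
    """Odds ratio and one-sided Fisher exact p-value for deletion vs. disease."""
    table = [
        [carriers_cases, n_cases - carriers_cases],           # cases
        [carriers_controls, n_controls - carriers_controls],  # controls
    ]
    return fisher_exact(table, alternative="greater")

# Hypothetical: 12 carriers among 3,300 cases, 3 among 8,000 controls
# (the replication-phase sample sizes quoted above).
odds_ratio, p_value = deletion_association(12, 3300, 3, 8000)
print(odds_ratio, p_value)
```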

deCODE and the SGENE consortium gratefully acknowledge the participation in this study of the thousands of patients, family members and control subjects from these eleven countries. This study was supported by the European Union through the SGENE consortium (www.SGENE.eu), under grants LSHM-CT-2006-037761 and PIAP-GA-2008-218251.

About deCODE

deCODE is a biopharmaceutical company applying its discoveries in human genetics to the development of diagnostics and drugs for common diseases. deCODE is a global leader in gene discovery -- our population approach and resources have enabled us to isolate key genes contributing to major public health challenges from cardiovascular disease to cancer, genes that are providing us with drug targets rooted in the basic biology of disease. Through its CLIA-registered laboratory, deCODE is offering a growing range of DNA-based tests for gauging risk and empowering prevention of common diseases, including deCODE T2(TM) for type 2 diabetes; deCODE AF(TM) for atrial fibrillation and stroke; deCODE MI(TM) for heart attack; deCODE ProCa(TM) for prostate cancer; and deCODE Glaucoma(TM) for a major type of glaucoma. deCODE is delivering on the promise of the new genetics.(SM) Visit us on the web at www.decode.com; on our diagnostics website at www.decodediagnostics.com; and, for our pioneering personal genome analysis service, at www.decodeme.com.

[Until the Nature article itself is available, it is somewhat unclear from the press release whether "indels" are being confused with "copy number variations", but it is already clear that we have entered the "beyond SNPs" era of detecting "junk DNA diseases" like schizophrenia. On the 31st of July, Nature broadcast Kari Stefansson clearing up whether the finding meant "indels" or (missing) "copy number variations" (of long sequences). Clearly the latter: Dr. Stefansson mentions the absence of "long sequences" that normally are not only present in one copy but repeated in number. The implication is that CNV accounts for genomic diversity from one healthy person to another, but the absence of such (regulatory) sequences altogether may cause a major "junk DNA disease". SNPs are simple "spelling mistakes" of the single nucleotides A, C, T, G - most of them causing variation, not disease. However, there is much more to "Junk DNA" than SNPs: insertions and deletions of single nucleotides or occasionally of very long sequences, copy number variations, a slew of "repeats" (e.g. "tandem repeats"), etc. - up to the "fractal defects of DNA regulatory mechanisms" proposed by this author, putting to use his Principle of Recursive Genome Function. Another clear fact from the press release is that deCODE intends to include such "beyond SNPs" analysis in its repertoire of serving "direct-to-consumer" probabilities of hereditary syndromes based on saliva samples. It is fortunate for Iceland that their endeavor cannot be interfered with by US State Authorities... Of course, there will be naysayers everywhere, for instance some asking why it is good for anyone to learn of a higher than usual risk of a disease like schizophrenia. Most such naysayers will be blown away by the article below - according to our new understanding of genome regulation, nobody is "stuck" with the genome one inherits from one's parents: proteins (our own, as well as environmental proteins such as nutrients and novel drugs) can affect our DNA. This is yet another major leap; it replaces the fatalistic stance of "I can't do anything, since it is in my genome" with realistic optimism, putting pressure on R&D to speed up the agenda of an entirely new medicine. pellionisz_at_junkdna.com, July 31, 2008]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Rewriting Darwin: The new non-genetic inheritance

New Scientist
Magazine issue 2664

09 July 2008
Emma Young

[When experimental evidence is show by New Scientist on YouTube, "Rewriting Darwin and Dawkins", it may be time for a holistic view of the Genome Informatics, together with the environment. Time for HoloGenomics - AJP]

HALF a century before Charles Darwin published On the Origin of Species, the French naturalist Jean-Baptiste Lamarck outlined his own theory of evolution. A cornerstone of this was the idea that characteristics acquired during an individual's lifetime can be passed on to their offspring. In its day, Lamarck's theory was generally ignored or lampooned. Then came Darwin, and Gregor Mendel's discovery of genetics. In recent years, ideas along the lines of Richard Dawkins's concept of the "selfish gene" have come to dominate discussions about heritability, and with the exception of a brief surge of interest in the late 19th and early 20th centuries, "Lamarckism" has long been consigned to the theory junkyard.

Now all that is changing. No one is arguing that Lamarck got everything right, but over the past decade it has become increasingly clear that environmental factors, such as diet or stress, can have biological consequences that are transmitted to offspring without a single change to gene sequences taking place.... fully accepting the idea, provocatively dubbed the "new Lamarckism", would mean a radical rewrite of modern evolutionary theory. ... "It means the demise of the selfish-gene theory", says Eva Jablonka at Tel Aviv University, Israel. "The whole discourse about heredity and evolution will change"...

That's not all. The implications for public health could also be immense. Some researchers are talking about a paradigm shift in understanding the causes of disease. For example, non-genetic inheritance might help explain the current obesity epidemic, or why there are family patterns for certain cancers and other disorders, but no discernible genetic cause. "It's a whole new way of looking at the inheritance and causes of various diseases, including schizophrenia, bipolar disorder and diabetes, as well as cancer", says Robyn Ward of the cancer research centre at the University of New South Wales in Sydney, Australia.

...recent research... has a firm basis in biological mechanisms - in so-called "epigenetic" change.

... Inside the nucleus, the DNA is packaged around bundles of proteins called histones, which have tails that stick out from the core. One factor that affects gene expression is the pattern of chemical modifications to these tails, such as the presence or absence of acetyl and methyl groups. Genes can also be silenced directly via enzymes that bind methyl groups onto the DNA. The so-called RNA interference (RNAi) system can direct this activity, via small RNA strands. As well as controlling DNA methylation and modifying histones, these RNAi molecules target messenger RNA - much longer strands that act as intermediaries between DNA sequences and the proteins they code for. By breaking mRNA down into small segments, the RNAi molecules ensure that a certain gene cannot be translated into its protein. In short, RNAi creates the epigenetic "marks" that control the activity of genes.
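As a cartoon of the regulatory logic just described: the toy model below treats a gene as expressed only if its promoter is unmethylated and no small RNA matches its transcript. This is purely illustrative (real epigenetic control is graded and context-dependent, not boolean), and the gene names and sequences are invented.

```python
# Cartoon model of the silencing logic described above: a gene yields its
# protein only if its promoter is unmethylated AND no small RNA matches its
# transcript. Purely illustrative; real epigenetic control is not boolean.
def expressed(gene, methylated_promoters, small_rnas):
    if gene["name"] in methylated_promoters:     # silenced at the DNA level
        return False
    if any(s in gene["mrna"] for s in small_rnas):  # mRNA degraded by RNAi
        return False
    return True

genes = [{"name": "MLH1", "mrna": "AUGGCUUACGA"},
         {"name": "GENE2", "mrna": "AUGCCGAUUGA"}]
print([g["name"] for g in genes
       if expressed(g, methylated_promoters={"MLH1"}, small_rnas=["CGAUU"])])
# -> [] : MLH1 is silenced by methylation, GENE2 by a matching small RNA
```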

We know that genes - and possibly also non-coding DNA - control RNAi and so are involved in determining an individual's epigenetic settings. It is becoming increasingly apparent, though, that environmental factors can have a direct impact too, with potentially life-changing implications. The clearest example of this comes from honeybees [here the article explains the discovery of the Australian team led by Ryszard Maleszka, Science, DOI:10.1126/science.1153069, and describes the work of Randy Jirtle in 2000 at Duke University as a groundbreaking experiment on a strain of genetically identical mice, in which a diet rich in methyl groups before conception and during pregnancy made the offspring different - AJP]...

These and other animal studies strongly suggest that a pregnant woman's diet can affect her child's epigenetic marks. So perhaps it is not surprising that the effect of certain nutrients is being called into question. Folate, for example, is a potent methyl donor. It is routinely recommended during pregnancy and added to cereal products in certain countries, including the US, because it reduces the risk of neural tube defects if eaten around the time of conception...

The legacy of stress

Diet is not the only environmental factor that can influence the epigenetic setting of some genes. Michael Meaney at McGill University in Montreal, Canada, and colleagues have found that newborn mice neglected by their mothers are more fearful in adulthood and that these mice show much higher than normal levels of methylation of certain genes involved in the stress response...

There is recent evidence that abnormal epigenetic patterns play a role in mental health disorders. In March, Arturas Petronis at the Centre for Addiction and Mental Health in Toronto, Canada and colleagues reported the first epigenome-wide scan of post-mortem brain tissue from 35 people who had suffered from schizophrenia. They found a distinctive epigenetic pattern, controlling the expression of roughly 40 genes (The American Journal of Human Genetics, vol 82, p. 696). Several of the genes were related to neurotransmitters, to brain development and to other processes linked to schizophrenia. These findings lay the groundwork for a new way of understanding mental illness, says Petronis, as a disease with a significant epigenetic component... Intriguingly, ...the abnormalities in DNA methylation in Petronis' subjects were not restricted to their frontal cortex: they were also present in their sperm. "[This] suggests that it is possible that inherited epigenetic abnormalities may be contributing to the familial nature of schizophrenia and bipolar disorder" says team member Jonathan Mill at the Institute of Psychiatry at King's College London...

This work is only suggestive, but when it comes to cancer, the evidence is stronger. Some colorectal cancers are known to develop when a key DNA-repair gene called MLH1 becomes coated in methyl groups, preventing it from working. In 2007, Ward and her colleagues published a study of a woman with this type of cancer and her three children: one son had a heavily methylated, silenced gene like his mother (The New England Journal of Medicine, vol 356, p. 697)....

Some epigenetic marks may also be inherited from fathers, however. In a now classic study published in 2005, Matthew Anway at the University of Idaho in Moscow and colleagues showed that male rats exposed to the common crop fungicide vinclozolin in the womb were less fertile and had a higher than normal risk of developing cancer and kidney defects. Not only were these effects transmitted to their offspring, they were passed from father to son through the three following generations as well (Science, vol. 308, p. 1466). The team found no DNA changes, only altered DNA methylation patterns in the sperm of these rats, suggesting that epigenetic factors were to blame. ...

Nutrition does seem to have some lasting effect, according to a study by Marcus Pembrey of the Institute of Child Health at University College London and his colleagues. They analysed records from the isolated community of Overkalix in northern Sweden and found that men whose paternal grandfathers had suffered a shortage of food between the ages of 9 and 12 lived longer than their peers (European Journal of Human Genetics, vol. 14, p. 159). ...

Also in 2006, Tony Hsiu-Hsi Chen at the National Taiwan University in Taipei and colleagues reported that the offspring of men who regularly chewed betel nuts had twice the normal risk of developing metabolic syndrome during childhood. Betel nuts are also associated with several symptoms of metabolic syndrome in chewers including increased heart rate, blood pressure, waist size and body weight. ...

[BOX]

Rewriting Darwin and Dawkins?

The realisation that individuals can acquire characteristics through interaction with their environment and then pass these on to their offspring may force us to rethink evolutionary theory... [This sounds very similar to Francis Collins' statement upon the release of the ENCODE results a year ago, concluding that "the concept of genes is a myth" and that "the scientific community will need to re-think long-held beliefs" - AJP; see "The Principle of Recursive Genome Function", which opens up the recursive system to protein-to-DNA interaction, of course including not only proteins generated by the DNA but also proteins from the environment - most spectacularly prions, the bovine proteins causing "mad cow disease" in humans - AJP]...

"There was a trickle of findings of epigenetic inheritance in animals through the 20th century, and it is turning into a flood about now" says Russel Bonduriansky at the University of New South Wales in Sydney, Australia. ...

For Bonduriansky the accumulating evidence calls for a radical rethink of how evolution works. Jablonka, too, believes that "Lamarckian" mechanisms should now be integrated into evolutionary theory, which should focus on mechanisms, rather than units, of inheritance. "This would be very significant", she says. "It would reintroduce development, in a very direct and strong sense, into heredity and hence evolution. It would mean that the pre-synthesis view of evolution, which was very diverse and very rich, can return, but with molecular mechanisms attached". ..

What does Dawkins himself think?

...He suggests, though, that the word "gene" should be replaced with "replicator". [Which is a profoundly holistic view, since "replication" is the propensity of the entire Genome - AJP]

[The heretofore restricted views must now come together in a new holistic approach, HoloGenomics: neither Darwinian evolution nor modern Genetics permitted "feedback" (with the key of recursion from proteins to DNA); our views were restricted to only 1.3% of the Genome (with 98.7% falsely labeled "Junk DNA"); and adherence to the Central Dogma, under which information flow from proteins to DNA "never happens", meant that external proteins in particular were excluded from interaction with growth and evolutionary processes. pellionisz_at_junkdna.com, July 26, 2008]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

How the Personal Genome Project Could Unlock the Mysteries of Life

WIRED MAGAZINE: 16.08
By Thomas Goetz 07.21.08

George Church is dyslexic, narcoleptic, and a vegan. He is married with one daughter, weighs about 210 pounds, and has worn a pioneer-style bushy beard for decades. He has elevated levels of creatine kinase in his blood, the consequence of a heart attack. He enjoys waterskiing, photography, rock climbing, and singing in his church choir. His mother's maiden name is Strong. He was born on August 28, 1954.

If this all seems like too much information, well, blame Church himself. As the director of the Lipper Center for Computational Genetics at Harvard Medical School, he has a thing about openness, and this information (and plenty more, down to his signature) is posted online at arep.med.harvard.edu/gmc/pers.html. By putting it out there for everyone to see, Church isn't just baiting identity thieves. He's hoping to demonstrate that all this personal information — even though we consider it private and somehow sacred — is actually fairly meaningless, little more than trivia. "The average person shouldn't be interested in this stuff," he says. "It's a philosophical exercise in what identity is and why we should care about that."

As Church sees it, the only real utility to his personal information is as data that reflects his phenotype — his physical traits and characteristics. If your genome is the blueprint of your genetic potential written across 6 billion base pairs of DNA, your phenome is the resulting edifice, how you actually turn out after the environment has had its say, influencing which genes get expressed and which traits repressed. Imagine that we could collect complete sets of data — genotype and phenotype — for a whole population. You would very quickly begin to see meaningful and powerful correlations between particular genetic sequences and particular physical characteristics, from height and hair color to disease risk and personality.

Church has done more than imagine such an undertaking; he has launched it: The Personal Genome Project, an effort to make those correlations on an unprecedented scale, began last year with 10 volunteers and will soon expand to 100,000 participants. It will generate a massive database of genomes, phenomes, and even some omes in between. The first step is to sequence 1 percent of each volunteer's genome, focusing on the so-called exome — the protein-coding regions that, Church suspects, do 90 percent of the work in our DNA. It's a long way from sequencing all 6 billion nucleotides — the As, Ts, Gs, and Cs — of the human genome, but even so, cataloging 60 million bits multiplied by 100,000 individuals is an audacious goal.
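A back-of-envelope calculation shows the scale behind that goal. The figures come from the article (6 billion bases, a 1 percent exome target, 100,000 volunteers); the 2-bits-per-base encoding is an assumption for illustration only, and real pipelines store far more per base.

```python
# Back-of-envelope: scale of "1 percent of each genome x 100,000 people".
# The 2-bits-per-base figure is an illustrative assumption.
genome_bases = 6_000_000_000      # ~6 billion bases, as cited above
exome_fraction = 0.01             # the ~1% protein-coding target
volunteers = 100_000

total_bases = genome_bases * exome_fraction * volunteers   # 6e12
total_terabytes = total_bases * 2 / 8 / 1e12               # 2 bits per base
print(f"{total_bases:.1e} bases ~ {total_terabytes:.1f} TB before overhead")
# -> 6.0e+12 bases ~ 1.5 TB (raw calls only; quality data multiplies this)
```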

The PGP stands as the tent pole of what Church calls his "year of convergence," the moment when his 30 years as a geneticist, a technologist, and a synthetic biologist all come together. The project is a proof of concept for the Polonator G.007, the genetic-sequencing instrument developed in Church's lab that hit the market this spring. And the PGP will also put Church's expertise in synthetic biology to use, reverse engineering volunteers' skin cells into stem cells that could help diagnose and treat disease. If the convergence comes off as planned, the PGP will bring personal genomics to fruition and our genomes will unfold before us like road maps: We will peruse our DNA like we plan a trip, scanning it for possible detours (a predisposition for disease) or historical markers (a compelling ancestry).

Bringing the genome into the light, Church says, is the great project of our day. "We need to inspire our current youth in a way that outer space exploration inspired us in 1960," he says. "We're seeing signs that knowing about our inner space is very compelling."

To Church, who built his first computer at age 9 and taught himself three programming languages by 15, all of this is unfolding according to the same laws of exponential progress that have propelled digital technologies, from computer memory to the Internet itself, over the past 40 years: Moore's law for circuits and Metcalfe's law for networks. These principles are now at play in genetics, he argues, particularly in DNA sequencing and DNA synthesis.

Exponentials don't just happen. In Church's work, they proceed from two axioms. The first is automation, the idea that by automating human tasks, letting a computer or a machine replicate a manual process, technology becomes faster, easier to use, and more popular. The second is openness, the notion that sharing technologies by distributing them as widely as possible with minimal restrictions on use encourages both the adoption and the impact of a technology.
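As a toy illustration of that exponential dynamic: with costs halving on a fixed schedule, a luxury-priced genome becomes a commodity within a decade. The one-year halving time below is an assumed parameter, not a measured one; the $350,000 starting point is Knome's retail price quoted later in this article.

```python
# Toy projection of exponential cost decline. The one-year halving interval
# is an assumption for illustration, not a measurement.
def projected_cost(start_cost, years, halving_time=1.0):
    return start_cost * 0.5 ** (years / halving_time)

for year in (0, 2, 4, 6, 8):
    print(year, round(projected_cost(350_000, year)))
# 0 350000, 2 87500, 4 21875, 6 5469, 8 1367: approaching the '$1,000 genome'
```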

Inside the Personal Genome Project

The project will turn information from 100,000 subjects into a huge database that can reveal the connections between our genes and our physical selves. Here's how. — Thomas Goetz


1. Entrance Exam

Volunteers take a quiz to show genetic literacy. One question: How many chromosomes do unfertilized human egg cells contain? a) 11, b) 22, c) 23, d) 46, or e) 92? (Answer: c.) Only those with a perfect score proceed, but retests are allowed.

2. Data Collection

Volunteers sign an "open consent" form acknowledging that their information, though anonymized, will be accessible by others. They fill out their phenotype traits, listing everything from waist size to diet habits. Suitable respondents go on to the next step.

3. Sample Collection

Volunteers hit the medical center, where they are interviewed by an MD. Then a technician draws some blood, gathers a saliva sample, and takes a punch of skin. Don't worry: It hurts about as much as a bee sting.

4. Lab Work

The tissues are sent to a biobank, where DNA is extracted from the blood. One percent of it — the exome — is sequenced. Meanwhile, bacterial DNA is extracted from the saliva and sequenced to reveal the volunteer's microbiome.

5. Research

Now the fun part: crunching the numbers. PGP scientists and other researchers start working with the data assembled from 100,000 individuals to investigate potential links between phenotypes and genotypes. The team will look for patterns and statistically significant anomalies.

6. Sharing

The volunteers get access to not only the raw data from their genome, but anything the research team gleans from their information. Insights — a newly discovered cancer risk, for example — are posted in a volunteer's file, which they'll be free to share with other PGP participants.



"I always tell people, your biggest problem in life is not going to be hiding your stuff so nobody steals it," Church says. "It's going to be getting anybody to ever use it. Start hiding it and that decreases the probability to almost zero."

For most of his career, Church has been known as a brilliant technologist, more behind-the-scenes tinkerer than scientific visionary. Though he was part of the group that kicked off the Human Genome Project, he's far less known than scientists like Francis Collins or J. Craig Venter, who took the stage at the end. His obscurity is due partly to his style. He talks about his accomplishments with a certain detachment that one might mistake for ambivalence. "He's not without ego; it's just a different sort of ego," says entrepreneur Esther Dyson, a friend and one of the first 10 PGP volunteers. "Everything is a subject of his intellectual curiosity, including himself."

His low profile may be the result of his tendency to get too far ahead of the curve, working a decade or two ahead of his field — so far that even the experts don't always get what he's talking about. "Lots of George's work is so advanced it's not ready to become standard," says Drew Endy, a professor of bioengineering at Stanford and cofounder with Church of Codon Devices, a synthetic-biology startup. "He's perfectly happy to spin out tons of ideas and see what might stick. It's high-throughput screening for technology and science. That's not the way most people work."

But thanks to the PGP, the Polonator, and the fact that the rest of the world is finally starting to understand what he's been talking about, Church's obscurity is coming to an end. He sits on the advisory board of more than 14 biotech companies, including personal genomics startup 23andMe and genetic testing pioneer DNA Direct. He has also cofounded four companies in the past four years: Codon Devices, Knome, LS9, and Joule Biosciences, which makes biofuels from engineered algae. Newsweek recently tagged him as one of the 10 Hottest Nerds ("whatever that means," Church laughs).

For someone who has spent his whole career ahead of his time, he is suddenly very much a man of the moment.

Most historians would cite Prague or Paris or Berkeley as the intellectual hub of the 1960s, but for people interested in computers, there was no place so significant as Hanover, New Hampshire. There, at Dartmouth College, an experiment in time-share computing was flourishing. Developed by professors John Kemeny and Thomas Kurtz, the Dartmouth Time-Sharing System let students remotely access the power of a mainframe computer to do calculations for mathematics or science assignments or to play a simulated game of college football. It ran on an easy-to-learn, intuitive program that Kemeny and Kurtz called Basic.

In 1967, the DTSS transitioned to a more-powerful GE-635 machine and offered remote terminals to 33 secondary schools and colleges, including Phillips Academy, a prep school in nearby Andover, Massachusetts. The terminal — not much more than a teletype machine, really — sat in the basement of the school's math building, forgotten until the next fall, when a young George Church showed up for his freshman year and began asking whether there was a computer on campus. Someone pointed Church to the basement. "There wasn't even a chair in the room. I had used a typewriter before, but never a teletype. And so I just started pressing keys," Church recalls. "Eventually I hit Return, and it came back with 'What?' And so I started typing in stuff like crazy and hitting Return. And it kept coming back with 'What?' At that point, I was pretty convinced it wasn't a human, but it was actually talking in words. So I just hadn't asked the right question or given the right answer."

Soon, Church found a book on Basic. "I was just sailing," he says. He spent endless hours in that basement — he eventually borrowed a chair — and taught himself the intricacies of coding, learning to program in Basic, Lisp, and Fortran. Indeed, thinking in code came so naturally to Church that he stopped going to his classes (a habit that would later get him kicked out of graduate school at Duke) and taught the computer linear algebra instead.

It turns out that learning how to write code — change it, hit Return, see what it will do — was ideal training for Church's eventual career in computational biology. "That's how we reverse engineer things like E. coli — you change something, and you see how it behaves," he says. "Little did I know that 30 years later, we would use almost exactly the same operations to optimize metabolic networks."

Church first hit on the power of computation to automate biology in the mid-'70s when he was in graduate school at Harvard. At the time, he was working on recombinant DNA, a then-new technique to splice a gene from one organism into another. Identifying a sequence of 80 or so base pairs of genetic code was a slow, tedious process. "You had to literally read off the bases and write them on a piece of paper, one by one," Church says. "So I wrote a sequence-reading program that would crunch it out. When the senior graduate student heard I had automated that, he said, 'What do you want to do that for? That's the only fun part.'"

By 1980, when Church's adviser, Wally Gilbert, won the Nobel Prize for DNA sequencing techniques, the process was still slow and expensive, executing one DNA strand at a time. So Church began working on one of his earlier targets for automation. His idea was to sequence several strands together by combining them into a single sample mixture. He called it multiplexing, drawing an analogy to signal multiplexing in electronics, in which more than one signal flows through a current at the same time. Church thought most of the work could even be integrated into one device rather than numerous machines.

It was a provocative idea, not just because he was substituting several human tasks for machine-driven ones, but also because he didn't make the usual false promise that technology would simplify the process. On the contrary, multiplexing would be complicated, Church maintained. But technology was up to the task.
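The pool-and-decode idea behind multiplexing is easy to sketch. The toy below sorts pooled reads back to their samples by a leading sequence tag, which is closer to the barcoded multiplexing of today's instruments than to Church's original probe-hybridization scheme; all tags and reads here are invented.

```python
# Toy pool-and-decode demultiplexer. Per-sample sequence tags as shown are
# closer to modern barcoded sequencing than to Church's 1980s
# probe-hybridization protocol; all tags and reads are invented.
BARCODES = {"ACGT": "sample_1", "TGCA": "sample_2", "GATC": "sample_3"}

def demultiplex(pooled_reads, tag_len=4):
    by_sample = {name: [] for name in BARCODES.values()}
    for read in pooled_reads:
        tag, insert = read[:tag_len], read[tag_len:]
        if tag in BARCODES:                  # unrecognized tags are discarded
            by_sample[BARCODES[tag]].append(insert)
    return by_sample

pooled = ["ACGTTTGACC", "TGCAGGCATA", "ACGTCCGTAA"]
print(demultiplex(pooled))
# {'sample_1': ['TTGACC', 'CCGTAA'], 'sample_2': ['GGCATA'], 'sample_3': []}
```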

Four years later, Church was invited to present his work on multiplexing at a small meeting in Alta, Utah. The Department of Energy had gathered about 20 scientists to mull over one question for five days: How might recent advances in genetics be used to measure an increase in genetic mutations arising from radiation exposure, as in Hiroshima? The group quickly reached the conclusion that technology circa 1984 couldn't answer that question. Meanwhile, they still had several more days in the mountains. "There were a bunch of us there who could talk about genomics as if it were an engineering exercise. And then we said, well, as a kind of booby prize, we could think of other things you could do," Church recalls, "like, say, sequencing the human genome."

Though Church was almost entirely unknown before the meeting, his presentation on multiplex sequencing methods stole the show. When he fell into a huge snow drift during a break one afternoon, one participant worried that the future of sequencing had disappeared with him.

That Alta brainstorm would become the Human Genome Project — the effort, adopted by the National Institutes of Health, to sequence one human genome for $3 billion within 15 years. However audacious the HGP seemed, Church was disappointed by it almost from the start. "We could have said our goal was to get everybody's genome for some affordable price," he says, "and one genome would be a milestone" on the way toward that goal.

The HGP also played it safe with its choice of technology. Despite the promise of Church's multiplexing system, the HGP instead used a more established instrument manufactured by Applied Biosystems, based on a technique developed by biochemist Frederick Sanger. As Church saw it, this meant that the project had failed to put its $3 billion toward improving the state of the art. Even worse, the HGP consumed so many of the resources available to the field of genetics that it effectively locked that state of the art into 1980s technology.

The result was nearly two decades of inertia. It wasn't until 2005, when the Human Genome Project was complete and new goals were put forth, that Church finally perfected the multiplexing approach he had presented 20 years earlier at Alta. In a paper published in Science, Church demonstrated a technique that could analyze millions of sequences in one run (Sanger's method could handle just 96 strands of DNA at a time). And Church's method not only accelerated the process, it made it far cheaper, too, elegantly demonstrating the power of automation to drive exponential advances and bring down costs. Church's approach, and a competing innovation developed by 454 Life Sciences that same year, inaugurated the second generation of sequencing, now in full swing.

In the past three years, more companies have joined the marketplace with their own instruments, all of them driving toward the same goal: speeding up the process of sequencing DNA and cutting the cost. Most of the second-generation machines are priced at around $500,000. This spring, Church's lab undercut them all with the Polonator G.007 — offered at the low, low price of $150,000. The instrument, designed and fine-tuned by Church and his team, is manufactured and sold by Danaher, an $11 billion scientific-equipment company. The Polonator is already sequencing DNA from the first 10 PGP volunteers. What's more, both the software and hardware in the Polonator are open source. In other words, any competitor is free to buy a Polonator for $150,000 and copy it. The result, Church hopes, will be akin to how IBM's open-architecture approach in the early '80s fueled the PC revolution.

In the sequencing game, though, the cost of the machine is only half the equation. The more telling expense is the operating cost, particularly the cost of sequencing entire human genomes. Executives at 454 estimate that their latest machine can pull off a whole genome sequence for $200,000. Applied Biosystems claims its instrument has completed a genome for just $60,000. Church maintains that, while the Polonator isn't up to whole-genome reads, it is clocking in at about one-third the cost of Applied Biosystems' estimate. A whole sequence from Knome, the retail genomics firm cofounded by Church, goes for $350,000. (It's worth noting that these figures are only roughly comparable, since each company uses slightly different quality measures and specifications.)

As these numbers continue to drop, the mythical $1,000 genome comes ever closer. Sequencing a human genome for $1,000 is the somewhat arbitrary benchmark for true personalized genomics — when the science could become a component of standard medical care. An important catalyst in achieving that point is the Archon X Prize for Genomics, which is offering $10 million to the team that can sequence 100 complete genomes in 10 days for less than $10,000 each. As of June, seven teams, including Church's lab, had entered the competition. Church, who served for a time on the advisory board of the contest, says that the prize will drive costs down further and help publicize the potential of personalized whole-genome sequencing.

That's important because Church hopes the Polonator and other next-generation instruments will inspire a new generation of smaller labs to begin work in personal genomics, as well as other genetic sciences. Already, the onslaught of technology has jump-started new projects, like sequencing part of the Neanderthal genome, examining extremophile microbes in old California iron mines, and studying the regenerative properties of the salamander. In medicine, cheaper sequencing has enabled research into drug-resistant tuberculosis; the genetics of breast, lung, and other cancers; and the DNA architecture of schizophrenics.

But if the Polonator is going to lead that charge, it has to work — and work on a massive scale. And that means passing a major test: successfully sequencing the 100,000 exomes in the PGP.

All of us know our height, weight, and eye color. Fewer of us know our arm span or resting blood pressure. But who among us knows the direction of our hair whorls or the Gell-Coombs type of our allergies? This is the level of detail that the PGP requires the 100,000 volunteers to reveal about themselves, a list staggering in its exhaustiveness. The PGP will tally head circumferences, injuries, chin clefts and cheek dimples, whether volunteers can roll their tongues or hyperflex their joints, whether they dislike hot climates or are hot tempered, if they've often been exposed to power lines or wood dust or diesel exhaust or textile fibers. The project questionnaire asks how many meals they eat a day and whether they prefer their food fried, broiled, or barbecued. It even demands to know how much television they watch. And, of course, PGP volunteers will hand over most aspects of their medical history, from vaccines to prescriptions.

This phenotype data will be integrated with a volunteer's genomic information, then combined with statistics from all the other subjects to create a potent database ripe for interrogation. In contrast to the heavy lifting that genetic research requires now — each study starts from scratch with a new hypothesis and a fresh crop of subjects, consent forms, and tissue samples — the PGP will automate the research process. Scientists will simply choose a category of phenotype and a possible genetic correlation, and statistically significant associations should flow out of the data like honey from a hive. A genetic predisposition for colon cancer, for instance, might be found to lead to disease only in connection with a diet high in barbecued foods, or a certain form of heart disease might be associated with a particular gene and exposure to a particular virus. Genomic discovery won't be a research problem anymore. It'll be a search function. (This helps explain why Google, among others, has donated to the project).
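[The "search function" picture above can be made concrete with a minimal sketch. The Python snippet below runs one automated genotype-phenotype association query over a toy table of volunteers. Every field, record, and figure in it is hypothetical, and a real PGP analysis would also have to control for ancestry, covariates, and multiple testing.]

from scipy.stats import chi2_contingency

# Toy volunteer records: (carries risk variant?, often eats barbecued food?, colon cancer?)
volunteers = [
    (True,  True,  True),  (True,  True,  True),  (True,  True,  False),
    (True,  False, False), (True,  False, False), (False, True,  False),
    (False, True,  False), (False, False, True),  (False, False, False),
    (False, False, False),
]

def association(records, exposure_index, outcome_index=2):
    """Tabulate exposure vs. outcome into a 2x2 table and run a chi-square test."""
    table = [[0, 0], [0, 0]]
    for record in records:
        table[int(record[exposure_index])][int(record[outcome_index])] += 1
    chi2, p, _, _ = chi2_contingency(table)
    return chi2, p

chi2, p = association(volunteers, exposure_index=0)   # risk variant vs. colon cancer
print(f"variant vs. colon cancer: chi2={chi2:.2f}, p={p:.3f}")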

The process began last year, and each of the first 10 volunteers has a background in medicine or genetics. They include John Halamka, CIO of Harvard Medical School and a physician; Rosalynn Gill, chief science officer at Sciona (a personalized genetics nutrition company); and Steven Pinker, the noted psychologist and author. The other 99,990 participants won't be expected to be so elite, though they will have to pass a genetics-literacy quiz to demonstrate informed consent. The general selection process, which starts with registration at personalgenomes.org, is scheduled to begin later this year.

Besides offering up their genomes, subjects will have to part with some spit and a bit of skin. The saliva contains their microbiome — the trillions of microbes that exist, mostly symbiotically, on and in our bodies. If phenotype is a combination of genotype plus environment, the microbiome is the first wash of that environment over our bodies. By measuring some fraction of it, the PGP should offer a first look at how the genome-to-microbiome-to-phenome chain plays out.

The skin sample goes into storage, creating what would be one of the world's largest biobanks. Members of Church's lab have devised a way to automate turning the skin cells into stem cells, and they hope to publish the technique later this year. (Similar work has been done at the University of Wisconsin and Kyoto University.) By reprogramming the skin cells using synthetically engineered adenoviruses, Church's team can transform the skin cells into many sorts of tissue — lungs, liver, heart. These tissues could be used as a diagnostic baseline to detect predisposition for various diseases. What's more, the reprogrammed cells could be used to treat disease, replacing damaged or failing tissue. It's an intriguing hint of how Church's work with synthetic biology complements genomic sequencing.

If the PGP were simply an exercise in breaking down 100,000 individuals into data streams, it would be ambitious enough. But the project takes one further, truly radical step: In accordance with Church's principle of openness, all the material will be accessible to any researcher (or lurker) who wants to plunder thousands of details from people's lives. Even the tissue banks will be largely accessible. After Church's lab transforms the skin into stem cells, those new cell lines — which have been in notoriously short supply despite their scientific promise — will be open to outside researchers. This is a significant divergence from most biobanks, which typically guard their materials like holy relics and severely restrict access.

For the PGP volunteers, this means they will have to sign on to a principle Church calls open consent, which acknowledges that, even though subjects' names will be removed to make the data anonymous, there's no promise of absolute confidentiality. As Church sees it, any guarantee of privacy is false; there is no way to ensure that a bad actor won't tap into a system and, once there, manage to extract bits of personal information. After all, even de-identified data is subject to misuse: Latanya Sweeney, a computer scientist at Carnegie Mellon University, demonstrated the ease of "re-identification" by cross-referencing anonymized health-insurance records with voter registration rolls. (She found former Massachusetts governor William Weld's medical files by cross-referencing his birth date, zip code, and sex.)
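[Sweeney's linkage attack is easy to state in code. The toy Python sketch below joins a "de-identified" medical table to a public voter roll on the same three quasi-identifiers - birth date, zip code, and sex. All records are fabricated for illustration.]

# Fabricated "de-identified" medical records: names removed, quasi-identifiers kept.
medical_records = [
    {"birth_date": "1945-07-31", "zip": "02138", "sex": "M", "diagnosis": "hypertension"},
    {"birth_date": "1962-03-14", "zip": "02139", "sex": "F", "diagnosis": "asthma"},
]

# Fabricated public voter roll: names present, same quasi-identifiers.
voter_roll = [
    {"name": "J. Doe", "birth_date": "1945-07-31", "zip": "02138", "sex": "M"},
    {"name": "A. Roe", "birth_date": "1962-03-14", "zip": "02139", "sex": "F"},
]

QUASI_IDENTIFIERS = ("birth_date", "zip", "sex")

def linkage_key(record):
    """The join key: the quasi-identifier fields shared by both tables."""
    return tuple(record[field] for field in QUASI_IDENTIFIERS)

names_by_key = {linkage_key(voter): voter["name"] for voter in voter_roll}
for record in medical_records:
    name = names_by_key.get(linkage_key(record))
    if name:   # a unique quasi-identifier combination pins down the identity
        print(f"{name} -> {record['diagnosis']}")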

To Church, open consent isn't just a philosophical consideration; it's also a practical one. If the PGP were locked down, it would be far less valuable as a data source for research — and the pace of research would accordingly be much slower. By making the information open and available, Church hopes to draw curious scientists to the data to pursue their own questions and reach their own insights. The potential fields of inquiry range from medicine to genealogy, forensics, and general biology.

And the openness doesn't serve researchers alone. PGP members will be treated not only as subjects, but as participants. So, for instance, if a researcher uses a volunteer's information to establish a link between some genetic sequence and a risk of disease, that finding would be communicated back to the volunteer.

This is precisely what makes the PGP controversial in genetics circles. Though Church talks about it as the logical successor to the Human Genome Project, other geneticists see it as a risky proposition, not for its privacy policy but for its presumption that the emerging science of genomics already has implications for individual cases. The National Human Genome Research Institute, for example, has cautioned that the burgeoning personal-genomics industry, which includes research-oriented projects like the PGP as well as straight-to-consumer companies like Navigenics and 23andMe and whole-genome-sequencing shops like Knome, puts the sales pitch ahead of the science. "A lot of people would like to rapidly capitalize on this science," says Gregory Feero, a senior adviser at the NHGRI. "But for an individual venturing into this now, it's a risk to start making any judgments or decisions based on current knowledge. At some point, we'll cross over into a time when that's more sensible."

Church cautions, however, that keeping clinicians and patients in the dark about specific genetic information — essentially pretending the data or the technology behind it don't exist — is a farce. Even worse, it violates the principle of openness that leads to the fastest progress. "The ground is changing right underneath them," he says of the medical establishment. "Right now, there's a wall between clinical research and clinical practice. The science isn't jumping over. The PGP is what clinical practice would be like if the research actually made it to the patient."

In the not-too-distant future, Church says, hospitals and clinics could be outfitted with a genome sequencer much the way they now have x-ray machines or microscopes. "In the old books," Church says, "almost every scientist was sitting there with a microscope on their table. Whether they're a physical scientist or a biological scientist, they've got that microscope there. And that inspires me."

Wired deputy editor Thomas Goetz (thomas@wired.com) wrote about personal genomics in issue 15.12.

What You Should Know Before You Spit Into That Test Tube

By Rick Weiss

Washington Post

Sunday, July 20, 2008; Page B01

Jeffrey Gulcher had no reason to think much about prostate cancer. He was just 48, and the disease typically strikes later in life. Even the most cautious medical groups agree that most men need not begin annual prostate screenings until age 50.

But Gulcher happens to be the chief scientific officer of deCODE Genetics -- one of several companies that, amid some controversy, have begun offering direct-to-consumer DNA tests that can help people predict which diseases they are likely to get. So in April, he spat into a test tube and, without giving the matter much thought, sent the sample in for analysis by his own company.

He was in for a shock. The test indicated that he carries a genetic variant that nearly doubles his lifetime risk of getting prostate cancer: While the average man has a 15 percent chance of being stricken, Gulcher had a 30 percent shot. That spurred his physician to order a standard blood test for prostate cancer. The result was toward the high end of the range considered normal, which, together with the DNA test, worried the doctor. He referred Gulcher to a urologist, who performed an exploratory biopsy -- and found that Gulcher's prostate gland was riddled with cancer, and a fairly aggressive version of it at that.

Gulcher is going in for surgery tomorrow, and not a moment too soon. Tests suggest that the disease has not yet spread to other parts of his body, a milestone that often portends death and that may well have been passed had he waited until he turned 50 to get a standard prostate-specific antigen (PSA) test.

Did genetic testing save Gulcher's life? I think it may have. His dramatic story seems to illustrate perfectly the claims, made by his company and others, that an open market of DNA tests is the 21st century's ticket to a healthier nation. But a closer look suggests that this fast-growing industry, with its snazzy Web-based come-ons, could benefit from some temperance and independent oversight.

The technology is undeniably impressive. For as little as $1,000, anybody who can drool into a mailing tube can now find out his or her genetic odds of getting any of 20 or more potentially debilitating diseases, including cancer, heart disease and diabetes. Most of these tests will not lead to a frank diagnosis, as happened with Gulcher. But discovering an inherited propensity toward a particular illness can motivate consumers -- or, as they used to be known, patients -- to get more frequent checkups, take preventive medicines or make lifestyle changes to try to ward off the specter of disease. At last, we seem to be on the cusp of the long-promised personalized-medicine revolution in which gene tests allow physicians to craft far more individualized and effective ways of keeping us well.

But tests that look into the fog of people's medical futures are freighted with tricky medical, economic and bioethical implications. For one thing, most genes are not determinative, so these tests can convey only odds, not destinies. Even with the doubled lifetime risk for cancer that's associated with Gulcher's prostate gene variant, two out of three men who receive a "positive" test for that gene will never get the disease. And many of those who do will get it so late in life and in such a benign form that no treatment would be justified. So that's at least two new members of the "worried well" who could be losing sleep and spending money on unnecessary follow-up tests for every person who would arguably be appropriately forewarned.
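[The arithmetic behind "two out of three" is worth spelling out. A back-of-envelope Python check, using only the risk figures quoted above:]

baseline_risk = 0.15               # average man's lifetime prostate-cancer risk
carrier_risk = 2 * baseline_risk   # the variant roughly doubles that risk

never_affected = 1 - carrier_risk
print(f"carriers who never develop the disease: {never_affected:.0%}")
# prints 70% - i.e. roughly two out of three "positive" tests never lead to the disease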

Moreover, the tests are still new and easily misinterpreted, even by professionals. Online results may be subject to security and privacy breaches. And some companies are using people's gene profiles to conduct independent research. That suggests to many ethicists and lawyers that these firms' paying clients ought to be informed that they are subjects in experiments, with full disclosure of potential risks and rights.

Most worrisome of all, at least a few companies seem to be peddling DNA-based versions of snake oil. Some firms claim to be able to identify inherited nutritional deficiencies that -- guess what? -- are treatable with pricey supplements that they just happen to sell. Some even promise to discern from your genes what kind of person you should marry to ensure a blissful sex life and healthier babies. Welcome to the Wild West of personalized genomics.

These problems are not insurmountable. But there is precious little oversight of this burgeoning new industry, in part because genetic analysis does not fit cleanly into any existing category of medical practice. And if the first wave of DNA-screening companies to hit the market gets its way, there won't be any more adult supervision in the foreseeable future. In a blatant effort to stave off regulation, top officials from all the major competing gene-test companies met early this month and quietly agreed to spend this summer hammering out a "best practices" document that they would promise to follow. This is a great idea, but it's not enough.

No state or federal agency can today assure consumers that the DNA tests they order will give accurate results -- or that the results, even if technically accurate, will have any practical value. The Food and Drug Administration says it has the authority to regulate all gene tests but has decided, at least for now, to ignore the vast majority of those developed so far. The Federal Trade Commission (FTC), which is supposed to protect consumers from fraudulent claims, has never taken an enforcement action against even the most transparently deceptive gene-test companies. And the Centers for Medicare and Medicaid Services, the division within the Department of Health and Human Services that oversees clinical laboratories, has so far opted to steer clear of the genetic-testing world, despite pleas from federal advisers to ensure a minimal standard of gene-test proficiency.

The companies say that what they do is different enough that they should not be shoehorned into the conventional medical-testing rules. "For the first year and a half of our existence, all we did was try to figure out how to fit into the regulatory environment," said Dietrich Stephan, co-founder of Navigenics, a leading California-based gene-test company, adding that the effort cost an estimated $10 million.

That's real money. Yet even with all that preparation, Navigenics and a dozen other testing companies recently received warnings from individual states accusing them of violating state rules for labs. Situations such as this cry out for the guiding hand of the federal government -- not necessarily through cumbersome regulations, which can be too rigid to keep up with quickly changing science, but through formal guidelines, at least, promulgated by HHS. These could set clear expectations about how accurate gene tests should be -- and what it means to be "accurate" in the brave new world of predictive health -- and what level of informed consent should be obtained from clients. The promulgation of such standards will take real effort from HHS Secretary Mike Leavitt, who has championed personalized medicine but who has thus far been largely AWOL on the gene-test issue and has little incentive to push hard in the final months of the Bush administration. The FTC also needs to show that it has teeth and can bite.

Genetic-testing companies need to ante up, too. The responsible ones could buy a lot of good will by offering the public easily accessible scientific details (online and elsewhere) about the specific genes or genetic markers they are testing for; citations for the published studies they use to justify their claims that those genes have real medical relevance; the privacy and security systems they have in place; and the protocols for any experiments that clients' specimens may be used in. The firms should also disclose any approvals they have sought, obtained or denied from independent scientific and ethical review boards.

I took heart that such a future is possible when, at an HHS meeting two weeks ago, I saw chiefs from the five major competing gene-test companies sitting next to one another, speaking cooperatively to federal advisers. If these executives move aggressively to do the right thing, and if federal officials help them with some smart but tough guidance, perhaps those corporate heads can avoid a future in which they are called upon to appear side by side again -- this time before Congress, looking more like those famously photographed tobacco CEOs, being asked tough questions about what exactly they have been selling, and at what cost to American health.

rweiss@americanprogress.org

Rick Weiss, a former science reporter for The Post, is a senior fellow at the Center for American Progress.


^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Principle of Recursive Genome Function Supersedes Dogmas; By Andras Pellionisz, Online Ahead of Print; (Scientific Visionary Vindicated)

[The Principle of Recursion captured in a single color figure (not in the paper; see Supplementary Information)]

[Free Full Text of Journal Article, with clickable references, plus Supplementary Information is available]

A Eureka Moment concerning the fractal character of neurons led in turn to a novel picture of genomics in which protein structures act back recursively upon their DNA code -- in outright contradiction to prevailing orthodoxy. A household name in neuroscience for his tensor network theory, Dr. András Pellionisz has now had another far-reaching insight borne out, receiving striking confirmation in stunning results from the new field of epigenetics -- promising a whole raft of novel medical diagnoses and therapies.

Sunnyvale, Calif. (PRWEB) July 16, 2008 -- A landmark article on "The Principle of Recursive Genome Function" (received December 7, accepted December 18, 2007) by András J. Pellionisz appears online in Springer's e-Journal Cerebellum.

The paper marks the first anniversary of an historic event--the release of pilot results for ENCODE, the Encyclopedia of DNA Elements project. Building on the results of the Human Genome Project, the ENCODE effort revealed a far more complex DNA coding sequence than was ever previously imagined. "There's a lot more going on than we thought," said Francis Collins, who was director of the National Human Genome Research Institute (NHGRI). A year ago, Dr. Collins issued a mandate: "the scientific community will need to rethink some long-held views."

A happy few did not need to rethink either the "central dogma of molecular biology" (Crick, 1956) or the misnomer of "junk" DNA (Ohno, 1972), since they never believed them in the first place. Neither the dictum that a flow of information from proteins back to DNA "never happens" nor the idea that 98.7% of the human genome should be disregarded as junk was ever very believable.

As a direct response to Dr. Collins' call, the principle of recursive genome function (PRGF) in one stroke sweeps away two dogmas which prevailed for over 50 years concerning the function of the double helix.

Recursive genome function is a process whereby proteins iteratively access information packets of DNA to build hierarchies of more complex protein structures. Such recursive development is illustrated by the fractal growth of the cerebellar Purkinje neuron:

[See all Figs. plus color versions here]

Starting from a primary information packet, a Y-shaped, fractal protein template is constructed by a "forward growth" process - in accord with the traditional picture - via transcription of DNA to RNA (where, in turn, RNA assembles amino acids into structural protein). In the course of constructing the Y-shaped template, the primary gene is turned on. Thus, the most primitive part of the process retains Watson's simplified scheme. The principle does not contradict the 'DNA makes RNA makes proteins' picture, but rather goes beyond it - legitimizing the hitherto forbidden feedback from proteins back to DNA and dispensing with the entire notion of junk DNA.

Indeed, the genetically crucial process known as methylation demonstrates just such a "backward" flow. In a stunning reversal of long-held views, it now appears that environmental influences can act directly on the genetic code. Moreover, methylation of DNA is not merely epigenetic, but HoloGenomic.
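[The "forward growth" half of the principle - a small information packet recursively generating an elaborate branched structure - can be mimicked with a Lindenmayer system, sketched below in Python. This is a generic illustration of recursive, rule-driven growth, not Pellionisz's actual FractoGene algorithm.]

# One rewrite rule, applied recursively, grows a branched string.
# '[' and ']' mark branch points; '+' and '-' would set branch angles if drawn.
AXIOM = "Y"                   # the primary, Y-shaped template
RULES = {"Y": "Y[+Y][-Y]"}    # every branch tip recursively sprouts two sub-branches

def grow(state: str, generations: int) -> str:
    """Apply the rewrite rule to every symbol, once per generation."""
    for _ in range(generations):
        state = "".join(RULES.get(symbol, symbol) for symbol in state)
    return state

for g in range(4):
    s = grow(AXIOM, g)
    print(f"generation {g}: {len(s):3d} symbols   {s if len(s) <= 40 else s[:40] + '...'}")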

Dr. Alexandre Akoulitchev, Oxford University, UK (not involved in the study) says: "The PRGF of Pellionisz not only supports his recursive algorithmic approach to the genome (FractoGene), but puts the various meanings of 'epigenetics' into the perspective of clearly defined novel axioms. The PostModern Age of Genomics (starting with his PostGenetics.org) synthesizes inconsistent interpretations and haphazard notions of 'epigenetics' into a solid scientific foundation of HoloGenomics."

Leroy Hood (2003) and finally Richard Dawkins (2008) have suggested that genomics is now a branch of information science. With modern genomics becoming postmodern genome informatics, a natural question arises: What axioms will take the place of outmoded assumptions?

The traditional axioms could not be put to a dignified rest earlier because, as the wisdom has it, "data never kill theories; only a better theory can kill a less tenable theory."

The principle of recursive genome function now plays that fundamental, decisive role. The time has come to go public, after more than a decade of clandestine work - work pursued without even asking for support.

András Pellionisz is a biophysicist, formerly of New York University. Since heading up HelixoMetry in Silicon Valley, he has been busy assembling a portfolio in anticipation of the time when the imposing dogmas and their bulwarks would give way. A widely published author, Pellionisz nonetheless remained largely silent for 15 years to spare himself a collision with the powers that were.

His pioneering work in biological neural networks, aired in over a hundred publications, won him both NIH support and recognition by way of the Alexander von Humboldt Prize for Senior Distinguished American Scientists.

When Pellionisz took the bold step of publishing his research on the fractal geometry of cellular development based on recursive DNA access (1989), his next NIH application was passed over by his peers, and the establishment maintained a double lock on genomics. Its ideology was: don't look back at DNA, since recursion can "never happen" - and even if you did, "there is only junk."

As a scientist with his first degree in engineering, he developed a neural net application for NASA, using the parallel computers of the time (so-called Transputers).

By 2005, fundamental problems with the underlying axioms of genomics had become too obvious. Meanwhile, millions, if not hundreds of millions, were dying of junk DNA diseases while 98.7% of human DNA was officially still considered untouchable.

Together with his fellow pioneers, Dr. Pellionisz launched the trial balloon of the International PostGenetics Society. Indeed, almost a year ahead of the disclosure of the official ENCODE conclusion that "junk" DNA is anything but, the IPGS became the first organization to officially abandon the misnomer, at its European Inaugural in 2006. At that meeting, Pellionisz pioneered the approach of diagnosis (leading to therapy and eventual cure) of junk DNA diseases caused by fractal defects in genomic regulatory sequences.

In late 2006, a manuscript attempting to close the chapter on junk DNA was co-authored by 20 Founders of the IPGS. Those suffering from "junk DNA diseases" probably wish that the manuscript had been given the benefit of peer review.

Instead, the mounting pressure led to the ENCODE results being published three months earlier than planned. Thirty major papers shredded long-held views, printing staggering statements such as "the concept of genes is a myth." A deafening silence ensued.

Rather than heeding Dr. Collins' advice to "rethink long-held beliefs," research went "genome-wide" for more data, as next-generation sequencing made the entire genomes of many species (including humans) available at a rapidly melting price.

Applying brute force to turn out more data, instead of revising axioms, has created its own problems, however. A dreaded DNA data deluge looms large. Without a combination of algorithmic reduction and the proper computing architecture, the brute-force approach of full-genome sequencing and genome-wide analysis has already hit a compute and data wall.

The old bottleneck was "get info" (sequencing to obtain data). The new bottleneck is "use info" (understanding what the sequenced data mean). The promise inherent in the Principle is that algorithmic reduction casts physiological - and therefore pathological - genome function in a new light, clearing the road for rapid advancement beyond a long-overdue breakthrough. In HoloGenomics, all conditions, including non-genic ones, can now be brought into focus. That is the direct response to consumers, including those who are not even patients; for those eager to prevent undesirable conditions, the principle opens an opportunity.
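[As a generic illustration of what "algorithmic reduction" can buy: highly repetitive stretches of sequence compress far below their raw size. Run-length encoding, sketched below in Python, is the crudest such scheme; it merely stands in for the far more sophisticated (e.g. fractal or grammar-based) approaches the text alludes to.]

from itertools import groupby

def run_length_encode(seq: str) -> str:
    """Collapse each run of identical letters into letter + run length."""
    return "".join(f"{base}{sum(1 for _ in run)}" for base, run in groupby(seq))

repeat_region = "A" * 12 + "G" * 8 + "T" * 10      # 30 letters of repetitive sequence
encoded = run_length_encode(repeat_region)
print(f"{encoded}  ({len(repeat_region)} letters -> {len(encoded)})")   # A12G8T10  (30 letters -> 8)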

PRESS CONTACT:
Brian Flanagan
Phone: (+1) 319-338-6250

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Intel, Others Back New DNA Sequencer

By BRIAN GORMLEY

July 14, 2008; Page B6

Investors are pumping $100 million into a start-up developing technology to propel DNA sequencing into mainstream medicine.

The infusion is expected to enable Pacific Biosciences of California Inc. to introduce in 2010 its high-speed system for reading the chemical "letters" of DNA. The technology is designed to expand the use of sequencing to develop treatments tailored to patients' genetic makeup.

In time, for example, pharmaceutical researchers may routinely use sequencing to identify people who respond well, or poorly, to a drug based on specific genetic markers.

The federally funded Human Genome Project, completed in 2003, took 13 years and cost $450 million. In five years, Pacific Biosciences will make it possible to sequence a genome in 15 minutes, Chief Executive Hugh Martin said. The initial product will facilitate genome sequencing in a matter of days, at an undisclosed cost, he said.

Corporate investors, asset managers and venture-capital firms are taking part in the latest funding round for Pacific Biosciences, of Menlo Park, Calif. They include Intel Capital, an arm of Intel Corp.; Deerfield Capital Management LLC; T. Rowe Price Group; Morgan Stanley; FMR LLC; AllianceBernstein Holding LP; Maverick Capital Ltd.; Redmile Group; Alloy Ventures; DAG Ventures; Teachers' Private Capital; Kleiner Perkins Caufield & Byers; and Mohr Davidow Ventures.

A key part of Pacific Biosciences' system is an enzyme called polymerase that human cells use to copy DNA. With the system, scientists break double-stranded DNA molecules into single strands and then fragment the strands. These fragments are fed into chambers on sequencing chips. Then, individual nucleotide letters, each linked to a fluorescent marker, are added.

Inside each chamber, the polymerase enzyme pairs these letters to their corresponding nucleotides on the DNA fragment. As letters bind, they emit flashes of light, enabling scientists to read the DNA fragment's sequence.
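[The readout step can be pictured with a toy Python model: each incorporation event emits a flash whose color identifies the letter. The dye-to-base assignments below are arbitrary placeholders, not Pacific Biosciences' actual chemistry.]

# Toy readout: an ordered series of observed flashes becomes a base sequence.
FLASH_TO_BASE = {"red": "A", "green": "C", "blue": "G", "yellow": "T"}

def read_fragment(flashes):
    """Translate each flash color into the letter it (hypothetically) labels."""
    return "".join(FLASH_TO_BASE[color] for color in flashes)

observed = ["red", "green", "green", "blue", "yellow", "red"]
print(read_fragment(observed))   # -> ACCGTA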

Other companies seeking to cut the time and cost of DNA sequencing include publicly traded Illumina Inc. and Helicos BioSciences Corp., and closely held Complete Genomics Inc.

Write to Brian Gormley at brian.gormley@dowjones.com

[Mark the time. Usually nothing happens in the middle of a business summer. Except on the 14th of July, 2008, when the "Genome" continent piled up on the "Informatics" tectonic plate, causing the long-predicted "Big One" earthquake. - comment by pellionisz_at_junkdna.com, July, 14, 2008]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

New Targets For RNAs That Regulate Genes Identified

[IPGS Founders Drs. Bethany Janowski and David Corey shake the foundations of "Modern Genetics" - AJP]

ScienceDaily (July 6, 2008) — Tiny strands of genetic material called RNA -- a chemical cousin of DNA -- are emerging as major players in gene regulation, the process inside cells that drives all biology and that scientists seek to control in order to fight disease.

The idea that RNA (ribonucleic acid) is involved in activating and inhibiting genes is relatively new, and it has been unclear how RNA strands might regulate the process.

In a new study available online today and in a future issue of Nature Structural and Molecular Biology, RNA experts at UT Southwestern Medical Center found that, contrary to established theories, RNA can interact with a non-gene region of DNA called a promoter region, a sequence of DNA occurring spatially in front of an actual gene. This promoter must be activated before a gene can be turned on.

"Our findings about the underlying mechanisms of RNA-activated gene expression reveal a new and unexpected target for potential drug development," said Dr. David Corey, professor of pharmacology and biochemistry at UT Southwestern and one of the senior authors of the study.

Genes are segments of DNA housed in the nucleus of every cell, and they carry instructions for making proteins. Faulty or mutated genes lead to malfunctioning, missing or overabundant proteins, and any of those conditions can result in disease. Scientists seek to understand the mechanisms by which genes are activated, or expressed, and turned off in order to get a clearer picture of basic cell biology and also to develop medical therapies that affect gene expression.

In previous studies, Dr. Corey and Dr. Bethany Janowski, assistant professor of pharmacology at UT Southwestern and a senior author of the current study, have shown that tiny strands of RNA can be used to activate certain genes in cultured cancer cells. Using strands of RNA that they manufactured in the lab, the researchers showed that the strands regulate gene expression by somehow perturbing a delicate mixture of proteins that surround DNA and control whether or not genes are activated.

Until now, however, it was not clear exactly how the synthetic RNA strands affected that mix of regulating proteins.

In the current study, also carried out in cancer cell cultures, the UT Southwestern research team discovered an unexpected target for the manufactured RNA. The RNA did not home in on the gene itself, but rather on another type of RNA produced by the cell, a so-called noncoding RNA transcript. This type of RNA is found in association with the promoter regions that occur in front of the gene. Promoter regions, when activated, act essentially as a "start" command for turning on genes.

The researchers found that their man-made RNA strand bound to the RNA transcript, which then recruited certain proteins to form an RNA-protein complex. The whole complex then bound to the promoter region, an action that could then either activate or inhibit gene expression.
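[One design step behind such experiments is simple enough to sketch: a synthetic strand meant to base-pair with a target transcript must be its reverse complement, since strands pair antiparallel. The Python snippet below illustrates this with an invented sequence; it is not the UT Southwestern team's actual design pipeline.]

RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """Return the RNA strand that base-pairs with the given transcript."""
    return "".join(RNA_COMPLEMENT[base] for base in reversed(rna))

promoter_transcript = "AUGGCUUACGGA"             # invented noncoding transcript
print(reverse_complement(promoter_transcript))   # -> UCCGUAAGCCAU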

"Involvement of RNA at a gene promoter is a new concept, potentially a big new concept," Dr. Janowski said. "Interactions at gene promoters are critical for understanding disease, and our results bring a new dimension to understanding how genes can be regulated."

Until recently, many scientists believed that proteins alone control gene expression at promoters, but Drs. Corey and Janowski's results suggest that this assumption is not necessarily true.

"By demonstrating how small RNAs can be used to recruit proteins to gene promoters, we have provided further evidence that this phenomenon should be in the mainstream of science," Dr. Corey said.

Although using synthetic RNA to regulate gene expression and possibly treat disease in humans is still in the future, Dr. Corey noted that the type of man-made RNA molecules employed by the UT Southwestern team are already being used in human clinical trials, so progress toward the development of gene-regulating drugs could move quickly.

Other researchers from UT Southwestern involved in the research were lead author and student research assistant Jacob Schwartz; student research assistant Scott Younger; and research associate Ngoc-Bich Nguyen. Researchers from the University of Western Ontario and ISIS Pharmaceuticals also participated.

[These spectacular results by two IPGS Founders are shaking the very foundations of traditional (modern) genetics - but they are totally consistent with the Principle of Recursive Genome Function. Thus, rapid progress is expected along an information-theoretically well-founded concept, together with its experimental verification or falsification. - comment by pellionisz_at_junkdna.com, July, 7, 2008]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Science is being held back by outdated laws

From The Times

July 5, 2008

The question "who owns science?" is now crucial

Sir, It is now widely recognised that the system of law and practice that has regulated science and protected the rights of those who make scientific discoveries and turn them into products and therapies in a process known as “innovation” is unfit to serve the needs of the contemporary world.

Science and the innovation it generates is a vast enterprise: commercial and pro-bono, public and private, industrial and educational, amateur and professional. It permeates our lives and shapes the world. Some would say it is the defining characteristic of modern society, stimulating and harnessing our innate curiosity and, more than any other endeavour, shaping our world and, increasingly, ourselves.

An important component of the innovation process is the idea of ownership of science and technology and its products, enabling profits to be made from research and development. The question of “Who owns science?” is therefore a crucial one, the answer to which will have broad-reaching implications for scientific progress and for the way in which the benefits of science are distributed, fairly or otherwise. Two of the most pressing issues concern equity of access to scientific knowledge and the useful products that arise from that knowledge.

The current system of managing research and innovation incorporates a complex body of law governing the ownership of “intellectual property” — copyright and patents being the most familiar. Intellectual property rights are intended to provide incentives that encourage the advancement of science, enhance the pace of innovation, increase the derived economic benefits and provide a fair way of regulating access to these benefits. But does it really achieve these purposes? There is increasing concern that, to the contrary, it may, under some circumstances, impede innovation, lead to monopolisation, and unduly restrict access to the benefits of knowledge.

We believe it is time to reassess the effect of the present regime of intellectual property rights, especially with respect to the area of patent law, on science, innovation and access to technologies and determine whether it is liberating — or crushing; whether it operates to promote scientific progress and human welfare – or to frustrate it.

The second issue we wish to highlight is that of access to science itself. The ideal shared by almost all scientists is that science should be open and transparent, not just in its practices and procedures, but so that the results and the knowledge generated through research should be freely accessible to all. There is a broad consensus in the scientific community that such openness and transparency promotes the advancement of science and enhances the likelihood that the benefits of science are enjoyed by all. For more than a hundred years, these principles have been the bedrock of academia and the scientific community.

We call upon all interested in the future of science to join with us in an active and open-ended search for answers.

John Sulston

Chair, Institute for Science, Ethics and Innovation, University of Manchester

Joseph Stiglitz

Chair, Brooks World Poverty Institute, University of Manchester

[The question "who owns science?" is a very simple one. The scientist owns the piece of science he/she discovered. It is a much more complicated problem for whom, and at what price, he/she is going to sell the commodity. In some civilized societies scientists are kept well-fed and taken care of - thus, as on a farm, although the cow owns her own milk, the "gentlemen's agreement" between the farmer and the cow passes the property (butter included) to the farmer, in exchange for the room and board the cow is getting. In the jungle, the tiger owns her prey - and it takes a fight to get it from her claws. Most societies are somewhere in between - and since science is global, we have the mess that scientists have to live with. - comment by pellionisz_at_junkdna.com, July, 5, 2008]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When's a Gene Test Not a Gene Test?

Jun 25 2008 3:11PM EDT
Wired

Genetic-testing company Navigenics has responded to the state of California's cease-and-desist letter with a novel defense: It doesn't actually test patients' genomes, it just analyzes 'em.

In a letter sent to the Health Department obtained by Wired.com, the company argues that it does not actually perform genetic tests, and therefore should not be regulated as a clinical laboratory under California state law.

Instead, Navigenics argues, it merely applies algorithms to DNA data it receives from tests performed by a third party: a licensed laboratory.

As we noted Monday, this regulatory battle hinges on the definition of a clinical laboratory test.

"Nothing in the definition of a clinical laboratory test supports a conclusion that the interpretation of the data resulting from such a test is itself a test," Navigenics wrote in its response.

Though abstruse, these definitions could shape the long-term future of genetic testing. The arguments boil down to whether or not the information contained in your DNA should be treated like blood or like data.

Navigenics is arguing that once the state-licensed lab turns a biological sample into digital data, DNA is no longer within the purview of health department laboratory regulation. On this view, Navigenics is just an information service, combining published genetic disease-correlation data with personal genotype data.
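[The "information service" layer Navigenics describes can be sketched in a few lines of Python: a licensed lab emits digital genotype calls, and the service merely looks them up against published association data. The SNP id, risk figures, and file format below are hypothetical placeholders, not Navigenics' actual pipeline.]

import csv, io

# Published association data: (SNP id, genotype) -> illustrative relative risk.
published_associations = {
    ("rs0000001", "AA"): 1.0,
    ("rs0000001", "AG"): 1.3,
    ("rs0000001", "GG"): 1.8,
}

# Digital genotype calls as they might arrive from the licensed lab.
raw_lab_output = io.StringIO("snp,genotype\nrs0000001,AG\n")

for row in csv.DictReader(raw_lab_output):
    risk = published_associations.get((row["snp"], row["genotype"]))
    if risk is not None:
        print(f"{row['snp']} {row['genotype']}: relative risk {risk}")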

Whether or not the health department (or eventually the courts) will buy this argument remains to be seen. The state is reviewing responses from the thirteen companies it served with cease-and-desist letters.

According to Navigenics, it contracts the actual biological work to a Federally-certified and California-licensed lab run by Affymetrix, so it never touches the spit-containing DNA that forms the basis of genetic testing. What Navigenics receives from Affymetrix is merely digital data about a person's genetic variations.

In that way, it argues, the company merely interprets clinical lab tests, much like a physician would, and physicians are not regulated as clinical labs.

The state, on the other hand, holds that because Navigenics obtains the biological data, it is essentially doing the test.

Navigenics also proposes a second line of defense, relating to the necessity of including a physician in ordering a genetic test.

Navigenics has argued all along that it has a California physician who actually orders and receives the tests, but it is not clear whether any physician can order a test, or whether it has to be "your doctor" (whatever that means in today's health care system).

The letter to the health department makes a strong argument that Navigenics' on-staff doctor can order a test even if the test is initiated by a consumer. It quotes from a 2003 communication between the health department and Quest Diagnostics, in which the agency recognizes the difference between ordering a prescription drug and a clinical test.

"Until and unless the law is changed, it would appear that any licensed physician in California may order laboratory tests on persons of whom they have no knowledge," the health department wrote.

With these two trenches dug, Navigenics also extended an olive branch to the health department in the form of a three-page letter from the company's CEO Mari Baker.

"I look forward to the opportunity to meet with you and your team as soon as possible, and preferably within the next two weeks, to fully brief you on our company's approach and operational practices," Baker wrote.

The genetic testing industry, led by Navigenics and 23andMe, is eager to come up with a regulatory framework that would allow its businesses to run smoothly and get the health department out of their hair, said Rick Weiss, a senior fellow at the Center for American Progress.

"The companies' responses have been quite cordial. It's been, 'OK, let's talk,'" Weiss said. "They clearly want to hammer out a system that will allow this industry to grow."

Still, both companies have told the state of California that they plan to remain in business without substantive changes to the business practices that earned them cease-and-desist letters in the first place.

Weiss sees that as a sign that the two sides need to acknowledge that genetic testing is something new that falls outside the existing regulatory paradigm.

"[Genetic testing companies] are offering something new that doesn't fit into the landscape of clinical laboratory regulation," he said. "There needs to be some sort of dialogue between regulators and the industry."

by Alexis Madrigal for Wired.com

[Navigenics' legal theory is brilliant - even if it will break "Personal Genomics" into two separate industries. Of course, with KPCB (with Al Gore and Colin Powell as Venture Capitalist Partners) behind Navigenics, nobody should expect an amateurish fight with the government from KPCB, and thus from Navigenics. There is no chance of winning a fight against the principle that everyone who owns a genome has an absolute right to know his/her genome information. (The "freedom of information act" further asserts this natural right.) It has been widely acknowledged, internationally, that "Genetics became a branch of informatics". Therefore, once one's genome is turned into digital information, IT companies can provide all the necessary software to analyze ANY kind of information. Health departments (whether state or federal) will never win a fight over an issue they hardly know anything about. The "best" they can achieve is to break "Personal Genomics" into two separate industries: properly certified laboratories converting specimens (e.g. saliva) into digital genome information for the owner of the genome (the person who wants that conversion performed), after which the person has an absolute right to use any software available to him/her to analyze the genomic information that he/she owns. It is not certain that breaking the industry into two in the USA serves American interests, since services offshored to Iceland (or China) in an integrated manner may put the USA at a competitive disadvantage. US bureaucrats had better learn some basics of Personal Genomics before imposing any disadvantages on their taxpayers. - comment by pellionisz_at_junkdna.com, June 26, 2008]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

First Anniversary of ENCODE: The Principle of Recursive Genome Function

[The full paper is available here with more clickable references than in the Online publication by Springer, Heidelberg & New York]

"Upon publication of the results of project Encyclopedia of DNA Elements (ENCODE), a 4-year research effort let by the US Government, its architect issued a mandate: 'The scientific community will need to rethink some long-held views [Francis Collins, June 2007].

What views require our revision? The sequencing of the human genome engendered the idea of genomics as information science [Leroy Hood, 2003]. New avenues are opening up to scientific research as breakthroughs out of a decades-old theoretical cul-de-sac lead to theoretical and experimental advances... This paper reviews the dichotomy of genomics, with its historical conflicts over the gene and gene regulation, and offers a guiding principle for their synthesis. Introduction of this principle is made possible by a long-delayed but now respectful removal of two pragmatic dogmas, replacing them by a sound information-theoretical axiom..."

[This "re-thinking" was submitted within half a year of Francis Collins' call on December 7, 2007, accepted on December 18, 2007 and published online by The Cerebellum, a Springer New York peer-reviewed science journal to commemorate the First Anniversary of ENCODE, on the 20th of June, 2008.]

Colleagues - not involved in the study - comment:

"Epigenetics is rapidly revising Darwin and opening whole new realms in medical science and technology. Dr. Andras Pellionisz, who is Director of Genome Informatics at Mitrionics in Silicon Valley, saw it all coming decades ago.

A scientific visionary, Pellionisz has made a career of being ahead of his time. His profoundly influential work on the mathematics of brain function inspired generations of leaders in neuroscience, including the Churchlands.

This time around, his far-sighted work on fractal mechanisms in genetics has recently been vindicated.

Pellionisz drew attention early on to the fractal character of dendritic trees. He then moved on to genetics, where he argued that gene expression is not, as was dogmatically asserted, a one-way street from DNA to organism, but rather a recursive process akin to the generation of fractals.

Today, in groundbreaking work on 'nanotrees,' scientists have produced a spiral shape akin to the helical structure of DNA -- an aperiodic crystal. Nanotrees are microscopic structures which result from crystalline 'defects' in nanowires.

Although these developments are fast-breaking and the implications have yet to be worked out, it seems clear enough that we have here a direct path from the atomic symmetries of quantum theory through simple fractal crystals through DNA and on up the ladder to fractal neurons.

What's also clear is that these developments open entire new vistas for R&D on medical diagnosis, therapy and up to cure for a host of diseases and hologenomic risk factors and disorders."

[Brian Flanagan, USA]

---

"PRGF of Pellionisz is helping not only his algorithmic recursive approach to the genome (FractoGene), but puts ‘epigenetics’ into the perspective of clearly defined novel axioms.

The PostModern Age of Genomics (starting with his PostGenetics) embraces many interpretations and examples of ‘epigenetics’ and synthesizes these haphazard notions into a solid scientific foundation for our era of HoloGenomics."

[Alexandre Akoulitchev, Fellow of the Royal Society of Medicine, Oxford University, UK]

---

“Based on Pellionisz' 'Principle of Recursive Genome Function', the puzzling history becomes understandable: why the first wave of suspicion in the late 1980s and early 1990s that DNA is fractal - his own fractal Purkinje neuron model (1989), and the efforts of Drs. Buldyrev, Stanley et al. (1993), Flam (1994) and Mantegna (1994) - could not break through.

Those efforts violated not one but two prevailing dogmas - "junk DNA” and “no feedback recursion" - and could not provide a replacement for them.

Now, with Pellionisz’ Principle, such a unifying synthesis is available to greatly facilitate progress. Therefore, an avalanche of recursive algorithms is likely to follow his breakthrough.

Fractals, however, are very computation-intensive, though an algorithmic approach, and especially fractal data compression, is likely to significantly ease the burden of the brute-force approach to coping with the dreaded DNA data deluge."

[Jules Ruis, Director of EU Center of Excellence of Fractal Design, THE NETHERLANDS]

---

“The Principle accounts for and smoothly puts together hitherto ill-fitting pieces in the old puzzle. The different picture, with a new meaning, calls for a new name.

With PRGF, the unresolved relationship between mathematical information and biological formation is explained by the repetitive action and consequent feedback of the genome, where incremental DNA information refines formative protein growth, governed, for example, by the algorithmic guidance of fractals.

Reading The Principle, Eugene Wigner’s reminder, issued in the year of the discovery of Operon regulation (1961), rings loudly in our ears:

‘There is a contradiction between the model of reproduction proposed by Crick and Watson, in which a determined mechanism is transferring the characteristics to the descendants. This model is also based on classic and not quantal concepts ... the particulars of this model are not completely worked out (Eugene Wigner; The Probability of the Existence of a Self-Reproducing Unit, In: The Logic of Personal Knowledge, Routledge and Kegan Paul Ltd, London 1961, p. 231)’”

Another early giant, John von Neumann, architect of both serial and parallel computers in the 1940s-1950s, would probably look upon this progress with interest as well, since genome computing is likely to bring about a synthesis of both architectures.

[The Michael Conrad Group for Bioinformation and Biocomputation Research; E. Perjes, E. Pataki and I. Szentesi, HUNGARY]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Genetics Became Information Science - The Holistic View of Genome Structure and Function

See the video of Richard Dawkins actually saying so at the 2008 DLD conference with Craig Venter (below):

Neither of them was the first to say so; at least as early as 2003, Leroy Hood had already come to the same realization. So did Francis Collins ("God's language" [DNA and mathematics]). Dr. Dawkins' admission of the handover from molecular biology to informatics is paramount because he is "neutral" - neither a molecular biologist nor a genome informatics specialist. As arguably the most influential science communicator, he has no vested interest (unlike Leroy Hood or Eric Lander) in declaring that genome informatics now holds the torch.

Indeed, it has become common knowledge: "At this point, the old sort of science is almost entirely irrelevant. 'It now has come out of the labs and into the domain of informatics,' Butcher says" (Basically, the DNA is a computing problem).

So, what are we waiting for? Isn't it time for Genome Informatics to come forward with a Holistic perspective in which not only do the outmoded "Genes" thesis and "Junk DNA" antithesis become a genome-wide synthesis, but molecular biology and genome informatics are unified into HoloGenomics?

[A. Pellionisz, commemorating the First Anniversary of the ENCODE release, June 14, 2008 - pellionisz_at_junkdna.com]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

BIOCOM 2008: Turning personalized medicine into reality

By Terri Somers

UNION-TRIBUNE STAFF WRITER

June 18, 2008

[In Genomics, Collins led the chase for genes. In the new Post-ENCODE era of HoloGenomics, there is a new role - AJP]

It's been a hot couple of years for genomics, the study of the long chain of chemicals that determines all the hereditary information in a person's DNA, from hair and eye color to propensity for disease.

First, the Human Genome Project mapped the entire human genome, a chain of 3 billion chemicals, giving scientists a template for comparing the genetic makeup of all individuals to spot differences and their possible relationship with disease.

Subsequent advances in technology have led to machines that quickly and efficiently read samples of genes, allowing scientists to uncover more than 100 genetic links to disease. Ultimately, their goal is personalized medicine – finding specific therapies that work for each individual.

Some of the world's top genetic experts, including Francis Collins and Craig Venter – two scientists who played key roles in mapping the human genome – are in San Diego this week for the BIO International Convention, discussing the technologies that might help them reach that goal.

The Human Genome Project allowed scientists a glimpse of diseases caused by a single gene, which were few, Collins said during a panel discussion yesterday on innovation in genomics. The information from the project was not helpful in unraveling complex genetic diseases such as diabetes, hypertension, cancer and Alzheimer's, Collins said.

But innovation since then has led to a dramatic change, Collins said. Government-funded programs such as the 1000 Genomes Project, which will collect information on the genomes of 1,000 people from various ethnic backgrounds and store it in a public database, will give scientists an even more detailed catalog of human genetic variation, Collins said.

“Genomics is a necessary step,” said Charles Cantor, chief science officer at San Diego-based Sequenom, which is developing a noninvasive genetic test for pregnant women and their fetuses.

“Now we know what the genes are, but we still have to figure out which are important to which traits and disease,” said Cantor, who was director of the Human Genome Center at the Lawrence Berkeley National Laboratory. “It's like having a list of parts to your automobile but not knowing how to build it, or how it works, but knowing that the tires play a very important role.”

One technology pushing the science forward is gene sequencing, which determines the order of the chemical bases along a genome and identifies mutations.

Illumina, another San Diego company, has played a role in the field with its Solexa sequencing technology. And it has benefited from the boom, with its revenue jumping from $184.6 million in 2006 to $366.8 million for 2007.

Last week, Carlsbad's Invitrogen spent $6.4 billion to become a bigger competitor in the field, acquiring Applied Biosystems and its system for genetic testing. Applied is best known for supplying genetic-analysis machines to the Human Genome Project.

Only a few years ago, mapping a genome cost millions of dollars and took years to complete. Now the process is faster and costs just thousands of dollars.

“People have generated more data about the human genome with our next-generation sequencing (tools) than in all of history,” Illumina Chief Executive Jay Flatley said. “We've sequenced three human genomes since December. In the history of the world, only seven or eight genomes had been sequenced previously.”

Last June, Elaine Mardis' lab at Washington University in St. Louis began mapping the genome of a leukemia patient. The task was finished in six months, said Mardis, who heads the university's Genome Sequencing Center.

The cost of mapping an individual human genome is now flirting with $100,000, said Mardis, who will be at the BIO conference discussing questions that may be answered using this improved technology.

Collins, who heads the National Human Genome Research Institute, is pushing for the cost to drop to $1,000.

And the X Prize Foundation, an educational nonprofit organization that seeks to solve scientific challenges, is offering a $10 million prize to whoever can develop technology for the $1,000 human genome test. Venter, who has had his own complete genome sequenced, is a member of the foundation's board.

Faster and cheaper sequencing means more people's genomes can be read, providing information on which mutations are rare and appear to be linked to disease. The larger the database of genomic information grows, the more statistical power it gives scientists, Mardis said.

This could be very important for understanding diseases such as cancer, which are thought to be connected to structural changes in the genome.

At the J. Craig Venter Institute in La Jolla, scientists are testing genome-sequencing technologies to determine which is most accurate, Venter said. Since his genome is one of only two that have been completely sequenced and published, Venter and his co-workers can use it as a standard for comparison.

A broader database of human genome information will help prioritize drug development, such as the work being done on cancer, where a single drug probably cannot be developed to work on all the genes that play a role in the disease, said Dr. Lynda Chin of Harvard's Dana-Farber Cancer Institute.

Genetic information will give researchers and physicians a blueprint on how to use drugs, Chin said.

Chin, who is speaking at BIO, is chairwoman of Harvard's Cancer Genome Atlas, a three-year pilot project of the National Cancer Institute and the National Human Genome Research Institute that seeks to understand the genomic changes that occur in cancer.

All the experts agree: The Human Genome Project, and the research conducted during the ongoing genomics boom, showed scientists that they still have much more to learn about genetics and disease. But the growing body of knowledge is helping them to ask better questions.

For instance, researchers once thought an individual's genome, which is essentially a chemical chain, remained unchanged. They now know that environmental factors can cause alterations, Cantor said.

As a result, the study of epigenetics – environmentally influenced chemical changes that alter gene activity without changing the DNA sequence itself – is beginning to take off, he said.

“Just because you have a certain set of genes linked to a disease like cancer doesn't necessarily mean that's going to happen,” Cantor said. “You have to look at genetic/environment interaction.”

Products that allow researchers to determine whether a specific gene has been turned on or off are just coming onto the market, Illumina's Flatley said.

Genomics has also revved up research into biomarkers – traces of chemicals that can be found in blood, urine, skin and other cells that indicate whether a specific gene is performing a function that can be linked to disease.

Sequenom, for instance, is developing a diagnostic test that would look for certain biomarkers from an unborn child. Those biomarkers would be traces of chemicals from the fetus, such as proteins, that show whether specific genes in the unborn baby are functioning in a way that is linked to disease.

Meanwhile, there are obvious signs we are moving closer toward personalized medicine and away from the concept of “the normal dose for an average white male,” Flatley said.

There is some ability now to determine whether a drug, such as the expensive cancer therapy Herceptin, will work on a specific individual. And Genoptix, a Carlsbad company, tests blood samples of people with cancer to help oncologists determine which drugs and what dosages would be the best course of treatment.

Some companies are working on cancer vaccines, drugs derived from a cancer antigen on a patient's tumor. An antigen is a protein on the surface of a tumor cell that varies from patient to patient.

In theory, such a cancer vaccine is designed to teach the body's immune system to recognize and kill the tumor cells.

“My projection is that five years from now we are going to be genotyping everyone when they are born, so we have the basic knowledge of an individual's genetics as part of their medical record,” Flatley said. “It will be essential for doctors to determine the right dose of the right drug at the right time.”

[While Dr. Collins has a plethora of choices for his new role(s), this time in HoloGenomics, it will be immensely difficult if not outright impossible to find his replacement at the helm of NHGRI (an NIH Institute). Why? Because, as declared e.g. by Leroy Hood (2003) and very recently by Richard Dawkins (2008), who cannot be accused of having a personal stake in genome informatics, "Genomics became a branch of information science". If so, the replacement of Dr. Collins must be a Genome Informatics specialist, and the kind who anticipated and actively fostered a new kind of Genomics for its second century. It is conceivable that the old funding structure, in which NIH controlled a great chunk of government funds for Genomics (with the Department of Energy, for historical reasons of nuclear effects on the genome, also playing an important role, while neither NSF nor DARPA played primary roles), needs a "bit of adjustment". It might not be a bad idea, especially since other countries have already established their "National Programs" for Genomics, to set up (or shape an existing government entity, perhaps NSF or DARPA, into) a "USA HoloGenomics Program", if for nothing else, for coordination purposes. pellionisz_at_junkdna.com June 19, 2008]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


First Anniversary of ENCODE - video interviews with leading Genomics experts in Cold Spring Harbor

LEIF ANDERSSON

KELLY FRAZER

MICHAEL ASHBURNER

RICHARD GIBBS

LEENA PELTONEN

DAVID BENTLEY

DAVID HAUSSLER

DAVID SCHWARTZ

LEO BRIZUELA

THOMAS HUDSON

KARI STEFANSSON

CARLOS BUSTAMANTE

MICHAEL LEVINE

DIETRICH STEPHAN

MICHELE CLAMP

JOSEPH MCINERNEY

HILLARY SUSSMAN

JOSEPH ECKER

ELAINE MARDIS

GEORGE WEINSTOCK

XAVIER ESTIVILL

RICK MYERS

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Calif. cracks down on 13 genetic testing startups

By MARCUS WOHLSEN

SAN FRANCISCO (AP) — California health regulators have demanded that 13 direct-to-consumer genetic testing startups halt sales in the state until they prove they meet state standards.

The state Department of Public Health sent the cease-and-desist letters last week following an investigation spurred by consumer complaints about the tests' accuracy and cost, a department spokeswoman said Monday.

Two of the most visible companies to offer consumer genetic tests — Redwood Shores-based Navigenics Inc. and Mountain View-based 23andMe Inc. — confirmed receiving the letters.

Health officials would not identify the companies involved until confirming they had received the letters but said all the targeted companies advertise on the Internet.

All the companies have two weeks to demonstrate to regulators that their laboratories are certified by the state and federal governments, said department spokeswoman Lea Brooks. The startups also must show the tests they are selling California residents have been ordered by a doctor as required by state law.

"There's either concern they don't have a license, there isn't a physician's order, or both," Brooks said. "That's what's under investigation."

Companies face fines of up to $3,000 a day.

The New York State Department of Health issued similar notices to nearly two dozen testing companies in April.

The crackdowns follow the launch of a batch of new DNA analysis services spawned by recent genetic discoveries. The mostly Web-based services will scan customers' genes to spot potential health risks, from cancer to lower back pain.

State and federal public health officials have urged consumers to be skeptical, pointing out that related research is in its earliest stages and doctors have little training in interpreting the results.

The federal Food and Drug Administration does not evaluate the tests for accuracy, though a federal panel recently recommended stepped-up oversight to ensure their validity.

A spokeswoman for 23andMe, which has financial backing from Google Inc. and Genentech Inc., described the company as an "informational service."

"What we do is offer people information about their genetic makeup, including ancestry and applicable scientific research," spokeswoman Rachel Cohen said. The company scans customers' DNA for about $1,000. Cohen declined to say Monday whether 23andMe had halted sales in California.

Navigenics charges $2,500 to screen nearly 2 million genetic markers in a DNA sample — typically a swab of saliva — for potential health risks.

In a statement released Monday, Navigenics said it believed it was in full compliance with California law. The company said it would submit details to regulators showing its labs were certified and its tests are ordered and reviewed by California-licensed physicians.

[Good news for Navigenics - any controversy will ratchet up interest, and since Navigenics claims it is in full compliance with California law, it can go forward "business as usual" while competitors may be slowed down. pellionisz_at_junkdna.com June 13, 2008]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

First ENCODE Anniversary

Today (June 14, 2008) marks the First Anniversary of the ENCODE Pilot-result release.

It is time to take stock of whether Francis Collins' call for the science community to "re-think long-held views" was appropriately answered within a single year.

An announcement in a few days will provide a definite answer. [UPDATE: See announcement of The Principle of Recursive Genome Function, Online by Springer, New York]

[More info at pellionisz_at_junkdna.com June 14, 2008]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Applied Biosystems Joins 1000 Genomes Project

June 13, 2008

Foster City, CA - Leaders of the 1000 Genomes Project announced recently that three firms that have pioneered development of new sequencing technologies have joined the international effort to build the most detailed map to date of human genetic variation as a tool for medical research. The new participants are: Applied Biosystems, an Applera Corporation business in Foster City, Calif.; 454 Life Sciences, a Roche company in Branford, Conn.; and Illumina Inc. in San Diego.

The 1000 Genomes Project, which was announced in January 2008, is an international research consortium that is creating a new map of the human genome that will provide a view of biomedically relevant DNA variations at a resolution unmatched by current resources. Organizations that have already committed major support to the project are: the Beijing Genomics Institute, Shenzhen, China; the Wellcome Trust Sanger Institute, Hinxton, Cambridge, U.K.; and the National Human Genome Research Institute (NHGRI), part of the National Institutes of Health. The NHGRI-supported work is being done by the institute's Large-Scale Sequencing Network, which includes the Human Genome Sequencing Center at Baylor College of Medicine, Houston; the Broad Institute of MIT and Harvard, Cambridge, Mass.; and the Washington University Genome Sequencing Center at Washington University School of Medicine, St. Louis.

"The additional sequencing capacity and expertise provided by the three companies in the pilot phase will enable us to explore the human genome with even greater depth and speed than we had originally envisioned, and will help us to optimize the design of the full study to follow," said Richard Durbin, Ph.D., of the Wellcome Trust Sanger Institute, who is co-chair of the consortium. "It is a win-win arrangement for all involved. The companies will gain an exciting opportunity to test their technologies on hundreds of samples of human DNA, and the project will obtain data and insight to achieve its goals in a more efficient and cost-effective manner than we could without their help."

The genetic blueprints, or genomes, of any two humans are more than 99 percent the same. Still, the small fraction of genetic material that varies among people holds valuable clues to individual differences in susceptibility to disease, response to drugs and sensitivity to environmental factors.

The 1000 Genomes Project builds upon the International HapMap Project, which produced a comprehensive catalog of human genetic variation – variation that is organized into neighborhoods called haplotypes. The HapMap catalog laid the foundation for the recent explosion of genome-wide association studies that have identified more than 130 genetic variants linked to a wide range of common diseases, including type 2 diabetes, coronary artery disease, prostate and breast cancers, rheumatoid arthritis, inflammatory bowel disease and a number of mental illnesses.

The HapMap catalog, however, only identifies genetic variants that are present at a frequency of 5 percent or greater. The catalog produced by the 1000 Genomes Project will map many more details of the human genome and how it varies among individuals, identifying genetic variants that are present at a frequency of 1 percent across most of the genome and down to 0.5 percent or lower within genes. The 1000 Genomes Project's high-resolution catalog will serve to accelerate many future studies of people with specific illnesses.
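To make those thresholds concrete, here is a minimal sketch (with made-up counts) of how a variant's allele frequency is computed from sampled chromosomes and compared against the 5 percent (HapMap-style) and 1 percent (1000 Genomes-style) detection cutoffs:

    # Hypothetical sketch: allele-frequency cutoffs at HapMap vs. 1000
    # Genomes resolution. Each person carries two copies of each site,
    # so 1,000 sequenced individuals yield 2,000 sampled chromosomes.
    variants = {"rsA": 140, "rsB": 30, "rsC": 7}   # made-up alt-allele counts
    chromosomes = 2 * 1000

    for name, count in variants.items():
        freq = count / chromosomes
        in_hapmap_range = freq >= 0.05   # HapMap catalogued variants at >= 5%
        in_1kg_range = freq >= 0.01      # 1000 Genomes targets >= 1% genome-wide
        print(f"{name}: {freq:.2%}  HapMap-visible={in_hapmap_range}  1000Genomes-visible={in_1kg_range}")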

"In some ways, this application of the new sequencing technologies is like building bigger telescopes," said NHGRI Director Francis S. Collins, M.D., Ph.D. "Just as astronomers see farther and more clearly into the universe with bigger telescopes, the results of the 1000 Genomes Project will give us greater resolution as we view our own genetic blueprint. We'll be able to see more things more clearly than ever before and that will be important for understanding the genetic contributions to health and illness."

The HapMap was based mainly on genotyping technology, in which genetic markers were used to broadly scan the genome. In contrast, the 1000 Genomes Project catalog will be built on sequencing technology, in which the genome is examined at the level of individual DNA letters, or bases. The increased resolution will enable the 1000 Genomes' map to provide researchers with far more genomic context than the HapMap, including more precise information about the genetic variants that might directly contribute to disease.

To enhance the production of the 1000 Genomes map, each of the three biotech companies, including Applied Biosystems, has agreed to sequence the equivalent of 75 billion DNA bases as part of the pilot phase. The human genome contains about 3 billion bases. Consequently, each company will contribute the equivalent of 25 human genomes over the next year, and additional sequence data over the project's expected three-year timeline. In addition, Applied Biosystems is expected to contribute an additional 200 billion bases of human sequence through its collaboration with Baylor.
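The "25 human genomes" figure follows directly from the quoted numbers, as a few lines of arithmetic confirm:

    # Checking the article's arithmetic on genome equivalents.
    pledged_bases = 75e9     # 75 billion bases per company, pilot phase
    genome_size = 3e9        # ~3 billion bases per human genome
    print(pledged_bases / genome_size)   # 25.0 genome equivalents per company
    print(200e9 / genome_size)           # ~66.7 more from the ABI-Baylor collaboration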

"This project is clearly the most ambitious and comprehensive study to date of the human genome," said Francisco de la Vega, distinguished scientific fellow and vice president for SOLiD System applications and bioinformatics at Applied Biosystems. "Our participation continues our commitment to partner with the scientific community to explore the genetic factors involved in human disease."

In its first phase, expected to last about a year, the 1000 Genomes Project is conducting three pilots that will be used to decide the best strategies for achieving the goals of the full-scale effort. The first pilot involves sequencing the genomes of six people (two nuclear families) at high resolution; the second involves sequencing the genomes of 180 people at lower resolution; and the third involves sequencing the coding regions of 1,000 genes in about 1,000 people.

The full-scale project will involve sequencing the genomes of at least 1,000 people, drawn from several populations around the world. The project will use samples from donors who have given informed consent for their DNA to be analyzed and placed in public databases. Most of these samples have already been collected, and any additional samples will come from specific populations. The data will contain no medical or personal identifying information about the donors.

Given the rapid pace of sequencing technology development, the cost of the entire effort is difficult to estimate, but is expected to be about $60M. The sequence data provided by the three companies are estimated to be worth approximately $700,000 for the pilot phase, and the firms are expected to contribute much more sequencing to the full project.

Already, the 1000 Genomes Project has generated such vast quantities of data that the information is taxing the current capacity of public research databases. Since the first phase was begun in late January, project participants have produced and deposited some 240 billion bases of genetic information with the European Bioinformatics Institute and the National Center for Biotechnology Information, a part of the U.S. National Library of Medicine. Data generated by the 1000 Genomes Project also will be distributed from a mirror site at BGI Shenzhen.

Along with contributing sequencing capacity, Applied Biosystems, like all other project participants, has agreed to comply with the policies established by the 1000 Genomes Project Steering Committee. Those policies include rapid public release of the data, with project participants having no early access; an intellectual property policy that precludes any participant from controlling the information produced by the project; regular progress reporting; and coordination of scientific publications with the rest of the consortium.

Additional information about the project can be found at http://www.1000genomes.org/.

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


Apple in Parallel: Turning the PC World Upside Down?

Steve Jobs' Parappler Computer [AJP]

June 10, 2008, 9:50 am
By John Markoff

At the outset of his presentation at the opening session of Apple’s Worldwide Developers Conference, Steve Jobs showed a slide of a stool with three legs to describe the company’s businesses: Macintosh, music and the iPhone.

The company is making another bet on parallelism, and the implications may be more profound than anyone yet realizes.

In describing the next version of the Mac OS X operating system, dubbed Snow Leopard, Mr. Jobs said Apple would focus principally on technology for the next generation of the industry’s increasingly parallel computer processors.

Today the personal computer industry is going through a wrenching change in trying to find a way to keep up with the speed increases that were the hallmark of the PC business until about five years ago. At that point, companies like Intel, I.B.M. and A.M.D. had simply lived off their continual ability to increase the clock speeds of their microprocessors. But the industry hit a wall as chips reached the melting point.

As a consequence, the industry shifted gears and began making lower-power processors that added multiple C.P.U.’s. The idea was to gain speed by breaking up problems into multiple pieces and computing the parts simultaneously.

The problem is that, having headed down that path, the industry is now admitting that it doesn’t know how to program the new parallel chips efficiently when the number of cores goes above a handful.
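The easy case of that decomposition looks like the sketch below (Python's standard multiprocessing module, a made-up CPU-bound task); the hard problem the article describes is that most real workloads do not split this cleanly across eight or more cores:

    # Minimal data-parallel sketch: split one CPU-bound job across cores.
    from multiprocessing import Pool

    def count_gc(chunk: str) -> int:
        """Toy CPU-bound work: count G and C bases in a DNA chunk."""
        return sum(1 for base in chunk if base in "GC")

    if __name__ == "__main__":
        genome = "ACGTGGCCTA" * 1_000_000     # made-up input
        workers = 4                           # one chunk per core
        size = len(genome) // workers
        chunks = [genome[i * size:(i + 1) * size] for i in range(workers)]
        with Pool(workers) as pool:
            # Same answer as sum(map(count_gc, chunks)), computed in parallel.
            print(sum(pool.map(count_gc, chunks)))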

On Monday, Mr. Jobs asserted that Apple was coming to the rescue.

“We’ve added over a thousand features to Mac OS X in the last five years,” he said Monday in an interview after his presentation. “We’re going to hit the pause button on new features.”

Instead, the company is going to focus on what he called “foundational features” that will be the basis for a future version of the operating system.

“The way the processor industry is going is to add more and more cores, but nobody knows how to program those things,” he said. “I mean, two, yeah; four, not really; eight, forget it.”

Apple, he asserted, has made a parallel-programming breakthrough.

It is all about the software, he said. Apple purchased a chip company, PA Semi, in April, but the heart of Snow Leopard will be about a parallel-programming technology that the company has code-named Grand Central.

“PA Semi is going to do system-on-chips for iPhones and iPods,” he said.

Grand Central will be at the heart of Snow Leopard, he said, and the shift in technology direction raises lots of fascinating questions, including what will happen to Apple’s partnership with Intel.

ADDED: Snow Leopard will also tap the computing power inherent in the graphics processors that are now used in tandem with microprocessors in almost all personal and mobile computers. Mr. Jobs described a new processing standard that Apple is proposing called OpenCL (Open Computing Language), which is intended to refocus graphics processors on standard computing functions.

“Basically it lets you use graphics processors to do computation,” he said. “It’s way beyond what Nvidia or anyone else has, and it’s really simple.”
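For a sense of what "using graphics processors to do computation" looks like in code, here is a minimal vector-addition sketch. It assumes the third-party pyopencl binding (an assumption of this example; Apple's own OpenCL tooling was not yet public at the time). The kernel string is ordinary OpenCL C, with one work-item per array element:

    # Hypothetical sketch: run a trivial data-parallel kernel on whatever
    # OpenCL device is available (a GPU, if present), via pyopencl.
    import numpy as np
    import pyopencl as cl

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)

    a = np.arange(1_000_000, dtype=np.float32)
    b = np.arange(1_000_000, dtype=np.float32)

    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    # The kernel itself is ordinary OpenCL C: one work-item per element.
    program = cl.Program(ctx, """
    __kernel void vec_add(__global const float *a,
                          __global const float *b,
                          __global float *out) {
        int gid = get_global_id(0);
        out[gid] = a[gid] + b[gid];
    }
    """).build()

    program.vec_add(queue, a.shape, None, a_buf, b_buf, out_buf)
    out = np.empty_like(a)
    cl.enqueue_copy(queue, out, out_buf)
    print(out[:5])   # [0. 2. 4. 6. 8.]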

Since Intel trails both Nvidia and A.M.D.’s ATI graphics processor division, it may mean that future Apple computers will look very different in terms of hardware.

Just this week, for example, a machine at Los Alamos National Laboratory set the world supercomputer processing speed record. The machine was based largely on a fleet of more than 12,000 I.B.M. Cell processors, originally designed for the Sony PS3 video-game machine.

If Apple can use similar chips to power its future computers, it will change the computer industry.

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Third Wave to Acquire Stratagene's Full Velocity Patents for $3.9M

[May 30, 2008]

This article has been updated to include the purchase price as disclosed in a regulatory filing. [Congratulations to the GenomeWeb staff writer! - AJP]


NEW YORK (GenomeWeb News) – Third Wave Technologies said today that it has agreed to acquire Stratagene’s Full Velocity patents from Agilent Technologies.


In a filing with the US Securities and Exchange Commission, Third Wave said that it would pay Agilent $3.9 million, to be paid in quarterly installments over a period of three years with simple interest at an annual rate of 4.75 percent.
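For a rough sense of the payment stream (the filing's exact schedule is not reproduced here, so this sketch assumes equal quarterly principal installments with simple interest accruing on the outstanding balance):

    # Rough sketch of the quarterly payment stream: equal principal
    # installments, simple interest on the remaining balance. The actual
    # schedule in the SEC filing may differ; this is an assumption.
    principal = 3_900_000.0
    annual_rate = 0.0475
    quarters = 12                      # three years of quarterly payments

    balance = principal
    for q in range(1, quarters + 1):
        interest = balance * annual_rate / 4
        payment = principal / quarters + interest
        balance -= principal / quarters
        print(f"Q{q}: payment ${payment:,.0f} (interest ${interest:,.0f})")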


The Full Velocity technology provided the basis for Stratagene’s PCR and RT-PCR-based reagents and assays and is a competitor to Third Wave’s Invader technology. Stratagene’s technology was at the center of a patent suit that Third Wave brought against the firm and won in a US District Court in Wisconsin in December 2005.


Stratagene had also filed a countersuit against Third Wave alleging patent infringement. But the firms settled all litigation in January 2007, which resulted in Stratagene paying Third Wave $10.75 million.


Stratagene was subsequently acquired by Agilent nearly a year ago for $250 million. In acquiring Stratagene, Agilent gained a number of molecular diagnostic partnerships, including one with Bayer Diagnostics. The products developed under those collaborations are based on the Full Velocity technology.


Third Wave said that the acquisition of the Full Velocity patents strengthens its intellectual property position for its Invader Plus products, which combine Invader chemistry with PCR.


“Third Wave’s strategic acquisition of the Full Velocity patents cements an integral part of our clinical menu expansion plans and provides us valuable options in the research market,” Third Wave President and CEO Kevin Conroy said in a statement.


The Madison, Wis.-based firm said it is developing the next generation of Invader chemistries, which will amplify and detect DNA, RNA, and microRNA on real-time PCR instruments. It said that the new chemistries would enable it to reach new research markets including the $500 million quantitative PCR market.


Beyond the purchase price disclosed in the SEC filing, financial details of the patent acquisition were not released by the firms.

["If Genomics is a goldmine, where is the gold?" - asked in his special session the smartest VC in the valley, Steve Jurvetson of DFJ at the 50th Anniversary of the discovery of DNA Double Helix (2003). It took only 5 years to know the answer. Money is in leveraging Intellectual Property that can produce direct to customer products and services, with short repetitive sequences (microRNA-s) in the lead for revolutionary pharma and therapy, and finding SNP patterns and other short motifs to directly help in Personal Genomics quality of of life, prevention of adverse hereditary conditions and possibly mitigate their effects or providing therapy and cure.


Genome 'trailblazer' Francis Collins departing research institute

Francis Collins, the guitar-playing geneticist who mingled a belief in Christianity with a defense of evolution, said Wednesday that he would step down as director of the National Human Genome Research Institute, where he led the historic effort to decode the human genome.

Collins, author of three books, said he would leave the institute, part of the National Institutes of Health, on Aug. 1. He said he planned to write a book on personalized medicine and would explore other endeavors. "It's been a marvelous ride," he told reporters Wednesday. "My time at NHGRI has been the most remarkable in my life."

"Francis has provided 15 years of outstanding leadership to NHGRI and has been a trailblazer in the scientific community," NIH director Elias Zerhouni said in a statement. Collins' deputy, Alan Guttmacher, will serve as acting director during the search for a successor.

Collins, 58, said he chose to leave before he has found a new role because conflict-of-interest rules make negotiating his next position "awkward." He added that President Bush's signing last week of the Genetic Information Nondiscrimination Act of 2008, which Collins championed, played a role in the timing.

He said he planned to consider a variety of options and added, without mentioning names, that he would consider advising a presidential candidate. "If there's some way I can help in that regard, of course I'd be interested in doing so."

Collins has been the institute's director since April 1993. He succeeded James Watson, who, with Francis Crick, in 1953 identified the double helix as the structure of DNA. In 2000, Collins appeared with President Clinton and rival biologist J. Craig Venter of Celera Genomics to announce that their teams had produced a rough draft of the human genetic code. Collins announced the project's completion in 2003.

Venter said he "wishes Francis well" and noted that the J. Craig Venter Institute gets significant funding through NHGRI, "so we look forward to continuing to work with the new leadership."

The Human Genome Project led to other milestones, including the International HapMap Project, an effort to map genetic features that might shed light on common diseases; the Mammalian Gene Collection; and the Cancer Genome Atlas.

Collins also has played a key role in using genetics to understand the risk factors for diabetes, heart disease, cancer and mental illness. He and his co-workers have racked up impressive discoveries, including identifying genes for cystic fibrosis and Huntington's disease.

Collins' books include The Language of God: A Scientist Presents Evidence for Belief, published in 2006, which asserts that genetics and evolution reflect God's creativity. "Watching our own DNA instruction book emerge letter by letter … provided a profound sense of awe unlike anything I could've imagined. It was, after all, reading the language of God," Collins told Bob Abernethy, host of PBS' Religion and Ethics News Weekly.

["He would consider advising a presidential candidate"... One wonders which presidential candidate needs his advise the most...

Government's gene guru to resign

By LAURAN NEERGAARD

WASHINGTON (AP) — Dr. Francis Collins, who became the public face for a watershed science project — unraveling the human genetic code — is resigning as the government's gene guru.

Collins, arguably the nation's most influential geneticist, announced Wednesday that he will leave the National Institutes of Health this summer to explore other opportunities.

The folksy geneticist helped translate the complexities of DNA into everyday vernacular, once famously calling the human genome or genetic code the "book of human life." He became a leading advocate for the privacy of genetic information.

But Collins may be better known to laymen for his 2006 best-selling book about his belief in both God and science.

[Richard Dawkins stated that "Genetics has become a branch of information technology". One wonders whether Dr. Collins' replacement at NIH would have to be an information technology leader of genomics. - AJP]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Dutch scientists first to unravel a woman’s DNA

Monday 26 May 2008

A red-haired, 34-year-old Dutch woman has become the first woman in the world to have her complete DNA sequenced, scientists at Leiden University Medical Centre announced on Monday.

The entire genome of Marjolein Kriek, a clinical genetic scientist at the university, will be made public in the near future, minus a few sensitive details, professor Gert-Jan van Ommen told a press conference.

The complete DNA of four men has already been unraveled, including that of Jim Watson, co-discoverer of the double helix structure of DNA. ‘It was time to balance the genders a bit,’ news agency AP reported Van Ommen as saying.

Kriek said she considered it an enormous honour to have her DNA unraveled and hoped it would help break through taboos surrounding DNA research. ‘It is not as if I know when I am going to die, just as I don’t know if I will win the lottery,’ she told reporters.

Kriek does now know that some 10,000 years ago her ancestors came from Ireland, Poland or eastern Turkey. The research also shows that Kriek’s DNA deviates from the norm on 18 points. ‘If the research shows that I have a greater risk of cancer, that is something I will keep private,’ she said.

But, she said, having more information about any health risks associated with her genetic make-up means she can take action to minimise them.

[It is a small step for a woman - but a giant leap for mankind. Within that, a major feat for the Netherlands. Dr. Kriek may wish to have the geographical origins narrowed from the present somewhat borderless region of "Ireland, Poland or eastern Turkey" - since one is not sure what Poland looked like some 10,000 years ago. Providing a clue for redheads might be interesting. For absolutely sure, it would be fantastic if she learns ways to minimise undesirable genomic conditions. For science, this is the first time that we'll learn how an XX genome may vary in humans. - AJP]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

President Bush Signs Genetic Nondiscrimination Legislation Into Law

[May 22, 2008]

President Bush on Wednesday signed into law a bill (HR 493), the Genetic Information Nondiscrimination Act, that prohibits discrimination based on the results of genetic tests, the AP/San Francisco Chronicle reports (Feller, AP/San Francisco Chronicle, 5/22). Under the bill, employers cannot make decisions about whether to hire potential employees or fire or promote employees based on the results of genetic tests.

In addition, health insurers cannot deny coverage to potential members or charge higher premiums to members because of genetic test results. The House this month voted 414-1 to approve the bill, while the Senate last month approved the legislation 95-0 (Kaiser Daily Health Policy Report, 5/2).

Bush said the bill "protects our citizens from having genetic information misused ... without undermining the basic premise of the insurance industry" (Ward, Washington Times, 5/22).

After signing the legislation, Bush thanked Rep. Louise Slaughter (D-N.Y.), who waged a "13-year battle" to get genetic nondiscrimination legislation passed, and other congressional members instrumental in passage of the bill, the Rochester Democrat and Chronicle reports. According to the Democrat and Chronicle, in order to pass the bill, legislators had to overcome opposition from the U.S. Chamber of Commerce and other business groups, who said that the legislation could lead to frivolous lawsuits against employers (Kelly, Rochester Democrat and Chronicle, 5/22).

Reaction

Supporters called the bill the "first major civil rights act of the 21st century" and said they hope it will encourage more people to participate in clinical research for treatments of specific genetic sequences, according to the Chicago Sun-Times (Thomas, Chicago Sun-Times, 5/22).

[Although signing the law was expected, without question it will provide major momentum for Personal Genomics. Note that the law had almost unanimous support in both Houses - making any legal "contest" of the law virtually impossible. It is hoped that other countries will follow this US example.

.... - pellionisz_at_junkdna.com May 24, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Genetics firm to build online health database [of Parkinson's disease]

Bernadette Tansey

[Michael J. Fox's Parkinson Foundation collaborates with the Parkinson's Institute and 23andMe - AJP]

San Francisco Chronicle - The Google-backed consumer genome service 23andMe staked out a role in the growing medical database industry Wednesday, announcing that it will collaborate with Parkinson's disease researchers to collect key information from patients in an all-online format.

The private Mountain View company, which offers customers a personal scan of their DNA for $999, formed its first research alliance with the Parkinson's Institute and Clinical Center in Sunnyvale. The Institute and 23andMe will conduct a Web-based study that will combine genetic data on clinical trial subjects with other information they provide, such as their family health histories, that could shed light on the causes of Parkinson's disease.

The alliance gives the first glimpse of the broader business plan 23andMe sketched out in November, when it began its Personal Genome Service with the interactive features of a social networking site. Customers can learn how the genetic variations revealed in their DNA scans might affect their health. They can share the information with relatives, and construct family trees. The same platform will now be tested as a means to help the Parkinson's Institute conduct a virtual "field study" of disease, without the need to send investigators to scattered patients' homes.

"We hope to establish an entirely new paradigm for how genetic research is conducted that actively involves the patient," said 23andMe co-founder Linda Avey.

The Parkinson's study will also help 23andMe build a membership among people with the progressive brain disorder. The institute will recruit 150 subjects to be enrolled in the company's genome service. Half will have Parkinson's disease, and the other half will be a healthy control group. If 23andMe eventually succeeds in hosting large-scale communities of members with various illnesses, it can become a conduit for pharmaceutical companies that would pay the company to relay their offers to participate in clinical trials, Avey said. It can also support clinical trials with database services, she said.

The patients, in turn, could use the 23andMe site as a forum to prompt research into areas they find important, she said.

Funding for the collaboration comes from a $600,000 grant awarded by the Michael J. Fox Foundation for Parkinson's Research, which will pay the 23andMe fees for participants in the trial. The study will assess whether information gathered from subjects sitting at home in front of a computer will match the data already gathered from the same people when they were interviewed by phone or in person by a researcher.
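In outline, the validation question is a concordance check: do the online answers match the interview answers from the same subjects? A minimal sketch with made-up categorical responses:

    # Hypothetical concordance check between online and interviewer-collected
    # answers for the same subjects (made-up responses).
    online    = ["yes", "no", "yes", "yes", "no", "no", "yes"]
    interview = ["yes", "no", "no",  "yes", "no", "no", "yes"]

    agreement = sum(o == i for o, i in zip(online, interview)) / len(online)
    print(f"raw agreement: {agreement:.0%}")   # 86%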

If it works, the online method could speed research that now relies on laborious and time-consuming conventional methods, said Dr. J. William Langston, scientific director of the Parkinson's institute. "I think this actually could be a real game-changer in terms of research," he said. "If we're successful, this is going to attract genetics groups all over the world."

Langston is already looking for funding to take the next step - to analyze the data gleaned from study subjects in a search for clues to the cause of Parkinson's disease. The complex disorder, which causes tremors and impaired mobility, is believed to result from a combination of genetic susceptibility and environmental factors. Researchers query people with the disease about past occupations that might have exposed them to chemicals, for example.

Langston is also eager to try next-generation field studies conducted via home computer equipment. For example, Parkinson's researchers might be able to design tests for the reaction time of clinical trial subjects by measuring their computer keystrokes or clicks on a mouse.

The name 23andMe refers to the 23 pairs of chromosomes in the human body. The company drew unusual interest for a startup last year because it received early funding in the form of a $2.6 million loan from Google co-founder Sergey Brin, who had married 23andMe co-founder Anne Wojcicki. After Google and other investors provided formal financing for the company, Brin was repaid. He also recused himself from formal discussions regarding Google's investment.

Avey said 23andMe has developed computer algorithms to ferret out clues to the origins of disease by looking for common elements within the mass of data collected from large groups of people. In the future, its own researchers might pursue such leads, in addition to supporting outside clients such as nonprofit groups and drug companies. A 23-disease initiative is under discussion, she said.

"We'll probably start with a handful," she said. "We can't do 23 at once."

[Dr. Langston is absolutely right, "this is a game-changer" - and in many more than one sense. It obviously adds a new dimension to the path-breaker 23andMe, which started cautiously with "social networking". With fiercely competing Navigenics targeting "actionable hereditary conditions", it appears that "Junk DNA diseases" (since SNiP-hunting is not limited to "genes" at all...) emerge as the "name of the game". Second, and perhaps more importantly, the alliance of "advocacy groups" such as Michael J. Fox's Parkinson Foundation and the non-profit Parkinson's Institute with the industrial partner 23andMe switched PostModern Medical Research from feeding at the bottom of the food-chain to feeding from the top of the food-chain. It used to be that taxpayer patients had only very indirect control over whether their tax-dollars would find their way to helping them with their "JunkDNA diseases" through the maze of the establishment. Indeed, such a path was outright prohibited as long as the establishment considered 98.7% of the genome "Junk" - and thus researchers, research institutions and even advocacy groups were simply and flatly denied funds for "researching the junk". From now on, millions if not hundreds of millions of patients have absolutely direct control to commission research into their hereditary conditions through the "direct to customer" service of Personal Genomics. This is an epoch-making revolution in medicine. We'll soon find that Personal Genomics will not only do "23 at once" - but will successfully address scores of hereditary conditions. Of course, the value of algorithms (way beyond SNiP-hunting...) suddenly catapults, since Personal Genomics will use whatever algorithm works, regardless of "peer group resistance". Statistical correlation of random SNPs will yield to algorithmic predictions of defects. Also, competition among Personal Genomics service providers will drive down the costs of the research done, for instance by forcing them to use more efficient (parallel) computing. One has to admit that this business model popping up in the Genome Revolution is not unprecedented in Medicine. The pharma business has run on industrial competition and "direct to customer" marketing for quite a while. This will, perhaps, explain why Genentech contributed an undisclosed sum to the funding of 23andMe... - AJP]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Agilent Technologies Announces Licensing Agreement with Broad Institute to Develop Genome-Partitioning Kits to Streamline Next-Generation Sequencing [Back to "Junk DNA"? - No, a different "partitioning" of the whole genome - AJP]

SANTA CLARA, Calif., May 13, 2008

Agilent Technologies Inc. today announced that it has acquired a license to commercialize a method developed at the Broad Institute for genome partitioning using Agilent's Oligo Library Synthesis (OLS) technology. Financial terms were not disclosed.

Recent advances in DNA-sequencing technology have increased the speed of data acquisition. However, without accompanying improvements in the ability to select relevant portions of the genome, the technology cannot achieve its full potential in studying the relationships between genes and diseases. The Agilent genome-partitioning portfolio, which is currently in development, holds great potential for eliminating this bottleneck by enabling users to efficiently design and acquire ready-to-use, custom mixtures of biotinylated long RNA probes in a single tube.

Chad Nusbaum, Ph.D., co-director of the Genome Sequencing and Analysis Program at the Broad Institute, described the improved method at an Agilent customer event recently at the American Association of Cancer Research meeting in San Diego, Calif.

"We're working on a simple, highly multiplexed, cost-effective way to enable investigators to remove the sample-preparation bottleneck in sequencing targeted regions of mammalian genomes, using relatively small amounts of input DNA," Nusbaum said. "Agilent's expertise in custom oligo synthesis and our expertise in production-scale sequencing are a natural match-up to overcome these challenges."

"The Broad is part of our early-access program, in which we made oligo library synthesis capability available to a select number of luminaries to find out how these creative scientists could use custom complex mixtures of long oligonucleotides," said Yvonne Linney, Ph.D., Agilent vice president and general manager, Genomics. "This work to eliminate the sample preparation bottleneck of next-generation sequencing will greatly accelerate our understanding of how genes operate."

Agilent plans to offer kits containing custom mixtures of long biotinylated RNA molecules that can efficiently capture 5-10 megabases of genomic DNA sample in a single tube. The method is based on the combination of the Agilent SurePrint platform, capable of synthesizing high-quality oligo mixtures, and the protocols developed by the Broad Institute to transform these into RNA probes.

These RNA probes can, in turn, be used to capture genomic regions of interest in a simple, scalable and highly multiplexed manner, greatly simplifying the process for targeted re-sequencing.
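As a toy sketch of what "capturing genomic regions of interest" implies computationally (illustrative probe length and overlap, not Agilent's actual design rules), one can tile a target region with overlapping probe sequences:

    # Toy sketch: tile a target region with overlapping capture probes.
    # Probe length and step are illustrative parameters, not Agilent's.
    def tile_probes(target_seq: str, probe_len: int = 120, step: int = 60):
        """Yield (start, probe_sequence) pairs covering the target region."""
        for start in range(0, max(len(target_seq) - probe_len, 0) + 1, step):
            yield start, target_seq[start:start + probe_len]

    target = "ACGT" * 100    # a 400-base stand-in for a region of interest
    probes = list(tile_probes(target))
    print(f"{len(probes)} probes of {len(probes[0][1])} bases, 50% overlap")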

"The ability to perform the capture reaction in a small volume solution enables a significant increase in the kinetic efficiency and enables users to scale from tens to thousands of samples without adding significant personnel," said Emily LeProust, Ph.D., Agilent R&D Chemistry and Genome Partitioning program manager.

In February, a researcher from Nusbaum's group presented a paper at the Advances in Genome Biology and Technology conference, describing the use of Agilent Oligo Libraries – the basis of the genome-partitioning capability – and next-generation sequencing to correctly identify several previously known mutations and several new ones in certain tumor types.

Linney said that this is also an example of how Agilent is leveraging the reagent manufacturing expertise of Stratagene, which joined Agilent in June 2007, to provide high-quality, cost-effective solutions.

The Broad Institute is a research collaboration of the Massachusetts Institute of Technology and Harvard University.

About Agilent Technologies

Agilent Technologies Inc. is the world's premier measurement company and a technology leader in communications, electronics, life sciences and chemical analysis. The company's 19,000 employees serve customers in more than 110 countries. Agilent had net revenues of $5.4 billion in fiscal 2007. Information about Agilent is available on the Web at www.agilent.com.

[The tsunami of the "Dreaded DNA Data Deluge" has hit the shores of Genomics at its "cliff-house", the pre-eminent Broad Institute. Segmenting what portion of the DNA should be looked at (and what not) is a practical necessity similar to Ohno's 1972 declaration that 98.7% of the DNA could be left for later consideration (admitted in Sydney Brenner's 2002 Nobel lecture: the science of genomics was just too overwhelmed to consider anything but the "genes"). This time around, less than a single year after ENCODE, "Genomics of the whole genome", HoloGenomics, must be even more practical, since it already feeds on "Direct-to-Customer" markets through Personal Genomics. ENCODE erased the notion of "junk DNA" - thus e.g. SNiP-hunting in the intergenic areas suddenly not only makes sense - it makes money at 23andMe, Navigenics, DecodeMe, etc., by looking at less than a million letters of the human DNA (much, much less than the 1.3% that constitutes "the genes"). The brute-force approach of full sequencing, while possible at a rapidly melting price - from Knome's list price of $375,000 to the present $60,000 by Illumina, and within years below the $1,000 range - is clearly untenable at the moment, since Information Technology has not yet geared up for the challenges of storing the data and assembling the sequence-reads ("Next-Gen sequencing"), let alone analyzing the data by other than statistical sampling methods. As in all sciences, however, "brute force" approaches can be greatly helped by breakthroughs in the understanding of the underlying principles. For HoloGenomics, that time will come before the anniversary of ENCODE. - AJP]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Merck's Informatics Mission [What should be the result of long-predicted Genomics-IT confluence? -AJP]

By Kevin Davies
Bio-IT World

May 12, 2008 | After five years at the helm of Merck's basic research IT group, Ingrid Akerblom calls her move to the clinical side "quite an eye-opening experience." Akerblom has a Ph.D. in biology from the University of California, San Diego and the Salk Institute, and later joined Incyte Pharmaceuticals as its 50th employee and "annotation guru," eventually leading informatics. She was then recruited by Merck - ironically the only pharma not to buy the Incyte database. She joined Merck in November 2002 - just over a year after the acquisition of Rosetta Inpharmatics [for $620 M - AJP]. Akerblom worked extensively with the Rosetta IT leaders, helping to integrate systems around target identification and chemistry systems.

Assuming her former role is Martin Leach, a Brit who spent nine years leading IT and informatics at Curagen, spanning corporate IT, basic, pre-clinical, and regulatory informatics. But he also brings experience in regulatory and clinical IT areas gained during a two-year stint at Booz Allen prior to joining Merck last year. That clinical insight could prove useful even as he refocuses on basic research, and complements Akerblom's background in basic research as she transitions to the clinical side.

Kevin Davies spoke to Akerblom and Leach about their complementary roles and mutual understanding of the needs of both basic research and clinical teams, which could pay big dividends for Merck.

Bio•IT World: Ingrid, how did your move to clinical IT come about?

Ingrid: After five years in [research IT], with the last few years including leadership of Merck's Biomarker IT, it made sense to bring some of this expertise into the Clinical IT areas in order to meet the growing need to marry up clinical and discovery research information. In terms of how we operate, there has also been a significant evolution of the IT teams and operating model. At that time it was fully vertical, integrated. I had all the developers on my team; we had all the support on my team. Now, Martin and I really lead more of a client services team, where we have account managers, program managers, and business analysts, people with business expertise and technology expertise. Most of the delivery is done through shared services in the corporate area, and even within Merck research labs IT for some of the more innovative types of things that go on in research that aren't found in the other divisions, like manufacturing and marketing. But we're evolving to a fully shared services model, which has its benefits, especially in clinical, with large projects.

Martin: For my groups, I have scientific computing, which is predominantly in the bio space, some cheminformatics, biomarker IT, of which there's a lot of collaboration with Ingrid. And drug discovery services, focused on lab automation, capture and management of biologic and chemical data and information, all of the things that make the basic research lab tick.

How useful will your mutual cross training prove, do you think?

Martin: One thing important to note is the new head at [Merck subsidiary] Rosetta. Kathleen Metters, the worldwide head of basic research, recently appointed Rupert Vessey to be the new head of Rosetta ... Rupert is a former clinical therapeutic area head, [so] basically we have a clinical leader heading up Rosetta, which is predominantly working on genomics, proteomics, genetics, and causal networks. So it's not just the IT with that cross-pollination - we've got two people from IT, from a clinical point of view and a basic research point of view, interacting with a former clinical person who is heading up the genomics space.

Ingrid: We've invested a lot in some core platforms; we need to start translating that into results in the clinic at some point. And so having people who have an understanding of what does that really take to help inform the earlier research directions, the platform directions, is a key theme...

When I was in Martin's position, it was very difficult to get the clinical IT teams to focus on longer term strategic projects, even short-term partnerships that weren't about a late-stage trial. Because at the end of the day, that's what they work for, right? They've got to get those trials filed. But when we think about the future, we want to have our data more integrated, and we weren't really getting a lot of traction. So one of the attractions for me moving to this position was someone who has that background will keep their eye on that ball and it won't be all about late stage. That's already proving true - there were a couple of times this year where there were scheduling conflicts between critical projects on both sides. And in the past, I know which one would have gotten dropped - it would have been the basic research project.

How is Rosetta working at Merck today?

Martin: There's Rosetta Inpharmatics, which is the part of Merck that's doing molecular profiling and genetics research, and then we have Rosetta Biosoftware, that is part of Rosetta Inpharmatics that makes and sells software products such as Resolver, Elucidator, and Syllego, which we also use internally at Merck.

At Rosetta Inpharmatics, I work closely with scientists working in the bioinformatics and the pathways space, who have taken a biological point of view to integrate information. One approach is trying to integrate as much information that is accessible, assay data and so on, for when scientists pull up a gene or target. They have developed a target gene index (TGI)... With a given gene, you can see all the relevant information. I think most pharmas have attempted that. I'd say it's the depth of the information and integration with some of the chemical space that is different than what I have seen at other pharmas. This depth of integration within TGI is still growing... We do data integration and data management within basic research IT, and we provide some of the core services needed to do it from a research point of view.

How do you interact on a more day to day level?

Martin: We have some very high level, strategic, long-term projects that we're working on. We have a large number of folks from my camp working with a large number from Ingrid's camp around the IT needs and implications with all the different clinical data, as well as sample data, and access to this information that's needed to enable translational research. So we have joint projects, very strategic, they have visibility all the way up to the MRL leadership.

In terms of some of the things being done at Rosetta, again, it crosses into the basic and clinical space, and we work together on making sure the right people are engaged in either basic or clinical IT. [Between the basic and clinical IT teams] we are very collaborative in terms of key strategic hires.

Ingrid: We're getting much closer to actually using genetics in our trials, based on the technology set up by our Seattle genetics group and the whole genome analysis group (See, "Merck Ties Gene Networks to Obesity"). We have a project team meeting with Martin, our business and information architects, and Rosetta Biosoftware together with clinical franchise and regulatory leaders, to talk about what is the actual proposed data flow and architecture for moving genetics data from research systems into the clinical systems. Having formerly been in basic [research], it's a lot easier to really see how that all fits together and how to move this data into the clinical systems now.

The Rosetta Biosoftware Syllego system, which is being used by the FDA, is something we're looking at - how does that fit into the clinical architecture? We have a clinical warehouse - where should the genetic data go? Should it be Syllego for raw data and CDR for metadata? Again, it's moving into reality now, so understanding what that means, and being on the clinical side, I think is going to make it a lot easier to assimilate that type of data into the mainstream clinical systems.

My Basic IT team worked with the Imaging Research team to put in place an imaging platform with IBM, and Martin's team is continuing this work, that's working well in the early development and research space. Now I want to say look, we could save a tremendous amount of money if we move that into the late stage. But how to do that where every investigator now has to learn that system?... Do we show it through our portal or does it come in through EDC or on its own? So there are all these support issues once you start thinking about really getting out into the clinic with some of these newer things.

How are you handling the surge of data, especially related to genomics?

Martin: Where we are doing work on pharmacogenomics and genetics in the clinical space, there is so much data. For example, one of my team had to secure an additional 100 terabytes (TB) on the East Coast to just accommodate one experiment they were doing! Soon, I'm going to be playing around in the petabytes... At the moment, we need to keep the raw data because there's no clear guidance from the FDA as to what you need to keep. It's going to literally swamp us working in this space until we get better guidance around what data we need to keep versus could keep. One of my [team's] projects is basically a storage strategy this year because if it's 100 TB this year, it's probably going to be a couple of hundred TB next year...

We all [in the industry] have data and document retention policies, but what tools do we have to really monitor and manage that? If I've got a couple of hundred TB that's going to come around in the next couple of years, how do I know what to purge five years from now? Where are the tools to do that really large-scale data management and purging? In the current file-sharing landscape we have millions of files that normally have to be managed through retention policies. That's a challenge in itself. What is developing is managing fewer files but with a much larger overall volume.
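As a sketch of the kind of retention tooling Leach describes as missing (hypothetical paths and a hypothetical five-year policy; standard library only), a scan could flag, rather than delete, files that have aged past the retention window:

    # Hypothetical retention-scan sketch: flag (do not delete) files older
    # than a retention window. The path and 5-year policy are assumptions.
    import os
    import time

    RETENTION_SECONDS = 5 * 365 * 24 * 3600   # assumed 5-year policy

    def expired_files(root: str):
        now = time.time()
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                if now - os.path.getmtime(path) > RETENTION_SECONDS:
                    yield path

    for path in expired_files("/data/pharmacogenomics"):   # hypothetical mount
        print("purge candidate:", path)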

How do you view translational medicine?

Martin: In two parts. The first part is increasing the clinical context of basic research experiments, using clinically relevant samples with their clinical information, allowing you to "translate" additional research measurements on the samples with a clinical context. So that enables the research, but then as you get into the pharmacogenomics space, where you're looking at genetic information to segregate populations for responders and non-responders, that's then taking basic research discoveries and really applying them into the clinical space. So I sort of see translational medicine as that mix of pharmacogenomics and biomarkers and everything rolled into one.

Ingrid: I agree. One of the key areas is clearly samples, whether you're doing proteomics, gene expression, genetics, or potentially looking at what populations eventually could respond to your drug. Samples are at the center of that, and so we have been actively pursuing better informatics around them in order to make it clear what samples are available from what trials, which are consented, and what we can use them for. We already have siloed platforms to show that data; we need to integrate them more than they are... We have a new standards-based clinical warehouse that went into production last year, where we're really planning to have all the patient data - whether it's through collaborations or Merck trials - in one place, so that it's more available for our future data mining and for understanding what types of patients and associated samples we have.
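
[At its simplest, the sample-to-patient integration Ingrid describes is a federated join across silos. The Python sketch below uses invented tables and field names throughout; it joins sample results to patient records on a shared subject ID and filters on consent - the kind of virtual view an integration layer provides without physically moving the data:

    # Hypothetical extracts from two siloed systems:
    samples = [
        {"subject_id": "S001", "assay": "expression", "consented": True},
        {"subject_id": "S002", "assay": "proteomics", "consented": False},
    ]
    patients = {
        "S001": {"trial": "ONC-01", "age": 61},
        "S002": {"trial": "ONC-01", "age": 54},
    }

    # Federated-style join: only consented samples, enriched with patient data.
    integrated = [dict(s, **patients[s["subject_id"]])
                  for s in samples if s["consented"]]
    print(integrated)
]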

Martin: We have a major strategic collaboration with the H. Lee Moffitt Cancer Center [Tampa, FL] (See "Cancer Center Builds Gene Biobank," Bio•IT World, June 2007.) We get different types of cancer samples and those samples go to Rosetta [for] expression profiling. Moffitt uses that expression data internally for their research, and we get clinical data associated with the samples, as well as the expression profiling data, and we get to use that at Merck... This is a major collaboration driven by Stephen Friend [senior VP of Oncology] and the Oncology franchise. I think it's a landmark in how we approach translational medicine at Merck... Data from this collaboration was the first clinical data from oncology that made its way into Merck's clinical data repository (CDR).

So we have clinical data securely flowing directly from Moffitt through the firewalls, etc., into Merck's CDR, meeting all compliance needs. And that data is then shared through web services with Rosetta and other places so that it can be integrated with expression profiling data. We've really embraced industry standards to make that happen. This really has been breaking down silos - it's very hard to find a clinical group that opens up web services so that information is accessible to basic research. I think that in itself was groundbreaking at Merck. We've tried looking around to other pharmas, like, are you guys doing this sort of thing? Everyone is talking to the standards boards, but I think we've really [made] an investment by implementing some of this work in a real, active strategic collaboration...

Ingrid: The other important piece in that project that addresses the translational medicine question is that there are joint project teams between Merck and Moffitt clinical and basic researchers, all trying to mine and look at field experiments, build trials, identify new mechanisms, think about the future together. It's a very powerful collaboration, and IT has a seat at that table and is an active participant in those conversations. So I think it's a great area of translation where we really are leveraging clinical data to drive research.

How has informatics evolved at Merck? With budget tightening everywhere, does that impact the build-buy decision?

Ingrid: Ever since I joined, we've been primarily a buy shop, even in basic research. We mostly buy and we try not to customize too much, but you still end up in that space. The clinical systems had been primarily internally built, and now they are mostly purchased, with the exception of the data warehouse, which is based on the Janus data model but was still built in-house with outsourcing. Where we're trying to find cost savings is in shared services, particularly around support, maintenance of applications, and infrastructure - trying to drive down cost on the maintenance and operations side in order to continue to invest in the new development of strategic applications.

There are innovative areas, many of them in the emerging research technologies, where you're doing things that you just can't buy and where faster, iterative in-house development is needed; for example, we developed MouseTrap to support the management and display of animal phenotypic data. Generally speaking, it's quite a challenge. You've got to really be focused, and the business has to partner with you to prioritize... it's critical to have a strong partnership and governance with scientific leaders to assure we are focusing IT resources on the right projects. The other thing is, they're also feeling the money pressure. So it's not just IT, it's not just the services anymore; it's everybody really looking at how we are going to contribute to optimizing the bottom line and how we are going to grow the top line, and let's all prioritize those initiatives together.

What are some projects where you think you're really going to be able to expedite or make better decisions? And what outstanding challenges remain?

Martin: We're in a position now where we know how to generate information for biomarkers, and we know how to collect clinical information. So at least one project this year is, "What is that killer application that you need to integrate the clinical information with biomarker information, so that we really do enable our scientists in their biomarker discovery or validation experiments?" At the moment, we've got bits of the puzzle - genetics being managed in Syllego, expression managed in Resolver, proteomics managed in Elucidator, these all being separate applications and repositories. But what is that killer application that brings it all together and integrates it with clinical, so that you can do some meaningful mining and analysis? That's one of my goals, and I've got some exciting challenges to work through there.

Ingrid: I think that's a shared one, because in the clinical sample area, combining the results data from clinical samples with the associated patient data - what's that platform? I know there are new commercially available things coming out, like Azyxxi from Microsoft. So we need to be looking at what's out there, what's the gap, and do we put something together ourselves? We did a pilot last year with an EII [enterprise information integration] platform, collaborating with IBM. There was enough productivity gain from that to justify taking EII to the next level, which our Innovation IT team is doing in 2008. The whole integration space, and then the actual viewing of integrated data in a meaningful way, continues to be a major focus.

We're embarking on an electronic medical records (EMR) strategy, looking at signal detection among other uses. We're redoing our pharmacovigilance system and approaches. Those are things that are just starting to be reinvested in - figuring out how do we leverage that information, how do we get that connected? There's also appetite for clinical trial simulations across a number of dimensions, including enrollment and operations optimization. We just overhauled our entire late-stage development systems in 18 months, so right now we're focused on ensuring that gets optimized and that the value from that investment gets realized.

Martin: Who is going to be the health partner with Merck? Where do we place our bets in key strategic partnerships around EMR data or personal medical record data, and how do we find the best partners to enable translational research? From there, it's analyzing who will be the best partner, and when they will be mature enough - or Merck mature enough - to interact with them.

Another exciting challenge is working with the external basic research team, [Catherine Strader, former head of research at Schering-Plough]. I'm working with her so that we really leverage information from our collaborations. In the past, how information flows in a collaboration has been managed ad hoc. Moving forward we really want to leverage and integrate this information more strategically.

What roles do the senior executives such as Peter Kim and Stephen Friend play?

Ingrid: Peter has a vision. He focuses us all on recognizing that the vast majority of information and innovation is happening outside the walls of Merck. We need to leverage it more by providing platforms that allow deep collaboration with external partners; there also is a focus on combining our own data with publicly generated data for competitive advantage - but holding the line to work pre-competitively where it makes sense. You get that vision through the research strategy meetings.

I think Stephen Friend is clearly a visionary who inspires many individuals at Merck both on the science side and the IT side, a very forward thinker pushing all the teams, Rosetta as well as myself and Martin, to think out of the box.

[Four years ago a dramatic confluence of "Genomics beyond Genes" and "Information Technology" was predicted. By January, 2008 even the unbiased old-timer Richard Dawkins admitted the obvious: "Genetics has become a branch of information technology". Merck has purchased not only the bioinformatics shop Rosetta Inpharmatics for $620 M, but also Sirna Therapeutics, which successfully mined small interfering RNAs by proprietary algorithms, for the remarkable sum of $1.1 Billion. Now Merck is also aligned with IBM and Microsoft in an attempt at "integration", to fend off what is dreaded as a DNA Data Deluge. The "box" under the article refers to Merck's finding that a network of 6,000 "genes" is implicated in obesity - a result that required the work of 7,000 computers. The paradigm-shift of "Genomics beyond Genes" becoming a branch of Information Technology raises the question whether e.g. Big Pharma will entrust their "microRNA secrets" to Big IT - or whether one giant is going to swallow the other. Neither is likely to happen - though time is pressing, as e.g. Merck openly complains that the data will "swamp" them. (How about players that are much, much smaller than Merck? How will they get a chance? Global giants are wrestling with the paradigm-shift - and the likely answers are in the classic books about earlier disruptive technologies...)]

Research firm bases FPGA on Fractal-like Structure

Amir Ben-Artzi

EE Times Europe

(04/29/2008 4:00 DE EDT)

NETANYA, Israel — Cellot Ltd. (Netanya, Israel) is developing a programmable logic and memory architecture dubbed Field Programmable Cell Logic (FPCL), which is based on a recursive, fractal-like structure. Ofer Meged, Cellot's founder, president and CTO, told EE Times that the company's chip will be primarily targeted at fabs, ASIC developers and hardware developers in various areas, including space applications. "FPCL technology has been validated and tested both on the theoretical and the implementation level," said Meged, who added that the company is currently in the process of raising initial funding of $5 million.

The FPCL architecture is optimized for complex and real-time applications where the usual approach is an SoC or a collection of DSPs or FPGAs, said Meged. "In my opinion, we have created a new category in the worlds of programmable devices and structured ASICs." Meged argued that FPCL represents a small-sized programmable solution that offers a short time to market and an attractive price, and therefore the company will initially aim to serve developers, foundries and fabs.

Although some implementation work has been done, the core FPCL chip design is described as being in a pre-fab state. "We have patented our technology in Europe, the U.S. and Japan, but we need the funding before we are able to go forward," said Meged. Cellot's FPCL architecture is a collection of registered memory cells interconnected via a programmable matrix that creates a recursive, hierarchical (fractal-like) structure. That means that all of the cells are similar, and any number of cells may be combined to create similar, larger cells. Each cell can be used for logic (when its memory is used as a look-up table), for storage, or both. The matrix can route the output of each cell to the input of each cell and each I/O with a fixed and predictable delay, according to Cellot.
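
[Cellot has disclosed no implementation details, so the following Python toy model is only an illustration of the recursion described above, not the company's design. A leaf cell is memory used as a look-up table; a composite cell wires identical sub-cells together, with the routing element itself being just another cell:

    from itertools import product

    class Cell:
        """Toy FPCL-style cell: a leaf is a memory cell used as a look-up
        table; a composite combines identical sub-cells into a similar,
        larger cell, routing their outputs through yet another cell."""
        def __init__(self, lut=None, children=None, route=None):
            self.lut = lut            # leaf: {input bits -> output bit}
            self.children = children  # composite: two sub-cells
            self.route = route        # composite: cell combining the outputs

        def eval(self, bits):
            if self.lut is not None:                 # memory used as logic
                return self.lut[tuple(bits)]
            half = len(bits) // 2
            outs = (self.children[0].eval(bits[:half]),
                    self.children[1].eval(bits[half:]))
            return self.route.eval(outs)             # routing is a cell too

    def leaf(fn):
        """Fill a 2-input leaf's look-up table from a boolean function."""
        return Cell(lut={b: fn(*b) for b in product((0, 1), repeat=2)})

    xor = leaf(lambda a, b: a ^ b)
    # A 4-input parity cell built from three identical 2-input XOR cells:
    parity4 = Cell(children=[xor, xor], route=xor)
    assert parity4.eval((1, 0, 1, 1)) == 1           # three ones: odd parity

The self-similarity is the point: because the combining element is itself a cell, the same construction repeats to give arbitrarily large "similar" cells - the fractal property Cellot claims.]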

Cellot said its PC-based development system enables the use of high-level languages such as C++ and Verilog, as well as schematic capture, in the design process.

The architecture can be used to create any application, the company claimed. Cellot argued that FPCL technology is particularly applicable to fields such as evolvable hardware applications, supercomputers and neural networks.

"Every part of our component is 100 percent replaceable," claimed Meged. "In certain configurations, it can correct itself, and this is useful for space applications." The company said it plans to offer a family of FPCL chips, plus a library of intellectual property cores for useful functions that can be re-used as design building blocks. These will be accompanied by design and debugging tools including simulation and testing tools, in addition to logic analyzers, signal generators, synthesis tools, and a PCI acceleration board.

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Unlocking the human genome: Pioneering researcher joins Buck Institute

Richard Halstead

Article Launched: 05/03/2008 10:50:34 PM PDT

VICTORIA LUNYAK, a groundbreaking genetic researcher who has just joined the Buck Institute for Age Research in Novato, credits her father for launching her career in science.

"My father, he was a physicist," Lunyak said. "He pushed me to become a scientist. Otherwise, I would have become probably a ballerina, because my mother wanted me to become a ballerina."

Lunyak, 41, earned her doctorate in molecular biology at the St. Petersburg Nuclear Physics Institute in Russia before coming to the United States 12 years ago to do research at the Department of Molecular Biology at Brown University. More recently, she was an assistant research scientist at the University of California at San Diego.

This spring, Lunyak arrived at the Buck Institute, where she will establish a program in epigenetics, a promising new field of biological research with far-reaching implications for the understanding of cancer and aging. While at UC San Diego, Lunyak led several studies describing the role of epigenetics in neurobiology and organ development.

Epigenetics, a field that is only about a decade old [actually, more than six decades old; the word "epigenetics" (as in "epigenetic landscape") was coined by C. H. Waddington in 1942 - AJP], has revolutionized science's understanding of biological inheritance. Prior to the mapping of the human genome in 2003, it was believed that DNA carried all heritable information and that nothing people did during their lifetime would be biologically passed to their children.

Scientists working in the field of epigenetics have since discovered switches that can turn genes on or off.

These switches can be activated by things people experience, such as nutrition and stress. Epigenetic theory has explained how identical twins become distinct as they age. It also explains why different types of cells - brain cells and skin cells, for example - are different even though they contain the same DNA. And epigenetics explains how patterns of gene expression are passed from one cell to its descendants as cells age.

"Epigenetic changes can be acquired during life, but can also be inherited, and appear to be important in the aging process, in certain diseases of aging, and in stem cell development," said David Greenberg, vice president of special research at the Buck Institute.

At UC San Diego, Lunyak focused her research on the boundaries that separate areas of activated genes from areas of genes that have been deactivated by the epigenetic switches. These boundaries in the genome are made up of DNA that was previously regarded as "junk" because its function was mysterious.

But research by Lunyak, detailed in the July 2007 issue of Science, revealed that these boundaries act as epigenetic punctuation marks - commas and periods that help orchestrate waves of transcriptional programs in the coding portion of the genome.

Not long ago, it was believed that cancer resulted solely because of genetic changes. Now, however, scientists think that cancer is caused by a mixture of genetic and epigenetic changes. One explanation for cancer is that "something is happening in the ability of cells to properly place those bookmarks between active domains and repressed domains," Lunyak said.

That is good news because, unlike genetic damage, epigenetic changes can sometimes be reversed. Scientists have found a few drugs that can alter epigenetic patterns. Epigenetic therapy has shown promise in treating people with a form of cancer that causes overproduction of blood cells in the bone marrow.

The problem, Lunyak said, is that these drugs use a shotgun approach to altering epigenetic patterns. It is unknown what unintended effects they may be having on the genome's normal epigenetic functioning.

"It is very dangerous," she said.

That is why Lunyak intends to focus her work at the Buck Institute on mapping epigenetic blueprints for healthy cells and cells that are diseased. The hope is that such blueprints will allow doctors to target drugs so they alter only that section of the epigenetic program that is causing a problem.

It will be a big job. Different types of cells - skin, eyes, teeth, hair - will all have unique epigenetic blueprints. Epigenetic mapping efforts are also proceeding at Baylor College of Medicine in Houston, Texas and at UCLA. Lunyak will start with stem cells due to their unique ability to transform into many different cell types in the body. She estimates the project could take five years.

"It's not a quick-fix project," Lunyak said.

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Intel Seeks Partners to Develop FPGA-Based Solution for Next-Gen Sequencing Analysis

May 2, 2008
BioInform (GenomeWeb)

By Bernadette Toner

BOSTON – Intel officials said this week that they hope to build an “ecosystem” of partners who will use programmable hardware to create a standardized approach for analyzing data from second-generation sequencing instruments.

Speaking at a workshop on second-generation sequencing data management ahead of this week’s Bio-IT World Conference, Wilford Pinfold, general manager of integrated analytic solutions at Intel, said that the company has identified genomics as a key application for a technology it has developed that can more tightly integrate FPGAs and ASICs with the Intel platform.

With this system, the programmable chips are directly connected to the front side bus of Intel’s system so that they share memory with the general-purpose processor...

Through discussions with some initial potential partners, Pinfold said that Intel estimates it takes a typical lab around six months to design, purchase, and validate the IT systems that are required to support a next-generation sequencer. Since the field is still relatively new, such labs have very little guidance – particularly those that have little or no IT support staff.

Pinfold said that Intel envisions building an FPGA-based “appliance” that researchers can purchase alongside a sequencer that will eliminate that six-month planning stage.

One key element of this approach, he said, is that the system would offer a standard for the community, so that developers could write code that could easily be shared among users with the same platform. Most labs using clusters currently can’t share their code very easily because they are set up so differently, he said. This aspect of the approach is particularly important in a field like next-generation sequencing, where all the algorithms continue to evolve very quickly.

One downside to this scenario, however, is that these codes would need to be written for FPGA-based systems – a very specialized programming skill that many bioinformatics developers don’t have. Pinfold acknowledged this challenge and noted that it could present an opportunity for companies that develop FPGA-based algorithms and software-development toolkits.

“We suspect there will be a lot of business for those guys,” he said.

Officials from one such firm, Mitrionics, were present at the workshop. Michael Calise, executive vice president and US general manager, told BioInform that the company has spoken to Intel about its plans for this market but could not elaborate.

Intel’s Pinfold was purposefully light on specifics, noting that the company is still in the information-gathering stages of the project. He did stress that Intel doesn’t plan to create the solution entirely on its own, “but will work with the community to make it happen.”

In line with this, he said that the company has partnered with the Wellcome Trust Sanger Institute to create a portal called Genographia, which is a discussion board for a range of next-generation sequencing issues, including informatics support.

Anne Chapman, senior marketing manager for genomics at Intel, was also tightlipped about the specific hardware and software components that might make up the proposed appliance, but did provide a rough timeline for the project. She told BioInform that the company is aiming to have a set of partners and collaborators identified by the end of the second quarter, some initial results by the end of the third quarter, and an available system by the end of the year.

Vendors Make Do

In the meantime, manufacturers of next-generation sequencing instruments have found that they must provide a certain amount of computational power with their systems in order to perform primary analysis. At the workshop, representatives from Applied Biosystems, Illumina, and Helicos discussed computing platforms that they are shipping with their sequencers as so-called on-machine or “on-rig” systems.

While ABI and Helicos combined compute systems with the first versions of their instruments, the first version of Illumina’s Genome Analyzer did not offer any support for primary analysis. This required researchers to transfer all sequencing data off the instrument for analysis, a step that added a considerable amount of time to sequencing experiments.

Admitting that the company was “caught with our pants down” when it initially shipped the Genome Analyzer, Abizar Lakdawalla, senior product manager at Illumina, said that the company has addressed this issue with a new module called Integrated Primary Analysis and Reporting, or IPAR, which provides real-time quality control and online processing of primary data during sequencing runs.

IPAR is available with the company’s Genome Analyzer II, as “a standalone box,” or integrated with a research center’s in-house architecture, Lakdawalla said. The system, which runs Windows XP, includes a four-core HP ProLiant DL380 server and 3 terabytes of storage.

IPAR currently evaluates the performance of a sequencing run in real time and performs image analysis. Lakdawalla said Illumina will release an upgrade in June that will enable base calling.

Some users welcomed the change. Richard McCombie, a professor at Cold Spring Harbor Laboratory who oversees a lab running eight Genome Analyzers, said that his group initially had a number of informatics issues related to the instruments, and that it took around 24 hours to transfer data from a run and then another week to analyze it. However, he said that “many of these problems have been solved” due to improvements from Illumina as well as internal workflow procedures that his lab had developed.

The on-rig approach isn’t perfect, though. Matthew Trunnell, group leader in the Broad Institute’s Application and Production Support Group, said that the Broad has been working with the ABI SOLiD platform for several months and has found that alignment is “too big of a job for the on-machine system” for large genomes.

He added that while the SOLiD software allows researchers to monitor the experimental status of a single instrument, it would be helpful to have the option to monitor the status of multiple machines at once.

He said that the institute is “still assessing” how much hardware it will have to add to the SOLiD’s own computational system.

In a discussion panel during the workshop, instrument vendor representatives all agreed that they’d prefer to stay out of customers’ IT-purchasing decisions, but decided to add computers to their instruments in order to help users.

Selling computers “is not a focus for us from a margin standpoint,” said Kevin McKernan, senior director of scientific operations at ABI. “We put the computer on board in order to facilitate the use of the system.” He added that since there were no tools off the shelf to do that, ABI decided its best option was to provide the IT as part of the system.

Likewise, all the vendors stressed that they are not looking to control the downstream analysis of the data that comes off the instruments. All four companies said that the source code for their software is freely available to customers.

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Slaughter's Anti-Genetic Discrimination Bill Becomes Law

Rachel Ward

ROCHESTER, NY (2008-05-03) Legislation slated for President Bush's signature will make many forms of genetic discrimination illegal. The president has indicated that he will sign the Genetic Information Nondiscrimination Act, or GINA, which was first proposed by Fairport congresswoman Louise Slaughter 13 years ago...

The bill would bar health insurance companies from using genetic information to set premiums or determine enrollment eligibility. Similarly, employers could not use genetic information in hiring, firing or promotion decisions.

The bill passed the Senate unanimously last week and had only one dissenting vote in the House earlier on Thursday.

Slaughter is a microbiologist. She says that she was delighted by the scientific possibilities that could stem from the mapping of the human genome, but unnerved by the potential abuses. Slaughter says that having a faulty gene is only a prediction -- not a certainty -- of future health problems. She says that's not sufficient cause to be denied or terminated from a job.

Slaughter says when people don't have to fear genetic discrimination, a whole new type of health care is possible, in which therapies can be tailored to each patient's needs.

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Mitrionics FPGA-Accelerated Computing Platform for Bio and Genome Informatics Demonstrated at Bio-IT World Conference

Monday April 28, 11:42 am ET

BOSTON, MA -- Bio-IT World Conference & Expo -- Mitrionics(TM), Inc., developer of the Mitrion(TM) Software Acceleration Platform and the Mitrion Virtual Processor, today announced it will be presenting and promoting its latest advancements in accelerated computing at the Bio-IT World Conference, April 28-30, 2008 in Boston, Massachusetts. Mitrionics released an upgrade to its SDK, including the Mitrion SDK Personal Edition, a free, downloadable software development kit (SDK) designed to promote the development of accelerated applications with researchers in life science industries such as bioinformatics, genome informatics and high-throughput sequencing. Mitrion-accelerated applications running on the Mitrion Virtual Processor (MVP) provide a dual customer benefit, increasing application performance 20x to 60x or more when used with Intel and AMD processors while consuming ninety percent less power than traditional systems. Mitrionics will be located in booth #306/308 with its partner SGI.

"The genomics industry is at the forefront for the adoption, development, and implementation of accelerated computing technologies and applications," said Anders Dellson, CEO of Mitrionics, "Our free, Mitrion SDK Personal Edition is designed to help build a healthy bio-centric ecosystem for FPGA-based Accelerated Computing and Green Computing."

Mitrionics has achieved a number of milestones in the accelerated computing industry over the past 18 months, including: running its open source Mitrion-accelerated BLASTn on a 70-FPGA cluster, achieving up to a 900x performance increase versus commodity clusters; the establishment of the Mitrion Open Bio Project; and the selection of its accelerated BLASTn software by a number of leading global research organizations.

In the past year, numerous accelerated computing hardware platforms have been introduced into the market, but a problem has existed in fully utilizing them. Mitrionics fills the gap with broad accessibility to a development platform enabling software acceleration that researchers and software developers can use immediately.

"Our customers are pleased to see processor companies Intel and AMD, and system vendors SGI and HP increase their support for FPGA-accelerated computers. Mitrionics solves the previous problem with programming FPGAs," said Mike Calise, executive vice president of Mitrionics, Inc.

Mitrion SDK PE

The Mitrion SDK Personal Edition (PE) will allow researchers, scientists, developers, institutions, and independent software vendors (ISVs) to develop and accelerate a wide range of high performance computing applications to run on the Mitrion Virtual Processor (MVP) configured on FPGA-based computer systems. The software-centric Mitrion SDK is the fastest and easiest way to accelerate an application's performance by programming the computationally intensive part of the application to run on an FPGA. The Mitrion SDK is different from other FPGA programming solutions because it requires absolutely no hardware design skills or experience.

The Mitrion Software Development Kit Personal Edition is a free version of the Mitrion SDK that allows the developer to start developing accelerated applications for the MVP ahead of FPGA hardware. The Mitrion SDK PE is a complete development environment for Mitrion-C applications. It includes an IDE, a Mitrion-C compiler and a graphical debugger. It does not, however, include the capability to generate Mitrion Virtual Processors that will run in FPGA hardware. To do this, the commercial version of the Mitrion SDK is required. The Mitrion SDK PE can be downloaded for free at www.mitrionics.com.

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


Baylor College of Medicine to Use Applied Biosystems Genetic Analysis Technology as Part of 1000 Genomes Project

SOLiD™ System to Help Researchers Decide Best Approach for Sequencing Complete Human Genomes

HOUSTON & FOSTER CITY, Calif.--(BUSINESS WIRE)--Scientists at the Baylor College of Medicine Human Genome Sequencing Center (HGSC) will use high-throughput sequencing systems from Applied Biosystems (NYSE:ABI), an Applera Corporation business, for a significant part of their contribution to the first pilot phases of the 1000 Genomes Project, sponsored by the National Human Genome Research Institute (NHGRI), the Wellcome Trust and the Beijing Genome Institute. This project is a worldwide research effort that will involve the sequencing of 1,000 genomes from people from around the world to create the most detailed and medically useful picture to date of human genetic variation. The HGSC will acquire six SOLiD™ Systems in order to complete the work.

The data generated as part of the 1000 Genomes Project is expected to reveal clues about how variant DNA sequences contribute to conditions such as cancer, diabetes and heart disease. The HGSC is utilizing the SOLiD System to expand its contribution of the first phases of the project and help researchers to determine the best methods for sequencing these 1,000 human genomes.

The first phases of the project began earlier this month. When the pilot phase of this project is complete, the HGSC will have used the SOLiD System to generate approximately 200 billion bases of sequence data over a span of four months. This sequence data will consist of significant sequence coverage of 24 human genome samples and much deeper coverage of a single human genome sample. This amount of data is equivalent to the entire contents of GenBank, the largest public repository of DNA sequence data.

In the analysis of human genetic variation, the depth of coverage refers to the number of times each of the approximately 3 billion base-pairs of DNA from one genome is read by a genetic analysis system. Deeper coverage of a genome increases the confidence researchers have in characterizing the bases that exist at each position within a genome. As a result, researchers are better able to recognize the occurrence of variants in the genome. According to the HGSC, one goal of the pilot phase of the 1000 Genomes Project will be to determine the depth of sequence coverage from different data types that are needed in order to fully understand how sequences of DNA in the genome vary significantly between individuals.

One reason why the HGSC chose to use the SOLiD System for this project is because of its extremely high throughput capability. The SOLiD System has now demonstrated that it can produce greater than 10 gigabases per run, which is more than 3x genome coverage. The throughput of the SOLiD System establishes it as the highest throughput genetic analysis system available today. An ultra-high-throughput genetic analysis system will help enable scientists at the HGSC to complete this project in an efficient and cost-effective manner.
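
[As a back-of-the-envelope check on these figures: average depth is simply total bases sequenced divided by genome size. The even split across samples below is an assumption for illustration, since the release does not say how the 200 billion bases divide between the two arms of the pilot:

    GENOME_BP = 3e9          # ~3 billion base pairs per human genome
    TOTAL_BASES = 200e9      # projected SOLiD output for the pilot

    def depth(total_bases, genome_bp=GENOME_BP):
        """Average coverage: how many times each base is read, on average."""
        return total_bases / genome_bp

    print("pilot total, as one genome: %.0fx" % depth(TOTAL_BASES))       # ~67x
    print("one 10-gigabase run:        %.1fx" % depth(10e9))              # >3x
    # Hypothetical even split across 24 samples plus one deep genome:
    print("per sample, even split:     %.1fx" % depth(TOTAL_BASES / 25))  # ~2.7x
]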

“There is clearly a role for very high density data from platforms that generate read lengths in the 25-50 base range,” said Dr. Richard Gibbs, director of the HGSC at Baylor College of Medicine. “We believe that the SOLiD System will dominate in this arena. The production and pooling of data from multiple sources and platforms on the same samples in the 1000 Genomes Project will help researchers ultimately determine the genetic analysis platform of choice.”

Although most human genetic information is the same in all people, researchers are generally more interested in studying the small percentage of genetic material that varies among individuals. Researchers characterize genetic variation as either single-base changes – single nucleotide polymorphisms – or as a series of larger stretches of sequence variation known as structural variants. To effectively characterize SNPs in the genome for the 1000 Genomes Project, researchers must be able to distinguish real genetic variants from sequencing errors, which requires a highly accurate genetic analysis system. A combination of depth of coverage and a highly accurate genetic analysis system helps researchers identify the genetic differences that exist between individuals.

Use of higher accuracy genetic analysis systems will require lower depth of coverage to confidently characterize variants in genome samples. HGSC’s decision to use the SOLiD System for the 1000 Genomes Project was in part based on a comparative study of microbial and mammalian genomes sequenced by the SOLiD System and competing short-read platforms.

“Accuracy is vital, not just for the 1000 Genomes Project, but for all other applications, too,” said Donna Muzny, director of operations at the HGSC. “The internal error-checking strategy for the SOLiD System makes it superior for the read lengths that are produced.”

In deciphering the human genome, researchers strive to both accurately identify and locate genetic variants in the genome. Mate pair analysis, or the ability of a genetic analysis system to analyze pairs of sequences separated by a known distance between them – known as the insert size – allows researchers to determine the precise location of structural variants in the genome. Structural variants consist of gene copy number variations, single base duplications, inversions, translocations, insertions and deletions. For instance, by analyzing insert sizes up to 10,000 base pairs, the SOLiD System can cover sequences that span large regions of repeated patterns within the genome. Mate pair analysis also enables accurate placement of sequence reads in and around repeat regions, which can vary greatly among individuals. This capability of the SOLiD System will help researchers to accurately piece together large numbers of DNA sequence fragments as they assemble these sequences into entire human genomes.
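
[The insert-size logic lends itself to a compact sketch. The Python below is a deliberately naive rendering under simplified assumptions - exact expected insert size, no strand or orientation checks, no mapping uncertainty: mates landing closer together on the reference than the known insert suggest an insertion in the sample, and mates landing farther apart suggest a deletion:

    EXPECTED_INSERT = 10_000   # nominal mate-pair separation, base pairs
    TOLERANCE = 1_000          # allowed deviation before flagging

    def classify_pair(pos1, pos2, expected=EXPECTED_INSERT, tol=TOLERANCE):
        """Flag candidate structural variants from mate-pair geometry."""
        observed = abs(pos2 - pos1)
        if observed < expected - tol:
            return "candidate insertion"   # sample gained sequence between mates
        if observed > expected + tol:
            return "candidate deletion"    # sample lost sequence between mates
        return "concordant"

    # Toy reference positions for three mapped mate pairs:
    for p1, p2 in [(1_000, 11_050), (5_000, 9_200), (20_000, 45_000)]:
        print(p1, p2, classify_pair(p1, p2))

Inversions and translocations surface instead as anomalous strand orientation or as mates on different chromosomes, which this sketch does not model.]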

Once researchers determine the depth of coverage necessary to confidently characterize SNPs and structural variants in human genome samples, they will be able to make better use of genomic information. For instance, researchers will be able to more effectively use this information to understand how these variations are related to an individual’s susceptibility to disease and response to treatment for disease, which is the promise of personalized medicine.

“The HGSC’s decision to scale up on the SOLiD System for a large population resequencing project further validates the benefits of the platform for these types of studies,” said Shaf Yousaf, president of Applied Biosystems’ molecular and cell biology genomic analysis division. “The successful completion of this project should enable broader applications of genomic analysis and open the door for a host of personalized medicine and disease association studies. We believe this project will move life-science researchers a step closer to defining the precise relationship between human genetic variation and disease.”

About the SOLiD System

The SOLiD System is an end-to-end next-generation genetic analysis solution comprised of the sequencing unit, chemistry, a computing cluster and data storage. The platform is based on sequencing by oligonucleotide ligation and detection. Unlike polymerase sequencing approaches, the SOLiD System utilizes a proprietary technology called stepwise ligation, which generates high-quality data for applications including: whole genome sequencing, chromatin immunoprecipitation (ChIP), microbial sequencing, digital karyotyping, medical sequencing, genotyping, gene expression, and small RNA discovery, among others.

Unparalleled throughput and scalability distinguish the SOLiD System from other genetic analysis sequencing platforms. The system can be scaled to support a higher density of sequence per slide through bead enrichment. Beads are an integral part of the SOLiD System’s open-slide format architecture, which enables the system to generate greater than 6 gigabases of sequence data per run. The SOLiD System has demonstrated runs greater than 10 gigabases at customer locations. The combination of the open-slide format, bead enrichment, and software algorithms provides the infrastructure that allows it to scale to even higher throughput, without significant changes to the platform’s current hardware or software.

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Beijing Genomics Institute Signs Multi-Million Dollar Order for 11 Additional Illumina Genome Analyzers

Illumina, Inc. (ILMN) today announced that in early March the Beijing Genomics Institute (BGI) purchased 11 Genome Analyzers, increasing the organization's total number of Illumina sequencing systems to 17. BGI is the fourth genome center to expand its Genome Analyzer install base to double digits. As a result of recent enhancements to the Genome Analyzer, BGI researchers will have the capacity to sequence more than two human genomes to 25x coverage per week. This significant increase in throughput will expand BGI's capacity for sequencing services to local genetic research labs, as well as broaden the range and pace of sequencing projects undertaken by the organization, such as the YangHuang 99 project, the 1000 Genomes Project, the Giant Panda Project, the tree of life project, and numerous other large-scale initiatives.

"BGI is focused on accelerating the rate of scientific discovery, and expanding our understanding of genetic variation and diversity across human genomes. We have worked closely with Illumina to determine ways to dramatically increase our sequencing capacity in order to efficiently and quickly complete several new large sequencing projects," stated Xiuqing Zhang, Director of the Sequencing Division of the Beijing Genomics Institute. "After rigorously testing this sequencing platform, we opted to purchase the additional Genome Analyzers because of the machine's enhanced level of performance, price, and ease of use. Combined with the longer 500bp reads from other platforms, Illumina is the most suitable high throughput sequencing platform for accurate de novo genome sequencing. The collaborative relationship that we established with Illumina this past year also played a significant role in our decision to scale up to 17 Genome Analyzers."

BGI is among the world's leading scientific organizations, and aims to advance the understanding of biology and medicine through the use of large-scale sequencing and bioinformatics analysis. The institute also offers sequencing services to the international community. BGI promotes the use of genome-scale scientific approaches and strongly supports collaborative efforts in order to achieve this goal.

"BGI's decision to acquire 11 additional Genome Analyzers is further validation that our sequencing platform is delivering leading performance, and becoming the sequencing platform of choice," said Christian Henry, Acting General Manager of Illumina's Sequencing Business. "With its unmatched rate of daily output, ease of use, and proven paired-end sequencing capability, the Genome Analyzer will continue to provide the scalable solution researchers need to complete projects that were not possible one year ago."

Designed for facilities of all sizes, the Genome Analyzer has experienced rapid adoption across genome centers worldwide, including individual research labs, core and service facilities, and biotechnology and pharmaceutical companies. Specified to generate more than three gigabases of high-quality data over a five-day paired-end read run, the Genome Analyzer offers the highest rate of daily output and the simplest, most user-friendly workflow. The Genome Analyzer also offers the broadest set of supported applications, including those used to profile and discover novel miRNAs, to create a high-resolution genome-wide map of DNA-protein binding sites, or to sequence a whole human genome to greater than 30x coverage. The system's groundbreaking capabilities are further validated by the continuing stream of customer peer-reviewed publications, now numbering more than 50.

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Genetic Discrimination Law Passes Senate with Compromises

[April 24, 2008]

By Matt Jones

a GenomeWeb staff reporter


NEW YORK (GenomeWeb News) – The US Senate today unanimously passed the Genetic Information Nondiscrimination Act after senators agreed to compromises that had held up the bill since last summer.


After more than a decade of failed attempts to pass the law - which has passed both houses of Congress more than once by wide margins, though never in the same session - GINA is also expected to pass in the US House of Representatives as early as next week and be signed into law “in short order,” Kurt Bardella, press secretary for GINA sponsor Olympia Snowe (R – Maine), told GenomeWeb Daily News today.


GINA “passed the House twice with considerable support,” Bardella said.


The bill, which would protect Americans from discrimination based on information from genetic tests, sailed through the House a year ago by a vote of 420 to 3, but never made its way to a Senate vote after it was placed on hold by Senator Tom Coburn (R – Okla.).

Coburn has since raised concerns about the bill focused primarily on the potential for lawsuits and employers’ rights. In March of this year, he and ten other senators signed a letter to the White House listing their complaints, signaling a new threat to GINA’s passage.


“We believe the final draft of GINA should provide clarity to the health insurance industry, maintain the integrity of the underwriting process, and ensure accurate premium assessments,” the senators stated in the letter.


Now, the lawmakers have agreed to a key compromise that would salve Coburn’s concerns by adding language to create a "firewall" between the parts of the bill dealing with insurers and employers, an adjustment Coburn and the White House said was needed to protect them from some lawsuits, a source on Capitol Hill told GWDN today.


The agreement included other "minor" changes having to do with phrasing, according to the source, who asked to remain anonymous.


Senator Coburn’s office was not immediately available for comment to explain further its stance on the compromises.


As GWDN reported in March, when GINA passed the House as an amendment to another, unrelated bill, the Paul Wellstone Mental Health Act, Coburn had other concerns with GINA, and it was not immediately clear how many of these have been resolved.


According to Bardella, the Senate will call up the House version of the bill and then insert the changes into the new Senate bill before voting on it and sending it back to the House.


“The passage of GINA today represents the culmination of an effort that began more than ten years ago to put in place landmark protections to safeguard Americans against genetic discrimination,” Snowe said today in a statement.


"Utilizing genetic testing can improve health, reduce spending, and lower health care costs. Yet our laws have not kept pace with emerging technology, and doubts about the misuse of genetic information are preventing Americans from getting tested," she said.


Snowe’s office said it expects the House to pass the bill with unanimous consent and expects President Bush to put his name on it. “We expect the White House will be pleased to sign the bill into law,” Bardella said.


While the passage of the bill was the product of “thirteen years of work,” Genetic Alliance director Sharon Terry told GWDN today, the recent consensus was the result of “constant meetings between the White House, Republicans and Democrats.”


Terry described the effort for the compromise as a product of a “great conversation” between all parties involved and the engagement of the genetics community. The sudden advancement of consumer genomics businesses over the past year and greater discussion about the uses and ethics of genetic tests in the media could have helped push the bipartisan effort.


“Thirteen years ago,” Terry said, “we were talking about only a few tests, and now we’re talking about mainstream medicine.”


“Our challenge now is to make sure that doctors and patients are aware of these new protections so that fear of discrimination never again stands in the way of a decision to take a genetic test that could save a life,” said Kathy Hudson, director of the Genetics and Public Policy Center at Johns Hopkins University.


The pending passage of the bill also was lauded by the Personalized Medicine Coalition, a collection of industry, academic, payor, and other partners who advocate for the advancement of personalized healthcare.


“The guarantees provided by this legislation will encourage millions of Americans to use their genetic information to improve their healthcare, and to help prevent and treat cancer and other diseases,” Edward Abrahams, executive director of the PMC, said in a statement.


Mari Baker, president and CEO of consumer genomics services firm Navigenics, also applauded the Senate’s passage of the bill. “This is fundamental, foundational legislation that’s critical for the long-term ability of using genetic information to improve healthcare in this country,” Baker told GWDN today.


“With GINA passing, people can now have confidence that this information will not be used to discriminate against them, and this industry can move forward.”

[Good for the US Congress and Senate, the President, all Americans. Other countries are expected to follow.

.... - pellionisz_at_junkdna.com April, 25, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The $100 Genome

Forget the $1,000 genome. Some companies are looking far past that goal to create a really inexpensive sequencing technology.
Thursday, April 17, 2008

By Emily Singer

It currently costs roughly $60,000 to sequence a human genome, and a handful of research groups are hoping to achieve a $1,000 genome within the next three years. But two companies, Complete Genomics and BioNanomatrix, are collaborating to create a novel approach that would sequence your genome for less than the price of a nice pair of jeans--and the technology could read the complete genome in a single workday. "It would have been absolutely impossible to think about this project 10 years ago," says Radoje Drmanac, chief scientific officer at Complete Genomics, which is based in Mountain View, CA.

The most recent figures for sequencing a human genome are $60,000 in about six weeks, as reported by Applied Biosystems last month. (That's down from $3 billion for the Human Genome Project, which was sequenced using traditional methods and finished in 2003, and about $1 million for James Watson's genome, sequenced using a newer, high-throughput approach and released last year.) But scientists are still racing to develop methods that are fast and cheap enough to allow everyone to get their genomes sequenced, thus truly ushering in the era of personalized medicine.

Most existing technologies detect the sequence of DNA a single letter at a time. But Complete Genomics aims to speed the process by detecting entire "words," each composed of five DNA letters. Drmanac likens the technology to Google searches, which query a database of text with keywords. Further speeding up the process with novel chemistry and advances in nanofabrication, the companies will develop a device that can simultaneously read the sequence of multiple genomes on a single chip.

To accomplish the new sequencing, scientists first generate all possible combinations of five-letter DNA segments, given the four letters, or bases, that make up all DNA - 4^5, or 1,024, distinct "words." These segments are labeled with different types of fluorescent markers and added in groups to a single-stranded molecule of DNA. When a particular segment matches a sequence on the strand of DNA to be read, it binds to that part of the molecule. A specialized camera then snaps a picture--the different fluorescent signals indicate the sequence at specific points along the strand of DNA. The process is repeated with different five-letter DNA combinations, until the entire chromosome is sequenced. The approach is feasible because of the recent availability of cheap DNA synthesis, making it much more efficient to generate libraries of these DNA segments.
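
[A few lines of Python make the "words" idea concrete - a toy model only, with none of the chemistry: the probe library is all 1,024 five-letter words, and hybridization is reduced to exact string matching against a single strand:

    from itertools import product

    BASES = "ACGT"
    # Every possible five-letter DNA "word": 4**5 = 1,024 probes
    library = ["".join(w) for w in product(BASES, repeat=5)]
    assert len(library) == 1024

    def probe_hits(target, word):
        """Positions where a labeled 5-mer probe would bind the target."""
        return [i for i in range(len(target) - 4) if target[i:i + 5] == word]

    target = "GATTACAGATTACA"   # toy single-stranded template
    for word in ("GATTA", "TTACA"):
        print(word, "binds at", probe_hits(target, word))

Querying the whole library and recording which label lit up where recovers the sequence word by word - the keyword-search analogy Drmanac draws.]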

Each DNA molecule will be threaded into a nanofluidics device, made by Philadelphia-based BioNanomatrix, lined with rows of tiny channels. The narrow width of the channels--about 100 nanometers--forces the normally tangled DNA to unwind, lining up like a train in a long tunnel and giving researchers a clear view of the molecule. "Since we can stretch out DNA, we can get a huge amount of information from each piece of DNA we look at," says Mike Boyce-Jacino, chief executive officer of BioNanomatrix. "The big difference from any other approach is that we are looking at physical location at the same time we are looking at sequence information." Sequencing methods currently in use sequence small fragments of DNA and then piece together the location of each fragment computationally, which is more time consuming and requires repetitive sequencing.

The companies still have a long road to the $100 genome. BioNanomatrix has already shown that long pieces of DNA--two million letters in length--can be threaded into the channels of existing chips. But now researchers need to develop chips with many more channels, so that multiple genomes' worth of DNA can be sequenced simultaneously.

The main hurdle for Complete Genomics will be to generate fluorescent labels that can be easily and accurately detected. Most current methods get around this problem by making many copies of the same DNA molecule and sequencing them simultaneously, thus boosting the signal-to-noise ratio. But that approach limits the length of the piece of DNA that can be sequenced, and it increases cost by increasing the amount of chemicals needed for the reaction.

The project is part of the Advanced Technology Program, funded by the National Institute of Standards and Technology to spur development of novel, high-risk technologies. This year, Complete Genomics is releasing a commercial product based on similar chemistry, but the company has declined to give details on its status.

The technology necessary to achieve a $100 genome is still at least five years away, says George Church, a geneticist at Harvard Medical School, in Boston, and a member of Complete Genomics' scientific advisory board. "But [it's] coming from a company that has an almost-as-good technology coming out this year."

Both Drmanac and Boyce-Jacino say that one of the biggest advantages of their technology will be the ability to sequence very long strands of DNA. The newest sequencing technologies in use today read DNA in fairly short spurts, from about 30 to 200 letters, which are then stitched together by a computer. This approach works well for some applications, such as resequencing a known genome. But a growing number of studies suggest that small structural changes in DNA, such as deletions or inversions of short sequences, play a significant role in human variability, says Jeff Schloss, program director for technology development at the National Human Genome Research Institute, in Bethesda, MD. "Those are much harder to pick up with short reads."
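
[Schloss's point shows up even in a toy Python example of exact-match read placement. In a reference with repeats, a short read lands in several places at once - which is why structural changes inside repeats are hard to pick up - while a longer read resolves the ambiguity:

    def placements(read, genome):
        """All positions where a read matches the reference exactly."""
        return [i for i in range(len(genome) - len(read) + 1)
                if genome[i:i + len(read)] == read]

    genome = "ACGTACGTACGTTTGACGT"        # toy reference with a repeat
    print(placements("ACGT", genome))     # [0, 4, 8, 15]: ambiguous
    print(placements("ACGTTTG", genome))  # [8]: the longer read is unique
]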


Longer reads will also allow scientists to look at collections of genetic variations that have been inherited together, known as haplotypes. This kind of analysis can determine if a particular genetic variation has been passed down from the individual's mother or father. Recent research suggests that in some cases, maternal or paternal inheritance can impact the severity of the disease. With new tools to better track inheritance patterns, scientists may discover that this phenomenon is more common than previously thought. "That's one reason we're hoping that several of the emerging methods will allow long reads," says Schloss.

[There is much talk about $1,000 sequencing - though this column reported, months ago, on the technology aiming at a $100 genome within a few years.

.... - pellionisz_at_junkdna.com April, 17, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

New technologies speed development of personal genomes ["Jim Watson project"]

Webcast video by Kevin Davies (Bio-IT World) with David Wheeler

HOUSTON -- (April 16, 2008) -- Sequencing the personal genetic blueprint of Nobel laureate and DNA pioneer James D. Watson – first announced nearly a year ago – signals a new technological era that points the way to development of techniques that will make personal genomic medicine possible, said researchers from Baylor College of Medicine in Houston and the company 454 Life Sciences – Roche Diagnostics in a report published today in the journal Nature.

Realizing the potential of personal genomic medicine will require overcoming the barrier presented by the enormous size of the human genome – 6 gigabases. Sequencing the Watson genome took two months using new sequencing machinery and techniques. The cost was one-hundredth that of more traditional methods, said Dr. David Wheeler, first author of the Nature paper and a leader in the effort. Wheeler is also an associate professor of molecular and human genetics at BCM.

Identify sequences associated with disease

"A key aim of personal genome sequencing is to identify genome sequences that may be associated with disease, or are predictive of response to medication. The need to make genotype (gene-based)–phenotype (symptom or outward signs) correlations before having predictive value is at the heart of both the excitement and the dilemma of the new era of genomic medicine. Thus the ability to sequence individuals readily using highthroughput, scaleable, low-cost, completely in vitro technology, as demonstrated here, is an important milestone in our ability to connect 'personalized genomes' to 'personalized medicine' and enable these critical correlations to be made," the authors wrote.

The new technique also enabled researchers to sequence areas of the genome that had not previously been identified by traditional means; novel genes in these areas had also been missed previously.

Next-generation technology

"This is the first genome sequenced by next-generation technologies. Therefore it is a pilot for the future challenges of 'personalized genome' sequencing," the authors wrote.

The use of 454 Sequencing machines enabled the rapid elucidation of Watson's DNA with extremely low error rates, said Dr. Richard Gibbs, director of the Baylor Human Genome Sequencing Center. The Watson project is a first step in that direction.

Others who took part in the work include Yufeng Shen, Lei Chen, Amy McGuire, James R. Lupski, Craig Chinault, Xing-zhi Song, Yue Liu, Ye Yuan, Lynne Nazareth, Xiang Qin, Donna M. Muzny and George M. Weinstock, all of BCM, and Maithreyan Srinivasan, Michael Egholm, Wen He, Yi-Ju Chen, Vinod Makhijani, G. Thomas Roth, Xavier Gomes, Karrie Tartaro, Faheem Niazi, Cynthia L. Turcotte, Gerard P. Irzyk, and Marcel Margulies of 454 Life Sciences – Roche Diagnostics. Jonathan Rothberg of Rothberg Institute for Childhood Diseases in Guilford, Connecticut, is a senior author.

Funding for this work came from 454 Life Sciences and Baylor College of Medicine.

["Personal Genomics" is already an industry at the level of investigation of a fraction of human DNA (by microarrays). Dr. Watson's full genome was obtained by a "Next-generation sequencer" (pyrosequencing by 454) and the next-next generation sequencing (with nanotechnology) is not far behind with a $100 genome. The video explains that drawing immediate medical- and quality of life conclusions from full sequences is already a sharp demand (albeit it is too early to satisfy) - but the individual demand by Dr. Watson multiplied by hundreds and later millions of people to be sequenced will create a market that industry is already positioning.

.... - pellionisz_at_junkdna.com April, 17, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

GNS: Building a SNPs-to-Outcomes Engine

By John Russell
Bio-IT World

April 16, 2008

Personal and consumer genomics are boiling right now. The excitement, fueled by the plummeting costs of DNA sequencing and the rise of companies like 23andMe, feels tangibly different from many earlier biotech trends. It feels more concrete, as if, for good or ill, something interesting and vast is about to happen. It’s easy to imagine a raucous Wild West period during which traditional therapeutics, nutraceuticals, and “lifestyle” applications create a boisterous, buyer-beware marketplace and stir regulatory confusion.

Yet sifting through the coming data flood and connecting the dots in ways that accurately predict SNPs-to-outcomes remains a huge challenge. Gene Network Sciences CEO Colin Hill says flatly, "GNS is gunning to be the first group that really breaks this open, by having a scalable, supercomputer-driven automated platform that can turn that raw data into discoveries of the key SNPs driving the outcomes."

To hit that target, this pioneer in data-driven systems biology (see GNS Charts “Unknown” Biology) is pushing ahead forcefully. Just this week Gene Network Sciences (GNS) announced that biosimulation pioneer and MacArthur award winner James Collins, currently at Boston University, will join the company as Chief Scientific Officer, a new position. This is an important gain for GNS and adds more scientific heft to the company...

SBNL: Is GNS attacking different questions than it was a year and a half ago?

Hill: The new question that’s become a key focus area for the company is going from “SNPs to outcomes.” We weren’t as focused on DNA sequence information [before]. We hired somebody who’s driving that effort. We’re also driving a lot of our own discovery, and we’ve formed partnerships with academic groups such as the Moffitt Cancer Center in Florida and the Weill Cornell Medical School in New York. That’s enabling us to go after some of our own discoveries in addition to the big pharma and biotech collaborations.

But this SNP-to-outcome problem is a really big one. With all the progress that groups like Steve Turner’s company (Pacific Biosciences) and others like deCODE or 23andMe are making, the data is going to be there. We have a lot of information on the variations that make us all different and determine our disease progression and response to therapeutics. But we have a big problem determining which of the three million genetic variations are causative of the outcomes. That’s a very difficult computational problem that nobody has solved. You’re not hearing about that in all the articles in The New York Times about the race to very fast genomes....

We’ve seen many companies that started as platform companies make a transition to become drug discovery entities. The question of being commercially focused and how well you’re doing there is not necessarily just about revenue growth and profitability, because if you’re investing in drug discovery, that’s a longer-term play with a different financial profile.

SBNL: Is that what GNS is trying to do? Are you trying to fund your platform development through R&D collaborations while the real goal is to generate, capture, and commercialize biological IP?

Hill: You’re mainly right. We’re not planning to become a drug company. We understand where our expertise is. We think we’re the best in the world at data-driven computation in this sphere. We have no desire to try to bring on capabilities that are well outside of that, [such as] medicinal chemistry and such. I’ve been thinking about this for ten years. I can’t say I have the ultimate solution and when you look around at the industry it’s hard to say that anybody’s really cracked the nut on this.

SBNL: No one has.

Hill: Everybody agrees the drug discovery industry has to change. Pharma’s in the toilet with Wall Street and everybody’s calling for gloom and doom and such. Everyone agrees there need to be new tools to advance the state of the art. The pharma companies know this better than anybody. However, companies that have breakthrough technologies still have a hard time commercializing those technologies and capturing some of the upside from them. From an investor’s point of view a lot of these platform companies have not performed well.

There ends up being a tension over the extent to which one needs to take the platform down the drug discovery pathway in order to capture real value, versus the value of the technology [itself]. The classic thing is firing all your discovery people and then hiring regulatory folks, or even in-licensing a compound and becoming a drug maker, and I feel like that can’t be the right answer.

SBNL: So, is the idea wrong? Are systems biology companies, like all platform companies, naïve to think they are going to change the drug game in a significant way in a five- or even six-year window? Is it simply that technology development doesn’t pay off, or that drug discovery is inherently too messy and chancy and takes too long?

Hill: No. No. I don’t think so. I honestly think it’s the technology. I think many of the approaches that dominated the early days of systems biology were off the mark. People will look back some years from now and say those approaches couldn’t have worked. There was too much unknown about biology, it was too complex, and there wasn’t enough data. I’m referring to approaches based on literature as the starting point, whether it’s assembling that information together into databases so you can visualize your molecular profiling data in this context or it’s doing the simulation models based on literature information. I think those approaches have inherent limitations.

I’ve said to many people that for a number of years GNS was misguided. [Our] approach of trying to model all of the known pathways involving cancer cell biology had its merits, certainly as an academic effort, and had some use in the commercial setting, but I think it was limited. We’ve come to the realization that most of the biological circuitry of human cells is unknown; 95% of it’s unknown. So, knowing only 5% of the information from all of the tens of thousands of papers that you could assemble from Nature, Cell, Science, and such over the last few decades, how and why do we think we should have accurate simulations and predictions about the phenotypes and clinical outcomes based upon such fragmentary knowledge? The answer is we shouldn’t.

If I gave you a computer chip and 95% of the circuitry was missing and I told you, John, can you predict what happens to this computer chip when I perturb that part and that part and that part and you’d say hell no. How could you? Most of the circuitry is unknown. I feel like the classic approach to systems biology, which GNS was a part of, didn’t really have a chance at having great impact on the discovery and development process. Without more of the guts of the system known, your solving a bunch of nonlinear differential equations wasn’t really going to cut it.

We first need to discover the key molecules driving disease progression. We have to discover the key molecules driving drug response, from both the efficacy and the safety perspective. There have been some recent papers from the Cancer Genome Anatomy Project and [Bert] Vogelstein’s group at Johns Hopkins showing a huge amount of heterogeneity in human tumors; lung cancer, breast cancer. Assuming we believe those results, this is telling us that something is misguided about the view that there is this canonical uber model that controls disease progression and is going to be common to everybody.

SBNL: If that’s true, what does it mean for the GNS value proposition?

Hill: These realizations were a big part of GNS shifting gears four years ago and really focusing on an inference-based approach, or combining inference with simulation. We don’t believe in simulation [alone]. You just have to have the more complete system to start with, which is the big difference from engineering approaches, which have been able to start with a complete blueprint. One of my favorite examples is a blueprint of the circuitry of a Nokia cell phone. It’s a very complicated circuit diagram, but the point is an engineer built it. They know it. They know all the parts of it, so they can predict what happens when you perturb things. It’s not the case in biology. God didn’t hand us down a blueprint and say this is how human physiology works. We’ve discovered little bits and pieces here and there. We have a big challenge, which is to infer that circuitry before we can simulate outcomes.

So back to your question. The value proposition for our partners, whether pharma partners or academic clinics, is that we now have the tool, scaling with the power of IBM’s largest supercomputers, that allows us to take in heterogeneous data from a variety of sources and actually discover the causal regulatory models connecting either genetic perturbation or drug perturbation to the molecular entities, be they genes or proteins or metabolites, and to the clinical outcomes they’re driving.

SBNL: Is the model still basically based on up or down regulation of genomic components?

Hill: That’s a typical kind of perturbation one would do to a model that’s been reverse engineered. So, for example, in a project around Type II diabetes there are a number of endpoints such as blood chemistries, body weight, cholesterol, a number of things being measured. In this case, I think there were 18 different endpoints. The inputs were various drugs and data discovered from proteomics profiling from serum in animals and a number of entities. We inferred the causal connections between the drugs, proteins, and a variety of endpoints from this reversed engineered model. I’m drawing a single picture, but we actually reverse engineer a thousand models to create an ensemble of models that have a lot of overlap, but some real differences.

Perturbations are made throughout the system: turning the dose of a drug up, then knocking one of the proteins up or down, and measuring how that changes some number of endpoints. What you end up with are outcomes. Let’s say it was body weight you were tracking. We can dial up the dose of drug one in silico and observe the body weight. Say knockdown of protein 47 results in a further shift in body weight. Going through a systematic knockdown of these various protein components, we now reveal that protein 47 is having this major effect on body weight in the presence of drug one; therefore it’s a key marker, a potential co-drug with drug number one, and we can tell you that with x amount of confidence. Or we can tell you there isn’t enough data, because instead of a nicely clustered histogram of results across all one thousand models, we’re seeing something that’s all over the place.
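
A minimal sketch of the ensemble idea Hill describes, with the model structure and every number invented for illustration (real REFS models are inferred from data, not drawn at random): knock a protein down across a thousand models and ask whether the predicted endpoint shifts cluster tightly.

    import random
    random.seed(1)
    N_MODELS, N_PROTEINS = 1000, 50

    # Stand-in for inferred models: each is a weight vector over proteins,
    # sharing common structure plus model-to-model disagreement.
    base = [random.gauss(0, 0.5) for _ in range(N_PROTEINS)]
    models = [[b + random.gauss(0, 0.05) for b in base] for _ in range(N_MODELS)]

    def endpoint(weights, state):          # e.g. predicted body weight
        return sum(w * s for w, s in zip(weights, state))

    baseline = [1.0] * N_PROTEINS
    knocked = baseline[:]
    knocked[46] = 0.1                      # in-silico knockdown of "protein 47"

    shifts = [endpoint(m, knocked) - endpoint(m, baseline) for m in models]
    mu = sum(shifts) / len(shifts)
    sd = (sum((s - mu) ** 2 for s in shifts) / len(shifts)) ** 0.5
    # A tight histogram of shifts means a confident call on protein 47;
    # a spread "all over the place" means not enough data.
    print(f"endpoint shift: mean {mu:+.3f}, spread {sd:.3f}")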

SBNL: Would there be at least some starting hypothesis -- that the result of down-regulating protein 47 is that glucose transformation is slowed, etc.?

Hill: The hypothesis generation and testing is completely automated at the outset, although once models are built, the user can test many hypotheses. Only at the very end do we annotate the molecular components you’ve discovered to be the key drivers. This is an attempt to reconstruct the system that gave rise to the data. Essentially it’s directed, data-driven, high-throughput guessing based on some very solid statistical and mathematical principles. But I think it’s a rather profound thing that a lot of biologists have a hard time digesting: the concept of discovering things by computer, discovering models and biological mechanisms, not in the traditional wet lab.

SBNL: Is most of GNS’s current work in discovery, or is it in the comparison of compounds?

Hill: That’s a very good question. I want to say it’s about half and half; there is a good mix at this point, and across a variety of data types. Like I said, we’re doing our first set of projects in genomics proper, meaning sequence data as opposed to molecular profiling. The team is now operating at a different level in terms of the number of projects they can execute simultaneously. It’s putting the platform we’ve been investing in to the test.

This is what we were practicing for and developing and investing for all these years, and we’re starting to see it really pay off. I mean, the scalability of this approach goes well beyond whatever you can do manually. Part of the beauty of this approach is that it is automated. You have to do some statistical analysis of the data ahead of time. You have to understand the experimental design. Often we work with a collaborator to design the experiments in the first place. But once the data is in the right form, the process of reverse engineering the models and then doing the simulations to discover the key molecules driving outcomes, that part’s pretty fast.

SBNL: Do you have sufficient staff to take on a lot more work, or would you need to scale staff-wise to take on more projects?

Hill: Our operational model scales very well with additional projects. Investing in this machine-driven, data-driven approach, it’s a different bet — you’re investing in the automation of the software, you’re investing in hardware, you’re investing in access to data; we’re buying more data, funding new data with collaborators, and that’s getting cheaper. The cost of computing power is a third of what it was only a few years ago. So you have to take a step back and ask, well, what’s going on here and where is this going in the long run? At some point machines will win. I believe in machines. I believe that computers will be the main driver of discoveries in the biomedical world in the near term.

SBNL: When you say near term, what does that mean? Is it ten years, five years?

Hill: No, near term is two-to-five years.

SBNL: Really?

Hill: I think the growing amounts of data [will drive it] – I don’t mind being a contrarian.

SBNL: Aren’t you the one who says that 95% of biology is unknown? It seems to me that that’s a lot of ground to cover in two to five years.

Hill: It’s not that one has to unravel all of the unknown biology and anticipate every next discovery of small interfering RNAs and micro RNAs and things that people are finding. That’s not what I’m talking about. Our challenge, the industry’s challenge of developing a deeper mechanistic understanding of disease progression and the drug response, doesn’t necessarily need every last piece of biological knowledge. But we do need to be able to discover the key things really driving the outcomes.

SBNL: Is biopharma more or less enthusiastic about this approach today?

Hill: More enthusiastic, but it’s a sober enthusiasm. It’s more specific, about solving problems. For example, combo therapies in cancer. This is something that a lot of companies need to solve. They can’t run enough clinical trials to explore all the combinations of standard-of-care therapies with their new targeted cancer drug. So here’s an area where this kind of approach has a clear win: starting from single drugs applied at multiple doses in your biological system, we have a platform that can combine those drugs in two-way or three-way combinations and determine the most synergistic combinations and the dose ratios needed to get those results.

SBNL: What projects is GNS currently working on? Can you give us a picture of the business? How long is a typical project?

Hill: We try to turn most projects around in a few months. A lot of that time goes to setting up the statistical analysis and interfacing with the partner; that part doesn’t go away. Right now there are on the order of a dozen projects going on, with a mixture of pharma, biotech, and clinical research labs.

SBNL: Who are some big commercial collaborators?

Hill: So I can cite Pfizer and CombinatoRx. The academic partners are also important and are becoming more commercially focused these days, that’s clear. You see more and more partnerships between big pharma and these groups.

SBNL: How does GNS start developing its own biological IP?

Hill: About a year and a half ago we hired a patent lawyer from New York, Tom Neyarapally - he’s our VP of Corporate Strategy and IP - and his main mandate was to really broaden the GNS business model and to monetize the REFS (Reverse Engineering Forward Simulation) platform in other ways. He really led the effort to start the collaborations with groups like Moffitt and Weill Cornell Medical School and other such places. The main thrust of the business is capturing this IP, diagnostic or therapeutic related.

SBNL: That’s through the academic collaborations where they might be more willing to share the IP?

Hill: That was the idea. It’s turning out that commercial collaborations are also able to [yield], more or less, this IP.

SBNL: Biological IP?

Hill: Yes. That’s actually really important and it is an important part of the business model going forward. We’re not going to turn into a drug maker or a diagnostic maker. But we see ourselves as a discovery engine. That is what GNS is about.

SBNL: What are the therapeutic areas in which you are working?

Hill: The big focuses, in terms of our internal discovery, are oncology, naturally, metabolic disease, meaning diabetes, Types I and II, and Alzheimer’s.

SBNL: Is there a group dedicated to internal research, as it were? If so, how big is it?

Hill: Yes. It’s pretty small. We’ve been dealing with the question of how we allocate expenses to these different groups, but it’s the same platform. I like to cut things up because it’s easier to explain the business model to investors and other people, but I’ve kind of come full circle with that thought. At the level of the platform, the process, and the work being done, it actually doesn’t make a difference — the scientists doing the work probably don’t even know what aspect of the IP a contract says we own as a result.

We know we’re never going to become a multi-billion-dollar company just doing service deals. We believe that data-driven computational discovery is the way forward and is becoming a bigger and bigger part of the process. The challenge of how to capture the upside is a challenge everybody has dealt with, from deCODE to Millennium to Entelos, across the board.

SBNL: Repositioning drugs seems fashionable right now; do you have any activity there?

Hill: Some of the work we’re doing in combination therapies can fit there, and our collaboration with CombinatoRx as well.

SBNL: Will you need to raise additional funds?

Hill: We do not have to raise money this year. We probably will. We did a little raise last year and ended up being over-subscribed by a lot, so that actually put us in a good position. We’re still cautious. My feeling is that there’s more competition coming from outside of the usual suspects in systems biology coming after some of these kinds of problems. Going from SNPs to outcomes, you better believe the competition is heating up there, and from groups you may not normally associate with this problem, such as Microsoft or Google.

My feeling is they are starting to think about these things a lot more and they’re coming with some really heavy machinery. I think you’re going to see some surprises in the next year or two as far as who tries to stake a claim in this area. Their approaches are going to be completely different than the classical systems biology based approaches.

SBNL: Milestones for the next 12 to 18 months?

Hill: SNPs to outcomes across a few different therapeutic areas — that’s what we’re gunning for: really being able to relate SNPs to changes, and those changes to outcomes. Essentially, to be able to do in an automated fashion, in weeks or months, what Eric Schadt at the Rosetta/Merck group did over the course of a couple of years. Number two is combo therapies in oncology: to be the first to take single-drug, multiple-dose data sets and explore very quickly billions and trillions of drug and dose combinations of cancer drugs, and discover the most efficacious combinations and the corresponding markers that indicate the patients who will have the strongest response.

That’s all I care about. We have doubled down on our investments in technology. We are out there buying up data, partnering to get data, and the things that were clear bottlenecks to GNS a year and a half or two years ago are not there anymore. Could we do more with more money? Absolutely. We’d love to blow this out in a bigger way, and I think the issue I’ll be dealing with over the next year or 18 months will be when to possibly pull the trigger and accelerate. It’s a bet. If we’re right — and this is the way forward — this will yield discoveries at a pace and a scale that have never been seen before.

[Very exciting ... one can re-live the vibes of the "Internet boom" - but this time with a very heavy-duty scientific revolution in the background of the mushrooming start-ups. Dealing with just SNPs (point mutations) is tremendously exciting - but we have to reckon that point mutations (and their possible correlation to certain "junk DNA diseases") are just the surface of the challenges. "SNP hunting" in most cases does not touch copy number variations, insertions-deletions (indels), tandem and n-tuplet repeats, etc. Most importantly, all of these approaches are empirical - until conceptual breakthroughs put "post-ENCODE Genomics" on an axiomatically different (algorithmic) track. This boom appears to be limitless - it will certainly outlast the wildest expectations.

.... - pellionisz_at_junkdna.com April, 18, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Sydney Brenner Urges Cancer Researchers to Consider 'Bedside to Bench' Approach

[April 15, 2008]

By Andrea Anderson

GenomeWeb staff reporter


SAN DIEGO (GenomeWeb News) – In a speech that touched on genomic research, translational research, and personalized medicine, Nobel laureate Sydney Brenner challenged conventional wisdom on Monday, proposing a “bedside to bench” approach to research and medicine.


Brenner made the remarks after receiving the AACR-Irving Weinstein Foundation Distinguished Lectureship award yesterday at the annual American Association for Cancer Research meeting. He received a standing ovation for the speech, in which he questioned the view of translational research as a way to apply basic research to clinical practice.


“What I’m advocating is: go the other way,” Brenner said. “Let’s go from bedside to bench.” Instead of compartmentalizing research and medicine, he said, the two should be integrated so that physicians, who are most familiar with human “phenotypes,” can inform the other arms of science.


Brenner, who was one of the 2002 Nobel Prize winners in Physiology or Medicine, helped discover messenger RNA. He also pioneered the use of the soil roundworm Caenorhabditis elegans as a model organism, opening the door for new insights into developmental biology, aging, and programmed cell death.


But these days he’s pushing a new model organism: humans. “We don’t have to look for a model organism anymore,” Brenner said. “Because we are the model organisms.”


As researchers unravel the genome, it’s easier than ever to evaluate human biology directly, rather than extrapolating it from research on other animals, he said. Human research happens all the time in society, from families and communities to governments and religions, Brenner mused. “Why don’t we now use this to try to understand our genomes and ourselves?”


He acknowledged that there are still challenges to interpreting genetic information. But Brenner argued that the extensive variation between individuals could hold a wealth of information. “It is the variation that has become the interesting thing to study,” he said.


Even so, completely analyzing the genetics of tens of thousands of humans remains technically impractical — and prohibitively expensive. Even as sequencing becomes cheaper, Brenner noted, interpreting the data will likely remain challenging.


“What we need, actually, is a view of all this that tests hypotheses all the time,” Brenner argued. This includes studying “human mutants” — something that may not be as difficult as it sounds given that, “We’re all mutants, basically. It’s hard to find a wild type.”


And humans’ interactions with their environment cannot be ignored either, Brenner said. Our “100,000 year-old genome” — the genome we acquired in ancient history when human biological evolution stopped and cultural evolution took over — is no longer suited to our environment, he said.


“We find we have a biological system now maladapted to an environment that the biological system has created,” he said. That means dealing with disease should not just involve patching biological problems, he argued, but also understanding biological-environmental interactions and using them to predict and prevent disease whenever possible.


“It is the responsibility of us to see that if there’s a way of preventing something — of not allowing it to happen — I think that we should follow that path,” Brenner said.

[While in the present "Genome Revolution" new Technologies (next-generation sequencing, computational methods, etc.) came to the limelight, it will always be true that breakthroughs will be achieved by paradigm-shifts in Basic Research. Nobel Laureate Sydney Brenner's "challenging of the present status quo" is, therefore, much welcome - as one of the leading voices "Post-ENCODE" to come, within a year (since the 14th of June, 2007), out of a shell-shock and to define new directions. Emphasizing the variation in the human genome (residing mostly in what was proposed to be put in the attic as "junk" for later analysis), perhaps the most important item on the agenda is "to test all hypotheses". Indeed, Algorithmic Approaches that until lately have been regarded as "avant-garde" are eminently testable (and have been tested, moreover with confirmatory experimental results published in peer-reviewed science publications). Dr. Brenner's bold initiative will much accelerate these advances.

.... - pellionisz_at_junkdna.com April, 15, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The DNA Data Deluge

Bio-IT World
By Kevin Davies

April 2008 Cover Issue

Researchers are enthralled by the astonishing throughput and possibilities of next-generation DNA sequencing platforms. But how are they going to manage the torrents of data these prodigious instruments produce?

April 1, 2008 | On March 6, the employees of Expression Analysis (EA) in Durham, North Carolina, took delivery of the first ever commercial single-molecule DNA sequencing instrument, developed by Helicos BioSciences. Understandably, there was a lot of excitement when "that baby rolled through the door," says EA director of marketing Karen Michaelo.

The HeliScope weighs a little under 2000 pounds, and comes with a separate 32-CPU Dell PowerEdge server stack and a whopping 24 Terabytes (TB) of storage, which will be installed at EA by a pair of Helicos employees. "We're going to skin our knees for a while," says Michaelo, who anticipates offering sequencing services from the platform this summer.

During a tour of the Helicos production floor in Cambridge, Mass., senior VP of product research and development Bill Efcavitch stressed the platform's processing prowess. Customers are going to need it. Today, the HeliScope produces 25 Mb of sequence per hour, but Efcavitch predicts a greater than tenfold increase within the next two years, putting the $1,000 genome firmly within reach.

University Challenge

The challenge facing next-generation platform users is how to manage the data glut pouring forth from 454, Illumina, and Applied Biosystems (ABI) instruments and the like. "Fundamentally what the community suffers from is, there's no best practices guide for setting up a lab with this equipment," says Geospiza founder Todd Smith.

For example, Smith estimates an Illumina Genome Analyzer produces 1500 times more data per run than an ABI 3730 capillary instrument. Even the major genome centers are scrambling. At the Broad Institute, a warehouse of 100 ABI 3730s produced 60 Gigabases (billion bases) of sequence last year, estimates director of informatics for sequencing platforms Toby Bloom. The institute's 20 Illumina instruments currently generate 20 Gigabases per week, and this could double in the near future.
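
Those figures make the scale-up easy to quantify; the per-instrument comparison below is simple arithmetic on Bloom's numbers, not reported specifications:

    capillary_gb_per_year = 60                  # 100 ABI 3730s, last year
    illumina_gb_per_week = 20                   # 20 Genome Analyzers, today
    illumina_gb_per_year = illumina_gb_per_week * 52   # ~1,040 Gigabases

    print(illumina_gb_per_year / capillary_gb_per_year)          # ~17x overall
    print((illumina_gb_per_year / 20) / (capillary_gb_per_year / 100))
    # ~87x more sequence per instrument, before the anticipated doubling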

"There's not even enough local storage on the data collection machine to complete a single paired end run, in the way we're using them, which means we have to be much more aggressive about moving data through the pipeline. So we've shifted to a pull model rather than a push model for moving the data along," says Bloom's colleague Matthew Trunnell, group leader of application and production systems. (See, "A Broad View")

A run on ABI's SOLiD instrument produces about two Gigabases, says ABI's Michael Rhodes, sounding somewhat awestruck. "I don't think I've really seen anyone who, the first time with [SOLiD], has not been somewhat overwhelmed by that amount of data. They may have rationally dealt with it in their head, but there really are challenges in just moving that amount of data around." Rhodes' team sometimes resorts to burning data on hard drives to ship between sites.

"A lot of smaller [organizations] might have only a few next-generation instruments, but they think about the science primarily," says BlueArc research markets director, James Reaney. "They have no ability to handle data of this size easily."

SGI's global higher education and research solutions manager, Deepak Thakkar, hears scientists complain about offloading Terabytes of data offline before they can start a new run. Says Thakkar: "Even if I don't keep 90% of the data, I still need to do the right thing before I decide to trash it. I don't need to keep every 5 or 10 TB, but I need to know what to keep!"

"The data glut is a huge problem," agrees Steven Salzberg, a bioinformatician at the University of Maryland. Salzberg says his lab (using Illumina instruments) "keeps the sequences and quality values, and throws away pretty much everything else almost right away. It's cheaper to re-sequence than to store the raw data for the long term."

With hundreds of next-gen instruments deployed in the past 1 to 2 years, and more platforms on the horizon, the deluge is just beginning.

Helping Hands

Illumina CIO Scott Kahn says the Genome Analyzer ships with "an instrument control PC that has adequate storage to collect all the images, so at the end of the run, they can be transferred and then processed with an offline pipeline that runs across a Linux cluster."

Kahn characterizes three broad groups of users. The most proficient customers, including genome centers such as the Broad, Wellcome Trust Sanger Institute, and Washington University, transfer data in real time as the runs are proceeding to gauge run quality and determine whether additional cycles are desirable. A second group of "quite sophisticated" users "will use mechanisms that we've provided to transfer data off the machine after the run has completed." The third camp doesn't have dedicated bioinformatics resources, and puts a premium on ease of use.

ABI offers a 12-core server with its SOLiD instrument, providing 9 TB of storage, which holds the images during the analysis and some of the results. "When you do the next run, you get rid of the pictures," says Rhodes. Problem solved!

Helicos' Efcavitch is realistic about the data handling challenges. "The amount of image data we're collecting is staggering," he admits. During the data acquisition phase, lasting a week or more, the HeliScope Analysis Engine cluster processes the image data in real time down to a sequence file. "To store that data would be just cost prohibitive. So we take the image data, we process it so all we're saving is 1 percent of the image data for diagnostic purposes and a sequence file."
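
The pattern Efcavitch describes is stream-and-discard: reduce each image to base calls as it arrives, and retain only a small random sample of raw images. A sketch under those assumptions (the function names and plumbing are invented; only the ~1% retention figure comes from the quote):

    import random

    KEEP_FRACTION = 0.01                 # ~1% of image data kept for diagnostics

    def call_bases(image):
        """Stand-in for the real image-to-sequence processing step."""
        return "ACGT"

    def process_run(images, seq_file, diag_store):
        for image in images:
            seq_file.write(call_bases(image) + "\n")   # keep the sequence...
            if random.random() < KEEP_FRACTION:        # ...and a 1% image sample
                diag_store.append(image)
            # the full-resolution image is then dropped, never written to disk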

The server stack can hold data from two separate runs. "We put enough room to save one run, start it up, transfer the data out, now you're ready for a new run," says Efcavitch. With some platforms, "you can't transfer off all the data, so they save all the data on a hard disk. You have to transfer that off [and] do that image processing."

Efcavitch says Helicos intends to make its software open source. Meanwhile, his colleague Kristen Stoops, director of informatics business development, is building a "Helicos IT pipeline" for customers. "It does require some effort on [the users'] part to think about who is going to use the data, and how it needs to be moved through various levels of access and storage," she says. (See, "Helicos' IT Pipeline")

Helping users do that is a growing number of software firms and consultancies, which see an ever-broadening niche to be exploited.

Core Solutions

Todd Smith's company, Geospiza, which builds IT infrastructures for core laboratories that provide centralized genomics and proteomics services, releases its new platform for next-gen labs this month and hopes to bring its first customers online shortly thereafter.

Core labs have to recognize that data management is for their users as much as for themselves, says Smith. "Traditionally, core labs would send data or the researchers would download the sequence data, and use desktop software to analyze the data," says Smith. "Next-gen changes that. The first thing the core lab experiences is: How are we going to get the data to our researchers? How will they access the analysis tools and CPU clusters needed to consume the data we give them?"

And then there's the metadata. "Information about images, redundancy in datasets scattered in directories," says Smith. "So there's a lot of complexity within the data that people are trying to sort out."

Managing next-gen data is about collection and distribution, Smith reckons. "You have to be able to correlate the runs with different samples. One researcher might have one sample, another four samples." A next-gen instrument can "spit out a bunch of files. I need to link those files to the run and to the sample, and then make the data available to my end researcher."
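
The bookkeeping Smith describes is, at bottom, a small relational model linking runs, samples, and files. A minimal sketch (all field names invented, not FinchLab's schema):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Sample:
        sample_id: str
        researcher: str

    @dataclass
    class Run:
        run_id: str
        instrument: str
        samples: List[Sample] = field(default_factory=list)
        files: List[str] = field(default_factory=list)  # paths on the central server

    def files_for_researcher(runs, researcher):
        """Every data file a given researcher should be able to download."""
        return [f for run in runs
                if any(s.researcher == researcher for s in run.samples)
                for f in run.files]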

Geospiza's FinchLab, "Next-Gen Edition," will be delivered as a hosted service (SaaS) or as a "drop-in" operational bundle that includes a LIMS for collecting the data, a Dell server, and 7 TB of Isilon clustered storage (scalable to 1,500 TB) to make the data accessible. Seven TB may suffice for a year or so. "Once [users] get good at things and start getting creative, they start looking at 100 TB," says Smith.

The LIMS maps every step and order to a specific workflow. "You can't just park data on the instruments," says Smith. "Those files need to be moved to a central server, because the instruments have 10-13 TB storage, but when you think of all the image files to be processed, that storage is used up," says Smith. "The computers are only for the data processing, they're not part of the data management."

Smith advises moving the data to a data management system once, granting customers access through a web interface to download data if necessary. "Once the data is in the data management system, cloud computing can play a big role in moving the algorithms to the data. We believe this will be far more practical and cost effective for researchers - that's our goal," says Smith.

For Canada's GenoLogics, there is an opportunity to find new users for its Geneus lab and data management system, which is primarily deployed in gene expression and genotyping settings. James DeGreef, VP of market strategy, says that many of his customers have purchased or are purchasing next-gen systems. "We took our core system and built it out to handle all the LIMS aspects of next-gen sequencing," says DeGreef. The goal is also to provide bioinformatics capabilities for next-gen sequencing.

GenoLogics is currently developing its resource with the University of Pittsburgh core lab and the University of British Columbia Genome Sciences Center. DeGreef anticipates a full release later this year.

On a Quest

Geospiza recently joined ABI's next-generation software consortium. "We want to enable researchers to do a lot of downstream analysis," says ABI's Roger Canales, a senior program manager. "We provide [vendors] with data file formats and information about how to handle and process the data, to facilitate the development of these software tools."

Another member of the consortium is GenomeQuest. "Current IT architectures and components cannot be easily adapted to process the volume and scale of data," says president and CEO, Ron Ranauro. "The consequence is that next-generation sequencing is [otherwise] limited to a few leading organizations with the most advanced bioinformatics staffs."

Ranauro's company (see 2007 story) has spent several years developing a platform for biopharma customers to take advantage of the next-gen revolution. "Just as Google did for indexing the web, GenomeQuest assessed the existing IT components available for indexing and searching sequence data. The only way to achieve scalability and performance was to start from scratch," Ranauro says.

The GenomeQuest system manages the flow of data across large computer networks so that any one computer only operates on a small part of the search, while the central system coordinates the flow of algorithms and data and collates results. "The benefits are infinite, linear scaling of computation and fully managed data resources," says Ranauro. "We shorten the time it takes to turn next-gen sequence data into biological meaning."

The two main classes of workflows are reference-guided assemblies (or variant analysis) and all-against-all (for metagenomics and transcriptomics applications), which can be adapted to specific customer needs. "The system is open to allow scripted access or web linking," says Ranauro. "Either way, access to large scale computational and data management resources are completely virtualized. An all-against-all, metagenomics workflow takes about ten lines of code in our system."
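
Stripped of the proprietary machinery, the scatter/gather pattern Ranauro describes looks like the sketch below: shard the queries, search each shard independently, and collate centrally. This is a generic illustration, not GenomeQuest's code.

    from multiprocessing import Pool

    REFERENCE = ["ACGTACGT", "TTGACCA", "GGGCGTAC"]    # stand-in sequence database

    def search_shard(queries):
        """Each worker searches its slice of queries against the full reference."""
        return [(q, r) for q in queries for r in REFERENCE if q in r]

    if __name__ == "__main__":
        queries = ["CGTA", "TGAC", "AAAA"]
        shards = [queries[i::4] for i in range(4)]     # scatter across 4 workers
        with Pool(4) as pool:
            hits = [h for shard in pool.map(search_shard, shards) for h in shard]
        print(hits)                                    # gathered, collated results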

Team Players

The BioTeam's managing partner, Stan Gloss, sees the next-gen field becoming 50% of his business in the next six months. "The marketplace is moving that quickly," he says. (See, "Next-Generation Sequencing Solutions," Bio•IT World, October 2007). BioTeam senior consultant Michael Cariaso says next-gen instruments provide basic software processes culminating in a directory of files, but "that's pretty much where every vendor is going to leave you. The more they do, the less likely it is to fit the way your lab works with data. Two next-gen machines sitting side-by-side have no knowledge of each other, and the vendor software does little to improve that."

And so BioTeam has developed a Wiki Laboratory Information Management System (see p. 12), which resembles a mediaWiki installation. Explains Cariaso: "Every time a [sequencing] run finishes, the same way it might write to a log file, it also logs the results into the Wiki. As we do that for multiple platforms, it becomes a one-stop shop for all devices as well as the pipelines you have in house. It quickly becomes an umbrella over every other workflow process and data source within the facility."
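
The "log the run into the wiki" step could be a small post-run hook. The sketch below uses the third-party mwclient library against a stock MediaWiki install; the page naming, bot account, and fields are all assumptions rather than BioTeam's actual implementation:

    import mwclient   # third-party MediaWiki client (assumed available)

    def log_run_to_wiki(run_id, platform, stats):
        site = mwclient.Site("lims.example.org", path="/w/")
        site.login("pipeline-bot", "secret")           # hypothetical bot account
        page = site.pages[f"Runs/{run_id}"]
        body = f"== Run {run_id} ({platform}) ==\n"
        body += "\n".join(f"* {k}: {v}" for k, v in stats.items())
        page.save(body, summary="Automated post-run log")

    # Called when a run finishes, e.g.:
    # log_run_to_wiki("080401_GA2_0042", "Illumina GA",
    #                 {"reads": 40000000, "yield_Gb": 1.2, "pass_filter": "78%"})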

BioTeam has installed WikiLIMS with groups including Dick McCombie's team at Cold Spring Harbor Laboratory and Tim Read at the Naval Medical Research Center in Silver Spring, Maryland.

A researcher working on a particular project "hits one button on the wiki, [and] it says, here are all the microarrays and next-gen runs you've done for this. It will build a nice table, with quality scores, matrices, graphs... It can look up projects and look across timelines. New data arrives automatically as it is generated. This provides a centralized and up-to-date view of what's really happening in your lab."

The WikiLIMS also provides the ability to integrate CGI scripting into the wiki. For example, Cariaso says the 454 software for base calling is good, but assembling multiple runs into a consensus requires someone to work at the UNIX command line. "We can come instead and say, 'Here's your project page and a list of all the runs you've done. Check which runs you want, hit the big red button, and launch the assembly, store the results back in the Wiki.' We can figure out the workflows in that environment and make them a single-button press."

Minding the Store

The rise of next-gen sequencing applications is fueling rabid demand for new storage solutions. (See, "Isilon Insights") Illumina's Kahn says the choice "is typically up to the specific environment, how coupled it is to Linux or Windows, the price point, the amount of storage needed, and the retention policies."

BlueArc's James Reaney sees the three key issues as storage bandwidth, computational analysis, and data retention/migration policies. "The architecture must be of robust enough design to solve all three bottlenecks," says Reaney.

As the Wellcome Trust Sanger Institute and Washington University in St. Louis can attest, BlueArc's Titan network storage platform provides workflow parallelization and easy upgrades. The new Titan 3200 platform doubles the storage of its predecessor (see p. 49), with 20 Gbps of fully bi-directional, non-blocking storage bandwidth. Reaney says the bladed modular architecture uses tiered storage, which can be configured to suit the user, offering a range of solid-state, fibre channel, and SATA disk, plus tape backup.

"The Titan 3200 is easily the performance leader," says Reaney. "Combined with a modular, upgradable architecture and now a maximum capacity of four usable petabytes, the Titan platform is also the most future-proof storage platform one could have."

According to SGI's Thakkar, most of SGI's deployment is with individual sequencing labs and pharma companies, rather than genome centers. Thakkar explains: "[Pharma] buys something for a particular program or application. Extra capacity is not something they usually like to keep on hand. They don't want to incur extra overheads."

SGI works with customers in three major areas. It offers a bioinformatics appliance to accelerate genomics and proteomics applications, and high-performance compute power: hybrid computing with shared-memory systems linked to clusters, where SGI can take sequencing information offline and use its large-memory systems to compare against known databases. He says platform vendors would benefit if there were a "faster, more streamlined" way to take the data offline for high-throughput computational analysis.

But SGI's "mainstay is raw storage," says Thakkar. This could be three-tiered storage or storage to fit the needs of the data being produced. "Analysis is key to what you end up re-running: Do I go back and rerun the experiment, or do I just go ahead and save it and run another experiment? Having intuitive and efficient analysis systems are key," says Thakkar.

Thakkar anticipates more SOLiD installations this year, especially as users contemplate replacing some of their capillary instruments. He adds that "454 has done an excellent job of figuring out the entire workflow for the customer, maybe because they've been doing this for a bit longer. They've got many of the kinks figured out, especially on the storage end."

Scary Sequence

Asked about her biggest headaches, the Broad's Toby Bloom cites fault tolerance as the biggest problem. "We've always had a notion of a) storing our data forever and b) having multiple places in our pipeline we could fall back to if we lost something. If something went down, we could queue up data behind it until the component was fixed. If something got corrupted, there was always a fallback going to the previous step. We can't do that anymore... We can't even afford to keep everything backed up on tape! All of that is the scariest part of all of this."

"As fast as Moore's Law is working to support us, we're still eclipsing it in our ability to generate sequence data," says Helicos' Stoops. Efcavitch, her colleague, hopes it stays that way. "With a simple improvement to our efficiency and error rate, we'll be at 90 Mb/hr, with the same hardware, simply changing the chemistry," he says. "And by increasing the density of molecules on the surface, we'll be at 360 Mb/hr, with the same imaging hardware." All within two years.

There could be an awful lot of drenched scientists and skinned knees by then.

[The scariest (or most beautiful?) aspect of the oncoming "Dreaded DNA Data Deluge" is that the above article outlines mostly the first wave of the Tsunami - and behind it looms a second wave, orders of magnitude bigger. There is just one line above to advise that there are actually three challenges (storage bandwidth, computational analysis, and data retention/migration policies: "The architecture must be of robust enough design to solve all three bottlenecks"). While this article focuses on storage, why bother to store anything if we don't intend to actually use the genomic information? Here comes the second whammy, "computational analysis": coming to some conclusions from even a single human DNA sequence sample. There is already a lucrative market in Personal Genomics for the analysis of just 0.5 M (of the 3,200 M) data-points, e.g. by "microarray" probes. One single array takes 4 hours of "wetwork" followed by TWO DAYS of computational analysis. If the computational analysis of one human "full genome" required linearly more time, it would amount to some 35 years of computing (see the back-of-envelope sketch below) - and the scaling is not linear. So why is the challenge "beautiful" rather than "depressing"? Because the "state of the art" is presently "brute force" performed on old computing architecture. With Algorithmic Approaches implemented on Next-Generation Computing Architecture it should not take a generation to wait for the computational analysis results of a Personal Genome...

.... - pellionisz_at_junkdna.com April, 15, 2008]
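
The 35-year figure above is a naive linear extrapolation, easy to check (the comment itself cautions that the true scaling is worse than linear):

    probes_per_array = 0.5e6       # microarray data points analyzed today
    genome_data_points = 3200e6    # data points in one full human genome
    days_per_array = 2             # computational analysis time per array

    scale = genome_data_points / probes_per_array   # 6,400x more data
    days_full_genome = days_per_array * scale       # naive linear scaling
    print(days_full_genome / 365.25)                # ~35 years
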
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

New DNA sequencing strategy could be vital during disease outbreak

[Date: 2008-04-14]

EU-funded scientists have developed a strategy to rapidly identify the genetic properties of virulent strains of bacteria. Such techniques will be essential in responding effectively to an epidemic of a new strain of a disease or a bioterrorist attack.

Thanks to traditional DNA sequencing technologies, researchers have succeeded in sequencing the genomes of over 450 species of bacteria, including representative strains of all major human pathogens. However, this process is extremely slow, and in the case of an outbreak or terrorist attack, scientists need to determine the pathogen's genome as soon as possible so that they can determine which virulence genes or drug resistance genes the bacterium carries.

Recently, techniques have been developed which enable scientists to sequence an entire bacterial genome in a matter of hours. However, the finishing steps required to obtain the complete sequence are still very time consuming. In this latest study, scientists in France and Sweden investigated whether enough information to mount a response to an outbreak could be obtained by using a rapidly sequenced, incomplete genome and comparing it to existing genomes for the same species. Their results are published online by the journal Genome Research.

They tested their theory on a strain of Francisella tularensis, a highly infectious bacterium that causes a disease called tularaemia. People catch the disease from the bite of an infected tick, from handling infected animal carcasses, or from eating or drinking contaminated food or water. Symptoms include fever, chills, headaches, diarrhoea, muscle and joint pain and progressive weakness. If left untreated, it can be fatal. The scientists chose it for their study because there are strong concerns that it could be genetically manipulated for use as a biological weapon.

'In the context of an outbreak, a quick approach may help to identify immediately the genetic determinants responsible for modified virulence or transmission,' explained Dr Bernard La Scola of the University of the Mediterranean in France.

Dr La Scola and his colleagues used rapid sequencing technology to obtain the genome of a strain of F. tularensis taken from a patient suffering from tularaemia. They were able to identify a number of genes linked to virulence as well as a mutation associated with quinolone resistance. The researchers were also able to distinguish their strain from 80 other strains of F. tularensis.

'We demonstrated that this strategy was efficient to detect gene polymorphisms such as a gene modification responsible for antibiotic resistance, and loss of genetic material,' commented Dr La Scola.

According to the team, with enough researchers working on the project, the time from DNA extraction to complete genome analysis can be cut to just six weeks. Dr La Scola believes that future advances in the software used to analyse and compare genome sequences could cut this time still further.

EU support for the research came from the EU-funded EuroPathoGenomics Network of Excellence, which is funded under the 'Life sciences, genomics and biotechnology for health' thematic area of the Sixth Framework Programme (FP6).

[Abstract of Academic article].

Rapid comparative genomic analysis for clinical microbiology: The Francisella tularensis paradigm

Bernard La Scola1,4, Khalid Elkarkouri1, Wenjun Li1, Tara Wahab2, Ghislain Fournous1, Jean-Marc Rolain1, Silpak Biswas1, Michel Drancourt1, Catherine Robert1, Stéphane Audic3, Sven Löfdahl2, and Didier Raoult1,4

1Unité des Rickettsies CNRS UMR 6020, IFR 48, Faculté de Médecine de Marseille, Université de la Méditerranée, 13385 Marseille Cedex 05, France; 2Swedish Institute for Infectious Disease Control (SMI), Center for Microbiological Preparedness (KCB), 171 82 Solna, Sweden; 3Structural & Genomic Information Laboratory, CNRS UPR-2589, IBSM, Parc Scientifique de Luminy, FR-13288 Marseille Cedex 09, France

It is critical to avoid delays in detecting strain manipulations, such as the addition/deletion of a gene or modification of genes for increased virulence or antibiotic resistance, using genome analysis during an epidemic outbreak or a bioterrorist attack. Our objective was to evaluate the efficiency of genome analysis in such an emergency context by using contigs produced by pyrosequencing without time-consuming finishing processes and comparing them to available genomes for the same species. For this purpose, we analyzed a clinical isolate of Francisella tularensis subspecies holarctica (strain URFT1), a potential biological weapon, and compared the data obtained with available genomic sequences of other strains. The technique provided 1,800,530 bp of assembled sequences, resulting in 480 contigs. We found by comparative analysis with other strains that all the gaps but one in the genome sequence were caused by repeats. No new genes were found, but a deletion was detected that included three putative genes and part of a fourth gene. The set of 35 candidate LVS virulence attenuation genes was identified, as well as a DNA gyrase mutation associated with quinolone resistance. Selection for variable sequences in URFT1 allowed the design of a strain-specific, highly effective typing system that was applied to 74 strains and six clinical specimens. The analysis presented herein may be completed within approximately 6 wk, a duration compatible with that required by an urgent context. In the bioterrorism context, it allows the rapid detection of strain manipulation, including intentionally added virulence genes and genes that support antibiotic resistance.
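
The paper's core comparative step, laying unfinished contigs against a finished reference genome and asking what is missing, can be caricatured in a few lines (all coordinates invented):

    # Report reference regions not covered by any contig alignment:
    # candidate deletions or assembly gaps (in the paper, mostly repeats).
    REF_LENGTH = 1895000
    alignments = [(0, 400000), (400500, 1100000), (1103200, 1895000)]

    cursor, uncovered = 0, []
    for start, end in sorted(alignments):
        if start > cursor:
            uncovered.append((cursor, start))   # gap between consecutive contigs
        cursor = max(cursor, end)
    if cursor < REF_LENGTH:
        uncovered.append((cursor, REF_LENGTH))

    for s, e in uncovered:
        print(f"uncovered {s}-{e} ({e - s} bp): repeat or deletion candidate")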

[The bottleneck (of speed), as the article clearly states, is the SOFTWARE of genomic analysis. Six weeks is clearly unacceptable - just as in WWII, when calculating a ballistic trajectory (before the projectile hit its target) demanded rapid calculation by newfangled "computers". Biodefense (especially if the French-Swedish European researchers team up with the US Biodefense establishment) will generate the parallel computing means required for rapid enough analysis.

.... - pellionisz_at_junkdna.com April, 14, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

May We Scan Your Genome?

By Claudia Kalb - Newsweek
Apr 13, 2008 - 11:41:14 AM

As personal genetic testing takes off, some worry that marketing is getting ahead of science.

DNA is hip. At least that's what the new breed of genetic marketers would like you to believe. Last week, Navigenics, a California Web-based company, launched its $2,500 personalized DNA test—spit into a test tube and we'll tell you your risk for heart attack and other conditions—at a storefront in New York's trendy SoHo neighborhood. Computers, set against an orange and pink double helix, showed off Navigenics's Web site. Waiters circulated with pink cocktails—past a woman in a fur shrug over here, past Al Gore, a friend of the company (and partner in a firm that's invested in Navigenics), over there. Tony Bonidy, 60, from Pittsburgh, attended the much-publicized kickoff and wants to get himself and his family tested. "This is incredible," he said.

It's been 55 years since Watson and Crick defined the structure of DNA. Today, DNA is defining us. Most of the 1,100 genetic tests on the market are for rare single-gene diseases, like cystic fibrosis. But now, DNA-testing companies say they can scan our genomes and tell us our potential risk for diabetes, Alzheimer's and other common chronic conditions. Our DNA, once a mystery, is suddenly a commodity, and some two dozen businesses are competing for it in cyberspace. In January, another new company, 23andMe, handed out 1,000 free tests at the World Economic Forum in Davos, then boasted on its blog, The Spittoon, that "scholars, celebs and politicos swarmed" its booth. Featured photos: Naomi Campbell showing off her test kit, New York Times columnist Thomas Friedman spitting into his.

But what's the science behind all the hype? Direct-to-consumer (DTC) genetic-testing companies differ in their wares and the stringency of the research they rely on. DNA Direct, launched in 2005, offers diagnostic tests for individual disorders like hemochromatosis, or iron overload ($199). Most customers have family histories and want to know if they're at risk too, says CEO Ryan Phelan. Other companies sweep the genome more broadly. Navigenics scans nearly 1 million DNA snippets, then homes in on markers associated with 18 conditions, including multiple sclerosis and lupus—all of them influenced by multiple genes, many still unidentified. And then there's the entertaining stuff—23andMe ($999) and competitor deCODE Genetics ($985) provide gene tests not just for health conditions (alcohol flush reaction and lactose intolerance included), but for genetic ancestry as well.

With each new marketing push comes new criticism. Michael Watson, of the American College of Medical Genetics, says DNA testing doesn't belong in virtual clinics: "We're very concerned about the trivialization of genetics." One key issue is regulation. While the government mandates that genetic tests be performed in certified labs, not all are, and there's little to no oversight of a given test's accuracy or clinical usefulness. The individual gene variants linked to complex conditions may have only a modest effect on risk. And most DTC companies don't take lifestyle issues, like smoking, or family history into account, even though both can bump the odds up or down. There's plenty of debate, too, over the usefulness of information that can't be translated into action. Yes, you can cut out the fat and start jogging if your diabetes risk appears to be higher, but why pay hundreds of dollars to get that message?

Dr. Thomas Morgan, of the Washington University School of Medicine in St. Louis, worries that the business is getting ahead of the science. While researchers have clearly identified a chromosomal region linked to heart attack, for example, no single gene—including some being analyzed by DTC companies—stands out as the smoking gun. And undiscovered genes may turn out to be major risk factors. The result, says Morgan: "I might scare myself or reassure myself falsely based on the very limited knowledge that we have."

Market share, however, will not come to those who wait. "It's a matter of getting the field jump-started," says Dietrich Stephan, one of two scientists who founded Navigenics. The company, which offers genetic counseling, says its goal is improving health. Knowing your personal risk can lead to action, says CEO Mari Baker. Her genome scan found markers for celiac disease, an autoimmune reaction to gluten. A subsequent blood test came back positive. Now she avoids wheat, rye and barley. Most results won't lead to such clear-cut outcomes, though, and a recent report found that physicians are unprepared to deal with this wave of genetic information. Navigenics is offering a course for docs on Medscape to help fill the gap—a great way to market its product, too.

Andrew Meyer, 23, has caught the genome bug. Last December, he asked for donations on his blog, Buzzyeah.com, because he didn't have the cash. The $10s, $20s and $50s poured in. Meyer is still analyzing his 23andMe report, which he is sharing with the public. His motivation? "I'm really curious," he says. One day, no doubt, there'll be genetic tests for that, too.

[Newsweek, almost exactly half a year ago (Oct. 15) completely omitted from its "USA Edition" a masterful article on the Genome Revolution (see it in full here). The news media does not really know what to do with a "scientific/technological revolution". It sure does know, within six months, how to cover a massive marketing trend of a DTC business model.

.... - pellionisz_at_junkdna.com April, 14, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Global Biotech Competition Heats Up

Policymakers and economic development agencies are focusing intently on the biotechnology sector as an engine for economic growth as the race to secure these companies grows more intense.

By Ellen James

Introduction courtesy of Gautam Jaggi, leader, Ernst & Young’s Global Biotechnology Center and managing editor of “Beyond Borders,” Ernst & Young’s global biotechnology report.

From its earliest days, the modern biotechnology industry has concentrated in geographic clusters. The industry was born in California and the Boston area in the mid-1970s, when pioneering companies were founded to commercialize products based on scientific breakthroughs in molecular biology. Since then, the industry has spread around the world, [Since the title bears "global", one might mention that the USA leadership is challenged not only by UK, Netherlands, Germany and Switzerland in Europe, but also by China, India, Korea, Japan, Singapore and Australia - with Brazil also coming up - see in this column, AJP] but its growth has typically been clustered in certain geographic areas. In the United States, the most successful clusters have included the San Francisco Bay Area, New England, San Diego, North Carolina’s Research Triangle Park, Maryland, and Los Angeles. [USA regions were analyzed recently in this column, pointing out in particular the Houston-region, where the IT (Texas Instruments) and the medical establishment (Baylor as the largest medical compound) are integrating in an impressive manner - AJP]

Today, competition for biotechnology companies is fierce, as policymakers and economic development agencies focus intently on the sector as an engine for economic growth. Regions are trying to gain competitive advantage by identifying and aligning research, education, infrastructure, and private sector activities around particular fields.

Regions looking to replicate the success of existing clusters often start by identifying the drivers of success, which include:

Strong research. Startups created from university research frequently have been located close to their founding scientists—creating clusters around leading research universities.

Venture capital. Drug development is an expensive and time-consuming proposition, and it has taken visionary venture capitalists to recognize the commercial potential of scientific advances and provide the financial backing. Venture capital investors have historically preferred to fund companies in their own geographic region to maximize their own direct oversight and involvement.

Incentives for commercialization. Incentives that increase the propensity to commercialize can encourage product development and company formation. Measures by state university systems to streamline paperwork related to commercialization activities or increase royalty payments to inventors have been key stimuli to research, development, and commercialization.

Highly educated workforce. Beyond entrepreneurial scientists and business executives, biotechnology companies need access to highly educated and skilled workers. Clusters that have evolved in close proximity to top research universities can draw from a strong labor pool. But for less-mature clusters, the ability to provide highly skilled workers has required longer-term investments in education and training.

Specialized real estate. Life sciences research requires specialized laboratory spaces, such as “wet labs” that provide features such as special ventilation and utilities, including natural gas, oxygen, and carbon dioxide. Research on contagious pathogens requires higher-containment biosafety lab facilities. These specialized facilities are required to preserve the integrity of the research, as well as to prevent contamination of external spaces.

Tax and other incentives. Governments can facilitate private sector investment through incentives, such as R&D tax credits and net operating loss (NOL) carry forward provisions.


North Carolina: A Wealth of Biotechnology Opportunities

North Carolina has a lot to offer in the biotechnology and life sciences arena: More clinical research organizations operate in this state than in any other; more than 450 companies make it the third-largest biotechnology hub in the U.S.; and the state is home to the world’s largest and best-known research park—the 7,000-acre Research Triangle Park.

But in an increasingly competitive global marketplace, state leaders know they cannot rest on past successes. That is why they have invested in an integrated set of collaborative programs to maintain North Carolina’s competitive edge.

BPTC, the North Carolina Biomanufacturing and Pharmaceutical Training Consortium, is a statewide partnership designed to ensure that biomanufacturing companies have the skilled workers they need when they need them. BPTC is a collaborative effort founded by North Carolina’s universities and community colleges, the non-profit North Carolina Golden LEAF Foundation, the N.C. Biotechnology Center, and the industry’s North Carolina Biosciences Organization.

Under the BPTC umbrella are several educational initiatives, including:

• The Biomanufacturing Training and Education Center (BTEC) at North Carolina State University. Opened in September 2007, this facility is the largest of its kind in the nation. Its on-site and distance education programs will train as many as 2,000 students and prospective employees per year.

• The Biomanufacturing Research Institute and Training Enterprise (BRITE) Center at North Carolina Central University. This program is designed to provide bachelor’s degrees and advanced education in pharmaceutical and bioprocess development and related fields. A 52,000-square-foot facility currently under construction will provide laboratories for scholars conducting research in critical areas of biotechnology.

• The BioNetwork, the community college system’s network of campus-based education and training programs located throughout the state. The reach of these programs is particularly important because North Carolina’s biotechnology industry is no longer concentrated solely in the Research Triangle Park area.

Research Triangle Park set the standard for research parks when it opened in the 1950s. It remains a world leader, home to well-known names such as GlaxoSmithKline, Bayer, and Biogen Idec. But today, North Carolina’s 54,000 life sciences workers are finding employment throughout the state.

Some examples:

• The former mill town of Kannapolis, NC (near Charlotte) has staked its future on a $1 billion biotechnology research campus backed by Dole Foods Co. owner David Murdock in partnership with Duke University and the University of North Carolina. The campus will focus on agriculture, nutrition, and health research.

• From bioagricultural companies involved in pest control to innovative nanotechnology advancements, biotechnology and life sciences companies in the Piedmont Triad region generate more than $4 billion in revenue annually.

• The N.C. Biotechnology Center, the first state-sponsored biotech center when it opened in Research Triangle Park in the 1980s, now has regional offices in Greenville and Wilmington to the east, Winston-Salem in the state’s central region, Charlotte to the south, and Asheville in the western mountains.

Both the new training initiatives and the regional offices resulted from a strategic plan entitled New Jobs Across North Carolina. Recognizing that vision and risk-taking must shape smart and practical decisions, Governor Mike Easley, in 2003, charged a committee of 120 experts with developing a long-term strategic plan to help guide future state investments in biotechnology. The group, led by former governors Jim Hunt and Jim Martin, identified 54 recommendations for building the industry statewide—with the goal of employing 125,000 North Carolinians in biotechnology-related jobs by 2023.

“My advice to you is to be aggressive and be bold,” Gov. Easley told the steering committee when it began its work. “Let’s just keep going forward.”

North Carolina’s biotechnology industry continues to do just that.

Central Kentucky’s Leader in Biotechnology

Biotechnology is one of the fastest-growing industries in the United States, and Lexington, KY is committed to fueling the industry's growth in the area. Many such companies are based in Central Kentucky—the majority of them in Lexington—and these research-intensive firms have a tremendous impact on the economy. Calling Central Kentucky home are Alltech Biotechnology, Martek Biosciences Corporation, Intranasal Therapeutics, and a host of others.

Commerce Lexington Inc. and the Greater Lexington Chamber of Commerce partner with the University of Kentucky (UK) and the Kentucky Cabinet for Economic Development’s Department of Commercialization and Innovation (DCI) to grow, recruit, and retain biotechnology companies. Under the leadership of UK president Dr. Lee T. Todd, the university has set a goal of becoming a top-20 research university. Providing a supportive environment for professors and researchers to start companies is a key mechanism for attaining this goal.

To further the progress of assisting entrepreneurs, Commerce Lexington, UK, and Lexington Fayette Urban County Government formed the Bluegrass Business Development Partnership (BBDP). The BBDP’s goal is to be a one-stop, super-service provider, linking entrepreneurs with the information they need to be successful, including assistance in financial planning, business plans, funding sources, real estate, and service providers. The BBDP also designed a new Web site to assist entrepreneurs, www.thinkbluegrass.com.

Lexington is home to the only research and development business park in the state of Kentucky—UK’s Coldstream Research Campus. Coldstream, a 735-acre office park, was specifically designed for recruiting high-tech and biotechnology companies, as well as university centers and start-ups. Coldstream offers intellectual capital and resources from UK, as well as infrastructure for existing and new companies.

Through DCI, Kentucky created the first SBIR/STTR grant match program in the country. This innovative initiative has attracted the attention of many new biotechnology firms, such as Transposagen Biopharmaceuticals, which recently opened a new office in downtown Lexington. The company, which provides unique animal models of human diseases for drug discovery and development, selected Lexington because of the SBIR/STTR match program and the city’s vibrant academic community. Transposagen expects to create 15 high-paying jobs within five years.

The Lexington community offers a wide variety of advantages to companies—including a strong, diverse, and educated workforce. 2005 Census data ranked the city the eighth most educated in the nation: 35.6% of the population 25 years or older holds at least a bachelor’s degree. This is due in part to the city’s location within 30 miles of 12 institutions of higher education, which together enroll over 62,000 students and graduate close to 10,500 annually.

For more information on what Lexington offers, contact Gina Greathouse, senior vice president of economic development, Commerce Lexington Inc., at 859-225-5005 or visit www.commercelexington.com.

Roanoke Valley: Virginia’s Rising Biotech Star

The Roanoke Valley of Virginia is making a name for itself in the life sciences sector. New ideas and products are always coming out of this nine-government region in western Virginia, and in the next few years, so will new doctors.

Construction will soon begin on a new medical school, a joint venture between the Carilion Clinic and nearby Virginia Tech. The school will welcome its first class in 2010. Another Carilion/Virginia Tech collaboration is a research institute, which will be housed in the same building and will focus on translating medical research into real-world applications for patient care. The two operations will be located adjacent to Carilion’s flagship hospital, Roanoke Memorial, just south of the city of Roanoke’s downtown business district.

Several years ago, Carilion, the region's largest healthcare company and employer, and the city of Roanoke began redeveloping the area into a planned biomedical park. An office building, which houses the Carilion Biomedical Institute and several life sciences-related firms, has been completed, and work has begun on the clinic offices. The new activity is a natural fit for the region. Already home to the Carilion Clinic, two Hospital Corporation of America hospitals, and a large Veterans Administration facility, the Roanoke Valley has long served as the hub of first-rate medical care for the broader region. Recent plans have expanded that role beyond traditional patient care and treatment to include research and education components.

The new medical school is patterned after Harvard Medical School’s Health Sciences and Technology program and Cleveland Clinic’s Lerner College of Medicine; it will have a small class size and be dedicated to training physician researchers. Class size is projected to be 40 students per year. In addition to a traditional medical school curriculum, all students will receive training in research methods, conduct original research, and write a thesis as a condition of graduation. To accommodate the expanded graduation requirements, the school will have a five-year curriculum instead of the traditional four-year curriculum.

An earlier Carilion partnership with Virginia Tech and the University of Virginia produced the Carilion Biomedical Institute, which aims to bring life sciences research to commercial applications. CBI projects include research into the use of chip technology to eliminate the need for fluorescent dyes in diagnostic tests; magnetic targeting devices that can locate screw holes in the intramedullary nails used to repair long-bone fractures; and a new generation of technologies that target eye disease. Virginia Tech, located just 45 minutes southwest of Roanoke’s downtown business district, provides an important resource for the region’s life sciences sector. In addition to its partnerships with Carilion, the university’s biosciences researchers are consulted worldwide for expertise on plant, animal, and microbial genomics, as well as biotechnology applications. In fact, Virginia Tech faculty members in the biological sciences represent the largest concentration of non-medical biology expertise in Virginia.

The proximity to Virginia Tech—and other nearby institutions of higher learning—is one of the factors that contribute to the talented workforce in the area, according to Sam English, founder of the Center for Innovation & Entrepreneurship, a Roanoke-based organization that promotes innovation and assists entrepreneurs. He also cites the cost-effectiveness of doing business in the valley.

Kansas’ Biotech Powerhouse

Manhattan, KS, home of Kansas State University (KSU), is increasing its already formidable presence in animal health, food safety, food security, biosecurity research, and engineering. Manhattan was recently named one of five finalists for the $450 million National Bio and Agro-Defense Facility by the Department of Homeland Security. KSU presently has 200 scientists/researchers who work in the area of agri-bioscience, and faculty members generated more than $114 million in outside funding in the 2007 fiscal year.

Manhattan is also part of the “Kansas City Animal Health Corridor” that extends east into Missouri. Nearly one-third of the $15 billion global market in animal care services and products is generated there, and the corridor includes more than 120 animal health organizations. KSU is an acknowledged leader in the development of animal vaccines, such as those that combat West Nile virus. This base of technologies promises to grow even more under the Kansas Economic Growth Act (KEGA), which will continue strong funding support for bioscience research at KSU.

Major bioscience and research initiatives in Manhattan include:

• The Biosecurity Research Institute, a $55 million-plus federal project ensuring protection of the nation’s food supply. The institute received $1.5 million from the Kansas Bioscience Authority to add high-end video capabilities to its educational infrastructure.

• The KSU Grain Science Complex, which is the first professional institute in the United States to provide technical training in support of market development activities for U.S. grains and soybeans. Staff there teach agricultural and business courses to students from around the globe.

• The National Institute For Strategic Technology Acquisition and Commercialization, which is a not-for-profit organization focusing on technology transfer and commercialization of over 1,100 patents donated by Fortune 500 and technology companies.

• The Bioprocessing and Industrial Value Added Program, which fosters research focused on turning Kansas crops into value-added products and testing cutting-edge processes used to produce new grain-based food and non-food products. This includes everything from biodegradable shell casings to disposable knives and forks made out of grain.

• The Grain Marketing and Production Research Center, which is the USDA’s main facility for conducting research on measuring and controlling the quality of cereal grains throughout the grain industry. Major initiatives include study of wind erosion, ensuring grain quality and safety, and control of insect pests in the food supply.

• The Terry C. Johnson Center for Basic Cancer Research, which is located at KSU, continuously advances research and enhances graduate and undergraduate education and training programs. It also promotes public awareness and prevention of cancers through community outreach.

Several biotechnology-related companies have already made a home in Manhattan, such as Nanoscale, a company that develops and commercializes NanoActive materials, and NutriJoy, Inc., a company based upon donated technologies.

In December 2007, General Electric Aviation announced the selection of KSU for the location of a University Development Center. Work performed at the site will include software development, verification and validation, mechanical design, and hardware design, eventually resulting in the creation of over 40 engineering jobs.

Manhattan represents the best of all worlds: top research facilities, a dynamic college atmosphere, and an appealing living environment that is winning notice all over the country.

Iowa: A Growing Bioeconomy

In Iowa, bioscience critical mass and raw biomass are abundant, and scientists and researchers find the support they need to develop the complex solutions the world is seeking. The state’s bioeconomy is fueled by rapid growth in the fields of renewable energy and plant, animal, and human genomics.

Through the research strength of Iowa’s world-class universities, pro-business environment, and enviable quality of life, an increasing number of businesses, researchers, and scientists are finding the state of Iowa to be life-changing.

The proof is in the rankings. Iowa ranks first in the nation in biotechnology, according to Business Facilities, a position earned through Iowa’s advancements in the agricultural feedstock and chemicals category. And according to a 2004 Battelle Memorial Institute report, Iowa’s bioscience strengths are in bioeconomy, advanced food and feed, animal systems, integrated post-genomic medicine, integrated drug discovery, development and production, integrated biosecurity, and biomedical imaging.

Iowa’s prominence in bioscience is bolstered by three funding streams that allow companies to foster innovation and grow their workforces:

• The Iowa Values Fund, a $500 million business incentive fund, is set aside to create more bioscience companies and careers in Iowa.

• The $2.5 million Commercialization Fund is used to help small- to medium-sized Iowa companies in industries such as bioscience commercialize their innovations, fostering competitive, profitable companies.

• The Iowa Power Fund is a $100 million fund designed to promote research and development, knowledge transfer, technology innovation, and the improvement of economic competitiveness as it relates to the effort to produce renewable energy and improve energy efficiency.

This funding, coupled with a commitment from state universities and community colleges, has pushed Iowa to the forefront of the bioscience revolution. For example, the University of Iowa in Iowa City is currently constructing the BioVentures Center, a research park dedicated solely to the advancement of the biosciences. This is in addition to the Oakdale Research Center (which has a number of biotechnology company partners), the Center for Advanced Drug Development, and the Division of Pharmaceutical Services. The university also hosts the Center for Biocatalysis and Bioprocessing on campus.

In Ames, researchers at Iowa State University work in the Center for Crop Utilization Research, the Plant Science Institute, the Iowa Biologics Facility, and the Center for Plant Genomics. At the community college level, the Iowa Department of Economic Development’s Industrial New Jobs Training Program provides no-cost or reduced-cost job-training services to new employees of eligible businesses through Iowa’s 15 community colleges.

All of this and more make Iowa an exciting place for over 1,800 bioscience establishments to do business. The state has defined itself as a serious contender in bioscience through its commitment to partnerships among government, education/research, and business. Bioscience business leaders, researchers, and government officials are working together to bring products to market and establish an open environment for new and ongoing research.

The result is a state strong in all aspects of the bioscience industries. Iowa is the epicenter of the bioscience revolution. For more information on how your company can join, visit www.iowalifechanging.com.

Mission Possible: How Austin Is Becoming a Biotech Hub

The Austin, TX area offers biotechnology innovators and entrepreneurs a unique and attractive combination of attributes that has already brought in some of the nation’s leading biotechnology and life sciences companies.

The career of Dr. Matt Winkler shows how it all comes together. Dr. Winkler came from the University of Texas and started Ambion, which is a market leader in the development and supply of innovative RNA-based life sciences research and molecular biology products. The company grew to 400 employees before Dr. Winkler sold it to Applied Biosystems, another technology leader in the life sciences marketplace, which still has a significant presence in the Austin area.

Dr. Winkler then started a new company, Asuragen, which leverages its RNA and miRNA expertise into molecular diagnostics, pharmacogenomic services, and therapeutics, and now has 130 employees. Asuragen recently spun out another new company, Mirna Therapeutics, which works with microRNA to find new ways to fight diseases like cancer. The creation of these three companies reflects the powerful combination of entrepreneurship and innovation that Austin offers. By uniting research universities, venture capital, a talented local workforce, and community leadership dedicated to fostering successful business development, Austin has made itself an ideal choice to grow a biotechnology business.

In February 2007, the Austin Chamber of Commerce, which represents five counties with a population of over 1.5 million residents, formed the BioAustin Council to coordinate the growth of the Austin biocluster. Bruce Leander, chairman of the BioAustin Council and former president of Ambion, describes the Council’s mission: “The Austin region is fully committed to the growth of the bio/life sciences industry cluster. Through the BioAustin Council, local life sciences companies will have more opportunities than in the past to network and collaborate.”

Everything in biotechnology starts with innovation, new ideas, and the resources to explore them. Austin has earned its reputation as a center for innovation thanks in large part to the world-class University of Texas, home of the well-regarded College of Biomedical Engineering and its nationally ranked College of Pharmacy, along with over 100 research units in areas such as biochemistry, biological and medical engineering, and nanotechnology. Research being done at the University of Texas and SEMATECH (the semiconductor research consortium) is exploring the crossover between nanotechnology and biological applications. Overall, there are over 20 major colleges within 200 miles of Austin with a total enrollment in excess of 250,000 students. This provides an almost endless pool of young, well-educated talent and a fertile source of new ideas. It’s no accident that Austin inventors have been assigned patents at a rate that has outstripped other metropolitan areas during the past five years—a fact that led the Wall Street Journal to rank Austin one of the most inventive cities in the U.S.

Developing ideas, however, is not enough—a biotechnology cluster needs the funding, expertise, and infrastructure to get ideas out of the university and into the marketplace. Austin is one of the top regions in the country for venture capital investment; venture capitalists poured over $650 million into Austin during 2007, and on a per capita basis, only Boston, San Francisco, and San Jose take in more venture capital investment than Austin. There are already six venture capital companies in Austin providing funding for life sciences and biotechnology. The Austin Technology Incubator, one of the top-ranked incubators in the country, provides vertical integration, talent, and expertise. Similarly, the Texas Life-sciences Commercialization Center in Georgetown, TX, 30 miles north of Austin, provides wet labs, clean rooms, and shared business resources for emerging technology companies engaged in biotechnology, life sciences, and nanotechnology business. These incubators provide physical space and assistance with getting products to market.

Most biotechnology clusters in the United States are located along the east or west coasts; Austin, the only one in the Southwest, is strategically located in the center of the country and sits astride I-35, a major route for U.S./Mexico/Canada trade. Austin offers considerably more affordable living than other major biotechnology centers, with a median home price 16% less than the national average, and one of the lowest state and local tax burdens in the nation, making it an attractive location in which to work and live. “When you get right to it, Ambion is in Austin because it’s easy to recruit scientists to live and work here,” says Dr. Winkler.

Among the 100 biotechnology and life sciences companies that already call the Austin area home is Luminex, which develops, manufactures, and markets proprietary biological testing technologies; the company currently has 250 employees. Hospira, a world leader in generic injectable pharmaceuticals, is Austin’s largest biotechnology employer, with 1,350 employees. Running a close second, PPD, a contract research organization, employs 1,300 people.

By bringing together world-class innovators and entrepreneurs in a business-friendly environment, Austin is well on its way to becoming a leading hub for biotechnology research and development.

San Antonio’s Thriving Biomedical Industry

The bioscience-healthcare industry is San Antonio, TX’s number one economic generator, with an annual economic impact of more than $15 billion and 113,000 employees, according to a recent healthcare and bioscience economic impact study commissioned by the Greater San Antonio Chamber of Commerce. One out of every seven San Antonio employees works in this sector, with its cutting-edge research, world-renowned educational institutions, nationally recognized health care systems, and leading biotechnology companies.

In 2006, the economic impact of the city’s bioscience-healthcare research sector was over $604 million. Much of the largest-scale research was conducted by the Cancer Therapy & Research Center, the Southwest Foundation for Biomedical Research, and the University of Texas Health Science Center at San Antonio—one of the top 30 universities for research funding.

San Antonio’s Texas Research Park has been named one of the five finalists for a proposed new U.S. Department of Homeland Security (DHS) $450 million National Bio and Agro-Defense Facility to conduct research on ways to protect the public and our agricultural system from animal diseases, including some that could be used for biological terror attacks. DHS is expected to select the site by late 2008.

The city has the world’s largest Phase I clinical trials program for new anti-cancer drugs, a $200 million Children’s Cancer Research Institute, the world’s largest genomics computing cluster, the nation’s only privately owned biosafety level four (BSL-4) maximum containment laboratory, and the Southwest Research Institute—one of the nation’s largest non-profit, independent research and development organizations.

According to the San Antonio Medical Foundation, 44 medical-related institutions are based in the 900-acre South Texas Medical Center (STMC), with combined annual budgets, including research, totaling almost $3 billion. STMC has approximately 27,000 employees working in medical-related activities, and another 29,000 people in other jobs. The STMC also has 300 acres available for future expansion. Capital improvements over the next five years will total $640 million. The center is recognized worldwide for the impact of its research, patient diagnosis, treatment and rehabilitation, degree programs, continuing education, and state-of-the-art physical structures.

San Antonio is also home to the Texas Cord Blood Bank (TCBB), the state’s first public bank for stem cell-rich umbilical cord blood. TCBB expanded its collection of cord blood to seven hospitals statewide, with plans to announce additional sites in 2008.

In late 2006, pharmaceutical manufacturer DPT Laboratories Ltd. opened a 258,000-square-foot complex at San Antonio’s newest technology park, Brooks City-Base. The $24 million expansion provides additional research space to the company and serves as a worldwide distribution site.

The Department of Defense (DoD) is transforming San Antonio’s Fort Sam Houston into the U.S. hub for all military medical training and research. The city already boasts the new Center for the Intrepid, a world-class rehabilitation center at Brooke Army Medical Center, and in January 2008, military officials broke ground on the new $92 million Battlefield Health and Trauma Center for all combat casualty care and trauma research missions. That and other DoD initiatives will add thousands of new personnel to the local payroll and more than $2 billion in new construction and renovations over the next few years.

San Antonio’s rich mixture of research, education, and private companies makes it one of the nation’s leaders in this sector, as well as an appealing place for bioscience/healthcare professionals to pursue careers and for companies to do business.

Strongsville: The Road to Biotech Success

All roads in Strongsville, OH point to unprecedented growth—expansions and relocations of businesses, a growing population, and a city government whose philosophy is to make the process as easy as possible.

“My job is to help build this community to its maximum potential,” says Mayor Thomas Perciak. “Collectively, we can get far more done than if I was on a one-man journey.”

Strongsville has been on an ambitious development avenue the last few years. In 2001, Strongsville purchased 182 undeveloped acres from Figgie International. The city then received a $500,000 grant from the Ohio Department of Development for the development of the first 43 acres and to offset construction costs for a 1,500-foot extension of Foltz Parkway, which enables access to the land. The land is in the Strongsville Business Park, the largest of four industrial parks in the city. Combined, the industrial parks host 182 businesses (from smaller ones such as Roscoe Medical Inc. to large corporations such as ICI Paints), employing more than 9,000 people.

“It was bought with development in mind,” says Gene Magocky, economic development director for Strongsville. “For us to keep attracting business, we need to plan ahead and ensure there is sufficient developable land for the future.”

One industry that the city has been working hard to attract is biotechnology. The area offers many biotechnology amenities that can help these types of companies meet their maximum potential. One such asset is the Cleveland Clinic’s Lerner Research Institute. Located 20 miles from Strongsville, the Lerner Research Institute’s Department of Biomedical Engineering is committed to investigation, innovation, and the translation of scientific discoveries into practical applications. By providing a forum in which engineers, scientists, and physicians can interact, the department plays a key role in advancing its mission to promote excellence in research, education, and patient care. The department’s role in the larger scientific and medical communities is exemplified by the significant research funding the department receives from the National Institutes of Health and other agencies, and by its presence in peer reviewed literature. The Lerner Research Institute and the Cleveland Clinic as a whole are recognized as leaders in the biotechnology field across the world, having received numerous patents and awards for their work.

Case Western Reserve, located 24 miles from Strongsville, offers the Entrepreneurial Biotechnology (EB) degree, which is an 18-month master’s degree offered by the biology department in collaboration with the Case School of Medicine, an outstanding biomedical research center that currently ranks 12th among the nation’s 122 medical schools in NIH research funding. The program provides studies in state-of-the-art biotechnology and scientific innovation, practical business instruction, and real-world experience in innovation and/or entrepreneurship to individuals with a bachelor’s, master’s, or Ph.D. in a biology-related field. The result: cutting-edge scientists who are empowered to innovate, commercialize technology, and develop new businesses, either as entrepreneurs creating and growing new companies or “intrapreneurs” working within established companies and organizations.

In 2002, Cleveland State University, located 20 miles from Strongsville, launched a program to help prepare students for careers in biotechnology. The Biotechnology Certificate Program is available to any CSU student, but especially those who are working toward or have a bachelor’s in business, engineering, biology, or chemistry. It’s also available to college students enrolled in other universities and to college graduates.

Assets such as these, combined with Strongsville’s pro-business attitude, make the city a prime destination for biotechnology companies. For more information on how the city of Strongsville can get your company on the road to biotechnology success, contact Gene Magocky, economic development director for the city, at 440-580-3117 or gene.magocky@strongsville.org.

Vineland: A Growing Biotech Hub

Southeast of Philadelphia, Vineland, NJ is strategically located between the New York City and Baltimore/Washington, DC markets. The opening of the new South Jersey Healthcare Regional Hospital in August 2004, supplemented by a new rehabilitation hospital, two new kidney dialysis centers, and new MRI service centers, has attracted many new physicians to the city and has helped to bolster the city’s thriving biotechnology industry.

Vineland’s central South Jersey location and competitive financial incentive programs are responsible for a growing body of interest in the area.

One such financial incentive program is the Fund for Economic Development. This loan program offers everything that expanding industries and businesses need to support their growth, including long terms (a maximum of 20 years), a fixed 5% interest rate, no prepayment penalties, a one-time loan-servicing fee, and flexible payment schedules. First- and second-lien programs are also available. A minimum workforce of 25 full-time employees and an average annual salary of $30,000 plus benefits are required; loan requests of up to $15 million will be considered.
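
To put those terms in perspective, the standard amortization formula gives the monthly payment implied by the program's stated numbers. The sketch below (Python) is illustrative only: the $15 million principal is simply the program's stated maximum request, and the fully-amortizing assumption is ours, since the article does not specify the repayment structure.

    # Illustrative only: monthly payment on a fixed-rate, fully amortizing loan
    # under the Fund's stated terms (5% fixed rate, 20-year maximum term).
    # The $15M principal is the program's stated maximum loan request.

    def monthly_payment(principal, annual_rate, years):
        # Standard amortization: P * r / (1 - (1 + r)**-n), where r is the
        # monthly rate and n the total number of monthly payments.
        r = annual_rate / 12
        n = years * 12
        return principal * r / (1 - (1 + r) ** -n)

    print(f"${monthly_payment(15_000_000, 0.05, 20):,.0f} per month")  # ~$99,000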

Because of this program, Vineland has attracted the attention of companies worldwide. Since the program's inception in 1990, the city has closed over 400 loans, totaling $110 million. The experience and confidence gained from these successes have enabled the city to offer an even more aggressive loan program today.

To be eligible for this program, proposed facilities are required to locate in the Vineland Urban Enterprise Zone. The city of Vineland’s Economic Development Web site, www.vinelandbusiness.com, details all the benefits of locating a business in the zone.

The program is funded by the New Jersey Urban Enterprise Zone Authority, which allocates the retail sales tax revenues collected in the Vineland Zone to a Zone Assistance Fund for economic development projects in the Vineland Urban Enterprise Zone. The Urban Enterprise Zone designation for Vineland ends in October 2018, but Vineland’s Fund for Economic Development will not disappear. It will become a permanent fixture for the city of Vineland to use for its diverse business community.

[What is the location of "biotech boom", globally? The simple answer is "everywhere with imagination".

.... - pellionisz_at_junkdna.com April, 13, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Al Gore Helps Navigenics Launch Personal Genomics Service

By Kevin Davies
Bio-IT World

April 9, 2008 | NEW YORK – With apologies to fans of Maury Povich and Connie Chung, former Vice President Al Gore’s surprise appearance was the undoubted highlight of the Tuesday evening reception that marked the official launch of Navigenics’ new personal genomics service.

Gore told the 150 assembled executives, scientists, investors, press and other guests celebrating the public release of Navigenics’ Health Compass that he had both personal and professional ties to the company. Last November, Gore became a partner with Kleiner Perkins, the venerable Bay Area venture capital firm that is the lead investor in the company.

He also called co-founder David Agus, director of the Spielberg Family Center for Proteomics in Los Angeles, “a long time friend,” and “a genius in oncology, and a miracle worker.”

“This is a great firm,” Gore said in his brief impromptu remarks (minus Powerpoint slides). “In my opinion, they’ve got the ethics and the culture and the values right,” Gore continued. “On all these new genetic breakthroughs, there is always some resistance culturally, and then, when there’s an evaluation of the inherent value, if the ethics are right, if the surrounding culture is right, then it just breaks through. I think this company [Navigenics] has the culture right… and I think it’s going to be a fantastic success.”

“The time is right,” said Mari Baker, Navigenics CEO, for “a change in healthcare in this country.” She said the cost of doing a genetic scan has finally reached the point that services can be offered to consumers. “Our team has gone through the literature, found enough conditions that meet our stringent criteria, to give people valuable data that they can use tomorrow to improve their health,” said Baker.

Using Affymetrix microarrays and its CLIA-certified facilities, Navigenics scans customer DNA samples for gene alterations, or SNPs (single nucleotide polymorphisms), that are known to be associated with common diseases. There are 1.8 million probes on the latest Affymetrix chip, although for now Navigenics extracts information from just a few dozen of them. Among the first 18 conditions for which Navigenics provides personalized information are Alzheimer’s disease, obesity, and diabetes.
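
To make the "few dozen out of 1.8 million" point concrete, here is a minimal sketch (Python) of how a service of this kind might reduce a full array readout to a curated panel. All marker IDs, genotype calls, and condition mappings are hypothetical placeholders; Navigenics' actual panel is proprietary and not described in the article.

    # Minimal sketch with hypothetical data: filter a full SNP-array readout
    # down to a small panel of vetted, condition-associated markers.

    full_readout = {  # marker ID -> genotype call; ~1.8 million entries in practice
        "rs0000001": "AG",
        "rs0000002": "CC",
        "rs0000003": "TT",
    }

    vetted_panel = {  # only these markers are reported to the customer
        "rs0000001": "type 2 diabetes",
        "rs0000003": "heart attack",
    }

    report = {
        marker: {"genotype": call, "condition": vetted_panel[marker]}
        for marker, call in full_readout.items()
        if marker in vetted_panel
    }
    print(report)

Because the full readout is retained, markers can be promoted into the vetted panel later without re-genotyping the customer, which is the design choice behind the re-testing model described in the Wired piece below.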

In contrast to other personal genomics services launched last year by 23andMe and Iceland’s DeCodeMe, Navigenics does not offer ancestry information or genome-comparison tools, nor does it report on so-called “non-actionable” conditions such as Lou Gehrig’s disease.

Navigenics’ focus is squarely on using personal genetic data to calculate an individual’s projected lifetime risk for a particular condition. The company has genetic counselors on staff available for individual client consultations. Navigenics has also partnered with MedScape to begin the daunting challenge of educating physicians about the complexities and possibilities of personal genomics.

Agus joined fellow co-founder Dietrich Stephan and board members John Doerr and Brook Byers in a panel discussion, the first in a two-week series of New York events to raise the company’s profile among various constituencies, especially the medical community. Aside from Gore’s unexpected appearance, the biggest surprise of the evening was perhaps a candid testimonial from Agus about the effectiveness of the Health Compass. He noted that one early beta tester was prompted to undergo an early colonoscopy based on her Health Compass results, which subsequently revealed a 1.5-inch polyp.

But Agus admitted that his own genetic scan has served as a wake-up call. Whereas the average lifetime risk of heart attack for U.S. males is around 40 percent, Agus revealed: “I had an 82 percent chance of getting a heart attack, and I had normal cholesterol. So based on that, I’m on Lipitor, I exercise … and I’m reducing my risk. But that 82 percent hit home.”

Navigenics offered its saliva kits for sale at the event – with one notable disclaimer. A notice next to the kits said that residents of New York State might have to wait for their results, pending approval from the state’s Department of Health, which is required to authorize genetic tests.

[This coverage of the Navigenics Opening is brimming with information. The "blessing" from Al Gore, acknowledged spokesman for treating the Earth (and ourselves) kindly, might help dispel unfriendly attitudes towards the genome revolution - albeit he admittedly holds a direct interest in the venture. CEO Mari Baker's statement is also remarkable: that "Personal Genomics" amounts to a significant betterment of healthcare. Particularly important is Navigenics' coinage of "Actionable Genomic Conditions" (DNA "glitches" that one can actually do something about, so the information is not meant to scare anyone). This is illustrated by two very convincing examples, one of them very personal to the management.

.... - pellionisz_at_junkdna.com April, 10, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^



Navigenics Debuts Gene Dx Service, Allies With Mayo to Study How Patients Use Data

GenomeWeb
[April 9, 2008]

By Turna Ray

Navigenics this week officially launched its genetic-testing service, and in hopes of shielding itself against recent criticism of the direct-to-consumer [DTC - comment by AJP] model, has disclosed an alliance with the Mayo Clinic to study how patients understand and use genetic risk information.

Additionally, Navigenics said it would work with the Personalized Medicine Coalition to develop industry standards for consumer genomic-testing services. This move, in turn, strategically positions Navigenics as a proponent of “responsible” DTC marketing.

In the second half of last year, at least three consumer genomics services were launched, including 23andMe, DeCode Genetics’ DeCodeMe service, and Knome, all of them attempting to create a new business model that sells individuals information about their genes. These companies say their services could help customers better predict their predisposition to diseases.

However, there are many in academia, industry, and government who believe that, given the evolving regulatory infrastructure and the current state of the science for genetic tests, marketing genotyping services straight to the general public is irresponsible and possibly dangerous.

"In this emerging field it is absolutely critical that the entire industry remain rooted in high-quality science and research; this is exactly what we do and what our customers, collaborators, and advisors expect from Navigenics," Navigenics CEO Mari Baker said in a statement. “We believe this research collaboration with ... Mayo Clinic, will help the entire industry evolve responsibly, provide the necessary tools to educate clinicians, and ultimately improve people's lives."

The study, entitled “A Proof of Principle Trial of Communication to Patients Receiving Predictive Genetic Risk Assessment,” will examine how patients and doctors understand and use information provided by Navigenics’ genetic risk-assessment service. The study, slated to begin this month and end in September 2009, is jointly funded by Navigenics and the Mayo Clinic.

With regard to industry standards for DTC genetic-testing services, Navigenics said it plans to solicit input from multiple stakeholders on subjects related to DTC genomic testing at an undisclosed conference later this year.

The Service

Navigenics' SNP-genotyping service, which costs $2,500, will compete with DeCodeMe’s and 23andMe's $1,000 services. Meanwhile, Knome's whole-genome sequencing service has a $350,000 price tag.

Some of the disease risk factors for which Navigenics will be testing include Alzheimer's disease, heart attack, multiple sclerosis, Crohn's disease, colon cancer, prostate cancer, and breast cancer.

According to Navigenics co-founder Dietrich Stephan, under the Navigenics Health Compass program — which the company claims is the only personal genetic risk-assessment service that provides on-staff certified genetic counselors — patients will work with their physician to order a screen and fill in their healthcare information on Navigenics' website. Once the test kit arrives, individuals would provide Navigenics with a saliva sample from which DNA is extracted and tested using Affymetrix arrays.

The company will evaluate an individual’s genetic profile based on information in the scientific literature. “The service calculates genetic predisposition based on genome-wide association studies that combine a variety of epidemiological variables,” the company said in a statement. “Navigenics scours the more than 4,000 published studies correlating genes to medical conditions and other common human traits to include only those which present high-quality and reliable results.”

Navigenics will make available results from the genetic tests online within two weeks on an encrypted, password-protected site, and update customers’ profiles as new information about the conditions they were tested for becomes available.

The Detractors

Although some support Navigenics’ claim to want to better understand the role of genetic information in healthcare and society through its collaborative study with Mayo, others remain concerned about directly marketing genetic tests to consumers.

In an article published in Science last week, researchers from Johns Hopkins University's Genetics and Public Policy Center surveyed companies offering DTC CYP450 testing to guide treatment with SSRI antidepressants, even though a government working group recently discouraged the use of such tests due to a lack of evidence. The researchers found that as many as 15 companies market CYP450 tests to consumers, and they identified four firms that make either explicit or implicit scientific claims about the value of CYP450 testing in making dosing or prescribing decisions for SSRIs (see article, this issue).

Chief among the concerns about such DTC genetic testing services is that genetic information, provided directly to patients who may not fully understand it, may confound the doctor/patient relationship.

Although Navigenics' Stephan said the company does not plan to “gratuitously advertise” its services, Navigenics does plan to work with physicians “to add this to their diagnostic menu.

“That's the real goal,” Stephan said.

Another concern with such services is how the genetic information might be used by insurers and employers — particularly since the Genetic Information Nondiscrimination Act, which is designed to prohibit abuses of such data, has yet to be passed.

Furthermore, some believe that companies might sell the genetic data to drug makers that need such data to conduct genetically targeted clinical studies.

As reported in Pharmacogenomics Reporter sister publication GenomeWeb Daily News, the fact that 23andMe had received undisclosed funds from Genentech led some to speculate that the genetic information its service yields could end up being used in research and development efforts.

During a webcast last year, 23andMe co-founder Linda Avey admitted as much: “We do plan to use the data that we’re collecting on our customers to improve our services ... [and] be involved with research and with the outside research community.”

For its part, Navigenics has not discussed whether it plans to use its customers' genetic information in clinical studies with drug makers.

Another challenge for DTC gene-test service providers is proving that their services are clinically useful and that public health is actually improved by having this data. Indeed, an ongoing study by researchers at King's College in the UK suggests that genetic-risk information alone does not inspire preventive action by individuals [see PGx Reporter 03-26-2008].

According to Navigenics’ Stephan, one of the main focuses at the company for the past two years has been looking at how patients use genetic-risk information. Before launching its service, Navigenics conducted several hundred screens and followed up with surveys to find that 46 percent of genotyped individuals have “done something” after three months.

“Things they have done have varied from going to their physician to being diagnosed with that particular disease, to behavior and lifestyle modifications,” Stephan said. “We don't know if they stick with it, but as an upfront snapshot that's a lot better than when we go to the doctor and the doctor tells us, 'You gotta lose 10 pounds,' and we walk out and we live happily ever after.”

According to Stephan, the “Holy Grail” for Navigenics would be to conduct a study to see if genetic risk information makes an impact on a population-wide basis. In this regard, Navigenics is in the midst of planning a population-wide prospective study called iPREVENT, which aims to follow millions of customers over decades to see whether they are eventually diagnosed with the diseases their genotypes predicted, and whether they remain healthy after taking preventive action, such as making lifestyle changes.

The research team for this study has been built, and the trial is slated to start soon, Stephan added.

[Does information improve the quality of life? In general, the answer is a resounding YES. In the fledgling industry of DTC Personal Genomics, the ways and means of HOW the information will help people are still emerging, and to some they seem controversial; yet the cornerstone of medicine is "informed consent". People certainly have the right to information about their bodies, and as long as they are willing to pay for it, there will always be businesses taking the opportunity to satisfy customers.

.... - pellionisz_at_junkdna.com April, 9, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


Enter Navigenics, Where Personal Genomics Gets More Medical

Wired
By Thomas Goetz | April 08, 2008 | Categories: Personalized Medicine

Today marks another entrant in the personal genomics game: Navigenics, the much anticipated startup out of Redwood Shores, California, is open for business.

The company arrives as direct competition to 23andMe and DeCodeMe, both of which began offering direct-to-consumer genotyping last year. Navigenics was originally planning to launch around the same time as the competition, but ended up taking several months longer to fine-tune its product. As planned, Navigenics is taking a more clinical approach to personal genomics, with a more overt pitch towards the medical implications.

I had the opportunity to visit the company last week and get a preview of the service. Here are a few standout observations.

1) The Results: Navigenics launches offering results on 18 diseases, from glaucoma to colon cancer to Alzheimer's. This is about the same number as 23andMe had at its launch, though that company is now up to 58 different conditions.

One big distinction is that 23andMe lets users peruse the entire results of their genotype run -- more than 500,000 different single nucleotide polymorphisms (SNPs) -- the single-letter differences in DNA that determine how individuals differ from one another. Navigenics, even though it's using a 1 million SNP chip (as does DecodeMe), is more circumspect with its results, letting customers see results only for the conditions the company has vetted.

2) The Business Model: As has been anticipated, Navigenics will charge an initial fee of $2,500 for a one year membership -- and then an annual fee of $250. This compares with about $1000 for permanent access at 23andMe and DecodeMe.

That's been criticized as a bad deal, especially since you can't look at the 1 million results. But Navigenics offers an intriguing twist: It will freeze your spit sample, allowing the company to re-test your DNA as more associations with different SNPs are discovered (and deemed scientifically valid). Mari Baker, Navigenics' CEO, says the company expects to go back two or three times a year to extract more data points.

That means that there's a clear trade off: With 23andMe, you're buying into today's technology, and the company promises to show you everything it has. With Navigenics, it's not going to show you everything, but the company promises to keep you up to date as the technology and the science improve.

3) The Calls: One thing you notice when you get your 23andMe results is how subtle the differences are between the average person's risk for disease and your own. For colorectal cancer, for instance, my 23andMe results tell me that I have a .21 out of 100 chance of developing the disease, compared to a .26 out of 100 average risk. That may be a scientifically valid distinction, but as a consumer it is so slight as to be no difference.

Navigenics uses a different method of calculating your genetic risk. I won't get into the details here, but basically it generates a “Lifetime Risk analysis” that results in what the company believes are stronger calls. Certainly the numbers seem more emphatic; examples I saw showed, for instance, a person with a 51 percent risk of heart disease, compared to an average 42 percent risk. That's a striking difference.
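
(Navigenics has not published the details of its algorithm in this article, so as a rough intuition only: a textbook way to turn SNP associations into a personal lifetime risk is to multiply the average lifetime risk by the relative risks of the genotypes one carries, assuming the SNPs act independently. A minimal Python sketch with invented numbers follows -- it is NOT Navigenics' method.)

    # Hedged sketch: multiply per-genotype relative risks into an
    # average lifetime risk. The numbers and the independence
    # assumption are illustrative only; this is NOT Navigenics' algorithm.
    def lifetime_risk(average_risk, relative_risks):
        risk = average_risk
        for rr in relative_risks:
            risk *= rr
        return min(risk, 1.0)  # crude cap; real models renormalize

    avg = 0.42                    # e.g. a 42 percent average heart-disease risk
    carried = [1.10, 1.05, 1.05]  # hypothetical per-genotype relative risks
    print(f"personal lifetime risk ~ {lifetime_risk(avg, carried):.0%}")
    # -> about 51%, the kind of spread quoted above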

4) The Physician: Navigenics puts a great deal of emphasis on the utility of genotype data for useful medical insights; this is clearly one of its main selling points. To that end, “we're putting education towards the top of our agenda,” says Mari Baker, and Navigenics has bankrolled an online continuing medical education course on Genomic and Personalized Medicine with Medscape. Navigenics also has genetic counselors on staff to help customers interpret their results. What's more, it suggests customers bring their doctors the Health Compass Report, a primer for personal physicians explaining what the company does, how it calculates risk, and what its customers' results might predispose them to. To me, this is a bold and aggressive endorsement of the power of genomic data for real-time medical insights, much stronger than anything 23andMe has done. It'll be interesting to see how the medical community responds.

So it's a more high-fidelity environment, with less flexibility for the user. And though 23andMe gets lots of attention because of its founders' Google connection, make no mistake: Navigenics is as blue-chip as they come, with top management from T-Gen and Kleiner Perkins.

Long and short: Yesterday personal genomics was an oddity. Today, it's an industry.

[As reported earlier, Navigenics.com is backed by a stellar VC line-up (KPCB, Sequoia, Mohr-Davidow) and uses Affymetrix arrays, with Affymetrix founder Stephen Fodor playing a key role in Navigenics. The test provides probabilities for 18 conditions:

Alzheimer's disease
Breast cancer
Celiac disease
Colon cancer
Crohn's disease
Diabetes, type 2
Glaucoma
Graves' disease
Heart attack
Lupus
Macular degeneration
Multiple sclerosis
Obesity
Osteoarthritis
Prostate cancer
Psoriasis
Restless legs syndrome
Rheumatoid arthritis

Without a doubt, there will be customers not only "shopping around" but subscribing to several services to compare "head-on" the results that the various (now well over 20) companies ship. Some customers will decide based on price, others based on the list of "covered" conditions, since some are particularly interested in a given disease. It will be most interesting to see to what extent the "probabilities" supplied by different statistics agree (or differ) -- and that will create a stir about validating such "probabilities". As Wired points out, one of the most important questions is how the medical establishment will respond to "the Internet leapfrogging doctors". Just as many in the USA bypass pharmacies and order medicines "on-line" from Canada (or India) -- which will enforce much more level global pricing and break the monopoly of US doctors prescribing medicines -- in the Genome Revolution people are likely to occasionally find more "up-to-date" help "on-line" than by turning to their doctors. Most MDs were trained well before the "Genome Revolution", and some are rather clueless. This will change very soon, with the emergence of "Genomic and Personalized Medicine M.D." specialists. There will be a category of "actionable genomic conditions" -- with everything from self-help textbooks to layman and expert advice on how to adjust medication, diet and lifestyle in view of particular "actionable genomic conditions". Patents have already been filed on how to elevate the quality of life based on the abundance of information resulting from the "Genome Revolution".

.... - pellionisz_at_junkdna.com April, 8, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Gene Regulation Video in Science website

[Stephen Buratowski seems to still uphold Crick's "Central Dogma", while John Mattick consistently strikes down the "Junk DNA misnomer". Look for the paper, in press, on a functional principle for Post-ENCODE Genomics .... - pellionisz_at_junkdna.com April, 7, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Gene Regulation in the Third Dimension
Science 28 March 2008:

Vol. 319, No. 5871, pp. 1793-1794

[Excerpts]

Perspective

Job Dekker

Analysis of the spatial organization of chromosomes reveals complex three-dimensional networks of chromosomal interactions. These interactions affect gene expression at multiple levels, including long-range control by distant enhancers and repressors, coordinated expression of genes, and modification of epigenetic states. Major challenges now include deciphering the mechanisms by which loci come together and understanding the functional consequences of these often transient associations. ...

It has long been hypothesized that communication between widely spaced genomic elements can be facilitated by the spatial organization of chromosomes that brings genes and their regulatory elements in close proximity (Fig. 1A). ...


...3C and microscopy studies confirm that long-range chromosomal interactions are widespread, which suggests a high level of communication between dispersed genomic elements.

Spatial Assembly of Expression Units

Well-characterized examples of spatial association of genomic elements involve interactions between enhancers and target genes. An example is that of the β-globin locus. The locus contains several β-globin–like genes that are regulated by a single cis-acting element, the locus control region (LCR), which is located about 10 to 60 kb upstream of the globin gene promoters. The LCR was found to physically associate with the active globin gene (4). Many more examples of long-range looping events have been identified, e.g., in the α-globin locus (5, 6) and the interleukin gene cluster (7), and also in controlling single genes [e.g. (8, 9)].

Highly specific associations between loci located on separate chromosomes have also been described. These trans-interactions can be between enhancers and putative target genes, as in the case of olfactory receptor genes (10). However, in other cases, they appear to play a role in a higher level of gene control to coordinately regulate multiple loci (Fig. 1B). One example is the association between the T helper 2 cytokine locus on mouse chromosome 11 and the interferon-γ gene on chromosome 10 (11). Expression of these loci is mutually exclusive, and interaction between them may provide an opportunity to initiate or enforce opposite epigenetic states.

The process of mammalian X-chromosome inactivation involves a specific trans-association between the X chromosomes. Female cells carry two X chromosomes, one of which is mostly silenced so that expression levels of X-linked genes are comparable to those in male cells. X inactivation is initiated at the X-inactivation center. Recently, a transient interaction between the two X-inactivation centers was detected during the developmental stages at which X inactivation is initiated (12, 13). Analysis of mutations in the inactivation centers showed that their association is intimately involved in the X-inactivation process. X chromosome pairing provides an elegant mechanism for counting the number of X chromosomes and for ensuring that their epigenetic fates are linked so that when one chromosome is inactivated the other is not.

These observations suggest an interesting model of what constitutes a "regulatory expression unit" in complex genomes. Whereas in compact genomes, genes and their regulatory elements cluster along the linear genome sequence, in more complex genomes, expression units can be assembled by spatial clustering of genes and distant regulatory elements (Fig. 1). This mode of de novo assembly of expression units could provide additional levels of gene regulation by allowing combinatorial association of genes and sets of regulatory elements. For example, for imprinted loci, maternal and paternal alleles associate with different elements to assemble into distinct expression units (14).

Chromosomal Interactions Are Transient

Many of the observations of long-range interactions have been made using 3C and its variations. Performing 3C is relatively simple, but it has proven more complicated to interpret the results, as has been discussed in several reviews (15, 16). In particular, although many of the chromosomal interactions detected with 3C have been confirmed by microscopy, it is difficult to relate 3C signals to actual frequencies of association. In many cases, the frequency of colocalization is rather low (less than 10% of cells at a given point in time), in accordance with the fact that chromosome conformation is dynamic and highly variable among individual cells. Therefore, the common use of rather rigid looping models to describe these associations, although appealing, can be misleading because these models do not reflect the highly transient nature of long-range interactions.

Functions of Chromosomal Interactions

Observing a specific association between two loci does not by itself reveal a function for that interaction. Additional approaches such as knock-down of proteins (e.g., transcription factors) that mediate the interaction or deletion of the regulatory element can reveal causal relations between long-range interactions and gene regulation. Another powerful approach is to analyze colocalization of loci by in situ hybridization combined with simultaneous visualization of RNA production at the gene to determine whether the interaction is correlated in time with gene transcription at the level of single cells. It should be noted that interactions have been observed that correlated with gene transcription but that deletion of the interacting regulatory element did not affect expression (10, 17). Although this could indicate that the interaction is not relevant, it could equally reflect our very limited understanding of the role of chromosomal associations in genome regulation.

How do chromosomal associations affect gene expression? Enhancer-promoter interactions could aid in stable recruitment of components of the transcription machinery to the promoter. In addition, enhancer-bound enzymatic activities could be brought in contact with promoter complexes that are then modified, e.g., phosphorylated or methylated, which leads to modulation of promoter activity. Other types of interactions, such as those between the X-inactivation centers, could allow coordinated assembly of two distinct protein complexes on the interacting partners. Alternatively, given the very transient nature of these associations, the two loci may acquire distinct but stable marks, e.g., DNA methylation, that direct assembly of protein complexes at later time points when the loci no longer interact.

How Do Loci Get Together?

Several models have been proposed (18) by which distant genomic elements contact each other (Fig. 2). Passive diffusion models are based on the assumption that the mobility of loci provides opportunities for random collisions that are then converted into productive interactions; whether they are productive is dependent on the affinity and specificity of bound protein complexes. Although these models are appealing, it seems that active processes are required, as well, to directly guide loci toward each other. For instance, enhancers have been proposed to actively track along chromatin fibers until a receptive promoter is encountered. Recently, it was found that loci can follow rapid and directed trajectories through the nucleus in an actin-dependent fashion (19, 20). The roles of nuclear actin and myosin have been contentious, but these exciting recent results strongly suggest that they play critical roles in facilitating long-range interactions by transporting loci toward each other or to specific subnuclear neighborhoods, such as transcription factories, which are enriched in RNA polymerase (Fig. 1C).

Fig. 2. Passive and active models for bringing loci together. Circles and rectangles represent regulatory elements and genes. Wavy arrows indicate random diffusion. Straight arrows indicate active and directed movement.

Genomes also contain regulatory elements that modulate interactions between other loci. So-called insulators prevent an enhancer from activating a promoter but only when it is positioned in between them. How insulators work is not known in detail, but they too engage in long-range interactions with other elements (21), which suggests that they generate looped chromosome structures that somehow facilitate the formation of appropriate assemblies of enhancers and target genes.

Future Perspective

At present, significant effort is aimed at comprehensive mapping of chromosomal interactions. Several adaptations of 3C have been developed that allow large-scale detection of genomic interactions by using microarrays or by direct sequencing using any of the newly developed high-throughput sequencing technologies (22). The 4C method (3C-on chip, or circular 3C) allows identification of regions throughout the genome that are physically close to a single locus of interest (23, 24). The 5C method (3C–carbon copy) is not anchored on a single locus and is used for mapping dense interaction networks throughout large chromosomal regions of interest (25). These approaches will yield new insights into the spatial organization of genomes but are descriptive in nature. Additional approaches will be essential to unravel the mechanisms by which chromosomal associations affect genome regulation. These approaches include time-resolved imaging of chromosomal loci, molecular and genetic manipulation of the mechanisms that control the subnuclear localization and movement of loci, as well as biochemical studies to characterize the complexes that mediate chromosomal associations. Combined, these various approaches promise to provide exciting new insights into the three-dimensional aspects of gene regulation.
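
(To make the 4C idea concrete in code: given a matrix of contact counts between genomic bins, one can rank all bins by how often they touch a single anchor bin. The Python sketch below uses synthetic counts; real 4C/5C pipelines also normalize for genomic distance and restriction-fragment bias, which this toy deliberately omits.)

    # Toy 4C-style scan: rank genomic bins by contact frequency with
    # one anchor bin. Synthetic Poisson counts stand in for real data;
    # no distance or bias normalization is attempted here.
    import numpy as np

    rng = np.random.default_rng(0)
    n_bins = 100
    contacts = rng.poisson(5, size=(n_bins, n_bins))
    contacts = np.triu(contacts) + np.triu(contacts, 1).T  # make symmetric

    anchor = 42                  # bin containing the locus of interest
    profile = contacts[anchor].astype(float)
    profile[anchor] = 0          # ignore self-contacts
    for b in np.argsort(profile)[::-1][:5]:
        print(f"bin {b}: {int(profile[b])} contacts with anchor bin {anchor}")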

References and Notes

1. D. A. Kleinjan, V. van Heyningen, Am. J. Hum. Genet. 76, 8 (2005).

2. A. G. West, P. Fraser, Hum. Mol. Genet. 14, R101 (2005).

3. J. Dekker, K. Rippe, M. Dekker, N. Kleckner, Science 295, 1306 (2002).

4. B. Tolhuis, R. J. Palstra, E. Splinter, F. Grosveld, W. de Laat, Mol. Cell 10, 1453 (2002).

5. D. Vernimmen, M. De Gobbi, J. A. Sloane-Stanley, W. G. Wood, D. R. Higgs, EMBO J. 26, 2041 (2007).

6. G. L. Zhou et al., Mol. Cell. Biol. 26, 5096 (2006).

7. C. G. Spilianakis, R. A. Flavell, Nat. Immunol. 5, 1017 (2004).

8. J. A. Grass et al., Mol. Cell. Biol. 26, 7056 (2006).

9. H. Jing et al., Mol. Cell 29, 232 (2008).

10. S. Lomvardas et al., Cell 126, 403 (2006).

11. C. G. Spilianakis, M. D. Lalioti, T. Town, G. R. Lee, R. A. Flavell, Nature 435, 637 (2005).

12. N. Xu, C. L. Tsai, J. T. Lee, Science 311, 1149 (2006).

13. C. P. Bacher et al., Nat. Cell Biol. 8, 293 (2006).

14. S. Kurukuti et al., Proc. Natl. Acad. Sci. U.S.A. 103, 10684 (2006).

15. J. Dekker, Nat. Methods 3, 17 (2006).

16. M. Simonis, J. Kooren, W. de Laat, Nat. Methods 4, 895 (2007).

17. S. H. Fuss, M. Omura, P. Mombaerts, Cell 130, 373 (2007).

18. J. D. Engel, K. Tanimoto, Cell 100, 499 (2000).

19. C. H. Chuang et al., Curr. Biol. 16, 825 (2006).

20. M. Dundr et al., J. Cell Biol. 179, 1095 (2007).

21. J. A. Wallace, G. Felsenfeld, Curr. Opin. Genet. Dev. 17, 400 (2007).

22. B. Wold, R. M. Myers, Nat. Methods 5, 19 (2008).

23. M. Simonis et al., Nat. Genet. 38, 1348 (2006).

24. Z. Zhao et al., Nat. Genet. 38, 1341 (2006).

25. J. Dostie et al., Genome Res. 16, 1299 (2006).

26. J.D. is supported by grants from NIH (HG003143), the Keck Foundation, and the Cystic Fibrosis Foundation. M. Walhout is acknowledged for suggestions for this article.

[The spatially fractured (fractal) organization of genomic components has been suggested since 2002 (FractoGene). While the actual mechanisms are being investigated afresh, one wonders how they might be brought together in a coherent conceptual framework .... - pellionisz_at_junkdna.com April, 6, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

BioNanomatrix Lands $5.1M in Venture Financing

[March 26, 2008]

NEW YORK (GenomeWeb News) – DNA analysis firm BioNanomatrix has secured $5.1 million in series A venture financing, lead investor Battelle Ventures said today.

BioNanomatrix is developing nanoscale single-molecule imaging and analytic platforms with applications in genome analysis and clinical diagnostics.

The funding round included an investment from Battelle’s affiliate Innovation Valley Partners, and from KT Venture Group. Ben Franklin Technology Partners and 21 Ventures also participated in the round through debt conversion agreements.

Battelle said that biotechnology executive Edward Erickson and Battelle partner Tracy Warren have joined BioNanomatrix's board of directors.

Warren said that the BioNanomatrix technology, which is based on nanochips that are compatible with off-the-shelf optics, can be used to “significantly reduce the time, complexity and cost dynamics of sequencing the genome and carrying out genetic analyses.”

In September, BioNanomatrix and partner Complete Genomics were awarded an $8.8 million grant from the US National Institute of Standards and Technology to develop a high-throughput sequencing technology that they said will be able to sequence a human genome in eight hours for less than $100.

[There is so much talk about the "$1,000 Genome" that the BioNanomatrix/Complete Genomics potential for a $100 genome (One Hundred US dollars!) gets surprisingly little exposure. Perhaps some don't look up the revolutionary technology patent, or such an astonishingly low price does not appear credible to them (it is credible enough for top-ranking investors). Or perhaps some are simply terrified of a "data deluge" once these next-next-generation sequencing nanotechnologies suddenly appear. - pellionisz_at_junkdna.com April, 4, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Roche NimbleGen Launches NimbleGen Sequence Capture Technology for Targeted Genome Resequencing

Business Wire, 2008-04-01

www.roche-applied-science.com/ - Roche NimbleGen, a company of Roche Applied Science, announced the commercial launch of its NimbleGen Sequence Capture technology for scientists interested in selecting targeted regions of the genome for high-throughput sequencing. This new technology enables researchers to easily and rapidly capture up to five million targeted bases from the human or mouse genomes.

The technology addresses a major bottleneck faced by researchers when trying to sequence large or multiple genomic regions. The bottleneck lies in the sample preparation process, where researchers want to sample only a small, relevant portion of a genome and sequence it using next-generation technologies such as Roche's 454 Genome Sequencer FLX system. The length of the captured sample fragments, coupled with the long-read technology of the 454 Sequencing system, enables haplotyping and provides full information on variants, such as insertions, deletions, and SNPs. Currently, regions of interest are selectively sampled through a labor-intensive process whereby each fragment is amplified individually using the polymerase chain reaction (PCR). Since a unique reaction is required for each fragment, the selection of large genomic regions requires the parallel design, optimization and execution of up to thousands of individual reactions, representing a substantial investment in time and money.
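
(The scale of that bottleneck is easy to work out: capturing 5 million targeted bases with conventional ~500 bp amplicons means on the order of 10,000 separate PCRs versus a single capture hybridization. In the Python sketch below, the amplicon length and per-reaction cost are assumptions for illustration, not vendor figures.)

    # Back-of-the-envelope: PCR-based target selection versus one
    # capture step. Unit figures are illustrative assumptions only.
    target_bases = 5_000_000   # the stated 5 Mb capture capacity
    amplicon_len = 500         # assumed typical PCR amplicon length
    cost_per_pcr = 2.0         # assumed USD of reagents per reaction

    n_reactions = target_bases // amplicon_len
    print(f"{n_reactions:,} individual PCRs "
          f"(~${n_reactions * cost_per_pcr:,.0f} in reagents alone) "
          "versus a single capture-array hybridization")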

The NimbleGen Sequence Capture service is the first commercially available solution that specifically addresses this sample preparation bottleneck. This technology offers maximum flexibility with custom tailored designs targeting either contiguous or dispersed genomic regions, and saves substantial time and cost when compared to PCR-based methods.

Researchers around the world have awaited the availability of this revolutionary technology with great excitement. "What's happening today with regard to Sequence Capture is reminiscent of the early days of PCR, the same sense that all of a sudden your field has changed, and all that constrains you is your imagination about how you should now be designing experiments," said John Greally, Ph.D., Associate Professor in the Departments of Medicine and Molecular Genetics at Albert Einstein College of Medicine. "Occasionally a technical advance lives up to its billing as transformative. This seems to be one of those occasions."

The initial offering of this technology is as a service, and Roche NimbleGen plans to make NimbleGen Sequence Capture arrays and related reagents and instruments available soon to customers worldwide, so that the technology can be used in every researcher's laboratory. Indeed, several early-access customers have already started using NimbleGen Sequence Capture arrays in high-profile genome sequencing projects to facilitate the analysis of thousands of human DNA samples to determine the genetic variations associated with a range of human diseases.

Baylor's Human Genome Sequencing Center, which last month presented NimbleGen Sequence Capture data at the Advances in Genome Biology and Technology conference, plans to employ NimbleGen Sequence Capture technology to target all human coding regions for several projects, including The Cancer Genomes Atlas (TCGA) project and the 1000 Genomes Project. TCGA is a joint program between the National Cancer Institute and the National Human Genome Research Institute to identify and understand the genetic variation involved in human cancer, with the end goal of improving the ability to diagnose, treat and prevent cancer. The 1000 Genomes Project is an international research consortium with the goal of sequencing at least one thousand people from around the world in an effort to catalog biomedically relevant DNA variations in the human genome.

"We are extremely pleased with the capabilities and efficiencies the NimbleGen Sequence Capture technology has brought to our sequencing research efforts. There are huge advantages when this technology is compared to PCR-based methods," said Richard Gibbs, Ph.D., Director of Human Genome Sequencing Center at Baylor College of Medicine. "This is the most exciting next phase in bringing genetic discovery to medicine."

Roche NimbleGen is a leading innovator, manufacturer and supplier of a proprietary suite of DNA microarrays, consumables, instruments and services. Roche NimbleGen uniquely produces high-density arrays of long oligo probes that provide greater information content and higher data quality necessary for studying the full diversity of genomic and epigenomic variation. The improved performance is made possible by Roche NimbleGen's proprietary Maskless Array Synthesis (MAS) technology, which uses digital light processing and rapid, high-yield photochemistry to synthesize long oligo, high-density DNA microarrays with extreme flexibility. For more information about Roche NimbleGen, please visit the company's website at www.nimblegen.com.

[There is much "new revelation" on the IT technology-side of PostModern Genomics. The winning combination, however, is to put together the highest of high-tech with the largest medical facility to establish and discover relationship of pathological phenotypes with genotypes .... - pellionisz_at_junkdna.com April, 2, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

NanoLabs Raises £10M to Aid Race for a Cheaper Genome

By Nuala Moran

BioWorld International Correspondent

LONDON - As the race for the $1,000 genome heats up, Oxford NanoLabs Ltd., one of the leading European contenders, raised £10 million (US$19.89 million) in a second private round, enabling the company to develop working prototypes of its nanopore-based DNA sequencing technology.

Gordon Sanghera, CEO, told BioWorld International, "While the markets are in absolute turmoil at the moment, we were fully subscribed. That's down to the fact that the institutional and private investors really believe in the technology."

The company's nanopore technology has the potential to sequence single molecules of DNA directly, avoiding the need for chemical labeling and time-consuming DNA amplification.

As each DNA base is passed through a nanopore, it binds transiently to a site within the pore.

During binding, each base generates a characteristic reduction of the electrical current through the nanopore, allowing for direct recording of the signal and simple data processing.
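
(In signal-processing terms this is a decoding problem: each base blocks the ionic current to a characteristic level, and the base caller maps measured current back to sequence. The Python sketch below is a deliberately naive nearest-level decoder; the single-base picoamp levels are invented, and real pores sense several bases at once and require far more elaborate statistical models.)

    # Toy nanopore base caller: assign each measured current blockade
    # to the nearest characteristic level. The levels are invented;
    # real devices sense k-mers and use HMM or neural decoders.
    LEVELS = {"A": 54.0, "C": 48.0, "G": 61.0, "T": 44.0}  # pA, assumed

    def call_bases(currents):
        return "".join(
            min(LEVELS, key=lambda b: abs(LEVELS[b] - i)) for i in currents
        )

    print(call_bases([53.1, 47.6, 60.2, 44.9, 54.8]))  # -> "ACGTA"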

Sanghera is coy about giving any details of the plans to commercialize the technology, noting that it is "an incredibly competitive space." Nor is he prepared to give a headline figure on how much it might cost to sequence a genome using Oxford NanoLab's equipment.

"It will obviously be an order of magnitude [lower]. With our system you can work with a single molecule of DNA," Sanghera added.

"There is no need for amplification, and no need for labeling - meaning no reagent costs - and you don't need expensive reading systems."

In addition, he said, various protagonists have come up with a range of interpretations of what would constitute a system for analyzing an entire genome at a cost of $1,000, with some, for example, excluding the capital costs of the equipment.

The $1,000 genome target was set by the U.S. National Institutes of Health (NIH) in 2002 as the figure at which an individual's genome could be sequenced as part of routine medical care, thus opening the floodgates to personalized medicine.

Sanghera said Oxford NanoLabs will be pushing the NIH "to define exactly what they mean by the figure."

The NIH said the target was still some way off when it announced the award of the latest round of grants aimed at reducing the cost of sequencing in August 2007.

While DNA sequencing costs have fallen 50-fold over the past 10 years, it still costs as much as $5 million to sequence 3 billion base pairs -- the amount of DNA found in the human genome.

In fact, the NIH's National Human Genome Research Institute has set the nearer-term goal of lowering the costs of sequencing to $100,000 per genome.

Sanghera claimed no other company working in the field currently has technology that does away with the need for labeling and amplification.

"We have corralled the intellectual property on nanopores, and we have gone out and targeted the leading academic researchers, including those in the U.S," he pointed out.

Oxford NanoLabs was spun out of Oxford University in 2005 with seed funding from IP Group plc, the quoted technology management company. It raised £7.75 million in its first round in 2006.

IP Group followed on in the current round and now holds 19.5 percent of the company.

Published April 2, 2008

[Oxford NanoLabs competes against the single-molecule sequencer Helicos and Sunnyvale-based Complete Genomics -- the latter is not shy about projecting a $100 full genome with similar (patented) technology within ~3-4 years. (Yes, ONE HUNDRED USD for a full human DNA sequence.) The present price is already at $60,000 .... - pellionisz_at_junkdna.com April, 2, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

New Research Provides Greater Insight into Gene Regulation

[March 27, 2008]

By a GenomeWeb staff reporter


NEW YORK (GenomeWeb News) – Seven articles published in Science this week offer an overview of different ways that researchers are tackling the complexities of gene regulation.


The journal, published online today, includes a special section on gene regulation that provides a peek into the current state of research in the area, including everything from chromosome organization to transcription factors to various RNA regulators and riboswitches.


In the first article, Columbia University biochemist Oliver Hobert compares and contrasts transcription factor- and miRNA-mediated regulation of gene expression. As Hobert discusses, these regulators share many traits, such as cell type specificity, dependence on binding site accessibility, and influence over multiple genes.


But, he notes, there are important differences between them as well. For instance, miRNAs offer fast and reversible control of gene expression. In addition, they seem to have greater redundancy and can participate in more specialized — even compartmentalized — control of gene expression.


Next, senior author John Mattick, a molecular biologist at the University of Queensland, and his colleagues tackle the broader subject of non-protein-coding RNAs. They discuss the advent of research into these ncRNAs and review what is now known about the molecules’ influence over genome dynamics, cell biology, and development.


The article touches on the role of ncRNA in the functional organization of the cell -- for instance, in chromatin architecture and nuclear organization. It also discusses gene regulation by ncRNAs, particularly miRNAs, and the evolutionary function of such regulation.


“Given the functional versatility of RNAs, it is plausible that ncRNAs have represented a rich substrate for evolutionary innovations in eukaryotes,” the authors wrote. “[R]egulatory RNAs are centrally involved in the ontogeny of many organisms, from unique developmental pathways in protozoa to the control of conserved or clade-specific developmental regulators in multicellular animals.”


In the third article, Eugene Makeyev and Tom Maniatis, both molecular and cellular biologists at Harvard University, focus on the multilevel regulation by miRNAs. They describe the latest understanding of miRNA-regulated gene expression — including their cell and tissue-specific influence on transcription, alternative pre-mRNA splicing, and translation — and the implications of these for gene regulatory networks.


The authors conclude that some miRNAs may “prevent interference between spatially and temporally adjacent gene expression programs” by “rewiring the cell-specific networks at all levels of the regulatory hierarchy.”


Cornell University molecular biologists Leighton Core and John Lis turn their attention to another player in gene expression — RNA polymerase II. Their article reviews recent evidence in Drosophila and mammals suggesting the enzyme pauses near the promoter of many genes during the early elongation stage of transcription, providing additional gene regulation.


“Future investigations should focus on determining how promoter-specific binding proteins affect the transition between initiation and pausing, as well as the transition between pausing and productive elongation,” Core and Lis wrote. “[T]he results will provide important insights into the role of cell signaling events in the mechanics of transcription regulation.”


Next up, Job Dekker, a University of Massachusetts researcher who studies three-dimensional genome organization, discusses the importance of spatial chromosome organization in gene regulation. For instance, he describes how expression can be controlled via spatial gene clusters and their interactions with relatively distant regulatory elements, such as enhancers and repressors.


Meanwhile, Yale University biochemist Ronald Breaker describes so-called riboswitches, seemingly primitive RNA elements within mRNAs that influence gene expression by binding small molecules. Despite their simple nature, Breaker reveals, these riboswitches have important and diverse functions. For instance, they can stabilize certain forms of mRNA, act in feedback loops that shut off certain genes when their products are no longer needed, and mediate some mRNA splicing events.


Finally, senior author Alexander Johnson, a microbiologist at the University of California at San Francisco, and his colleagues discuss the ways that eukaryotic transcription circuits evolve over time and how these changes affect a species’ morphology and physiology.

[The Genome Revolution is about information science and information technology. Some forget that no computing problem would be looming if only 1.3% of the human DNA (the "genes") mattered. This collection of articles shows that the information science of the new Genomics is far from "done" -- to the contrary. At meetings revolving around new data, new algorithms and new information technology, one wonders what kind of new comprehensive concept of how the DNA governs the development of organelles, organs and organisms will emerge .... - pellionisz_at_junkdna.com March, 29, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

New Software Aids Researchers Analyzing Millions of DNA Sequences

ScienceDaily (Mar. 28, 2008) — It took a global corps of scientists approximately $500 million and 13 years to identify the more than 35,000 genes of the human genome. Five years later, Boston College Biologist Gabor Marth and his research team have developed software that can analyze half a million DNA sequences in 10 minutes.

The Marth laboratory's proprietary PyroBayes software is one of a new breed of computer programs able to accurately process the mountains of genome data flowing from the latest generation of gene decoding machines, which have placed a premium on computational speed and accuracy in data-crunching fields known as bioinformatics and high-throughput biology, said Marth, an associate professor of Biology.

"We're on the edge of a real technological revolution that I think will help us understand the genetic causes of diseases in humans and how genetic materials determine traits in animals," said Marth. "It is going to lead to less expensive technologies that will allow researchers to decode any individual."

PyroBayes will aid researchers involved in the 1,000 Genomes Project, which announced last month a plan to sequence the genomes of 1,000 individuals from around the world. The NIH, which helps direct the project, has awarded Marth more than $1.3 million to develop software over the next four years.

The advances of the Marth lab were revealed in two articles published by the professor and his assistants in the February issue of Nature Methods, the premier journal of scientific research methodology.

In an article co-authored by Marth, post-doctoral researcher Chip Stewart, and graduate students Aaron Quinlan and Mike Strömberg, the group unveiled the lab's PyroBayes base-caller software, which examines data from one of the latest generation of DNA decoding machines -- from Roche / 454 Life Sciences -- faster and with far greater accuracy than other programs for pyrosequencing, a technology that decodes DNA sequence by detecting the release of pyrophosphate.
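
(The published PyroBayes model is in the Nature Methods paper; as a rough intuition for what a probabilistic pyrosequencing base caller must do, consider the core subproblem of calling a homopolymer length from one flow intensity. The Python sketch below uses an assumed Gaussian noise model and a flat prior -- illustrative choices, not the published algorithm.)

    # Toy posterior over homopolymer length given one pyrosequencing
    # flow intensity. The Gaussian noise model (sigma growing with
    # length) and the flat prior are illustrative assumptions only.
    import math

    def homopolymer_posterior(intensity, max_len=8):
        def likelihood(n):
            mu, sigma = float(n), 0.15 + 0.05 * n  # assumed noise model
            return math.exp(-0.5 * ((intensity - mu) / sigma) ** 2) / sigma
        weights = [likelihood(n) for n in range(max_len + 1)]
        total = sum(weights)
        return [w / total for w in weights]

    post = homopolymer_posterior(2.8)
    best = max(range(len(post)), key=post.__getitem__)
    print(f"called homopolymer length {best}, posterior {post[best]:.2f}")
    # a per-base quality score follows directly from the posterior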

A second Nature Methods article, written in collaboration with colleagues from the Washington University School of Medicine, reported that three other computer programs developed by the Marth lab made it possible to quickly and accurately examine the whole genome of a laboratory worm and identify key differences between the sample strain and an earlier strain -- a comparative process known as re-sequencing, now being applied to the genomes of humans and other organisms. This second study used another next-generation DNA sequencing platform, the Illumina/Solexa machine.

Advances are driving re-sequencing costs down, but researchers must still prove the effectiveness of the new technology by working with smaller organisms, which made the worm study critical, Marth said. "This brings us closer to a major milestone in human individual re-sequencing -- the decoding of the genome of human beings in routine fashion," said Marth.

Of the few computer programs available for the new sequencing machines, the software package developed by the Marth lab is the only one capable of working with a variety of decoding machines and offers greater accuracy, allowing researchers to separate true genetic variations from data errors, said Marth. PyroBayes, a Linux-based package, is made available to fellow academic researchers at no cost.

As a member of its analysis group, the Marth lab participates in the data analysis of the 1000 Genomes Project, which was launched last month. The goal of the project is to sequence the genomes of at least 1,000 people from around the world to create the most detailed and medically useful picture to date of human genetic variation.

Ultimately, advances in bioinformatics will help push genetic science forward, shedding new light on human health and disease. Marth sees his lab's role as providing critical tools that help researchers organize data, interpret them, and visualize genome variations.

"We are excited to develop the software that will help these super-fast, high-throughput sequencing machines to realize their potential to produce invaluable data for research," Marth said.

The journal article can be purchased from Nature Methods: http://www.nature.com/nmeth/journal/v5/n2/full/nmeth.1172.html

[The price of sequencing is no longer the bottleneck. The looming "Data Deluge" is .... - pellionisz_at_junkdna.com March, 28, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

BGI to Ramp up Sequencing Abilities with Illumina, Roche Tools

[March 26, 2008]

By a GenomeWeb staff reporter

NEW YORK (GenomeWeb News) – The Beijing Genomics Institute is dramatically expanding its DNA sequencing capacity by adding fourteen new next-generation sequencers, BGI said today.

BGI said it has ordered 11 new Illumina Genome Analyzers and three Roche Genome Sequencer FLX instruments. The Institute already had six Illumina sequencers, as well as 26 3730xl sequencers from Applied Biosystems and 107 MegaBACE-1000 sequencers from GE Healthcare.

The purchases are expected to bring BGI's raw sequencing data output up to 16 Gb (gigabases) per day or more, and the institute said it will “significantly increase [its] ability to market new generation sequencing services to the community.”

BGI recently announced plans to sequence the genome of the Giant Panda and a plan to work with the Sanger Institute and the National Institutes of Health to sequence the genetic codes of 1,000 people from around the world.

[With 150 "next generation sequencers" the Beijing Genomics Institute does not stop. They have openly declared a quest for the "next generation genome informatics" - with an enormous team of scientists, educated both in China and their best acquiring Ph.D.-s in the USA .... - pellionisz_at_junkdna.com March, 26, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Genetic Testing Gets Personal [and now there are more than 20 companies]

Firms Sell Answers on Health, Even Love

By Rick Weiss

Washington Post Staff Writer

Tuesday, March 25, 2008; Page A01

In January, at the World Economic Forum in Davos, Switzerland, movers and shakers lined up to spit into test tubes -- the first step to having snippets of their DNA analyzed by 23andMe, a personalized gene-testing company that for $999 promises to help people "search and explore their genomes."

Those wanting an even more complete analysis of their biological inheritance can turn to Knome, a Cambridge, Mass., company that, for $350,000, will spell out all 3 billion letters of their DNA code -- an unparalleled opportunity, the company says, to "Know thyself."

For singles on tighter budgets and with narrower interests, there is ScientificMatch.com, which says that its $995 genetic test will help clients find DNA-compatible mates who will smell sexier to them, have more orgasms and produce healthier children.

This is the world of direct-to-consumer genetic testing, a peculiar mix of modern science, old-fashioned narcissism and innovative entrepreneurialism, all made possible by the government-sponsored Human Genome Project.

More than 20 companies today offer "personalized genomics" tests that promise to help clients discern from their DNA what diseases they are likely to get, whether they are shy or adventurous, even their propensity to become addicted to drugs. A growing number bypass doctors and deal directly with consumers.

The trend has critics warning that the market is becoming rife with hype. The field is effectively free of regulatory oversight, watchdogs note, and much of the science behind the results is still sketchy.

But backers of these enterprises say they are pioneering nothing less than a medical and cultural revolution. With each person who adds his or her DNA to the companies' high-security databases, they say, links between specific gene variants, health conditions and behavioral traits are getting documented, speeding discoveries about biology, identity and destiny.

"We call it consumer-enabled research," said Linda Avey, co-founder of 23andMe, based in Mountain View, Calif. "It's about changing the paradigm of how research is done."

It is also about self-discovery and a new kind of social networking, as "members" -- as some companies call them -- learn about their DNA details and share them with others.

"We envision a new type of community where people will come together around specific genotypes, and these artificial barriers of country and race will start to break down," said Anne Wojcicki, who with Avey co-founded 23andMe.

"I think people will really get into it," said George Church, the Harvard geneticist who co-founded Knome and founded the not-for-profit Personal Genome Project, which will compare the genomes of 100,000 people willing to make their DNA public. "I think this is going to connect people clear around the world."

Gene Chips Slash Costs


Personalized medicine, the detection of people's individual health risks and the tailoring of preventive strategies and therapies just for them, has been a buzzword for years. But it remained elusive until technological advances allowed researchers to scan huge stretches of human DNA quickly and at relatively modest expense.

"Gene chips" that cost just a few hundred dollars can today detect hundreds of thousands of tiny molecular hiccups in a smidgeon of DNA collected from saliva or blood. Unlike better-known genes that single-handedly cause inherited diseases such as sickle cell anemia, most of these gene variants add in very small ways to a person's medical weaknesses or strengths.

Only about 100 such glitches have been convincingly linked to specific diseases or behavioral tendencies, but new connections are being discovered every month. Together they can start to paint a picture of a person's health prospects and behavioral predilections.

Meanwhile, the cost of spelling out an individual's entire genetic code, or genome, is also dropping precipitously, from several million dollars a few years ago to about $1 million last year and an anticipated $200,000 or so this year. [The journalist could not keep up with the numbers. Full sequencing is already under $60,000 -- down from the initial $300 M; a five-thousandfold drop in about 5 years - AJP]

"Our goal and vision has been to make a total human genome affordable," said Christopher K. McLeod, chief executive of 454 Life Sciences in Branford, Conn., which makes some of the fastest and most powerful gene-sequencing machines under the corporate motto "Measuring Life One Genome at a Time."

By comparing an individual's genetic profile with databases of known correlations, companies can calculate that the person, for example, is 30 percent more likely than average to get colon cancer, 20 percent less likely to get cataracts, and 10 percent more likely to be impulsive or have anger-management issues.

Yet the probabilistic nature of those results is potentially problematic, said J. Craig Venter, the geneticist who broke scientific and cultural ground last year when his eponymous Rockville research institute spelled out his entire genetic code and posted the results on a publicly accessible database, revealing to the world that he has, among other things, genetic inclinations toward wet earwax.

It can be entertaining, Venter said, to learn one has a gene for soggy earwax. "But if you're on the receiving end of one of these tests and are told your probability of having a serious problem is 62 percent, what the hell does that mean?"

Results Can Mislead


And that is assuming the results are correct. As it turns out, many gene tests today search for DNA patterns that have been linked to a disease or trait in only one or two studies. Such findings are often overturned by later research.

Even if the findings hold up, there are countless other genes still unstudied that experts say will eventually be found to either augment or counterbalance the risks discovered to date. Until those factors -- harmful and protective -- are added to the gene chips, clients run the risk of being misled.

"This information can be quite profound," said R. Alta Charo, a professor of law and bioethics at the University of Wisconsin. "It can lead to a decision to have your breasts chopped off before you've been sick for a day or having your ovaries scooped out before you have children. These are dramatic decisions, but these products are going on the marketplace as though they were underarm deodorant."

Exacerbating the problem is that virtually no one is watching over the industry. The Food and Drug Administration does not regulate most gene-based tests, and there is no federal proficiency-testing system for companies offering them.

So while some of the new companies, including 23andMe, Knome and Navigenics of Redwood Shores, Calif., boast solid teams of renowned researchers and emphasize that the information they provide is not diagnostic, other outlets inhabit the scientific fringe.

Perhaps most denigrated by experts are those that purport to identify people's nutritional needs from their DNA and then sell them dietary supplements at a hefty profit.

"It is totally bogus," said Gail Geller of the Berman Institute of Bioethics at Johns Hopkins University, whose research has documented how easily the public can be bamboozled by genetic test results.

Then there is ScientificMatch.com, which "uses your DNA to maximize the chances of finding chemistry -- actual, physical chemistry -- with your matches," according to the company's Web site.

At the heart of that claim is a hypothesis that people are most attracted to others whose immune systems differ most from their own. A few studies have found evidence supporting the idea (it may be an evolutionary strategy for maintaining genetic diversity). But at best, geneticists say, it is a narrow basis upon which to choose a mate.

"It creates an air of charlatanism that doesn't help the field," Venter said.

All told, concluded a study in this month's issue of the American Journal of Human Genetics, "There is insufficient scientific evidence to conclude that genomic profiles are useful in measuring genetic risk for common diseases or in developing personalized diet and lifestyle recommendations for disease prevention."

The Science Is Still Young


Despite today's limitations, the day will come, experts agree, when enough will be known about human genetics so that a scan of an individual's genome will convincingly predict that person's medical risks and behavioral foibles -- perhaps with enough assuredness to dictate preemptive therapy or even extend disability rights to some whose behavior falls outside societal norms. But the only way to get there is to collect massive amounts of data from a wide array of people so computers can find those correlations.

That task is underway, but the work takes time, which is why direct-to-consumer genomics companies say they should be welcomed. Most people are disinclined to sign up for research that offers nothing in return, Wojcicki said, but at 23andMe, "they get something back right away, and they are also part of something really powerful."

Wojcicki predicts that as members share information about their genes, their health and their personalities -- an irresistible option for many in this age of electronic "friending" -- the new enterprises will revolutionize health care "the way YouTube revolutionized media."

"I call it the democratization of the genome," Venter said.

Concerns persist. If people want to use their information in a meaningful way, they will probably want to share it with their physician, said Francis S. Collins, director of the National Human Genome Research Institute and a leader of the Human Genome Project, completed in 2003, which cobbled together the first complete human DNA sequence. And medical records are not totally opaque to prying eyes.

"People ought to think about that," said Collins, who confessed to feeling both excited and concerned about consumer-driven genomics. "We don't want employers to use genetic information to make hiring or firing or promotion decisions on the basis of fears that an employee may get sick." It is "enormously frustrating," he added, that bills prohibiting genetic discrimination have been passed by both chambers of Congress but are stalled because of an unrelated power struggle on Capitol Hill.

One subtle but potentially insidious downside of the new trend, Collins said, is that people may slip into the DNA-deterministic thinking that fed the early 20th-century eugenics movement, in which people with "undesirable" traits underwent forced sterilizations.

"I very much worry that all this emphasis on a 'gene for this' and 'gene for that' raises the risk that people will conclude that that's the whole story," Collins said. Instead of empowering people to make healthful changes in their lives, that could simply make them "more fatalistic," he said, "in which case, what's the point?"

At the same time, he and others acknowledged, by identifying people with similar genes but different health outcomes, genomics companies' databases could help scientists identify the specific environmental influences -- diet, exposure to certain chemicals, even stress or abuse -- that interact with particular genes to make people into the individuals they are.

"By disentangling the genetics, we'll get a much deeper appreciation of both nature and nurture," said Church, the Harvard geneticist. "I would be surprised if it didn't change our view of ourselves pretty significantly."

[Those who lived through and worked through the so-called "dotcom boom" (and coined siliconvalley.com) may find the above almost like a "deja vu": competing companies appearing on the same day; within half a year, 20+ companies crowding into the space; personal and personalized information profiled in emails and on the web; no regulatory control; major VC firms engaging to the hilt; industrial leaders catapulting; uncounted billions made under a cloud of "hype"; and sterling companies, such as Cisco, Yahoo and Google, emerging. We find ourselves at a juncture of similar significance. Sure, there are major differences. First, the "dotcom boom" had rather shallow science behind it -- the Genome Revolution is perhaps the biggest paradigm-shift in the history of R&D. Second, the Internet was about information, and thus did not necessarily have an impact on everybody. Personal Genomics (not "genetics") affects absolutely everyone -- the healthy (to stay that way) and those who at least potentially harbor health conditions -- in some cases with literally life or death at stake. Whatever will emerge from this (after the "dotcom boom", shall we call it the "Big Bang of Genomics"?) is impossible to precisely forecast -- but impossible to overestimate .... - pellionisz_at_junkdna.com March, 21, 2008]


^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Navigenics will Launch [its Personalized Genomics web service] April 8th in NYC!

Well -- in a move to trump NY, or with a business plan that does include physicians?!

That's right

From a counselor's email sent by dnanyc@navigenics.com

Dear xxx,

Navigenics invites you to be one of the first people in NYC to experience first-hand a leading-edge approach to health and wellness.

Navigenics is launching its first genetics service April 8th. We truly believe this company will revolutionize the way we think about our health. Our first service, called the Navigenics Health Compass, tests for genetic risk markers for 18 actionable common conditions—cardiac disease, several cancers, Alzheimer’s among them—and arms you with specific information on how you can mitigate your individual risk for developing each condition, including personal genetic counseling sessions and customized health and wellness content. To celebrate the launch of our first service, we are coming to New York City for two weeks in April (April 8 – 17) to host a series of exciting and informative events. We will be installed at a SOHO location, and I encourage you to join us for some of our events. (Please see the calendar invitation below.)

Please help us celebrate this transformation for medicine: from a “sick care” model of “wait and see” to the emergence of early risk detection. The time has come to empower individuals with the opportunity and knowledge to take preventative steps, and a hands-on approach to their family’s health and wellness!

All the best,

The Navigenics Team

RSVP@navigenics.com

Now it will be interesting to see how their competitors 23andMe and deCodeMe react. Our practices stand ready to pick up the pieces and serve as an information source for both patients and physicians who have lost their compass, or just want to learn about this new technology.

The Sherpa Says:

I'll be there. How about you? To all my physician friends: give me a call and I can explain what the hell is going on.... You should go to these events.... Seriously, they look pretty impressive.

[Personal Genomics is not only here to stay, but is now the trend. While 23andMe and DeCodeMe have for some months provided "interesting facts" based on genotyping, Navigenics focuses on "actionable genomic risks" (based on the same technology, except that it uses the competitor to the Illumina array, the Affymetrix chip, which has lately become the "underdog"). Navigenics is backed by KPCB, Sequoia and Mohr-Davidow, and one of its leaders is Steve Fodor, Founder of Affymetrix .... - pellionisz_at_junkdna.com March, 21, 2008]


^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Tapping miRNA-Regulated Pathways

GEN Genetic Engineering and Biotechnology News
Mar 1 2008 (Vol. 28, No. 5)
Expression Profiling Ramps Up to Support Diagnostics and Drug Discovery

Vicki Glaser

miRNAs are master regulators of gene expression, according to William S. Marshall, Ph.D., president and CEO of miRagen Therapeutics. “You can have one microRNA that controls multiple genes and one gene that is controlled by multiple microRNAs.” They exert negative regulation and have been shown to control expression of entire signaling pathways.
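
(That many-to-many relationship is worth making explicit: the same regulation table can be read from miRNA to targets or from gene to regulators. A minimal Python sketch follows, with invented identifiers -- the pairings are placeholders, not curated targets.)

    # Many-to-many miRNA/gene regulation as a pair of lookups.
    # The pairings are invented placeholders, not curated targets.
    from collections import defaultdict

    mirna_to_genes = {
        "miR-a": ["GENE1", "GENE2", "GENE3"],
        "miR-b": ["GENE2", "GENE4"],
    }

    gene_to_mirnas = defaultdict(list)  # invert the mapping
    for mirna, genes in mirna_to_genes.items():
        for gene in genes:
            gene_to_mirnas[gene].append(mirna)

    print(gene_to_mirnas["GENE2"])  # -> ['miR-a', 'miR-b']: one gene, two regulators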

miRNA discoveries appear in the news on a regular basis. For example, a research team at the Wistar Institute recently reported the identification of two miRNAs—miR-373 and miR-520c—that are members of the same miRNA family and were shown to promote tumor metastasis. The two miRNAs are not found in normal adult cells, only in tumor cells, according to the authors. Of the 450 miRNAs tested, these two, described by Tony Huang, Ph.D., and colleagues, induced cell migration in the MCF-7 line of human breast cancer cells, which normally do not metastasize. They could serve as biomarkers for the metastatic potential of breast cancers and the need for more aggressive treatment.

As the number of miRNAs identified continues to grow, researchers are exploring the biology of miRNA function and characterizing the tissue specificity and range of activity of individual miRNA molecules. Changes in miRNA levels have been correlated with disease processes. A great deal of work is under way to study the effects of over- or underexpression of specific miRNAs on the development and inhibition of pathogenesis, particularly in the areas of cancer, as well as in heart disease, neurological disorders, and aging.

Researchers in industry and academia will present their latest findings at CHI’s “microRNA in Human Disease and Development” meeting to be held later this month.

“The microRNA expression profile is extremely indicative of cell type,” continues Dr. Marshall. “When a cell transforms into a disease state, there is typically a perturbation in the microRNA level.” Approximately 400 to 500 miRNAs have been characterized in humans to date, according to Dr. Marshall, and 80–150 are typically expressed in any particular cell type.
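
Since only a subset of the known miRNAs is expressed in any one cell type, the expression profile itself acts as a fingerprint. A minimal sketch of that idea, assuming invented reference fingerprints and a simple overlap score rather than any published classifier:

    # Toy reference fingerprints: miRNAs detectably expressed per cell type.
    # (miR-122 is liver-enriched, miR-1/133a/208 are cardiac; sets are toy-sized.)
    reference = {
        "hepatocyte":    {"miR-122", "miR-21", "miR-192"},
        "cardiomyocyte": {"miR-1", "miR-133a", "miR-208"},
    }

    def best_match(expressed):
        """Return the cell type whose fingerprint has the highest
        Jaccard similarity with the observed miRNA set."""
        jaccard = lambda a, b: len(a & b) / len(a | b)
        return max(reference, key=lambda ct: jaccard(expressed, reference[ct]))

    print(best_match({"miR-122", "miR-192", "miR-30"}))  # -> hepatocyte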

miRagen, which will focus on the role of miRNAs in cardiovascular health and disease, is a newly established company based on research conducted in the laboratory of company cofounder Eric Olson, Ph.D., chair of the department of molecular biology at the University of Texas Southwestern Medical Center.


Decoding a Complex Picture

“We believe that individual microRNAs regulate the expression of tens, if not hundreds, of independent genes,” and many of those affect related pathways and overlapping processes, says David Brown, Ph.D., director of R&D at Asuragen. “Our studies of microRNAs indicate that a single, small RNA often regulates a given cellular process by affecting the expression of genes in two to four related pathways.”

Researchers at Asuragen, in collaboration with the laboratory of Frank Slack, Ph.D., at Yale University, have shown that the let-7 miRNA is expressed in normal lung tissue and that inhibition of let-7 function leads to increased cell division in lung cancer cells. let-7 controls cell-cycle progression, and its overexpression in cancer cells can reduce cell division. The let-7 family of miRNAs has been shown to repress multiple genes involved in the cell cycle and cell division, including RAS.

At the Computational Biology Center of IBM’s Thomas J. Watson Research Center, Isidore Rigoutsos, Ph.D., manager of the bioinformatics and pattern-discovery group, and colleagues are studying miRNA and its effects on biologically significant transcripts. The work involves applying a pattern-based method for identifying miRNAs as well as miRNA-binding sites and their corresponding heteroduplexes. It is demonstrating that—in addition to the proposed mechanistic model for animal miRNA function, by which miRNAs act through the 3´ untranslated regions (UTRs) of targeted transcripts—animal miRNAs may also extensively target sites in amino acid-coding regions and 5´ UTRs.
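
The common ingredient of such target searches can be shown in a few lines: scan a transcript for reverse-complement matches to the miRNA "seed" (nucleotides 2-8), wherever they fall - 3´ UTR, coding region or 5´ UTR. This is a generic seed scan for illustration, not IBM's actual pattern-discovery algorithm, and the sequences are invented:

    def revcomp(rna):
        """Reverse complement of an RNA string."""
        return rna.translate(str.maketrans("ACGU", "UGCA"))[::-1]

    def seed_sites(mirna, transcript):
        """Positions in the transcript that pair with the miRNA seed
        (nucleotides 2-8 of the mature miRNA, 1-based)."""
        site = revcomp(mirna[1:8])        # what a perfect seed match looks like
        return [i for i in range(len(transcript) - 6)
                if transcript[i:i + 7] == site]

    mir = "UAAGGCACGCGGUGAAUGCC"           # invented mature miRNA
    utr = "AGC" + "GUGCCUU" + "AAUCG"      # transcript with one planted site
    print(seed_sites(mir, utr))            # -> [3]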

Dr. Rigoutsos’ group has argued that based on computational analysis there may be as many as 50,000 miRNAs in the human genome and each may have as many as a few thousand potential targets. To support the contention that miRNAs such as miR-134, which has been implicated in embryonic stem-cell differentiation in mice, may have numerous targets, Dr. Rigoutsos and colleagues randomly selected 158 of the roughly 2,300 predicted 3´ UTR targets of miR-134. Using a luciferase assay, they demonstrated greater than 30% suppression of luciferase activity for 129 of them (roughly 82 percent).

The conclusion that one can draw from these findings is that the extent of miRNA activity and the mechanisms by which they regulate gene expression “are likely more complicated than previously acknowledged,” says Dr. Rigoutsos. Furthermore, “There is accumulating evidence that short RNAs can not only affect the levels of proteins, but that proteins may also affect the production of microRNAs.”

Asuragen is applying miRNA biomarkers in the development of molecular diagnostic assays with a primary focus on cancer. As evidence mounted that “the altered expression of a single microRNA profoundly affects critical regulatory pathways in humans,” Asuragen expanded its focus to explore their potential therapeutic applications, explains Dr. Brown. The company has been using cell and animal models to explore the concept of miRNA-replacement therapeutics, in which the reintroduction of an miRNA in a diseased cell would reinitiate pathways that have been turned off by the downregulation of the miRNA.

The literature includes examples of miRNAs that function as oncogenes, with their overexpression contributing to tumorigenesis, and of others that act as tumor suppressors, which when downregulated contribute to cancer. Asuragen is focusing on miRNAs that act as tumor suppressors and is using synthetic versions of a few key miRNAs to reduce or eliminate the expansion of cancer cells in vitro and in vivo.

Changes in expression of a single miRNA have been implicated in some cancers, and “downregulation of microRNAs is critical for the maintenance of cancer cells,” says Dr. Brown. Reintroduction of miRNAs in animals has been shown to impair the viability of cancer cells and result in cell death. Asuragen expects to advance one or more miRNA therapeutic compounds from feasibility testing to preclinical studies within the next year.

The company is currently launching a commercial venture called Mirna Therapeutics, which will enable an expanded effort to develop miRNA-based drugs.

If a single miRNA can have a role in regulating multiple pathways, would miRNA replacement therapy carry a risk for off-target effects? “We truly believe there are no such things as off-target effects for microRNAs,” says Dr. Brown. “We are not introducing anything that is not already present in the cell.” Throughout evolution, the biology of mammals has been geared to regulation by miRNAs, and miRNA-replacement therapy would simply reintroduce an miRNA that would normally be present, Dr. Brown contends.

Mike Wilson, Ph.D., array R&D manager at Asuragen, will describe the company’s molecular diagnostics development effort in a presentation entitled, “microRNA Expression Profiles Associated with Colorectal Cancer and Derived from FFPE Tissues.”


Therapeutic Targets

Eugenia Wang, Ph.D., professor at the University of Louisville, has proposed that miRNAs have a critical role in “a universal or system-specific programmatic shift of signaling control” that occurs at mid-life and brings about a decline in cellular health status associated with aging, which may precipitate increased risk of late-life diseases. In her presentation, she will review the hypothesis that the changes in expression of most if not all aging-related genes are controlled by underlying hubs and the belief that miRNAs, acting as molecular master switches, are candidate hubs.

Dr. Wang’s research has focused on two distinct systems, the aging mouse liver and peripheral lymphocytes from individuals with sporadic Alzheimer’s disease. She is using microarray-based screening to profile changes in age- and disease-related miRNA expression.

Dr. Wang has developed miRNA microarrays called MMChips that contain all known miRNAs for a particular species. To date, these include seven different MMChips—human, mouse, rat, dog, C. elegans, Drosophila, and a microarray targeting viral-specific miRNAs. Using these in parallel with tandem mass spectrometry-based proteomic-profiling techniques, Dr. Wang performs comparative mapping of up- and downregulated miRNA expression and associated down- and upregulation of specific proteins.

By isolating miRNAs from the livers of young, middle-aged, and old mice, Dr. Wang can look for changes in miRNA levels as the animals age. She has shown that an organism’s maintenance microRNAs—those responsible for fine-tuning its cell state and behavior, including regulators of the cell cycle, DNA repair, oxidative stress responses, and apoptosis—start to become abnormally expressed in midlife, which causes deterioration in six or seven general pathways.
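
At its simplest, such a comparison reduces to fold changes between age groups. A toy sketch (invented miRNA names and normalized intensities, nothing from Dr. Wang's actual data) that flags miRNAs whose mid-life level departs from the young baseline by two-fold or more:

    import math

    # Invented normalized array intensities per age group.
    levels = {
        "miR-X1": {"young": 100, "middle": 210, "old": 400},
        "miR-X2": {"young": 80,  "middle": 75,  "old": 82},
    }

    for mirna, v in levels.items():
        log2fc = math.log2(v["middle"] / v["young"])
        status = "dysregulated by midlife" if abs(log2fc) >= 1 else "stable"
        print(f"{mirna}: log2 fold change {log2fc:+.2f} -> {status}")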

Dr. Wang’s work in sporadic Alzheimer’s disease focuses on the role miRNAs might have in the systematic deterioration that can lead to age-dependent diseases such as cardiac disease, osteoporosis, and Alzheimer’s. The studies have identified three miRNAs that are upregulated in patients with sporadic Alzheimer’s disease compared to age-matched controls. These miRNAs are involved in cell-cycle regulation.

Eva van Rooij, Ph.D., a postdoctoral researcher in the department of molecular biology at the UT Southwestern Medical Center, notes that “microRNAs act upstream of disease processes and so are powerful regulators of downstream cascades. They tend to regulate groups of genes with common functions such as cell growth or survival. microRNAs are also adept at regulating important signaling pathways.” In a talk entitled “The Myriad Roles of microRNAs in Heart Disease,” Dr. van Rooij describes her ongoing work to characterize the effects of manipulating stress-responsive miRNAs on cardiac muscle disease.

miRNAs have been implicated as positive and negative regulators of growth, development, function, and stress responsiveness of the heart. A group of stress-responsive miRNAs including miR-195, miR-208, and miR-21 are up- or downregulated during pathological cardiac remodeling and may play a role in the development of cardiac hypertrophy and heart failure.

Dr. van Rooij’s work focuses on miR-208, which is only expressed in the heart. She is confident that efforts to develop an inhibitor of miR-208 will allow for tissue-specific downregulation.

To understand the biology of miR-208, Dr. van Rooij and colleagues knocked down its expression in mice, stressed the animals’ hearts, and found that the typical response to stress on the heart, characterized by hypertrophy and diminished pumping capacity, was not evident.

In the absence of miR-208, the heart was better able to handle the stress. The combination of its cardiac-specific expression and role in the heart’s response to stress led Dr. van Rooij and colleagues to pursue miR-208 as a therapeutic target for heart disease.


Enabling miRNA Research

Exiqon recently announced that it plans to merge with Oncotech, a developer of cancer diagnostics, to leverage its miRNA analysis technology. Through the company’s cancer testing services, which use its Extreme Drug Resistance (EDR®) assays, Oncotech has “collected a biobank of 150,000 biopsy samples with associated clinical data,” says Søren Møller, Ph.D., Exiqon’s VP of R&D.

Exiqon intends to develop a series of miRNA-based drug-resistance and treatment-selection tests that will be marketed through Oncotech’s CLIA laboratory.

Dr. Møller describes miRNAs as a sweet spot for the company’s core locked nucleic acid (LNA) technology due to the challenges inherent in working with these small RNA molecules. LNAs are nucleic acid analogues that provide enhanced thermal stability and target discrimination. One of the challenges in microarray-based miRNA screening is the range of GC content and associated melting temperatures (Tm) of the hybridization probes.

“LNA technology allows us to Tm balance all the probes on an array,” says Dr. Møller. “And the high specificity of LNA capture probes allows us to differentiate between closely related microRNAs based on a single base mismatch,” he adds.
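
The Tm-balancing principle can be sketched numerically. Using the crude Wallace rule (about 2 °C per A/T pair and 4 °C per G/C pair) and assuming, as a rough published range, that each LNA substitution adds on the order of 2-8 °C of duplex stability, one can estimate how many LNA bases would bring a short, AT-rich probe up to an array-wide hybridization temperature. This illustrates the principle only, not Exiqon's actual design method:

    def wallace_tm(probe):
        """Very rough DNA duplex Tm (Wallace rule), in degrees C."""
        at = sum(probe.count(b) for b in "AT")
        gc = sum(probe.count(b) for b in "GC")
        return 2 * at + 4 * gc

    LNA_BOOST = 4.0  # assumed average Tm gain per LNA substitution (2-8 C in practice)

    def lna_needed(probe, target_tm):
        """LNA substitutions needed (at the assumed boost) to reach the target Tm.
        Real designs also tune probe length; this toy varies LNA content only."""
        return max(0, round((target_tm - wallace_tm(probe)) / LNA_BOOST))

    for probe in ["ATTTACAATGCA", "GCCGTGACTGGC"]:  # invented capture probes
        print(probe, wallace_tm(probe), "C ->", lna_needed(probe, 60), "LNA bases")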

Exiqon has developed detection probes designed for in situ hybridization using tissue or cell specimens to detect the spatial distribution of miRNAs in histology samples. The company recently introduced the miRCURY™ LNA microRNA PCR system for miRNA detection and quantification.

Dr. Møller’s presentation at the conference, entitled “microRNA Profiling of Cancer Using a Novel LNA-Based Microarray,” describes Exiqon’s work on developing global expression profiles of miRNAs in breast cancer and normal adjacent tissue with the goal of identifying diagnostic biomarkers and prognostic signatures. The company has identified numerous, differentially expressed miRNAs including let-7a/d/f, miR-125a/b, miR-21, miR-32, and miR-136 and has confirmed the microarray results using quantitative PCR. It has also identified several miRNA candidates previously not linked to breast cancer using next-generation DNA sequencing technology.

Chris Hebel, director of business development at LC Sciences, attributes the company’s ability to custom design microfluidic microarrays for miRNA profiling to the flexibility inherent in the technologies it employs to create the arrays. The µParaflo™ Biochip platform relies on three main technologies: microfluidics; a synthesis chemistry that can use standard building blocks to make DNA, RNA, or peptides and can accommodate modified nucleic acids or amino acid analogues; and light deprotection/photolithography to drive the synthesis of the probes directly on the chip.

LC Sciences’ synthetic strategy and its ability to produce custom-designed chips offer two main advantages, according to Hebel. It can normalize the Tm of the probes and achieve uniform binding across an array. It can also modify and add new probes as the database of known miRNAs expands.

The company adopted a service-based business model rather than opting to sell its chips. Hebel has seen the miRNA research market broaden over the past year. Whereas the company’s customers initially were primarily focused on cancer and neuroscience research, the customer base has grown to include larger numbers of cardiac researchers, virologists, plant scientists, and cell biologists.

The company has also added related miRNA services such as qPCR and gene expression array-based profiling and continues to maximize the probe density on its chips.

[microRNA mining is done mostly by proprietary algorithms, with the whole genome requiring massively parallel computing. The results are of the highest importance to Big Pharma - such that once an algorithm is proven successful, computation is scaled to industrial dimensions .... - pellionisz_at_junkdna.com March, 17, 2008]


^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

All Connected

Forbes

Matthew Herper, 03.16.08, 2:00 PM ET

Merck is using a new technique that it believes could turn genetic information into new drugs, a key bottleneck for the pharmaceutical industry as it struggles to invent new medicines.

The work, described in the current issue of the journal Nature, is an example of an approach biologists have been touting for years: treating all 25,000 genes as a complicated network. Each gene is seen not as a single switch, but as part of a vast and complex circuit board. Scientists call this "systems biology."

"There's a heck of a lot that's going on between the change in DNA and the onset of the disease," says Eric Schadt, the Merck (nyse: MRK - news - people ) researcher who masterminded the work. "This is telling us the disease is a lot more complex than we imagined." He says his new technique, which could identify changes in this genetic circuit board that would stop obesity without causing harm, is "a path forward for how to leverage the amazing rate of discovery in genomics."

Schadt's technique combines two research approaches. The first, known as a genome-wide association study, samples DNA at thousands of places to find "spelling" differences that are more common among those who have a particular disease than among those who don't. Genome-wide association studies have linked some two-dozen spelling mistakes to diseases including heart disease, diabetes and macular degeneration.

The second approach, called gene expression, looks at which genes are being used in a particular cell. This is akin to being able to see which bits of software a computer is using at a particular time. Schadt has developed computational methods to examine how genetic spelling mistakes are linked to changes in gene expression, and then at how both are connected to obesity.
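
In spirit, this integration step is an expression QTL (eQTL) analysis: for each SNP-transcript pair, test whether genotype dosage (0, 1 or 2 copies of the variant allele) predicts expression level, and then whether that expression tracks the trait. A minimal sketch with fabricated numbers, bearing no relation to Schadt's actual data:

    import numpy as np

    # Fabricated toy data: genotype dosage (0/1/2) and one transcript's
    # expression level measured in the same eight individuals.
    dosage     = np.array([0, 0, 1, 1, 1, 2, 2, 2], dtype=float)
    expression = np.array([1.0, 1.2, 1.9, 2.1, 2.0, 3.1, 2.8, 3.0])

    # Least-squares fit: expression ~ intercept + beta * dosage.
    X = np.column_stack([np.ones_like(dosage), dosage])
    (intercept, beta), *_ = np.linalg.lstsq(X, expression, rcond=None)

    r = np.corrcoef(dosage, expression)[0, 1]
    print(f"beta = {beta:.2f}, r^2 = {r**2:.2f}")  # strong dosage effect -> candidate eQTL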

He tried the technique out first in mice, then, with the help of Iceland's DeCode Genetics, in people. The result is a circuit diagram of more than 1,000 genes that interact to determine whether or not a mouse or person becomes obese.

Francis Collins, director of the National Human Genome Research Institute, said the paper combined two different ways of looking at genes in "a marvelously integrated way," leading to "stunning and unexpected" observations about the biology of obesity.

"We've heard a lot of talk for several years about the promise of systems biology--but now that promise is really coming through," says Collins.

The Merck work could identify "pressure points" in the gene network that could be tuned using drugs, says Dietrich Stephan, a researcher at the Translational Genomics Research Institute (TGen) in Phoenix. "These are really very important papers," says Stephan. "What he's done is really flesh out the biology of that DNA variation."

The new technique isn't a final solution to the problem of how to figure out which genetic defects actually cause disease, says Leroy Hood, head of the Institute for Systems Biology. Leonid Kruglyak of Princeton University led a 2002 team that used the same basic ideas to understand genes in yeast. "It's nice to see this beginning to bear fruit in human studies," he says.

George Church of Harvard University says his project to sequence the DNA of thousands of people--called the Personal Genome Project--is collecting data from tissue samples to do similar kinds of work. Researchers at the Institute for Systems Biology have been taking a similar approach to understanding ailments like mad cow disease that are caused by malformed proteins called prions.

If Schadt's approach becomes popular among drug firms, it could be a boon to Illumina (Nasdaq: ILMN) and Affymetrix (Nasdaq: AFFX), the biotechnology companies which make the DNA chips used both for finding genetic misspellings and for examining which genes are accessed in different tissues.

The research comes out of Merck's May 2001 purchase of Rosetta InPharmatics, a tiny Seattle biotech. While Rosetta Chief Executive Stephen Friend became the boss of Merck's anti-cancer effort, researchers like Schadt continued to push forward with Rosetta's original mission: figuring out how to use genetic data to invent drugs.

Schadt, who switched from pure math to genetics during the genomics boom of the 1990s, and who previously worked at Roche (OTC: RHHBY.PK), published a paper using similar techniques to understand the genes of corn plants in 2003. Another paper, published in 2005, was hailed by Science as one of several examples indicating the systems biology approach was starting to yield results.

Schadt says that he believes the techniques he applied against obesity could also be used to understand other diseases, but cautions that it could be years before they lead to the invention of new drugs.

But the research could help drug giants like Merck or Pfizer (NYSE: PFE) - which are facing an industry-wide dry spell when it comes to inventing new medicines - to test medicines faster by identifying chemicals whose blood levels should change if a drug for a disease like obesity is actually going to work. That could make clinical trials go more quickly.

"This is going to take time," Schadt cautions. "Drug discovery is not a one-year process."

[Those 25,000 genes (in a "network") are only 1.3% of the Genome .... - pellionisz_at_junkdna.com March, 17, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Applied Biosystems Surpasses Industry Milestone in Lowering the Cost of Sequencing Human Genome

FOSTER CITY, Calif.--(BUSINESS WIRE)--Mar 12, 2008 - Applied Biosystems (NYSE:ABI), an Applera Corporation business, today announced a significant development in the quest to lower the cost of DNA sequencing. Scientists from the company have sequenced a human genome using its next-generation genetic analysis platform. The sequence data generated by this project reveal numerous previously unknown and potentially medically significant genetic variations. It also provides a high-resolution, whole-genome view of the structural variants in a human genome, making it one of the most in-depth analyses of any human genome sequence. Applied Biosystems is making this information available to the worldwide scientific community through a public database hosted by the National Center for Biotechnology Information (NCBI).

Applied Biosystems was able to analyze the human genome sequence for a cost of less than $60,000, which is the commercial price for all required reagents needed to complete the project. This is a fraction of the cost of any previously released human genome data, including the approximately $300 million spent on the Human Genome Project. The cost of the Applied Biosystems sequencing project is less than the $100,000 milestone set forth by the industry for the new generation of DNA sequencing technologies, which are beginning to gain wider adoption by the scientific community.

The availability of this sequence data in the public domain is expected to help scientists gain a greater understanding of human genetic variation and potentially help them to explain differences in individual susceptibility and response to treatment for disease, which is the goal of personalized medicine. Although most human genetic information is the same in all people, researchers are generally more interested in studying the small percentage of genetic material that varies among individuals. They seek to characterize that variation either as single-base changes or as larger stretches of sequence variation known as structural variants. Structural variants are segments of DNA - insertions, deletions, inversions, and translocations ranging from a few to millions of base pairs - that have a higher potential of impacting genes and thus contributing to human disease.

Under the direction of Kevin McKernan, Applied Biosystems' senior director of scientific operations, the scientists resequenced a human DNA sample that was included in the International HapMap Project. The team used the company's SOLiD(TM) System to generate 36 gigabases of sequence data in 7 runs of the system, achieving throughput up to 9 gigabases per run, which is the highest throughput reported by any of the providers of DNA sequencing technology.

The 36 gigabases include DNA sequence data covering the human genome more than 12 times over, which helped the scientists to determine the precise order of DNA bases and to confidently identify the millions of single-base variations (SNPs) present in a human genome. The team also analyzed the areas of the human genome that contain the structural variation between individuals. These regions of structural variation were revealed by greater than 100-fold physical coverage, which shows positions of larger segments of the genome that may vary relative to the human reference genome.
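
Those coverage figures follow from simple arithmetic, and the Poisson model underlying Lander-Waterman theory says how little of the genome 12-fold coverage should leave unsampled. A back-of-the-envelope check (haploid genome rounded to 3 Gb):

    import math

    total_bases = 36e9   # sequence generated in the project
    genome_size = 3e9    # approximate haploid human genome

    coverage = total_bases / genome_size          # ~12-fold
    p_missed = math.exp(-coverage)                # Poisson P(a base gets zero reads)

    print(f"{coverage:.0f}x coverage")                          # 12x
    print(f"fraction of bases never sampled: {p_missed:.1e}")   # ~6.1e-06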

"We believe this project validates the promise of next-generation sequencing technologies, which is to lower the cost and increase the speed and accuracy of analyzing human genomic information," said McKernan. "With each technological milestone, we are moving closer to realizing the promise of personalized medicine."

McKernan's team used the SOLiD System's ultra-high-throughput capabilities to obtain deep sequence coverage of the genome of an anonymous African male of the Yoruba people of Ibadan, Nigeria, who participated in the International HapMap Project. The scientists were able to perform an in-depth analysis of structural variants by creating multiple paired-end libraries of genomic sequence that included a wide range of insert sizes. Most inserts exceeded 1,000 bases. The SOLiD System has the ability to analyze paired-end libraries with large insert sizes. For the millions of SNPs identified in the project, the SOLiD System's 2-base encoding chemistry discriminated random or systematic errors from true SNPs to reveal these SNPs with greater than 99.94 percent sequencing accuracy.
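
The error-versus-SNP discrimination falls out of the 2-base ("color space") encoding itself: each color reports the relationship between two adjacent bases (formally, the XOR of their 2-bit codes), so an isolated true SNP changes exactly two adjacent colors, whereas a single miscalled color would corrupt every base decoded after it. A sketch assuming only that published XOR rule:

    BITS, BASES = {"A": 0, "C": 1, "G": 2, "T": 3}, "ACGT"

    def encode(seq):
        """DNA -> color space: each color is the XOR of adjacent bases' 2-bit codes."""
        return [BITS[a] ^ BITS[b] for a, b in zip(seq, seq[1:])]

    def decode(first_base, colors):
        """Color space -> DNA, given the known primer (first) base."""
        seq = [first_base]
        for c in colors:
            seq.append(BASES[BITS[seq[-1]] ^ c])
        return "".join(seq)

    ref = "ATGGCA"
    print(encode(ref))                # [3, 1, 0, 3, 1]
    print(decode("A", encode(ref)))   # round-trips to ATGGCA

    # A true SNP flips TWO adjacent colors; a random read error flips one.
    snp = "ATGCCA"                    # G->C substitution at position 4
    print(encode(snp))                # [3, 1, 3, 0, 1]: differs in exactly two colors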

Another important attribute of the SOLiD System is that, unlike other available DNA sequencing platforms, the system is inherently scalable to support higher levels of throughput without requiring changes to the system's hardware. The high throughput, accuracy, and paired-end analysis capability of the SOLiD System are expected to continue to reduce the cost of conducting studies of complex genomes and of how variation in these genomes contributes to conditions such as cancer, diabetes and heart disease, among others.

Associating Genetic Variation with Cancer and Other Diseases

As in-depth resequencing efforts continue to reveal previously uncharacterized genetic variation in human genomes, researchers such as John McPherson, Ph.D., at the Ontario Institute for Cancer Research expect to be able to associate these genetic variants with diseases such as cancer. McPherson is cataloging genetic alterations that occur in different types of cancers to better classify tumors and identify the important early events driving the disease. These provide critical targets for refining and developing new targeted treatments and diagnostic tools.

"Paired-end sequencing is an essential component of whole genome analysis," said Dr. McPherson. "The tight fragment size range provided by the SOLiD protocols allows the identification of a wide range of insertion and deletion sizes. Structural rearrangements are readily identified and deep genome coverage easily attained due to the high throughput of this platform."

Evan Eichler, Ph.D., an associate professor of genome sciences at the University of Washington's School of Medicine and a Howard Hughes Medical Institute Investigator, focuses his research on the role of duplicate regions and structural variation in the human genome. Using computational and experimental approaches, he investigates the architecture of these regions and their role in evolution and disease.

"To understand the extent and prevalence of structural variation in the human genome, which is still largely unknown, my lab has been applying traditional sequencing methods with good results, but much more needs to be discovered at a faster pace," said Dr. Eichler. "The human paired-end data being released is of such depth that discovering smaller structural events at higher resolution becomes possible. The availability of this dataset in the public domain will accelerate our understanding of structural variation in normal and disease states, and open the door to a faster exploration of this type of genetic diversity across human populations."

Developing Software Analysis Tools for Next-Generation Sequencing

Next-generation sequencing platforms have enabled researchers to generate more genetic data than ever before. Applied Biosystems' human resequencing effort represents one of the most comprehensive genomic datasets produced to date, and is expected to provide researchers with libraries of sequence data that will serve as a model for how to prepare and analyze samples of other complex genomes in future genome analysis projects.

Applied Biosystems expects that the public availability of the human sequence data will help drive innovation and speed the development of new bioinformatics tools. These new tools are expected to enable researchers to interpret the meaning of the data that provide clues to better understand various aspects of health and disease. In addition to the full human dataset, subsets of sequence data are available at NCBI. These datasets can be accessed by independent academic and commercial software developers to further enable the development of analytical tools. Applied Biosystems is making an analysis tool available through the SOLiD System Software Development Community, which is expected to help independent software providers to interpret the subsets of data.

Through its Software Development Community, Applied Biosystems has established relationships with scientists and bioinformatics companies to help scientists address next-generation sequencing bioinformatics challenges and develop tools that are expected to advance data analysis and management. To access the human sequence data released by Applied Biosystems, please visit the SOLiD Software Development Community at: http://info.appliedbiosystems.com/solidsoftwarecommunity. The data have also been deposited at the National Center for Biotechnology Information (NCBI, http://www.ncbi.nlm.nih.gov), which is part of the National Library of Medicine, National Institutes of Health (Bethesda MD USA). At NCBI, the human sequence data can be located at ftp://ftp.ncbi.nih.gov/pub/TraceDB/ShortRead/SRA000272 or by the project name, SOLiD Human HapMap Sample NA18507 Whole Genome Sequence under accession number SRA000272.

Applied Biosystems is a global leader in the development and commercialization of instrument-based systems, consumables, software, and services for the life-science market and is the recognized market leader in the commercialization of DNA sequencing platforms. Perhaps best known for its role in developing the technology that enabled the historic sequencing of the human genome, Applied Biosystems continues its leadership in DNA sequencing by commercializing technology that helps scientists to better understand and treat disease based on genomic information. The company's latest platform for genetic analysis, the SOLiD System, is the life-science industry's highest throughput system for DNA sequencing.

About the SOLiD System

The SOLiD System is an end-to-end next-generation genetic analysis solution comprised of the sequencing unit, chemistry, a computing cluster and data storage. The platform is based on sequencing by oligonucleotide ligation and detection. Unlike polymerase sequencing approaches, the SOLiD System utilizes a proprietary technology called stepwise ligation, which generates high-quality data for applications including: whole genome sequencing, chromatin immunoprecipitation (ChIP), microbial sequencing, digital karyotyping, medical sequencing, genotyping, gene expression, and small RNA discovery, among others.

Unparalleled throughput and scalability distinguish the SOLiD System from other next-generation sequencing platforms. The system can be scaled to support a higher density of sequence per slide through bead enrichment. Beads are an integral part of the SOLiD System's open-slide format architecture, enabling the system to generate up to 9 gigabases of sequence data per run. The combination of the open-slide format, bead enrichment, and software algorithms provide the infrastructure for allowing it to scale to even higher throughput, without significant changes to the platform's current hardware or software.

About the Applied Biosystems Human Genome Dataset

These facts were developed based on 1 gigabase (Gb) of data equaling 1 billion (1,000,000,000) bases of DNA sequence.

-- If all 36 billion bases were spread out at 1 millimeter apart, they would extend 36,000 kilometers, or more than 4,000 times the height of Mt. Everest, which at 8,848 meters above sea level, is the highest mountain on Earth.

-- If all 36 billion bases were spread along the Great Wall of China at 1 millimeter apart, this would equate to spanning the 5,000 kilometer wall more than 7 times.

-- If a person were to proofread the 36 billion bases in this dataset at one letter per second for 24 hours-per-day, it would take 1,200 years to read the entire data set.

-- If each base represented one individual in the world population, the dataset would account for more than 5 times the entire world population of 6.8 billion people.

-- This dataset, at 36 billion bases of DNA sequence, is equivalent to 360 times all of the 100 million visible stars in the Earth's galaxy.

About Applera Corporation and Applied Biosystems

Applera Corporation consists of two operating groups. Applied Biosystems serves the life science industry and research community by developing and marketing instrument-based systems, consumables, software, and services. Customers use these tools to analyze nucleic acids (DNA and RNA), small molecules, and proteins to make scientific discoveries and develop new pharmaceuticals. Applied Biosystems' products also serve the needs of some markets outside of life science research, which we refer to as "applied markets," such as the fields of: human identity testing (forensic and paternity testing); biosecurity, which refers to products needed in response to the threat of biological terrorism and other malicious, accidental, and natural biological dangers; and quality and safety testing, such as testing required for food and pharmaceutical manufacturing. Applied Biosystems is headquartered in Foster City, CA, and reported sales of approximately $2.1 billion during fiscal 2007. The Celera Group is a diagnostics business delivering personalized disease management through a combination of products and services incorporating proprietary discoveries. Berkeley HeartLab, a subsidiary of Celera, offers services to predict cardiovascular disease risk and optimize patient management. Celera also commercializes a wide range of molecular diagnostic products through its strategic alliance with Abbott and has licensed other relevant diagnostic technologies developed to provide personalized disease management in cancer and liver diseases. Information about Applera Corporation, including reports and other information filed by the company with the Securities and Exchange Commission, is available at http://www.applera.com, or by telephoning 800.762.6923. Information about Applied Biosystems is available at http://www.appliedbiosystems.com. All information in this press release is as of the date of the release, and Applera does not undertake any duty to update this information unless required by law.

About NCBI

As a national resource for molecular biology information, NCBI's mission is to develop new information technologies to aid in the understanding of fundamental molecular and genetic processes that control health and disease. More specifically, the NCBI has been charged with creating automated systems for storing and analyzing knowledge about molecular biology, biochemistry, and genetics; facilitating the use of such databases and software by the research and medical community; coordinating efforts to gather biotechnology information both nationally and internationally; and performing research into advanced methods of computer-based information processing for analyzing the structure and function of biologically important molecules.

[The price of sequencing is no longer the bottleneck. The looming "Data Deluge" is .... - pellionisz_at_junkdna.com March, 11, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

GATC opens up shop in Sweden

By Kirsty Barnes

11-Mar-2008 - DNA sequencing services provider GATC Biotech has announced the opening of a new subsidiary in Stockholm, Sweden, as it widens the net in Europe.

The firm said the new operations will provide expanded sales and support services in Scandinavia in order to meet growing demand for the company's services in the region, where its customer base is "rapidly expanding".

The company already has an existing European presence in Germany, France and England and is able to boast that it is the only sequencing service provider in the world to have all the leading sequencing technologies (SOLiD, GS FLX and Genome Analyzer) available in-house.

GATC has been pushing for expansion over the past year, increasing its production capacity from 15.6 to 250 gigabases per year. It also expanded its staff from 45 to 62 employees and said it is continuing its recruitment drive during the first quarter of this year.

In November last year, GATC announced the availability of a new human genome sequencing service in a bid to "boost the move towards personalised medicine by sequencing up to 100 genomes by the end of 2010".

Peter Pohl, CEO of GATC, told Outsourcing-Pharma.com that the company is urging pharma firms to take a closer look at pharmacogenomics in order to move away from the current 'one-size-fits-all' approach to drug development and move closer towards "the long-expected promise of personalised medicine."

As part of this, the firm aims to further reduce the cost of human genome sequencing from the current level of around $5m (€3.4m) to "a quality dataset for €500" within ten years, "making it a realistic option for pharmaceutical research" - a trajectory that implies an average cost reduction of roughly 2.4-fold per year.

GATC said it is the first sequencing company worldwide to offer whole human genome sequencing to industry and academia.

"Improved access to genomic data could transform the diagnosis and treatment of cancer," Professor Dr Christof von Kalle from the German National Center for Tumor Diseases, said at the time.

"By sequencing and comparing genomes obtained before and after the diagnosis of cancer, researchers can gain a better understanding of the genetic basis of cancer, particularly the role of the previously understudied non-coding regions. There is growing evidence implicating these areas, which comprise 98.5 per cent of the human genome, in the onset and control of cancer."

Meanwhile, in January the firm founded another subsidiary, LifeCode, the first web-based information service in the EU for the storage and utilisation of biological and medical data in the analysis of genetic and protein codes.

[Sweden grabs the high ground of both "Next Generation Sequencing" and "Next Generation Genome Computing" .... - pellionisz_at_junkdna.com March, 11, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

New Partnership with Helicos Puts Expression Analysis at the Forefront of Genomic Research

Wed, 05 Mar 2008 21:11:14 GMT

DURHAM, N.C. - (Business Wire) Expression Analysis, a leading provider of genomic services for clinical trials and research, has confirmed the purchase of the first Helicos™ Genetic Analysis System from Helicos BioSciences Corporation, and will shortly accept delivery of this state-of-the-art instrumentation. The system consists of the HeliScope™ Single Molecule Sequencer, the HeliScope™ Analysis Engine and the HeliScope™ Sample Loader, all designed for production-level genetic analysis. Central to the platform is the proprietary sequencing-by-synthesis approach called True Single Molecule Sequencing (tSMS)™.

“We are excited to be incorporating this universal genomic analysis tool into our laboratory and to begin offering Next Generation sequencing services,” said Steve McPhail, President and CEO of Expression Analysis. “The HeliScope Sequencer will provide the sensitivity, specificity and throughput necessary to open new research avenues and answer biological questions never before possible. Key applications including de novo sequencing, candidate gene sequencing, and digital gene expression offer the company opportunities to provide significant value-added services to our clients. For example, we will now be able to conduct cost-effective follow-up to genome-wide association studies through targeted resequencing,” continued McPhail.

“The effective translation of genomic discoveries into translational research requires increasingly large studies to investigate the normal underlying heterogeneity of human disease. The single molecule sequencing technology enables the scale of studies required throughout the drug discovery and development process, providing a unique opportunity for Expression Analysis clients,” stated Patrice Milos, Ph.D., Vice President and Chief Scientific Officer of Helicos.

Steve Lombardi, President and COO of Helicos, added, “Helicos is pleased that Expression Analysis has chosen our platform for their next stage of growth. The Helicos™ Genetic Analysis System, driven by the world’s first single molecule DNA sequencing technology, was designed from day one to enable a whole new generation of scientists with powerful tools for interrogating the genome. As more knowledge of the whole genome is elucidated, customers like Expression Analysis deploying that knowledge into useful applications will dramatically grow the market.”

With the addition of this technology, clients of Expression Analysis will gain new insight into underlying biology, providing a more focused approach to the diagnosis, treatment and management of complex diseases. The HeliScope represents a more comprehensive approach than current genomic technologies, enabling a deeper understanding of the human genome and human biology, and a significant step toward the $1,000 genome.

About Expression Analysis, Inc.

Expertise Beyond Expression -- Providing whole genome to focused set gene expression and genotyping assays, along with sequencing services using Illumina BeadChip®, Affymetrix GeneChip®, Helicos True Single Molecule Sequencing (tSMS)™ and Applied Biosystems’ TaqMan technologies. As the leading provider of genomic services in clinical trials and research, Expression Analysis offers solutions for challenging specimens such as whole blood and FFPE tissues, as well as nucleic acid isolation and data analysis services. The company’s quality system follows CLSI guidelines and their CLIA-registered lab supports GLP compliance.

About Helicos BioSciences

Helicos BioSciences is a life science company focused on innovative genetic analysis technologies for the research, drug discovery, and diagnostic markets. Helicos' proprietary True Single Molecule Sequencing (tSMS™) technology allows direct measurement of billions of strands of DNA, enabling scientists to perform experiments and ask questions never before possible.

[With the abundance of sequences coming, the "Analysis engine" will become the bottleneck .... - pellionisz_at_junkdna.com March, 7 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Helicos BioSciences declares first shipment of single molecule DNA sequencer

Thursday, March 06, 2008; Posted: 04:27 AM

Mar 06, 2008 (M2 EQUITYBITES via COMTEX) -- Helicos BioSciences (NASDAQ: HLCS), a life science company focused on genetic analysis technologies, reported on 5 March that the company's Helicos Genetic Analysis System has been shipped to its first customer, Expression Analysis of Durham, North Carolina, a provider of genomic analysis services.

The company said the system is the first sequencing platform on the market designed to deliver USD 1,000 genome performance, and the first-ever single molecule DNA sequencer.

According to Helicos, single molecule DNA sequencing offers direct measurement capability, simplicity, accuracy and throughput, and enables scientists to pose new, direct questions as they deal with problems in biology and medicine.

True Single Molecule Sequencing (tSMS) technology from Helicos reportedly allows the scientific community to make direct DNA measurements of billions of individual strands of DNA, for the first time.

[Single Molecule Sequencing eliminates time-consuming amplification. Thus, there is a single bottleneck remaining - the speed of the data processor. Heliscope analysis engine presently features 2 CPU serial processors .... - pellionisz_at_junkdna.com March, 6 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The DNA Age - Gene Map Becomes a Luxury Item [an American got it on taxpayers' money, another by non-profit funds, a Chinese (don't ask the source) - and now a Romanian entrepreneur -AJP]

By AMY HARMON

Published: March 4, 2008

On a cold day in January, Dan Stoicescu, a millionaire living in Switzerland, became the second person in the world to buy the full sequence of his own genetic code. He is also among a relatively small group of individuals who could afford the $350,000 price tag.

Mr. Stoicescu is the first customer of Knome, a Cambridge-based company that has promised to parse his genetic blueprint by spring. A Chinese executive has signed on for the same service with Knome’s partner, the Beijing Genomics Institute, the company said.

Scientists have so far unraveled only a handful of complete human genomes, all financed by governments, foundations and corporations in the name of medical research. But as the cost of genome sequencing goes from stratospheric to merely very expensive, it is piquing the interest of a new clientele.

“I’d rather spend my money on my genome than a Bentley or an airplane,” said Mr. Stoicescu, 56, a biotechnology entrepreneur who retired two years ago after selling his company. He says he will check discoveries about genetic disease risk against his genome sequence daily, “like a stock portfolio.”

But while money may buy a full readout of the six billion chemical units in an individual’s genome, biologists say the superrich will have to wait like everyone else to learn how the small variations in their sequence influence appearance, behavior, abilities, disease susceptibility and other traits.

“I was in someone’s Bentley once — nice car,” said James D. Watson, the co-discoverer of the structure of DNA, whose genome was sequenced last year by a company that donated the $1.5 million in costs to demonstrate its technology. “Would I rather have my genome sequenced or have a Bentley? Uh, toss up.”

He would probably pick the genome, Dr. Watson said, because it could reveal a disease-risk gene that one had passed on to one’s children, though in his case, it did not. What is needed, he said, is a “Chevrolet genome” that is affordable for everyone.

Biologists have mixed feelings about the emergence of the genome as a luxury item. Some worry that what they have dubbed “genomic elitism” could sour the public on genetic research that has long promised better, individualized health care for all. But others see the boutique genome as something like a $20 million tourist voyage to space — a necessary rite of passage for technology that may soon be within the grasp of the rest of us.

“We certainly don’t want a world where there’s a great imbalance of access to comprehensive genetic tests,” said Richard A. Gibbs, director of the human genome sequencing center at Baylor College of Medicine. “But to the extent that this can be seen as an idiosyncratic exercise of curious individuals who can afford it, it could be quite a positive phenomenon.”

It was the stream of offers from wealthy individuals to pay the Harvard laboratory of George M. Church for their personal genome sequences that led Dr. Church to co-found Knome last year (most people pronounce it “nome,” though he prefers “know-me”).

“It was distracting for an academic lab,” Dr. Church said. “But it made me think it could be a business.”

Scientists say they need tens of thousands of genome sequences to be made publicly available to begin to make sense of human variation.

Knome, however, expects many of its customers to insist on keeping their dearly bought genomes private, and provides a decentralized data storage system for that purpose.

Mr. Stoicescu said he worried about being seen as self-indulgent (though he donates much more each year to philanthropic causes), egotistical (for obvious reasons) or stupid (the cost of the technology, he knows, is dropping so fast that he would have certainly paid much less by waiting a few months).

But he agreed to be identified to help persuade others to participate. With only four complete human genome sequences announced by scientists around the world — along with the Human Genome Project, which finished assembling a genome drawn from several individuals at a cost of about $300 million in 2003 — each new one stands to add considerably to the collective knowledge.

“I view it as a kind of sponsorship,” he said. “In a way you can also be part of this adventure, which I believe is going to change a lot of things.”

Mr. Stoicescu, who has a Ph.D. in medicinal chemistry, was born in Romania and lived in the United States in the early 1990s before founding Sindan, an oncology products company that he ran for 15 years. Now living with his wife and 12-year-old son in a village outside Geneva, he describes himself as a “transhumanist” who believes that life can be extended through nanotechnology and artificial intelligence, as well as diet and lifestyle adaptations. His genome sequence, he reasons, might give him a better indication of just what those should be. Last fall, Mr. Stoicescu paid $1,000 to get a glimpse of his genetic code from deCODE Genetics. That service, and a similar one offered by 23andMe, looks at close to a million nucleotides on the human genome where DNA is known to differ among people.

But Mr. Stoicescu was intrigued by the idea of a more complete picture. “It is only a part of the truth,” he said. “Having the full sequence decoded you can be closer to reality.”

How close is a matter of much debate. Knome is using a technology that reads the genome in short fragments that can be tricky to assemble. All of the existing sequencing methods have a margin of error, and the fledgling industry has no agreed-on quality standards.

Knome is not the only firm in the private genome business. Illumina, a sequencing firm in San Diego, plans to sell whole genome sequencing to the “rich and famous market” this year, said its chief executive, Jay Flatley. If competition drives prices down, the personal genome may quickly lose its exclusivity. The nonprofit X Prize Foundation is offering $10 million to the first group to sequence 100 human genomes in 10 days, for $10,000 or less per genome. The federal government is supporting technology development with an eye to a $1,000 genome in the next decade.

But for now, Knome’s prospective customers are decidedly high-end. The company has been approached by hedge fund managers, Hollywood executives and an individual from the Middle East who could be contacted only through a third party, said Jorge Conde, Knome’s chief executive.

“I feel like everyone’s going to have to get it done at some point, so why not be one of the first?” said Eugene Katchalov, 27, a money manager in Manhattan who has met with Mr. Conde twice.

Mr. Stoicescu, who wants to create an open database of genomic information seeded with his own sequence, hopes others will soon join him.

A few days after he wired his $175,000 deposit to the company, a Knome associate flew in from Cambridge to meet him at a local clinic.

“What the heck am I doing?” Mr. Stoicescu recalls wondering. “And how many children in Africa might have been fed?”

Then he offered up his arm and gave her three test tubes of his blood.

[Of course, Bill Gates is already in the pipeline (hopefully along with Melinda & kids, which brings in more money). Maybe one day Simonyi, who spent $20 M of his own money on his space travel, will afford a tenth or so of that to reveal if Hungarians have double (or triple) helices, or - as Ed Teller (E.T.) would have it - whether Hungarians are "extra-terrestrial". One nagging question looms on the horizon, though. With their diploid genome on a memory stick or the like, where is the "Personal Genome Computer" they can safely plug it into? Rest assured, it is being taken care of.... - pellionisz_at_junkdna.com March, 4, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

What are the genetic indicators for Alzheimer's or Parkinson's diseases? What genetic explanation is there for why some 80-year-olds with genetic links to diabetes, heart disease or cancer never get sick?

Those are some of the wide-ranging questions that scientists are now probing with the help of rapid-fire genetic analysis systems made by the San Diego-based biotechnology company Illumina.

Illumina's business has been booming for more than two years. In the last quarter of 2007, its equipment sales tripled. And for all of 2007, its revenue was $366.8 million, up from $184.6 million in 2006, beating the expectations of analysts who closely monitor the field.

Excluding acquisitions and other one-time items, the company has been profitable for the past two years – joining a relatively short list of moneymaking biotech companies.

Illumina's success, some of its customers say, can be attributed in part to a confluence of major scientific advancements outside the company, such as the mapping of the human genome and studies of which variations on the genome are medically significant.

But also key, they say, is Illumina's ability to predict the questions scientists will want to answer with the new genetic data, and give them reliable tools to do so.

“The more experience I've had with members of the company, the more I've come to realize their technology is exceptionally good, exceptionally robust and produces exceedingly high-quality data,” said Jerry Taylor, professor of animal sciences and genetics at the University of Missouri.

Illumina, founded a decade ago, is immersed in one of the hottest areas of scientific research right now: genomics. The field involves the study of all the hereditary information encoded in an organism's DNA, the genome that in humans includes 3.1 billion coded pieces of information.

Genomics was also hot eight years ago, when the Human Genome Project was under way and the first rough draft of the human genome was mapped. But investors who bid shares of genomics companies sky-high in 2000 lost interest when people realized it was going to take more time for researchers to drill deeper into the data before it could be of more use for drug discovery.

That time is now upon us, said Eric Topol, a geneticist who heads the Scripps Translational Science Institute in La Jolla. The study of the human genome and its variations has advanced to the point where there were more breakthroughs last year than in several of the past decades combined, he said.

The journal Science ranked these advances as the biggest scientific breakthrough of the year.

“It is allowing us for the first time to learn about the genetic pathways of disease like never before,” Topol said. “It is the most exciting time in the history of medicine.”

Topol said Illumina technology enables him to look further into the genome and is more reliable than competing technologies.

The company provides three major technologies for analyzing the genome: genotyping, gene sequencing and gene expression.

Genotyping is the process of looking at specific regions of the genome to determine what makes an organism, such as a human, unique: Where does the coded information of one person's chain of DNA differ from another person's?

Gene sequencing is the process of looking at the order in which the coded DNA is lined up on the genome, and where and how the sequence differs.

Studying gene expression means determining when a particular gene is turned on and producing a product, such as a protein that causes a biological process.
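
The three readouts differ in shape as much as in chemistry, which a toy sketch makes concrete (all values invented): genotyping yields discrete calls at predefined positions, sequencing yields the base string itself, and expression profiling yields continuous abundance values.

    # Toy illustration of the three data types (all identifiers and values invented).

    genotypes = {"rs0000001": "AG", "rs0000002": "CC"}  # genotyping: discrete calls
    read = "ACGTTAGGCTA"                                # sequencing: the bases in order
    expression = {"GENE1": 812.5, "GENE2": 3.1}         # expression: how strongly "on"

    # e.g., find sites where this individual carries two different alleles
    het = [rs for rs, call in genotypes.items() if call[0] != call[1]]
    print("heterozygous sites:", het)                   # ['rs0000001']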

After the Human Genome Project mapped out the genome of two men, an international scientific collaboration known as the SNP Consortium charted where there are areas of variation in the sequence of the genomes. Those areas are called single nucleotide polymorphisms, or SNPs (pronounced “snips”).

Another international collaboration, the HapMap Project, told scientists which of these SNPs were genetically significant. Illumina technology was used on that project.

When the first phase of the project was completed in October 2005, the field of genomics started to pick up speed, said Illumina Chief Executive Jay Flatley.

Then, when technology was developed to sequence the whole genome in pieces, “the market took off like a rocket ... going from zero to $100 million,” Flatley said. “A huge potential energy was built up for genetic-associated disease studies.”

Illumina was born as a result of an expedition by co-founder John Stuelpnagel, then a venture capital investor with the CW Group. A veterinarian by training, Stuelpnagel was sent to find promising technology.

Stuelpnagel said he knew he'd found it in David Walt's laboratory at Tufts University.

Walt had developed an array, or a small tray covered in microscopic wells in which many experiments could be run at once. What made it special was that Walt figured out how to make the chemistry used in each of these wells reproducible hundreds of thousands of times, so that many comparative tests could be run on multiple arrays.

It's complicated stuff, so Stuelpnagel likes to describe Illumina's technology as the pick and ax that other researchers can use in their hunt for gold, meaning new therapies and genetic links to disease.

At first, potential customers did not believe Illumina's claims about the accuracy, specificity and capacity of its systems. So they sent biological samples to Illumina for testing, Stuelpnagel said.

That is how Illumina started earning revenue: by running tests for research labs not ready to buy systems for themselves, he said.

When Illumina went public in 2000, it was one of several San Diego companies offering tools for genetic analysis.

At the time, the biotech investment bubble was starting to burst, and many companies were withdrawing their planned initial public offerings. But Illumina went public in July 2000 for $18, and its shares more than doubled to $39.17.

Within 30 days, shares were trading for about $52. But then they began a steady downward trend, bottoming out under $3.

“Not because of the company; everything here was moving along as planned,” Flatley said. “It was just the market.”

Since February 2005, when the stock traded for about $10, the company's shares have climbed past $74. One analyst, John Sullivan of Leerink Swann, expects the price to rise to between $81 and $90 a share in the next year.

When Illumina announced in January that it was the first to sequence the genome of an African man, it demonstrated the improving productivity and technology of its analyzer for sequencing, Sullivan said in a research report. It also yielded high-quality data, he said.

Illumina's largest competitors are Applied Biosystems, a Foster City company with a market capitalization of $5.6 billion, and Affymetrix, a Santa Clara-based company that has a market cap of $1.31 billion, compared with Illumina's $4.1 billion.

Illumina's latest genotyping technology allows researchers to put DNA samples of two people on one business-card-size chip and look at 1 million places on each of the genomes. The probes that the company puts on the chips are like bait to capture the pieces of DNA in which researchers are interested.
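
The "bait" metaphor can itself be sketched in code: a probe captures any fragment containing its complementary sequence. Below is a hypothetical toy matcher - real arrays rely on hybridization chemistry and millions of bead-bound probes, and the probe IDs and sequences here are invented - showing only the matching logic:

# Hypothetical sketch of probes as "bait": a probe captures a DNA
# fragment that contains the probe's reverse complement.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

def captured_by(probes: dict, fragment: str):
    """Return names of probes whose target sequence appears in the fragment."""
    return [name for name, probe in probes.items()
            if reverse_complement(probe) in fragment]

probes = {"rs0001": "ACGTAC", "rs0002": "TTGCAA"}   # made-up probe IDs
fragment = "GGGTACGTCCC"   # contains GTACGT, the reverse complement of ACGTAC
print(captured_by(probes, fragment))   # ['rs0001']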

Topol said Illumina has the competitive edge because it also offers gene sequencing technology, which complements the genotyping.

“I'd say this is the hottest area of the research right now,” Topol said.

Analysts agree.

The sequencing products “position Illumina as the only life science tools company that can completely address genetic researchers' high-density product needs,” Sullivan wrote.

Illumina obtained this technology by purchasing a company called Solexa in January 2007, in a stock-for-stock deal.

“What Illumina did in buying Solexa was a really big, hairy, audacious bet,” Topol said. “And it's paying off in a big way. Illumina should have the lead in this area for at least a couple of years.”

Stuelpnagel admits the Solexa acquisition was a gamble.

“That's what we are, gamblers,” Stuelpnagel said. “And that was a pretty big, important decision for us ... although it was the fourth time we've acquired technology that was just starting to get close to being commercial.”

Other companies are working on how they might knock Illumina from its perch in the gene sequencing arena. Pacific Biosciences in the Bay Area said it is developing products that could sequence the genome faster and more completely. The company said it hopes to have that technology on the market in the next couple of years.

All the competition and the constant advance of supporting science are driving down the cost of running individual genetic tests. And the drop in price is allowing a wider range of researchers to delve into genomics.

“As the price has gone down, demand has more than made up for it,” Flatley said.

Typically, a researcher wants to look at genetic samples of at least 1,000 people with a disease and compare those to samples of 1,000 people without disease, Topol said.

It costs between $300 and $600 to run one person's DNA through these tests, which means a study can cost several million dollars.
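
The arithmetic behind those figures is worth spelling out; here is a back-of-the-envelope sketch using the numbers quoted above (the larger cohort size is an assumption for illustration, not from the article):

# Study cost from the quoted figures: $300-$600 per sample,
# cases plus matched controls.
def study_cost(n_cases, n_controls, per_sample_low=300, per_sample_high=600):
    n = n_cases + n_controls
    return n * per_sample_low, n * per_sample_high

lo, hi = study_cost(1000, 1000)
print(f"2,000 samples: ${lo:,} - ${hi:,}")     # $600,000 - $1,200,000
lo, hi = study_cost(5000, 5000)
print(f"10,000 samples: ${lo:,} - ${hi:,}")    # $3,000,000 - $6,000,000

At the minimum cohort of 1,000 per arm, the assay cost alone is $0.6-1.2 million; the "several million dollars" presumably reflects larger cohorts, replication groups and overhead.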

Illumina's machines cost about $500,000, and trained people are then needed to run them and to interpret the reams of computer data they generate.

The technology is good for more than unlocking genetic links to human diseases. It is also being used to improve the food on our plate, and to attempt to drive down the cost of getting it there.

For instance, a consortium of scientists commissioned by the cattle industry has been using the technology to study the genetic differences in cattle.

There are about 100 million cattle in the United States, contributing about $71 billion to the economy, said Taylor, a consortium member from the University of Missouri.

“We are working with those animals to see how we might make them produce beef and milk more efficiently, or increase the quality of that product,” Taylor said.

Some companies have already used Illumina's technology to create products for the public.

A Mountain View-based company, 23andMe, sells a kit for $999 that pulls a person's DNA off a cotton swab and genotypes it. The company sends customers a report listing the genetic markers that have been linked to specific diseases.

The problem with such tests, Topol said, is what to do with that information. For instance, scientists are still trying to figure out the role that lifestyle and the environment play in whether someone with genetic markers for a disease actually gets it, he said.

Meanwhile, scientists are still discovering genetic markers for disease. Currently there is only one known marker for heart attack, but there could be more, Topol said.

But Flatley countered that “the genetic data is being unraveled so quickly, it's only a matter of time until we know what to do with it.”

Illumina's technology will increasingly be used for drug development, Topol said. Companies are already developing drugs that shut off the genes shown to cause disease. Genetic analysis will also help identify which people will respond to a particular drug, Topol said.

Eventually it may become routine to test babies, or a potential spouse, he said. “We are just at the very beginning and just starting to see how important this is.”

[Illumina generates so much data that present-day computer architectures will not be able to process it in a time- and cost-efficient manner - pellionisz_at_junkdna.com March 2, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Google Backs Harvard Scientist's 100,000-Genome Quest

By John Lauerman

Church (Harvard) backed by Google to sequence 100,000 humans [AJP]

Feb. 29 (Bloomberg) -- A Harvard University scientist backed by Google Inc. and OrbiMed Advisors LLC plans to unlock the secrets of common diseases by decoding the DNA of 100,000 people in the world's biggest gene sequencing project.

Harvard's George Church plans to spend $1 billion to tie DNA information to each person's health history, creating a database for finding new medicines. The U.S., U.K., China and Sweden this year began working together to decipher the genetic makeup of 1,000 people at a cost of $50 million.

Google, owner of the most popular Internet search engine, is looking for ways to give people greater control over their medical data. Along with the unspecified donation to Church, the Mountain View, California-based company said last week that it would work with the Cleveland Clinic to better organize health records, and last year gave $3.9 million to 23andme Inc., a seller of genomic data to individuals.

Church's plan ``would be the largest human genome sequencing project in the world,'' Stephen Elledge, a geneticist at Harvard Medical School in Boston, said in a telephone interview today. ``The genetic variations are what make people different, and we need to understand the connections to human disease. They'll get a tremendous amount of information from this,'' said Elledge, who isn't involved in the project.

About a dozen full genomes have been sequenced, said David Altshuler, a geneticist at the Broad Institute in Cambridge, Massachusetts. He is a leader of the project involving the U.S., U.K., China and Sweden. That plan, announced in January, aimed to increase the number of sequenced genomes about 100 times.

`Going for 1 Million'

Church, who helped develop the first direct genomic sequencing method in 1984, said that while he plans to enroll 100,000 participants, he may not end it there.

``If we can expand the project, we'll probably go for a million genomes,'' Church said. Since 1984, Church has advised 22 companies including Helicos Biosciences Inc., which recently began selling high-speed gene sequencers, and 23andme.

Google Chairman and Chief Executive Officer Eric Schmidt unveiled a new product, Google Health, at a conference yesterday in Orlando. The company fell $4.21, or 0.9 percent, to $471.18 as of 4:30 p.m. New York time in Nasdaq Stock Market composite trading.

The Internet-based service will help people manage their medical records and test results so they can be shared safely and privately with various specialists. Genomic data may eventually be included, said Marissa Mayer, vice president for search products.

``We have some genetic partners where we've already been making investments,'' Mayer said in a telephone interview yesterday. ``Genetics is much further out, and will be done at the control and discretion of the user.''

Ideally Suited

Ross Muken, a Deutsche Bank Securities Inc. analyst in San Francisco, said Google is ideally suited to help consumers keep track of genetic data, as new sequencing technology becomes available.

``They want to have an ability to display to the individual their genetic information in a user-friendly interface,'' he said in a telephone interview. ``Who better to do that than Google?''

Google's involvement would also be a boon to companies that sell sequencing equipment, including Helicos, Illumina Inc., the Applied Biosystems Group, and Danaher Corp., he said.

``Players like Google and Microsoft have the ability to move things along at a quicker pace,'' Muken said.

Helicos fell 69 cents, or 6.2 percent, to $10.50 in Nasdaq Stock Market composite trading; Illumina dropped $1.54, or 2.1 percent, to $72.41; and Applied Biosystems declined 32 cents, or 0.9 percent, to $33.71. Danaher fell 36 cents to $74.15 in New York Stock Exchange composite trading.

Database Builder

Google spokesman Andrew Pederson said the company began supporting Church, who teaches at Harvard Medical School in Boston, with a donation late last year.

By matching genetic data from each person with his or her health history, Church would build a database that would link DNA variations and disease for scientists and drugmakers, the first step in deciding on treatments that can block the mutations or adjust how they work within the body.

Church also said he'll explore other human traits under genetic control. Participants will give facial and body measurements, tell researchers what time they get up in the morning, and detail other behaviors, he said.

Church has already partially sequenced genomes from 10 people, and the jump to 100,000 is under review by a Harvard ethics panel. The project ``only stops when we stop learning things,'' Church said.

Controlling Costs

The Harvard scientist is controlling costs by sequencing only protein-making genes, which make up about 1 percent of the genome. He is asking for at least $1,000 from most participants to defray costs and subsidize some nonpaying subjects.

Helicos and Norwalk, Connecticut-based Applied Biosystems, which are contributing to the project, compete with Illumina in a $650 million gene-sequencing market, Deutsche Bank's Muken said. As the procedure becomes commonplace, the market's annual revenue could rise into the billions of dollars, he said.

``We do have an interest in the space from an investment perspective,'' said David Darst, a venture capital associate at New York-based OrbiMed, which manages more than $6 billion, in a telephone interview. ``We see a lot of promise around the overlaps with personalized medicine.''

By pairing medical histories with genetic data on the Web, Church is also confronting ethical boundaries. It's possible that subjects' identities can be deduced from their health information, scientists said.

`Mixed Feelings'

``He's explicitly going after medical histories, and there's very mixed feelings about this,'' said Kevin McKernan, one of the developers of Applied Biosystems' SOLiD sequencer. ``I think it's helpful and needed. We need to understand some of these issues that could scare everyone out of the field.''

Esther Dyson, an investor in technology and health-care companies, is already enrolled in a pilot portion of the Personal Genome Project for people who will have their sequences posted online. Last month, at the World Economic Forum in Davos, Switzerland, she encouraged attendees to have their genomes sequenced by Church, she said.

``The reaction was amazingly positive,'' said Dyson, a board member of 23andme. ``Most people are curious and interested, and it's one of the last frontiers.''

``I wouldn't do it,'' said Amy McGuire, a Baylor College of Medicine medical ethicist, in an interview at a gene sequencing meeting in Florida. ``I want my privacy protected more than that. I have difficulty putting family pictures on Facebook,'' the social networking Web site.

`Don't Know Enough'

``We don't know enough about the actual risks,'' McGuire said.

There are many ways in which making genetic information public might hurt people, she said. Health insurers may not want to offer coverage to people with inherited risks of cancer or heart disease; other family members may be affected as well.

Those are precisely the issues that the field needs to resolve, Church said. The payoff is an unobstructed view of the next revolution in medicine, he said.

``Some people bought Apple IIe's and went on to become entrepreneurs in the electronics revolution,'' he said. ``People who participate in the Personal Genome Project will have a ringside seat for something that might be very similar.''

[Now the "only" question is what MicroSoft will do - and how the entire Information Technology sector is going to change. Laying down the infrastructure of "Personalized Medicine"

- pellionisz_at_junkdna.com 29th of February, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Basically, DNA is a computing problem [Ballistics, in WWII, was also a computing problem]

[Photos: Tony Cox at the Sanger Institute - von Neumann with ENIAC]

The revolution of genome sequencing has spawned a parallel revolution in computing, as scientists in Cambridge have found

The Guardian, Thursday February 28 2008

The computing resources of the Sanger Institute at Hinxton, near Cambridge, are almost unfathomable. Three rooms are filled with walls of blade servers and drives, and there is a fourth that is kept fallow, and for the moment full of every sort of debris: old Sun workstations, keyboards, cases and cases of backup tapes - even a dishwasher. But the fallow room is an important part of the centre's preparations. Things are changing so fast that they can have no idea what they will be required to do in a year's time.

When Tony Cox, now the institute's head of sequencing informatics, was a post-doctoral researcher he could sequence 200 bases of DNA in a day (human DNA has about 3bn bases). The machines being installed today can do 1m bases an hour. What will be installed in two years' time is anyone's guess, but the centre is as ready as it can be.

Invisible revolution

Genome sequencing, which is what the centre excels at, has wrought a revolution in biology that many people think they understand. But it has happened alongside a largely invisible revolution, in which molecular biology - which even 20 years ago was done in glassware inside laboratories - is now done in silicon.

A modern sequencer itself is a fairly powerful computer. The new machines being brought online at the Wellcome Trust Sanger Institute are robots from waist-height upwards, where the machinery grows and then treats microscopic specks of DNA in serried ranks so that a laser can illuminate it and a moving camera capture the fluorescing bases every two seconds. The lower half of each cabinet holds the computers needed to coordinate the machinery and do the preliminary processing of the camera pictures. At the heart of the machine is a plate of treated glass about the size of an ordinary microscope slide, which contains around 30m copies of 2,640 tiny fragments of DNA, all arranged in eight lines along the glass, and all with the bases at their tips being directly read off by a laser.

To one side is a screen which displays the results. The sequencing cabinet pumps out 2MB of this image data every second for each two-hour run. With 27 of the new machines running full tilt, each one will produce a terabyte every three days. Cox was astonished when he did the preliminary calculations. "It was quite a simple back-of-the envelope calculation: right, we've got this many machines, and they're producing this much data, and we need to hold it for this amount of time and we sort of looked at it and thought: oh, shit, that's 320TB!"
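
Cox's back-of-the-envelope sum is easy to reproduce from the figures in the article; the sketch below assumes the machines run continuously (actual duty cycles are not given):

# Reproducing the back-of-the-envelope data-rate calculation from the
# figures quoted above: 2 MB/s of image data per machine, 27 machines.
MB = 1e6
TB = 1e12

per_machine_rate = 2 * MB                 # bytes/second per sequencer
fleet_rate = per_machine_rate * 27        # bytes/second for all machines
per_day = fleet_rate * 86_400             # bytes/day, assuming no downtime

print(f"fleet output: {per_day / TB:.1f} TB/day")        # ~4.7 TB/day
print(f"days to fill 320 TB: {320 * TB / per_day:.0f}")  # ~69 days

At roughly 4.7 TB of image data a day, 320 TB corresponds to about ten weeks of retention.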

Think of it as the biggest Linux swap partition in the world, since the whole system is running on Debian Linux. The genome project uses open source software as much as possible, and one of its major databases is run on MySQL, although others rely on Oracle.

"History has shown," says Cox, "that when we have created - it used to be 20TB or 30TB, maybe - of sequencing data, for the longer term storage, then you may need 10 times that in terms of real estate, and computational process, to analyse and compare and all the things that you want to do with it. So having produced something in the order of 100TB to 200TB of sequential data, then the layer beyond that, the scratch space, and the sequential analysis, and so on - to be honest, we are still teasing out what that means, but it's not going to be small."

Down in the rooms where the servers are farmed you must raise your voice to be heard above the fans. A wall of disk drives about 3m long and 2m high holds that 320TB of data. In the next aisle stands a similarly sized wall of blade servers with 640 cores, though no one can remember exactly how many CPUs are involved. "We moved into this building with about 300TB of storage real estate, full stop," says Phil Butcher, the head of IT. "Now we have gone up to about a petabyte and a half, and the last 320 of that was just to put this pipeline together."

This new technology is the basis for a new kind of genomics, with really frightening implications. The ballyhooed first draft of the Human Genome Sequence in 2000 was a hybrid of many people's DNA; like scripture, it is authoritative, but not accurate. Now the Sanger Institute is gearing up for its part in a project to sequence accurately 1,000 individual human genomes, so that all of their differences can be mapped. The idea is to identify every single variation in human DNA that occurs in 0.5% or more of the population sampled. This will require one of the biggest software efforts in the world today.

Although it is only very rare conditions that are caused by single gene defects, almost all common conditions are affected by a complex interplay of factors along the genome, and the Thousand Genome Project is the first attempt to identify the places involved in these weak interactions. This won't be tied to any of the individual donors, who will all be anonymous. But mapping all the places where human genomes differ is the first necessary step towards deciding which differences are significant, and of what.

There are three sorts of differences between your DNA - or mine, or anyone's - and the sequence identified in the human genome project. There are the SNPs, where a single base change can be identified; these are often significant, and are certainly the easiest things to spot. Beyond that are the changes affecting tens of bases at a time: insertions and deletions within genes. Finally, there are the changes that can affect relatively long strings of DNA, whole genes or stretches between genes, which may be copied or deleted in different numbers. The last of these are going to be extremely hard to spot, since the DNA must be sequenced in fragments that may be shorter than the duplications themselves. "It's a bit like one of those spot the difference things," Cox says. "If you have 1,000 copies, it's very much easier to spot the smallest differences between them."

Genome me?

All of the work of identifying these changes along the 3bn bases of the genome must be done in software and - since the changes involved are so rare - each fragment of every genome must be sequenced between 11 and 30 times to be sure that the differences the software finds are real and not just errors in measurement. But there's no doubt that all this will be accomplished. The project is a milestone towards genome-based medicine, in which individual patients could be sequenced as a matter of course.
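
The 11- to 30-fold oversampling translates directly into data volume; here is a sketch using the article's 3bn-base figure (the 35-base read length is an assumed number for short-read machines of that era, not from the article):

# Raw sequence implied by reading every fragment 11-30 times over.
GENOME_BASES = 3_000_000_000
READ_LEN = 35   # assumption for illustration, not from the article

for coverage in (11, 30):
    bases = GENOME_BASES * coverage
    reads = bases // READ_LEN
    print(f"{coverage}x coverage: {bases / 1e9:.0f} bn bases, "
          f"~{reads / 1e6:.0f} million reads")
# 11x: 33 bn bases (~943 million reads); 30x: 90 bn bases (~2,571 million)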

Once that happens, the immense volumes of data that the Sanger Institute is gearing up to handle will become commonplace. But the project is unique in that it must not just deal with huge volumes of data, but keep all of it easily accessible so different parts can quickly be compared with each other.

At this point, the old sort of science is almost entirely irrelevant. "It now has come out of the labs and into the domain of informatics," Butcher says. The Sanger Institute, he says, is no longer just competing for scientists. It is about to embark on this huge Linux project just at the time that the rest of the world has discovered how reliable and useful it can be, so that they have to compete with banks and other employers for people who can manage huge clusters with large-scale distributed file systems. Perhaps the threatened recession will have one useful side effect, by freeing up programmers to work in science rather than the City.

[The differences, similarities and challenges are of historical proportion. ENIAC had to be constructed, on what became the "von Neumann architecture", to meet the vital needs of the military to compute ballistic trajectories (fast, preferably before the shells hit their target). While the two pictured computers are vastly different in scale (ENIAC had the computing power of the chirping gizmo in today's Valentine card), both are essentially identical in their "von Neumann architecture". The problem is that the architecture he had to create in the hurry of the war would not suffice later - and von Neumann was keenly aware of the so-called "von Neumann architecture bottleneck" (Teller told me that "we were reasonably smart, but Jancsi [von Neumann] was a genius"); in fact, he worked out the alternative parallel scheme that is now the "dataflow architecture". Efficiency demands it: with a 4-minute full sequencing of a tumor while the patient is still on the operating table, sequencing alone is not enough - to counteract effectively, we will have to know in about the same time how the particular cancer drove the genome function unregulated, and thus what exactly to do right at that moment. Tall order? You bet. It is like a war to win, one person at a time. Turns out, Sputnik wasn't such a big deal, since a satellite can be (and was) shot down with a single briefcase-size "rock" controlled by a von Neumann computer (the two projectiles closing at a combined 25 or so times the speed of sound). True, it cost who knows how many trillions of dollars for the von Neumann-architecture computer industry to reach its present stage. Massively parallel genome function will be dealt with by a computer industry based on massively parallel architecture. Such a paradigm-shift does not happen too often. Today, however, is February 29, an unusual day worth remembering.

- pellionisz_at_junkdna.com 29th of February, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Upheaval in Genomics - news censored, exodus of academics to lucrative industry, proprietary IP abounds...

[Covers: Newsweek International Edition, cover feature "Genomics Revolution" - and the same-date US issue, with the "Genomics Revolution" feature completely absent]

ENCODE (released by the US Government on the 14th of June, 2007) brought a historically unprecedented upheaval to biotechnology through the "Genomics Revolution". Perhaps the best measure of the at least six months of "shell-shock" - the realization that genome function had been profoundly misunderstood for half a century - is the censorship it provoked, both grandiose and petty.

Both of the above Newsweek issues are dated October 15, 2007. On the left, the brilliant cover article on the "Genomics Revolution" (written by Princeton University's Professor Lee Silver and carried in all International Editions of Newsweek, in Asia, Europe and Latin America; see full text here) is completely censored out of the American Edition (see icon on the right) - not only from the cover, but not a single word of it appears inside. Why hide the revolution? To defuse an unprecedented public outcry that hundreds of people died of "junk DNA diseases" neglected for over half a century. Let the news "trickle down" to the US public through the international media - at the least it provides a little breathing time...

On an infinitely pettier scale, in blogs desperately clinging to fallen dogmas (such as "Junk DNA" and the "Central Dogma"), an exceptionally narrow-minded censorship raised its ugly head. One Canadian blog (immune to "class action lawsuits for negligence of 'Junk DNA' causing countless lethal diseases", since the US Government study is not in its jurisdiction), which quite arrogantly proclaimed a mere year ago that those who attribute function to "Junk DNA" should pass the "Onion test" (e.g. explain why there is such vast diversity in the amount of 'Junk DNA' among different types of onion), is now spewing literally dozens of blog entries to paper over its own (and other well-known) vehement former antagonism, seeking reasons why 98.7% of the hereditary material - the non-genic "Junk" DNA - was conserved, some of it for half a billion years. It never mentions the algorithmic approach to the fractal genome that did pass his test, on his own blog. Worse, when his quote from an even more questionable fellow-Canadian blog was called into question ("evidence indicates non-function", as he quoted the other blog), a reply was simply censored out.

A profound embarrassment is felt throughout the establishment, well beyond ad hoc censorship. NIH admits that it is flat broke - it cannot even sustain current research, let alone finance a scientific revolution from government resources. Canadian government-sponsored research is even worse off, as the same Canadian blog laments.

Just as with the Internet, when it started to truly succeed (1994, Netscape), the umbilical cord to bureaucratic and meager government funds was cut - and private enterprise started to rule. As a result, academics of Genomics are moving in droves to lucrative industry, globally. J. Craig Venter has long since proven that government-sponsored, thus deliberately mediocre, "research" (geared to "peer review approval" by similarly mediocre colleagues) is more a hindrance than a force for leading a revolution towards excellence. Just as with the Internet, the era of government leadership must move over and make room for the surge of private enterprise.

Harvard and MIT used to be (and still are) leading forces in Academia. Yet, with the leadership of Eric Lander, the Broad Institute soars, by grace of Mr. and Mrs. Broad, who donated their own $200 M to accelerate Post-ENCODE Genomics beyond the confinement of government bureaucracy. Prof. George Church (professor of Genetics at Harvard Medical School) is now a Founder of Knome.com - a privately funded firm for full human genome sequencing, with the sequencing outsourced to the Beijing Genomics Institute (China). Stanford Professor of Bioengineering, Genetics and Medicine Russ Altman, M.D., Ph.D., is a science advisor of Google "spin-off" 23andMe.com, along with George Church (Harvard), etc. Oxford (UK) Professor Akoulitchev moved his focus into Oxford Biodynamics. Likewise, as reported in this column earlier, Dr. Mattick of the IMB (Sydney, Australia) "went commercial" through a cooperation with a California genome company. (This columnist, after 7 years of algorithm development - without seeking government support for the fractal approach that "violated" both prevailing dogmas until ENCODE - entered the business of laying down the hardware of massively parallel computing that both the avalanche of sequences and their massively parallel functional algorithms demand. While Dr. Pellionisz is now Director of Genome Informatics with a global parallel-chip architecture company, his comments past the 14th of February, 2008 (the sixth anniversary of the conception of the FractoGene approach) reflect his own personal opinions, and may or may not represent any corporate policies.)

[Personal Genomics, PostGenetic Medicine and the Information Technology of the Genome Revolution are all destined to succeed as lucrative private-sector enterprises. True, with private enterprise intellectual property will be increasingly proprietary (and thus extremely valuable) - but that is dictated by the iron rules of national and global competition. - pellionisz_at_junkdna.com 27th of February, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

How One Protein Binds To Genes And Regulates Human Genome

ScienceDaily (Feb. 11, 2008) - Out of chaos, control: Cornell University molecular biologists have discovered how a protein called PARP-1 binds to genes and regulates their expression across the human genome. Knowing where PARP-1 is located and how it works may allow scientists to target this protein while battling common human diseases.

"This finding was unexpected -- especially since it entails a broad distribution of PARP-1 across the human genome and a strong correlation of the protein binding with genes being turned on," said W. Lee Kraus, Cornell associate professor in molecular biology and the corresponding author in the published study. Kraus has a dual appointment at Cornell's Weill Medical College in New York City. "Our research won't necessarily find cures for human diseases, but it provides molecular insight into the regulation of gene expression that will gives us clues where to look next."

Kraus explains that PARP-1 and another genome-binding protein called histone H1 compete for binding to gene "promoters" (the on-off switches for genes) and, as such, act as part of a control panel for the human genome. H1 puts genes in an "off" position and PARP-1 turns them "on." The new study, said Kraus, shows that for a surprising number of genes, the PARP-1 protein is present and histone H1 is not, helping to keep those genes turned on.

When human cells are exposed to physiological signals, such as hormones, or to stress signals, such as metabolic shock or DNA damage caused by agents like ultra-violet (UV) light, the cells take action. One of the cellular responses is the production of NAD (nicotinamide adenine dinucleotide), a metabolic communication signal. NAD promotes the removal of PARP from the genome and alters PARP-1's ability to keep genes on, the scientists have found.

Knowing where this component of the genome's control panel -- the PARP-1 protein -- is located, scientists can better understand the effects of synthetic chemical inhibitors of PARP-1 activity, which are being explored for the treatment of human diseases including stroke, heart disease and cancer. Thus, conceivably, when a patient is having a stroke, it may one day be possible to use PARP-1 inhibitors as part of stroke therapy; such inhibitors may also one day play a role in targeting cancer, says Kraus.

"Think of PARP-1 as a key regulator of gene expression in response to normal signals and harmful stresses," said Kraus. "If you could control most of the traffic lights in a city's street grid with one hand, this is analogous to controlling gene expression across the genome with PARP-1. Under really adverse conditions, you can set all the lights to stop."

The article, "Reciprocal Binding of PARP-1 and Histone H1 at Promoters Specifies Transciptional Outcomes," was authored by Raga Krishnakumar (co first-author), a graduate student in molecular biology and Matthew J. Gamble (co first-author), a post doctoral researcher at Cornell; and Kristine M. Frizzell, Jhoanna G. Berrocal and Miltiadis Kininis, all Cornell graduate students in molecular biology and genetics. Kraus is the corresponding author. It is published Feb.8, 2008 in the journal Science.The National Institutes of Health funded the research.

[Both the "Junk DNA" and "Central Dogma" false axioms have now reached a stage when neither of them are tenable (either separately, and especially together). While there is now an over-abundance of literature in turmoil if keep flogging these dead horses or (pretending that they are still alive...) the science community is largely still "shock-shelled" and is at a loss "what's next". A paper in press will come forward to break the present impass - pellionisz_at_junkdna.com 14th of February, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

PacBio Plans to Sell First DNA Sequencers in 2010; Aims for 100 Gigabases Per Hour
[February 11, 2008]

By Julia Karow

Editor, In Sequence

MARCO ISLAND, FL (GenomeWeb News) – Pacific Biosciences said at a meeting last week that it is working on a next-generation DNA sequencing instrument that it believes will eventually be able to produce 100 gigabases of sequence data per hour, or a diploid human genome at 1-fold coverage in about 4 minutes.

Sometime in 2010, the Menlo Park, Calif.-based company plans to sell its first DNA sequencing systems, performance specifics of which have yet to be determined, to early adopters, PacBio Chairman and CEO Hugh Martin told GenomeWeb Daily News sister publication In Sequence last week.

The price of the instrument will likely be in the range of that of next-gen sequencers sold by 454/Roche, Illumina, and Applied Biosystems, he said, which sell for approximately $400,000 to $600,000.

Speaking in a packed auditorium at the end of the Advances in Genome Biology and Technology meeting in Marco Island, Fla., on Saturday, PacBio’s Co-founder and Chief Technology Officer Stephen Turner said that after achieving a set of milestones in November, the company decided to start developing the system commercially, and talking about its progress. Since it was founded in 2004 as Nanofluidics, the company has not talked about its plans or technology in public.

The company’s single-molecule, real-time, or SMRT, technology is based on zero mode waveguides that were originally developed by Turner and others at Cornell University’s Nanobiotechnology Center. Essentially, ZMWs are nanometer-scale holes in a 100-nanometer metal film deposited on a clear substrate. Due to the behavior of light aimed at such a small chamber, the observation volume is only 20 zeptoliters, enabling researchers to measure the fluorescence of nucleotides incorporated by a single DNA polymerase enzyme into a growing DNA strand in real time.

On a prototype system, PacBio researchers have so far observed read lengths of a little over 1,500 bases and a rate of 10 bases per second, and have been able to analyze up to 3,000 zero mode waveguides in parallel.
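
The quoted figures can be checked with simple arithmetic - the prototype's parallel waveguides give its current throughput, and the 100 gigabase-per-hour target indeed implies the roughly four-minute diploid genome mentioned above:

# Checking the PacBio figures quoted in this article.
# Prototype: 3,000 ZMWs observed in parallel, 10 bases/second each.
prototype_rate = 3_000 * 10                          # bases/second
print(f"prototype: {prototype_rate:,} bases/s "
      f"({prototype_rate * 3600 / 1e6:.0f} Mb/hour)")   # ~108 Mb/hour

# Target: 100 gigabases/hour against a 6bn-base diploid genome at 1x.
target_rate = 100e9 / 3600                           # bases/second
minutes = 6e9 / target_rate / 60
print(f"target: 1x diploid genome in {minutes:.1f} minutes")   # ~3.6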

Since its founding, the company has raised approximately $71.5 million in venture capital from Kleiner Perkins Caufield & Byers, Mohr Davidow Ventures, Alloy Ventures, Maverick Capital, and others, and has received $6.6 million in funding from the National Human Genome Research Institute.

Over the next year, PacBio plans to raise an additional $80 million and is in the midst of growing its headcount from just over 100 to 200 employees “as fast as we can do it,” according to Martin.

[Up to now it did not really matter that the US Government was planning on a decade for sequencing a human; even for the presently available fractional scans (of genetic markers and point mutations) offered by a rapidly increasing number of web services, it did not much matter whether customers learned in weeks or in months about their genetic lineage (e.g. their proclivity, compared to the average, to develop Alzheimer's - especially when other predicted results make it several times more likely that they will die of cardiovascular killers that usually strike decades before one might think of Alzheimer's). The possibility, within about 2 years, of a 4-MINUTE FULL SEQUENCING changes everything - including a surge of top-ranking venture capital interest. Why? The patient will literally still be on the operating table with a biopsy of some suspected malignancy - and with the full genome revealing not only exactly what went wrong, but also what medicine should be applied in a personalized manner, present-day medicine will look like the "stone age". Another urgent opportunity is in biodefense. Presently, passengers trying to board a plane e.g. at SFO must pass through "sniffers", where a "pattern-recognizing neural net" is applied to detect whether one smells of explosives or narcotics. Should a passenger carrying a person-to-person-infectious mutant bird flu board a transcontinental plane? Even if he or she gives a "breathalyzer" sample and there are hours available for sequencing to detect flu strains, by the time the plane reaches its destination the entire load of passengers will likely be infected. With the speed of a "pre-boarding screening", everything changes. Is this "sci-fi", or a life-or-death business opportunity? Let the readers decide. Suffice it to note that in the previous "paradigm-shift" this author, arriving at a "neural network algorithm" (learned directly from Nature, e.g. from how birds use their cerebellum for coordinated flight), developed plans for NASA to keep controlling a supersonic F15 even if "major configuration changes" occurred. The solution called for special "parallel computing" chips. (It did take NASA about 10 years and a similar number of $M-s to implement the plans, but the test was successful.) With the present paradigm-shift, the drivers are almost infinitely more powerful. - pellionisz_at_junkdna.com 14th of February, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

California company claims faster, cheaper gene map

By Maggie Fox, Health and Science Editor

Mon Feb 11, 5:55 PM ET

WASHINGTON (Reuters) - A California company predicts it will soon be able to sequence an entire human gene map in four minutes, for just $1,000.

Pacific Biosciences says its new gene-sequencing machines are far faster than existing equipment, and will be able to do in minutes what it took the federally funded academic effort five years and $300 million to do, and genome pioneer Craig Venter nine months to do in 2000.

"It will change health care forever if it works," Hugh Martin, the chief executive officer of the company, said in a telephone interview on Monday.

The company presented its findings to a meeting in Florida on Saturday.

Last month Knome, a Cambridge, Massachusetts-based personal genomics company, said it was offering people their own personal genome sequences at a cost of $350,000. Martin said he saw no reason for individuals to get their gene maps sequenced yet, and said his company's market was research labs.

"The real idea is to be able to sequence people fast enough and cheaply enough so we can turn some really interesting discovery problems in genetics and genetic diseases into software problems," Martin said.

"You can sequence 1,000 people who exhibit addictive behavior and 1,000 who don't and see if there any differences between them," Martin said.

Government backers of the project are equally enthusiastic.

"In complex diseases like heart disease, there are many different genes that contribute to the disease and each of those genes has a small effect," said Jeff Schloss, who heads the sequencing-technologies grant program at the National Human Genome Research Institute.

NEEDLE IN A HAYSTACK

Researchers still often do not even know where to begin looking for genes involved in some diseases, and so benefit from so-called genome-wide association studies, which are in effect a treasure hunt through the entire genome.

"The tools we have for understanding the relationship between changes in the genome and disease require now that we look at lots of people, that we study a lot of people who have a disease and look at changes in their genome," said Schloss, whose institute gave Pacific Biosciences $6.7 million for its work.

Martin said the company had raised another $72 million from private investors.

The money is out there for companies that want to find cheaper and quicker ways to sequence the human gene map. In 2004, the National Institutes of Health launched a $70 million grant program to encourage such work, and the Santa Monica, California-based X Prize Foundation is offering $10 million to the first team to sequence 100 human genomes in 10 days.

Martin thinks Pacific Biosciences' new technology will be able to get a human genome done in about 4 minutes.

"You could be on the operating table and having a biopsy while under anesthesia," he said. Doctors could compare the sequence in a tumor to the DNA in a patient's healthy cells and perhaps tailor chemotherapy, he said.

The company will sell the instruments at a cost of somewhere between $400,000 and $600,000, plus kits with the chemicals and other components needed to operate them.

Competitors also racing to make a faster, cheaper DNA map include Solexa, now a division of Illumina Inc, Applied Biosystems, 454 Life Sciences Corp, a Roche company, and Helicos Biosciences Corp.

[This is by-and-large a duplication of the news above, with some more noteworthy details. - pellionisz_at_junkdna.com 11th of February, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Race is on to produce a personal - and cheap - genome readout

By Andrew Pollack

Published: February 8, 2008

A person wanting to know his or her complete genetic blueprint can already have it done - for $350,000. [As of today, that is the "walk-in list price" - by Knome of George Church - with competitors already discounting to below 25% of it -AJP]

But whether a personal genome readout becomes affordable to the rest of us could depend on efforts like the one taking place secretly in a nondescript Silicon Valley industrial park. There, a company called Pacific Biosciences has been developing a DNA sequencing machine that within a few years might be able to unravel an individual's entire genome in minutes, for less than $1,000. The company planned to make its first public presentation about the technology on Saturday. [There is much less public talk about another "nondescript Silicon Valley industrial park" company on its way towards a $100 full genome - we could end the list here, since the whole gamut runs from $100 to the $70 M that sequencing Venter's DNA cost. A used car, or a junky PC, can cost next to nothing, while the Bugatti Type 41 Royale costs $10 M and IBM's list price for a rack of Blue Gene/L servers is $2 million. The price of the hardware these days is not nearly as important as the "software" - what you can do with your lifeless hardware. -AJP]

Pacific Biosciences is just one entrant in a heated race for the "$1,000 genome" - a gold rush of activity whose various contestants threaten to shake up the $1 billion-a-year market for machines that sequence, or read, genomes. But the company has attracted some influential investors. And some outside experts say that if the technology works - still a big if - it would represent a significant advance.

"They're the technology that's going to really rip things apart in being that much better than anyone else," predicted Elaine Mardis, the co-director of the genome center at Washington University.

If the cost of sequencing a human genome can drop to $1,000 or below, experts say it would start to become feasible to document people's DNA makeup to tell what diseases they might be at risk for, or what medicines would work best for them. A DNA genome sequence might become part of each newborn baby's medical work-up, while sequencing of cancer patients' tumors might help doctors look for ways to attack them.

To spur such advances, the U.S. government has awarded 35 grants totaling $56 million to companies and universities for development of technology that could put the $1,000 genome sequence within reach. PacBio has received $6.6 million from that program.

Meanwhile, the nonprofit X Prize Foundation is offering $10 million to the first group that can sequence 100 human genomes in 10 days, for $10,000 or less per genome. Six companies or academic groups - although not PacBio - have signed up so far.

Computerized sequencing machines use various techniques to determine the order of the chemical units in DNA, which are usually represented by the letters A, C, G and T. Human beings have 3 billion such units, or 6 billion if one counts the second copy of each chromosome pair.

The industry has long been dominated by Applied Biosystems, which sold hundreds of its $300,000 sequencers to the publicly funded Human Genome Project and to Celera Genomics for their sequencing of the first two human genomes, which were announced in 2000. But two newcomers - Solexa and 454 Life Sciences - have already started to cut into Applied Biosystems' sales with machines that are faster and less costly per unit of DNA sequenced. Solexa is now owned by Illumina and 454 Life Sciences by Roche.

Applied Biosystems, which is a unit of Applera, recently started selling its own new type of sequencer, which it obtained by buying Agencourt Personal Genomics for $120 million in 2006. Next out, early this year, should be Helicos Biosciences, a newly public company, which has said its machine might be able to sequence a human genome for $72,000, with further improvements to come.

"We can look somebody in the eye and say, 'This instrument is going to get you to the $1,000 genome," said Steve Lombardi, the president of Helicos, which is based in Cambridge, Massachusetts.

Intelligent Bio-Systems, a privately held company in Waltham, Massachusetts, says it will introduce a machine by the end of 2008 that might reduce the cost of a genome to $10,000. Other contenders include privately held NABsys of Providence, Rhode Island, VisiGen Biotechnologies of Houston and Complete Genomics of Mountain View, California.

Some contestants say that they might try for the X Prize as early as next year and that the $1,000 genome is as little as three years away. But other experts are more conservative.

Jeffery Schloss, a director of the $1,000 genome federal grant program at the National Human Genome Research Institute, said he would be surprised if it could be done much before 2014.

Richard Gibbs, director of the human genome sequencing center at Baylor College of Medicine, says a "technical leap" will be necessary to reach the goal. "Once you talk technical leaps," he said, "timetables go out the window."

Pacific Biosciences, which was founded in 2004, says it can make the leap. "If we ever make this work, there would be no other technology applicable in the sequencing field," said Hugh Martin, the chief executive. Martin, who previously ran ONI Systems, a telecommunications equipment company, is nothing if not cocky. "When we're ready," he said, "we're just going to win the X Prize."

PacBio, as the company calls itself, is based in Menlo Park, California. Until now it has remained largely quiet about its work. And even as the company grew past 100 employees, Martin refused to get new space - shrinking the cubicles instead - until PacBio proved to itself that its technology could work.

But having achieved that last November, PacBio is expanding. It was to give its first public presentation Saturday at the Advances in Genome Biology and Technology conference in Marco Island, Florida.

Some outside experts already privy to the technology say it is promising.

"If it works, it's the first thing I've seen that would have a chance of winning the X Prize," said J. Craig Venter, who founded Celera Genomics and now runs a genomics institute.

But PacBio says it won't start selling its first machines until early 2010, and a second generation machine that might be capable of a $1,000 sequence might not be available until around 2013. So the company could be late to a crowded market.

PacBio's long silence has also spawned skepticism.

"If you look at how long they've been running, they have to get to the point where they have to show something soon," said George M. Church, a professor of genetics at Harvard. Dr. Church was the co-founder of Knome, the company currently offering the $350,000 genome blueprints.

PacBio has raised $78 million so far, but probably needs another $80 million, Mr. Martin said. Among the company's backers is Kleiner Perkins Caufield and Byers, the powerhouse venture capital firm. Michael W. Hunkapiller, a co-inventor of automated DNA sequencing and the former head of Applied Biosystems, is a director, and his firm, Alloy Ventures, has also invested.

PacBio says a big advantage of its machines should be the ability to read 1,000 or more bases - the chemical units that make up DNA - in one stretch.

No sequencer can yet read an entire genome at once. So multiple copies of a genome are broken into fragments, each fragment is then sequenced, and computers try to assemble the pieces in the correct order.

It is akin to shredding several copies of a book and then trying to reconstruct the text. The smaller the pieces, the harder it is to solve the puzzle, particularly in places where there are repetitive sequences.
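
The shredded-book analogy describes shotgun assembly, and the core idea fits in a few lines. Below is a minimal greedy overlap-merge sketch with invented reads - a toy only, since production assemblers must cope with sequencing errors, repeats and billions of fragments:

# Toy greedy shotgun assembly: repeatedly merge the pair of reads with
# the longest suffix/prefix overlap.
def overlap(a: str, b: str) -> int:
    """Length of the longest suffix of a that matches a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def assemble(reads):
    reads = list(reads)
    while len(reads) > 1:
        k, a, b = max(((overlap(a, b), a, b)
                       for a in reads for b in reads if a is not b),
                      key=lambda t: t[0])
        if k == 0:
            break                       # no overlaps left; stop merging
        reads.remove(a); reads.remove(b)
        reads.append(a + b[k:])         # merge the best-overlapping pair
    return reads

print(assemble(["GATTAC", "TTACAG", "ACAGGT"]))   # ['GATTACAGGT']

The shorter the reads, the less distinctive each overlap becomes, which is exactly why repetitive stretches are the hard part.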

The type of sequencer used for the Human Genome Project can now read more than 800 bases at once. The newer Illumina and 454 Life Sciences sequencers can go much faster than the older type. But the read length of the Illumina machine is only about 30 bases, while that of the 454 Life Sciences sequencer is 200 to 450.

But these companies say that their sequencers can be used for new medical applications. The 454 Life Sciences machine was used to identify a virus that killed three recipients of transplanted organs, after the usual diagnostic methods had failed, according to a paper published online Wednesday by the New England Journal of Medicine.

Moreover, these companies say that new sequencing techniques are allowing genomes to be put together even from the shorter fragments. The Human Genome Project has already provided a reference genome that can be used as a template to help figure out where the pieces go.

Illumina announced Wednesday that it had sequenced the genome of an anonymous African man in weeks using its machines.

Genomes made by relying on a reference might not be quite as accurate as ones done completely from scratch. But some executives say it would be good enough for medical use.

Indeed, there is considerable debate on just how much information is needed to be useful.

Recently companies like DeCode Genetics, 23andMe and Navigenics have started selling services - for $1,000 to $2,500 - that examine a person's genome at up to 1 million particular points where DNA is known to differ among people. Studies using DNA from thousands of people have found that certain of these variations correlate with higher or lower risk of certain diseases.

And yet, new studies suggest that in human genomes whole sections of DNA might be duplicated, deleted or reversed. A survey of only the variations at selected points would probably miss those much larger-scale differences.

Scientists do not know what those bigger differences might mean in terms of disease risk, because they have not yet had thousands of human genomes to study. So right now, a personal genome readout would provide little useful information beyond what could be obtained by the less expensive scans. And some experts even question how useful those scans are.

Indeed, said Schloss, one of the first paybacks of less costly genome sequencing would be to enable the broad studies to be done "to find out what we really need to know for individuals."

[Some of us remember the Atari, the Amiga and the countless other "early entries" of "personal computers", when the paradigm shift did away with mainframes (or even DEC minicomputers). Most of us have long forgotten how much the other models (that we did not buy) cost - but the hardware was "affordable" (though e.g. the Apple laser printer cost me $4,000, and the first Mac with its 128 KB of memory was at least 50% pricier than today's Apple, with its hundreds-of-thousands-fold more powerful performance). The main question (just as today with the genome) was: "tell me one reason why one needs one for personal use?" With "Personal Genomics", the over-abundance of cheap sequences can be taken for granted - it is only a matter of a little patience (not afforded to those who are presently dying of e.g. "Junk DNA diseases"). Suddenly, the towering question is: "where is the Personal Computer that would deliver useful information based on one's cheaply acquired Personal Genome?" The hardware, for sure, will be parallel (to wrestle with the massively parallel genome function). The software will be swiftly written by the cloned copies of Silicon Valley (here in California, in Bangalore, in Shanghai, you name it) - but where are the algorithms that make genome function understandable? Some blogs will try forever to jack up the "unknown function" portion of the genome above the critical 50%, such that their favorite Titanic ("Junk DNA") does not sink with them. Surely, breakthrough principles of genome informatics and (literally) quantum leaps in our algorithmic understanding will not come from holdouts of the "Pre-ENCODE Genomics" schools. Roche (and other big pharma) are positioning affordable personal genomes for "hardcore diagnosis" purposes. These will immediately filter back to homes - to alert would-be patients in time to catch regulatory problems - pellionisz_at_junkdna.com 9th of February, 2008]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

On the front lines of the genomic revolution [Genome Informatics]

David Chandler, MIT News Office

February 6, 2008

Manolis Kellis, a young and fast-rising MIT researcher, uses sophisticated computational tools to investigate the genomes of a variety of organisms, including humans, mice, fruit flies and yeast, and the insights emerging from that work could lead to important findings about human development and disease.

The work marks the early stages of a whole new field of comparative genomics, in which insights about individual species can be derived by studying the similarities and differences in the genomes of many different species. The method can probe the evolutionary process to reveal previously unknown mechanisms, suggesting specific hypotheses that can advance biological knowledge.

"This represents a new phase in genomics--making biological discoveries sitting not at the lab bench, but at the computer terminal," Kellis says.

It was partly a typical MIT kind of serendipity that first got him interested in the field. While still a graduate student in computer science at MIT, he ran into a friend who was reading a biology book and the two started talking about genetics and evolution. Kellis soon was shown one of the first assemblies of the human genome on a computer screen, which he said was like looking in a mirror, and he "could never look back." Soon the friend introduced him to Eric Lander, who eventually became his thesis advisor.

Lander, director of the Broad Institute of MIT and Harvard, describes his former student as an "awesome" person.

"He's a perfect example of how the world of biology has undergol skillne just a dramatic transformation," Lander says. The genomic revolution is providing "a whole new biology that can only be done by analyzing massive data sets. What is required is both biological knowledge and algorithmic and mathematicas."

"It's still very rare in the world," Lander says, to find people with strong skills in both fields. "He's a whirlwind of energy, positive and energetic. He works around the clock, always with a smile, always enthusiastic."

Even as a graduate student, Lander recalls, Kellis was doing amazing work. Lander suggested he work on a thesis project to compare the entire genomes of four species of yeast. "I neglected to mention that nobody had ever done anything as ambitious as that, much less a graduate student," Lander admits. "He proceeded to do amazing things with it, and ended up publishing several landmark papers in Nature--which is unheard of for a computer science graduate student." [...]

[Not only "unheard of" but it would have been totally impossible if a young mathematician did not have a heavy-weight mentor. As a Ph.D. in computer engineering in the Dept. of Anatomy in Budapest I started to do computer modeling of cerebellar neuronal networks (1967) - but could not have published a line if the world-famous and forward-looking morphologist Prof. Janos Szentagothai, an expert in cerebellar neuronal networks, except their missing understanding how the cerebellum mathematically coordinates sensorimotor actions, did not throw in his full support. Though I acquired two more Ph.D.-s (in Biology and in Physics) and worked since 1973 in the USA, my NIH grant was discontinued when I "violated" both the "Junk DNA" and "Central Dogma" obsolete conjectures and for 17 years could not publish, until recently ENCODE officially shattered the dogmatic misunderstanding of the Genome and broke the "Genomic Revolution" open. Now, with "ENCODE architect" calling the scientific community "to re-think long-held beliefs" papers and special issues are actually requested to put "Post-ENCODE Genomics" on an informatically sound basis (in press) - pellionisz_at_junkdna.com 7th of February, 2008]

"He makes a great Greek halvah," Lander says. "He's just delightful. He's one of my kids' favorite people."
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Illumina says inexpensive genome testing here

February 6, 2008 8:59 PM PST
By: North County Times

SAN DIEGO -- Biotech chip maker Illumina Inc. said Wednesday that it has sequenced the genome of an African man. The project means inexpensive human genome reading, or mapping, is at hand, the San Diego-based company said in a news release.

Reading the unidentified African man's genome, or complete set of genetic material, was completed in weeks, the company said. It did not say how much the testing would cost, and an Illumina spokeswoman did not return a call before press time.

Illumina's Genome Analyzer reads not only the genes, but detects variant genes with differences of only one "letter" in the genetic alphabet, the company said. These one-letter differences are called single-nucleotide polymorphisms, or SNPs. They can cause great differences in how a gene functions, or whether it functions at all.
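[To make the "one-letter" idea concrete, here is a minimal sketch in Python - the ten-letter sequences are invented for illustration and have nothing to do with Illumina's actual software or data:]

    # Toy illustration of a SNP: two aligned DNA fragments that differ
    # by a single "letter" of the genetic alphabet.
    reference = "ATGGCTAACG"
    sample    = "ATGGCTGACG"

    # Report every position where the two aligned sequences disagree.
    snps = [(i, ref, alt)
            for i, (ref, alt) in enumerate(zip(reference, sample))
            if ref != alt]

    for position, ref_base, alt_base in snps:
        print(f"SNP at position {position}: {ref_base} -> {alt_base}")
    # prints: SNP at position 6: A -> G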

Medical experts say easy and inexpensive genetic testing can help determine whether a person is susceptible to certain diseases, and what treatment might be most successful.

Shares of Illumina closed Wednesday before the announcement at $68.75, down $2.75 for the day. On Tuesday, Illumina shares closed at $71.50, up $6.61 for the day, after it projected stronger than expected earnings for the first quarter and full year of 2008.

[This single news item is several news items rolled into one. For the historical record: after the full sequencing of Craig Venter and James Watson in the USA, and of an identified (but not named) Chinese scientist in Beijing, the identified (but not named) African man is the fourth individual whose full genome sequence is known. Knome (by George Church) has ten individuals (9 named, 1 anonymous) lined up for such service - including female Homo sapiens, none yet fully sequenced - and further customers are lining up. Personal Genomics is absolutely unstoppable - industries are feverishly gearing up for the challenge, as well as for the lucrative business opportunity. pellionisz_at_junkdna.com, February 7, 2008]

RNA-associated introns guide nerve-cell channel production

February 06, 2008 - PHILADELPHIA - Researchers at the University of Pennsylvania School of Medicine have discovered that introns, or junk DNA to some, associated with RNA are an important molecular guide to making nerve-cell electrical channels. Senior author James Eberwine, PhD, Elmer Bobst Professor of Pharmacology, and lead authors Kevin Miyashiro, and Thomas J. Bell, PhD, both in Eberwine's lab, report their findings in this week's early online edition of the Proceedings of the National Academy of Sciences.

In nerve cells, some ion channels are located in the dendrites, which branch from the cell body of the neuron. Dendrites detect the electrical and chemical signals transmitted to the neuron by the axons of other neurons. Abnormalities in dendritic electrical channels are involved in epilepsy, neurodegenerative diseases, and cognitive disorders, among others.

Introns are commonly looked on as sequences of "junk" DNA found in the middle of gene sequences, which after being made in RNA are simply excised in the nucleus before the messenger RNA is transported to the cytoplasm and translated into a protein. In 2005, the Penn group first found that dendrites have the capacity to splice messenger RNA, a process once believed to only take place in the nucleus of cells.

Now, in the current study, the group has found that an RNA encoding for a nerve-cell electrical channel, called the BK channel, contains an intron that is present outside the nucleus. This intron plays an important role in ensuring that functional BK channels are made in the appropriate place in the cell.

When this intron-containing RNA was knocked out, leaving the maturely spliced RNA in the cell, the electrical properties of the cell became abnormal. "We think the intron-containing mRNA is targeted to the dendrite where it is spliced into the channel protein and inserted locally into the region of the dendrite called the dendritic spine. The dendritic spine is where a majority of axons from other cells touch a particular neuron to facilitate neuronal communication," says Eberwine. "This is the first evidence that an intron-containing RNA outside of the nucleus serves a critical cellular function."

"The intron acts like a guide or gatekeeper," says Eberwine. "It keys the messenger RNA to the dendrite for local control of gene expression and final removal of the intron before the channel protein is made. Just because the intron is not in the final channel protein doesn't mean that it doesn't have an important purpose."

The group surmises that the intron may control how many mRNAs are brought to the dendrite and translated into functional channel proteins. The correct number of channels is just as important for electrical impulses as having a properly formed channel.

The investigators believe that this is a general mechanism for the regulation of cytoplasmic RNAs in neurons. Given the central role of dendrites in various physiological functions, they hope to relate this new knowledge to understanding the molecular underpinnings of memory and learning, as well as components of cognitive dysfunction resulting from neurological disease.

Abstract of PNAS paper: Cytoplasmic BKCa channel intron-containing mRNAs contribute to the intrinsic excitability of hippocampal neurons

Thomas J. Bell*, Kevin Y. Miyashiro*, Jai-Yoon Sul*,, Ronald McCullough, Peter T. Buckley*,, Jeanine Jochems*, David F. Meaney¶, Phil Haydon, Charles Cantor||,**, Thomas D. Parsons,, and James Eberwine*,,**

Penn Genome Frontiers Institute, and Departments of *Pharmacology, Neuroscience, ¶Bioengineering, and Otorhinolaryngology, School of Medicine, University of Pennsylvania, Philadelphia, PA 19104; Program of Molecular and Cellular Biology and Biochemistry, Boston University, Boston, MA 02215; ||Sequenom, Inc., 3595 John Hopkins Court, San Diego, CA 92121; and Department of Clinical Studies, New Bolton Center, School of Veterinary Medicine, University of Pennsylvania, Kennett Square, PA 19348

Contributed by Charles Cantor, December 17, 2007 (sent for review December 5, 2007)

Abstract

High single-channel conductance K+ channels, which respond jointly to membrane depolarization and micromolar concentrations of intracellular Ca2+ ions, arise from extensive cell-specific alternative splicing of pore-forming α-subunit mRNAs. Here, we report the discovery of an endogenous BKCa channel α-subunit intron-containing mRNA in the cytoplasm of hippocampal neurons. This partially processed mRNA, which comprises 10% of the total BKCa channel α-subunit mRNAs, is distributed in a gradient throughout the somatodendritic space. We selectively reduced endogenous cytoplasmic levels of this intron-containing transcript by RNA interference without altering levels of the mature splice forms of the BKCa channel mRNAs. In doing so, we could demonstrate that changes in a unique BKCa channel α-subunit intron-containing splice variant mRNA can greatly impact the distribution of the BKCa channel protein to dendritic spines and intrinsic firing properties of hippocampal neurons. These data suggest a new regulatory mechanism for modulating the membrane properties and ion channel gradients of hippocampal neurons.

[Some are still stuck in the futile exercise of "proving" the vast majority of the non-genic parts of the genome to be "junk" (since negative evidence is impossible to establish, "proving" that any part of the genome is functionally not understood says absolutely nothing about its potential function). While pseudo-scientists entertain themselves with endless "debate", stellar scientists such as the above establish unprecedented breakthroughs; in this case, providing evidence for the genomic underpinning of one of the most basic scientific problems of the functioning of the nervous system - how the synaptic contacts among neurons may be regulated by essentially the same mechanism that regulates genome expression. While the above study was aimed at the hippocampus, it is expected that these pioneering results will catapult the genomic understanding of cerebellar neuronal networks - where the plasticity of Purkinje cell/parallel fiber synapses has been in the forefront of interest at least since the late David Marr (1969) - pellionisz_at_junkdna.com, February 7, 2008]

The Final Meltdown of JunkDNA Myth

[It is the end when a "thought police" is to pontificate what is, or what is not, "right" to think - AJP]

The above "poll" was taken in a blog run by a scientist, famed for his belief that the "Junk" DNA misnomer still has some credibility left. Evidently, there is a total disarray even within his (biased) readership - a veritable "conceptual meltdown" regarding the myth that the non-genic part of the human genome (98.7%) is devoid of function ("junk"). Contrary to his comment , more voters (300) attributed importance to at least 50% of the hitherto unknown non-genic sequences of the human genome, than the number of voters (295) who think that up to 50% of the human genome might be dispensable.

When even on a biased blog the majority of the credibility of "perhaps the biggest mistake in the history of molecular biology" is gone, it might be a good time to prudently abandon the myth.

In science, it has already happened. "Junk" DNA was formally abandoned as a scientific term by the International PostGenetics Society in 2006. Eight months later, the USA Government-sponsored ENCODE (with 11 countries participating) rendered the old dogma "officially" obsolete. The ENCODE release of the 14th of June, 2007 put all US R&D personnel on notice that upholding the myth of "junk" DNA exposes any and all US workers to a class action liability lawsuit if "junkDNA diseases", threatening the lives of many millions of USA citizens, continue to be neglected - or worse, some might be subject to charges of openly advocating and organizing such negligence. Human lives are at stake.

While nobody in the USA is likely to take on such a crushing liability without substantial reason (and it is difficult to think of such a substantial reason), outside of USA jurisdiction (such as in Canada) scientifically irresponsible lip service to the myth of "Junk DNA" can still occasionally be spotted in esoteric foreign "blogs" such as the above.

It is fair enough if such "polls" appear as opinions - but taking "public opinion polls" is no substitute for doing actual science.

The actual science is not at all about diverting attention to some "guessing game" of how many percent of the genome is or isn't indispensable. The actual science question is: how does the n% of the genome function?

As we shall see in an upcoming announcement, those upholding either (or worse, both) of the obsolete dogmas - the "Junk" DNA misnomer and the "Central Dogma" - will find themselves blindfolded from breakthroughs in the theoretical advancement of "Post-ENCODE Genomics".

[When blogs focus on theoretical advances - e.g. passing "the Onion test", explaining "ultraconserved non-coding elements", directly targeting genome regulation, providing algorithmic explanation of the hitherto unknown function of repetitive, self-similar sequences, etc. - instead of conducting polls on a "guessing game", we'll see the light at the end of the dark tunnel of a "genome misunderstood" for half a Century - pellionisz_at_junkdna.com, February 3, 2008]

Florida Gives Miami Genetic Institute $80M

By LAURA WIDES-MUNOZ 02.01.08, 1:40 PM ET

Associated Press

MIAMI - The University of Miami received $80 million in a state grant to expand its nascent genetic research institute, Gov. Charlie Crist announced Friday in South Florida's latest move to expand its biotech research hub.

The institute, established last year, focuses on using human genome research to prevent, detect and treat human diseases. The money from the state's Innovation Incentive Fund will enable the university's Miller School of Medicine to add an estimated 300 research and technology jobs, 120 to be filled by the end of the year.

The Institute for Human Genomics was started by acclaimed husband and wife geneticist team Margaret Pericak-Vance and Dr. Jeffery Vance, who left Duke University for Miami in January 2007. They have since brought with them more than 80 members of their team. Pericak-Vance heads the institute. Vance will lead a new genetics department within the medical school.

"What you're seeing today is just the beginning," said Medical School Dean Pascal Goldschmidt.

The university will now bring in experts in areas such as pediatrics and neurology to partner with the institute and study the human genome for clues as to individuals' susceptibility to diseases and conditions such as multiple sclerosis, Alzheimer's disease and autism.

"I really think within the next two to three years we're going to have major breakthrough on autism and the genes that cause it," Pericak-Vance predicted.

The money will lead to more than 1,200 direct and indirect jobs, according to the state. Crist administration officials said the institute would generate more than $3 billion for the economy over 20 years.

The university hopes the genetic institute will serve as a key component of a proposed biotech center it is seeking to build in downtown Miami. The center would rival the Scripps Florida campus now under construction in Jupiter.

[Juan Enriquez ("As the Future Catches You") is justified over and over again. Just as "digital" made countries and regions rise or fall (e.g. Japan and Taiwan towered, while the Soviet Union, which could never make a decent VCR, disappeared), Genomics will be the new watershed between rising and falling empires. Genomics, with its multiple drivers (PostGenetic Medicine, Personal Genomics, Bioenergy and BioDefense, to name a few), oversubscribes resources. Remember when the "Internet boom" started and, mesmerized by company after company turning to the net, some asked: "What will all these companies do with the Internet?" The quick answer was: "What would all these companies do WITHOUT the Internet?" In the age of "PostENCODE Genomics" not only will the USA as a whole have a hard time competing with Europe, India and China, but (especially with meager federal funding) some regions will emerge as winners, while others might end up as losers. Florida, with its population aging and more and more prone to be affected by "Junk DNA diseases", simply cannot afford to "pass" on Genomics. (Nor can it afford to bypass "Biofuels".) - pellionisz_at_junkdna.com, February 1, 2008]

Faith in the landscape of modernity [Francis Collins at Stanford, 5th Feb, 7:30pm]

Stanford website with venue (Feb 5th, 7:30pm Tuesday, Stanford Memorial Hall, video, etc)

The Stanford Daily,
January 28, 2008

By Caitlin Mueller

What place does religion have in modern society? If I were feeling especially snarky (and lazy), I could end this column at twenty-nine words with a simple answer: none. But my editors generally prefer a word count of six to eight hundred, and despite my personal inclination towards atheism, I think there is more to be said. [Francis Collins started as an atheist - apparently, the more we know, the more we know how much we don't know - and how much we have to leave to faith - AJP]

In fact, there is so much more to be said that I won’t feign to be well enough equipped to properly cover the topic. Like most things I write about, there are plenty of experts who are far more informed about the subject of religion than I am. What makes the question of faith in the landscape of modernity especially intriguing, however, is that even the experts cannot begin to agree.

One purported expert is geneticist Francis Collins, the head of the Human Genome Project and a prominent spokesperson for the evangelical Christian faith, who will also be speaking here at Stanford next week. Unlike some of the more old-fashioned and abrasive proponents of religion, Collins’ (modestly) progressive worldview allows for a harmonious - and even complementary - coexistence of science and faith. This distinction is significant. Historically, and even today, the church [which church today? - AJP] has forced followers to renounce the findings of science that contradict the teachings of the Bible, thus alienating most of the more analytically minded population. The newer attitude promoted by Collins has a far more widespread, modern appeal.

In its generous ideological accommodation, it also makes for a brilliant marketing strategy. Rather than engaging in a losing battle against modernity, Collins and his supporters have repackaged their product to sell in modern times. There is much debate about whether faith can indeed be integrated so seamlessly with science, with dissent from both the faithful and scientific camps. I’ll leave this discussion for the likes of Richard Dawkins, Mike Huckabee and you, readers, mostly because I am neither a scientist nor a theologian and my contribution would therefore be amateur at best. [If so, this might be proper to insert a full stop and chop off the rest of the article - lest an admitted "amateur at best" might stray onto subjects beyond the required competence at several key issues - AJP]

[...]

[In the world-famous Stanford Medical School thousands suffer (and perish) because of "Junk DNA diseases". Most experts agree that "faith" (which is not identical to any "church" or "religion") can significantly contribute to healing. Should it, perhaps, be on the urgent agenda of sponsors of Stanford to ensure that patients can maintain, or even increase, their faith? For quite a long time, cancer patients were (erroneously) under the depressing impression that genomic glitches had sealed their fates - and even "modern" medicine (which looked only within the boundaries of "genes", comprising 1.3% of the human genome) could or would do nothing further to help patients. Note that the core statement of Francis Collins is about "the divine language of God": DNA and *mathematics*. Should we show faith to the sponsors that the awesome power of mathematical and computational capabilities will be (indeed, already is being) unleashed towards a fuller, and from the viewpoint of information science sounder, understanding of the basic mathematical principles of genome function? At the previous stop of his tour (Houston, speaking at Baylor and at the Holocaust museum), Francis Collins upheld the torch of faith - both in Science *and* in God. The "known" (science) and the "unknown" (faith) must not be warring; as Francis Collins revealed of the NIH budget, we don't even have the (government) money to finance the unfolding scientific revolution of PostModern Genomics. Warring with faith and the faithful would cost us fourfold. As the Austrian general Montecuccoli proclaimed in the 17th century, "Three things are required to make war: money, money and money". In addition, science would alienate itself from a fourth pool of resources - that of the faithful, who are ready to contribute if we have the required faith - pellionisz_at_junkdna.com, January 31, 2008]

Reinventing the Sequencer

By Kerry Grens

[Helicos' Bill Efcavitch - AJP]

Helicos' Bill Efcavitch is confident that he can produce a machine that can sequence a genome for $1,000 in ten days. It hasn't been an easy road. While giving a tour of Helicos BioSciences' laboratories in Cambridge, Mass., Bill Efcavitch pauses and pats a refrigerator-size 3700 DNA sequencer. The sequencer was made by Applied Biosystems, where Efcavitch worked until 2004. "It's the platform that sequenced the human genome," says Efcavitch, Helicos' senior vice president of product research and development, "and I managed the group that did that." Now, the instrument he helped develop (which Helicos purchased on the used market) serves as an analytical tool to back up a new style of genome sequencer that Efcavitch is developing. Helicos promises to deliver, for just $1,000 and in 10 days, what the many model 3700 instruments delivered over several years for millions of dollars.

[...] Efcavitch pulls open a door on the box, revealing a touch-screen control panel. The customer "will put the reagents into the racks under here, press the go button, and then walk away for ten to 15 days while it does its thing." As simple as that, a genome is sequenced.

The HeliScope, based on Helicos' "single-molecule sequencing" technology, which sequences millions of chunks of DNA by synthesizing complementary strands, should soon see a commercial launch. In his careful speech, which reveals the prominent role science plays in his thinking, Efcavitch says that "the cost going out the door is going to be about ten-to-the-fourth dollars" to sequence a genome using the HeliScope, and would require about 10 days. Efcavitch says he is confident that Helicos will hit its goal eventually. "We have clear technological evolution that will get us to the $1,000 genome."
[...]

Focused on the End User

For Efcavitch, working on a high-risk project is nothing new. For two decades at Applied Biosystems, he led team after team through the development of products, including the model 3700 that ultimately delivered Craig Venter's genome in 2002. "We were under a tremendous amount of pressure to get that platform out the door. It was a very aggressive timeline," he says.

The 3700 may be the instrument for which Applied Biosystems is best known, but Efcavitch says he's more proud of his other accomplishments at the company, which he joined in 1981 after doing a postdoc in Marvin Caruthers' laboratory at the University of Colorado. At the time, Efcavitch was among just a handful of employees at the infant biotech, leading a team of scientists in DNA synthesis technology. Steven Fung, Applied's senior director of genetic analysis research and development, says it was Efcavitch who brought the synthesis technology from Caruthers' lab to the company. "He championed it here and turned it into a very successful business," says Fung.

For the next several years Efcavitch's team worked to make the availability of synthetic oligos more efficient and inexpensive. In 1987, as Applied Biosystems grew, Efcavitch left the oligo group to develop high-performance electrophoresis chromatography, and later, bioanalytical instrumentation, capillary electrophoresis, and high-performance liquid chromatography.

Part of Efcavitch's success in the early days, says Fung, was his connection to customers. "He would go out and talk to customers along with the sales force," he says. That focus on the end user stayed with him as the company grew. In 1991 Efcavitch became the leader of the DNA sequencing group, which developed the 377 sequencer in 1995, the 310 in 1996, and the 3700 in 1999. Joe Smith, Applied's former senior vice president of business development, says that he remembers that during the development of the 3700 sequencer, Efcavitch had the software completely rearchitected. Efcavitch had commissioned a study of the software's complexity. "He came by my office one day with this complexity chart and said, 'Do you believe this?' He said, 'There's no way to fix it without redoing it.'"

Such decisions to start over, or to challenge current technology and improve it, were not always greeted with enthusiasm, says Smith. "He made a lot of unpopular moves at ABI, but they paid off," he says. "The gel business was one of the major ones." Efcavitch proposed making a replenishable electrophoresis gel that could be used in DNA sequencers, rather than casting a new gel every time the user made a new run. "Nobody thought the gel would work," says Smith, but with Efcavitch's insistence and confidence, the replenishable gel is a technology still in use, as is Efcavitch's ability to push through unpopular ideas.

The High Risk of Intuition

In 1999 Efcavitch moved from research and development to business management, taking responsibility for more of the commercial side of developing microarrays. "It was tremendously interesting, but I realized I didn't have revenue lust," he says. "I enjoyed every single minute at Applied Biosystems, but it was time for me to move on."

He left the company in 2004. Helicos' CEO, Stan Lapidus, contacted Efcavitch to interest him in the company. "He has an extraordinary skill mix," Lapidus says, and Efcavitch's work at Applied Biosystems matched just what Lapidus was looking for in a research and development leader.

"My first response was no, sorry. After 10 years of DNA sequencing I didn't want to get into another sequencer," Efcavitch recalls. Nevertheless, he looked into Helicos' platform, a new type of sequencer based on findings from Stanford professor Steve Quake (Proc Natl Acad Sci, 100:3960-4, 2003). "I was struck by the simplicity of the project," he says.

The HeliScope's technology starts by fragmenting a genomic sample into 100-base-pair chunks, which adhere to a surface. Then, the sequencer introduces one of four fluorescent analogs to the strands of DNA. If the analog is the correct counterpart to the next available nucleotide on the strand, it will bind to the DNA template. Using fluorescence microscopy, the HeliScope records which of the four analogs binds to that nucleotide, washes the template of any unused analogs, and moves on to the next nucleotide down the line.
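[As a rough sketch of the cycle just described - a toy Python model, not Helicos' actual chemistry or software; the 8-base template is invented, and the real instrument interrogates millions of strands in parallel:]

    # Toy model of single-molecule sequencing-by-synthesis: one fluorescent
    # analog is offered per cycle; it "binds" only if it complements the next
    # unpaired base of the template; the flash is recorded, the surface is
    # washed, and the cycle repeats one position further down the strand.
    COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

    def sequence_template(template):
        read = []
        for base in template:            # next unpaired template base
            for analog in "ACGT":        # offer the four analogs in turn
                if analog == COMPLEMENT[base]:
                    read.append(analog)  # fluorescence microscopy records it
                    break                # wash away unused analogs, advance
        return "".join(read)

    print(sequence_template("TTACGGAT"))  # -> AATGCCTA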

[...]

To make sure, he turned to members of Helicos' "rock star" scientific advisory board, which includes Steven Chu, the director of Lawrence Berkeley National Laboratory, Harvard genetics professor George Church, and Eugene Myers at the Howard Hughes Medical Institute. Lapidus says his experience in building companies taught him the importance of a strong advisory board, especially to support the ambitious goals of Helicos' technology. When starting the company, Lapidus, Quake, and investor Flagship Ventures appealed to people they knew and had worked with in the past to recruit "the best people with relevant experience," says Lapidus.

Efcavitch says board member David Liu, a professor of chemistry at Harvard University, confirmed the feasibility of the analogs project. Lapidus and the rest of the scientific advisory board backed Efcavitch, and he hired a team of organic chemists to make the new analogs. "The thing I like about Stan and the board of directors is they encourage a Plan B type of mentality," says Efcavitch. "Once the concept was confirmed by the scientific advisory board members, that gave it credibility and we took the risk."


Galvanizing the Troops

Taking that risk finally broke the logjam in 2006. One day at the end of that summer, a Helicos scientist noticed a change in her data after altering a step during the sequencing process. It seemed like it might be just the change that could extend the nucleotide read length. Efcavitch turned the group in that direction, and soon the 25-nucleotide read length was reached. Shortly following, the organic chemists presented the newly designed analogs to the molecular biologists, and the entire single-molecule sequencing technology came together.

The company forged ahead with optimizing the HeliScope and getting it ready for market. In early November 2007, the company announced the price for the sequencer: $1.3 million, and $18,000 for reagents. To get down to the $1,000 price per genome will take about two years of optimizing the chemistry, says Efcavitch, who is pushing hard to get there. Patrice Milos, the chief scientific officer of Helicos, says, "I think that he leads his group with a real sense of urgency. I see that in every meeting - that the time is short and we have a lot to do. He does a lot to galvanize the troops."

Efcavitch is just as confident about the market for the HeliScope as he is about the technology, which is the basis of everything Helicos does. Stock analysts don't appear as confident, however. In November 2007 Helicos' stock dropped from $14 per share to less than $10, after an analyst downgraded it to "sell." Since the company went public in May 2007 the stock has gone as high as $15. As of late November, the market capitalization was roughly $209 million.

The analyst who downgraded Helicos' stock cited competition from other companies, including, perhaps not surprisingly, Applied Biosystems. But while other instruments require amplification or multiple sequencers, the HeliScope is designed to do it all with one step. "Some say, gee, you guys are late, meaning we're not the first out with genome sequencing technology. But I don't think we're too late," Efcavitch says. "It's not about the existing two billion dollar sequencing market, but about the next generation."

[The $2B genome-sequencing business (as of today) is a field of its own. While it will be increasingly hard to keep up with innovations, there is zero question that the price of full sequencing (e.g. with "differential sequencing") will melt down, even more aggressively than the price of digital storage. (In about 1976 we celebrated the purchase of a 4 kb extension board costing us maybe $1,500; last week I bought an 8 Gb stick for $30. In a generation, the price of electronic storage dropped by the unbelievable factor of one hundred million.) How much would a "homeDNA computer", by which you could analyze your own genome, cost - if it existed...? - pellionisz_at_junkdna.com, January 30, 2008]
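[The "factor of one hundred million" in the comment above checks out; a short Python sketch, using the prices as quoted, makes the arithmetic explicit:]

    # Price per byte then and now, from the figures quoted in the comment.
    price_1976 = 1500 / (4 * 1024)       # 4 kb board for ~$1,500
    price_2008 = 30 / (8 * 1024**3)      # 8 Gb stick for $30
    print(price_1976 / price_2008)       # ~1.0e8 - a factor of ~100 million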

SeqWright Announces Personal Genomics Service

HOUSTON, Jan. 24 /PRNewswire/ -- SeqWright Inc., an international leader in the field of contract genomics, announced today that it will begin offering a personal genotyping service to the public. The service, named SeqWright GPS: GENOMIC PROFILING SERVICE (http://www.seqwright.com/gps), will be based on data generated from cutting-edge microarray technology which will genotype nearly one million minute genetic variations, known as Single Nucleotide Polymorphisms (SNPs). The same technology will eventually be used to give customers a profile of their unique DNA copy number variations. Customers will submit a saliva sample from which their DNA will be isolated and analyzed on a SNP-detecting microarray. The SNP results will be used to provide customers a personal genomic database. This information may be used by the customer to infer the risk of developing specific diseases, relatedness to various world populations and ethnic groups, and genetic traits they share with participating family members.

There is a growing body of knowledge relating both SNP and DNA copy number variation with many common diseases. SeqWright plans to empower customers to use their unique genetic profiles by providing them with tools which can compare their SNP genotype results to the most well-established common disease associations. Furthermore, SeqWright will provide customers with updated research information as new disease and health-trait associations are brought to light.
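[What such a comparison tool could look like, in the simplest possible Python sketch - every rsID, genotype and odds ratio below is an invented placeholder, since SeqWright has not published its method:]

    # Hypothetical lookup of customer genotypes against a table of
    # published SNP-disease associations (all entries invented).
    associations = {
        ("rs0000001", "AG"): ("condition X", 1.3),  # odds ratio vs. baseline
        ("rs0000002", "TT"): ("condition Y", 0.8),
    }

    customer = {"rs0000001": "AG", "rs0000002": "CT"}

    for rsid, genotype in customer.items():
        hit = associations.get((rsid, genotype))
        if hit:
            condition, odds = hit
            print(f"{rsid} {genotype}: {condition}, odds ratio {odds}")
        else:
            print(f"{rsid} {genotype}: no matching association on file")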

SeqWright's personal genomics service will compare the customer's genetic markers to European, East Asian and West African populations. By doing so, the company will be able to provide customers with a measure of relatedness to the major populations with whom they share ancestry. SeqWright will provide an option for adults in families to be tested so that a "molecular genealogy" can be inferred for interesting health and physical traits. SeqWright will compare genetic markers between consenting family members to reveal how genetic traits were inherited in the family.

SeqWright's CEO, Fei Lu, M.D., had this to say about the new service: "SeqWright GPS will provide individuals with the ability to apply cutting-edge research to their personal genetic information in the hopes of helping them live longer, healthier, more productive lives. SeqWright is proud to be a part of the personal genomics revolution as we believe it will ultimately have a profound impact on personal health planning throughout the world."

SeqWright recognizes that the privacy of individuals' genetic data is an important concern, and therefore SeqWright will ensure that the only access to the data is that granted by the customer. Furthermore, SeqWright will conduct all genetic testing in its GLP-compliant, CLIA-certified facility with stringent quality controls and powerful data security.

About SeqWright:

SeqWright Incorporated is a world-class genomic services support organization based in Houston, TX with more than a decade of experience specializing in providing access to state of the art Molecular Biology and Genomic services in a highly regulated GLP/CLIA environment. For additional information about SeqWright and its services, please visit http://www.seqwright.com

[The "Personal Genomics" space is getting crowded. Beyond 23andMe, DecodeMe and Navigenics, the Texas-region is putting a strong foot ahead by the well-established (1994) SeqWright genome sequencing company's new service. They indicate going beyond the usual 500,000 or so "SNP"s (point-mutations) also to "CV" (copy number variations) - and it is realistically expected that will bracket the "upside competitor" Knome (that provides full sequencing, from Boston outsourced to Beijing). Richard Gibbs, as a Science Advisor to SeqWright is now in competition not only with Stephen Fodor (Navigenics), but also with George Church (Knome) and Kari Stefansson (DeCodeMe) - pellionisz_at_junkdna.com, January 28, 2008]

Fueling The Future: The oil well of tomorrow may be in a California lab full of genetically modified, diesel-spewing bacteria.

From: Issue 122 | February 2008 | Page 45 | By: Elizabeth Svoboda | Photographs By: Howard Cao

LS9's world headquarters looks like a dorm room on move-out day. The reception area at the biotech company's San Carlos, California, digs is stark white, unashamedly bare. No one has bothered to spring for prints or posters for the walls, not even from Ikea. Haphazard stacks of boxes line every corridor. It's no surprise LS9 doesn't put much of a premium on appearances--after all, its most important employees are patented microbes too small to be seen. "This is where we grow the bacteria," says Steve del Cardayré, the company's vice president for research and development, leading me to a lab space no bigger than your typical college double. He points to a vat containing an oatmeal-like slurry--carbohydrates derived from plant matter that feed the microbes. "After they're finished growing, all we have to do is take the mixture out and spin it, and density makes it separate into its components."

The most important of those components is 21st-century black gold: a compound chemically identical to the diesel fuel that powers millions of U.S. cars and trucks. LS9 leads the newly emerging pack of companies that, with DNA-engineering technology, are custom-creating potentially lucrative species of bacteria that can manufacture fuel on command. LS9's biggest competitor, Emeryville, California-based Amyris Biotechnologies, recently started making bacteria-based diesel in addition to its longtime focus on developing a bioengineered malaria drug. And biotech's big daddy, Craig Venter, a champion of modifying microorganisms to make fuel, has entered the fray; his latest brainchild, Synthetic Genomics, plans to create bugs that excrete hydrogen and ethanol--though, due to the complexity of engineering completely new organisms, the company likely won't produce any fuel for years. But LS9, founded in 2005, has a head start on its rivals--and is closest to putting bacterial gas in your tank.

As crude-oil prices have risen toward the $100-per-barrel mark, the arguments for alternative fuel sources have grown stronger. "What intrigued me was the strong economic case for bacteria fuel," says LS9 president Robert Walsh, who joined the startup after 26 years at Royal Dutch Shell. Because the fuel produced by LS9's microbes is virtually pump-ready--requiring only a simple cleaning step to filter out impurities--making bacteria fuel uses 65% less energy than making ethanol, which needs extensive chemical processing that drives up its price and damages its good-for-the-planet cred. LS9's finished product also has 50% more energy content--a gallon of bacteria fuel would last your car about 50% longer than a gallon of ethanol. "LS9's fuel has a number of advantages in terms of cost, security of supply, and impact on the environment," says Noubar Afeyan, CEO of Flagship Ventures, one of the VC firms that contributed to the startup's $20 million of funding in 2007. "It offers a commercially attractive path to sustainability."

That path began unexpectedly at Codon Devices, Harvard geneticist George Church's rapid-DNA-synthesis company. Church and his lab staff had regular brainstorming sessions in which they liked to muse on out-of-the-box applications for the technology they'd developed, which allowed them to redesign the genomes of existing organisms with a few mouse clicks. One day, someone suggested engineering a bacterium that could make fuel, since the lab had just been awarded a Department of Energy grant. "We're dependent on petroleum, so we don't need some alternative to petroleum. We need a way to make petroleum itself," del Cardayré says. "Biology can do it. Over the course of billions of years, cells have figured out that hydrocarbons are a good way to store energy."

Accordingly, LS9 is staking its prospects not on inventing an entirely new biological pathway, but on exploiting an existing one. Bacteria naturally turn the sugars they consume into fatty acids, which are later converted to lipids for storage. By a stroke of genetic serendipity, fatty acids are only a few molecular linkages removed from diesel fuel, so it has been fairly simple for LS9 scientists to tweak existing bacteria--including familiar varieties such as E. coli--to yield new, diesel-producing strains. "We divert those fatty acid pathways," del Cardayré says. "It's like a detour."

The strategy has already met with small-scale success; an assortment of odd-shaped beakers lines the San Carlos lab's shelves, each holding a few teaspoons of amber-colored diesel. Walsh estimates large quantities of the finished fuel will be market-ready in three to five years. The company is also perfecting a bacterium that produces crude oil, which could be sent to refineries and turned into any imaginable petroleum product, from gasoline to Vaseline.

Still, a host of practical problems must be solved before this industry can take off, and some may prove to be deal breakers. For one thing, public skepticism about all things genetically modified, from food to pet goldfish, may make it difficult for these companies to gain regulatory approval for their products. In a 2006 Pew Initiative study, almost a third of respondents said they viewed genetically modified products as unsafe. "The cry right now is for anything to replace petroleum, but $95 crude is masking a lot of the issues," says Martin Tobias, a biodiesel expert and venture capitalist at Ignition Partners. "It's going to be 10 times harder to get something like this available and accepted than if you were using a naturally occurring organism. Think how difficult it is to get genetically engineered drugs approved."

Then there's the multimillion-dollar question of how to translate a beaker of success to global scale. No one has ever made genetically engineered fuel in industrial quantities, so no one knows what's going to happen when companies try to grow their bacteria in vats the size of trailers. Startups producing biodiesel from algae--which are closely related to bacteria--have encountered difficulties when trying to scale up; in large numbers, the organisms sometimes crowd one another out [The understanding of genome regulation is the key. Even Jacob and Monod (1961) realized that the "Operon" regulation decreases, or shuts down, production of proteins that are in overabundance - AJP] and emit toxic waste that halts the production process. "Even if you can do this in a test tube, getting the same kind of quality on a large scale could be an issue," says Tom Todaro, CEO of Targeted Growth, a company that's aiming to increase the efficiency of biodiesel production. "People fail to understand how big the oil and gas industry is--just how much fuel you have to be able to produce in a day to compete."

Church admits the challenges are daunting; he isn't picturing bacteria-fuel pumps at every Mobil station just yet. "We know we'll be competing with hydrogen, ethanol, and electric cars," he says. But in unguarded moments, he dares to dream: "If this works out, much of the current motivation for switching away from hydrocarbons might vanish." Why seek an alternative to petroleum, he figures, when a microscopic army of trillions can churn it out for you 24-7?

[Genome regulation is the key to "Biofuels", "Bioenergy", "Big Pharma applications", and the medicine of "Junk DNA diseases" - and genome regulation is now recognized as an "information science". With multiple bidders for the same resource, chances are that not only has "ENCODE's architect" (Francis Collins, NIH) called for the scientific community to re-think fundamentals (that is, information science), but those in need of solutions will increasingly listen to the results of the called-for "re-thinking" - pellionisz_at_junkdna.com, January 28, 2008]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Life: A Tech-Centric View

By Sonia Arrison

TechNewsWorld

01/25/08 4:00 AM PT

The future of human enhancement is rocketing forward, and many people from a variety of disciplines are contributing. Technology is the common driver, but in the end, simply a tool. Tools can be used for good or evil purposes, and it's up to involved and responsible people to make sure they use the technology and innovations in a beneficial way.

At this week's Digital Life Design (DLD) conference in Germany, renowned scientists Craig Venter, Ph.D., and Richard Dawkins wowed the audience with a conversation about genes and information technology. They discussed how evolution is becoming man-made, which brings up a number of interesting issues.

"Genetics has become a branch of information technology," Dawkins opined. [Dawkins is not breaking any news here. Leroy Hood (Hood, L. and Galas, D. The digital code of DNA, Nature, 2003; 421 (6921) 444-8) said it five years ago - AJP]. There's a good deal of evidence for his statement, including the announcement that Google-funded firm 23andMe launched its Web-based DNA (deoxyribonucleic acid) testing services in Europe this week. 23andMe is one of a number of firms that sample an individual's DNA in order to offer clues to their genetically-driven future.

Increasingly, life is being translated into bits of code to be manipulated in the laboratory. Last year, for instance, Venter's team transplanted the entire genome from one species of Mycoplasma bacteria into another. A few days later, the DNA from the first bacterium completely took over the second and became indistinguishable from the donating bacterium.

The March of Science

"When we look at cells as machines, it makes them very straightforward in the future to design them for very unique utilities," Venter told participants at DLD. Of course, Venter has often referred to the possibility of designing cells to help clean the environment, but his premise could be applied for any purpose. Indeed, biogerontologist Aubrey de Grey is promoting the use of bio-engineering methods to repair human cells and fight aging. However, some worry that human-driven evolution could bring about harmful, unforeseen consequences. [Limited understanding is always dangerous. Neglecting either (or both) informatics and genome regulation is a great challenge to Synthetic Genomics - AJP]

"Accusing a scientist of playing God is obviously stupid," Dawkins said, "but what is not obviously stupid is accusing a scientist of endangering the future of the planet by doing something that could be irreversible." [See comment below that in the Manhattan project the first monies ($7,000) were requested -and immediately granted- to stop nuclear chain reaction. Genome regulation appears to be fractal - defects induced in the exquisite regulatory mechanisms can shot down even "resistant" strains - AJP]

He's right about that, and that's why a conversation about how scientists can safely create new life forms and alter existing ones has already begun in labs around the globe and at organizations like the Foresight Nanotech Institute. Indeed, most scientists are developing their work with an eye to improving the human lot, not the other way around.

There is no stopping the march of science, as some hope, including Sun Microsystems' (Nasdaq: JAVA) cofounder Bill Joy. [Bill Joy is the "smartest cookie" you can find. He certainly would not want to lose out in Post-ENCODE Genomics. Smart as he is, he'd rather pay the basic "insurance premium" (say "$7,000") that the gen(i)e would not get out of the bottle - AJP]. Instead, the focus should be on the best practices created within the community itself. Plus, of course, technologies such as gene manipulation and nanotechnology are only pieces of the larger puzzle. Robotics is another.

Transferring Life

Partnering labs in North Carolina and Japan recently demonstrated that by implanting electrodes into the brain of a monkey, it is possible for a monkey to learn to make a robot walk just by thinking about it. That has incredible consequences not just for paralyzed individuals, but also for military and other applications. Then there's organ-growing.

University of Minnesota researchers recently created a beating heart in the lab by injecting fresh heart cells into a non-living rat heart matrix. This follows on the heels of a technology dubbed "organ-printing" which has also created beating heart cells. However, innovation, Venter noted, may not be limited to the planet Earth.

"We can transfer life across the universe as digital information," Venter explained. "Somebody else could, in their laboratory, build that genetic code and replicate it. Perhaps publishing my genome on the Internet had more applications than I thought." [ "Beam me up, Craig" - said Venter - and the Universe could synthesize uncounted copies of Venter, based on the pure digital information. By picking up another "full sequence", the Universe may erroneously conclude that Planet Earth is a nest of racists .... AJP] -

All these examples show that the future of human enhancement is rocketing forward, and many people from a variety of disciplines are contributing. Technology is the common driver but is, in the end, simply a tool. Tools can be used for good or evil purposes, and it's up to involved and responsible people to make sure they use the technology and innovations in a beneficial way. There is reason to believe that enhancement technologies will be used for both good and evil purposes, but in an age of greater communication, globalization and transparency, the good should stand a better chance of winning.

Martians attempting to clone Venter aren't likely to come along anytime soon, but a whole host of other innovations are on their way and are set to change the direction and pace of human evolution. Meanwhile, 2008 is the year when the public should start to notice that evolution itself is evolving. [Indeed, - AJP]

Sonia Arrison, a TechNewsWorld columnist, is senior fellow in technology studies at the California-based Pacific Research Institute.

[One cannot resist inserting a bit or two of "sci-fi" here. What if, on any given day, J. Craig Venter received advice from a more intelligent part of the Universe on how to fix his genome to avoid his untimely demise??? Or, for that matter, what if Bill Joy received advice on how to safely shut down any genome regulatory system going haywire? - pellionisz_at_junkdna.com, January 25, 2008]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Venter Institute Scientists Create First Synthetic Bacterial Genome

Publication Represents Largest Chemically Defined Structure Synthesized in the Lab
Team Completes Second Step in Three Step Process to Create Synthetic Organism

ROCKVILLE, Md., Jan. 24 /PRNewswire-USNewswire/ -- A team of 17 researchers at the J. Craig Venter Institute (JCVI) has created the largest man-made DNA structure by synthesizing and assembling the 582,970 base pair genome of a bacterium, Mycoplasma genitalium JCVI-1.0. This work, published online today in the journal Science by Dan Gibson, Ph.D., et al, is the second of three key steps toward the team's goal of creating a fully synthetic organism. In the next step, which is ongoing at the JCVI, the team will attempt to create a living bacterial cell based entirely on the synthetically made genome.

The team achieved this technical feat by chemically making DNA fragments in the lab and developing new methods for the assembly and reproduction of the DNA segments. After several years of work perfecting chemical assembly, the team found they could use homologous recombination (a process that cells use to repair damage to their chromosomes) in the yeast Saccharomyces cerevisiae to rapidly build the entire bacterial chromosome from large subassemblies.

"This extraordinary accomplishment is a technological marvel that was only made possible because of the unique and accomplished JCVI team," said J. Craig Venter, Ph.D., President and Founder of JCVI. "Ham Smith, Clyde Hutchison, Dan Gibson, Gwyn Benders, and the others on this team dedicated the last several years to designing and perfecting new methods and techniques that we believe will become widely used to advance the field of synthetic genomics."

The building blocks of DNA--adenine (A), guanine (G), cytosine (C) and thymine (T)--are not easy chemicals to artificially synthesize into chromosomes. As the strands of DNA get longer, they become increasingly brittle, making them more difficult to work with. Prior to today's publication, the largest synthesized DNA contained only 32,000 base pairs.

Thus, building a synthetic version of the M. genitalium genome, with its more than 580,000 base pairs, presented a formidable challenge. However, the JCVI team has expertise in many technical areas and a keen biological understanding of several species of mycoplasmas.

"When we started this work several years ago, we knew it was going to be difficult because we were treading into unknown territory," said Hamilton Smith, M.D., senior author on the publication. "Through dedicated teamwork we have shown that building large genomes is now feasible and scalable so that important applications such as biofuels can be developed."

Methods for Creating the Synthetic M. genitalium

The process to synthesize and assemble the synthetic version of the M. genitalium chromosome began first by resequencing the native M. genitalium genome to ensure that the team was starting with an error free sequence.

After obtaining this correct version of the native genome, the team specially designed fragments of chemically synthesized DNA to build 101 "cassettes" of 5,000 to 7,000 base pairs of genetic code. As a measure to differentiate the synthetic genome versus the native genome, the team created "watermarks" in the synthetic genome. These are short inserted or substituted sequences that encode information not typically found in nature. Other changes the team made to the synthetic genome included disrupting a gene to block infectivity. To obtain the cassettes the JCVI team worked primarily with the DNA synthesis company Blue Heron Technology, as well as DNA 2.0 and GENEART.
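[JCVI has not disclosed its exact watermark scheme in this release; one simple way to encode text in DNA - an arbitrary two-bits-per-base mapping in Python, chosen only for this sketch - would be:]

    # Pack each character's 8 bits into 4 DNA bases, two bits per base.
    # The bit-to-base mapping is an arbitrary choice for this sketch,
    # not JCVI's published watermarking scheme.
    BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}

    def encode_watermark(text):
        bits = "".join(f"{ord(ch):08b}" for ch in text)
        return "".join(BASE_FOR_BITS[bits[i:i + 2]]
                       for i in range(0, len(bits), 2))

    print(encode_watermark("JCVI"))  # -> CAGGCAATCCCGCAGC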

From here, the team devised a five stage assembly process where the cassettes were joined together in subassemblies to make larger and larger pieces that would eventually be combined to build the whole synthetic M. genitalium genome. In the first step, sets of four cassettes were joined to create 25 subassemblies, each about 24,000 base pairs (24kb). These 24kb fragments were cloned into the bacterium Escherichia coli to produce sufficient DNA for the next steps, and for DNA sequence validation.

The next step involved combining three 24kb fragments together to create 8 assembled blocks, each about 72,000 base pairs. These 1/8th fragments of the whole genome were again cloned into E. coli for DNA production and DNA sequencing. Step three involved combining two 1/8th fragments together to produce large fragments approximately 144,000 base pairs or 1/4th of the whole genome.

At this stage the team could not obtain half genome clones in E. coli, so the team experimented with yeast and found that it tolerated the large foreign DNA molecules well, and that they were able to assemble the fragments together by homologous recombination. This process was used to assemble the last cassettes, from 1/4 genome fragments to the final genome of more than 580,000 base pairs. The final chromosome was again sequenced in order to validate the complete accurate chemical structure.
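[The size bookkeeping of the five-stage assembly is easy to verify with a short Python sketch, using only the piece counts reported above and the published genome length:]

    # Average piece size at each reported stage of the hierarchical assembly:
    # 101 cassettes -> 25 subassemblies -> 8 blocks -> 4 quarters -> 1 genome.
    genome_bp = 582_970
    stages = [("cassettes", 101), ("24 kb subassemblies", 25),
              ("72 kb blocks", 8), ("quarter genomes", 4), ("full genome", 1)]

    for name, pieces in stages:
        print(f"{name:20s} {pieces:3d} pieces of ~{genome_bp // pieces:,} bp each")
    # cassettes come out at ~5,771 bp, consistent with the 5,000-7,000 bp
    # reported above; the later stages match the ~24 kb, ~72 kb and ~144 kb figures.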

The synthetic M. genitalium has a molecular weight of 360,110 kilodaltons (kDa). Printed in 10 point font, the letters of the M. genitalium JCVI-1.0 genome span 147 pages.

"This is an exciting advance for our team and the field. However, we continue to work toward the ultimate goal of inserting the synthetic chromosome into a cell and booting it up to create the first synthetic organism," said Dan Gibson, lead author.

The research to create the synthetic M. genitalium JCVI-1.0 was funded by Synthetic Genomics, Inc.

Background/Key Milestones in JCVI's Synthetic Genomics Research

The work described by Gibson et al. has its genesis in research by Dr. Venter and colleagues in the mid-1990s after sequencing M. genitalium and beginning work on the minimal genome project. This area of research, trying to understand the minimal genetic components necessary to sustain life, began with M. genitalium because it is a bacterium with the smallest genome that we know of that can be grown in pure culture. That work was published in the journal Science in 1995.

In 2003 Drs. Venter, Smith and Hutchison made the first significant strides in the development of a synthetic genome by their work in assembling the 5,386 base pair bacteriophage ΦX174 (phi X). They did so using short, single strands of synthetically produced, commercially available DNA (known as oligonucleotides) and using an adaptation of polymerase chain reaction (PCR), known as polymerase cycle assembly (PCA), to build the phi X genome. The team produced the synthetic phi X in just 14 days.

In June 2007 another major advance was achieved when JCVI researchers led by Carole Lartigue, Ph.D., announced the results of work on genome transplantation methods allowing them to transform one type of bacteria into another type dictated by the transplanted chromosome. The work was published in the journal Science, and outlined the methods and techniques used to change one bacterial species, Mycoplasma capricolum, into another, Mycoplasma mycoides Large Colony (LC), by replacing one organism's genome with the other one's genome.

Genome transplantation was the first essential enabling step in the field of synthetic genomics as it is a key mechanism by which chemically synthesized chromosomes can be activated into viable living cells. Today's announcement of the successful synthesis of the M. genitalium genome is the second step leading to the next experiments to transplant a fully synthetic bacterial chromosome into a living organism and "boot up" the cell.

[…]

Contact:
Heather Kowalski
SOURCE J. Craig Venter Institute

=== [The above authentic announcement by the precise Heather Kowalski should override the widespread but less informed media excesses, such as the piece below - AJP] ===

Artificial life being created

The Telegraph

[...] The scientists led by the human genome pioneer Dr Craig Venter want to create new kinds of bacterium, living chemical factories if you like, to make new types of bugs which can be used as green fuels to replace oil and coal, digest toxic waste or absorb carbon dioxide and other greenhouse gases from the atmosphere. [Venter set his sights on such "designer microbes" ("MicrobeSoft") via e.g. their "minimum genome" (a patentable, stripped-down version of Myco G.). *There is no word about it in this news*. While the accomplishment is without question a major one, it must not be confused with that rather different goal - AJP]

The feat will trigger excitement and unease in equal measure along with widespread debate about the ethics of creating new species, which Dr Venter believes will be a major step in the history of our species. One critic of what some call Synthia put it more trenchantly: "God has competition."

Rumours have circulated for weeks that they have achieved the feat but, speaking from Davos, Switzerland, Dr Venter tells The Daily Telegraph: "No we have not. There are a number of serious constraints on that happening and we are working diligently to get rid of them. If we had succeeded it would be part of this paper. As soon as we have it, I doubt that we would be able to keep it a secret. Nor would we want to." But he believes success is only "a matter of time." [...] [Venter may have referred to "a number of serious constraints" including "genome regulation". If the "minimal genome" boots up with the intact regulatory system (7% of the M. genitalium genome), it will tell a lot about "genome regulation". If it won't boot unless the regulatory system is tailored to the "minimal genome", that would also tell a lot... Sure, "it is a matter of time" whether regulation is looked into right away or later. The question is how much that time is worth... - AJP]

"The Venter Institute calls this synthetic life version 1.0 and acknowledges that it doesn't quite work yet - however, society shouldn't wait for the next upgrade - the stakes are far too serious," explains Kathy Jo Wetter of ETC Group.

[Without question, a major landmark for Synthetic Genomics and for J. Craig Venter - in the precise rendering of Ms. Kowalski. pellionisz_at_junkdna.com, January 24, 2008]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

NIH Announces New Initiative In Epigenomics


Article Date: 23 Jan 2008 - 2:00 PST

The National Institutes of Health (NIH) will invest more than $190 million over the next five years to accelerate an emerging field of biomedical research known as epigenomics.

"Disease is about more than genetics. It's about how genes are regulated - how and when they work in both health and disease," said NIH Director Elias A. Zerhouni, M.D. "Epigenomics will build upon our new knowledge of the human genome and help us better understand the role of the environment in regulating genes that protect our health or make us more susceptible to disease."

The NIH is making this a priority in its research portfolio, taking it on as an NIH Roadmap initiative. Grant applications are now being accepted for research on epigenome mapping centers, epigenomics data analysis and coordination, technology development in epigenetics, and discovery of novel epigenetic marks in mammalian cells.

Epigenetics focuses on processes that regulate how and when certain genes are turned on and turned off, while epigenomics pertains to analysis of epigenetic changes across many genes in a cell or entire organism.

Epigenetic processes control normal growth and development. Diet and exposure to environmental chemicals throughout all stages of human development, among other factors, can cause epigenetic changes that may turn certain genes on or off. As a result, changes in genes that would normally protect against a disease could make people more susceptible to developing that disease later in life. Researchers also believe some epigenetic changes can be passed on from generation to generation.

The Epigenomics Program is a trans-NIH effort led by several NIH institutes including the National Institute of Environmental Health Sciences, the National Institute on Drug Abuse (NIDA), the National Institute on Deafness and Other Communication Disorders, the National Institute of Diabetes and Digestive and Kidney Diseases, the National Institute of Neurological Disorders and Stroke, and the National Center for Biotechnology Information of the National Library of Medicine. Efforts of these Institutes are coordinated by the Office of Portfolio Analysis and Strategic Initiatives (OPASI) as part of the NIH Roadmap.

"Epigenetic mechanisms are important in development, aging, and learning and memory, but our understanding of epigenetic processes is still very much in its infancy," said NIDA Director Nora D. Volkow. "A deeper understanding of epigenetics will enable researchers to make significant strides in understanding and treating many diseases including cancers, obesity, depression, and addiction."

Increased interest in epigenetics has spawned international research collaborations that have pushed the field forward in recent years. With the NIH Roadmap initiative, the United States will increase its commitment to epigenetics research and accelerate the pace of biomedical discovery in the next decade.

For example, epigenetics may help explain how some people are predisposed to certain illnesses such as cardiovascular disease, diabetes and hypertension. Several studies have documented that children born to mothers who did not get adequate nutrition during pregnancy were more likely to develop type 2 diabetes and coronary heart disease later in life. The theory is that epigenetic changes occur during fetal development in genes that regulate sugar absorption and metabolism, allowing survival on little food; when such individuals later encounter an environment where food is plentiful, these same changes lead to the development of diabetes.

"Although we are beginning to understand a great deal about the basic science of epigenetics, this initiative heralds its application to human health and disease. This initiative will connect real life problems with cutting edge science," said Dr. Alan Krensky director of OPASI.

NIH hopes to achieve the following goals with the Epigenomics Program:

- Coordinate and develop a series of reference epigenome maps, analogous to genome maps, which will be publicly available to facilitate research in human health and disease.

- Evaluate epigenetic mechanisms in aging, development, environmental exposure including physical and chemical exposures, behavioral and social environments, and modifiers of stress.

- Develop new technologies for epigenetic analysis of single cells and imaging of epigenetic activity in living organisms.

- Engage the international community to define standard practices and platforms, and to develop new laboratory tools such as antibodies.

The overall hypothesis of the NIH Roadmap Epigenomics Program is that the origins of health and susceptibility to disease are, in part, the result of epigenetic regulation of the genetic blueprint. Researchers believe that understanding how and when epigenetic processes control genes during different stages of development and throughout life will lead to more effective ways to prevent and treat disease.

Additional information about the Epigenomics Program, including funding opportunities, is available through the NIH Roadmap website.

The Epigenomics Program is part of the NIH Roadmap for Medical Research. The Roadmap is a series of initiatives designed to pursue major opportunities and gaps in biomedical research that no single NIH institute could tackle alone, but which the agency as a whole can address to make the biggest impact possible on the progress of medical research. Additional information about the NIH Roadmap can be found at http://www.nihroadmap.nih.gov.

The National Institutes of Health (NIH) - The Nation's Medical Research Agency - includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. It is the primary federal agency for conducting and supporting basic, clinical and translational medical research, and it investigates the causes, treatments, and cures for both common and rare diseases. For more information about NIH and its programs, visit http://www.nih.gov.

[The PostModern Era of Genomics ("Genomics beyond Genes") was coined "PostGenetics" by IPGS (2005; as the second Century of Genetics started after Bateson's coining of "Genetics" in 1905). In 2006, IPGS at its European Inaugural formally abandoned "Junk DNA" as a scientific term, and the "Methylation Society" was renamed the "Epigenetics Society" in the same year. With the release by June 14, 2007 of the first report of the US-Government "ENCODE" project, "Post-ENCODE Genomics" became the generic term. Calling the new era "Epigenetics" or "Epigenomics" may create much confusion. "Epigenesis" was introduced in 1896 (112 years ago) by Oscar Hertwig (1849-1922) in "The biological problem of today: preformation or epigenesis? The basis of a theory of organic development" (W. Heinemann: London, 1896). Since that time, dozens of very different definitions have been used. Presently, the use by a US Government Agency of the name of an existing for-profit company (based in Seattle) may have the appearance of favoritism - pellionisz_at_junkdna.com, January 24, 2008]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Navigating the Genome for Autism Clues

January 23, 2008

Two new studies connect structural variations to 1 percent of autism cases, a finding that may help unlock the enigmatic disorder's genetic footprint

By Nikhil Swaminathan

A pair of research teams recently linked large-scale mutations on one of the body's 23 pairs of chromosomes (which carry cells' genetic code) to autism, a finding that helps shed light on a disorder whose genetic underpinnings have confounded scientists for decades. The revelation represents the most concrete evidence to date that structural variations in the genome play a crucial role in the condition's development, marked by symptoms that include a failure to socially connect, communication difficulties and obsessive behavior.

Scientists this month unveiled evidence that an estimated 1 percent of all autism cases may stem from a structural change involving 25 to 30 genes on chromosome 16. On January 9, a team led by researchers at Massachusetts General Hospital (MGH) and Children's Hospital Boston announced that it had found copy number variations—deletions or duplications of segments of genetic code that alter the number of copies of a gene a person carries—in 12 of 1,400 autism sufferers it was studying. (A person normally receives two copies of each gene, one from each parent.) The researchers report in The New England Journal of Medicine (NEJM) that they replicated the finding in two other cohorts—one with 500 participants and another with 300 individuals diagnosed with autism. A week later, a Canadian research team announced in The American Journal of Human Genetics that it had uncovered genetic kinks in the same region in a sample of 927 people—427 of whom suffer from autism.

The back-to-back findings come amid a stream of evidence pointing to genetic rearrangements as key culprits in autism. (Chromosome 16 is the second instance of a copy number variation to be fingered as a causative mutation of the condition. Scientists first reported a link between a surplus of genetic material on chromosome 15 and autism in 1994, a finding that has since been replicated and confirmed to be a copy number variation.)

Less than a year ago the Autism Genome Project (AGP) Consortium, a collective of more than 120 scientists representing various institutions around the world, reported in Nature Genetics that it had found similar chromosomal variants in several autism patients. About the same time, scientists at Cold Spring Harbor Laboratory in Long Island, N.Y., focusing on families with one autistic child, reported that an estimated 10 to 30 percent of all reported cases of autism may be caused by new (or spontaneous) mutations in the number of copies of genes in children (that were not found in either parent).

Citing the growing body of evidence of links between copy number variations and diseases such as autism, an international science consortium announced yesterday that it plans to sequence the genomes of 1,000 people from around the world in an attempt to flush out genetic suspects. "The importance of these variants has become increasingly clear with surveys completed in the past 18 months that show these differences in genome structure may play a role in susceptibility to certain conditions, such as mental retardation and autism," the National Institutes of Health, one of the participant organizations, said in a statement.

Mark Daly, an assistant professor of medicine at Harvard University Medical School and MGH as well as co-author of the NEJM study, notes that "It is extremely unusual to see these spontaneous deletions and duplications in a region that's usually a copy number–stable region. This specific spontaneous mutation, which we found in a sufficient number of cases, announced itself as an autism risk factor."

"I'm really happy because we found the same result on [chromosome] 16 using a Canadian cohort," adds Stephen Scherer, director of The Center for Applied Genomics at The Hospital for Sick Children in Toronto, a co-author on the second study. "Validation of a complex disease is very exciting."

The search for autism-related genes has led to stunning evidence of the complexity of the disease, which is estimated to affect one in every 150 children born worldwide. Autism involves a spectrum of illnesses that all have similar symptoms, including Rett syndrome, which researchers have linked to a specific genetic mutation. The syndrome only strikes girls and is characterized by asocial behavior and cognitive deficits. But the exact causes of the vast majority of autism-related disorders remain a mystery: classic genetic studies, which tie the ailment to single nucleotide polymorphisms (SNPs—deletions, additions or substitutions of one unit in the genetic code), have returned a number of different markers with very few well-replicated candidates.

Some research teams over the past five years have used microarray or gene-chip technology to compare genomes and quickly scan them for variations of copy numbers on each chromosome. Daly says that microarray data provided his team with an early analysis of the vast data sets it is reviewing; he says scientists plan to follow up their initial findings with an SNP-association study. The Canadian researchers—some of whom were members of The AGP Consortium—specifically set out to examine copy number glitches.

Daly believes these structural events are likely behind only some cases of autism. "I think the middle ground is that some of the genetics underlying autism have their roots in these spontaneous deletions or duplications," he says. "At the same time, across our data set, we don't see regions like this. It's a part of the puzzle, but there's a lot more to it than just this type of event."

Michael Wigler, a geneticist at Cold Spring and the senior author of the 2007 Science paper, believes that successfully hunting down these lesions in the genome will prove crucial to unraveling autism. At the least, he says, they will point researchers in the right direction.

"The general approach of looking at copy number variation as the cause for genetic disease has probably taken one of those exponential—it's probably hyper- —exponential leaps," he says. "So, in 2003 we published [a previous] Science paper, which showed that there is [a] relatively large amount of copy number variation among normal, healthy people."

Both of the new studies found that copy number events involving either duplication or deletion of the 25 to 30 chromosome-16 genes—several of which are known to play a role in the developing brain—appear to cause autism. "That region—or rather, a gene in that region—is apparently extremely sensitive to [copy-number] dosage," Daly says. "Too much or too little causes developmental differences."

In the case of a deletion of this DNA segment, the damaged gene likely will not produce enough protein. This can potentially cause myriad malfunctions, because proteins typically work in complexes. Hence, a deficiency of one can hobble the entire collaborative effort. By the same token, a surfeit of one protein caused by duplication would also cause malformed complexes.

There is another wrinkle in the Canadian data set, Scherer says: of the 427 autism sufferers assessed, 7 percent showed evidence of copy number variation. Within that group, 27 individuals (11 percent) had two or more of these spontaneous deletions or duplications (or one of each). One of them had a deletion on chromosome 22 that affected, among other genes, SHANK3, which has been implicated in mental retardation. This deletion was accompanied by a duplication of a genetic segment on chromosome 20.

Scherer calls the mutation on chromosome 20 a "modifier" that adds to the complexity of the phenotype. "There are some of these copy number changes that increase the risk of being autistic, but they may need to be inherited with other changes that culminate," in the disorder, he says. "It's going to vary based on your sex, your genetic background, and possibly the environment." He believes that a person without the chromosome-20 alteration is likely to suffer mental retardation and one who has it is more likely to develop autism.

That is in line with a unified genetic theory of autism proposed by Wigler, who performed a rigorous statistical analysis of a large data set cobbled together by the Autism Genetic Resource Exchange, a group of autism researchers who share data collected from families with autistic children. Wigler proposed the idea of modifying genes partly to account for the disparity in autism incidence between boys and girls. (Boys are four times more likely than girls to develop autism.)

So how significant is the latest finding about chromosome 16? "It's going to be the intense subject of functional studies, studies in model organisms, and genetic follow-up studies in human samples," Daly says. Clinicians monitoring children with the deletion or the duplication of material there, he explains, may eventually be able to find a matching set of specific symptoms that accompany those particular genetic events. "There are folks at the Children's Hospital," Daly notes, who "are already turning this into a critical screening tool."

For Scherer's part, he believes, from his preliminary observations of one family, that a deletion of DNA on the 16th chromosome may result in autism accompanied by mental retardation, as well as seizures and disruptions in the aortic valve (one of the heart's four valves). "In fact," he says, "if you see…[this pair of symptoms], along with autism, it might predict the [chromosome-] 16 deletion."

If that bears out, then looking for the deletion ahead of time could be crucial in early intervention. "We tried to do a thorough study," Scherer says, "so that we could convince people to do a microarray analysis as part of their workup."

[Obesity related to 6,000 genes - now autism related to 25-30 genes (and uncounted copy number variations). A single "gene" causing a hereditary disease (although in some instances it does happen) is more the exception than the rule. Increasingly, scores of "genes" (mostly their intronic segments) and intergenic "Junk DNA" segments acting as regulatory sequences are found to be associated with hereditary diseases. The genome as a massively parallel, recursive system (look for announcement) is moving to the forefront - January 23, 2008 pellionisz_at_junkdna.com]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^



International effort to catalog complete DNA of 1,000 people

WASHINGTON (AFP) — Researchers on three continents will join together to catalog the genomes of 1,000 people in an ambitious project that they hope will help determine genetic roots and factors for human disease, the group announced Tuesday.

England's Wellcome Trust Sanger Institute, the US National Human Genome Research Institute (NHGRI), and China's Beijing Genomics Institute-Shenzhen are forming the 1000 Genomes Project to create a new map of the human genome with the most detailed data yet on DNA variations with biomedical significance.

"The 1000 Genomes Project will examine the human genome at a level of detail that no one has done before," said Richard Durbin of the Sanger Institute.

Durbin, who will co-chair the consortium, said the project only became possible in the past two years due to strides in genetic sequencing technology, bioinformatics and other research techniques.

"We are moving forward to build a tool that will greatly expand and further accelerate efforts to find more of the genetic factors involved in human health and disease," he said in a statement.

The research aims to provide more details of the one percent of DNA that varies from person to person and which is often responsible for differences in susceptibility to disease and reaction to treatments among individuals.

Researchers have already cataloged dozens of specific regions of variation in the human genome, or haplotypes, and associated them with common diseases like coronary artery disease, breast cancer, arthritis and age-related macular degeneration.

But current DNA maps are not highly detailed, and researchers want a finer picture of the human genome to better pinpoint the genetic factors in disease.

"Our existing databases do a reasonably good job of cataloging variations found in at least 10 percent of a population," said NHGRI Francis Collins [Translation: "we have some vague idea about 10% but are in the dark for 90% of the homo sapiens" - AJP].

"By harnessing the power of new sequencing technologies and novel computational methods, we hope to give biomedical researchers a genome-wide map of variation down to the one percent level," he said. "This will change the way we carry out studies of genetic disease."

The project will map the DNA of a number of specific ethnic groups, including Yoruba in Ibadan, Nigeria; Chinese in Denver, Colorado, and in Beijing; Toscani in Italy; Gujarati Indians in Houston, Texas; [Utah residents with northern European ancestry; Japanese in Tokyo;] Mexicans in Los Angeles; and people of African descent in the US Southwest.

The DNA is collected with permission and all identifying medical and personal information is stripped from the samples to protect their anonymity.

The consortium hopes that new technologies and methods along with the project's ambitious size will help them slash the cost of DNA sequencing to one-tenth of what it would have been just recently.

"This project will examine the human genome in a detail that has never been attempted -- the scale is immense," said Gil McVean of the University of Oxford in England.

"When up and running at full speed, this project will generate more sequence in two days than was added to public databases of all of the past year."

The data generated will be placed on freely accessible databases for scientists worldwide to make use of.

[The targeted list of ethnicities is very nice, though the Welsh, Irish, Scottish and English might feel slightly neglected on the UK side; it might be questioned why "Chinese in Denver, Colorado" (as opposed to the ethnicities of Tibet and countless other groups in China), and why the average US taxpayer should "B-list" the vast majority of ethnic origins of USA citizens. Whose diseases should come first for the US taxpayers? - This project has just opened a Pandora's box of Global Proportions - AJP]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Supergene Labs Design Microbes to Change Sun to Fuel, Eat Waste
By Bob Drummond

[Leaders of Synthetic Genomics: Keasling, Venter, Church - AJP]

Jan. 9 (Bloomberg) -- [....]

Designer organisms, and the potential to profit from them, are sparking excitement -- and debate -- among scientists and venture capital investors. Researchers in an emerging field called synthetic biology envision microbes customized with artificial genes to enable them to turn sunlight into fuel, clean up industrial waste or monitor patients for the first signs of disease.

Already, scientists are producing strings of man-made DNA, short for deoxyribonucleic acid, which directs the functions of all living cells. Then they splice the manufactured DNA into the genes of existing organisms, reprogramming bacteria to act like microscopic factories churning out biofuels. [...]

``It's the coolest stuff of my career,'' Venter says in his office at the J. Craig Venter Institute in Rockville, Maryland. ``We can go from 15 years of reading the genetic code to now maybe harnessing that information for the betterment of mankind.'' [There is a stage between "reading the genetic code" and "using the understanding". Presently, the "understanding of genome regulation" seems to be the key - AJP]

[...]

``You can win in business in multiple ways: You can either make a product, or you can make something that sizzles -- that seems like a product,'' Harvard Medical School genetics professor George Church says. Church is a co-founder of LS9 Inc. in San Carlos, California, which plans to use modified E. coli bacteria to convert plant matter into a gasolinelike fuel. [Dr. George Church is a "professor turned to business" in more ways than one: co-founder of the LS9 biofuels company, founder of the "full sequencing" company Knome.com (outsourcing the work to the Beijing Genomics Institute), and on the Board of Advisors of the personalized genomics company "23andMe". - AJP]

Single-Celled Refineries

Re-engineered microorganisms may inherit all sorts of jobs. For now, top gene researchers are particularly excited about the potential for energy-producing microbes that may become single-celled refineries for ethanol, biodiesel or other petroleum substitutes without using food crops such as corn.

Scientists are forming bioenergy companies with money from some of the same venture investors who once backed computer and Internet startups.

``It's a huge, huge market, and at $100 oil, with the climate crisis and our geopolitical situation, it's the right market to go after,'' says Samir Kaul, a partner at Khosla Ventures in Menlo Park, California. Khosla Ventures, run by Sun Microsystems Inc. co-founder Vinod Khosla, is backing Church's LS9 and other synthetic biology companies.

Venter has founded a company called Synthetic Genomics Inc. to design microbes that make fuel from plant matter, carbon dioxide and sunshine, or convert underground coal into a more easily extracted gas.

$100 Billion Industries

The energy market is so much larger than biopharmaceuticals that there's room for a plethora of blockbuster products, he says.

``With fuel, we're hoping there could be a hundred to a thousand different unique solutions,'' Venter says, wearing blue jeans and a sport shirt in an office crowded with awards, mementos and sailing memorabilia. ``Each one could be a $100 billion industry on its own.''

Synthetic biology's potential stems from life's vast array of single-celled organisms. Many of them already perform valuable tasks, such as fermenting grain into alcohol.

``The reason biology is cool to me is that I look at all the things it can physically make,'' says Drew Endy, a biological engineering professor at Massachusetts Institute of Technology in Cambridge. ``The list goes on and on and on and on.''

Synthetic biology builds on the more than three decades of genetic engineering behind trailblazing biotechnology companies such as Amgen Inc. and Genentech Inc.

Gene Splicing

The science, also called gene splicing, typically involves transplanting a single gene from one cell into another to produce a particular protein, says Jay Keasling, a chemical engineering professor at the University of California, Berkeley. In synthetic biology, scientists implant a series of genes designed to work together, like stations along a biological assembly line.

``It's one thing to throw a gene into a cell,'' Keasling says. ``We're talking about putting in genetic circuits that will allow us to coordinate many different processes simultaneously.'' [...]
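[Keasling's "assembly line" picture maps naturally onto function composition. A minimal Python sketch of a hypothetical three-step circuit; the gene and metabolite names below are placeholders, not a real pathway.]

from functools import reduce

def enzyme(substrate, product):
    # One station of the assembly line: convert `substrate`, pass anything else.
    return lambda metabolite: product if metabolite == substrate else metabolite

pathway = [
    enzyme("glucose", "intermediate_A"),         # hypothetical geneX product
    enzyme("intermediate_A", "intermediate_B"),  # hypothetical geneY product
    enzyme("intermediate_B", "biofuel"),         # hypothetical geneZ product
]

# Feed the input through every station in series.
print(reduce(lambda metabolite, step: step(metabolite), pathway, "glucose"))  # biofuel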

DNA Synthesizers

Scientists feed the four ingredients in the desired order into DNA synthesizers. The machines are generally about the size of a laser printer and are studded with small bottles.

The ingredients mix with chemicals that make the molecules join into relatively short strings, says John Mulligan, chairman of Blue Heron Biotechnology Inc., a Bothell, Washington-based company that sells synthetic DNA. The short strands are chemically linked into longer chains, which are joined into double-stranded DNA and copied, Mulligan says.

``We can go from completely inert bottles full of powder -- for A, C, G and T -- dissolve them in an organic solvent and make long strings of DNA,'' Church says. ``We put that into a cell, where it basically produces whatever you want it to produce.''

There are various ways to get the DNA into a microbe. The genetic material can pass through a cell wall, with laboratories such as Venter's orchestrating millions of individual microscopic reactions at once. Or the genes can hitch a ride on a virus that infiltrates bacteria.

[...]

Venter says today's body of genetic knowledge is growing so fast that biology likely will dominate 21st-century science and technology, just as discoveries in physics revolutionized the past 100 years. [...]

Synthetic biology's ability to stretch the imagination may not be all blessing. The Sept. 11, 2001, attacks and the anonymous anthrax mailings the same year introduced American society to a heightened threat of terrorism. Today, teenagers with science fair projects can browse Internet databases for the DNA sequences needed to make a novel microbe.

``Engineered biological agents could be worse than any disease known to man,'' the U.S. Central Intelligence Agency said in a 2003 report. [...]

Still, some early entrants are expecting big things from the embryonic industry as researchers rush to start companies and venture capitalists hustle to fund them.

``It's equivalent to building the first transistor,'' says Juan Enriquez, chief executive officer of Biotechonomy LLC, a Boston investor in Venter's Synthetic Genomics. ``It changes fundamentally the rules of the game across a whole series of industries.''

Made-to-Order DNA

Harvard's Church, MIT's Endy and UC Berkeley's Keasling started Codon Devices Inc. in Cambridge to sell made-to-order synthetic DNA and related services. Investors, led by Cambridge's Flagship Ventures, have contributed $33 million in two rounds of venture funding.

Venter started Synthetic Genomics in 2005 with gene researcher and longtime collaborator Hamilton Smith, who won the Nobel Prize in medicine in 1978. In October 2005, the company sold $30 million of preferred stock to 12 investors in the U.S., according to a Securities and Exchange Commission filing.

It's raised more money abroad. In June, BP Plc, Europe's second-biggest oil company, bought an unspecified stake as part of a research venture to study microbes living in coal and oil fields. Venter says he's investigating ways to engineer microbes to make hydrocarbons more environmentally friendly.

`A Young Field'

Another of Keasling's synthetic biology startups, Amyris Biotechnologies Inc. in Emeryville, California, has raised $90 million since October 2006. It's working on plant-based gasoline and diesel fuel substitutes.

Amyris is also teaming up with UC Berkeley to create microbes to produce low-cost artemisinin, an anti-malarial drug that's too expensive for wide use in poor countries. That effort is backed by a $42.6 million grant from the Bill & Melinda Gates Foundation. ``This is a very young field,'' Venter says. ``There are a lot of startup companies. There's a lot of money floating around.''

The hubbub around synthetic biology evokes an earlier era in Silicon Valley. And some young companies are drawing money from the same VCs that backed high-profile computer and Internet firms.

Khosla Ventures is funding Amyris and Codon Devices in addition to LS9. Menlo Park-based Kleiner Perkins Caufield & Byers is backing Codon Devices. Kleiner partner John Doerr, known for his early support of Amazon.com Inc. and Google Inc., is on the Amyris board.

`Renaissance in Learning'

Steve Jurvetson, managing director of Draper Fisher Jurvetson in Menlo Park, is a Synthetic Genomics director. A foundation headed by Intel Corp. co-founder Gordon Moore has given more than $15 million to Venter's nonprofit institute since 2004.

Jurvetson says synthetic biology is taking computing's place as the cutting edge of technology. ``There's a huge renaissance in learning that's going on in life sciences that's bringing some of the techniques and approaches we've used in information theory,'' says Jurvetson, an original investor in Hotmail Corp., the email service provider Microsoft Corp. bought in 1997. ``It's as if we're finally able to decipher and re-engineer the code of life.''

Analogies linking synthetic biology with computing aren't just superficial comparisons. Cells get marching orders from sequences of DNA's four chemical letters, just as computers take direction from programs written in strings of ones and zeros.

``It's like a computer language, but it's base four instead of base two,'' says Chad Waite, who invests in biology companies as a managing director at OVP Venture Partners in Kirkland, Washington.
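[Waite's "base four instead of base two" analogy can be made literal: each DNA letter carries exactly two bits, so a sequence packs four bases to the byte. A minimal Python sketch, not a production encoder.]

ENCODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
DECODE = {value: base for base, value in ENCODE.items()}

def pack(seq):
    # Pack a DNA string into bytes, four bases per byte, padding the tail with A.
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4].ljust(4, "A"):
            byte = (byte << 2) | ENCODE[base]
        out.append(byte)
    return bytes(out)

def unpack(data, n):
    # Recover the first n bases from packed bytes.
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(DECODE[(byte >> shift) & 0b11])
    return "".join(bases[:n])

seq = "GATTACA"
assert unpack(pack(seq), len(seq)) == seq
print(pack(seq).hex())  # 8f10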

[...] Venter and his research partners want to make their customized microbe with the smallest number of genes required to keep it alive. Like an automobile chassis, the stripped-down organism could act as a frame, supporting strands of synthetic DNA designed to produce chemicals or digest pollutants, Venter says.

``We have to understand the minimal cell to understand and build correctly the next phases,'' says Venter, who has applied for patents on a bacterium with a minimum complement of genes. He has tentatively picked a name for his minimalist microbe: Mycoplasma laboratorium. The name signifies bacteria built in the lab.

Church questions whether Venter's approach makes sense. Existing bacteria, such as E. coli, probably make better starting points for synthetic biology work, he says. ``Mycoplasma is notoriously frail,'' he says. ``If I were to pick a chassis, an organism to produce biofuels, it would be a robust, tough dude. It would not be some wimpy guy.''

Keasling says Venter's goal of making a custom-designed cell as a foundation for synthetic engineering is too distant to be practical. And it's not even necessary. People die of malaria every day for want of the artemisinin his company, Amyris, is working to produce.

``We can't wait for a synthetic cell that might have all of the traits we want,'' Keasling says.

Kaul, at Khosla Ventures, isn't surprised Venter is taking the engineering of synthetic DNA to the extreme by designing his own man-made organism.

`Next Big Thing'

``Craig is always pushing the frontiers of biology,'' says Kaul, who was a biochemist at Venter's Institute for Genomic Research before moving to venture capital. ``He's always on to the next big thing. When he puts his mind on something, it's always going to be pretty exciting.''

While debate simmers inside the scientific community, the political, ethical and moral concerns targeting genetically modified crops and stem cell research aren't yet impeding synthetic biology research. The field is too new.

``It's still under the radar,'' says Jim Thomas, research program manager at ETC. ``We're at a very early stage in terms of people being aware of this.''

Sooner or later, something is likely to happen to put synthetic biology in the spotlight. Thomas has a pretty good guess what that will be: ``The interesting turning point is going to be when Craig Venter announces Synthia, his synthetic organism. That's going to take a lot of people by surprise.''

Potential Dangers

Thomas says Synthia and custom-designed organisms like it, which have never been exposed to nature, will open a new realm of potential dangers. Regulators will need to rule out environmental threats before the microbes go to work outside a lab, he says.

``You're building it entirely from scratch, so there's no reference point,'' he says. ``Nobody really has an ability yet to work out how to assess the safety.''

The risk of unanticipated dangers from unfamiliar microorganisms stoked fears in the 1970s with the birth of gene splicing. Cambridge, now a hotbed of synthetic biology work at Harvard and MIT, banned genetic engineering within the city limits. Researchers observed an ad hoc moratorium until a scientific conference in 1975 established safety guidelines.

Similar concerns may have to be addressed with synthetic biology, OVP's Waite says. ``There are going to be a lot of ethical issues we have to crawl through,'' he says.

The Washington-based Center for Strategic & International Studies, MIT and the Venter Institute released a 55-page report in October describing potential synthetic biology safeguards, from distribution of bio-safety manuals to registration of DNA synthesizers.

No Incidents

Genetic engineers have shown they can work safely after a generation's worth of experience and no damage from rogue microbes, Venter says. ``We've had 30 years and tens of millions of experiments -- all that have worked without incident,'' he says.

The Internet is changing synthetic biology's equation. In 2002, researchers from what is now Stony Brook University, part of the New York state university system, managed to make a synthetic, infectious poliovirus. They downloaded its genetic blueprint from a Web site and ordered the DNA from a commercial lab.

DNA do-it-yourselfers browsing eBay in November could choose from a half dozen used DNA synthesizers for as little as $199 plus shipping. A secondhand Pharmacia Gene Assembler Plus model was selling for $995, while the owner of an Applied Biosystems 394 wanted $1,000. On its Web site, Codon Devices recently advertised gene synthesis for as little as 69 cents per pair of DNA letters with a minimum order of synthetic DNA.

[...]

``We're entering a brave new world, where every year is going to feel a bit like future shock, and the pace of change is only going to accelerate,'' Jurvetson says.

It should be no surprise that Venter is looking beyond M. laboratorium, his first swipe at a man-made organism, to projects approaching the frontiers of knowledge. Along the way, he and other future-oriented geneticists will have to clear hurdles in the lab and in the marketplace before synthetic biology can produce real profits -- and deliver on its promise.

To contact the reporter on this story: Bob Drummond in Washington at bdrummond@bloomberg.net

[A main theme of the Genomic Revolution appears to be that "Genomics will be just as revolutionary for Biology and Medicine in the 21st Century" as "Physics revolutionized the 20th Century" (see Jurvetson above, and also Prof. Lee Silver in Newsweek International Editions, but not in the USA Edition). If so, let's take some lessons. The danger and promise of nuclear reactions releasing colossal amounts of energy urged scientists to first understand the new phenomena in terms of novel basic principles (quantum mechanics) - before rushing to potentially dangerous applications. True, it is fairly easy and inexpensive to put together "genes" (especially a limited number of them, in the Pre-ENCODE terminology). Would it work BEFORE we understand the basics of genome regulation? When Szilard suggested that Einstein write a letter to President Roosevelt, proposing in essence "the Manhattan Project" (for the A-bomb), it may be noteworthy that upon receiving the "nod" and being asked what was needed first, they wanted $6,000 for graphite moderators to REGULATE the nuclear chain reaction (such that it could be shut down, should anything go wrong). Their request was immediately granted by the Committee. The Grand Total that Synthetic Genomics has already received is probably five orders of magnitude larger ($600 M is an understatement) - but even the basics of Genome Regulation are quite unknown, as we speak. ENCODE-2.0 received about $60 M for 4 years - and it does not even address "regulation". Where is the Committee to which one could submit a request for an investment in Genome Regulation as modest as the $6,000 worth of graphite was in the previous revolution? pellionisz_at_junkdna.com, January 21, 2008]

Scientist hopeful about future of medicine, but funds needed

Jan. 18, 2008, 11:58PM

By ERIC BERGER
Houston Chronicle

[Francis Collins - Head of Government R&D in Modern Genomics. PostGenetic Medicine is unlikely to be headed by (any) government - AJP]

Half a decade after completing the Human Genome Project, scientists are beginning to understand how genes influence common diseases such as diabetes, the country's leading geneticist said Friday in Houston.

Yet at this time of great medical promise, Dr. Francis Collins warned, stagnant funding of medical research threatens to drive a generation of young scientists out of medicine.

Collins spoke to researchers and students at Baylor College of Medicine, one of the three major academic partners in the Human Genome Project that mapped all 3 billion bits of the human genetic code. Collins led that project and now directs the National Human Genome Research Institute, which provides the bulk of federal research funding for genetics.

During his speech and a subsequent interview, Collins said DNA sequencing — that is, deciphering DNA's chemical components and identifying genes — has gotten cheaper, better and faster since 2003.

As a result, scientists are closing in on finding the particular genes that, in part, cause common human diseases.

"In the course of the next decade or more, medicine will be utterly revolutionized as a result," Collins said. "None of us should be too confident we can quite predict that trajectory, but it's going to be a great ride."

Even before the human genome project's completion, scientists had identified many of the so-called single-gene diseases, such as cystic fibrosis. But they struggled to understand the genetic causes of more common diseases such as cancer and heart ailments.

After mapping the full genome, researchers found that many genes contributed to these diseases, [6,000 genes appear to contribute to obesity, see news below, blowing away another "Pre-ENCODE dogma" - AJP] and some became frustrated as they tried to make sense of the complicated puzzle. But that is now changing, Collins said. [How is it changing? Leroy Hood declared that "Genomics became an Informational Science"; with a proper mathematical and algorithmic approach, "many-to-many" paradigms are routinely handled - AJP]

The main cause is a precipitous drop in the cost to analyze human DNA.

To better understand all the genetic causes of a disease, scientists must look at all possible gene variations. But there are 3 billion chemical bits of DNA in the human genome, and at least 10 million of these might directly influence human disease.

To survey the entire genome for variant candidates that contribute to diabetes would have cost $10 billion in 2002, Collins said.

But as technology has improved, such an experiment can now be done for about $600,000, he said. A recent study of more than 30,000 patients revealed about a dozen gene variations that contribute to diabetes.
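[The fold-change implied by Collins's two figures, assuming both price the same genome-wide survey design, as a line of Python arithmetic:]

cost_2002 = 10_000_000_000  # dollars: Collins's 2002 estimate
cost_now = 600_000          # dollars: the current figure he cites
print(f"{cost_2002 / cost_now:,.0f}-fold cheaper")  # 16,667-fold cheaper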

With falling costs, a complete picture of the hereditary causes of human disease could come fairly soon, Collins said.

He tempered his optimism with funding concerns. Since the year 2003, he said, the budget for the National Institutes of Health has remained flat with respect to inflation. And since the cost of medical research has risen faster than inflation, Collins said, the agency's purchasing power has declined by 20 percent.

As a result, fewer grants for research are being approved.

"I've been at NIH for 15 years, and this is by far the most difficult period since I've been there," Collins said. "There's just not enough funding to go around."

eric.berger@chron.com

[The statement probably reads "There's just not enough GOVERNMENT funding to go around" - which is certainly true. Revolutionary medicine based on "Post-ENCODE Genomics" (PostGenetic Medicine), therefore, is not going to be led by Government (except in China...). But how? Maybe the visit by Francis Collins to Baylor is not entirely accidental. With hundreds of millions suffering from, or liable to be diagnosed at any time with, one or several of the scores of "Junk DNA diseases", there will be an unprecedented outcry from e.g. the US public. On one extreme, there are already patients (or potential patients) who will donate generously to Hospitals to become "PostGenetic Medicine Centers" (with or without this particular adjective). Mr. and Mrs. Broad, for example, donated years ago $200 M of their own money to establish the "Broad Institute", tying Harvard and MIT together in medical R&D. On the other extreme, there will be patients outraged by the several-decades-old neglect of "Junk DNA" - a subject important enough that Newsweek ran Prof. Lee Silver's outstanding article on it as a cover story in all its International Editions, while in the US Edition not only the cover but the entire article was skipped. Why? Pre-ENCODE (till June 14, 2007) everyone had the "alibi" of neglecting "Junk DNA" and getting away with it, since anyone could claim that "there was no official agreement on the subject". However, the massive US Government-led project provided evidence and thus put everyone on notice that "according to the US Government, Junk DNA may be dangerous to your health". Thus, any individual R&D worker and any Health Care Facility might wish to defuse in time even the potential of a "class action lawsuit" for neglect of "Junk DNA diseases". Individuals can simply do so by joining the International PostGenetics Society, and thus will be immune to such potential charges. University Hospitals, by establishing an interdisciplinary "Department of PostGenetic Medicine" (e.g. using those endowments that benevolent patients are already inclined to fight such diseases with), will likewise be immune to charges by individuals with a negative attitude. In fact, it is felt that a competition is building up over where the "First World Congress of the International PostGenetics Society" will be held - since the hosting Center will not only be immunized but could claim leadership in the field of PostModern Genomics. The news item below demonstrates that "perhaps the biggest of Venter's accomplishments is that by 'going commercial' he has disproven the 300-year-old dogma that scientific research is led by governments". Health Care Facilities, in the PostModern Era when governments are only one source of support (direct public support and Big Pharma are increasingly significant factors), can with proper leadership and the trendy commercialization (see news item below) provide not only a revolution in medicine, but also a revolution in supporting infrastructure. pellionisz_at_junkdna.com, January 18, 2008]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^



Invitrogen Enters Non-Coding RNA Licensing Agreement with IMBcom [Mattick goes commercial]

Deal to Broaden Invitrogen’s Offerings for Epigenetic Analysis

January 17, 2008 08:00 AM Eastern Time

CARLSBAD, Calif.--(BUSINESS WIRE)--Invitrogen Corporation (NASDAQ:IVGN), a provider of essential life science technologies for research, production and diagnostics, has entered into an exclusive license agreement with IMBcom Proprietary Limited Company to commercialize new non-coding ribonucleic acid (RNA) content predicted by a proprietary algorithm and experimentally validated by the University of Queensland, Australia. This expanded content will enable Invitrogen to provide the most comprehensive non-coding RNA product portfolio in the market and be the first company to provide this new content to the research community.

“MicroRNAs, which are the focus of current non-coding RNA research, are just one small subset of the non-coding RNA world,” said Peter Welch, director of research and development for Gene Expression Profiling at Invitrogen. “MicroRNAs have a discrete function in gene regulation, but the larger non-coding RNAs are involved in multiple roles such as cellular aging and protein assembly, in addition to simple gene regulation.”

By combining the coding and non-coding sequences on the same microarray, researchers can obtain more information from a single sample to better reveal the relationship between non-coding RNA expression and mRNA expression. This is particularly important for scientists studying cancer and stem cells, for such RNAs have been implicated in both of these areas.

Researchers at the University of Queensland developed an algorithm that has predicted tens of thousands of unique human and mouse probe sequences relating to coding and non-coding RNA.

John Mattick, professor of Molecular Biology at the University of Queensland, added: “It appears that we have misunderstood the nature of genetic programming in humans and other complex organisms. Most of the genome is transcribed, mainly into non-coding RNAs, which appear to comprise a hidden layer of gene regulation whose full dimensions are just beginning to be explored.”

Invitrogen will commercialize these sequences over the next few years, allowing the company to expand its NCode™ microRNA microarray product line into the field of non-coding RNA profiling. Thus, for the first time, a commercial tool will be available to help scientists to identify the large complement of non-coding RNAs and study their function.

For more information, visit: www.invitrogen.com/ncode or www.imbcom.com.au.

About Invitrogen

Invitrogen Corporation (NASDAQ:IVGN) provides products and services that support academic and government research institutions and pharmaceutical and biotech companies worldwide in their efforts to improve the human condition. The company provides essential life science technologies for disease research, drug discovery, and commercial bioproduction. Invitrogen's own research and development efforts are focused on breakthrough innovation in all major areas of biological discovery including functional genomics, proteomics, bioinformatics and cell biology -- placing Invitrogen's products in nearly every major laboratory in the world. Founded in 1987, Invitrogen is headquartered in Carlsbad, California, and conducts business in more than 70 countries around the world. The company employs approximately 4,700 scientists and other professionals and had revenues of more than $1.15 billion in 2006. For more information, visit www.invitrogen.com.

About IMBcom Pty Ltd

IMBcom is The University of Queensland’s company for commercialisation of the intellectual property arising from research conducted at The Institute for Molecular Bioscience (IMB).

[Perhaps the biggest of Venter's accomplishments is that by "going commercial" he has disproven the 300-year-old dogma that scientific research is led by governments. Government, by definition built on consensus, simply fails to support the leading edge of the "PostModern Era" of genomics in an adequate manner. This became most evident in the USA through Venter - but in other global regions it is now similar. Prof. Akoulitchev left Oxford University for his own company in the UK - outsourcing to India, Singapore and Silicon Valley - and now Mattick (of Sydney, Australia) has "gone commercial" for reasons he knows best. In the USA, this trend is well established beyond Venter. This columnist [AJP] never even tried the impossible, to run against the establishment on "government support"; he accumulated IP "clean as a whistle", totally separate from any kind of entanglements. Prof. George Church (Harvard) "went commercial" with his Knome (see news items below) within 3 days of China's announcement of global sequencing services, outsourcing the work to China. Now Mattick does the same: going commercial with explosive R&D, establishing a vital Australia-California linkage. pellionisz_at_junkdna.com, January 17, 2008]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

IPGS Founders meet in Silicon Valley this week for the "Next Big Thing" in "Genomics beyond Genes"

IPGS Originator (A. Pellionisz) meets this week (on the 13th, 14th, 15th and 16th of January) in Silicon Valley with Prof. Alexandre Akoulitchev of Oxford, UK (IPGS Founder) and Christian Hoyer Millar (new IPGS Founder, CEO of Oxford BioDynamics, Ltd), who are on their way to Singapore and Dubai, as well as with movers and shakers in Silicon Valley, California, to shape "The Next Big Thing" in "Genomics beyond Genes". Parties of interest may contact Pellionisz_at_junkdna_dot_com

Body Weight Influenced By Thousands Of Genes [6,000 genes - 25 percent of the genome]

ScienceDaily (Jan. 16, 2008) — Researchers from the Monell Center have for the first time attempted to count the number of genes that contribute to obesity and body weight.

Their findings suggest that over 6,000 genes -- about 25 percent of the genome -- help determine an individual's body weight.

"Reports describing the discovery of a new 'obesity gene' have become common in the scientific literature and also the popular press," notes Monell behavioral geneticist Michael G. Tordoff, PhD, an author on the study.

"Our results suggest that each newly discovered gene is just one of the many thousands that influence body weight, so a quick fix to the obesity problem is unlikely."

To obtain an estimate of how many genes contribute to body weight, the Monell researchers surveyed the Jackson Laboratory Mouse Genome Database for information on body weights of knockout mouse strains.

Knockout mice have had a specific gene inactivated, or "knocked out." By studying how the knockout mice differ from normal mice, researchers obtain information about that gene's function and how it might contribute to disease. Mice can provide valuable information on human disease because they share many genes with humans.

The knockout approach is so useful that the inventors of the technology were awarded the 2007 Nobel Prize in Medicine. Knockout mice are now standard tools in all mouse models of behavior and disease.

In 60% of strains, knocking out a gene produces mice that are nonviable; that is, the mouse cannot survive without the knocked out gene.

The Monell survey revealed that body weight was altered in over a third of the viable knockout strains: 31 percent weighed less than controls (indicating that the missing genes contribute to heavier body weight), while another 3 percent weighed more (indicating that the missing genes contribute to lighter weight).

Extrapolating from the total number of genes in the mouse genome, this implies that over 6,000 genes could potentially contribute to the body weight of a mouse.

Tordoff comments, "It is interesting that there are 10 times more genes that increase body weight than decrease it, which might help explain why it is easier to gain weight than lose it." [This comment implies that only about 600 of the 6,000 genes act to decrease body weight - AJP]
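[The split behind that back-of-envelope figure, using the article's own 31-percent-versus-3-percent proportions and its extrapolated total of roughly 6,000 weight-related genes:]

total_weight_genes = 6000            # the article's extrapolated total
frac_less, frac_more = 0.31, 0.03    # knockout strains weighing less vs. more

raise_weight = total_weight_genes * frac_less / (frac_less + frac_more)
lower_weight = total_weight_genes * frac_more / (frac_less + frac_more)
print(round(raise_weight), round(lower_weight))  # 5471 529, roughly 10:1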

Because body weight plays a role in many diseases, including hypertension, diabetes, and heart disease, the implications of the findings extend beyond studies of obesity and body weight. Gene knockouts reported to affect these diseases and others could potentially be due to a general effect to lower body weight.

The findings also hold clinical relevance, according to lead author Danielle R. Reed, PhD, a Monell geneticist. "Clinicians and other professionals concerned with the development of personalized medicine need to expand their ideas of genetics to recognize that many genes act together to determine disease susceptibility."

Maureen P. Lawler also contributed to the study which is published online in the journal BMC Genetics.

[This is the death knell of yet another dogma of Pre-ENCODE Genomics, the "one gene, one phenotype, one billion dollar pill" myth. The genome is a massively parallel system, in which "genes" (to be re-defined Post-ENCODE) and "phenotypic elements" (e.g. obesity, along with uncounted other elements) are connected not on a "one-to-one" but on a "many-to-many" principle, characteristic of massively parallel nonlinear systems such as neural networks. pellionisz_at_junkdna.com, January 16, 2008]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Is most of the human genome functional? ["Wait and see" is professionally negligent -AJP]
Friday, January 11, 2008

[...] there has always been a persistent undertone in biology that non-coding DNA must be doing something or it would have been deleted. [...] What has intrigued me much more is the debate among biologists about this, and the rather questionable claims, suppositions, and extrapolations that get made not just by the media but by various scientists themselves.

Take Francis Collins. He's a major player in genome biology and led the charge by the public Human Genome Project [and was/is the ENCODE architect -AJP]. And yet, he makes claims that non-coding DNA may be present in the genome "just in case" it needs to be put to use in the future. [That is not what he said - AJP]. This makes no sense from an evolutionary perspective [Francis Collins stands for "evolution" - problem is that, after ENCODE 1.0, our view of "evolution" is not at all "single-minded". According to ENCODE, Lamarckian considerations must be revisited, while this essay does not seem to be aware of the revolution - AJP]. It would be tempting to attribute this to Collins's adherence to the notion of theistic evolution, but in fact one can find this sort of fuzzy foresight argument being brought up by lots of authors. I suppose it's just disappointing that there is not better communication between genome biology and evolutionary biology [add "genome informatics" to PostGenetics, the new era of Post-ENCODE Genomics - AJP].

The case that frustrates me most is that of John Mattick. He of the worst figure ever is one of the primary promulgators of the view that scientists have overlooked possible function for non-coding DNA and that this is "one of the biggest mistakes in the history of molecular biology" that can only be corrected by a "new paradigm", and so on. [add "genome informatics" as well to PostGenetics, the new era of Post-ENCODE Genomics - AJP]. Basically, the argument seems to be that much of the non-coding portion of a given genome is involved in regulation and such. ["argument seems to be" is by now "an increasing consensus" - except for those who still go on with ad hominem attacks and continue to vilify true pioneers such as Mattick and others -AJP] In the past, Mattick has refrained from pinning down an estimate of how much non-coding DNA he believes is functional, but his presentation of (extremely selective) data left little doubt that he considers more non-coding DNA to be correlated with greater complexity. But now we're starting to get some more explicit and increasingly bold claims.

As Check (2007) pointed out in a news article in Nature, Mattick thinks scientists are vastly underestimating how much of the genome is functional. He [Mattick] and Birney have placed a bet on the question. Mattick thinks at least 20% of possible functional elements in our genome will eventually be proven useful. Birney thinks fewer are functional.

Now consider this quote by Comings (1972), who was the first person to use the term "junk DNA" extensively (even before Ohno's (1972) coinage appeared in print):

"These considerations suggest that up to 20% of the genome is actively used and the remaining 80+% is junk. But being junk doesn't mean it is entirely useless. Common sense suggests that anything that is completely useless would be discarded. There are several possible functions for junk DNA.

So, even if Mattick is right about 20% of the human genome being functional, which is considered a rather high estimate on the basis of available data, he still would be merely agreeing with the author of the first major discussion about junk DNA.

Now, I should point out that I do not have a vested interest in how much of the human genome is functional. 5%? Fine. 20%? Fine. 50%? Ok. I will go where the data indicate. My reason for rejecting the notion of "more complexity means more DNA" is comparative: I refer you to the "onion test" for a simple illustration. [It was only the FractoGene approach that passed the "onion test" - AJP]. However, as readers of Genomicron already know, I find it rather irksome when people take any new finding about (potential) function in some part of the human genome and extrapolate this to mean that all DNA in every genome must be serving some role.

Anyway, back to what Mattick suggests. As noted, for the most part he has gone about arguing for large-scale function more by hint than by direct claim. However, finally he says the following (Pheasant and Mattick 2007).

"Thus, although admittedly on the basis of as yet limited evidence, it is quite plausible that many, if not the majority, of the expressed transcripts are functional and that a major component of genomic information is rapidly evolving regulatory DNA and RNA. Consequently, it is possible that much if not most of the human genome may be functional. This possibility cannot be ruled out on the available evidence, either from conservation analysis or from genetic studies, but does challenge current conceptions of the extent of functionality of the human genome and the nature of the genetic programming of humans and other complex organisms". [Emphasis added]

It seems to me that "we can't rule this out" is not a reason to think that something is plausible, let alone true. [If your doctor "cannot rule out" that you have AIDS, he/she must investigate it - unless he/she wants to engage in malpractice - AJP]. In fact, the existence of mechanisms such as transposable element spread and the pseudogenization of duplicate genes suggests that there is good reason to expect much (probably most) of the genome to be non-functional unless data show otherwise. Some TEs have taken on a function, some cause disease, some are merely benign or only slightly detrimental. The proportions of non-coding elements in each of these categories remain to be determined, but they are not all equally likely by default.

The question of which sequences are functional, and in what way, is one of the more contentious and therefore interesting ones in genome biology. On the one hand, new information from various sources including the ENCODE project indicates that much non-coding DNA is transcribed, though it remains an open question whether this has to do with function or noise. On the other hand, a recent analysis has suggested that as many as 4,000 sequences within the human genome initially thought to be genes are not really genes after all (Clamp et al. 2007), bringing the total count down to around 20,000.

Some people [...] desperately want the vast non-coding majority of eukaryote DNA to have a function. They latch onto any new discovery of function in some segment of the genome or another (or indeed, any mere restatement of what many authors have been saying since the 1970s) and consider their position supported. The rest of us will just have to wait and see.

References

Check, E. (2007). Genome project turns up evolutionary surprises. Nature 447: 760-761.

Clamp, M., Fry, B., Kamal, M., Xie, X., Cuff, J., Lin, M.F., Kellis, M., Lindblad-Toh, K., and Lander, E.S. (2007). Distinguishing protein-coding and noncoding genes in the human genome. Proceedings of the National Academy of Sciences USA 104: 19428-19433.

Comings, D.E. (1972). The structure and function of chromatin. Advances in Human Genetics 3: 237-431.

Doolittle, W.F. and Sapienza, C. (1980). Selfish genes, the phenotype paradigm and genome evolution. Nature 284: 601-603.

Ohno, S. (1972). So much "junk" DNA in our genome. In Evolution of Genetic Systems (ed. H.H. Smith), pp. 366-370. Gordon and Breach, New York.

Pheasant, M. and Mattick, J.S. (2007). Raising the estimate of functional human sequences. Genome Research 17: 1245-1253.

[The foreign author of the above essay is writing from a remote town in Canada, exercising the highly non-scientific approach of "wait and see" - while talking down Mattick (down under in Sydney, Australia), who has devoted decades of his research to the subject (and the above essay never quotes Mattick's very recent and excellent article "Contrary to current dogma, most of the genome may be functional", available free upon registration, or directly in this column). Readers in the USA (scientists or patients of "JunkDNA diseases"), however, live under an entirely different legal system. In the USA, once the government study ENCODE established by the 14th of June, 2007 that [defects of] "junk DNA" are in some cases the cause of hereditary diseases, researchers can - at least in theory - be hit by a massive class-action lawsuit for negligence if they fail to follow through, to the best of their knowledge, on what "junk DNA" does and how. Probably this is why the October 15 issue of Newsweek published a major article on the subject in its International Editions in Asia, Europe and South America, but entirely omitted the cover story from its USA Edition. One does not think that any USA researcher is likely to expose himself/herself to such liability by taking the negligent, non-professional "wait and see" attitude advocated by a foreigner. pellionisz_at_junkdna.com, January 12th, 2008]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Knome and the Beijing Genomics Institute Enter into Exclusive Strategic Alliance

CAMBRIDGE, Massachusetts — Jan. 10, 2008 — Knome, a leading personal genomics company, and the Beijing Genomics Institute (BGI) today announced that they have entered into an exclusive worldwide strategic alliance.

For private individuals seeking whole-genome sequencing services, Knome will have exclusive access to BGI’s world-class genome sequencing, assembly, and annotation capabilities. In combination with Knome’s analytic tools, security protocols, and genetic interpretation services, this alliance introduces an unmatched level of technology and service to the personal genomics industry.

“We are thrilled to announce this collaboration,” said Knome’s CEO Jorge Conde. “BGI was the ideal choice for the establishment of a long-term partnership for providing whole-genome sequencing and bioinformatic services. Not only do they have the largest installations of advanced sequencers in the world and a team of over 100 highly experienced bioinformaticians, but they are actually one of only three groups in the world that have completely sequenced a human genome.”

Knome’s whole-genome sequencing and analysis services are being marketed globally to private individuals. Knome is initially offering 20 individuals the opportunity to participate in its first sequencing flight. Pricing for the service starts at $350,000, and includes whole-genome sequencing and a comprehensive analysis from a team of leading geneticists, clinicians and bioinformaticians. In keeping with the fundamental principles of the company, Knome will have sole responsibility for maintaining clients’ personal information under strict confidentiality. Clients will retain full ownership of their personal genome and have the ability to anonymously share all or portions of their genome with researchers and other medical professionals.

“Our institute is very proud to work with Knome in bringing our sequencing and analytical expertise to private individuals around the world. We have made a tremendous investment in capital and people over the last 10 years,” said Zhuo Li, who is responsible for international collaboration at BGI, “and we are now in a position to work with Knome to bring not only the most cutting-edge technology to bear, but to do so at the most competitive cost possible.”

About BGI

Established in 1999 and headquartered in Shenzhen, China, the Beijing Genomics Institute is the largest genetic sequencing center in Asia. BGI has been a core participant in all major international sequencing efforts, including the International Human Genome Project and the International HapMap Consortium. The Institute has one of the largest bioinformatics teams in the world and has made a significant investment in state-of-the-art sequencing technologies, deploying over 120 sequencing machines, 10 supercomputers, and 500 terabytes of storage. Please visit www.genomics.org.cn for more information.

About Knome

Based in Cambridge, Massachusetts, Knome has the distinction of being the first personal genomics company to commercially offer whole-genome sequencing and analysis services for individuals. Whole-genome sequencing decodes nearly all of the 6 billion bases of information that make up an individual’s genome—unlike genome scanning or “SNP chip” technologies that decode only 0.02% of an individual’s genome. Working alongside leading geneticists, clinicians and bioinformaticians from Harvard and MIT, Knome enables its clients to obtain, understand, and share their genomic information in a manner that is both anonymous and secure. Knome is a privately funded company. Please visit www.knome.com for more information.
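
The 0.02% contrast can be sanity-checked with one line of arithmetic. In the sketch below, the diploid genome size comes from the release itself, while the SNP-chip marker count (~1.2 million) is an assumed figure typical of chips of that era, not one the release states:

    # Fraction of a diploid genome (~6 billion bases) interrogated by a
    # high-density SNP chip. The marker count is an assumption.
    DIPLOID_GENOME_BASES = 6_000_000_000
    SNP_CHIP_MARKERS = 1_200_000  # assumed, typical circa 2008

    print(f"{SNP_CHIP_MARKERS / DIPLOID_GENOME_BASES:.2%}")  # 0.02%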

[Together with the news below, now we have the World's largest single concentration of human genome sequencing and bioinformatics (in Beijing, China) with deep penetration of the World's most affluent market for Personalized Genomics (USA). Should they secure the "leading edge" of PostGenetics, an unbeatable lead-position will be established. pellionisz_at_junkdna.com, January 10, 2008]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

China makes 1st volunteer genome atlas

The Beijing Genomics Institute (BGI) has sequenced the first Chinese volunteer's genome as part of a project to create a database of Asian genomes.

The Yanhuang Project, named after two legendary ancient emperors who are believed to be the ancestors of the Han ethnic group, will map the genomes of 100 Chinese people, said Dr. Wang Jian, director of BGI's Shenzhen branch, on Saturday.

"We finished the sequencing of the first Han Chinese genome last October," said Wang, "but the genome is from a researcher."

"We hope that the rest of the 100 people will be volunteers who want to have their genomes sequenced for purely scientific purposes," he said.

The first volunteer, who wanted to remain anonymous, donated 10 million yuan (about 1.3 million U.S. dollars) to the project along with his blood sample for sequencing.

"I believe more breakthroughs will be made in bio-tech and bio-pharmacy industries by sequencing and studying more Chinese genomes," the donor said. So far, only three individuals' genomes have been sequenced anywhere in the world and all of them were scientists.

Wang said the project is the Asian section of a comparative genomics project jointly conducted by Chinese and British scientists, which aims to create genome databases for various races from different continents.

He said that the Yanhuang Project has three phases. The first, which was completed last October, is to sequence a Chinese individual's genome that will serve as the reference. The second is to sequence at least 99 more individuals' genomes to construct a Chinese genetic polymorphism map. The final stage is to study the results of the first two phases and apply the findings to medical science.

"We need to establish the database of Chinese people's genomes in order to solve the problems related to Chinese-specific genetic diseases," said Wang. "It will also give us solid ground for future individual health care in terms of accurate and effective diagnosis, prediction and therapy."

"Personal genomics" became a new industry last year. Some companies in the United States plan to provide personal genome sequencing for a fee of 300,000 to 350,000 U.S. dollars, the American journal Science reported in December. Those companies say that having complete genome maps will help people understand their chances of developing genetically based diseases and act to control and prevent such diseases.

However, this practice is expected to bring ethical challenges since such genetic data can reveal personal information that many might prefer to keep private.

(Xinhua News Agency January 7, 2008)

[China will catapult ahead in Personalized Genomics. Sequencing 100 humans is absolutely nothing compared to the budget of the USA - or even China. However, in the USA it is a political impossibility to "pick" the genomically fairly homogeneous "sample of 100 persons" whose genomes will be sequenced first (moreover, because of the "melting pot", it is biologically quite difficult). China, with the largest homogeneous population, the Han, will establish the first database that is statistically significant. While doing so, a global effort will be attracted, since scientists of many countries in addition to China will hunt "junk DNA diseases" by using the Han database - there will be nothing comparable. pellionisz_at_junkdna.com, January 5th, 2008]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Nutrigenomics: The Genome-Food Interface [click for full article]

By M. Nathaniel Mead

Jan 5, 2008 - 11:33:40 AM

Efforts to unveil the etiology of human disease often recapitulate the nature versus nurture debate. But today's biologists concede that neither nature nor nurture alone can explain the molecular processes that ultimately govern human health. The presence of a particular gene or mutation in most cases merely connotes a predisposition to a particular disease process. Whether that genetic potential will eventually manifest as a disease depends on a complex interplay between the human genome and environmental and behavioral factors. This understanding has helped spawn numerous multidisciplinary gene-based approaches to the study of health and disease.

One such endeavor is nutrigenomics, the integration of genomic science with nutrition and, when possible, other lifestyle variables such as cigarette smoking and alcohol consumption. Although genes are critical for determining function, nutrition modifies the extent to which different genes are expressed and thereby modulates whether individuals attain the potential established by their genetic background.

Nutrigenomics therefore initially referred to the study of the effects of nutrients on the expression of an individual's genetic makeup. More recently, this definition has been broadened to encompass nutritional factors that protect the genome from damage. Ultimately, nutrigenomics is concerned with the impact of dietary components on the genome, the proteome (the sum total of all proteins), and the metabolome (the sum of all metabolites). As in pharmacogenomics, where a drug will have diverse impacts on different segments of the population, researchers recognize that only a portion of the population will respond positively to specific nutritional interventions, while others will be unresponsive, and still others could even be adversely affected.

A Focus on Polymorphisms

Numerous studies in humans, animals, and cell cultures have demonstrated that macronutrients (e.g., fatty acids and proteins), micronutrients (e.g., vitamins), and naturally occurring bioreactive chemicals (e.g., phytochemicals such as flavonoids, carotenoids, coumarins, and phytosterols; and zoochemicals such as eicosapentaenoic acid and docosahexaenoic acid) regulate gene expression in diverse ways. Many of the micronutrients and bioreactive chemicals in foods are directly involved in metabolic reactions that determine everything from hormonal balances and immune competence to detoxification processes and the utilization of macronutrients for fuel and growth. Some of the biochemicals in foods (e.g., genistein and resveratrol) are ligands for transcription factors and thus directly alter gene expression. Others (e.g., choline) alter signal transduction pathways and chromatin structure, thus indirectly affecting gene expression.


Much of the nutrigenomic focus has been on single-nucleotide polymorphisms (SNPs), DNA sequence variations that account for 90% of all human genetic variation. SNPs that alter the function of "housekeeping genes" involved in the basic maintenance of the cell are assumed to alter the risk of developing a disease. Dietary factors may differentially alter the effect of one or more SNPs to increase or decrease disease risk.

An elegant example of a diet–SNP interaction involves the common C677T polymorphism of the methylenetetrahydrofolate reductase (MTHFR) gene. This variant slows MTHFR enzyme activity, resulting in reduced capacity to use folate (or folic acid) to convert homocysteine to methionine and thence to the S-adenosylmethionine required for the maintenance methylation of cytosine in DNA and control of gene expression, among many other reactions. But the same variant also may increase the form of folate that can be used to make thymidine, one of the bases in DNA, and to prevent mutagenic uracil from being incorporated instead. This shift in methylation status may explain why in a low-folate environment (for example, where there is low intake of folate-rich vegetables such as spinach and asparagus or a lack of supplemental folate) homozygous carriers of the C677T polymorphism may be more prone to developmental defects but at the same time could be protected against certain cancers.

The key point here is that the activity of the reaction catalyzed by the MTHFR gene can be modified depending on the amount of two essential nutrients: folate, which is the substrate for MTHFR, and riboflavin, a cofactor of MTHFR. "Therefore, the risks associated with MTHFR activity can be markedly modified, for better or for worse, depending on fortification and supplementation strategies," says Michael Fenech, a research scientist at the CSIRO Genome Health and Nutrigenomics Laboratory in Adelaide, Australia. "For example, in those countries where mothers are required to supplement with high-dose folic acid to prevent neural tube defects in the infant, this practice may actually allow more babies to be born with the MTHFR C677T [polymorphism]." These children would be less able to convert folate to a usable form. On the other hand, if the dietary environment in which these individuals have to grow is low in folate and riboflavin, then they may struggle to survive in good health.
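
A minimal sketch of how such a genotype-by-diet lookup might be tabulated follows; the qualitative entries paraphrase the two paragraphs above and are illustrative only, not clinical guidance:

    # Qualitative diet-SNP interaction table for the MTHFR C677T example.
    # Entries paraphrase the article; illustrative only.
    mthfr_c677t = {
        ("TT", "low folate"):  "more prone to developmental defects, yet "
                               "possibly protected against certain cancers",
        ("TT", "high folate"): "risk markedly modified for the better by "
                               "fortification/supplementation",
        ("CC", "low folate"):  "normal enzyme activity; less sensitive to "
                               "folate status",
        ("CC", "high folate"): "normal enzyme activity",
    }

    def outcome(genotype: str, diet: str) -> str:
        """Look up the qualitative outcome for a genotype/diet pair."""
        return mthfr_c677t.get((genotype, diet), "not tabulated here")

    print(outcome("TT", "low folate"))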

The field of nutrigenomics could not have been launched without the recent development of high-throughput -omic (genomic, transcriptomic, proteomic, and metabolomic) technologies. "These technologies enable us to identify and measure many molecules of each type at one time," says Jim Kaput, director of the newly established Division of Personalized Nutrition and Medicine at the FDA National Center for Toxicological Research. "In the realm of genomics, for example, we can now measure many variations in DNA, including tens of thousands of single-nucleotide polymorphisms and copy number variants, as well as many RNA molecules. This is crucial, since most cases of chronic diseases are not caused by mutations in single genes but rather by complex interactions among variants of several . . . genes."

These technologies currently enable identification of up to 500,000 SNPs per individual. Whereas nucleic acids can be analyzed with either sequencing or hybridization technologies, proteins and metabolites may require slightly different techniques and equipment depending upon the type of protein and the chemical nature of the metabolite. Nevertheless, Kaput says, the end result using various -omic technologies is an incredibly detailed window into the molecular makeup of each individual.

Meanwhile, nutritional biochemists have been busily cataloguing factors in food, including dozens of essential nutrients and tens of thousands of bioactive substances, that can be correlated with molecular patterns identified through the various -omic technologies. The intersection of the genomic and nutritional domains will require sophisticated analytic techniques and, in Kaput's opinion, the open sharing of scientific research findings worldwide because of the value derived from studying genomic and nutritional patterns in different populations and ethnic groups.

The Sweet Spot for Genomic Health

Not only the expression of genes but also the physical integrity and stability of the genome—what has been referred to as "genome health"—is to a large degree determined by a steady supply of specific nutrients. "There is increasing evidence that genome instability, in the absence of overt exposure to genotoxicants, is itself a sensitive marker of nutritional deficiency," says Fenech.

Fenech originated the concept of "genome health nutrigenomics," the science of how nutritional deficiency or excess can cause genome mutations at the base sequence or chromosomal level. "The main goal of this particular research discipline is to define the optimal dietary intake and tissue culture medium concentration to maintain damage to the genome at its lowest possible level in vivo and in vitro, respectively," says Fenech. "This is critically important because increased damage to the genome is among the fundamental causes of infertility, developmental defects, cancer, and neurodegenerative diseases." By the same token, the selective use of genome-protective nutrients in individuals with specific gene variants could potentially result in improved resistance toward these major diseases. Fenech believes we need to start viewing foods and diets in terms of their content of genome-protective nutrients.

Folate is among the nutrients most often cited as critical to genomic stability. Controlled intervention study data published in the July 1998 issue of Carcinogenesis and the April 2001 issue of Mutation Research indicate that a folate intake greater than 200 µg/day is required for chromosomal stability. Fenech's team has shown that reducing plasma folate concentration from 120 to 12 nmol/L in vitro, which is considered to be within the equivalent adequate range in vivo, causes as much genome damage as that induced by an acute exposure to 0.2 Gy of ionizing radiation. "We concluded that even moderate folate deficiency within the physiological range causes as much DNA damage in cultured lymphocytes as ten times the annual allowed limit of exposure to X rays and other forms of low linear energy transfer ionizing radiation for the general population," says Fenech. He points out that the typical plasma folate concentration for most populations is only 10–30 nmol/L, a level adequate to prevent anemia "but apparently insufficient to minimize chromosomal damage."

In the May 2005 issue of Carcinogenesis Fenech and his colleagues identified nine key nutrients that may affect genomic integrity in various ways. When consumed in increasing amounts in food, six of these nutrients (folate, vitamin B12, niacin, vitamin E, retinol, and calcium) are associated with a reduction in DNA damage, whereas three others (riboflavin, pantothenic acid, and biotin) are associated with an increase in DNA damage to the same extent observed with occupational exposure to genotoxic and carcinogenic chemicals. "These observations indicate that nutritional deficiency or excess can cause DNA damage on its own and that the effects are of the same magnitude as that of many common environmental toxicants," Fenech says.

[...]

Nutrigenomic Links to Chronic Disease

[...] In time, he adds, we will see important contributions from nutrigenomics to the prevention of many common modern maladies, including obesity, diabetes, cardiovascular disease, cancer, inflammatory disorders, age-related cognitive decline, declining visual function, and of course many vitamin deficiency problems.

Diabetes, obesity, and cardiovascular diseases have been referred to by medical anthropologists and others as "diseases of civilization." The reason is simple: when aboriginal populations begin to adopt a high-sugar, high-fat "Western diet" for the first time, obesity and diabetes suddenly begin to appear in those populations and typically increase at rates commensurate with the adoption of the new diet. Such observations have been dramatically borne out in studies of the Pima Indians of Arizona and the indigenous people of Hawaii. In both instances, the abandonment of the traditional plant-rich, high-fiber diet was followed by skyrocketing rates of diabetes, obesity, and later cancer.

Over the course of human evolution, diet has profoundly molded human metabolic capacities and thus paved the way for the emergence of modern diseases. From an evolutionary perspective, diet is a limiting factor that imposes selective pressures on a population, much like other environmental factors. Some genotypes within a population are associated with higher nutrient needs, and when those needs are not met, there will be selection against those particular genotypes. However, when those needs are met—for example, the need for extra calories from carbohydrates and dietary fat—the gene that confers the high nutrient requirement will then persist in the population. This could well be the case for genes linked with obesity and diabetes.

Soloway notes that in cases where certain gene alleles confer some selective advantage, high levels of the required nutrient can actually lead to an expanded frequency of those alleles in a population. "In such cases, nutrient availability can provide a selective pressure that drives genotypic shifts in a population," he says.

From the nutrigenomic perspective, diabetes and obesity are both the result of an imbalanced diet interacting with genes that were once functional and adaptive in an earlier phase of human evolution, when food was less abundant. In the modern context, these same genes are considered to code for hormonal or metabolic tendencies that have become maladaptive and pathological in the modern environment. Risk of developing these diseases is thought to be modulated by genetic susceptibility differences among ancestral groups to the effect of the Western diet in precipitating insulin resistance.


[...]

Given that obesity is itself a risk factor for diabetes, cardiovascular disease, and various cancers, it is worthwhile to focus on the nutrigenomic aspects of this disease. A study conducted at the University of Navarra in Pamplona, Spain, and published in the August 2003 issue of the Journal of Nutrition showed that women with a Glu27 variant and a carbohydrate intake constituting more than 49% of total caloric consumption had a nearly three-fold increase in their risk of developing obesity. Importantly, an alternative variant of that same gene was not linked with a greater obesity risk in relation to the same carbohydrate–calorie intake levels. This could help explain why some women on high-carbohydrate diets gain weight while others do not.

Abdominal obesity, independent of generalized adiposity, predicts insulin resistance, type 2 diabetes, dyslipidemia, and cardiovascular disease. Endocrinologist Jerry Greenfield and colleagues at St. Vincent's Hospital in Sydney, Australia, recently reported that high polyunsaturated fat intake was associated with lower levels of abdominal fat in women at low genetic risk for abdominal obesity but not in women at high genetic risk. Also, a moderately high alcohol intake (1–1.5 drinks per day) was associated with approximately 20% less abdominal fat than lower intakes, but only in women genetically predisposed to abdominal obesity. This study, published in the November 2003 Journal of Clinical Endocrinology and Metabolism, indicates that various gene–diet interactions could be a key part of the abdominal obesity equation.

The APOE gene offers another example of how certain polymorphisms may predispose their bearers to chronic diseases. Each of its three common phenotypes carries a different probability of cardiovascular disease risk and responds differently to lifestyle and environmental factors, including dietary variables such as the amount and type of dietary fat. Most people in the United States have the APOE3 phenotype and respond favorably to a lower intake of dietary fat and regular exercise: their cholesterol levels drop and overall cardiovascular health improves. However, about 20% of the U.S. population carries at least one variant denoted as APOE-ε4, a polymorphism associated with elevated total cholesterol level, as well as an increased risk of both type 2 diabetes and Alzheimer disease. The SNP also abrogates the protective effects seen with moderate alcohol consumption and greatly increases the cardiovascular risks associated with smoking, dramatically boosting the risk of heart attack in such individuals.

Diet–gene interactions are highly complex and hard to predict, thus demonstrating the need for highly controlled genotypes and environmental conditions that allow for identifying different regulatory patterns based on diet and genotype. The challenges we now face may ultimately require a nutrigenomics project on the scale of the Human Genome Project in order to identify genes that cause or promote chronic disease and the nutrients that regulate or influence the activity of these genes.

—Jim Kaput

FDA National Center for Toxicological Research

"The implication here is that anyone with this genotype should be rigorously attentive to their diet and lifestyle," says Ferguson. "These individuals should avoid smoking and alcohol while undertaking exercise and eating a diet low in saturated fat. Nonetheless, at present, very few people are aware of their APOE genotype." Lack of the awareness of such SNP–diet–lifestyle interactions is not only a drawback for public health education, but also may result in null findings in epidemiologic studies when in fact certain segments of the study population are highly vulnerable to diseases that are linked with a given SNP.

Future Research Directives and Challenges

Identifying the SNP–diet and SNP–nutrient interactions that cause chronic disease is challenging because of the complexities inherent in studying genotypes and in assessing dietary and nutrient intakes. At this time, few if any of the SNP–diet associations that have been reported in epidemiologic studies have been replicated, and many have been plagued by a lack of appropriate statistical power and other methodologic problems. Ultimately, because many cases of chronic diseases are influenced by different diets, nutrition–genome interactions will not be found unless diet and genotype are controlled and changed in the experimental design (same diet with different genotypes, and different genotypes with the same diet).

[...]

Going further, they studied the combined effects of calcium or riboflavin with different levels of folate intake, since earlier studies had indicated that these dietary factors tend to interact in modifying the risk of cancer, osteoporosis, and hip fractures. Increasing one's calcium intake further enhanced the genome-protective effect of a high-folate diet whereas a high riboflavin intake further exacerbated genome damage associated with a low-folate diet. This is consistent with epidemiologic studies showing that cancer rates tend to be higher among populations that consume more red meat (which is very high in riboflavin), more alcohol (which depletes folate), and fewer vegetables (a rich source of folate).

The promise of nutrition-modulated DNA repair strategies has attracted the attention of cancer researchers in particular. "Dietary factors can act to stabilize the genome once genetic abnormalities have occurred," says gastroenterologist Graeme Young, who directs the Flinders Centre for Innovation in Cancer in Adelaide, Australia. "The traditional diet–genome approach has related protection to dietary lifestyle and germline genotype," he says. "Here we are discussing dietary interaction with the abnormal genome in potentially precancerous cells." Young and his colleagues are now planning to explore the capacity of dietary factors to regulate DNA repair mechanisms.

[...]

Bruce Ames, a molecular biologist at Children's Hospital Oakland Research Institute in California, has documented a number of polymorphisms in genes that affect the binding of coenzymes, some of which are essential vitamins. "With these types of evidence-based findings within the nutrigenomic framework, I believe we'll have more ammunition to convince government and public health officials to tackle the issue of vitamin deficiency around the world," Kaput says. "With this more targeted approach, we're more likely to see political and economic forces fall in place to solve the problem. . . . Although the complexities are substantial, I believe nutrigenomic approaches offer the best hope for understanding the molecular processes that maintain health and prevent disease."

For Fenech, one of the key objectives of nutrigenomics for society is to diagnose and nutritionally prevent DNA damage on an individual-by-individual basis. He has devised the concept of the Genome Health Clinic, a new mode of health care based on the diagnosis and nutritional prevention of DNA damage and the diseases that result therefrom. In recent years, a number of nutritional/metabolic/diagnostic testing companies such as Genova and MetaMetrix have started to sell genomic profiling tests to help guide decision making around dietary supplements. With increasingly lower prices for analyzing SNPs in individuals, the population-level potential for dietary optimization based on nutrigenomic approaches seems enormous. Even in the absence of information on an individual's genotype, it is practical to use nutrition-sensitive genome damage biomarkers, such as the micronucleus assay, to determine whether dietary and/or supplement choices are causing benefit or harm to a person's genome.

Says Fenech, "In the near future, instead of diagnosing and treating diseases caused by genome or epigenome damage, health care practitioners may be trained to diagnose and nutritionally prevent or even reverse genomic damage and aberrant gene expression. Nutrigenomics will help usher in the development of new functional foods and supplements for genome health that can be mixed and matched so that overall nutritional intake is appropriately tailored to an individual's genotype and genome status."

M. Nathaniel Mead

[Most of your genome (98.7%) is regulatory (not "Junk" DNA) - unless you eat non-personalized "junk food" - pellionisz_at_junkdna.com, January 5th, 2008]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

New Route For Heredity Bypasses DNA


ScienceDaily (Jan. 4, 2008) — A group of scientists in Princeton's Department of Ecology and Evolutionary Biology has uncovered a new biological mechanism that could provide a clearer window into a cell's inner workings.

What's more, this mechanism could represent an "epigenetic" pathway -- a route that bypasses an organism's normal DNA genetic program -- for so-called Lamarckian evolution, enabling an organism to pass on to its offspring characteristics acquired during its lifetime to improve their chances for survival. Lamarckian evolution is the notion, for example, that the giraffe's long neck evolved by its continually stretching higher and higher in order to munch on the more plentiful top tree leaves and gain a better shot at surviving.

The research also could have implications as a new method for controlling cellular processes, such as the splicing order of DNA segments, and increasing the understanding of natural cellular regulatory processes, such as which segments of DNA are retained versus lost during development. The team's findings will be published Jan. 10 in the journal Nature.

Princeton biologists Laura Landweber, Mariusz Nowacki and Vikram Vijayan, together with other members of the lab, wanted to decipher how the cell accomplished this feat, which required reorganizing its genome without resorting to its original genetic program. They chose the single-celled ciliate Oxytricha trifallax as their testbed.

Ciliates are pond-dwelling protozoa that are ideal model systems for studying epigenetic phenomena. While typical human cells each have one nucleus, serving as the control center for the cell, these ciliate cells have two. One, the somatic nucleus, contains the DNA needed to carry out all the non-reproductive functions of the cell, such as metabolism. The second, the germline nucleus, like humans' sperm and egg, is home to the DNA needed for sexual reproduction.

When two of these ciliate cells mate, the somatic nucleus gets destroyed, and must somehow be reconstituted in their offspring in order for them to survive. The germline nucleus contains abundant DNA, yet 95 percent of it is thrown away during regeneration of a new somatic nucleus, in a process that compresses a pretty big genome (one-third the size of the human genome) into a tiny fraction of the space. This leaves only 5 percent of the organism's DNA free for encoding functions. Yet this small hodgepodge of remaining DNA always gets correctly chosen and then descrambled by the cell to form a new, working genome in a process (described as "genome acrobatics") that is still not well understood, but extremely deliberate and precise.
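
Using the article's own proportions, the compression can be put into rough numbers; the human genome size below (~3.2 billion base pairs, haploid) is an assumption, since the article gives only the ratios:

    # Rough genome sizes implied by the article: the germline genome is
    # about one-third the human genome, and ~95% of it is discarded when
    # the somatic nucleus is rebuilt. Human genome size is an assumption.
    HUMAN_GENOME_BP = 3_200_000_000
    germline_bp = HUMAN_GENOME_BP / 3      # ~1.07 billion bp
    somatic_bp = germline_bp * 0.05        # ~53 million bp retained

    print(f"germline: ~{germline_bp / 1e9:.2f} Gb")
    print(f"somatic:  ~{somatic_bp / 1e6:.0f} Mb")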

Landweber and her colleagues have postulated that this programmed rearrangement of DNA fragments is guided by an existing "cache" of information in the form of a DNA or RNA template derived from the parent's nucleus. In the computer realm, a cache is a temporary storage site for frequently used information to enable quick and easy access, rather than having to re-fetch or re-create the original information from scratch every time it's needed.

"The notion of an RNA cache has been around for a while, as the idea of solving a jigsaw puzzle by peeking at the cover of the box is always tempting," said Landweber, associate professor of ecology and evolutionary biology. "These cells have a genomic puzzle to solve that involves gathering little pieces of DNA and putting them back together in a specified order. The original idea of an RNA cache emerged in a study of plants, rather than protozoan cells, though, but the situation in plants turned out to be incorrect."

Through a series of experiments, the group tested their hypothesis that DNA or RNA molecules were providing the missing instruction booklet needed during development, and also tried to determine whether the putative template was made of RNA or DNA. DNA is the genetic material of most organisms; however, RNA is now known to play a diversity of important roles as well. RNA is DNA's chemical cousin, and has a primary role in interpreting the genetic code during the construction of proteins.

First, the researchers attempted to determine if the RNA cache idea was valid by directing specific RNA-destroying molecules (a technique known as RNA interference, or RNAi) at the cell before fertilization. This gave encouraging results, disrupting the process of development, and even halting DNA rearrangement in some cases.

In a second experiment, Nowacki and Yi Zhou, both postdoctoral fellows, discovered that RNA templates did indeed exist early on in the cellular developmental process, and were just long-lived enough to lay out a pattern for reconstructing their main nucleus. This was soon followed by a third experiment that "… required real chutzpah," Landweber said, "because it meant reprogramming the cell to shuffle its own genetic material."

Nowacki, Zhou and Vijayan, a 2007 Princeton graduate in electrical engineering, constructed both artificial RNA and DNA templates that encoded a novel, pre-determined pattern; that is, templates that would take a ciliate DNA molecule consisting of, for example, pieces 1-2-3-4-5 and transpose two of the segments to produce the fragment 1-2-3-5-4. Injecting their synthetic templates into the developing cell produced the anticipated results, showing that a specified RNA template could provide a new set of rules for unscrambling the nuclear fragments in such a way as to reconstitute a working nucleus.
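
A toy sketch of the template-guided reordering this experiment describes follows; the segment sequences are invented, and the real cellular process is of course biochemical rather than string manipulation:

    # Toy model of template-guided descrambling: the 'template' (standing
    # in for the injected RNA) dictates the order in which scrambled DNA
    # segments are stitched together. Segment contents are invented.
    segments = {1: "ATG", 2: "GCA", 3: "TTC", 4: "GGA", 5: "CTA"}

    def descramble(template, segments):
        """Assemble the segments in the order given by the template."""
        return "".join(segments[i] for i in template)

    natural_template = [1, 2, 3, 4, 5]
    synthetic_template = [1, 2, 3, 5, 4]  # the transposed order injected
                                          # in the third experiment

    print(descramble(natural_template, segments))    # ATGGCATTCGGACTA
    print(descramble(synthetic_template, segments))  # ATGGCATTCCTAGGA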

"This wonderful discovery showed for the first time that RNA can provide sequence information that guides accurate recombination of DNA, leading to reconstruction of genes and a genome that are necessary for the organism," said Meng-Chao Yao, director of the Institute of Molecular Biology at Taiwan's Academia Sinica. "It reveals that genetic information can be passed on to following generations via RNA, in addition to DNA."

The research team believes that if this mechanism extends to mammalian cells, then it could suggest novel ways for manipulating genes, besides those already known through the standard methods of genetic engineering. This could lead to possible applications for creating new gene combinations or restoring aberrant cells to their original, healthy state.

Support for the team's research was provided by the National Science Foundation, the National Institutes of Health and the School of Engineering and Applied Science senior thesis research fund.

[The ENCODE report already hinted that Lamarck's ideas had been dismissed in a rush to judgment in favor of Darwin - but now we have some brilliant pathways evidenced - and as a result scores of disciplines will never be the same. pellionisz_at_junkdna.com, January 4th, 2008]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Seeking God in the Brain - Efforts to Localize Higher Brain Functions

Solomon H. Snyder, M.D., D.Sc.


[Collins talks about "The Language of God" defined as mathematics and DNA. We may be better off finding out the mathematics of both brain function and of the genome (which governs the neural networks of the brain). Why don't we separate science from religion - as Collins suggests? We are not even at our science goals yet - but at the brink of mathematical identification of brain functions in the genome, such as cerebellar sensorimotor coordination - AJP]

Neuroscientists have long eschewed global questions about brain function, and books reviewing the current state of neuroscience usually allocate only a small section to "higher functions." But with the advent of novel imaging techniques such as positron-emission tomographic scanning and functional magnetic resonance imaging, attitudes have begun to change. It is now feasible to visualize functions of discrete brain regions while subjects are engaged in diverse activities — doing arithmetic, composing songs, writing poetry, or watching pornographic movies. Information about which parts of the brain are activated during various mental activities has supplemented and, in general, confirmed previous insights derived from observations of alterations of thinking and feeling associated with brain lesions, epilepsy, and the use of diverse drugs.

Efforts to elucidate higher brain functions have intersected with a burgeoning literature on the neural underpinnings of not only language and art but also religion. At one extreme, some scientists, such as Francis Collins, in The Language of God, have even used what we know of molecular biology and brain function to argue for the existence of a personal God.1 Collins reviews anthropologic data emphasizing the universality of the search for God among a diverse group of primitive and advanced cultures over many thousands of years; he interprets this universality as implying that some basic structure in the brain "needs God." Similarly, noting that humans have an intuitive sense of right and wrong, Collins suggests that this characteristic, too, originates in an intrinsic structure of the brain. He goes so far as to conclude that the moral law was implanted in our brains by God, but many scientists have argued, from the same universality, that moral, altruistic behavior is programmed into the brain because it facilitates social behavior that leads to the preservation of the species.

Others have used similar data to argue that all of religion is an artifact of evolution. Neuroscientist David Linden, for instance, has recently suggested specific mechanisms whereby evolutionary alterations in the structure of the brain might account for the development of religion as well as love, memory, and dreams.2 As the brain evolved, he explains, the overgrown cerebral cortex came to overlie the more primitive, emotion-regulating limbic structures, which in turn surmount the most primitive brain-stem structures and the associated hypothalamus. Linden argues that the accidental linking of these portions of the brain accounts for many of the tribulations of humankind — anxiety and other emotional disturbances arise in substantial part from the ongoing war between the "rational" higher centers and the emotion-laden limbic system. Linden argues that if an "intelligent designer" had assembled the brain, it would surely have done an elegant, impeccable job, but the more we learn about the brain, the more clearly we see that it is an ad hoc concatenation of structures designed for unrelated functions — a sort of Rube Goldberg contraption. Though the brain somehow manages to function rather elegantly, breakdowns manifested in emotional and other disturbances are all too frequent.

Linden speculates about the neural mechanisms that may underlie religious impulses. He regards religious ideation as reflecting beliefs — such as the concept of a virgin birth or the notion of a God who knows every thought of every human being — that violate our everyday perception of reality. He likens such conceptualizations to the confabulations that persons with split brains arrive at in order to make sense of the incompatible data encountered by the two separated hemispheres.

In his recent book The Soul in the Brain, British neurologist Michael Trimble looks to his area of expertise, epilepsy, to explore a possible relationship between the human brain and religion: religiosity, he notes, is often brought to the fore by seizures.3 Trimble points out that some of the greatest religious figures in history had what were probably complex partial seizures, which are known to be associated with religious ideation. For instance, during Saint Paul's conversion on the road to Damascus, he is said not only to have suffered 3 days of blindness but also to have fallen to the ground frequently and experienced ecstatic visions. Muhammad described falling episodes accompanied by visual and auditory hallucinations. Joseph Smith, who founded Mormonism, reported lapses of consciousness and speech arrest, noting that "When I came to . . . I found myself lying on my back looking up at heaven." Joan of Arc reported, "I heard this voice [of an angel] . . . accompanied also by a great light."3

Trimble recalls that in The Varieties of Religious Experience, the 19th-century psychologist William James also highlighted the trances, visions, and auditory hallucinations associated with religion, emphasizing the ineffable, altered state of consciousness of most religious mystics. Such mystical states, encountered in most religions, remarks Trimble, are extraordinarily similar to the mental states elicited by psychedelic drugs such as LSD and mescaline. Almost 50 years ago, the psychiatrist Walter Pahnke came to this conclusion on the basis of experiments in which the psychedelic drug psilocybin was administered to students at the Harvard Divinity School. More recently, Roland Griffiths and colleagues have replicated these studies in a more rigorous fashion and found that subjects receiving psilocybin reported long-lasting changes in a religious sense of self.4 Drugs whose mechanism of action is understood can be powerful tools for elucidating the molecular basis of mental states — we know much more about the neurotransmitters that mediate emotions, for instance, from studying the actions of antidepressant drugs than from direct manipulations of the brain — and psychedelic drugs are known to act as agonists of one subtype of serotonin receptors.4 Since serotonin neurons arise from a discrete set of raphe nuclei in the brain, it may be possible to narrow the search for the biologic cause of at least one type of religiosity to these few cells.

But given the variability of what we mean by "religion" and "poetry," attempts to localize such purported functions within the brain are always fraught with hazards. With his focus on epileptic causes of both religious and creative impulses, Trimble enumerates several candidate regions, most of them in the temporal lobe — an area that receives a substantial input from serotonin neurons — which is consistent with what we know of sites of action of psychedelic drugs. In this issue of the Journal, Sanai and colleagues (pages 18–27) report on a study in which they mapped sites involved in diverse modes of language use in patients with gliomas who were undergoing debulking of their tumors. They found a far wider dispersal than might have been expected, with parietal and temporal as well as frontal regions providing important contributions. However, any extrapolation from a mapping of brain areas that mediate language use to likely cerebral contributions to religious or creative dispositions would be highly speculative.

So where do all these brain explorations lead us? In seeking a general relationship between religious states, poetry, and music, Trimble ascribes all three to the right, nondominant side of the brain. He assumes that integration of the activity of the right-sided emotional brain with that of the left-sided analytic brain gives rise to the greatest intellectual achievements in the arts. I suspect that major advances in science, too, are the product of more than pure reason — in the finest scientists I have encountered, I have always detected a notable creative, artistic flair.5 Artistic, intuitive approaches are evident even in the most abstract intellectual achievements, such as Einstein's theories. Needless to say, a simple dichotomy of right and left brains is a gross oversimplification. Nonetheless, as imaging technology and associated cognitive testing become ever more sophisticated, we may be able to discriminate ways in which religious and creative sensibilities relate to one another and to brain areas that mediate emotions that are deranged in psychiatric illness. Whether any of these advances will provide the answer to the cerebral basis of religion, if one exists, is anybody's guess.

Source Information

Dr. Snyder is a professor of neuroscience at the Johns Hopkins University School of Medicine, Baltimore.

References

1. Collins FS. The language of God: a scientist presents evidence for belief. New York: Free Press, 2007.
2. Linden DJ. The accidental mind: how brain evolution has given us love, memory, dreams, and God. Cambridge, MA: Belknap Press, 2007.
3. Trimble MR. The soul in the brain: the cerebral basis of language, art, and belief. Baltimore: Johns Hopkins University Press, 2007.
4. Griffiths RR, Richards WA, McCann U, Jesse R. Psilocybin can occasion mystical-type experiences having substantial and sustained personal meaning and spiritual significance. Psychopharmacology (Berl) 2006;187:268-283.
5. Snyder SH. The audacity principle in science. Proc Am Philos Soc 2005;149:141-158.

[Neuroscience and Genomics compete on healthy grounds (and Dr. Snyder excels in both) - while "competition" of Science against Religion may not deliver much good; in fact this column joins those who believe that mixing the two is not only needless but harmful. Neuroscience AND Genomics (together) may win the top prize, especially if one resulting in the other can be expressed (figuratively speaking) in "The Language of God", i.e. the divine language of mathematics. For "higher brain functions" we are most likely not even close yet. However, for the simplest goals of Neuroscience, e.g. explaining sensorimotor coordination by neural networks of the cerebellum in the "divine language" (mathematics), some went on record a quarter of a century ago (predictions experimentally verified). Today, we are at the brink of expressing the genomic underpinnings of generating cerebellar (e.g. Purkinje neuron) structure and function - thus identifying a triangulation point for mathematically expressing (a) (primitive) brain function, (b) neural network mathematical operations, and even (c) some genomic principles and specific genetic information that govern development of such neural networks. (One peer-reviewed paper is out, with experimental support of quantitative predictions; another has been accepted by peer review and is "in press".) "Godly?" - one does not think so. "Another success story of the divine language of science (mathematics)" - quite possibly. Those who opt for this modest strategy of separating science and religion, please prepare to submit to the "genomic underpinnings of cerebellar neuronal networks" Special Issue (by late Spring), commissioned to Editorship by AJP - pellionisz_at_junkdna.com, January 3rd, 2008]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Seattle research to map disease with U.S. grant

By Sandi Doughton

Tuesday, January 1, 2008

It used to be that drugs were discovered largely by luck.

Ages ago, observant healers noticed the bark of a certain willow tree reduced fever and pain. Centuries later, scientists identified the chemical responsible and figured out how to manufacture aspirin.

In today's era of designer drugs, researchers start with a detailed understanding of a disease or disorder, then tailor remedies to match.

Now, Seattle scientists are joining this methodical search for cures in a big way.

With a $30.6 million federal grant, the Seattle Biomedical Research Institute (SBRI) will unravel the structures of thousands of toxins and other compounds produced by bacteria, viruses and parasites. The goal is to discover molecular Achilles' heels that can be exploited to design new drugs and vaccines.

The diseases that will be studied include influenza, drug-resistant tuberculosis, bacteria that cause ulcers and the drug-resistant strain of staphylococcus responsible for increasing numbers of skin infections. Some of the bugs, like the ones that cause cholera and typhus, are potential bioterror agents.

"We'll really be looking at things that haven't had huge amounts of resources thrown at them," said Peter Myler, head of the private lab's new Seattle Structural Genomics Center for Infectious Disease.

The effort won't yield drugs directly. But the results — which will be publicly available — will lay the groundwork for future drug development.

With an assembly-line approach, researchers will be able to process a hundred or more compounds at the same time, greatly speeding up the process and cutting the cost per compound.

"Our approach is not particularly different from other labs, but the scale is," Myler said.

The focus will be on key proteins produced by disease-causing microbes. For example, botulinum toxin — famed for smoothing wrinkles and causing food poisoning — is a neurotoxic protein produced by a bacterium. Other proteins allow viruses to slip inside cells, enable parasites to evade host immune systems, or help drug-resistant bugs shrug off the effects of antibiotics.

"Proteins are what do all the work in cells," Myler said. "Most traditional drugs target a particular protein and disrupt its activity in some way or another."

Protease inhibitors, the revolutionary class of drugs that has extended the lives of millions of HIV-infected people, work by interfering with a protein crucial for viral replication. Scientists had to understand that protein's structure before developing the treatment, said Lance Stewart, president of deCODE biostructures, part of the SBRI project.

The Bainbridge Island operation is a subsidiary of deCODE genetics, an Icelandic company that offers personal genome sequencing for $985 and is working on drugs based on genetic susceptibilities to diseases.

For the five-year SBRI project, scientists at deCODE will use an X-ray technique to analyze proteins down to the atomic level. The proteins themselves will be produced at SBRI's headquarters in South Lake Union.

"You get very high-resolution pictures," Stewart said. "Once you see what the structure of your target looks like, you can think about designing chemicals to interfere with or improve its activity."

A three-dimensional model of a protein can reveal folds or slots where a drug could attach, said Valentina DiFrancesco, who oversees the project for the National Institute of Allergy and Infectious Diseases (NIAID), a branch of the National Institutes of Health. The institute also funded a second, $30 million structural genomics center at Northwestern University.

Some scientists have questioned the value of a similar federal project to determine the structures of large numbers of proteins without regard to their function. The result, says an analysis published last year in the journal Science, is a lot of basic information — but not an impressive number of drug leads.

Since the NIAID project will focus exclusively on compounds known to be important in disease, it should yield more promising compounds for drug developers to follow up on, DiFrancesco said.

None of the toxic bugs will be grown in Seattle, Myler said. To mass-produce proteins, researchers need only the snippets of DNA, or genes, that code for the proteins.

Researchers from the University of Washington and Battelle Northwest in Richland also will participate in the project.

[Genomic drug discovery is the future for "Big Pharma". - pellionisz_at_junkdna.com, January 1st, 2008]

Study Maps Life In Extreme Environments

ScienceDaily (Dec. 30, 2007) — A team of biologists has developed a model mapping the control circuit governing a whole free-living organism. This is an important milestone for the new field of systems biology and will allow the researchers to model how the organism adapts over time in response to its environment. This study marks the first time researchers have accurately predicted a cell's dynamics at the genome scale (for most of the thousands of components in the cell). The findings are based on a study of Halobacterium salinarum, a free-living microbe that lives in hyper-extreme environments.

The researchers focused on a little-studied organism that can survive high salt, radiation, and other stresses that would be deadly to most other organisms. By focusing on such an organism, the researchers were able to show definitively that they could understand and model the circuit controlling the cell directly from experiments designed to measure all genes in the genome simultaneously. Such systems-biology experiments belong to a new scientific field, systems biology, which examines how genes influence each other via extremely large networks of interaction and how these networks respond to stimuli, adapting over time to new environments and cell states. The field has blossomed over the past 10 years, spurred by successful mapping of genomic systems.

By a combination of experimental and algorithmic advances, studies in this area have shown that scientific knowledge can go from genome to a functional and dynamical draft model of the whole organism in a relatively short time. Important previous studies in this area identified cell components (genome sequencing) and how cell components are connected. But the study in Cell went beyond previous scholarship and accurately modeled how Halobacterium, an important organism in high-salt environments such as the Dead Sea or Utah's Great Salt Lake, functioned over time and responded to changing environmental conditions. The researchers were, for the first time, able to predict how over 80 percent of the total genome (several thousand genes) responded to stimuli over time, dynamically rearranging the cell's makeup to meet environmental stresses.
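[Editor's sketch, not the study's published method: the flavor of such genome-scale modeling can be illustrated with regression-based network inference. Regressing each gene's rate of change on the expression of all genes (with a ridge penalty) yields a candidate influence matrix whose large entries are putative regulatory edges. All numbers below are made up for illustration.]

import numpy as np

# Toy expression time series: rows = genes, columns = time points.
rng = np.random.default_rng(0)
n_genes, n_times = 20, 50
X = rng.normal(size=(n_genes, n_times))

dX = X[:, 1:] - X[:, :-1]    # finite-difference rate of change per gene
P = X[:, :-1]                # predictor: expression at the previous step

# Ridge regression: solve dX ~ W @ P for an influence matrix W, where
# W[i, j] estimates how strongly gene j drives the change in gene i.
lam = 1.0
W = dX @ P.T @ np.linalg.inv(P @ P.T + lam * np.eye(n_genes))

# Entries well above the noise floor are candidate regulatory edges.
edges = np.argwhere(np.abs(W) > 2.0 * np.abs(W).std())
print(f"{len(edges)} candidate edges out of {n_genes ** 2} possible")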

The research appears in the journal Cell. The study's lead authors are New York University Assistant Biology Professor Richard Bonneau, who holds appointments at NYU's Center for Genomics & Systems Biology and the university's Courant Institute of Mathematical Sciences, and Nitin Baliga of the Institute for Systems Biology in Seattle, WA. The study also included researchers at the University of Maryland, Vanderbilt University, and the University of Washington.

"This organism is amazingly versatile and tolerates lots of different extreme environmental stresses," said Bonneau. "It does this by making decisions and dynamically changing the levels of genes and proteins; if it makes incorrect decisions it dies. Our model shows how these decisions get made, how the bug responds."

"This is also a good model to explain how, in general, cells make stable decisions as they move through time scales," added Bonneau, who is part of an NYU research group that handled the analysis of this genome. "If you want to understand how cells respond to their environments, the model offers a clearer window than previously existed for this domain of life."

The collaboration between Baliga's and Bonneau's research groups represents a type of partnership becoming more essential to biological and biomedical research: biologists and computer scientists teaming up to design experiments and analyses that synergize to decipher living systems. The resulting models of the cell are more comprehensive - reaching genome scale - more accurate, and more relevant to biologists and biomedical researchers hoping to understand the whole system.

Bonneau added that by understanding how biological systems function, researchers can then turn their attention to engineering the biosynthesis of biofuels and pharmaceuticals.

"We are now gearing up to try this sort of analysis on several other organisms," he noted. "In addition, because this study examined the dynamics of a key environmental microbe it offers a window into understanding life in extreme environments, in some cases created by human activities, such as the concentration of pollution by evaporation or high salt marine environments."

The study was sponsored by the National Science Foundation and the U.S. Department of Energy.

[Genomic- and Systems Biology is Information Science (more precisely, mathematics) - pellionisz_at_junkdna.com, December 30, 2007]

Breakthrough of the year - Human genetic variation [Misnomer: 2007 was the year of BreakDown]
Science 21 December 2007
Elizabeth Pennisi


[play video of the breakdown of Genomics as Science Magazine knew it]

Equipped with faster, cheaper technologies for sequencing DNA and assessing variation in genomes on scales ranging from one to millions of bases, researchers are finding out how truly different we are from one another [not a word here about the fact that the overwhelming majority of these differences lie in what used to be called "Junk DNA" - AJP]

The unveiling of the human genome almost 7 years ago cast the first faint light on our complete genetic makeup. Since then, each new genome sequenced and each new individual studied has illuminated our genomic landscape in ever more detail. In 2007, researchers came to appreciate the extent to which our genomes differ from person to person and the implications of this variation for deciphering the genetics of complex diseases and personal traits.

Less than a year ago, the big news was triangulating variation between us and our primate cousins to get a better handle on genetic changes along the evolutionary tree that led to humans. Now, we have moved from asking what in our DNA makes us human to striving to know what in my DNA makes me me.

Techniques that scan for hundreds of thousands of genetic differences at once are linking particular variations to particular traits and diseases in ways not possible before. Efforts to catalog and assess the effects of insertions and deletions in our DNA are showing that these changes are more common than expected and play important roles in how our genomes work--or don't work. By looking at variations in genes for hair and skin color and in the "speech" gene, we have also gained a better sense of how we are similar to and different from Neandertals.

Already, the genomes of several individuals have been sequenced, and rapid improvements in sequencing technologies are making the sequencing of "me" a real possibility. The potential to discover what contributes to red hair, freckles, pudginess, or a love of chocolate--let alone quantifying one's genetic risk for cancer, asthma, or diabetes--is both exhilarating and terrifying. It comes not only with great promise for improving health through personalized medicine and understanding our individuality but also with risks for discrimination and loss of privacy.

Turning on the flood lamps

Even with most of the 3 billion DNA bases lined up in the right order, there was still much that researchers couldn't see in the newly sequenced human genome in 2001. Early comparative studies threw conserved regulatory regions, RNA genes, and other features into relief, bringing meaning to much of our genome, including the 98% that lies outside protein-coding regions. These and other studies, including a pilot study called ENCODE, completed this year, drove home how complex the genome is.

There are an estimated 15 million places along our genomes where one base can differ from one person or population to the next. By mid-2007, more than 3 million such locations, known as single-nucleotide polymorphisms (SNPs), had been charted. Called the HapMap, this catalog has made the use of SNPs to track down genes involved in complex diseases--so-called genome-wide association studies--a reality. More than a dozen such studies were published this year.

Traditionally, geneticists have hunted down genes by tracking the inheritance of a genetic disease through large families or by searching for suspected problematic genes among patients. Genome-wide association studies go much further. They compare the distribution of SNPs--using arrays that can examine some 500,000 SNPs at a time--in hundreds or even thousands of people with and without a particular disease. By tallying which SNPs co-occur with symptoms, researchers can determine how much increased risk is associated with each SNP.
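[Editor's sketch of the "tallying" step, with hypothetical counts - not any particular study's pipeline: for a single SNP, a 2x2 table of allele counts in cases versus controls yields an odds ratio and a chi-square association test; a genome-wide scan simply repeats this (with multiple-testing correction) for each of the ~500,000 SNPs on the array.]

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical allele counts at one SNP (2,000 chromosomes per group).
#                  risk allele  other allele
table = np.array([[640,         1360],     # cases
                  [500,         1500]])    # controls

chi2, p, dof, expected = chi2_contingency(table)

# Odds ratio: how much the risk allele's odds are elevated in cases.
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
print(f"OR = {odds_ratio:.2f}, chi2 = {chi2:.1f}, p = {p:.1e}")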

In the past, such links have been hard-won, and most have vanished on further study. This year, however, researchers linked variants of more than 50 genes to increased risk for a dozen diseases. Almost all the variants exert relatively small effects, in concert with many other genetic factors and environmental conditions, and in many cases the variant's real role has not yet been pinned down. But the sheer numbers of people studied have made even skeptics hopeful that some of these genetic risk factors will prove real and will help reveal underlying causes.

The Wellcome Trust, the U.K.'s largest biomedical charity, began to put its weight behind genome-wide association studies in 2005 and recruited 200 researchers to analyze the DNA of 17,000 people from across the United Kingdom. The results are part of an avalanche of genetic information becoming available as more and more geneticists agree to share data and as funding agencies require such exchanges. In June, the consortium published a mammoth analysis of seven diseases, including rheumatoid arthritis, bipolar disorder, and coronary artery disease. It also found several gene variants that predispose individuals to type 1 diabetes and three new genes for Crohn's disease.

Several large studies have also pinpointed type 2 diabetes genes. One French study involving nonobese diabetics found that a version of a gene for a protein that transports zinc in the pancreas increased the risk of this disease. Three simultaneous reports involving more than 32,000 participants uncovered four new diabetes-associated gene variants, bringing to 10 the number of known non-Mendelian genetic risk factors for type 2 diabetes. These finds strongly point to pancreatic beta cells as the source of this increasingly common chronic disorder.

New gene associations now exist for heart disease, breast cancer, restless leg syndrome, atrial fibrillation, glaucoma, amyotrophic lateral sclerosis, multiple sclerosis, rheumatoid arthritis, colorectal cancer, ankylosing spondylitis, and autoimmune diseases. One study even identified two genes in which particular variants can slow the onset of AIDS, demonstrating the potential of this approach for understanding why people vary in their susceptibility to infectious diseases.

Genomic hiccups

Genomes can differ in many other ways. Bits of DNA ranging from a few to many thousands, even millions, of bases can get lost, added, or turned around in an individual's genome. Such revisions can change the number of copies of a gene or piece of regulatory DNA or jam two genes together, changing the genes' products or shutting them down. This year marked a tipping point, as researchers became aware that these changes, which can alter a genome in just a few generations, affect more bases than SNPs.

In one study, geneticists discovered 3600 so-called copy number variants among 95 individuals studied. Quite a few overlapped genes, including some implicated in our individuality--blood type, smell, hearing, taste, and metabolism, for example. Individual genomes differed in size by as many as 9 million bases. This fall, another group performed an extensive analysis using a technique, called paired-end mapping, that can quickly uncover even smaller structural variations.
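[Editor's sketch of the simpler read-depth approach to finding copy-number variants - not the paired-end mapping method the article describes: average the sequencing coverage in windows along a chromosome and flag windows whose depth departs from the diploid baseline. The data below are simulated.]

import numpy as np

rng = np.random.default_rng(1)
depth = rng.poisson(30, size=100_000).astype(float)  # ~30x diploid coverage
depth[40_000:42_000] *= 1.5                          # simulate a one-copy gain

win = 1_000
windows = depth.reshape(-1, win).mean(axis=1)        # mean depth per window
ratio = windows / np.median(windows)                 # 1.0 = two copies

# Flag windows departing from the diploid ratio of 1.0.
calls = [(i * win, round(r, 2)) for i, r in enumerate(ratio)
         if r > 1.35 or r < 0.65]
print(calls)   # the two windows covering the simulated gain stand out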

These differences matter. One survey concluded that in some populations almost 20% of differences in gene activity are due to copy-number variants; SNPs account for the rest. People with high-starch diets--such as in Japan--have extra copies of a gene for a starch-digesting protein compared with members of hunting-gathering societies. By scanning the genomes of autistic and healthy children and their parents for copy-number variation, other geneticists have found that newly appeared DNA alterations pose a risk for autism.

New technologies that are slashing the costs of sequencing and genome analyses will make possible the simultaneous genome-wide search for SNPs and other DNA alterations in individuals. Already, the unexpected variation within one individual's published genome has revealed that we have yet to fully comprehend the degree to which our DNA differs from one person to the next. Such structural and genetic variety is truly the spice of our individuality.

[2007 was the year when our (mis)understanding of the Genome dramatically and publicly broke down. While Science Magazine (for understandable reasons) plays up as a "breakthrough" what is really the meltdown of Genomics as we used to know it, the real breakthrough is a new understanding of how the Genome works. Another way of dealing with probably the most significant scientific and medical revolution is simply to hide from the US public a brilliant article in Newsweek (by Lee Silver, Princeton Professor) - so that those suffering from health conditions that were profoundly misunderstood until "Post-ENCODE Genomics" do not avalanche the scientific and medical establishment with their outrage. Ultimately, none of these media and PR methods will do - though they might buy some time for the general public to gradually get used to the fact that the prevailing understanding of how the genome is organized - and how it functions in health and disease - is wrong in its fundamentals. Francis Collins, the architect of ENCODE, deserves enormous credit for his honesty, upon releasing the results, that "the scientific community will have to re-think long-standing beliefs" (our axioms). This worker quietly disregarded some glaringly false dogmas and since 1989 worked feverishly, albeit in a clandestine mode, to create an informatically sound better understanding. With the "wilderness years of pre-ENCODE" now over, a paper outlining his new principle was submitted within 6 months of the open request and was accepted by peer review on the 18th of December, 2007. - pellionisz_at_junkdna.com, December 21, 2007]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Beijing Genomics Institute to Offer Sequencing Services [How big nations do in a Global Competition?]

[December 24, 2007]

NEW YORK (GenomeWeb News) – The Beijing Genomics Institute said yesterday that it plans to use its fleet of 120 Sanger and next-generation sequencing instruments to offer sequencing services to the global market.


In a statement, BGI said that it currently has more than 100 Applied Biosystems 3730xl sequencers, seven Illumina Genome Analyzers, and two ABI SOLiDs. The institute said these instruments enable a sequencing throughput of 250 million base pairs per day using Sanger and 4 billion base pairs per day using the next-generation instruments.
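[Taking the press release's figures at face value - an editor's back-of-the-envelope calculation in Python; the even per-instrument split is an assumption based on the stated fleet sizes:]

# Implied per-instrument throughput from the figures BGI reports.
sanger_total_bp = 250e6    # 250 million bp/day across ~100 ABI 3730xl
nextgen_total_bp = 4e9     # 4 billion bp/day across 7 GAs + 2 SOLiDs

print(f"Sanger:   ~{sanger_total_bp / 100 / 1e6:.1f} Mbp/day per 3730xl")
print(f"Next-gen: ~{nextgen_total_bp / 9 / 1e6:.0f} Mbp/day per instrument")
# -> roughly 2.5 vs ~444 Mbp/day: a ~180-fold throughput gap per machine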


BGI said that it plans to provide these services at “dramatically reduced cost and improved speed,” but did not provide further details.


The institute said it considers its bioinformatics capabilities to be an advantage in the sequencing services market. BGI said it has more than 100 bioinformatics specialists, 2,000 CPUs, and more than 500 terabytes of storage.


In addition to sequencing, BGI said it will offer bioinformatics services including sequence data processing, sequence assembly, gene annotation, expression level research, EST and microarray analysis, SNP discovery, genotype and haplotype analysis, primer design, software development, system integration, structure modeling, and database and website construction.

[2007 might better be called the kick-off of an all-out Global Competition in PostModern Genomics. China recently declared that (having learnt in ENCODE 1.0 what it needed) it will not participate in ENCODE 2.0 - but will go instead for "diseases of the yellow race". Just days later, the 3rd person in the world with a fully sequenced Genome (after Venter and Watson) became a Chinese man. Now here is the third announcement: China declaring its superpower status in PostModern Genomics. For the moment the sequencers are not (known to be) of Chinese origin - but without a doubt China will do much better in the Global competition as an "independent sequencer". First, while sequencers are largely automated, there is still (some) labor involved - where China can easily undercut the high overhead of the US, UK, and Europe. Second, there is no "entrenched establishment" of "pre-ENCODE" genomics in China. Thus, while the US wastes a decade or so on "the revolution eating its children", China is leaping ahead to PostGenetics in PostModern Genomics (that is, information science and technology). - pellionisz_at_junkdna.com, December 25, 2007]

Hereditary diseases taking toll on Arab region [How big nations (will) do in a Global Competition]

By a staff reporter

23 December 2007



DUBAI — “Arab nations spend more than $30 billion every year on patients suffering from hereditary diseases,” Dr Ghazi Omar Tadmori, Assistant Director, Centre for Arab Genetic Diseases, Dubai, an affiliate of the Shaikh Hamdan bin Rashid Award for Medical Sciences, said yesterday.


Dr Tadmori said this during a lecture that was part of a workshop held at the College of Higher Studies at the Arabian Gulf University in Bahrain early this month.

“The occurrence of hereditary diseases is also growing each year,” he added.

He pointed out that the rising number of genetic disease cases is putting more pressure on medical institutions in Arab countries.

Dr Tadmori said the Arab Centre for Genetic Studies in Dubai aimed to conduct in-depth research on human hereditary conditions to curb the spread of genetic diseases globally, particularly in the Arab World.

Dr Tadmori observed that valuable information on the spread of genetic diseases was available in the UAE, Bahrain and Oman. This information, he said, was the first step to map the statistics and build data on hereditary diseases in the Arab region.

As per the indications, there are more than 240 genetic diseases in the UAE, 114 in Bahrain and 250 in Oman.

These figures pointed to considerable hereditary diversity within a small geographic region; their accuracy would be further improved once the mapping project was completed, Dr Tadmori said.

Initial statistics from the centre showed that there are more than 850 genetic diseases in the entire Arab region; the number is expected to exceed 1,000 once the data collection is completed.

Dr Tadmori said thousands of research papers on hereditary diseases published in the Arab world focused only on clinical diagnosis. “They should be combined with other studies on the genetic roots of these diseases,” he opined.

This combination would serve as an essential introduction to genetic care in the Arab World and contribute to the international efforts to identify specific functions of genes within the DNA genome. The findings were released in 2003 when the profile of the human genome was completed.

Dr Tadmori, who is also an academician and researcher in biology, visited the Arabian Gulf University in the framework of a joint cooperation between the centre and the university.

The seven-day workshop, organised jointly by the centre and the university, focused on the applications of computer sciences in the field of medicine, biology, hereditary and life sciences.

The release of the study on the human genome was a breakthrough in the medical field, as it provided methods to study the human genetic structure very quickly, which helped simplify genetic diagnosis and specify the genetic vulnerability of individuals and peoples due to environmental and historic factors.

Dr Tadmori added that diseases such as diabetes and high blood pressure and genetic anaemia diseases like Thalassaemia and Mongolic Anaemia were spreading in epidemic proportions in the Gulf region due to various social factors caused by the high percentage of inter-marriages between relatives.

He noted that the latest studies on the genome were backed by modern computer technologies which provided an international genetic database that contributed to developing a new type of pharmacology science, including genome pharmacology.

“Scientists are working to develop new drugs that are more appropriate to individuals in line with their genomes. When this science progresses to the desired level, it will produce a new medication system based on each one’s genome, making it easier,” he added.

[This column has covered the "rising giants" (China and India) in Post-ENCODE Genomics (PostGenetics). China openly declared its departure from the 11-country ENCODE (led by the USA) to focus on "diseases of the yellow race". The Dubai Genome Center has also been covered. It is only a matter of (short) time before we see the full sequencing of an Arab person - and a slice of the region's vast resources spent on the specific diseases that occur there at higher rates. Beyond the "junk DNA disease" driver there is a "bioenergy driver", not mentioned in the article, though it is in the best interest of oil-rich countries to prepare for the coming "alternative biofuels" against the time when the World runs out of oil. Watch for massive investment and an "anti-brain drain" (mostly from the USA) towards Arab Genome initiatives in Dubai and elsewhere - pellionisz_at_junkdna.com, December 21, 2007]

4th International Greek Biotechnology Forum [How small nations (should) do in a Global Competition]

[A quick selection from the 75 distinguished speakers - AJP]

Prof. Martin John Evans (co-winner of the Nobel Prize in Physiology or Medicine-2007), Professor of Mammalian Genetics and Director of the School of Biosciences, Cardiff University

Prof. D. Trichopoulos, Vice President of National Research & Technology Council/Hygiene & Epidemiology Department, Medical School, University of Athens/ Vincent L. Gregory Professor of Cancer Prevention, Department of Epidemiology, Harvard University

Prof. A. Trakatellis, Vice President of European Parliament

Mr. M. Kyprianou, Health Commissioner, European Commission

Dr. Ch. Vasilakos, Counsellor for Research & Technology, Permanent Representation of Greece to the EU

Prof. F. Kafatos, Chairman, Scientific Council of the European Research Council

Prof. G. Avlonitis, President of European Marketing Academy/Professor of Marketing, Department of Marketing and Communication, Athens University of Economics and Business

Mr. E. Bonoris, General Director BIONOVA, Board Member of EuropaBio, NCP Coordinator

Dr. M. Polymeropoulos, Chief Executive Officer, Vanda Pharmaceuticals Inc, U.S.A

Dr. Vanhemelrijck, Secretary General EuropaBio (European Association of Bioindustries)

Mr. S. Stasinos, General Director of Hellenic Industrial Property Organisation

Mr. A. Kastanis, General Director of Research & Technology, GSRT

Prof. A. Pagiatakis, President of the Foundation for Research & Technology - Hellas

[Some small nations of comparable size try to "compete" based on questionable tactics. More about that at the appropriate time - pellionisz_at_junkdna.com, December 14, 2007]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Forget mistletoe - what about DNA? A new dating service matches singles using major histocompatibility complex genes
The Scientist, 14th December 2007, 06:09 PM GMT

A new dating service that launched this week for Boston-area singles claims that it can get the chemistry right when fixing up potential mates -- literally. ScientificMatch.com uses DNA samples from customers to match them with others who have different alleles for major histocompatibility complex genes.

MHC proteins sit on the surface of cells and detect pathogens, but they also appear to play a role in sexual attraction. In sniff tests of dirty t-shirts, people tend to be most attracted to the scent of the shirt whose owner has different MHC alleles from the sniffer. One explanation is that this phenomenon evolved to promote genetic diversity between mates.

For $1,995 and a cheek swab sent off for DNA analysis, customers can find the love of their lives, or so says Eric Holzle, a Massachusetts engineer and long-time dater. Kerry Grens spoke to him on December 11, the day the site went live. At the time, he was driving, and didn't know if anyone had signed up.

KG: What is the algorithm for matching up customers?

EH: We look at 6 alleles, 3 genes. They are all HLA [human leukocyte antigen] genes: HLA-A, HLA-B and the third is HLA-DRB1. The reason we look at those genes is because they have the highest degree of polymorphism and those are ones scientists speculate influence our body odor. We match for the most amount of difference... None of those polymorphisms should be common between them.
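[Editor's sketch: the rule Holzle states is simple enough to put in a few lines of Python (allele names made up; emphatically not the company's implementation) - two candidates are compatible only if they share no allele at any of the three typed loci.]

def mhc_compatible(a: dict, b: dict) -> bool:
    """Holzle's stated rule: no shared allele at any typed HLA locus."""
    return all(not set(a[locus]) & set(b[locus])
               for locus in ("HLA-A", "HLA-B", "HLA-DRB1"))

alice = {"HLA-A": ("A*01", "A*02"), "HLA-B": ("B*07", "B*08"),
         "HLA-DRB1": ("DRB1*03", "DRB1*04")}
bob = {"HLA-A": ("A*03", "A*11"), "HLA-B": ("B*27", "B*44"),
       "HLA-DRB1": ("DRB1*07", "DRB1*15")}

print(mhc_compatible(alice, bob))   # True: all six allele pairs differ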

KG: What are the chances of finding someone with 100 percent different polymorphisms?

EH: We need to collect research as people sign up. According to some data we've seen, based on DNA chemistry, we expect somewhere around 20-30 percent of the population to be compatible with each other. But we don't just match for chemistry, but personality and personal preferences, like the distance from where you live and age range. Once you include those criteria you're whittling down significantly who person A will be compatible with.

KG: What got you interested in and familiar with this science?

EH: I was basically destitute and living in one of my parents' houses. I had finished a couple of failed projects, and I knew I wanted to do something with the internet. I've been single my whole life and have been a dater. I was watching TV one night and they did a documentary on how people find the odor of other people attractive when their HLA genes are different. I thought, this is a fantastic idea to base a dating Web site on. The more research I've done, the more benefits I've found to matching HLA genes. They go well beyond matching odor and scent.

KG: What are some of those additional benefits?

EH: I list 6 on my website, the first is natural body odor. Number 2 is a more satisfying sex life. Number 3, if you're a woman and matched up with a proper HLA partner, you have a higher rate of orgasms. That came out in a University of New Mexico report in 2006. It's interesting because it leads into another benefit, which is there's less cheating when people are properly matched up.

KG: Are you aware of studies that looked at HLA genes and divorce rate or happiness?

EH: No. On those particular points, only the one study at the University of New Mexico was done on humans.

KG: You said you're a single dater, will you also be a member of ScientificMatch.com?

EH: Absolutely. No question at all. I want to find chemistry!

[The "Internet Boom" really became palpable when start-ups mushroomed based on business models that could be best characterized as "frivolous". (For instance, normal business metric did not apply - it was irrelevant if the company was profitable or had any business model at all - as long as it attracted customers by any means). Since this latest of "personal genomics" web-service carries a "disclaimer" (below) what really matters is that the "boom of consumer genomics" has reached yet another stage.

[Their Disclaimer: "The topic of human sexual pheromones is controversial, because some in the commercial world insist that such chemicals, (or their synthetic or natural alternatives,) can significantly increase your sex appeal—but the scientific community hasn’t yet recognized definitive proof that they work. In fact, scientists aren’t even convinced that we experience pheromones at all. While it’s suspected that many of the outstanding benefits of genetic matching are related to pheromone communication, it doesn’t really matter for the purposes of our introduction service. The sources cited on this website show evidence of the many incredible benefits of immune system DNA matching. But none of them—or any other independent, peer-reviewed, published studies—prove that pheromones are the mechanism by which those benefits are triggered or communicated."

pellionisz_at_junkdna.com, December 12, 2007]

More “Functional” DNA in Genome than Previously Thought

Surrounding the small islands of genes within the human genome is a vast sea of mysterious DNA. While most of this non-coding DNA is junk, some of it is used to help genes turn on and off. [Oops! If it is "mysterious", one cannot *know* that "most of it is junk"; it is equally likely that most of it is functional. When do we stop the ridiculous game of guessing what percentage of the unknown is "functional" and what percentage is not? Why not focus on whatever percentage is not junk, and produce theoretically sound, experimentally testable scientific hypotheses about what that "non-coding" DNA is actually coding? - AJP] As reported online this week in Genome Research, Hopkins researchers have now found that this latter portion, which is known as regulatory DNA and contributes to inherited diseases like Parkinson’s or mental disorders, may be more abundant than we realize.

By conducting an exhaustive analysis of the DNA sequence around a gene required for neuronal development, Andrew McCallion, Ph.D., an assistant professor in the McKusick-Nathans Institute of Genetic Medicine, and his team found that current computer programs that scan the genome looking for regulatory DNA can miss more than 60 percent of these important DNA regions. [Let us do a little calculation here, to check the mathematical rigor of those who "talk about Junk DNA". It is known that 98.7% of the (human) DNA used to be considered "junk". If current computational tools can miss more than 60 percent of the important functional regions, then - taking that statement at face value - up to 60% of the 98.7% non-coding portion (about 59.2% of the genome), plus the 1.3% that is protein-coding, i.e. roughly 60.5% of the DNA, may be functional. That leaves at most 39.5% as candidate "junk" - and 39.5% is certainly not "most", if I recall my elementary school mathematics of well over half a century ago correctly - AJP]
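[The bracketed arithmetic above, made explicit in Python - an editor's illustration; the 60% figure is the paper's reported miss rate, used here as an upper bound on how much non-coding DNA could be functional:]

noncoding = 98.7   # % of the genome historically labelled "junk"
coding = 1.3       # % of the genome that is protein-coding
missed = 0.60      # fraction of functional regions the scanners can miss

possibly_functional = coding + missed * noncoding
print(f"up to {possibly_functional:.1f}% of the genome may be functional,")
print(f"leaving at most {100 - possibly_functional:.1f}% as candidate junk")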

The current methods find regulatory sequences by comparing DNA from distantly related species, under the theory that functionally important regions will appear more similar in sequence than non-functional regions. “The problem with this approach, we have discovered,” says McCallion, “is that it’s often throwing the baby out with the bath water. So while we believe sequence conservation is a good method to begin finding regulatory elements, to fully understand our genome we need other approaches to find the missing regulatory elements.”

McCallion had suspected that using sequence conservation would overlook some regulatory DNA, but to see how much, he set up a small pilot project looking at the phox2b gene; he chose this gene both because of its small size and his interest in nerve development (phox2b is involved in forming part of the brain associated with stress response as well as nerves that control the digestive system).

The researchers created what they call a “tiled path,” cutting up the DNA sequence around the phox2b gene into small pieces, then inserting each piece into zebrafish embryos along with a gene for a fluorescent protein. If a phox2b fragment was a regulatory element, then it would cause the protein to glow. By watching the growing fish embryos - which have the advantage of being transparent - the researchers could see which pieces were regulators.

They uncovered a total of 17 discrete DNA segments that had the ability to make fish glow in the right cells. The team then analyzed the entire region around the phox2b gene using the five commonly used computer programs that compute sequence conservation; these established methods picked up only 29 percent to 61 percent of the phox2b regulators McCallion identified in the zebrafish experiments.

“Our data supports the recent NIH encyclopedia of DNA elements project, which suggests that many DNA sequences that bind to regulatory proteins are in fact not conserved,” says McCallion. “I hope this pilot shows that these types of analyses can be worthwhile, especially now that they can be done quickly and easily in zebrafish.”

McCallion is now planning a larger study of other neuronal genes. “I think we are only starting to realize the importance and abundance of regulatory elements; by regulating the gene activity in each cell they help create the diverse range of cell types in our body.”

[The truly important part is not the elementary school arithmetic error, but the admission that currently available computational tools are outrageously inadequate for a proper study (and utilization) of the "full genome". Given that with Personalized Genomics there will be more "full sequences" ready for analysis than most of the industry cares to think of, some particularly agile "tool manufacturers" are just as busy as the shovel-makers were at the dawn of the realization that there was gold to dig in California. - pellionisz_at_junkdna.com, December 12, 2007]

Humans Evolving More Rapidly Than Ever, Say Scientists

By Brandon Keim | December 10, 2007 | 5:02:59 PM | Categories: Evolution

[From Ice Age to Ice Age - it is mostly in our "Junk" - AJP]

Look out, future, because here we come: scientists say the speed of human evolution increased rapidly during the last 40,000 years -- and it's only going to get faster.

The findings, published today by a team of U.S. anthropologists in the Proceedings of the National Academy of Sciences, overturn the theory that modern life's relative ease has slowed or even stopped human adaptation. Selective pressures are still at work; they just happen to be different than those faced by our distant ancestors.

"We're more different from people 5,000 years ago than they were from Neanderthals," said study co-author and University of Utah anthropologist Henry Harpending.

In the study, researchers analyzed genomes from 270 people belonging to four disparate ethnic groups: Han Chinese, Africa's Yoruba tribe, Japanese and Utah Mormons. By comparing areas of difference and similarity, they determined that about seven percent of the genome has undergone significant change since the end of the last Ice Age.

If human beings had always evolved at such a rapid clip, said the researchers, genetic differences between people and chimpanzees would be 160 times greater than they are.

Driving the changes are environmental fluctuations and population growth. As the number of people swells, so do the number of mutations generated by random chance. Further selecting for disparate genetic inheritances are the diverse terrains, climates and social structures inhabited since the glaciers retreated.

The findings contradict the hypothesis that evolution must be slowing down because people who once would have died are sustained by modern medicine and social safety nets. They also suggest that genetic differences between different ethnic groups can be significant.

"The actual genes that are sweeping have not been thoroughly identified in all cases, but we can see interesting patterns," said Harpending. "There are something like 6 genes, all broken African genes, responsible for European light skin, blue eyes, blonde hair, etc. They are evolving fast in Europe. Meanwhile, other genes responsible for light skin are sweeping in Asia, and they are different from those in Europe."

Asked about James Watson's controversial claims that intelligence evolved less effectively in people of African descent, Harpending said the study wasn't designed to test such characteristics. He also cautioned against interpreting the findings as suggesting that people are becoming fundamentally better.

"Some of the mutations let us do better. We can eat simple carbohydrates, which hunter-gatherers never did. But we may also be accumulating damaging stuff," said Harpending.

He wondered whether social changes might not cultivate unfortunate tendencies.

"Evolution is a double-edged sword," he said. "What evolution cares about is that I have more offspring. If you can do it by charming and manipulating, and I'm a hardworking farmer that's going to feed the kids ten years down the road, then you're going to win. Hit-and-run, irresponsible males are reproducing more. That isn't good for anyone except those males, but that's evolution."

The study's ultimate message, said Harpending: "Whatever changes are happening, they're happening faster."

[A more authentic account, from John Hawks' press release - AJP]

While more than 99 percent of the human genome is common across all humans, the HapMap project is cataloguing the individual differences in DNA called single nucleotide polymorphisms (SNPs). The project has mapped roughly 4 million of the estimated 10 million SNPs in the human genome. More importantly, it is identifying different regions of DNA, or haplotypes, that contain a large number of SNPs and are shared by multiple individuals.

In the hunt for recent genetic variation in this map, Hawks’ research focuses on a phenomenon called linkage disequilibrium (LD). These are places on the genome where particular combinations of genetic variants co-occur more often than can be accounted for by chance, usually because these changes are affording some kind of selection advantage.

The researchers identify recent genetic change by finding long blocks of DNA base pairs that are connected. Because human DNA is constantly being reshuffled through recombination, a long, uninterrupted segment of LD is usually evidence of positive selection. Linkage disequilibrium decays quickly as recombination occurs across many generations, so finding these uninterrupted segments is strong evidence of recent adaptation, Hawks says.
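[Editor's sketch of the pairwise statistic underlying such scans - not the Hawks study's actual pipeline: the r-squared measure of linkage disequilibrium between two biallelic SNPs, computed here on simulated 0/1-coded haplotypes. Selection scans look for long runs of markers with high values of statistics like this.]

import numpy as np

def r_squared(hap_a, hap_b):
    """r^2 linkage disequilibrium between two 0/1-coded SNPs."""
    p_a, p_b = hap_a.mean(), hap_b.mean()
    d = (hap_a * hap_b).mean() - p_a * p_b   # disequilibrium coefficient D
    return d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))

rng = np.random.default_rng(0)
snp1 = rng.integers(0, 2, 200)                           # 200 haplotypes
snp2 = np.where(rng.random(200) < 0.9, snp1, 1 - snp1)   # mostly co-inherited
print(f"r^2 = {r_squared(snp1, snp2):.2f}")   # near 1 = strong LD, as in a young sweep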

Employing this test, the researchers found evidence of recent selection on approximately 1,800 genes, or 7 percent of all human genes.

This finding runs counter to conventional wisdom in many ways, Hawks says. For example, there’s a strong record of skeletal changes that clearly show people became physically smaller, and their brains and teeth are also smaller. This is generally seen as a sign of relaxed selection – that size and strength are no longer key to survival.

But other pathways for evolution have opened, Hawks says, and genetic changes are now being driven by major changes in human culture. One good example is lactase, the gene that helps people digest milk. This gene normally declines and stops activity about the time one becomes a teenager, Hawks says. But northern Europeans developed a variation of the gene that allowed them to drink milk their whole lives — a relatively new adaptation that is directly tied to the advance of domestic farming and use of milk as an agricultural product.

The biggest new pathway for selection relates to disease resistance, Hawks says. As people started living in much larger groups and settling in one place roughly 10,000 years ago, epidemic diseases such as malaria, smallpox and cholera began to dramatically shift mortality patterns in people. Malaria is one of the clearest examples, Hawks says, given that there are now more than two dozen identified genetic adaptations that relate to malaria resistance, including an entirely new blood type known as the Duffy blood type.

Another recently discovered gene, CCR5, originated about 4,000 years ago and now exists in about 10 percent of the European population. It was discovered recently because it makes people resistant to HIV/AIDS. But its original value might have come from obstructing the pathway for smallpox.

[While the scientific article apparently has not yet been posted on PNAS, here it is in full. As we discover more and more variance in our "personal genomes", the reader might need to be warned that the press occasionally goes wild with claims that never appear in the original scientific study. The research (as shown in "Methods") was based on SNP analysis using the HapMap - thus the variants (just as SNPs themselves) are much more common in the non-coding (formerly "junk") DNA. Thus, individual and group variance is much more likely to make our personalities (and, less likely, our metabolism) diverse. - pellionisz_at_junkdna.com, December 10, 2007]

Craig Venter is the future

The most groundbreaking science is being done outside academia and government. And the egomaniacal geneticist is leading the way.
By Jonathon Keats

Dec. 5, 2007 | As an employee of the National Institutes of Health in the '80s, J. Craig Venter once found himself trailed by two men in suits. After shadowing him for a day in their brown Ford Fairlane, they appeared unannounced at his lab, where they showed him ID cards from the Department of Defense. The men asked him about his work on receptor proteins, which make cells sensitive to chemicals such as adrenaline. Might those proteins also be used to detect nerve poisons? While Venter had previously organized war protests, he'd also served as a medic in Vietnam, and his current research interests coincided with the military's. The questions they were asking were scientifically pertinent. So he accepted a Defense research grant of $250,000.

The NIH was not pleased. Administrators looked upon the money with institutional jealousy. Begrudgingly they set up a special account for his nerve poison research -- and bluntly informed Venter that he was "perhaps too entrepreneurial."

That, of course, was an understatement. Within a decade, Venter was in direct competition with the NIH, backed by $300 million in corporate funding against the agency's multibillion-dollar budget, engaged in arguably the most high-stakes clash in the history of science. The Human Genome Project, which made Venter one of the most admired and reviled figures in the world, has provided a genetic template for studying our species. At the same time, Venter's success dramatizes a paradigm shift in the culture of science, demonstrating the power of noninstitutional research. In the 20th century, the tenured professional supplanted the independent gentleman scientist: James Watson succeeded Charles Darwin. In the 21st century, the tenured professional is becoming outmoded, replaced by the intellectual entrepreneur: The mantle is passing from Watson to Venter.

Like Darwin, Venter was not quick to show his potential. In high school, the only A's he received were in P.E., wood shop and swimming, and, while he was a good enough athlete to get offered a scholarship by Arizona State, he opted instead for bodysurfing on the Southern California beaches and a night job pricing toys at Sears. That came to an end with the Vietnam War. He joined the Navy, which offered him his choice of positions after he scored an unexpected 143 on an IQ test. The medical corps looked best, because it required the briefest tour. Thus began Venter's unlikely career in medicine and science, propelled by the intoxication of accomplishment, leading from the tents of Da Nang to the lecture halls of U.C. San Diego to the laboratories of SUNY Buffalo, where he taught and conducted research for several years before taking a position at the NIH.

The rapidity of Venter's ascent, however, could never keep pace with his ambition, or his ego, and his tenure at the NIH was tempestuous from beginning to end. In his new memoir, "A Life Decoded," he constantly casts himself as a prophet amongst philistines: "I knew that I had made a breakthrough that could change genomic science," he writes of his most storied struggle, "and I was wasting my time, energy, and emotion on battling with a group that had no serious interest in letting an outsider analyze the human genome."

So he left. Simply put, Venter believed that he had a faster and cheaper way to sequence the genome than was feasible using the approved methodology, and that other laboratories were thwarting him because of territorialism and fear of losing funding. His assessment was essentially accurate, and their caution was largely justified. Venter's approach, called shotgun sequencing, was daring scientifically and risky politically, unproven at a scale even approaching the human genome, yet likely to make Congress question why the government was allocating so much for the NIH's safer methods. In other words, funding for him might mean less funding all around, and if he failed to deliver, no second chance for the genome.

Private money, on the contrary, was a different matter. By leaving the NIH, Venter escaped the entrenched interests of a stratified bureaucracy, entering into a realm that embraced risk as an investment strategy. Of course, for a scientist interested in answering fundamental questions rather than improving shareholder returns, that can present a different set of problems.

Venter's turbulent relationship with HealthCare Ventures, which funded his Institute for Genomic Research when he left the NIH, and with PerkinElmer, with which he formed Celera Genomics after his HealthCare Ventures partnership collapsed, amply illustrate the challenges of intellectual entrepreneurship in a venture capitalist economy. Businesses want to protect their investments, which translates into monopoly control or secrecy, policies anathema to open scientific exchange.

Moreover, expectations are often unrealistic. Venter scornfully writes of the "one gene, one protein, one billion dollars" mantra -- referring to the manufacture of drugs in a petri dish -- and points out that fewer than a dozen of the approximately 23,000 human genes have ultimately been lucrative for investors. Good research takes time, and success often rests on failure, notions that every Ph.D. understands but that confound the average MBA.

Venter got the human genome first -- and considerably accelerated the NIH effort in the process -- because he held nothing sacred other than the quest for knowledge. At times, sheer stubbornness drove him forward. He made his $300 million deal with PerkinElmer despite his failed HealthCare Ventures partnership, and ignoring advice from his lawyer to "get the hell away from them while you still can," for the simple reason that the human genome was "the biggest prize in biology," and was attainable only with big pharmaceutical money.

In other instances, he engineered shrewd compromises, such as publicly releasing the raw genome while constructing restricted-access databases designed to make the sequences more useful for industry. He was fired by Celera anyway -- officially because Celera was moving into drug discovery, unofficially because the company could no longer contain his ego -- but not before he had a chance to announce, at a televised ceremony in the Clinton White House on June 26, 2000, that "for the first time, our species can read the chemical letters of its genetic code." And afterward? That was the real start of Venter's career as an intellectual entrepreneur.

[The USA Academia and Government R&D is dangerously close to losing the global race for "Post-ENCODE Genomics". The winner may be Global Private Business - where US interests might even get the short end of it. Venter presently shoots at "trillion dollar microbes" ("MicrobeSoft") towards a Hydrogen-based global economy and, as he openly disclosed in his October 13th "Sand Hill Road" presentation (at the Menlo Park, California hub from where $1 out of every $3 of venture investment is allocated), Venter's main investors are not even in the USA! It used to be that US Academia could freely throw around precious intellectual property, since the US could get just about any scientists and technologists from anywhere through its "brain-drain" vacuum-cleaner. No longer. China, India (name your favorite country) send their Ph.D. students to the USA - especially to Genome Centers led by their naturalized fellow-countrymen. Once they learn "the trade", they head back to make their money with cheaper labor in their homeland. Venter's years in Academia and as a US government employee at NIH surely educated him (and this commentator, too...) about the timely structural problems of US R&D. - pellionisz_at_junkdna.com, December 6, 2007]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Gene Security Network Raises $4M Series A Funding

Dec 05, 2007 14:16 ET

PORTOLA VALLEY, CA--(Marketwire - December 5, 2007) - Gene Security Network, Inc. (GSN), a molecular diagnostics company using data informatics to enhance genetic testing, today announced the closing of approximately $4M Series A financing. Participants in the round included: Claremont Creek Ventures, who led the Series A; Sequoia Capital; Huntington Reproductive Center, which is the largest West Coast IVF center; and private investor Marissa Mayer, who is VP of Search Products and User Experience at Google and whose areas of responsibility encompass Google Health.

GSN will use the investment capital to commercialize its proprietary genetic screening technology. GSN has developed a patented technology, termed Parental Support™, that enables highly accurate testing for multiple genetic diseases from a single cell. The technology is being clinically tested and commercialized for pre-implantation genetic diagnosis (PGD) during in-vitro fertilization (IVF). The service will become available through the leading IVF Centers in the United States in 2008 to help physicians select embryos for implantation during IVF.

"Our technology leverages the data from the human genome project to help parents have healthy babies through IVF, and can substantially impact the emotional and practical response to disease susceptibilities in a family. I am very pleased that Claremont Creek and Sequoia Capital share our vision," said Matthew Rabinowitz, Ph.D., President and Chief Executive Officer of Gene Security Network. "We have a very experienced and respected group of investors, and the funding will allow us to expedite our clinical trials and commercialization path."

In the process of IVF, a single cell can be extracted from the embryo on day three after fertilization for genetic testing prior to implantation on day five. Since only a single copy of DNA is available from one cell, DNA measurements from conventional PGD techniques are highly error-prone, and the number of loci that can be measured is limited.

"To date, the testing technologies used for PGD have a high error rate on the order of 10%, and are only able to detect a small number of abnormalities simultaneously," said John Steuart, Claremont Creek managing director. "GSN's strategy of using data informatics to enable more accurate testing of multiple genetic factors simultaneously is a novel approach to a problem that laboratory techniques alone cannot sufficiently address."

"In the burgeoning field of genomics, the real need is for more actionable tests that will meaningfully impact outcomes," said Roelof Botha, partner at Sequoia Capital. "The selection of embryos during IVF is unique in its ability to directly impact the outcome for the child and parents. Hundreds of thousands of couples who go through IVF annually will be able to benefit from the emerging information that links genes with disease through Gene Security Network's leading bioinformatics."

About GSN's Parental Support™ Technology

Parental Support™ is the first PGD technology to leverage data informatics to deliver highly accurate single-cell testing for both chromosome abnormalities, such as Down Syndrome, and multiple genetic diseases, such as muscular dystrophy and cystic fibrosis. In addition, GSN's platform will support simultaneous, reliable testing for both chromosome abnormalities and multiple genetic diseases from a single cell, an ability not possible with existing PGD technologies.
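
[The release does not disclose how Parental Support™ works internally. For the technically curious, though, the general informatics idea - using the parents' genotypes as a Mendelian prior to clean up an error-prone single-cell measurement - can be caricatured in a few lines of Python. This is a hypothetical sketch for illustration only; the per-locus error model, the function names, and the 10% error figure (borrowed from the quote above) are assumptions, not GSN's algorithm.]

from itertools import product

def mendelian_prior(mother, father):
    # Enumerate embryo genotypes implied by Mendelian inheritance:
    # each parent transmits one of its two alleles with probability 1/2.
    counts = {}
    for m, f in product(mother, father):
        g = "".join(sorted(m + f))
        counts[g] = counts.get(g, 0) + 0.25
    return counts

def posterior(observed, mother, father, error_rate=0.10):
    # Combine the noisy single-cell call with the Mendelian prior.
    # Assumed likelihood: the call is correct with prob. 1 - error_rate;
    # otherwise it is some specific wrong genotype.
    prior = mendelian_prior(mother, father)
    n = len(prior)
    post = {}
    for g, p in prior.items():
        if g == "".join(sorted(observed)):
            like = 1.0 - error_rate
        else:
            like = error_rate / max(n - 1, 1)
        post[g] = p * like
    z = sum(post.values())
    return {g: p / z for g, p in post.items()}

# A single-cell call of "aa" is Mendelian-impossible for AA x Aa parents,
# so the posterior redistributes its weight to the feasible genotypes:
print(posterior("aa", mother="AA", father="Aa"))  # {'AA': 0.5, 'Aa': 0.5}

[Even this toy model shows the leverage: a raw measurement that is wrong 10% of the time can be overruled outright whenever it conflicts with what the parental genomes allow.]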

About Claremont Creek Ventures

Claremont Creek Ventures is a venture capital firm that specializes in early stage information technology start-ups. One of the firm's chief areas of interest is the interface between IT and healthcare. Claremont Creek's investment professionals share a deep commitment to helping entrepreneurs build successful companies from the ground up, drawing from decades of entrepreneurial, operational, and investment experience in the mobility, healthcare and security markets. Founded by Managing Directors Nat Goldhaber, Randy Hawks, and John Steuart, the firm is based in Oakland, California. For more information, please visit http://www.claremontvc.com.

About Sequoia Capital

Sequoia Capital provides startup venture capital for very smart people who want to turn ideas into companies. As the "Entrepreneurs Behind the Entrepreneurs," Sequoia Capital's Partners have worked with innovators such as Steve Jobs of Apple Computer, Larry Ellison of Oracle, Bob Swanson of Linear Technology, Sandy Lerner and Len Bosack of Cisco Systems, Dan Warmenhoven of Network Appliance, Jerry Yang and David Filo of Yahoo!, Jen-Hsun Huang of nVIDIA, Michael Marks of Flextronics, Larry Page and Sergey Brin of Google, Chad Hurley and Steve Chen of YouTube, Dominic Orr and Keerti Melkote of Aruba Wireless Networks and Jit Saxena of Netezza. To learn more about Sequoia Capital, visit www.sequoiacap.com.

About Gene Security Network

Gene Security Network is a molecular diagnostics company that has developed proprietary bioinformatics technologies for complex testing of small quantities of genetic material. GSN will operate a laboratory for preimplantation genetic diagnosis to guide doctors in screening embryos for disease susceptibility during in-vitro fertilization. The company is based out of Sunnyvale, California. For more information, please visit www.genesecurity.net.

[One would not want to drive a car without a map or, better yet, a GPS; it is too easy to get lost without them. Yet most people conceive a baby without the slightest guidance, even when serious if not lethal hereditary diseases can pop up if the genotypes of the biological parents are incompatible. Isn't that bizarre? Apparently both Google and Sequoia personnel think so - and have thus sparked one of the most beneficial "information technology solutions" to this formidable challenge. Also, since such decisions are by far the biggest one can make, the service is guaranteed to be phenomenally lucrative. The challenge stems from the fact that this business is probably the most disruptive ever - not only in its largely uncharted technology but also in its impact on humankind - pellionisz_at_junkdna.com, December 6, 2007]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Mathematicians to decipher secrets of immune system [The Indian community in Thailand...]

November 14th, 2007 - 1:57 am ICT

The ‘Immunology Imaging and Modelling’ project is being funded by the Biotechnology and Biological Sciences Research Council (BBSRC). The project is expected to open the door to bringing together scientists working in different fields, who will then help bring about medical advances for patients.

Scientists have yet to fully understand how the immune system works. Their understanding of the body’s immune responses can be enhanced by packaging diverse information about the immune system in a quantitative format, so that the entire scientific community may access it.

“A multi- and cross-disciplinary, cohesive and active approach is urgently required. The ability to track parasites and cells in real time using novel imaging techniques is allowing exciting new insights and will help us measure the interactions between the different parts of the immune system. This will provide a theoretical and computational model of the immune system, giving a complete picture that researchers from across all disciplines can refer to and draw upon,” says Dr. Carmen Molina-Paris, network co-ordinator and researcher at the University of Leeds.

“Mathematical immunology is maturing into a discipline where modelling helps everyone to interpret data and resolve controversies. Most importantly, it suggests novel experiments allowing for better and more quantitative interpretations,” Dr. Molina-Paris added.

Steve Visscher, interim Chief Executive of BBSRC said: “The new insight that this model will provide will naturally benefit the patient with the advances in healthcare it will lead to. BBSRC is committed to developing an active and cohesive cross-disciplinary community at the mathematics biology interface to enable a more quantitative and predictive biology.”

The project has been reported in the quarterly research highlights magazine of the Biotechnology and Biological Sciences Research Council (BBSRC). (ANI)

[BBSRC is in the United Kingdom. So why the interest in Thailand (mostly among bioinformaticians of Indian origin)? Look up the meeting in Chiangmai, Thailand, just one year ago... - pellionisz_at_junkdna.com, December 6, 2007]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Danish Hotspot in Personalized Medicine [in San Francisco...]

COPENHAGEN, November 12 /PRNewswire/ -- The Danish biotech cluster known as Medicon Valley has, within a short period, grown into a global hotspot in the field of personalized medicine, and finds itself increasingly in favor among big pharma and big biotech. Experience Danish competencies at the 3rd Annual Personalized Medicine Meeting in San Francisco from 12th - 13th November.

With over 10 research institutions and 15-20 private companies engaged in personalized medicine, the Danish biotech cluster known as Medicon Valley has in record time become a strong player in what is currently one of the hottest research fields within the pharmaceutical industry.

Several of these Danish players operate on an individual basis, although many are also involved in collaborative research efforts, including the Danish Center for Translational Breast Cancer Research, the siRNA Delivery Center, the Danish microRNA consortium and special research departments at the University of Copenhagen and the Technical University of Denmark.

This is one reason why Denmark is participating in the 3rd Annual Personalized Medicine Meeting in San Francisco, where Kaare Geil, Director of Knowledge and Communications at the Danish Medicines Agency, joins the second day's panel discussion to give a European/Danish perspective on the topic: Who Will Pay? The Economics of Personalized Medicine. Kaare Geil says: "Denmark has a national cancer plan, which is very clear on personalized medicine. The ministry of health, as well as the ministry of science, has specific programs to support and enable the implementation of personalized medicines in our systems."

Søren M. Echwald, M.Sc., Ph.D., Vice President responsible for Business Development at Exiqon, will be hosting a table discussion at the conference. "Exiqon is very committed to our diagnostic vision, and is active with several diagnostic projects," says Søren M. Echwald. "With Exiqon's key technologies, the growing role of microRNA in the classification of cancers, and the strong clinical network in Scandinavia, we feel ideally positioned to contribute to the improvement of cancer treatment selection."

Business Development Manager Vibeke Dalhoff from Copenhagen Capacity, the official investment promotion agency of the Capital Region of Denmark explains: "Denmark is absolutely world-class within biomedicine and we are pioneers within personalized medicine, with strong competencies in the therapeutic and cancer fields, among others."

Gitte Pedersen, special advisor with the sister organization Invest in Denmark, says: "Denmark is in a unique position. We excel at developing medicine, we have core competencies within systems biology, which is the toolbox for personalized medicine, and we have a health service that can see the advantages of implementing individualized medicine and engage in an open dialog between companies and the authorities."

Given this cocktail of companies, research environment and interest from the public sector, international pharmaceutical companies working on personalized medicine have a strong focus on Denmark and Medicon Valley.

"There is growing interest in the Danish competencies within individualized medicine", says Vibeke Dalhoff

[It is very difficult to find a Danish scientist in Silicon Valley, California. The best place on Earth for the Danish is Denmark. On top of that, the value proposition to US Venture Capitalists is rather unusual. Most Americans believe that "personal genomics" (as part of "personalized medicine") will be financed by individuals, to better express their personal preferences (eminently true, for instance, of the "Gene Security Network" company in Sunnyvale, backed by Sequoia, one of the best "leading VCs"). Denmark proposes that an entire Country may pay for personalized medicine! Why? Because this business model was verified e.g. by Genentech (the "grand daddy" of personalized medicine): it makes no economic sense to use very expensive drugs on people who are not affected by the particular drug at all (because of the personal "profile" of their genomic composition). "Global PostGenetics" will show many such "off the beaten path" business models and solutions - pellionisz_at_junkdna.com, December 6, 2007]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Swiss Government Launches $354M Systems Biology Initiative [Switzerland to the lead...]

A consortium of Swiss universities will use government and university funds of as much as CHF 400 million ($354 million) over the next four years to create a large-scale systems biology-focused consortium that will be Switzerland’s “largest research initiative ever,” the group said.

Groups contributing to the research efforts include the Universities of Basel, Berne, Lausanne, Fribourg, Geneva, and Zurich, the Paul Scherrer Institute, Friedrich Miescher Institute, and the Swiss Institute for Bioinformatics, as well as the Federal Institutes of Technology in Zurich and Lausanne.

The consortium said that the Swiss parliament pledged CHF 200 million for the project, called SystemsX.ch, with CHF 100 million going into systems biology research projects. However, government funds will only be distributed to partner institutions that commit an equal amount to the research project in question.

The government also will offer another CHF 100 million to continue development of the Zurich Department for Biosystems Science and Engineering, located in Basel.

[Altogether close to half a billion dollars is not peanuts, even for the Swiss. Switzerland is famous for watchmaking and, of course, for Pharma. Half a billion dollars, therefore, is an excellent investment, considering that Merck bought the small San Francisco RNAi company Sirna Therapeutics for $1.1 billion. - pellionisz_at_junkdna.com, December 6, 2007]

^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Chinese DNA. How China Uses Genome Projects to construct Chineseness [China is the next superpower...]

(Conference, Debate, Information Meeting, Assembly)

When: 04-12-2007

Time: 19:30 - 21:00

Where: Leiden

Description: Lecture by Dr Wen-Ching Sung, Assistant Professor, Department of Anthropology, University of Toronto, Canada

Are people around the world genetically the same or different? In the early 1990s, two international human genome research projects with opposite assumptions were set up: the 'Human Genome Project' (HGP), presuming that all populations share a common thread, and the 'Human Genome Diversity Project' (HGDP), focusing on the variations of human genomes. While the HGDP has come to a standstill because of strong criticism that it amounted to scientific racism, many human genome projects aiming to collect databanks of regional or national genomes have bloomed in the USA, Iceland, Quebec in Canada, the United Kingdom, and China.

China is unique among these countries seeking a "localized genome": it is the only one participating in both the HGP and the HGDP. This talk will analyze how China uses this Janus-faced nature of genomic research to construct Chineseness. This lecture is the fourth in the series 'Asian DNA at the Forefront', organised within the context of the 'Socio-genetic Marginalisation Programme' (SMAP) at IIAS. For information on this programme, see www.iias.nl/smap.

[No comment - pellionisz_at_junkdna.com, December 3, 2007]
^ back to table of contents ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A Changing Portrait Of DNA [A Changing Story of Global PostGenetics by Newsweek for US Consumption...]


["Going Beyond Genetics" is PostGenetics; Compare this article in the USA-Edition with Oct.15 in all International Editions, except USA - AJP]

Every day, it seems, scientists learn something new about how our genes work. The latest insights into the dazzling and complex machinery of life itself.

Four years ago, a Duke University biologist named Randy Jirtle began an elegant little experiment that would ultimately lead him to confront one of life's biggest mysteries. He started with two groups of mice that gave birth to sets of identical babies carrying the same genes. The babies were raised the same way from birth. They should have looked alike, but instead they barely looked related. In the first group, the babies were overweight, prone to diabetes and cancer, and covered in fur the color of rancid butter. The mice in the second group were beautiful: lean, healthy, brown. Same nature, same nurture, radically different outcomes. What was going on in there?

The difference, it turned out, wasn't due to the mice's genetic code, nor was it due to the environment. It lay instead in a mechanism that was mediating between the two. A gene in the sickly yellow babies was making a disease-causing protein called Agouti, which also affects coat color. The brown babies had the same gene, but it wasn't making much of anything. It had mostly stopped working. The brown babies' mothers had eaten a special diet during pregnancy: one rich in folic acid, which floods the body with tiny four-atom configurations called methyl groups. These methyl groups had infiltrated the growing brown mouse embryos and latched onto the flawed gene, shutting it down. This was the solution to the mystery: Jirtle had vividly illustrated why, at the biochemical level, the genetic sequence alone doesn't always equal destiny. Four humble atoms had been enough to override a serious defect in the brown babies' genomes. And what was true of the mice turned out to be true of men: there is much more to our nature than the plans laid in the genetic code.

Biologists have known about methyl groups for decades, and since the 1990s they have discovered several other types of chemical switches that can turn genes on and off. But only recently have they begun to understand that these switches are a crucial link between the DNA and the outside world. Their findings are now challenging some of science's most basic assumptions about the way life works. Researchers once saw the order of the base pairs in DNA as a sort of unchanging blueprint, but that was far too simplistic an interpretation. Almost immediately after conception, while the embryo is still just a few cells, it begins to pick up on subtle cues in its environment. It then canvasses its own genome, switching genes in different cells on or off according to the signals it receives. At this moment, "nature" becomes malleable, and genetically identical cells set off on different journeys. Throw the switches one way and the cells grow into a heart. Throw them another way and the cells metamorphose into a liver. Wait long enough and you'll generate a full-grown person with a bewildering array of cells, tissues and organs. The switches, scientists now know, are responsible for this process. They direct almost all the body's fundamental functions. As much as the genes themselves, they are the biological builders that make us who we are.

That's not always a good thing. Malfunctioning genetic switches play a role in the vast majority of noninfectious diseases, including cancer, obesity and neurological disorders. Some of the switches, once set, seem to get stuck for life. But others may be reversible. Drug companies have already developed chemotherapies that turn genes on and off in cancer cells. They hope to someday build on the same principles to design drugs for almost every illness with a genetic component. "We feel like the Egyptologists in search of Tutankhamun's tomb," says Dr. Andrew Allen, chief medical officer of Pharmion, a firm with several such drugs in the works. "We've reached an antechamber. We have the sense that there are wonderful things just around the corner." Those things may be the answers to some of biology's biggest puzzles—mysteries of life that science has yet to solve.

To crack these secrets, scientists will first have to adopt new ways of thinking about genes. As they have learned more about the switches, they have had to revise theories that go all the way back to the pre-DNA era. Genetics was born in 1866, when a monk named Gregor Mendel published a seminal paper describing the unit that passes down heritable traits—what would later be named the "gene." He argued that children inherit two copies of each gene—one from the mother and one from the father. Although one may have more influence than the other, both remain active throughout life. Mendel's laws eventually became widely accepted. Still, no one knew what genes were actually made of until 1953, when James Watson and Francis Crick revealed their simple but ingenious double-helix model of DNA.

In Watson and Crick's model, each gene is made of a long, continuous stretch of nucleic acids arranged in a specific order. The gene's function is to serve as a template for RNA, which in turn pieces together proteins, the body's building blocks. This model is still biology's guiding principle, the "central dogma": DNA makes RNA makes proteins. It is correct, but as scientists now know, it is not comprehensive. [Crick's "Central Dogma" is both a misnomer and incorrect - AJP] DNA must be doing something else as well—because, as it turns out, only 1 percent or so of the genome is actually in the protein-making business.
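
[For readers who want to see the "DNA makes RNA makes proteins" flow concretely, here is a minimal Python sketch. The twelve-base sequence and the four-entry codon table are toy examples chosen for brevity; the real genetic code has 64 codons.]

CODON_TABLE = {  # a tiny excerpt of the standard genetic code
    "AUG": "M", "UUU": "F", "GGC": "G", "UAA": "*",  # '*' = stop codon
}

def transcribe(dna: str) -> str:
    # Transcription (coding-strand convention): thymine (T) becomes uracil (U).
    return dna.replace("T", "U")

def translate(rna: str) -> str:
    # Translation: read the RNA three bases (one codon) at a time
    # and append the encoded amino acid, stopping at a stop codon.
    protein = []
    for i in range(0, len(rna) - 2, 3):
        aa = CODON_TABLE.get(rna[i:i + 3], "?")
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

dna = "ATGTTTGGCTAA"       # a toy protein-coding stretch of DNA
rna = transcribe(dna)      # -> "AUGUUUGGCUAA"
print(translate(rna))      # -> "MFG": DNA made RNA made protein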

Until recently, some scientists assumed that the rest of the genome was a hodgepodge of evolutionary leftovers that did very little of consequence. Part of it they called "junk DNA," and the rest of it they didn't even name. "I think some people were hoping that 99 percent of the genome could just be ignored," says biologist Eric Lander, founding director of the Broad Institute, a collaboration of Harvard University and the Massachusetts Institute of Technology. Over the last decade, though, researchers have realized that this forgotten part of the genome is, in fact, profoundly important. It contains the machinery that flips the switches, manipulating much of the rest of the genome.

Most of the machinery follows Mendel's laws. But not all of it does. Some of it violates the notion that both copies of a gene operate throughout life. During the tango of conception, the sperm and egg both try to lead: they argue over a small set of genes in which one copy, from the mother or the father, will be permanently switched off, leaving the other copy to work solo. This week, the journal Genome Research will publish a study, led by Jirtle, suggesting that there are 156 of these solo genes in the body. Many are responsible for regulating other genes.

Scientists have been studying gene regulation for decades, but in the past few years, since the Human Genome Project was completed, they have drastically accelerated their pace. There is still a great deal to be learned, but a new discovery now appears in a major journal almost every week. "Every time you think you've solved the way things get regulated, you realize there's yet another layer of complexity," says biologist Rick Young of MIT and the Whitehead Institute for Biomedical Research. "That can be frustrating, but it's also exciting. It's so beautifully complicated."

Researchers have explored and exploited several types of genetic switches in the last few years. "Small interfering RNAs" and "microRNAs"—tiny free-floating nucleic-acid strings that can fool genes into shutting themselves down—are some of the most intriguing. Scientists have figured out how to mimic them, using artificially created versions to turn genes off. The technique, called RNA interference (RNAi), won the Nobel Prize last year, and it has now moved from academia to industry. Several firms have developed locally injectable RNAi therapies, and three weeks ago Quark Pharmaceuticals began to test in humans the world's first systemically injected RNAi. It turns off a gene called p53 that can cause unnecessary cell death in the kidneys; once the drug's work is done it exits the body, and p53 turns back on.
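
[A caricature, in Python, of the RNA-interference principle just described: an siRNA can silence a transcript when it is the reverse complement of - i.e., can base-pair with - some window of the target mRNA. The sequences below are invented for illustration; real siRNAs are roughly 21-23 nucleotides long, and the cell's RISC machinery does the actual silencing.]

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    # The strand that would base-pair with the given RNA sequence.
    return "".join(COMPLEMENT[b] for b in reversed(rna))

def is_silenced(mrna: str, sirna: str) -> bool:
    # True if the siRNA base-pairs perfectly with some window of the mRNA.
    return reverse_complement(sirna) in mrna

mrna = "AUGGCUUACGGAUCCGUAA"             # a made-up target transcript
sirna = reverse_complement("UACGGAUCC")  # designed against one window of it
print(is_silenced(mrna, sirna))          # -> True: the gene is switched off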

Methyl groups, the four-atom configurations that tamped down the Agouti gene in Jirtle's mice, are another influential category of switches. They may interfere with the DNA directly, like monkey wrenches in its machinery, or they may instead interact with histones, proteins that serve as yet another type of switch. Young, the MIT biologist, made a surprising discovery about histones in July: at least a third of our genes have histone switches that hover somewhere between on and off, allowing the genes to start manufacturing their signature proteins but not letting them finish. Young notes that the human embryo must initiate many complex developmental processes in a short time period. Maybe, he says, the body keeps some of the genes involved in development "poised for action" so it can kick-start them quickly, when there's "little time to waste."

That speed, however, may come at a cost. Some of the genes that are left half-on are crucial in early development. When they're fully turned on—and that could happen accidentally if they're already halfway there—they can wipe out the cell's entire machinery, turning it into a blank slate that looks dangerously like a cancer stem cell. Young is currently exploring the hypothesis that our half-on, half-off genes are directly linked to cancer—they're necessary for development, but they also may predispose us to tumors later on.

This idea—that cancer is a necessary problem, an unavoidable consequence of our genes' need to switch on and off—is troubling, but it does make intuitive sense. The more we learn about the genome, the more complicated it turns out to be, and the more complicated a system is, the more potential there is for error. Cancer and other common diseases of regulation may thus be intrinsically built into our bodies—the price we pay for being such intricately built beings. "We cannot look at common diseases such as cancer as accidents of evolution," says Kari Stefansson, president of the Icelandic genetic firm deCODE. "We may have been designed by evolution in a very complex manner for the sole purpose of making sure we eventually die."

Life, death and human nature are complex questions, and we've always known the answers would be equally complex. For the first 50 years of modern DNA-driven genetics, it wasn't clear if we'd ever solve the mysteries. But with our emerging understanding of the machinery that directs development and disease, scientists at least have some new places to look for clues. Let's hope the switches that turn on the genes also turn on the lights.

[Newsweek published on October 15 in its "International Edition" (Asia, Europe, Latin America - but not in the USA) a major cover-article by Princeton professor Lee Silver on the "Revolution" taking place in Genomics, with an impact on over a dozen named major particular hereditary diseases. This new rendering of Newsweek, for USA consumption, subdues the "revolution" to "watch your diet, lo