DNA Computing
Glossary
DNA
Deoxyribonucleic acid. Molecule that encodes the genetic information of cellu-
lar organisms.
Enzyme
Protein that catalyzes a biochemical reaction.
Nanotechnology
Branch of science and engineering dedicated to the construction of artifacts and
devices at the nanometre scale.
RNA
Ribonucleic acid. Molecule similar to DNA, which helps in the conversion of
genetic information to proteins.
Satisfiability (SAT)
Problem in complexity theory. An instance of the problem is defined by a
Boolean expression with a number of variables, and the problem is to identify
a set of variable assignments that makes the whole expression true.
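As a small worked illustration (not part of the standard definition; the formula and variable names below are invented for the example), a brute-force check of a SAT instance can be sketched in a few lines of Python:

from itertools import product

# Example expression in conjunctive normal form (CNF):
#   (x1 OR NOT x2) AND (x2 OR x3)
# Each clause is a list of (variable_index, is_positive) literals.
clauses = [[(0, True), (1, False)], [(1, True), (2, True)]]
num_vars = 3

def satisfied(assignment, clauses):
    # A CNF expression is true when every clause contains at least
    # one literal that evaluates to true under the assignment.
    return all(any(assignment[var] == positive for var, positive in clause)
               for clause in clauses)

# Exhaustively try all 2**num_vars assignments.
for assignment in product([False, True], repeat=num_vars):
    if satisfied(assignment, clauses):
        print("satisfying assignment:", assignment)

For n variables the search space contains 2^n assignments, which is why SAT is a natural target for the massively parallel search methods discussed later in this article.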
I Definition of the Subject and Its Importance
DNA computing (or, more generally, biomolecular computing) is a relatively
new field of study that is concerned with the use of biological molecules as
fundamental components of computing devices. It draws on concepts and ex-
pertise from fields as diverse as chemistry, computer science, molecular biology,
physics and mathematics. Although its theoretical history dates back to the
late 1950s, the notion of computing with molecules was only physically realised
in 1994, when Leonard Adleman demonstrated in the laboratory the solution of
a small instance of a well-known problem in combinatorics (the directed Hamiltonian Path Problem) using standard tools
of molecular biology. Since this initial experiment, interest in DNA computing
has increased dramatically, and it is now a well-established area of research. As
we expand our understanding of how biological and chemical systems process
information, opportunities arise for new applications of molecular devices in
bioinformatics, nanotechnology, engineering, the life sciences and medicine.
II Introduction
In the late 1950s, the physicist Richard Feynman first proposed the idea of using
living cells and molecular complexes to construct “sub-microscopic computers.”
In his famous talk “There’s Plenty of Room at the Bottom” [18], Feynman
discussed the problem of “manipulating and controlling things on a small scale”,
thus founding the field of nanotechnology. Although he concentrated mainly
on information storage and molecular manipulation, Feynman highlighted the
potential for biological systems to act as small-scale information processors:
The biological example of writing information on a small scale has
inspired me to think of something that should be possible. Biology
is not simply writing information; it is doing something about it.
A biological system can be exceedingly small. Many of the cells
are very tiny, but they are very active; they manufacture various
substances; they walk around; they wiggle; and they do all kinds
of marvelous things – all on a very small scale. Also, they store
information. Consider the possibility that we too can make a thing
very small which does what we want – that we can manufacture an
object that maneuvers at that level! [18].
II.1 Early Work
Since the presentation of Feynman’s vision there has been a steady
growth of interest in performing computations at a molecular level. In 1982,
Charles Bennett [8] proposed the concept of a “Brownian computer” based
around the principle of reactant molecules touching, reacting, and effecting
state transitions due to their random Brownian motion. Bennett developed
this idea by suggesting that a Brownian Turing Machine could be built from a
macromolecule such as RNA. “Hypothetical enzymes”, one for each transition
rule, catalyze reactions between the RNA and chemicals in its environment,
transforming the RNA into its logical successor.
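To make the idea of “one hypothetical enzyme per transition rule” concrete, the following sketch (purely illustrative; the machine, alphabet and rule table are invented for this example, and Python stands in for chemistry) treats each rule as an operation that rewrites a machine configuration into its logical successor:

# Each entry in RULES plays the role of one of Bennett's "hypothetical
# enzymes", rewriting a configuration (state, tape, head position) into
# its logical successor. The machine below simply flips bits until it
# reads a blank; the specific machine is invented for this example.
RULES = {
    ("flip", "0"): ("flip", "1", +1),   # (state, symbol) -> (new state, write, move)
    ("flip", "1"): ("flip", "0", +1),
    ("flip", "_"): ("halt", "_", 0),
}

def step(state, tape, head):
    """Apply the single applicable rule (the 'enzyme') to the configuration."""
    new_state, write, move = RULES[(state, tape[head])]
    tape = tape[:head] + write + tape[head + 1:]
    return new_state, tape, head + move

state, tape, head = "flip", "0110_", 0
while state != "halt":
    state, tape, head = step(state, tape, head)
print(tape)   # prints "1001_"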
In the same year, Conrad and Liberman developed this idea further in [15],
in which the authors describe parallels between physical and computational
processes (for example, biochemical reactions being employed to implement ba-
sic switching circuits). They introduce the concept of molecular level “word
processing” by describing it in terms of transcription and translation of DNA,
RNA processing, and genetic regulation. However, the paper lacks a detailed
description of the biological mechanisms highlighted and their relationship with
“traditional” computing. As the authors themselves acknowledge, “our aspira-
tion is not to provide definitive answers . . . but rather to show that a number of
seemingly disparate questions must be connected to each other in a fundamental
way.” [15]
In [14], Conrad expanded on this work, showing how the information pro-
cessing capabilities of organic molecules may, in theory, be used in place of dig-
ital switching components. Particular enzymes may alter the three-dimensional
structure (or conformation) of other substrate molecules. In doing so, the en-
zyme switches the state of the substrate from one to another. The notion of
conformational computing (q.v.) suggests the possibility of a potentially rich
and powerful computational architecture. Following on from the work of Con-
rad et al., Arkin and Ross show how various logic gates may be constructed
using the computational properties of enzymatic reaction mechanisms [5] (see
Dennis Bray’s article [10] for a review of this work). In [10], Bray also describes
work [23, 24] showing how chemical “neurons” may be constructed to form the
building blocks of logic gates.
II.2 Motivation
We have made huge advances in machine miniaturization since the days of room-
sized computers, and yet the underlying computational framework (the von
Neumann architecture) has remained constant. Today’s supercomputers still
employ the kind of sequential logic used by the mechanical “dinosaurs” of the
1940s [13].
There exist two main barriers to the continued development of “traditional”,
silicon-based computers using the von Neumann architecture. One is inherent
to the machine architecture, and the other is imposed by the nature of the un-
derlying computational substrate. A computational substrate may be defined
as “a physical substance acted upon by the implementation of a computational
architecture.” Before the invention of silicon integrated circuits, the underlying
substrates were bulky and unreliable. Of course, advances in miniaturization
have led to incredible increases in processor speed and reductions in memory access time.
However, there is a limit to how far this miniaturization can go. Eventually
“chip” fabrication will hit a wall imposed by the Heisenberg Uncertainty Prin-
ciple (HUP). When chips are so small that they are composed of components a
few atoms across, quantum effects cause interference. The HUP states that the
act of observing these components affects their behavior. As a consequence, it
becomes impossible to know the exact state of a component without fundamen-
tally changing its state.
The second limitation is known as the von Neumann bottleneck. This is
imposed by the need for the central processing unit (CPU) to transfer instruc-
tions and data to and from the main memory. The route between the CPU and
memory may be visualized as a two-way road connecting two towns. When the
number of cars moving between towns is relatively small, traffic moves quickly.
However, when the number of cars grows, the traffic slows down, and may even
grind to a complete standstill. If we think of the cars as units of information
passing between the CPU and memory, the analogy is complete. Most com-
putation consists of the CPU fetching from memory and then executing one
instruction after another (after also fetching any data required). Often, the
execution of an instruction requires the storage of a result in memory. Thus,
the speed at which data can be transferred between the CPU and memory is a
limiting factor on the speed of the whole computer.
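A purely illustrative sketch (the instruction set and memory contents below are invented for the example) makes the point explicit: every instruction, and any data it touches, must cross the single CPU-memory channel, so the speed of the loop is bounded by memory traffic rather than by the processor itself:

# Toy von Neumann machine: a single memory holds both the program and
# its data, and every step moves information across the one
# CPU <-> memory channel (the "road" in the analogy above).
memory = {
    0: ("LOAD", 100),    # fetch the data word at address 100 into the accumulator
    1: ("ADD", 101),     # fetch the data word at address 101 and add it
    2: ("STORE", 102),   # write the result back to address 102
    3: ("HALT", None),
    100: 2, 101: 3, 102: 0,
}

pc, acc, transfers = 0, 0, 0
while True:
    opcode, operand = memory[pc]       # instruction fetch: one memory transfer
    transfers += 1
    if opcode == "LOAD":
        acc = memory[operand]; transfers += 1
    elif opcode == "ADD":
        acc += memory[operand]; transfers += 1
    elif opcode == "STORE":
        memory[operand] = acc; transfers += 1
    elif opcode == "HALT":
        break
    pc += 1

print(acc, transfers)   # result 5, at the cost of 7 memory transfers for 4 instructions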
Some researchers are now looking beyond these boundaries and are investi-
gating entirely new computational architectures and substrates. These develop-
ments include quantum computing (q.v.), optical computing (q.v.), nanocom-
puters (q.v.) and bio-molecular computers. In 1994, interest in molecular
computing intensified with the first report of a successful non-trivial molecu-
lar computation. Leonard Adleman of the University of Southern California
effectively founded the field of DNA computing by describing his technique for
performing a massively-parallel random search using strands of DNA [1]. In
what follows we give an in-depth description of Adleman’s seminal experiment,
before describing how the field has evolved in the years that followed. First,
though, we must examine more closely the structure of the DNA molecule in
order to understand its suitability as a computational substrate.
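As a preview, the following sketch (an in-silico caricature only; the graph, pool size and encoding are invented for illustration and carry no wet-lab detail) mirrors the generate-and-filter logic of Adleman’s approach: assemble a large pool of random candidate paths “in parallel”, then successively filter out those that violate the Hamiltonian path conditions:

import random

# Toy directed graph, invented for this example
# (Adleman's original instance had seven vertices).
EDGES = {(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)}
N = 4                      # number of vertices
START, END = 0, 3

def random_path(length):
    """Analogue of strands self-assembling at random in the test tube."""
    return [random.randrange(N) for _ in range(length)]

# "Synthesis" step: create a large random pool of candidate paths.
pool = [random_path(N) for _ in range(100_000)]

# "Filtering" steps, mirroring the structure of Adleman's protocol:
pool = [p for p in pool if p[0] == START and p[-1] == END]       # correct endpoints
pool = [p for p in pool if all((a, b) in EDGES
                               for a, b in zip(p, p[1:]))]       # every hop is an edge
pool = [p for p in pool if len(set(p)) == N]                     # visits every vertex once

print(pool[0] if pool else "no Hamiltonian path found in this pool")

In Adleman’s laboratory experiment the candidate pool was produced by the self-assembly of DNA strands, and the filtering was carried out with standard molecular-biology operations.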
III The DNA Molecule
Ever since ancient Greek times, man has suspected that the features of one
generation are passed on to the next. It was not until Mendel’s work on garden
peas was recognized [39] that scientists accepted that both parents contribute
material that determines the characteristics of their offspring. In the early 20th
century, it was discovered that chromosomes make up this material. Chemical
analysis of chromosomes revealed that they are composed of both protein and
deoxyribonucleic acid, or DNA. The question was, which substance carries the
genetic information? For many years, scientists favored protein, because of its
greater complexity relative to that of DNA. Nobody believed that a molecule as
simple as DNA, composed of only four subunits (compared to 20 for protein),
could carry complex genetic information.
It was not until the early 1950s that most biologists accepted the evidence
showing that it is in fact DNA that carries the genetic code. However, the
physical structure of the molecule and the hereditary mechanism were still far
from clear.
In 1951, the biologist James Watson moved to Cambridge to work with a
physicist, Francis Crick. Using data collected by Rosalind Franklin and Maurice
Wilkins at King’s College, London, they began to decipher the structure of DNA.
They worked with models made out of wire and sheet metal in an attempt to
construct something that fitted the available data. Once satisfied with their
double helix model, they published the paper [43] (also see [42]) that would
eventually earn them (and Wilkins) the Nobel Prize for Physiology or Medicine
in 1962.