21-02-2009, 11:00 PM
To explain what makes quantum computers so different from their classical counterparts, we begin by taking a closer look at a basic chunk of information, namely one bit. A bit is the basic unit of information in a digital computer. From a physical point of view, a bit is a physical system which can be prepared in one of two different states representing two logical values --- no or yes, false or true, or simply 0 or 1. For example, in digital computers the voltage between the plates of a capacitor represents a bit of information: a charged capacitor denotes bit value 1 and an uncharged capacitor bit value 0. One bit of information can also be encoded using two different polarizations of light or two different electronic states of an atom. In any of the systems listed above, a bit stores a value of logical 1 or logical 0 using some method that depends on the system used.

The size of a register is determined by the maximum value m that it must be able to hold, and the number of bits k in the register is given by the equation
k = log2(n)

where n is the smallest power of 2 greater than or equal to m.
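As a rough illustration, the sizing rule above can be sketched in a few lines of Python (the function name `register_bits` is just an illustrative choice, not from the original post):

```python
import math

def register_bits(m: int) -> int:
    """Bits needed for a register holding values up to m,
    following the rule in the text: k = log2(n), where n is
    the smallest power of 2 greater than or equal to m."""
    if m < 1:
        raise ValueError("m must be at least 1")
    n = 1
    while n < m:
        n *= 2              # grow n to the smallest power of 2 >= m
    return int(math.log2(n))  # k = log2(n)

# For example, a maximum value of m = 100 gives n = 128,
# so the register needs k = 7 bits.
```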