Booth's multiplication algorithm is a multiplication algorithm that multiplies two signed binary numbers in two's complement notation. The algorithm was invented by Andrew Donald Booth in 1950 while he was doing research on crystallography at Birkbeck College in Bloomsbury, London. Booth used desk calculators that were faster at shifting than at adding, and created the algorithm to increase their speed. Booth's algorithm is of interest in the study of computer architecture.
Booth's algorithm examines adjacent pairs of bits of the N-bit multiplier Y in signed two's complement representation, including an implicit bit below the least significant bit, y−1 = 0. For each bit yi, for i running from 0 to N − 1, the bits yi and yi−1 are considered. Where these two bits are equal, the product accumulator P is left unchanged. Where yi = 0 and yi−1 = 1, the multiplicand times 2^i is added to P; and where yi = 1 and yi−1 = 0, the multiplicand times 2^i is subtracted from P. The final value of P is the signed product.
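The per-bit rule above can be sketched directly in code. This is a minimal illustration, not a hardware-faithful implementation; the function name and the convention that the multiplier is passed as an n-bit unsigned pattern are assumptions for the example.

```python
def booth_multiply(m, y, n):
    """Multiply the multiplicand m by the n-bit two's-complement
    multiplier pattern y using Booth's per-bit rule."""
    p = 0
    prev = 0                      # implicit bit y_{-1} = 0
    for i in range(n):
        yi = (y >> i) & 1
        if yi == 0 and prev == 1:
            p += m << i           # add multiplicand * 2^i
        elif yi == 1 and prev == 0:
            p -= m << i           # subtract multiplicand * 2^i
        # when yi == prev, P is left unchanged
        prev = yi
    return p
```

For example, with n = 4 and y = 0b1100 (the 4-bit two's complement pattern for −4), `booth_multiply(3, 0b1100, 4)` performs a single subtraction of 3·2^2 and yields −12.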
The representations of the multiplicand and product are not specified; typically, these are also in two's complement representation, like the multiplier, but any number system that supports addition and subtraction will also work. As stated here, the order of the steps is not determined. Typically, it proceeds from LSB to MSB, starting at i = 0; the multiplication by 2^i is then typically replaced by incremental shifting of the P accumulator to the right between steps; low bits can be shifted out, and subsequent additions and subtractions can then be done only on the highest N bits of P. There are many variations and optimizations on these details.
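The shift-based variant just described can be sketched as follows. This is one common register-level formulation (accumulator A and multiplier Q sharing a combined register that is arithmetically shifted right each step); the function and variable names are illustrative, not from the original text.

```python
def booth_shift_multiply(m, q, n):
    """Shift-register formulation of Booth's algorithm: only the
    top n bits (A) are ever added to or subtracted from, and the
    combined [A, Q, q_{-1}] register is arithmetically shifted
    right after each step."""
    mask = (1 << n) - 1
    m &= mask                     # multiplicand as n-bit pattern
    q &= mask                     # multiplier as n-bit pattern
    a = 0                         # high half of the accumulator P
    q_minus1 = 0                  # implicit bit below the LSB
    for _ in range(n):
        q0 = q & 1
        if q0 == 1 and q_minus1 == 0:
            a = (a - m) & mask    # subtract multiplicand from high bits
        elif q0 == 0 and q_minus1 == 1:
            a = (a + m) & mask    # add multiplicand to high bits
        # arithmetic shift right across [A, Q, q_{-1}]
        q_minus1 = q & 1
        q = (q >> 1) | ((a & 1) << (n - 1))
        sign = (a >> (n - 1)) & 1
        a = (a >> 1) | (sign << (n - 1))   # replicate A's sign bit
    result = (a << n) | q
    if result >> (2 * n - 1):     # interpret as 2n-bit two's complement
        result -= 1 << (2 * n)
    return result
```

Note that the additions and subtractions touch only the n-bit A half, while the shift moves the partial product down one position per step, replacing the explicit multiplication by 2^i.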
The algorithm is often described as converting strings of 1s in the multiplier to a high-order +1 and a low-order −1 at the ends of the string. When a string runs through the MSB, there is no high-order +1, and the net effect is interpretation as a negative of the appropriate value.
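This recoding view can be made concrete by computing the signed digit (yi−1 − yi) at each position; the helper below is illustrative, assuming the multiplier is given as an n-bit two's complement pattern.

```python
def booth_recode(y, n):
    """Return the Booth digit (-1, 0, or +1) for each bit position,
    LSB first: a run of 1s becomes -1 at its low end and +1 just
    above its high end."""
    digits = []
    prev = 0                      # implicit bit y_{-1} = 0
    for i in range(n):
        yi = (y >> i) & 1
        digits.append(prev - yi)  # y_{i-1} - y_i
        prev = yi
    return digits
```

For y = 0b0111 (the string of 1s representing 7), the digits are [−1, 0, 0, +1], i.e. 7 = 8 − 1. For y = 0b1100 (−4), the run of 1s reaches the MSB, so no +1 appears and the digits [0, 0, −1, 0] sum to the negative value −4.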