Everyone who studies computers and most people who use computers more than casually know that modern computers use binary numbers internally. When we use the word bit, we're using a contraction for binary digit, something that can hold a zero or one and nothing else. We organize bits into groups of eight, called bytes or octets, and we organize the octets into words, often of 32 or 64 bits. Everyone knows that. But why? Why do computers use binary numbers? Why wouldn't the scientists and engineers who design modern computers design them to use the familiar decimal numbers that we learned in grade school? That way, we wouldn't have to learn a new system of numbers, and we could deal with quantities in the familiar powers of ten instead of powers of two.
It turns out that there are good reasons why computers use binary numbers, and those reasons are easy to understand; they reduce to two important facts. To understand why computers use binary numbers, we need a short trip through manufacturing and electrical engineering.
No two manufactured parts are exactly alike, but small differences do not impair the usefulness of the final product. For mechanical parts, we might say that a difference of ten thousandths of an inch, plus or minus, from the nominal or design value is good enough. Depending on the part, the amount of deviation, or tolerance, might be larger or smaller, but there is always the concept of good enough. A part that is within tolerance is good enough; one that is not is defective.
Like mechanical parts, electrical and electronic components are not all perfect when they're made. They have a manufacturing tolerance, often ±10%. So, a 100-ohm resistor might have an actual resistance anywhere from 90 ohms to 110 ohms and still be good enough because it's within that 10% tolerance. It is possible to make electronic parts with tighter tolerances of 5% or even 1%, but tighter tolerances make parts more expensive and, for many applications, aren't really necessary. Engineers take manufacturing tolerances into account when designing circuits.
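The tolerance rule is just a percentage band around the nominal value. Here is a minimal sketch in Python (a toy helper for illustration, not part of any standard library):

    def within_tolerance(nominal, actual, tolerance=0.10):
        """A part is good enough if it lies within the tolerance band."""
        return abs(actual - nominal) <= nominal * tolerance

    print(within_tolerance(100, 92))   # True: 92 ohms is inside 90-110
    print(within_tolerance(100, 88))   # False: outside the +/-10% band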
An additional complication is that electronic components change with age. A component that started out within ±10% of its nominal value may drift 15% or 20% away after several years of operation. Engineers take that into consideration, too, which is why electronic devices have a design lifetime.
Modern computers work with discrete values – digits – rather than using electrical values as analogs to physical quantities. That's why they're called digital computers. To design a decimal digital computer, we need ten electrical values to represent the digits zero to nine. Hypothetically, we might decide to use a signal of zero volts to represent the digit zero, one volt to represent the digit one, and so on up to nine volts to represent the digit nine.
That sounds OK, but manufacturing tolerances make it very difficult in practice. Taking tolerances into account, we'd design circuits so that a signal near seven volts – anything from about 6.5 to 7.5 volts – is interpreted as the digit seven. But then 7.7 volts must be interpreted as eight, and 7.7 volts is only 10% away from the design or nominal value of seven volts. A deviation of just 10% causes an undetectable error! Yet manufacturing tolerances might mean that a component variation of 10% must be tolerated by the design. We're trapped; our design cannot work in practice when manufactured in quantity. It is extraordinarily difficult to design and build electronic devices that reliably discriminate among ten discrete values.
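To see the failure concretely, here is a minimal sketch in Python of a hypothetical ten-level decoder (an illustration, not any real circuit) that interprets a voltage as the nearest digit; a 10% drift on a seven-volt signal is silently read as an eight:

    def decode_decimal(voltage):
        """Interpret a 0-9 volt signal as the nearest decimal digit.
        Each digit owns a band only half a volt wide on each side."""
        digit = round(voltage)
        if not 0 <= digit <= 9:
            raise ValueError("signal out of range")
        return digit

    nominal = 7.0
    drifted = nominal * 1.10           # a signal off by 10%
    print(decode_decimal(nominal))     # 7
    print(decode_decimal(drifted))     # 8 -- an undetectable error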
Although it is hard to discriminate among ten discrete states with electronics, it is easy to discriminate between two. One type of digital logic circuit uses a voltage of zero to represent the digit zero and five volts to represent the digit one. Essentially, we are discriminating between off and on. Anything less than about 2.5 volts is a zero; anything more is a one. Such a circuit has a tolerance of nearly 50%. It is relatively easy to build circuits that reliably discriminate between two values. The first of our two facts is this: Binary electronic circuits are reliable.
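The binary case, sketched the same way (again hypothetical, modeled on five-volt logic), shows why: with only one threshold at the 2.5-volt midpoint, even a large drift decodes correctly:

    def decode_binary(voltage):
        """Five-volt logic: below the 2.5 V midpoint is 0, above is 1."""
        return 0 if voltage < 2.5 else 1

    print(decode_binary(5.0 * 0.60))   # a one that sagged 40% low: still 1
    print(decode_binary(0.0 + 2.0))    # a zero pushed 2 volts high: still 0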
If you've read any computing history, you know that ENIAC, the Electronic Numerical Integrator and Computer, was the first large-scale electronic computing machine, built during World War II to compute artillery firing tables. You may also know that ENIAC was a decimal computer; it worked with the digits zero to nine.
The engineers of the 1940s knew the difficulty of representing ten discrete values and the reliability of binary circuits, and so they designed ENIAC using binary electronic circuits. Each decimal digit required ten binary devices arranged so that one was on and the other nine were off; the circuit that was on indicated which digit was represented. A ten-digit number therefore required more than 100 vacuum tubes – a hundred to represent the digits, plus more for control and for connecting the circuits together.
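To make the one-of-ten scheme concrete, here is a minimal sketch in Python (an illustration of the idea, not ENIAC's actual ring-counter circuitry):

    def encode_one_of_ten(digit):
        """Represent a decimal digit as ten binary devices,
        exactly one of which is on."""
        if not 0 <= digit <= 9:
            raise ValueError("digit must be 0-9")
        return [1 if position == digit else 0 for position in range(10)]

    def decode_one_of_ten(devices):
        """The device that is on indicates the digit."""
        assert sum(devices) == 1, "exactly one device must be on"
        return devices.index(1)

    print(encode_one_of_ten(7))   # [0, 0, 0, 0, 0, 0, 0, 1, 0, 0]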
John von Neumann did some consulting on the construction of ENIAC and contributed quite a lot to the design of a subsequent computer, EDVAC. During that process, von Neumann observed that the ten devices needed for one decimal digit, if used as a ten-bit binary number, could represent values from zero to 1,023 instead of only zero to nine. The use of binary numbers increased the expressive power of the binary circuits. That could be used to drive down the cost of a computer or to make a more powerful computer at the same cost. That is our second fact: The use of binary numbers maximizes the expressive power of binary circuits.
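The arithmetic is easy to check: ten devices used one-of-ten style distinguish only ten values, while the same ten devices read as bits distinguish 2^10 of them. A quick sketch:

    n_devices = 10

    one_of_ten_values = n_devices     # one value per device: 10
    binary_values = 2 ** n_devices    # 1,024 values, zero through 1,023

    print(binary_values)              # 1024
    print(format(1023, '010b'))       # 1111111111: all ten devices on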
It is important to note that von Neumann did not invent binary numbers. The binary system had been known to mathematicians for hundreds of years: Gottfried Leibniz wrote a paper on binary numbers in 1679, George Boole developed an algebra of two-valued logic in the 1850s, and Claude Shannon applied Boolean algebra to telephone switching circuits in the 1930s. Von Neumann's contribution was to recognize that the binary circuits of computers, required for reliability, were best used to represent binary numbers.