The Famous 0’s and 1’s: Why Do Computers Understand Only Binary Code?

Yakshangi Joshi
5 min readMay 24, 2023


I remember that my first-year engineering textbook stated: “A computer is a dumb machine, which understands only 0 and 1.”

Over the last few decades, computers have evolved enormously. Yet despite major advances in computer technology, binary has remained the accepted number system for computer systems.

A computer system as a whole is a digital system: a collection of electronic circuits that can be programmed to carry out user-specific instructions. But a computer is a dumb machine with no brain of its own to work out what has to be done. It needs to be directed by a set of instructions, called a program, expressed in binary code.

How Does a Computer Translate What You Say?

A computer system is made up of both hardware and software components that work together to process input and produce output. Software is a set of instructions, called programs, that direct the hardware and perform computation. Hardware, on the other hand, is the physical component of the system and defines its physical capabilities.

A computer program is written in human-friendly programming languages such as C, C++, Java, Python, etc. These are quite similar to the English language and can be understood by learning their concepts and syntax. They are also referred to as high-level languages.

But a computer only understands binary, so irrespective of which programming language you choose, every high-level program has to be converted into binary code by a special program called a compiler.

Compilers are very large programs, with error checking and other abilities, which translate a high-level program (the source code) into a set of machine-language instructions that a digital computer’s CPU can understand.
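As a rough illustration of this translation step, Python lets us peek at the lower-level instructions it generates from our source code. (This shows Python bytecode, which runs on Python’s virtual machine rather than directly on the CPU, but the idea is the same: readable source is turned into a compact numeric instruction stream.)

```python
import dis

def add(a, b):
    return a + b

# The compiled function carries raw bytecode: just a sequence of bytes,
# each of which is ultimately a pattern of 0s and 1s in memory.
print(type(add.__code__.co_code))

# Disassemble it to see the instruction names behind those bytes.
dis.dis(add)
```

Running this prints something like `LOAD_FAST`/`BINARY_OP` instructions; the exact listing varies between Python versions.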

And so, in the end, whatever you say to a computer is converted into blocks of binary code made up of 0’s and 1’s.

Understanding Computer System Architecture

To understand why computers translate everything we say into binary, we first need to understand computer system architecture and the Central Processing Unit (CPU).

We can call the CPU the brain of the computer system: it controls the system components, CPU operations, and program execution. The processor chip consists of millions of integrated circuits (ICs) designed to perform a set of binary operations. All computer programs must be converted into binary code so that the processor can directly interpret and execute them.

Our brain contains around 100 billion cells called neurons, the tiny switches that let you think and remember things. Computers contain billions of miniature “brain cells” as well, called transistors. Each IC is made up of millions of these tiny components. A transistor is a semiconductor device used for amplifying or switching electrical signals and power.

In a computer chip, a transistor acts as a switch. It can be in one of two distinct states, ON and OFF, represented as 1 (one) and 0 (zero) respectively. With billions of transistors, a chip can store billions of zeros and ones, and almost as many ordinary numbers and letters (or characters, as we call them).
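We can see those ON/OFF patterns directly. A small Python sketch showing the 8-bit pattern behind each character of a word:

```python
# Each character is stored as a pattern of ON (1) and OFF (0) states.
# format(..., '08b') renders a character's numeric code as 8 binary digits.
for ch in "Hi":
    print(ch, format(ord(ch), "08b"))
```

This prints `H 01001000` and `i 01101001`: two bytes, sixteen switch positions.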

Hence, due to these architectural constraints, a CPU can only be programmed to interpret and execute program instructions in binary format. To make sense of complicated data, the computer has to encode it in binary.

Understanding Binary Code

Base 2: 0, 1

As discussed above, a computer doesn’t understand words and numbers the way humans do. To overcome this limitation, modern systems can directly convert English-like high-level languages into low-level, machine-understandable code. At the lowest level of your computer’s hardware, everything is represented as binary electrical signals so the computer can make sense of the data.

In mathematical terms, the binary number system consists of only two digits, 0 and 1, and is therefore called the base-2 number system. Just as the decimal system uses the digits 0 to 9, with each place worth 10 times the one to its right (…, 100, 10, 1), in binary each place is worth twice the one to its right (…, 8, 4, 2, 1).
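Those place values can be checked in a couple of lines. A quick sketch that sums the place values by hand and then compares against Python’s built-in base-2 conversion:

```python
bits = "0101"

# Sum each digit times its place value: reading right to left,
# the places are worth 1, 2, 4, 8, ... (2**0, 2**1, 2**2, ...).
value = sum(int(b) * 2**i for i, b in enumerate(reversed(bits)))
print(value)           # 5, from 4 + 1

# Python's built-in base-2 conversion agrees.
print(int(bits, 2))    # 5
```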

Let’s understand how a computer actually decodes text …

So, basically, a computer works with a series of binary bits, and a collection of 8 bits constitutes a byte, which represents a single character.

For example, 01000101 = 1 character.

So, the first thing to do on seeing a series of 0’s and 1’s is to break it into groups of 8 bits (bytes). Once this is done, we can convert each individual byte to its corresponding character.
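This splitting step can be sketched in Python. Here we take a 16-bit stream, cut it into 8-bit groups, and let the ASCII table do the final lookup (the sample stream is just an illustration):

```python
stream = "0100100001101001"  # two 8-bit groups

# Break the stream into bytes: groups of 8 bits each.
groups = [stream[i:i + 8] for i in range(0, len(stream), 8)]
print(groups)  # ['01001000', '01101001']

# Convert each byte to a number, then to its ASCII character.
text = "".join(chr(int(g, 2)) for g in groups)
print(text)    # Hi
```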

Now, we check the first 3 digits of the 8-bit code to determine whether the letter is uppercase or lowercase:

  • 010 = Uppercase
  • 011 = Lowercase

For example, 01000101 = an uppercase letter.

The remaining digits then determine which letter it is. Reading them from right to left, their place values are:

  • Bit 1: 2 to the power of 0 = 1
  • Bit 2: 2 to the power of 1 = 2
  • Bit 3: 2 to the power of 2 = 4
  • Bit 4: 2 to the power of 3 = 8
  • Bit 5: 2 to the power of 4 = 16

To interpret the last 5 digits, we add up the place values of the 1’s and ignore the 0’s.

For example, the leftover digits 00101 mean 0(16), 0(8), 1(4), 0(2) and 1(1). We pick only the places holding a 1, that is 4 and 1. Adding them together gives 5, which corresponds to the fifth letter, E, in the ASCII table.

We continue this procedure for all the letters, and the same idea can be applied to decoding numbers.
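The whole manual procedure above (case bits, then place values of the last 5 bits) can be sketched as a small Python function. This is an illustration only, and it assumes the byte encodes a letter A–Z or a–z; the helper name `decode_letter` is my own:

```python
def decode_letter(byte):
    """Decode one 8-bit ASCII letter code by the manual procedure:
    first 3 bits give the case, last 5 bits give the letter's position."""
    # Step 1: 010 marks uppercase, 011 marks lowercase.
    case = "Uppercase" if byte[:3] == "010" else "Lowercase"
    # Step 2: add the place values (16, 8, 4, 2, 1) wherever a 1 appears.
    position = sum(2**i for i, b in enumerate(reversed(byte[3:])) if b == "1")
    # Step 3: the position indexes into the alphabet (1 = A/a).
    base = "A" if case == "Uppercase" else "a"
    letter = chr(ord(base) + position - 1)
    return case, position, letter

print(decode_letter("01000101"))  # ('Uppercase', 5, 'E')
```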

But why only Base 2?

The answer dates back to the days when room-sized computers were in use. Every word, letter, and digit is computed using electrical signals, and in the early days of computing these signals were not easy to measure and control precisely.

And so two states, ON and OFF, were far easier to distinguish reliably: a signal is either present or absent. Based on this concept, early computers were built around binary, and modern computers have continued the same approach using transistors.

And so we may conclude that the mass acceptance of binary code in computer systems comes down to the traditional design of those systems.


Yakshangi Joshi

A tech writer who likes talking to, for, and about technology. Majorly a web enthusiast ☁️. There's always a positive side to watch things happen!!