Bits
Whenever people talk or write about computers, they use the term BITS: “bits of information,” “megabits,” “bits per second,” etc.
BIT is short for Binary digIT. Binary means “two,” and digits are the basic number symbols. In our normal (“denary” or “decimal”) number system, we use ten digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. Any number beyond those is made by combining digits. So ten is written 10, which means “one ten and zero ones.” The denary number 483 is read “four hundred eighty-three” and means “4 hundreds (10×10, or ten squared), 8 tens, and 3 ones.”
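If you like to see this with code, here is a small Python sketch (my own illustration, not part of the course) that breaks 483 into its place values:

```python
# Break the denary number 483 into hundreds, tens, and ones.
number = 483
hundreds = number // 100        # 4 hundreds (10 x 10)
tens = (number // 10) % 10      # 8 tens
ones = number % 10              # 3 ones
print(hundreds, tens, ones)     # 4 8 3
# Putting the place values back together gives the original number.
print(hundreds * 100 + tens * 10 + ones)  # 483
```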
There are two binary digits: 0 and 1. In binary, 10 is two: “1 two and 0 ones.” 11 in binary is three: “1 two and 1 one.” Four in binary is 100: “1 four (2×2, or two squared), 0 twos, and 0 ones.”
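You can check these binary values yourself with Python (a side illustration of mine, using the built-in `int()` and `bin()` functions):

```python
# int() with base 2 reads a string of binary digits as a number.
print(int("10", 2))   # 2: "1 two and 0 ones"
print(int("11", 2))   # 3: "1 two and 1 one"
print(int("100", 2))  # 4: "1 four, 0 twos, and 0 ones"
# bin() goes the other way, from a number to its binary digits.
print(bin(4))         # 0b100
```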
Everything a computer uses can be represented using bits, or a string of zeros and ones. HOWEVER, it’s not exactly true that computers actually USE binary digits (or 0s and 1s). Computers actually use electric signals, which are controlled by being off or on. If we represent an electric current (ON) as a 1 and a lack of current (OFF) as a 0, then binary digits can represent what the computer is actually doing. It’s easier to write and work with 101001110110 than “on-off-on-off-off-on-on-on-off-on-on-off”!
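The on/off translation can be sketched in a couple of lines of Python (again, just my own demonstration):

```python
# Translate a string of on/off signals into binary digits: ON = 1, OFF = 0.
signals = "on-off-on-off-off-on-on-on-off-on-on-off"
bits = "".join("1" if s == "on" else "0" for s in signals.split("-"))
print(bits)  # 101001110110
```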
We can group bits to make them easier to read and work with. The most common grouping is the BYTE, a group of 8 bits (binary digits), such as 10011011. There are also NIBBLES, which are groups of four bits.
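Since a byte is 8 bits, it splits neatly into two nibbles. Here is a quick Python sketch of that grouping (my own example):

```python
# A byte is 8 bits; a nibble is 4 bits, so a byte splits into two nibbles.
byte = "10011011"
high_nibble, low_nibble = byte[:4], byte[4:]
print(high_nibble, low_nibble)  # 1001 1011
# The whole byte still represents a single number.
print(int(byte, 2))             # 155
```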
When you hear “bit,” think “binary digit”: a 1 or a 0. Remember that these are used to represent whether electric current is on or off.
For more information on bits, check out my online course “Conquer Computer Science: Data Representation”