

Binary number system: an introduction


This is a little whitepaper on binary number systems. It was not written by me, but I found it nice, so I am posting it here.

4 Answers

  1.


    Everywhere, except for computer-related operations, the main system of mathematical notation today is the decimal system, which is a base-10 system. As in other number systems, the position of a symbol in a base-10 number denotes the value of that symbol in terms of exponential values of the base. That is, in the decimal system, the quantity represented by any of the ten symbols used – 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9 – depends on its position in the number. 

    Unlike the decimal system, the binary system needs only two digits – 0 and 1 – to represent a number. The binary system plays a crucial role in computer science and technology. The first 20 numbers in binary notation are 1, 10, 11, 100, 101, 110, 111, 1000, 1001, 1010, 1011, 1100, 1101, 1110, 1111, 10000, 10001, 10010, 10011, 10100, the origin of which may be better understood if they are re-written in the following way:

    1:    00001                        11:    01011
    2:    00010                        12:    01100
    3:    00011                        13:    01101
    4:    00100                        14:    01110
    5:    00101                        15:    01111
    6:    00110                        16:    10000
    7:    00111                        17:    10001
    8:    01000                        18:    10010
    9:    01001                        19:    10011
    10:   01010                        20:    10100
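    The table above can be reproduced with a few lines of Python (my own sketch, not part of the original whitepaper):

    ```python
    # Print the first 20 numbers in 5-digit binary notation,
    # matching the table above.
    for n in range(1, 21):
        print(f"{n:>2}: {n:05b}")
    ```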

    Any decimal number can be converted into the binary system by summing the appropriate multiples of the different powers of two. For example, starting from the right, 10101101 represents (1 x 2^0) + (0 x 2^1) + (1 x 2^2) + (1 x 2^3) + (0 x 2^4) + (1 x 2^5) + (0 x 2^6) + (1 x 2^7) = 173. The same expansion can be used for the conversion of binary numbers into decimal numbers.
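    The sum of powers of two described above can be computed directly; here is a small Python sketch of my own to illustrate it:

    ```python
    def binary_to_decimal(bits: str) -> int:
        """Sum each digit times its power of two, starting from the right."""
        total = 0
        for power, bit in enumerate(reversed(bits)):
            total += int(bit) * 2 ** power
        return total

    print(binary_to_decimal("10101101"))  # 173
    ```

    Python's built-in int("10101101", 2) performs the same conversion in one call.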

    For the conversion of decimal numbers to binary numbers, the same principle can be used, but the other way around. Thus, to convert, first find the highest power of two that does not exceed the given number, and place a 1 in the corresponding position in the binary number. For example, the highest power of two in the decimal number 519 is 2^9 = 512. Thus, a 1 can be inserted as the 10th digit, counted from the right: 1000000000.

    In the remainder, 519 - 512 = 7, the highest power of 2 is 2^2 = 4, so the third zero from the right can be replaced by a 1: 1000000100. The next remainder, 3, consists of the sum of two powers of 2: 2^1 + 2^0, so the first and second zeros from the right are replaced by 1: 519 = 1000000111 in binary.
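    The highest-power-of-two method can be written out as a short Python sketch (my own code, just to spell out the steps above):

    ```python
    def decimal_to_binary(n: int) -> str:
        """Convert by repeatedly subtracting the highest power of two that fits."""
        if n == 0:
            return "0"
        # Find the highest power of two not exceeding n.
        power = 0
        while 2 ** (power + 1) <= n:
            power += 1
        bits = []
        for p in range(power, -1, -1):
            if 2 ** p <= n:    # this power of two fits: place a 1 and subtract it
                bits.append("1")
                n -= 2 ** p
            else:              # it does not fit: place a 0
                bits.append("0")
        return "".join(bits)

    print(decimal_to_binary(519))  # 1000000111
    ```

    The built-in format(519, "b") gives the same answer.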

    Arithmetic operations in the binary system are extremely simple. The basic rules are: 1 + 1 = 10, and 1 x 1 = 1. Zero plays its usual role: 1 x 0 = 0, and 1 + 0 = 1. Addition, subtraction, and multiplication are done in a fashion similar to that of the decimal system.
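    For a concrete check of these rules, here is a small Python sketch of my own (int(..., 2) and format(..., "b") handle the base conversion):

    ```python
    # Binary addition, subtraction, and multiplication, verified in Python.
    a = int("1011", 2)         # 11 in decimal
    b = int("0110", 2)         # 6 in decimal
    print(format(a + b, "b"))  # 10001   (11 + 6 = 17)
    print(format(a - b, "b"))  # 101     (11 - 6 = 5)
    print(format(a * b, "b"))  # 1000010 (11 x 6 = 66)
    ```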

    Because only two digits, or states (on and off, 0 and 1), are involved, the binary system is useful in computers, which are digital devices. The "on" position corresponds to a 1, and the "off" position to a 0.

    In magnetic storage devices (hard disks, floppies, Zip disks, tapes, etc.), magnetized areas of the media are used to represent binary numbers: a magnetized area stands for 1, and the absence of magnetization means 0. Flip-flops, electronic devices that can carry only two distinct voltages at their outputs and that can be switched from one state to the other by an impulse, can also be used to represent binary numbers; the two voltages correspond to the two digits. Optical and magneto-optical storage devices use two distinct levels of light reflectance or polarization to represent 0 or 1.

    saritha

  2.

    Bit is an abbreviation for binary digit – the smallest unit of information in a digital world. A bit is represented by the numbers 1 and 0, which correspond to the states on and off, true and false, or yes and no. Bits are the building blocks for all information processing that goes on in digital electronics and computers. The term bit was introduced by John Tukey, an American statistician and early computer scientist. He first used the term in 1946, as a shortened form of the term binary digit. Bits are usually combined into larger units called bytes. 

    Byte, in computer science, is a unit of information built from bits, the smallest units of information used in computers. One byte equals 8 bits. The values that a byte can take on range between 00000000 (0 in decimal notation) and 11111111 (255 in decimal notation). This means that a byte can represent 2^8 (2 raised to the eighth power) or 256 possible states (0-255). Bytes are combined into groups of 1 to 8 bytes called words. The size of the words used by a computer’s central processing unit (CPU) depends on the bit-processing ability of the CPU. A 32-bit processor, for example, can use words that are up to four bytes long (32 bits). The term byte was first used in 1956 by German-born American computer scientist Werner Buchholz to prevent confusion with the word bit. He described a byte as a group of bits used to encode a character. The eight-bit byte was created that year and was soon adopted by the computer industry as a standard.
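    A quick Python check of the byte arithmetic above (my own sketch):

    ```python
    # A byte is 8 bits, so it can hold 2**8 = 256 distinct values (0-255).
    print(2 ** 8)              # 256
    print(int("11111111", 2))  # 255, the largest value of one byte
    print(int("00000000", 2))  # 0, the smallest
    print(format(255, "08b"))  # 11111111
    ```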


    Computers are often classified by the number of bits they can process at one time, as well as by the number of bits used to represent addresses in their main memory (RAM). Computer graphics are described by the number of bits used to represent pixels (short for picture elements), the smallest identifiable parts of an image. In monochrome images, each pixel is made up of one bit. In 256-color and gray-scale images, each pixel is made up of one byte (eight bits). In true color images, each pixel is made up of at least 24 bits. The particular sequence of bits in a byte encodes a unit of information such as a keyboard character. One byte typically represents a single character such as a number, letter, or symbol. Software designers use computers and software to combine bytes in complex ways and create meaningful data in the form of text files or binary files (files that contain data to be processed and interpreted by a computer). Bits and bytes are the basis for representing all meaningful information and programs on computers. 

    Bytes are the major unit for measuring quantities of data or data capacity. Data quantity is commonly measured in kilobytes (1,024 bytes), megabytes (1,048,576 bytes), or gigabytes (about 1 billion bytes). A regular floppy disk normally holds 1.44 megabytes of data, which equates to approximately 1,400,000 keyboard characters. At this storage capacity, a single disk can hold a document approximately 700 pages long, with 2,000 characters per page. The number of bits used by a computer’s central processing unit (CPU) for addressing information represents one measure of a computer’s speed and power. Computers today often use 16, 32, or 64 bits in groups of 2, 4, and 8 bytes in their addressing.

    In computing, digital is synonymous with binary because the computers familiar to most people process information coded as combinations of binary digits (bits). One bit can represent at most two values; 2 bits, four values; 8 bits, 256 values; and so on. Values that fall between two numbers are represented as either the lower or the higher of the two. Because digital representation represents a value as a coded number, the range of values represented can be very wide, although the number of possible values is limited by the number of bits used. 

    Digitizing means to convert any analog (or continuously varying signal), such as the lines in a drawing or a sound signal, into a series of discrete units represented by the digits 0 and 1. A drawing or photograph, for example, can be digitized by a scanner that converts lines and shading into combinations of 0’s and 1’s by sensing different intensities of light and dark.

    Analog-to-digital converters are commonly used to perform this translation. An analog-to-digital converter, or ADC, is an electronic device for converting data from analog (continuous) to digital (discrete) form for use in electronic equipment such as digital computers, digital audio and video recorders, and data storage devices. Analog, or continuously varying, electrical waveforms are applied to the device and are sampled at a fixed rate. Sample values are then expressed as digital numbers, using a binary numbering system consisting only of 0’s and 1’s. The resulting digital codes can be used in various types of communications systems.
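    To make the sampling-and-quantizing idea concrete, here is a toy Python sketch (my own simplified model, not how a real ADC chip works): it samples a sine wave at a fixed rate and expresses each sample as a 4-bit binary code.

    ```python
    import math

    BITS = 4      # resolution: 2**4 = 16 quantization levels (illustrative)
    SAMPLES = 8   # samples taken over one signal cycle (illustrative)

    def quantize(value: float, bits: int) -> int:
        """Map a value in [-1.0, 1.0] to an unsigned integer code."""
        levels = 2 ** bits
        code = int((value + 1.0) / 2.0 * (levels - 1))
        return max(0, min(levels - 1, code))

    # Sample one cycle of a sine wave and print the digital codes.
    for i in range(SAMPLES):
        sample = math.sin(2 * math.pi * i / SAMPLES)
        print(format(quantize(sample, BITS), "04b"))
    ```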

    saritha

  3.

    I know it is a silly topic, but you know our admin mailed me regarding that premium membership card, so I am on my way to reaching that 20-post mark early. Hey, will your welcome kit contain any more attractions than a membership card? I hope it will look like a credit card.

    saritha

  4.


    SAVAD.AM
