Binary code

Binary code is the system of representing text or computer processor instructions using the two binary digits of the binary number system, "0" and "1". A binary string of eight digits (bits) can represent any of 256 possible values and can correspond to a variety of different symbols, letters, or instructions. In 8-bit ASCII code, for example, the lowercase letter a is represented by the bit string 01100001.
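As an illustration of the character-to-bits mapping described above, the sketch below (Python, built-ins only) prints the 8-bit ASCII pattern for a few lowercase letters; the choice of letters is arbitrary.

```python
# Map characters to their 8-bit ASCII patterns using Python built-ins.
for ch in "abc":
    code = ord(ch)              # ASCII code point, e.g. 97 for 'a'
    bits = format(code, "08b")  # zero-padded 8-bit binary string
    print(ch, code, bits)       # e.g.: a 97 01100001
```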
In computing and telecommunication, binary code is used for any of a variety of methods of encoding data, such as character strings, into bit strings. Those methods may be fixed-width or variable-width.
In a fixed-width binary code, each letter, digit, or other character is represented by a bit string of the same length; that bit string, interpreted as a binary number, is usually displayed in code tables in octal, decimal, or hexadecimal notation.
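A minimal sketch of how one fixed-width code word appears in these notations, in Python; the bit string used is the 8-bit ASCII pattern for a from the introduction.

```python
bits = "01100001"          # 8-bit ASCII pattern for lowercase 'a'
value = int(bits, 2)       # interpret the bit string as a binary number
print(format(value, "o"))  # octal:       141
print(value)               # decimal:     97
print(format(value, "x"))  # hexadecimal: 61
```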
There are many character sets and many character encodings for them.
A bit string, interpreted as a binary number, can be translated into a decimal number.
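The translation from a bit string to a decimal number can be sketched with positional arithmetic: reading left to right, each step doubles the running value and adds the next bit. A Python version (the function name is mine):

```python
def binary_to_decimal(bits):
    """Translate a bit string into a decimal integer by positional arithmetic."""
    value = 0
    for b in bits:
        value = value * 2 + int(b)  # shift left one binary place, add the new bit
    return value

print(binary_to_decimal("01100001"))  # 97, matching int("01100001", 2)
```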
==Early uses of binary codes==
Anton Glaser, in History of Binary and Other Nondecimal Numeration (Tomash, 1971, ISBN 0-938228-00-5), Chapter VII, "Applications to Computers", cites the following pre-ENIAC milestones:
- 1932: C.E. Wynn-Williams "Scale of Two" counter
- 1938: Atanasoff-Berry Computer
- 1939: Stibitz: "excess three" code in the Complex Computer
==Weight of binary codes==
The weight of a binary code, as defined in [1], is the Hamming weight of the binary words coding for the represented words or sequences.
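As a small illustration in Python (the helper name is mine), the Hamming weight of a code word is simply its count of 1 bits:

```python
def hamming_weight(bits):
    """Number of 1 bits in a binary word."""
    return bits.count("1")

print(hamming_weight("01100001"))  # 3 ones in the ASCII code for 'a'
```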
==External links==
- Harvard Computer Science Lecture on Binary in Computing