Binary notation

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Lerdsuwa (talk | contribs) at 13:00, 15 October 2005. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

In mathematics, computer science, telecommunications, etc., the term binary notation has the following meanings:

1. Any notation that uses two different characters, usually the digits 0 and 1.

Note: Data encoded in binary notation need not be in the form of a pure binary numeration system; e.g., they may be represented by a Gray code. See binary numeral system.
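The distinction above can be illustrated with the reflected binary (Gray) code, in which successive values differ in exactly one bit and the digits are not coefficients of powers of 2. A minimal sketch, using the standard conversion formula n XOR (n shifted right by one); the function name `to_gray` is chosen here for illustration:

```python
def to_gray(n: int) -> int:
    """Convert a nonnegative integer to its Gray-code representation."""
    return n ^ (n >> 1)

# Compare pure binary numeration with Gray code for the first few integers.
for i in range(4):
    print(f"{i}: pure binary {i:03b}, Gray code {to_gray(i):03b}")
```

Both columns use only the characters 0 and 1, so both are binary notation in sense 1, but only the first is a pure binary numeration system in sense 2.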

2. A scheme for representing numbers in which the digits are arranged in sequence, with the understanding that successive digits are interpreted as coefficients of successive powers of the base 2.
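This second sense can be sketched as follows. The helper name `binary_value` is chosen here for illustration; it interprets each digit as a coefficient of a power of 2, most significant digit first:

```python
def binary_value(digits: str) -> int:
    """Interpret a string of 0/1 characters as coefficients of
    successive powers of 2 (most significant digit first)."""
    value = 0
    for d in digits:
        value = value * 2 + int(d)
    return value

# "1011" means 1*8 + 0*4 + 1*2 + 1*1 = 11
print(binary_value("1011"))
```

This is equivalent to Python's built-in `int("1011", 2)`.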

Source: partly from Federal Standard 1037C and from MIL-STD-188