Sign bit

From Wikipedia, the free encyclopedia

In computer science, the sign bit is a bit in a computer numbering format that indicates the sign of a number. In the IEEE 754 floating-point format, the sign bit is the leftmost bit (most significant bit). Typically, a sign bit of 1 means the number is negative (in the case of two's complement integers) or non-positive (for ones' complement integers, sign-magnitude integers, and floating-point numbers), while a sign bit of 0 indicates a non-negative number.
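
As a concrete illustration, the following C sketch extracts the sign bit by shifting the most significant bit of a 32-bit pattern down to bit position 0; the helper names int_sign_bit and float_sign_bit are illustrative only and are not part of any standard library.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Return the sign bit (the most significant bit) of a 32-bit
       two's complement integer. */
    static int int_sign_bit(int32_t x) {
        return (int)((uint32_t)x >> 31);     /* shift the MSB down to bit 0 */
    }

    /* Return the sign bit of an IEEE 754 single-precision float. */
    static int float_sign_bit(float f) {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);      /* view the float's bit pattern */
        return (int)(bits >> 31);
    }

    int main(void) {
        printf("%d %d\n", int_sign_bit(-5), int_sign_bit(7));            /* 1 0 */
        printf("%d %d\n", float_sign_bit(-2.5f), float_sign_bit(0.0f));  /* 1 0 */
        return 0;
    }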

In the two's complement representation, the sign bit carries a weight of -2^(w-1), where w is the width of the bit vector in bits; when it is set, it contributes the most negative value the vector can represent. In the ones' complement representation, the sign bit plays the same role, except that its weight is -(2^(w-1) - 1) rather than -2^(w-1). In the sign-magnitude representation, the sign bit carries no weight of its own; it simply determines whether the value of the given vector is positive or negative.[1]
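
The difference in the sign bit's weight can be made explicit with a small C sketch that decodes the low w bits of an unsigned container under each of the three representations; the function names and the 4-bit example are illustrative only.

    #include <stdint.h>
    #include <stdio.h>

    /* Interpret the low w bits of 'bits'.  The sign bit (bit w-1) carries
       weight -2^(w-1) in two's complement, -(2^(w-1) - 1) in ones'
       complement, and only flips the sign of the magnitude in
       sign-magnitude. */
    static int32_t twos_complement(uint32_t bits, int w) {
        uint32_t mag  = bits & ((1u << (w - 1)) - 1);   /* low w-1 bits */
        uint32_t sign = (bits >> (w - 1)) & 1u;
        return (int32_t)mag - (int32_t)(sign << (w - 1));
    }

    static int32_t ones_complement(uint32_t bits, int w) {
        uint32_t mag  = bits & ((1u << (w - 1)) - 1);
        uint32_t sign = (bits >> (w - 1)) & 1u;
        return (int32_t)mag - (int32_t)sign * ((1 << (w - 1)) - 1);
    }

    static int32_t sign_magnitude(uint32_t bits, int w) {
        int32_t  mag  = (int32_t)(bits & ((1u << (w - 1)) - 1));
        uint32_t sign = (bits >> (w - 1)) & 1u;
        return sign ? -mag : mag;
    }

    int main(void) {
        /* The 4-bit pattern 1011 decodes differently in each system. */
        printf("%d %d %d\n",
               twos_complement(0xB, 4),   /* -8 + 3 = -5 */
               ones_complement(0xB, 4),   /* -7 + 3 = -4 */
               sign_magnitude(0xB, 4));   /* -3          */
        return 0;
    }

For example, the 4-bit pattern 1011 decodes to -5 in two's complement, -4 in ones' complement, and -3 in sign-magnitude.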

When an 8-bit value is added to a 16-bit value using signed arithmetic, the microprocessor propagates the 8-bit value's sign bit through the high-order half of the 16-bit register holding it, a process called sign extension or sign propagation.[2]
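
The bit-level effect can be sketched in C as follows, assuming a two's complement machine; sign_extend_8_to_16 is an illustrative name rather than a processor instruction, and real CPUs perform the equivalent operation in hardware (for example via a sign-extending load or move).

    #include <stdint.h>
    #include <stdio.h>

    /* Sign-extend an 8-bit value to 16 bits by copying (propagating) its
       sign bit into the high-order half of the result. */
    static int16_t sign_extend_8_to_16(uint8_t x) {
        uint16_t wide = x;
        if (wide & 0x80u)        /* sign bit of the 8-bit value is set */
            wide |= 0xFF00u;     /* fill the high-order byte with 1s */
        return (int16_t)wide;    /* reinterpret as a signed 16-bit value */
    }

    int main(void) {
        printf("%d\n", sign_extend_8_to_16(0xFB));  /* 0xFFFB = -5 */
        printf("%d\n", sign_extend_8_to_16(0x05));  /* 0x0005 =  5 */
        return 0;
    }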

See also

References

  1. ^ Bryant, Randal; O'Hallaron, David (2003). "2". Computer Systems: a Programmer's Perspective. Upper Saddle River, New Jersey: Prentice Hall. pp. 52–54. ISBN 0-13-034074-X.
  2. ^ http://www.adrc.net/data-dictionary/s1.htm