Sign bit
In computer science, the sign bit is a bit in a computer numbering format that indicates the sign of a number. In the IEEE 754 floating-point formats, the sign bit is the leftmost bit (most significant bit). Typically, a sign bit of 1 means the number is negative (in the case of two's complement integers) or non-positive (for ones' complement integers, sign-magnitude integers, and floating-point numbers, where a 1 can also denote negative zero), while 0 indicates a non-negative number.
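For example, the sign bit of an IEEE 754 single-precision float can be read by reinterpreting its bit pattern as an unsigned integer and testing bit 31. A minimal C sketch, assuming the platform's float is IEEE 754 binary32 (the function name float_sign_bit is illustrative):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Return the sign bit (bit 31) of an IEEE 754 single-precision float.
   Assumes float is binary32, which holds on virtually all modern platforms. */
static unsigned float_sign_bit(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);  /* reinterpret the float's bit pattern */
    return bits >> 31;               /* the sign bit is the most significant bit */
}

int main(void) {
    printf("%u\n", float_sign_bit(-2.5f)); /* 1: negative */
    printf("%u\n", float_sign_bit(2.5f));  /* 0: positive */
    printf("%u\n", float_sign_bit(-0.0f)); /* 1: negative zero also has its sign bit set */
    return 0;
}
```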
In the two's complement representation, the sign bit has the weight -2^(w-1), where w is the width of the bit vector; it therefore contributes the most negative value the vector can represent. In the ones' complement representation, the sign bit plays the same role, except that its weight is -(2^(w-1) - 1) rather than -2^(w-1). In the sign-magnitude representation, the sign bit carries no weight of its own and simply determines whether the value of the given vector is positive or negative.[1]
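To illustrate these weights, the same 8-bit pattern can be decoded under each representation. A sketch in C applying the weights described above (the decoder function names are illustrative, not standard APIs):

```c
#include <stdint.h>
#include <stdio.h>

/* Decode an 8-bit pattern (w = 8) under each signed representation. */

static int twos_complement(uint8_t b) {
    /* sign bit carries weight -2^(w-1) = -128 */
    return (int)(b & 0x7F) - ((b & 0x80) ? 128 : 0);
}

static int ones_complement(uint8_t b) {
    /* sign bit carries weight -(2^(w-1) - 1) = -127 */
    return (int)(b & 0x7F) - ((b & 0x80) ? 127 : 0);
}

static int sign_magnitude(uint8_t b) {
    /* sign bit only selects the sign of the 7-bit magnitude */
    return (b & 0x80) ? -(int)(b & 0x7F) : (int)(b & 0x7F);
}

int main(void) {
    uint8_t b = 0x83; /* bit pattern 1000 0011 */
    printf("two's complement: %d\n", twos_complement(b)); /* -128 + 3 = -125 */
    printf("ones' complement: %d\n", ones_complement(b)); /* -127 + 3 = -124 */
    printf("sign-magnitude:   %d\n", sign_magnitude(b));  /* -3 */
    return 0;
}
```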
When an 8-bit value is added to a 16-bit value using signed arithmetic, the processor first widens the 8-bit operand to 16 bits by copying its sign bit into each bit of the high-order half of the 16-bit register holding it – a process called sign extension or sign propagation.[2]
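In C, this widening happens automatically when a signed 8-bit value is converted to a wider signed type; the snippet below also shows a manual, fully portable equivalent using the identity (x ^ 0x80) - 0x80, which subtracts 2^8 exactly when the sign bit is set (a sketch; variable names are illustrative):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int8_t narrow = -3;     /* 8-bit pattern: 1111 1101 */

    /* The compiler sign-extends automatically on conversion:
       the sign bit is copied into the upper eight bits. */
    int16_t wide = narrow;  /* 16-bit pattern: 1111 1111 1111 1101 */

    /* Manual equivalent on a raw byte: flip the sign bit, then
       subtract its weight (0x80 = 2^7), yielding -3 for 0xFD. */
    uint8_t raw = 0xFD;
    int manual = (int)(raw ^ 0x80) - 0x80;

    printf("%d %d\n", wide, manual); /* both print -3 */
    return 0;
}
```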
References
- ^ Bryant, Randal; O'Hallaron, David (2003). "2". Computer Systems: a Programmer's Perspective. Upper Saddle River, New Jersey: Prentice Hall. pp. 52–54. ISBN 0-13-034074-X.
- ^ ADRC data dictionary entry: http://www.adrc.net/data-dictionary/s1.htm