Talk:Integer overflow


Integer arithmetic is frequently used in computer programs on all types of systems, since floating-point operations may incur higher overhead (depending on processor capabilities).

Floating-point operations may or may not actually be more expensive than integer arithmetic on given hardware. I think there are much better reasons to use integers instead of floats: integers are exact and give you more precision than floats of the same size. For example, a 32-bit integer gives you 32 bits of precision, whereas an IEEE single-precision float, which also takes 32 bits, provides only 24 bits of precision (a 23-bit stored mantissa plus one implicit leading bit). When the numbers you're trying to represent are integers (or even rationals with a common denominator), you're therefore better off using ints.
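To make the precision gap concrete, here is a small sketch (in C, my choice of language, since the thread names none): every integer up to 2^24 survives a round trip through a single-precision float, but 2^24 + 1 needs 25 significant bits and gets rounded away.

 #include <stdio.h>
 #include <stdint.h>
 
 int main(void) {
     /* 2^24 = 16777216 is the limit up to which a single-precision
        float (24-bit significand) represents every integer exactly. */
     int32_t exact   = 16777216;
     int32_t inexact = 16777217;  /* 2^24 + 1: needs 25 significant bits */
 
     float f_exact   = (float)exact;    /* stored exactly */
     float f_inexact = (float)inexact;  /* rounded back down to 16777216.0 */
 
     printf("%d -> %.1f\n", exact, f_exact);
     printf("%d -> %.1f\n", inexact, f_inexact);
     return 0;
 }

On any platform with IEEE 754 single-precision floats, the second line prints 16777217 -> 16777216.0; the round trip through a float silently loses the low-order bit.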

Inglorion 09:27, 19 August 2006 (UTC)