Talk:Integer overflow
Integer arithmetic is frequently used in computer programs on all types of systems, since floating-point operations may incur higher overhead (depending on processor capabilities).
Floating-point operations may or may not actually be more expensive than integer arithmetic on given hardware. I think there are much better reasons to use integers instead of floats: integers are exact and give you more precision than floats of the same size. For example, a 32-bit integer gives you 32 bits of precision, whereas an IEEE single-precision float, which also takes 32 bits, provides only 24 bits of precision (23 explicit mantissa bits plus the implicit leading bit). When the numbers you're trying to represent are integers (or even rationals with a common denominator), you're therefore better off using ints.