Instructions per second

From Wikipedia, the free encyclopedia

MIPS (million instructions per second) is a measure of microprocessor speed.
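By definition, the figure is the number of instructions executed divided by the execution time in seconds, scaled by 10^6; equivalently, it is the clock rate divided by the average number of cycles per instruction (CPI), again scaled by 10^6. A minimal sketch of both forms, using illustrative numbers that are assumptions rather than measurements from any real processor:

```python
def mips(instructions_executed, execution_time_s):
    # MIPS = instructions executed / (execution time in seconds * 10**6)
    return instructions_executed / (execution_time_s * 1e6)

def mips_from_clock(clock_hz, avg_cpi):
    # Equivalent form: clock rate / (average CPI * 10**6)
    return clock_hz / (avg_cpi * 1e6)

# Illustrative numbers only: 200 million instructions in 2 seconds -> 100 MIPS
print(mips(200e6, 2.0))              # 100.0
# A 100 MHz processor averaging 1.25 cycles per instruction -> 80 MIPS
print(mips_from_clock(100e6, 1.25))  # 80.0
```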


However, this measure is useful only among processors with the same instruction set, as different instruction sets often take different numbers of instructions to do the same job.
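To see why, consider a hypothetical comparison (all numbers assumed for illustration): two machines with different instruction sets run the same job in the same wall-clock time, yet report very different MIPS figures.

```python
# Hypothetical numbers: the same job compiled for two instruction sets.
# Machine A's denser instruction set needs fewer instructions; Machine B
# needs twice as many, yet both finish in the same wall-clock time.
job_time_s = 2.0
instrs_a = 100e6   # assumed instruction count on machine A
instrs_b = 200e6   # assumed instruction count on machine B

mips_a = instrs_a / (job_time_s * 1e6)   # 50 MIPS
mips_b = instrs_b / (job_time_s * 1e6)   # 100 MIPS

# B reports twice the MIPS of A, but the user waits exactly as long.
print(mips_a, mips_b)  # 50.0 100.0
```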

Many reported MIPS values have represented 'peak' execution rates on artificial instruction sequences with few branches, whereas realistic workloads consist of a mix of instructions, some of which take longer to execute than others. The performance of the memory hierarchy also greatly affects processor performance, an issue MIPS calculations do not consider.
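As an illustration with assumed figures, memory stalls alone can drag effective throughput well below the quoted peak:

```python
# Illustrative (assumed) numbers showing peak vs. effective MIPS.
clock_hz = 100e6   # 100 MHz
base_cpi = 1.0     # peak: one cycle per instruction, no stalls

# Suppose 30% of instructions access memory, 5% of those accesses miss
# the cache, and each miss stalls the processor for 20 cycles.
mem_refs_per_instr = 0.30
miss_rate = 0.05
miss_penalty_cycles = 20

effective_cpi = base_cpi + mem_refs_per_instr * miss_rate * miss_penalty_cycles
peak_mips = clock_hz / (base_cpi * 1e6)           # 100.0
effective_mips = clock_hz / (effective_cpi * 1e6)

print(effective_cpi, peak_mips, effective_mips)   # 1.3 100.0 ~76.9
```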

Because of these problems, researchers have created standardized tests such as SPECint to measure the real effective performance in commonly used applications, and raw MIPS figures have fallen into disuse. The acronym was even pejoratively reinterpreted as "Meaningless Indication of Processor Speed" or "Meaningless Information Provided by Salespeople".


In the 1970s, minicomputer performance was compared using VAX MIPS: a computer was timed on a task and its performance rated relative to the VAX-11/780, which was taken to perform approximately 1 MIPS.
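The relative rating is simple arithmetic, as this sketch with hypothetical timings shows:

```python
# Hypothetical timings for the same benchmark task.
vax_time_s = 40.0      # assumed time on the ~1 MIPS reference VAX
machine_time_s = 10.0  # assumed time on the machine being rated

# The machine runs the task 4x faster than the reference VAX,
# so it is rated at roughly 4 VAX MIPS.
vax_mips_rating = vax_time_s / machine_time_s
print(vax_mips_rating)  # 4.0
```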