A-law algorithm

From Wikipedia, the free encyclopedia

In telecommunication, an A-law algorithm is a standard compression algorithm used in digital communications systems of the European digital hierarchy to optimize, i.e., modify, the dynamic range of an analog signal for digitizing.
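This revision does not state the compression curve itself. For reference, the A-law compression function standardized in ITU-T G.711 uses the parameter A = 87.6 and maps a normalized input sample x in [-1, 1] to

 F(x) = \operatorname{sgn}(x) \cdot
   \begin{cases}
     \dfrac{A|x|}{1 + \ln A},          & |x| < \dfrac{1}{A} \\[4pt]
     \dfrac{1 + \ln(A|x|)}{1 + \ln A}, & \dfrac{1}{A} \le |x| \le 1
   \end{cases}

The linear segment near zero preserves quiet signals, while the logarithmic segment compresses loud ones.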

Note 1: The wide dynamic range of speech does not lend itself well to efficient linear digital encoding. A-law encoding effectively reduces the dynamic range of the signal, thereby increasing the coding efficiency and resulting in a signal-to-distortion ratio that is superior to that obtained by linear encoding for a given number of bits.
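As a minimal sketch of how this dynamic-range reduction works, the following C program applies the continuous A-law curve above to a few normalized samples. The function name alaw_compress is ours for illustration; a real G.711 encoder also quantizes the result to an 8-bit codeword, which is omitted here.

 /* Continuous A-law compression curve (ITU-T G.711, A = 87.6).
    Build with: cc alaw.c -lm */
 #include <math.h>
 #include <stdio.h>
 
 #define A 87.6
 
 /* Compress one normalized sample x in [-1, 1]; the result is
    also in [-1, 1]. */
 double alaw_compress(double x)
 {
     double ax = fabs(x);
     double y;
 
     if (ax < 1.0 / A)
         y = A * ax / (1.0 + log(A));              /* linear segment near zero */
     else
         y = (1.0 + log(A * ax)) / (1.0 + log(A)); /* logarithmic segment */
 
     return (x < 0.0) ? -y : y;                    /* restore the sign */
 }
 
 int main(void)
 {
     /* Quiet samples are boosted, loud samples compressed toward 1. */
     printf("%f\n", alaw_compress(0.01));  /* ~0.160 */
     printf("%f\n", alaw_compress(0.50));  /* ~0.873 */
     printf("%f\n", alaw_compress(1.00));  /*  1.000 */
     return 0;
 }

Running it shows an input of 0.01 mapped to roughly 0.16 while 0.5 maps to roughly 0.87: the 100:1 spread of the inputs shrinks to about 6:1 at the output, which is the dynamic-range reduction described in Note 1.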

Source: Federal Standard 1037C