
Linear block codes

Linear block codes have the property of linearity, i.e. the sum of any two codewords is also a codeword, and they are applied to the source bits in blocks, hence the name linear block codes. Although linearity is not a requirement, it is difficult to prove that a code is a good one without this property.
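
As a rough illustration (an example added here, not drawn from the article), the following Python sketch builds the well-known (7,4) Hamming code from an assumed systematic generator matrix G and verifies that the resulting set of codewords is closed under mod-2 addition, which is exactly the linearity property described above. Any generator matrix over GF(2) would pass the same check, since its codewords form a vector space.

    import itertools

    # A sketch (assumed, not from the article): the (7,4) Hamming code built from a
    # systematic generator matrix G over GF(2). Every codeword is a mod-2 linear
    # combination of the rows of G, so the XOR of any two codewords is again a
    # codeword, which is the linearity property described above.
    G = [
        [1, 0, 0, 0, 1, 1, 0],
        [0, 1, 0, 0, 1, 0, 1],
        [0, 0, 1, 0, 0, 1, 1],
        [0, 0, 0, 1, 1, 1, 1],
    ]

    def encode(message):
        """Codeword c = m G over GF(2): XOR the rows of G selected by the message bits."""
        codeword = [0] * 7
        for bit, row in zip(message, G):
            if bit:
                codeword = [c ^ r for c, r in zip(codeword, row)]
        return tuple(codeword)

    # All 2**4 = 16 codewords of the code.
    codewords = {encode(m) for m in itertools.product([0, 1], repeat=4)}

    # Linearity check: the mod-2 sum of every pair of codewords is itself a codeword.
    for a, b in itertools.combinations(codewords, 2):
        assert tuple(x ^ y for x, y in zip(a, b)) in codewords

    print(len(codewords), "codewords, closed under mod-2 addition")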

Any linear block code is represented as [n, k, d], where

  1. n is the length of the codeword, in symbols,
  2. k is the number of source symbols that will be used for encoding at once,
  3. d is the minimum Hamming distance for the code (a brute-force check of these parameters appears after this list).
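
Continuing the sketch above (and reusing its codewords set, an assumption of that example rather than of the article), the parameters can be checked by brute force. For a linear code, the minimum Hamming distance equals the smallest weight of a nonzero codeword, which makes the (7,4) Hamming code a [7, 4, 3] code.

    # Continuing the previous sketch; reuses the `codewords` set defined there.
    n = len(next(iter(codewords)))                # codeword length: 7
    k = 4                                         # source bits encoded per block
    d = min(sum(c) for c in codewords if any(c))  # minimum weight of a nonzero codeword: 3
    print("[n, k, d] =", [n, k, d])               # [7, 4, 3]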

There are many types of linear block codes, such as the following (the repetition and parity families are sketched after this list):

  1. Cyclic codes (the Hamming codes are a subset of the cyclic codes)
  2. Repetition codes
  3. Parity codes
  4. Reed–Solomon codes
  5. BCH codes
  6. Reed–Muller codes
  7. Perfect codes
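
As a rough sketch of two of the simplest families in this list (the function names and parameter choices are illustrative, not taken from the article), a repetition encoder and a single-parity-check encoder can each be written in a few lines; their parameters are [n, 1, n] and [k + 1, k, 2] respectively.

    def repetition_encode(bit, n=3):
        """(n, 1) repetition code: repeat the source bit n times; parameters [n, 1, n]."""
        return [bit] * n

    def parity_encode(bits):
        """Single-parity-check code: append the XOR of the source bits; parameters [k + 1, k, 2]."""
        parity = 0
        for b in bits:
            parity ^= b
        return list(bits) + [parity]

    print(repetition_encode(1))         # [1, 1, 1]
    print(parity_encode([1, 0, 1, 1]))  # [1, 0, 1, 1, 1]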

Block codes are tied to the "penny packing" problem, which has received some attention over the years. In two dimensions it is easy to visualize: take a bunch of pennies flat on the table and push them together, and the result is a hexagonal pattern like a honeycomb. But block codes rely on more dimensions, which cannot easily be visualized. The powerful Golay code used in deep-space communications uses 24 dimensions. If used as a binary code (which it usually is), the dimensions refer to the length of the codeword as defined above.

The theory of coding uses the N-dimensional sphere model. For example, how many pennies can be packed into a circle on a tabletop, or, in three dimensions, how many marbles can be packed into a globe? Other considerations enter into the choice of a code. For example, hexagonal packing into the constraint of a rectangular box will leave empty space at the corners. As the dimensions get larger, the percentage of empty space grows smaller, but at certain dimensions the packing uses all the space, and these codes are the so-called perfect codes. There are very few such codes.
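
A worked check of the "uses all the space" condition (a sketch added here, not from the article): a binary [n, k, d] code is perfect exactly when the Hamming spheres of radius t = (d - 1)/2 around its 2^k codewords account for every one of the 2^n binary words.

    from math import comb

    # Hamming-bound check: a binary [n, k, d] code is perfect when spheres of
    # radius t = (d - 1) // 2 around its 2**k codewords exactly fill the space
    # of 2**n binary words, leaving no empty space.
    def is_perfect(n, k, d):
        t = (d - 1) // 2
        sphere = sum(comb(n, i) for i in range(t + 1))  # words within distance t of a codeword
        return 2**k * sphere == 2**n

    print(is_perfect(7, 4, 3))    # True:  the (7,4) Hamming code is perfect
    print(is_perfect(23, 12, 7))  # True:  the (23,12) binary Golay code, the unextended
                                  #        relative of the 24-dimensional code above
    print(is_perfect(8, 4, 4))    # False: the extended (8,4) Hamming code is not perfect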

Another item that is often overlooked is the number of neighbors a single codeword may have. Again, let's use pennies as an example. First we pack the pennies in a rectangular grid: each penny will have four near neighbors (and four at the corners, which are farther away). In a hexagonal packing, each penny will have six near neighbors. When we increase the dimensions, the number of near neighbors increases very rapidly.
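
A sketch of how quickly the neighbor count grows for an actual code (the parity-check matrix below is an assumed construction of the (7,4) Hamming code, added here for illustration): by linearity every codeword sees the same neighborhood, so counting codewords at each distance from the all-zero codeword gives the neighbor counts. Even this small code already gives every codeword 7 nearest neighbors at the minimum distance 3.

    import itertools
    from collections import Counter

    # Weight distribution of the (7,4) Hamming code, built here from a
    # parity-check matrix H: the codewords are exactly the 7-bit words with
    # zero syndrome. By linearity, the distances from the all-zero codeword
    # are the distances every codeword sees to its neighbors.
    H = [
        [1, 1, 0, 1, 1, 0, 0],
        [1, 0, 1, 1, 0, 1, 0],
        [0, 1, 1, 1, 0, 0, 1],
    ]

    def syndrome(word):
        return tuple(sum(h * w for h, w in zip(row, word)) % 2 for row in H)

    codewords = [w for w in itertools.product([0, 1], repeat=7)
                 if syndrome(w) == (0, 0, 0)]
    weights = Counter(sum(c) for c in codewords)
    print(sorted(weights.items()))  # [(0, 1), (3, 7), (4, 7), (7, 1)]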

The result is that the number of ways for noise to make the receiver choose a neighbor (and hence make an error) grows as well. This is a fundamental limitation of block codes, and indeed of all codes. It may be harder to cause an error to a single neighbor, but the number of neighbors can be large enough that the total error probability actually suffers.

See also

  1. Error correcting code
  2. Convolutional code
  3. Turbo code
