
Charset detection


Character encoding detection, charset detection, or code page detection is the process of heuristically guessing the character encoding of a series of bytes that represent text. The technique is recognised to be unreliable[1] and is only used when specific metadata, such as an HTTP Content-Type: header, is either not available or is assumed to be untrustworthy.

Detection usually involves statistical analysis of byte patterns;[2] such statistical analysis can also be used to perform language detection.[2] The process is not foolproof because it depends on statistical data.[1]
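
The following Python sketch illustrates only the general idea of byte-pattern statistics; the profile representation, the scoring function, and the notion of pre-trained reference profiles are illustrative assumptions, not the design of any particular detector.

    from collections import Counter

    def byte_profile(data: bytes) -> dict:
        """Relative frequency of each byte value in the input."""
        total = len(data) or 1
        return {byte: count / total for byte, count in Counter(data).items()}

    def similarity(profile: dict, reference: dict) -> float:
        """Crude overlap score between two byte-frequency profiles."""
        return sum(min(freq, reference.get(b, 0.0)) for b, freq in profile.items())

    # A real detector would compare the input's profile against reference
    # profiles trained on text corpora, one per candidate (encoding, language)
    # pair, and report the best-scoring candidate with a confidence value.
    sample = "Grüße aus München".encode("iso-8859-1")
    print(byte_profile(sample)[0xFC])   # frequency of 0xFC ('ü' in ISO-8859-1)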

In general, incorrect charset detection leads to mojibake, due to character bytes being interpreted as belonging to one set—the incorrectly detected one—when they actually belong to a completely different one.[3][4]

One of the few cases where charset detection works reliably is detecting UTF-8.[5] This is due to the large percentage of invalid byte sequences in UTF-8,[note 1] so that text in any other encoding that uses bytes with the high bit set is extremely unlikely to pass a UTF-8 validity test.[5] However, badly written charset detection routines do not run the reliable UTF-8 test first, and may decide that UTF-8 is some other encoding. For example, websites in UTF-8 containing the name of the German city München may display "MÃ¼nchen", due to the code deciding that the encoding was ISO-8859-1 or Windows-1252 before (or without) even testing whether it was UTF-8.
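
As an illustration of the UTF-8 test, a strict decode either succeeds or rejects the data at the first invalid sequence. The sketch below is minimal and the function name is illustrative; it also shows how the same bytes produce the mojibake described above when misread as Windows-1252.

    def looks_like_utf8(data: bytes) -> bool:
        """Strict UTF-8 decoding rejects any invalid byte sequence."""
        try:
            data.decode("utf-8")
            return True
        except UnicodeDecodeError:
            return False

    raw = "München".encode("utf-8")
    print(looks_like_utf8(raw))         # True
    print(raw.decode("windows-1252"))   # 'MÃ¼nchen' (the mojibake shown above)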

UTF-16 is fairly reliable to detect due to the high number of newlines (U+000A) and spaces (U+0020) that should be found when dividing the data into 16-bit words, and the large number of NUL bytes all at even or all at odd locations. Common characters must be checked for; relying only on a test that the text is valid UTF-16 fails: the Windows operating system would misdetect the phrase "Bush hid the facts" (without a newline) in ASCII as Chinese UTF-16LE, since all the byte pairs matched assigned Unicode characters in UTF-16LE.
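
A minimal sketch of these heuristics is shown below; the 40% threshold and the function name are illustrative assumptions, and this is not the algorithm used by Windows.

    def guess_utf16(data: bytes) -> str | None:
        """Guess UTF-16LE/BE from NUL-byte positions, then require common characters."""
        if len(data) < 2:
            return None
        even_nuls = data[0::2].count(0)   # NUL bytes at even offsets
        odd_nuls = data[1::2].count(0)    # NUL bytes at odd offsets
        half = len(data) // 2
        if odd_nuls > 0.4 * half and even_nuls == 0:
            candidate = "utf-16-le"       # mostly-ASCII text: high byte is zero
        elif even_nuls > 0.4 * half and odd_nuls == 0:
            candidate = "utf-16-be"
        else:
            return None
        text = data.decode(candidate, errors="ignore")
        # Mere decodability is not enough ("Bush hid the facts" decodes as
        # UTF-16LE), so also require common characters such as spaces.
        return candidate if (" " in text or "\n" in text) else None

    print(guess_utf16("Bush hid the facts".encode("utf-16-le")))   # utf-16-le
    print(guess_utf16(b"Bush hid the facts"))                      # None (plain ASCII)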

Charset detection is particularly unreliable in Europe, in an environment of mixed ISO-8859 encodings. These are closely related eight-bit encodings whose lower half overlaps with ASCII and in which every arrangement of bytes is valid. There is no technical way to tell these encodings apart; recognizing them relies on identifying language features, such as letter frequencies or spellings.
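
A single high-bit byte is valid in every ISO-8859 variant but maps to a different letter in each, as the following Python lines illustrate (the byte value is chosen arbitrarily):

    sample = bytes([0xE4])                # the same single byte each time
    print(sample.decode("iso-8859-1"))    # 'ä' (Latin-1, e.g. German)
    print(sample.decode("iso-8859-5"))    # 'ф' (Cyrillic)
    print(sample.decode("iso-8859-7"))    # 'δ' (Greek)

Only the surrounding language statistics indicate which interpretation is plausible.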

Due to the unreliability of heuristic detection, it is better to properly label datasets with the correct encoding (see Specifying the document's character encoding). Even though UTF-8 and UTF-16 are easy to detect, some systems require UTF-encoded documents to be explicitly labelled with a prefixed byte order mark (BOM).
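
A minimal sketch of BOM-based labelling in Python is shown below; the function name is illustrative, and real systems differ in which BOMs they require or accept. UTF-32 is checked before UTF-16 because the UTF-32LE mark begins with the same bytes as the UTF-16LE mark.

    import codecs

    def encoding_from_bom(data: bytes) -> str | None:
        """Return an encoding implied by a leading byte order mark, if any."""
        if data.startswith(codecs.BOM_UTF8):
            return "utf-8-sig"
        if data.startswith(codecs.BOM_UTF32_LE) or data.startswith(codecs.BOM_UTF32_BE):
            return "utf-32"
        if data.startswith(codecs.BOM_UTF16_LE) or data.startswith(codecs.BOM_UTF16_BE):
            return "utf-16"
        return None                       # no BOM: fall back to heuristics

    print(encoding_from_bom("text".encode("utf-8-sig")))   # utf-8-sig
    print(encoding_from_bom(b"plain ASCII"))               # None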

Notes

  1. ^ In a random byte string, a byte with the high bit set has only a 1/15 chance of starting a valid UTF-8 code point. The odds are even lower in actual text, which is not random but tends to contain isolated bytes with the high bit set, which are always invalid in UTF-8.

References

  1. ^ a b "PHP: mb_detect_encoding - Manual". www.php.net. Retrieved 2024-11-12.
  2. ^ a b Kim, Seung-Ho; Park, Jongsoo (2007). Automatic Detection of Character Encoding and Language (PDF) (Thesis). Stanford University.
  3. ^ "Will unicode soon be the universal code? [The Data]". ieeexplore.ieee.org. Archived from the original on 2025-04-24. Retrieved 2025-07-07.
  4. ^ Chen, Raymond (2019-07-01). "A program to detect mojibake that results from a UTF-8-encoded file being misinterpreted as code page 1252". The Old New Thing. Retrieved 2025-07-07.
  5. ^ a b "A composite approach to language/encoding detection". www-archive.mozilla.org. Retrieved 2024-11-12.