Unicode normalization
Unicode normalization is a form of text normalization that transforms equivalent characters or sequences of characters into a consistent underlying representation so that they may be easily compared.
Composition and Decomposition
Underlying Unicode's normalization methods is the concept of character composition and decomposition. Character composition is the process of combining simpler characters into fewer precomposed characters, such as combining the letter n and the combining tilde into the single character ñ. Decomposition is the opposite process, breaking precomposed characters back into their component pieces.
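Both directions can be shown with Python's standard unicodedata module; the following is a minimal sketch using the ñ from the prose as the example character:

```python
import unicodedata

# "n" followed by a combining tilde (U+0303): two separate characters.
decomposed = "n\u0303"

# Composition merges them into the single precomposed character ñ (U+00F1).
composed = unicodedata.normalize("NFC", decomposed)
print([f"U+{ord(c):04X}" for c in composed])    # ['U+00F1']

# Decomposition is the reverse: the precomposed ñ is split back apart.
print([f"U+{ord(c):04X}" for c in unicodedata.normalize("NFD", composed)])
# ['U+006E', 'U+0303']  ->  "n" + combining tilde
```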
Unicode composes and decomposes based on characters and sequences it deems equivalent. It defines two standards of equivalence for this: canonical equivalence, which distinguishes between functionally equivalent but visually distinct characters, and compatibility equivalence, which does not. See the article on Unicode equivalence for more information.
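Normalization is also how equivalence is usually tested in practice: two strings are canonically equivalent if they share the same NFC (or NFD) form, and compatibility equivalent if they share the same NFKC (or NFKD) form. A small sketch in Python, with ö and a superscript two chosen purely as illustrative characters:

```python
import unicodedata

def canonically_equivalent(a: str, b: str) -> bool:
    # Same NFC form means the strings are canonically equivalent.
    return unicodedata.normalize("NFC", a) == unicodedata.normalize("NFC", b)

def compatibility_equivalent(a: str, b: str) -> bool:
    # Same NFKC form means the strings are compatibility equivalent.
    return unicodedata.normalize("NFKC", a) == unicodedata.normalize("NFKC", b)

# Precomposed ö vs. "o" + combining diaeresis: equivalent under both standards.
print(canonically_equivalent("\u00F6", "o\u0308"))    # True

# Superscript two vs. the digit 2: visually distinct, so only the
# compatibility standard treats them as equivalent.
print(canonically_equivalent("\u00B2", "2"))          # False
print(compatibility_equivalent("\u00B2", "2"))        # True
```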
Standards
Unicode defines four normalization forms.
NFD | Normalization Form Canonical Decomposition | Characters are decomposed by canonical equivalence.
NFC | Normalization Form Canonical Composition | Characters are decomposed and then recomposed by canonical equivalence. It is possible for the result to be a different sequence of characters than the original.
NFKD | Normalization Form Compatibility Decomposition | Characters are decomposed by compatibility equivalence.
NFKC | Normalization Form Compatibility Composition | Characters are decomposed by compatibility equivalence, then recomposed by canonical equivalence.
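In Python's unicodedata module, the form name is passed directly to normalize(). The helper below is a small sketch that prints the code points a string contains under each form; the sample string, an e with a combining acute accent followed by a fi ligature, is only illustrative:

```python
import unicodedata

FORMS = ("NFD", "NFC", "NFKD", "NFKC")

def show_forms(s: str) -> None:
    # Print the code point sequence produced by each normalization form.
    for form in FORMS:
        normalized = unicodedata.normalize(form, s)
        codepoints = " ".join(f"U+{ord(c):04X}" for c in normalized)
        print(f"{form}: {codepoints}")

show_forms("e\u0301\uFB01")   # "e" + combining acute + fi ligature
# NFD : U+0065 U+0301 U+FB01          (ligature kept, e left decomposed)
# NFC : U+00E9 U+FB01                 (acute composed into é)
# NFKD: U+0065 U+0301 U+0066 U+0069   (ligature split into "f" "i")
# NFKC: U+00E9 U+0066 U+0069
```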
All of the above forms standardize the order in which decomposed combining characters appear, even in sequences that were already decomposed prior to normalization. They may also replace characters or sequences with equivalent characters or sequences even when the number of characters does not change. Both behaviors are needed to achieve the consistent encoding that normalization requires.
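A sketch of the reordering behavior, again using unicodedata; the already-decomposed sequence here, an e followed by a combining acute (class 230) and a combining cedilla (class 202), is an illustrative choice:

```python
import unicodedata

# Already decomposed, but the marks are in a non-canonical order:
# the combining acute (class 230) comes before the combining cedilla (class 202).
s = "e\u0301\u0327"
print([unicodedata.combining(c) for c in s])     # [0, 230, 202]

# NFD has nothing left to decompose here, yet it still reorders the
# combining marks by ascending combining class. The number of characters
# is unchanged; only their order, and thus the encoding, differs.
nfd = unicodedata.normalize("NFD", s)
print([f"U+{ord(c):04X}" for c in nfd])          # ['U+0065', 'U+0327', 'U+0301']
```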
Examples
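A first example, sketched in Python with the precomposed character é (U+00E9) as an illustrative starting point:

```python
import unicodedata

original = "\u00E9"   # é, a single precomposed character

for form in ("NFD", "NFC", "NFKD", "NFKC"):
    result = unicodedata.normalize(form, original)
    print(form, [f"U+{ord(c):04X}" for c in result])

# NFD  ['U+0065', 'U+0301']   e + combining acute accent
# NFC  ['U+00E9']             recomposed into the original form
# NFKD ['U+0065', 'U+0301']   same as NFD
# NFKC ['U+00E9']             same as NFC
```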
In the above example you can see how NFD decomposes the original character into simpler component characters, and NFC recomposes it back into the original form. NFKD and NFKC provide the same result.
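The next example uses the angstrom sign (U+212B) and the ohm sign (U+2126), sketched the same way:

```python
import unicodedata

angstrom = "\u212B"   # Å, the angstrom sign
ohm = "\u2126"        # Ω, the ohm sign

for form in ("NFD", "NFC", "NFKD", "NFKC"):
    result = unicodedata.normalize(form, angstrom + ohm)
    print(form, [f"U+{ord(c):04X}" for c in result])

# NFD  ['U+0041', 'U+030A', 'U+03A9']  A + combining ring above, Greek omega
# NFC  ['U+00C5', 'U+03A9']            Å recomposed as U+00C5, not U+212B
# NFKD ['U+0041', 'U+030A', 'U+03A9']  same as NFD
# NFKC ['U+00C5', 'U+03A9']            same as NFC
```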
In the above example you can see how normalization can also replace characters with equivalent characters even when the number of characters does not change. NFD decomposed the angstrom sign (Å, U+212B) into the same components as it would the precomposed Å (U+00C5), and NFC recomposed it into U+00C5, because the two are equivalent and normalization uses only one encoding for a character. The ohm sign (Ω, U+2126) was likewise replaced by the equivalent Greek capital omega (Ω, U+03A9), even though it does not break apart into multiple characters. NFKD and NFKC provide the same result.
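The next example, sketched with the three dotted-letter sequences that the following paragraph refers to as the s, the d, and the q:

```python
import unicodedata

examples = {
    "s": "\u1E61\u0323",     # ṡ (s with dot above) + combining dot below
    "d": "\u1E0B\u0323",     # ḋ (d with dot above) + combining dot below
    "q": "q\u0307\u0323",    # q + combining dot above + combining dot below
}

for label, s in examples.items():
    nfd = unicodedata.normalize("NFD", s)
    nfc = unicodedata.normalize("NFC", s)
    print(label,
          [f"U+{ord(c):04X}" for c in nfd],
          [f"U+{ord(c):04X}" for c in nfc])

# s  NFD: U+0073 U+0323 U+0307   NFC: U+1E69               (ṩ, fully recomposed)
# d  NFD: U+0064 U+0323 U+0307   NFC: U+1E0D U+0307        (ḍ + dot above)
# q  NFD: U+0071 U+0323 U+0307   NFC: U+0071 U+0323 U+0307 (reordered only)
```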
The s above is another example of decomposition and recomposition. The d recomposed into a different encoding of the same character, like the Å in the previous example. The q doesn't have a precomposed form, but the order of its combining marks was still changed to match the canonical ordering. NFKD and NFKC provide the same result.
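The final example covers compatibility decomposition, sketched with the fi ligature (U+FB01), the superscript five (U+2075), and the long s with dot above (U+1E9B) followed by a combining dot below:

```python
import unicodedata

examples = ["\uFB01", "\u2075", "\u1E9B\u0323"]

for s in examples:
    for form in ("NFD", "NFC", "NFKD", "NFKC"):
        result = unicodedata.normalize(form, s)
        print(form, [f"U+{ord(c):04X}" for c in result])
    print()

# ﬁ  (U+FB01): NFD/NFC keep the ligature; NFKD/NFKC split it into "f" "i",
#              and NFKC does not rebuild the ligature afterwards.
# ⁵  (U+2075): NFD/NFC keep the superscript; NFKD/NFKC replace it with "5".
# ẛ̣ (U+1E9B + U+0323): NFD stops at the long s (U+017F) plus the two dots,
#              since that is as far as canonical equivalence goes; NFKD
#              replaces the long s with an ordinary "s", and NFKC recomposes
#              that result into ṩ (U+1E69).
```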
In this example, the fi ligature is preserved by NFD because it uses canonical equivalence, but is broken apart by NFKD because it uses compatibility equivalence instead. Note that NFKC does not reconstruct the ligature. In the second example, the superscript 5 is again preserved by NFD, which keeps visually distinct characters separate, but is converted to an ordinary 5 by NFKD, which normalizes by meaning. The third example shows how NFD decomposes only as far as canonical equivalence allows, while NFKD decomposes further by meaning.