Introduction
Normalization transforms characters and sequences of characters into a
formally defined underlying representation. This matters most when text
must be compared for sorting and searching, but it is also used when
storing text, to ensure that the text is stored in a consistent
representation.
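For example, "é" can be represented either as the single code point
U+00E9 or as "e" followed by the combining acute accent U+0301; the two
look identical but compare unequal until normalized. Here is a minimal
sketch in Python, using the standard unicodedata module (the language
choice is illustrative; this text does not prescribe one):

```python
import unicodedata

composed = "\u00e9"     # U+00E9 LATIN SMALL LETTER E WITH ACUTE
decomposed = "e\u0301"  # U+0065 "e" + U+0301 COMBINING ACUTE ACCENT

# Visually identical, yet unequal as raw code point sequences
print(composed == decomposed)  # False

# Normalizing both strings to the same form makes the comparison succeed
print(unicodedata.normalize("NFC", composed)
      == unicodedata.normalize("NFC", decomposed))  # True
```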
The Unicode Consortium has defined a number of normalization forms
reflecting the various needs of applications:
- Normalization Form D (NFD) - Canonical Decomposition
- Normalization Form C (NFC) - Canonical Decomposition followed by
Canonical Composition
- Normalization Form KD (NFKD) - Compatibility Decomposition
- Normalization Form KC (NFKC) - Compatibility Decomposition followed by
Canonical Composition
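To make the canonical/compatibility distinction concrete, the following
sketch (again in Python, using the standard unicodedata module) applies
all four forms to the compatibility character "ﬁ" (U+FB01, LATIN SMALL
LIGATURE FI). The canonical forms leave the ligature intact, while the
compatibility forms replace it with the plain letters "f" and "i":

```python
import unicodedata

ligature = "\ufb01"  # "ﬁ" LATIN SMALL LIGATURE FI

for form in ("NFD", "NFC", "NFKD", "NFKC"):
    result = unicodedata.normalize(form, ligature)
    print(form, [f"U+{ord(c):04X}" for c in result])

# NFD  ['U+FB01']            -- canonical forms keep the ligature
# NFC  ['U+FB01']
# NFKD ['U+0066', 'U+0069']  -- compatibility forms yield "f", "i"
# NFKC ['U+0066', 'U+0069']
```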
Each of these forms is defined in terms of a set of transformations on
the text, expressed both as an algorithm and as a set of data files.
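Those data files are part of the Unicode Character Database; as a small
illustration (again in Python, whose unicodedata module exposes this
data), each character's decomposition mapping can be looked up directly:

```python
import unicodedata

# Canonical decomposition of U+00E9 ("é"): base letter plus combining mark
print(unicodedata.decomposition("\u00e9"))   # '0065 0301'

# Compatibility decomposition of U+FB01 ("ﬁ"), flagged with a <compat> tag
print(unicodedata.decomposition("\ufb01"))   # '<compat> 0066 0069'
```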