
UTF-8

From Wikipedia, the free encyclopedia

UTF-8 (8-bit UCS/Unicode Transformation Format) is a variable-length character encoding for Unicode. It is able to represent any character in the Unicode standard, yet is backwards compatible with ASCII. For these reasons, it is steadily becoming the preferred encoding for e-mail, web pages,[1] and other places where characters are stored or streamed.

UTF-8 encodes each character (code point) in 1 to 4 octets (8-bit bytes), with the single octet encoding used only for the 128 US-ASCII characters. See the Description section below for details.

The Internet Engineering Task Force (IETF) requires all Internet protocols to identify the encoding used for character data, and the supported character encodings must include UTF-8.[2] The Internet Mail Consortium (IMC) recommends that all email programs be able to display and create mail using UTF-8.[3]


History

By early 1992 the search was on for a good byte-stream encoding of multi-byte character sets. The draft ISO 10646 standard contained a non-required annex called UTF-1 that provided a byte-stream encoding of its 32-bit code points. This encoding was not satisfactory on performance grounds, but did introduce the notion that bytes in the ASCII range of 0–127 represent themselves in UTF, thereby providing backward compatibility.

In July 1992, the X/Open committee XoJIG was looking for a better encoding. Dave Prosser of Unix System Laboratories submitted a proposal for one that had faster implementation characteristics and introduced the improvement that 7-bit ASCII characters would only represent themselves; all multibyte sequences would include only 8-bit characters, i.e., those where the high bit was set.

In August 1992, this proposal was circulated by an IBM X/Open representative to interested parties. Ken Thompson of the Plan 9 operating system group at Bell Labs then made a crucial modification to the encoding to allow it to be self-synchronizing, meaning that it is not necessary to read from the beginning of the string in order to find code point boundaries. Thompson's design was outlined on September 2, 1992, on a placemat in a New Jersey diner with Rob Pike. In the following days, Pike and Thompson implemented it and updated Plan 9 to use it throughout, and then communicated their success back to X/Open.[4]

UTF-8 was first officially presented at the USENIX conference in San Diego, from January 25–29, 1993.

Description

The UTF-8 encoding is variable-width, using one to four bytes per character, with the upper bits of each byte reserved as control bits. The leading bits of the first byte indicate the total number of bytes in the sequence. The scalar value of a character's code point is the concatenation of the non-control bits. In the following table, x represents the lowest 8 bits of the Unicode value, y represents the next higher 8 bits, and z represents the bits higher than that.

Code point range   Byte 1    Byte 2    Byte 3    Byte 4
U+0000–U+007F      0xxxxxxx
U+0080–U+07FF      110yyyxx  10xxxxxx
U+0800–U+FFFF      1110yyyy  10yyyyxx  10xxxxxx
U+10000–U+10FFFF   11110zzz  10zzyyyy  10yyyyxx  10xxxxxx

Examples:
'$' U+0024   → 00100100                              → 0x24
'¢' U+00A2   → 11000010 10100010                     → 0xC2 0xA2
'€' U+20AC   → 11100010 10000010 10101100            → 0xE2 0x82 0xAC
    U+10ABCD → 11110100 10001010 10101111 10001101   → 0xF4 0x8A 0xAF 0x8D

So the first 128 characters (US-ASCII) need one byte. The next 1920 characters need two bytes to encode. This includes Latin letters with diacritics and characters from Greek, Cyrillic, Coptic, Armenian, Hebrew, Arabic, Syriac and Tāna alphabets. Three bytes are needed for the rest of the Basic Multilingual Plane (which contains virtually all characters in common use). Four bytes are needed for characters in the other planes of Unicode, which are rarely used in practice.
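
For illustration, the scheme above can be written out as a short encoder. The following Python sketch (the function name is invented for this illustration, and it does not reject surrogate code points) produces the byte sequence for a single code point:

    def encode_utf8(cp):
        """Encode a single code point (0 to 0x10FFFF) as UTF-8 bytes (illustrative only)."""
        if cp < 0x80:                        # one byte: 0xxxxxxx
            return bytes([cp])
        if cp < 0x800:                       # two bytes: 110xxxxx 10xxxxxx
            return bytes([0xC0 | (cp >> 6),
                          0x80 | (cp & 0x3F)])
        if cp < 0x10000:                     # three bytes: 1110xxxx 10xxxxxx 10xxxxxx
            return bytes([0xE0 | (cp >> 12),
                          0x80 | ((cp >> 6) & 0x3F),
                          0x80 | (cp & 0x3F)])
        if cp <= 0x10FFFF:                   # four bytes: 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
            return bytes([0xF0 | (cp >> 18),
                          0x80 | ((cp >> 12) & 0x3F),
                          0x80 | ((cp >> 6) & 0x3F),
                          0x80 | (cp & 0x3F)])
        raise ValueError("code point out of range")

    # encode_utf8(0x20AC) == b'\xe2\x82\xac', matching the '€' row of the table above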

By continuing the pattern given above it is possible to deal with much larger values: the original specification allowed for sequences of up to six bytes, covering values up to 31 bits (the original limit of the Universal Character Set). In November 2003, however, UTF-8 was restricted by RFC 3629 to the range covered by the formal Unicode definition, U+0000 to U+10FFFF.

With these restrictions, byte values in a UTF-8 sequence fall into the categories below. Some values can never appear in a legal UTF-8 sequence; the values 0x00–0x7F are complete characters by themselves; the remaining legal values may appear only as the first byte of a multi-byte sequence, or only as the second or later byte of one:

binary             hex    decimal  notes
00000000–01111111  00–7F  0–127    US-ASCII (single byte)
10000000–10111111  80–BF  128–191  Second, third, or fourth byte of a multi-byte sequence
11000000–11000001  C0–C1  192–193  Never legal: overlong encoding, start of a 2-byte sequence for a code point of 127 or below
11000010–11011111  C2–DF  194–223  Start of a 2-byte sequence
11100000–11101111  E0–EF  224–239  Start of a 3-byte sequence
11110000–11110100  F0–F4  240–244  Start of a 4-byte sequence
11110101–11110111  F5–F7  245–247  Restricted by RFC 3629: start of a 4-byte sequence for a code point above U+10FFFF
11111000–11111011  F8–FB  248–251  Restricted by RFC 3629: start of a 5-byte sequence
11111100–11111101  FC–FD  252–253  Restricted by RFC 3629: start of a 6-byte sequence
11111110–11111111  FE–FF  254–255  Invalid: not defined by the original UTF-8 specification
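
Read as a classification of individual byte values, the table can be expressed directly in code. A minimal Python sketch (illustrative only; the function name is invented):

    def classify_byte(b):
        """Classify a single byte value according to the table above (illustrative only)."""
        if b <= 0x7F:
            return "US-ASCII, a complete single-byte character"
        if b <= 0xBF:
            return "continuation byte (second, third or fourth byte of a sequence)"
        if b <= 0xC1:
            return "never legal: would start an overlong 2-byte sequence"
        if b <= 0xDF:
            return "start of a 2-byte sequence"
        if b <= 0xEF:
            return "start of a 3-byte sequence"
        if b <= 0xF4:
            return "start of a 4-byte sequence"
        return "never legal: restricted by RFC 3629 or undefined"

    # classify_byte(0xE2) == 'start of a 3-byte sequence'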

Invalid byte sequences

Not all sequences of bytes are valid UTF-8. A UTF-8 decoder should be prepared for:

  • a byte value that can never appear in legal UTF-8 (see the table above), or a continuation byte in a position where none is expected
  • a start byte not followed by enough continuation bytes
  • a sequence that decodes to a value that should use a shorter sequence (an "overlong form").

Many earlier decoders would happily try to decode these. Carefully crafted invalid UTF-8 could make such decoders skip a terminating NUL, produce a slash or NUL after those characters have been checked for, or conversely make one of a pair of matched quotes disappear. Invalid UTF-8 has been used to bypass security validations in high-profile products including Microsoft's IIS web server.[citation needed]

RFC 3629 states that "Implementations of the decoding algorithm MUST protect against decoding invalid sequences."[5] The Unicode Standard requires a Unicode-compliant decoder to "…treat any ill-formed code unit sequence as an error condition. This guarantees that it will neither interpret nor emit an ill-formed code unit sequence." This is often interpreted as a requirement to throw an exception when an invalid string is encountered, which is quite impractical in actual usage: a string comparison is much more usable if it returns false rather than throwing an error when one of the strings is invalid UTF-8; if output vanishes because of invalid UTF-8, figuring out what went wrong can be almost impossible; and a text editor that refuses to load invalid UTF-8 cannot be used to fix such a file.

Therefore, many modern UTF-8 converters translate errors to something "safe". Only one byte is changed into the error replacement, and parsing restarts at the next byte; otherwise, concatenating strings could change good characters into errors. Popular replacements for each byte are:

  • nothing (the bytes vanish)
  • '?' or '¿'
  • The replacement character '�' (U+FFFD)
  • The code point that the byte would represent in ISO-8859-1 or CP1252
  • An invalid Unicode code point, usually U+DCxx where xx is the byte's value

Replacing errors is "lossy": more than one UTF-8 string converts to the same Unicode result. This means that passing an invalid UTF-8 string through such a translator and then back again loses information, an undesirable and sometimes dangerous result. The best solution is to keep the text as UTF-8 and convert only for display. Another popular solution is to combine the U+DCxx replacement with the additional rule that the normal UTF-8 encoding of U+DCxx is itself invalid; however, this conflicts with the vast majority of existing UTF-8 converters and makes conversion to UTF-8 lossy.
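
As an illustration of this per-byte replacement approach (the sketch relies on Python's standard codec error handlers, which are not part of the UTF-8 specification itself; the example byte string is invented):

    data = b"abc\xc0\x80\xe2\x82\xacdef"            # contains the invalid overlong sequence 0xC0,0x80

    data.decode("utf-8", errors="replace")          # 'abc\ufffd\ufffd€def': one U+FFFD per offending byte
    data.decode("utf-8", errors="ignore")           # 'abc€def': the offending bytes simply vanish
    data.decode("utf-8", errors="surrogateescape")  # 'abc\udcc0\udc80€def': the U+DCxx convention

In each case the offending bytes are replaced (or dropped) one at a time and decoding resumes at the following byte, as described above.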

Invalid code points

UTF-8 may only legally be used to encode valid Unicode scalar values, that is, code points in the range U+0000 through U+D7FF and U+E000 through U+10FFFF inclusive (i.e. excluding the high and low surrogate code points used by UTF-16). Although it is possible to apply the UTF-8 transformation to surrogate code points in the range U+D800 through U+DFFF, as well as to code points higher than U+10FFFF (beyond the end of the Unicode code space), any such sequence is invalid UTF-8.

UTF-8 strings with any of the following characteristics may indicate that the source string has not been correctly converted, although only the first two points indicate an invalid UTF-8 string:

  • Paired surrogate characters (one of U+D800..U+DBFF followed by one of U+DC00..U+DFFF) may indicate that the string has been encoded as CESU-8
  • Unpaired surrogate characters (U+D800 through U+DFFF) may indicate that an invalid UTF-16 string has been encoded
  • U+FFFE (a noncharacter that is the reverse of the Byte-order mark) at the start of a string may indicate that a byte-swapped UTF-16 string has been encoded
  • U+0080 through U+009F may indicate that CP1252 was converted without first translating the characters to Unicode
  • U+0080 through U+009F and nothing greater than U+00FF may indicate double-converted UTF-8
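
For illustration, a partial sketch of such checks, operating on a sequence of code point values (the function name and messages are invented, and the paired-surrogate CESU-8 test from the first point is not attempted):

    def conversion_warnings(code_points):
        """Flag some of the indicators listed above, given a sequence of code point values (illustrative only)."""
        cps = list(code_points)
        findings = []
        if any(0xD800 <= c <= 0xDFFF for c in cps):
            findings.append("surrogate code points: possible CESU-8 or invalid UTF-16 source")
        if cps and cps[0] == 0xFFFE:
            findings.append("leading U+FFFE: possible byte-swapped UTF-16 source")
        if any(0x80 <= c <= 0x9F for c in cps):
            if all(c <= 0xFF for c in cps):
                findings.append("U+0080..U+009F and nothing above U+00FF: possible double-converted UTF-8")
            else:
                findings.append("U+0080..U+009F: possible CP1252 converted without translation to Unicode")
        return findings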

Official name and incorrect variants

The official name is "UTF-8". All letters are upper-case, and the name is hyphenated. This spelling is used in all the documents relating to the encoding.

Alternatively, the name "utf-8" may be used by all standards conforming to the Internet Assigned Numbers Authority (IANA) list[6] (which include CSS, HTML, XML, and HTTP headers[7]), as the declaration is case insensitive.

Other descriptions that omit the hyphen or replace it with a space, such as "utf8" or "UTF 8", are incorrect and should be avoided. Despite this, most agents such as browsers can understand them.

UTF-8 derivations

The following implementations differ slightly from the UTF-8 specification and are incompatible with it.

CESU-8

Many pieces of software added UTF-8 conversions for UCS-2 data and did not alter their UTF-8 conversion when UCS-2 was replaced by the surrogate-pair-supporting UTF-16. The result is that each half of a UTF-16 surrogate pair is encoded as its own 3-byte UTF-8 sequence, producing 6 bytes rather than 4 for characters outside the Basic Multilingual Plane. Oracle databases use this encoding, as do Java and Tcl as described below, and probably a great deal of other Windows software whose programmers were unaware of the complexities of UTF-16. Although most such usage is accidental, a supposed benefit is that it preserves UTF-16 binary sorting order when CESU-8 is binary sorted.
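
A sketch of the transformation (illustrative only; the function names are invented): the code point is first split into a UTF-16 surrogate pair, and each surrogate is then encoded as if it were an ordinary 3-byte UTF-8 character:

    def cesu8_encode_supplementary(cp):
        """Encode a code point above U+FFFF the CESU-8 way: 6 bytes instead of 4 (illustrative only)."""
        v = cp - 0x10000
        high = 0xD800 + (v >> 10)            # UTF-16 high surrogate
        low = 0xDC00 + (v & 0x3FF)           # UTF-16 low surrogate

        def three_byte(u):                   # the ordinary 3-byte UTF-8 pattern
            return bytes([0xE0 | (u >> 12),
                          0x80 | ((u >> 6) & 0x3F),
                          0x80 | (u & 0x3F)])

        return three_byte(high) + three_byte(low)

    # cesu8_encode_supplementary(0x10400) == b'\xed\xa0\x81\xed\xb0\x80'
    # standard UTF-8 encodes U+10400 as the 4 bytes b'\xf0\x90\x90\x80'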

Modified UTF-8

In Modified UTF-8[8] the null character (U+0000) is encoded as 0xC0,0x80 rather than 0x00 (this is not valid UTF-8[9] because it is not the shortest possible representation). Modified UTF-8 strings therefore never contain any null bytes,[10] which allows them (with a NUL added to the end) to be processed by the traditional ASCIIZ string functions, while still allowing all Unicode values, including U+0000, to appear in the string.

All known Modified UTF-8 implementations also treat the surrogate pairs as in CESU-8.

In normal usage, the Java programming language supports standard UTF-8 when reading and writing strings through InputStreamReader and OutputStreamWriter. However, it uses Modified UTF-8 for object serialization,[11] for the Java Native Interface,[12] and for embedding constant strings in class files.[13] Tcl uses the same modified UTF-8[14] as Java for the internal representation of Unicode data.
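
A minimal sketch of the null-byte aspect (illustrative only; a complete Modified UTF-8 encoder would also apply the CESU-8 treatment of supplementary characters described above):

    def modified_utf8(text):
        """Encode text as UTF-8, then re-encode U+0000 as the two bytes 0xC0,0x80 (illustrative only).
        Unlike a full Modified UTF-8 encoder, supplementary characters are left in their standard 4-byte form."""
        return text.encode("utf-8").replace(b"\x00", b"\xc0\x80")

    # modified_utf8("a\x00b") == b'a\xc0\x80b'; no 0x00 byte appears in the result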

Byte-order mark

Many Windows programs (including Windows Notepad) add the bytes 0xEF,0xBB,0xBF at the start of any document saved as UTF-8. This is the UTF-8 encoding of the Unicode byte-order mark, and is commonly referred to as a UTF-8 BOM even though it is not relevant to byte order. The BOM can also appear if another encoding with a BOM is translated to UTF-8 without stripping it.
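
For illustration only (the helper name below is invented), a minimal sketch of detecting and stripping this three-byte signature:

    UTF8_BOM = b"\xef\xbb\xbf"               # the UTF-8 encoding of U+FEFF

    def strip_utf8_bom(data):
        """Remove a leading UTF-8 BOM, if present, and report whether one was found (illustrative only)."""
        if data.startswith(UTF8_BOM):
            return data[len(UTF8_BOM):], True
        return data, False

    # strip_utf8_bom(b"\xef\xbb\xbfhello") == (b"hello", True)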

The presence of the UTF-8 BOM may cause interoperability problems with existing software that could otherwise handle UTF-8, for example:

  • Older text editors may display the BOM as "" at the start of the document, even if the UTF-8 file contains only ASCII and would otherwise display correctly.
  • Programming language parsers can often handle UTF-8 in string constants and comments, but cannot parse the BOM at the start of the file.
  • Programs that identify file types by leading characters may fail to identify the file if a BOM is present, even if the consumer of the file could otherwise skip the BOM; conversely, they may identify a file that the consumer cannot handle because of the BOM (which is why this is not as easy to fix as some believe). An example is the Unix shebang syntax.
  • Programs that insert information at the start of a file will produce a file with the BOM somewhere in the middle (this is also a problem with the UTF-16 BOM). One example is offline browsers that add the originating URL to the start of the file.

If compatibility with existing programs is not important, the BOM could be used to identify whether a file is UTF-8 or a legacy encoding, but this is still problematic because the BOM is often added or removed without the encoding actually changing, and because files in different encodings may be concatenated together.

Advantages and disadvantages

General

Advantages

  • The ASCII characters are represented by themselves as single bytes that do not appear anywhere else, which makes UTF-8 work with the majority of existing APIs that take byte strings but treat only a small number of ASCII codes specially. This removes the need to write a new Unicode version of every API, and makes it much easier to convert existing systems to UTF-8 than to any other Unicode encoding.
  • UTF-8 and UTF-16 are the standard encodings for XML documents. All other encodings must be specified explicitly either externally or through a text declaration.[15]
  • UTF-8 and UTF-16 are the standard encodings for having Unicode in HTML documents, with UTF-8 as the preferred and most used encoding.
  • UTF-8 strings can be fairly reliably recognized as such by a simple algorithm (see the W3 FAQ: Multilingual Forms for a Perl regular expression to validate a UTF-8 string; a minimal sketch also follows this list). The chance of a random string of bytes being valid UTF-8 and not pure ASCII is 3.9% for a two-byte sequence, 0.41% for a three-byte sequence and 0.026% for a four-byte sequence.[16] Natural-language text in traditional encodings is not a random byte sequence, but such text is even less likely to pass a UTF-8 validity test and then be misinterpreted: for ISO/IEC 8859-1 text to be mis-recognized as UTF-8, the only non-ASCII characters in it would have to be in sequences starting with either an accented letter or the multiplication symbol and ending with a symbol. Most other encodings lack this property, so an encoding that is not stated in the file and is wrongly guessed causes errors (mojibake).
  • Sorting of UTF-8 strings as arrays of unsigned bytes will produce the same results as sorting them based on Unicode code points.
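
A minimal recognition sketch in this spirit (an illustration only: it simply attempts a strict decode using Python's built-in UTF-8 codec rather than the Perl regular expression mentioned above):

    def looks_like_utf8(data):
        """Return True if the byte string decodes as strict UTF-8 (illustrative only)."""
        try:
            data.decode("utf-8")
            return True
        except UnicodeDecodeError:
            return False

    # looks_like_utf8(b"caf\xc3\xa9") is True; looks_like_utf8(b"caf\xe9") (Latin-1 bytes) is False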

Disadvantages

  • A badly-written (and not compliant with current versions of the standard) UTF-8 parser could accept a number of different pseudo-UTF-8 representations and convert them to the same Unicode output. This provides a way for information to leak past validation routines designed to process data in its eight-bit representation.

Compared to single-byte encodings

Advantages

  • UTF-8 can encode any Unicode character, avoiding the need to figure out and set a "code page" or otherwise indicate what character set is in use, and allowing output in multiple languages at the same time. For many languages there has been more than one single-byte encoding in use, causing errors and the need to set code pages manually. Software also often defaulted to Latin-1, which is not appropriate for many languages. These problems should disappear as Unicode support becomes universal.
  • The bytes 0xfe and 0xff do not appear, so a UTF-8 stream never matches the UTF-16 byte-order mark and thus cannot be confused with it.

Disadvantages

  • UTF-8 encoded text is larger than the appropriate single-byte encoding, except for plain ASCII characters. For languages which commonly used 8-bit character sets with non-Latin alphabets encoded in the upper half (such as most Cyrillic and Greek code pages), UTF-8 text is almost double the size of the same text in a single-byte encoding. For the scripts of South and Southeast Asia, such as Devanagari (used for Hindi) and Thai, the text size is up to three times that of traditional single-byte encodings. This has caused objections in India and other countries.
  • Latin-1 (or extensions such as Windows-1252) worked very well for the languages it supported, for example German and Spanish. When Unicode and UTF-8 were introduced before all software supported them, erroneous characters suddenly appeared.
  • Cutting a string is harder if the breaks are required to fall between characters rather than between bytes.
  • Performance is worse than with single-byte encodings, sometimes much worse. Skipping ahead a specific number of characters no longer means merely advancing a pointer by that number of bytes, but requires reading through all the intervening data.

Compared to other multi-byte encodings

Advantages

  • UTF-8 can encode any Unicode character, avoiding the need to figure out and set a "code page" or otherwise indicate what character set is in use, and allowing output in multiple languages at the same time.
  • UTF-8 is a worldwide standard, so languages that require multi-byte characters, such as Chinese, Japanese and Korean, are supported by any UTF-8-compliant software, whereas the older multi-byte encodings used for these languages, such as Shift JIS, are usually not supported by software written in other countries. Setting a code page and a font may be enough for many languages, but these languages required special encodings and handling.
  • Because the start and continuation bytes form disjoint sets, UTF-8 is "self-synchronizing": character boundaries are easily found when searching either forwards or backwards (a sketch follows this list). If bytes are lost due to error or corruption, one can always locate the beginning of the next character and thus limit the damage. Many multi-byte encodings are much harder to resynchronize.
  • Any byte oriented string searching algorithm can be used with UTF-8 data, since no byte sequence of one character is contained within another character or in a sequence of other characters. Some older variable-length encodings (such as Shift JIS) did not have this property and thus made string-matching algorithms rather complicated.
  • The first byte of a multi-byte sequence is enough to determine the length of the multi-byte sequence. This makes it extremely simple to extract a sub-string from a given string without elaborate parsing. This was often not the case in many multi-byte encodings.
  • Encoding is efficient, using only simple bit operations; UTF-8 does not require slower mathematical operations such as multiplication or division (unlike the obsolete UTF-1 encoding).
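
As a sketch of the self-synchronization property mentioned above (illustrative only; it assumes the input is valid UTF-8), the start of the character containing any given byte can be found by scanning backwards past continuation bytes:

    def character_start(data, i):
        """Index of the first byte of the UTF-8 character containing data[i].
        Assumes data is valid UTF-8 (illustrative only)."""
        while i > 0 and (data[i] & 0xC0) == 0x80:   # 10xxxxxx marks a continuation byte
            i -= 1
        return i

    # character_start("€uro".encode("utf-8"), 2) == 0, since byte 2 is inside the 3-byte '€'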

Disadvantages

  • UTF-8 often takes more space than an encoding made for one or a few languages. Latin letters with diacritics and characters from other alphabetic scripts typically take one byte per character in the appropriate multi-byte encoding but take two in UTF-8. East Asian scripts generally have two bytes per character in their multi-byte encodings yet take three bytes per character in UTF-8.

Compared to UTF-7

Advantages

  • UTF-8 uses significantly fewer bytes per character for all non-ASCII characters.
  • UTF-8 encodes "+" as itself whereas UTF-7 encodes it as "+-".

Disadvantages

  • UTF-8 requires the transmission system to be 8-bit clean. In the case of e-mail this means it must in some cases be further encoded using quoted-printable or base64, and this extra stage of encoding carries a significant size penalty. The importance of this disadvantage has declined as support among mail transfer agents for eight-bit clean transport and the 8BITMIME SMTP extension, specified in RFC 1869, has grown.

Compared to UTF-16

Advantages

  • Converting to UTF-16 while maintaining compatibility with existing programs (such as was done with Windows) requires every API and data structure that takes a string to be duplicated. Handling of invalid encodings in each API makes this much more difficult than it may first appear.
  • Invalid UTF-8 cannot be losslessly converted to UTF-16, but invalid UTF-16 can be losslessly converted to UTF-8. This makes UTF-8 the only safe way to hold text if errors are to be preserved. This turns out to be surprisingly important in practice.
  • In UTF-8, characters outside the basic multilingual plane are not a special case. UTF-16 is often mistaken for the obsolete constant-length UCS-2 encoding, leading to code that works for most text but suddenly fails for non-BMP characters. It is better to implement support for the entire range of Unicode from the start.
  • Text that is mostly ASCII characters will be around half the size in UTF-8. Text in all languages using codepoints below U+0800 (which includes all modern European languages) will be smaller in UTF-8 due to the presence of spaces, newlines, numbers, and ASCII punctuation, all of which are encoded in one byte per character.
  • Most communication and storage protocols were designed for a stream of bytes. A UTF-16 string must use a pair of bytes for each code unit, which introduces a couple of potential problems:
    • The order of those two bytes becomes an issue. One can say that UTF-16 has two variants when used for text files. A variety of mechanisms can be used to deal with this issue (for example, the byte-order mark), but they still present an added complication for software and protocol design.
    • If a byte is missing from a character in UTF-16, the whole rest of the string will be meaningless text (unless a surrogate half is produced which could indicate something is wrong).

Disadvantages

  • Characters U+0800 through U+FFFF use three bytes in UTF-8, but only two in UTF-16. As a result, text in (for example) Chinese, Japanese or Hindi takes more space in UTF-8 if there are more of these characters than there are ASCII characters. However, ASCII characters (including spaces, numbers, newlines, some punctuation, and XML markup) are common in practice, so it is not unusual for them to dominate; for example, both the Japanese and the Korean UTF-8 articles on Wikipedia take more space if saved as UTF-16 than in the original UTF-8.[17]
  • A simplistic parser for UTF-16 is unlikely to convert invalid sequences to ASCII. Since the dangerous characters in most situations are ASCII, a simplistic UTF-16 parser is much less dangerous than a simplistic UTF-8 parser.
  • In UCS-2 (but not UTF-16) Unicode code points are all the same size, making measurements of a fixed number of them easy. Most people who consider this important are misled by old documentation written for ASCII, where "character" was used as a synonym for "byte". In fact, if strings are measured in bytes or 16-bit words instead of "characters", most algorithms can be easily and efficiently adapted to UTF-8 or UTF-16.


References

  1. ^ "Moving to Unicode 5.1". Official Google Blog. May 5 2008. Retrieved 2008-05-08. {{cite web}}: Check date values in: |date= (help)
  2. ^ Alvestrand, H. (1998), "IETF Policy on Character Sets and Languages", RFC 2277, Internet Engineering Task Force
  3. ^ "Using International Characters in Internet Mail". Internet Mail Consortium. August 1 1998. Retrieved 2007-11-08. {{cite web}}: Check date values in: |date= (help)
  4. ^ Pike, Rob (2003-04-03). "UTF-8 history".
  5. ^ Yergeau, F. (2003), "UTF-8, a transformation format of ISO 10646", RFC 3629, Internet Engineering Task Force
  6. ^ Internet Assigned Numbers Authority Character Sets
  7. ^ W3C: Setting the HTTP charset parameter notes that the IANA list is used for HTTP
  8. ^ "Java SE 6 documentation for Interface java.io.DataInput, subsection on Modified UTF-8". Sun Microsystems. 2008. Retrieved 2009-05-22.
  9. ^ "[...] the overlong UTF-8 sequence C0 80 [...]", "[...] the illegal two-octet sequence C0 80 [...]" "Request for Comments 3629: "UTF-8, a transformation format of ISO 10646"". 2003. Retrieved 2009-05-22.
  10. ^ "[...] Java virtual machine UTF-8 strings never have embedded nulls." "The Java Virtual Machine Specification, 2nd Edition, section 4.4.7: "The CONSTANT_Utf8_info Structure"". Sun Microsystems. 1999. Retrieved 2009-05-24.
  11. ^ "[...] encoded in modified UTF-8." "Java Object Serialization Specification, chapter 6: Object Serialization Stream Protocol, section 2: Stream Elements". Sun Microsystems. 2005. Retrieved 2009-05-22.
  12. ^ "The JNI uses modified UTF-8 strings to represent various string types." "Java Native Interface Specification, chapter 3: JNI Types and Data Structures, section: Modified UTF-8 Strings". Sun Microsystems. 2003. Retrieved 2009-05-22.
  13. ^ "[...] differences between this format and the "standard" UTF-8 format." "The Java Virtual Machine Specification, 2nd Edition, section 4.4.7: "The CONSTANT_Utf8_info Structure"". Sun Microsystems. 1999. Retrieved 2009-05-23.
  14. ^ "In orthodox UTF-8, a NUL byte(\x00) is represented by a NUL byte. [...] But [...] we [...] want NUL bytes inside [...] strings [...]" "Tcler's Wiki: UTF-8 bit by bit (Revision 6)". 2009-04-25. Retrieved 2009-05-22.
  15. ^ http://www.w3.org/TR/REC-xml/#charencoding
  16. ^ There are 256 × 256 − 128 × 128 not-pure-ASCII two-byte sequences, and of those, only 1920 encode valid UTF-8 characters (the range U+0080 to U+07FF), so the proportion of valid not-pure-ASCII two-byte sequences is 3.9%. Similarly, there are 256 × 256 × 256 − 128 × 128 × 128 not-pure-ASCII three-byte sequences, and 61,406 valid three-byte UTF-8 sequences (U+0800 to U+FFFF minus surrogate pairs and non-characters), so the proportion is 0.41%; finally, there are 256⁴ − 128⁴ non-ASCII four-byte sequences, and 1,048,544 valid four-byte UTF-8 sequences (U+10000 to U+10FFFF minus non-characters), so the proportion is 0.026%. Note that this assumes that control characters pass as ASCII; without the control characters, the proportions drop somewhat.
  17. ^ The version from 2009-04-27 of ja:UTF-8 needed 50 kB when saved as UTF-8, but when converted to UTF-16 (with Notepad) it took 81 kB, with a similar result for the Korean article. This should be done with something other than Notepad, with a program that does not mangle newlines.[clarification needed]

There are several current definitions of UTF-8 in various standards documents:

  • RFC 3629 / STD 63 (2003), which establishes UTF-8 as a standard Internet protocol element
  • The Unicode Standard, Version 5.0, §3.9 D92, §3.10 D95 (2007)
  • The Unicode Standard, Version 4.0, §3.9–§3.10 (2003)
  • ISO/IEC 10646:2003 Annex D (2003)

They supersede the definitions given in the following obsolete works:

  • ISO/IEC 10646-1:1993 Amendment 2 / Annex R (1996)
  • The Unicode Standard, Version 2.0, Appendix A (1996)
  • RFC 2044 (1996)
  • RFC 2279 (1998)
  • The Unicode Standard, Version 3.0, §2.3 (2000) plus Corrigendum #1 : UTF-8 Shortest Form (2000)
  • Unicode Standard Annex #27: Unicode 3.1 (2001)

They are all the same in their general mechanics, with the main differences being on issues such as allowed range of code point values and safe handling of invalid input.