
UTF-8


From Wikipedia, the free encyclopedia

UTF-8 (8-bit UCS/Unicode Transformation Format) is a variable-length character encoding for Unicode. It is able to represent any character in the Unicode standard, yet is backwards compatible with ASCII. For these reasons, it is steadily becoming the preferred encoding for e-mail, web pages,[1][2] and other places where characters are stored or streamed.

UTF-8 encodes each character (code point) in 1 to 4 octets (8-bit bytes), with the single octet encoding used only for the 128 US-ASCII characters.

The Internet Engineering Task Force (IETF) requires all Internet protocols to identify the encoding used for character data, and the supported character encodings must include UTF-8.[3] The Internet Mail Consortium (IMC) recommends that all e-mail programs be able to display and create mail using UTF-8.[4]

Contents

1 History
2 Description
   2.1 Invalid byte sequences
   2.2 Invalid code points
3 Official name and incorrect variants
4 UTF-8 derivations
   4.1 CESU-8
   4.2 Modified UTF-8
5 Byte-order mark
6 Precomposition and Decomposition
7 Advantages and disadvantages
   7.1 General
       7.1.1 Advantages
       7.1.2 Disadvantages
   7.2 Compared to single-byte encodings
       7.2.1 Advantages
       7.2.2 Disadvantages
   7.3 Compared to other multi-byte encodings
       7.3.1 Advantages
       7.3.2 Disadvantages
   7.4 Compared to UTF-16
       7.4.1 Advantages
       7.4.2 Disadvantages
8 See also
9 References
10 External links

History


By early 1992 the search was on for a good byte-stream encoding of multi-byte character sets. The draft ISO 10646 standard contained a non-required annex called UTF that provided a byte-stream encoding of its 32-bit code points. This encoding was not satisfactory on performance grounds, but did introduce the notion that bytes in the ASCII range of 0–127 represent themselves in UTF, thereby providing backward compatibility.

In July 1992, the X/Open committee XoJIG was looking for a better encoding. Dave Prosser of Unix System Laboratories submitted a proposal for one that had faster implementation characteristics and introduced the improvement that 7-bit ASCII characters would only represent themselves; all multibyte sequences would include only bytes where the high bit was set.

In August 1992, this proposal was circulated by an IBM X/Open representative to interested parties. Ken Thompson of the Plan 9 operating system group at Bell Labs then made a crucial modification to the encoding to allow it to be self-synchronizing, meaning that it was not necessary to read from the beginning of the string to find code point boundaries. Thompson's design was outlined on September 2, 1992, on a placemat in a New Jersey diner with Rob Pike. Over the following days, Pike and Thompson implemented it and updated Plan 9 to use it throughout, and then communicated their success back to X/Open.[5]

UTF-8 was first officially presented at the USENIX conference in San Diego, held January 25–29, 1993.

Description

The UTF-8 encoding is variable-width, ranging from 1 to 4 bytes. Each byte has 0–4 leading consecutive 1 bits followed by a zero bit to indicate its type. N 1 bits indicate the first byte in an N-byte sequence, with the exception that zero 1 bits indicates a one-byte sequence while one 1 bit indicates a continuation byte in a multi-byte sequence (this was done for ASCII compatibility). The scalar value of the Unicode code point is the concatenation of the non-control bits. In the table below, zeroes and ones represent control bits, x-s represent the lowest 8 bits of the Unicode value, y-s represent the next higher 8 bits, and z-s represent the bits higher than that.

Unicode range       Byte 1    Byte 2    Byte 3    Byte 4
U+0000–U+007F       0xxxxxxx
  Example: '$' U+0024 → 00100100 → 0x24
U+0080–U+07FF       110yyyxx  10xxxxxx
  Example: '¢' U+00A2 → 11000010,10100010 → 0xC2,0xA2
U+0800–U+FFFF       1110yyyy  10yyyyxx  10xxxxxx
  Example: '€' U+20AC → 11100010,10000010,10101100 → 0xE2,0x82,0xAC
U+10000–U+10FFFF    11110zzz  10zzyyyy  10yyyyxx  10xxxxxx
  Example: '𤭢' U+24B62 → 11110000,10100100,10101101,10100010 → 0xF0,0xA4,0xAD,0xA2

So the first 128 characters (US-ASCII) need one byte. The next 1,920 characters need two bytes to encode. This includes Latin letters with diacritics and characters from Greek, Cyrillic, Coptic, Armenian, Hebrew, Arabic, Syriac and Tāna alphabets. Three bytes are needed for the rest of the Basic Multilingual Plane (which contains virtually all characters in common use). Four bytes are needed for characters in the other planes of Unicode, which include less common CJK characters and various historic scripts.
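
To make the bit layout concrete, here is a minimal Python sketch (not part of the original article; the helper name encode_codepoint is invented for illustration) that builds the byte sequences from the table by hand and checks them against the examples above. Python's built-in str.encode('utf-8') does the same work; the point is only to show how the control bits and payload bits combine.

    def encode_codepoint(cp: int) -> bytes:
        """Encode one Unicode code point (0..0x10FFFF) as UTF-8, per the table above.
        A full encoder would also reject the surrogate range U+D800-U+DFFF."""
        if cp < 0x80:                      # 1 byte: 0xxxxxxx
            return bytes([cp])
        if cp < 0x800:                     # 2 bytes: 110xxxxx 10xxxxxx
            return bytes([0xC0 | (cp >> 6),
                          0x80 | (cp & 0x3F)])
        if cp < 0x10000:                   # 3 bytes: 1110xxxx 10xxxxxx 10xxxxxx
            return bytes([0xE0 | (cp >> 12),
                          0x80 | ((cp >> 6) & 0x3F),
                          0x80 | (cp & 0x3F)])
        if cp < 0x110000:                  # 4 bytes: 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
            return bytes([0xF0 | (cp >> 18),
                          0x80 | ((cp >> 12) & 0x3F),
                          0x80 | ((cp >> 6) & 0x3F),
                          0x80 | (cp & 0x3F)])
        raise ValueError("code point out of Unicode range")

    assert encode_codepoint(0x24) == b'\x24'                 # '$'
    assert encode_codepoint(0xA2) == b'\xc2\xa2'             # '¢'
    assert encode_codepoint(0x20AC) == b'\xe2\x82\xac'       # '€'
    assert encode_codepoint(0x24B62) == b'\xf0\xa4\xad\xa2'  # '𤭢'
    assert encode_codepoint(0x20AC) == '€'.encode('utf-8')   # matches the built-in codec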

By continuing the pattern given above it is possible to deal with much larger numbers. The original specification allowed for sequences of up to six bytes covering numbers up to 31 bits (the original limit of the Universal Character Set). However, in November 2003 UTF-8 was restricted by RFC 3629 to use only the area covered by the formal Unicode definition, U+0000 to U+10FFFF.

With these restrictions, bytes in a UTF-8 sequence have the following meanings. Some values can never appear in a legal UTF-8 sequence. Others represent a character in a single byte, may only appear as the first byte of a multi-byte sequence, or may only appear as the second or later byte of a multi-byte sequence, as noted in the table:

Binary              Hex    Decimal  Notes
00000000–01111111   00–7F  0–127    US-ASCII (single byte)
10000000–10111111   80–BF  128–191  Second, third, or fourth byte of a multi-byte sequence
11000000–11000001   C0–C1  192–193  Overlong encoding: start of a 2-byte sequence, but code point ≤ 127
11000010–11011111   C2–DF  194–223  Start of a 2-byte sequence
11100000–11101111   E0–EF  224–239  Start of a 3-byte sequence
11110000–11110100   F0–F4  240–244  Start of a 4-byte sequence
11110101–11110111   F5–F7  245–247  Restricted by RFC 3629: start of a 4-byte sequence for a code point above U+10FFFF
11111000–11111011   F8–FB  248–251  Restricted by RFC 3629: start of a 5-byte sequence
11111100–11111101   FC–FD  252–253  Restricted by RFC 3629: start of a 6-byte sequence
11111110–11111111   FE–FF  254–255  Invalid: not defined by the original UTF-8 specification

Invalid byte sequences

Not all sequences of bytes are valid UTF-8. A UTF-8 decoder should be prepared for:

- the invalid bytes in the above table
- an unexpected continuation byte
- a start byte not followed by enough continuation bytes
- a sequence that decodes to a value that should use a shorter sequence (an "overlong form")

Many earlier decoders would happily try to decode these. Carefully crafted invalid UTF-8 could make them either skip or create ASCII characters such as NUL, slash, or quotes. Invalid UTF-8 has been used to bypass security validations in high-profile products including Microsoft's IIS web server.[6]

RFC 3629 states "Implementations of the decoding algorithm MUST protect against decoding invalid sequences."[7] The Unicode Standard requires decoders to "...treat any ill-formed code unit sequence as an error condition. This guarantees that it will neither interpret nor emit an ill-formed code unit sequence." Many UTF-8 decoders throw an exception if a string has an error in it. In recent times this has been found to be impractical: being unable to work with data means you cannot even try to fix it. One example was Python 3.0, which would exit immediately if the command line had invalid UTF-8 in it.[8] A more useful solution is to translate the first byte to a replacement and continue parsing with the next byte. Popular replacements are:

- The replacement character '�' (U+FFFD)
- The '?' or '¿' character (U+003F or U+00BF)
- The invalid Unicode code points U+DC80..U+DCFF, where the low 8 bits are the byte's value
- Interpret the bytes according to another encoding (often ISO-8859-1 or CP1252)

Replacing errors is "lossy": more than one UTF-8 string converts to the same Unicode result. Therefore the original UTF-8 should be stored, and translation should only be used when displaying the text to the user.
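
As an illustration of these strategies (not something the article prescribes), Python's codec error handlers map onto two of the listed replacements: errors='replace' substitutes U+FFFD, and errors='surrogateescape' uses the U+DC80..U+DCFF scheme so the original bytes can be recovered.

    bad = b'abc\xe2\x82def'   # a truncated 3-byte sequence (start of '€') followed by ASCII

    # Strict decoding treats the invalid sequence as an error.
    try:
        bad.decode('utf-8')
    except UnicodeDecodeError as e:
        print(e)                                    # reports the invalid continuation byte

    # Replace undecodable bytes with U+FFFD and keep going.
    print(bad.decode('utf-8', errors='replace'))    # invalid bytes become '�'

    # Map each bad byte to U+DC80..U+DCFF so the original bytes can be recovered later.
    text = bad.decode('utf-8', errors='surrogateescape')
    assert text.encode('utf-8', errors='surrogateescape') == bad   # round-trips losslessly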


Invalid code points

UTF-8 may only legally be used to encode valid Unicode scalar values. According to the Unicode standard the high and low surrogate halves used by UTF-16 (U+D800 through U+DFFF) and values above U+10FFFF are not legal Unicode values, and the UTF-8 encoding of them is an invalid byte sequence and should be treated as described above.

Whether an actual application should treat these as invalid is questionable. Allowing them permits lossless conversion of an invalid UTF-16 string and allows CESU-8 encoding (described below) to be decoded. There are other code points that are far more important to detect and reject, such as the reversed BOM U+FFFE, or the codes U+0080..U+00AF which may indicate improperly translated CP1252 or double-encoded UTF-8.
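
For example, Python's strict UTF-8 codec (used here purely as an illustration) rejects lone surrogates, while its 'surrogatepass' error handler opts in to encoding them for round-tripping scenarios like the lossless-conversion case described above.

    lone = '\ud800'                      # an unpaired high surrogate

    try:
        lone.encode('utf-8')             # strict UTF-8: surrogates are not valid scalar values
    except UnicodeEncodeError as e:
        print(e)

    # Opting in: encode the surrogate's code point anyway (what CESU-8 does for each half).
    raw = lone.encode('utf-8', errors='surrogatepass')
    print(raw.hex())                     # 'eda080', a 3-byte sequence a strict decoder rejects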

Official name and incorrect variants

The official name is "UTF-8". All letters are upper-case, and the name is hyphenated. This spelling is used in all the documents relating to the encoding.

Alternatively, the name "utf-8" may be used by all standards conforming to the Internet Assigned Numbers Authority (IANA) list[9] (which include CSS, HTML, XML, and HTTP headers),[10] as the declaration is case insensitive.

Other descriptions that omit the hyphen or replace it with a space, such as "utf8" or "UTF 8", are incorrect and should be avoided. Despite this, most agents such as browsers can understand them.

UTF-8 derivations

The following implementations differ slightly from the UTF-8 specification and are incompatible with it.

CESU-8

Main article: CESU-8

Many pieces of software added UTF-8 conversions for UCS-2 data and did not alter their UTF-8 conversion when UCS-2 was replaced with the surrogate-pair supporting UTF-16. The result is that each half of a UTF-16 surrogate pair is encoded as its own 3-byte UTF-8 encoding, resulting in 6 bytes rather than 4 for characters outside the Basic Multilingual Plane. Oracle databases use this, as well as Java and Tcl as described below, and probably a great deal of other Windows software where the programmers were unaware of the complexities of UTF-16. Although most usage is by accident, a supposed benefit is that this preserves UTF-16 binary sorting order when CESU-8 is binary sorted.

Modified UTF-8

In Modified UTF-8[11] the null character (U+0000) is encoded as 0xC0,0x80 rather than 0x00, which is not valid UTF-8[12] because it is not the shortest possible representation. Modified UTF-8 strings will never contain any null bytes,[13] which allows them (with a null byte added to the end) to be processed by the traditional ASCIIZ string functions, yet allows all Unicode values including U+0000 to be in the string.

All known Modified UTF-8 implementations also treat the surrogate pairs as in CESU-8.

In normal usage, the Java programming language supports standard UTF-8 when reading and writing strings through InputStreamReader (http://java.sun.com/javase/6/docs/api/java/io/InputStreamReader.html) and OutputStreamWriter (http://java.sun.com/javase/6/docs/api/java/io/OutputStreamWriter.html). However, it uses Modified UTF-8 for object serialization,[14] for the Java Native Interface,[15] and for embedding constant strings in class files.[16] Tcl also uses the same Modified UTF-8[17] as Java for the internal representation of Unicode data.
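
To make the difference from standard UTF-8 concrete, here is a rough Python sketch (the helper modified_utf8 is invented for illustration and is not taken from the Java or Tcl sources cited above): U+0000 becomes the overlong pair 0xC0,0x80, and a supplementary character becomes six bytes via a CESU-8-style surrogate pair.

    def modified_utf8(s: str) -> bytes:
        """Sketch: U+0000 -> C0 80, non-BMP code points -> surrogate pairs, 3 bytes each."""
        out = bytearray()
        for ch in s:
            cp = ord(ch)
            if cp == 0:
                out += b'\xc0\x80'                  # overlong NUL, so no 0x00 byte appears
            elif cp < 0x10000:
                out += ch.encode('utf-8')           # BMP characters: standard UTF-8
            else:
                cp -= 0x10000                       # split into a UTF-16 surrogate pair,
                hi = 0xD800 + (cp >> 10)            # then encode each half as 3 bytes
                lo = 0xDC00 + (cp & 0x3FF)
                out += chr(hi).encode('utf-8', 'surrogatepass')
                out += chr(lo).encode('utf-8', 'surrogatepass')
        return bytes(out)

    # 'A' -> 41, NUL -> c0 80, U+24B62 -> ed a1 92 ed bd a2 (6 bytes, none of them 0x00)
    print(modified_utf8('A\x00𤭢').hex())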

Byte-order mark

Many Windows programs (including Windows Notepad) add the bytes 0xEF,0xBB,0xBF at the start of any document saved as UTF-8. This is the UTF-8 encoding of the Unicode byte-order mark (BOM), and is commonly referred to as a UTF-8 BOM even though it is not relevant to byte order. The BOM can also appear if another encoding with a BOM is translated to UTF-8 without stripping it.

The presence of the UTF-8 BOM may cause interoperability problems with existing software that could otherwise handle UTF-8, for example:

- Older text editors may display the BOM as "" at the start of the document, even if the UTF-8 file contains only ASCII and would otherwise display correctly.
- Programming language parsers can often handle UTF-8 in string constants and comments, but cannot parse the BOM at the start of the file.
- Programs that identify file types by leading characters may fail to identify the file if a BOM is present, even if the user of the file could skip the BOM. Conversely, they may identify the file when the user cannot handle the BOM. An example is the Unix shebang syntax.
- Programs that insert information at the start of a file will result in a file with the BOM somewhere in the middle of it (this is also a problem with the UTF-16 BOM). One example is offline browsers that add the originating URL to the start of the file.

If compatibility with existing programs is not important, the BOM could be used to identify whether a file is UTF-8 versus a legacy encoding, but this is still problematic due to many instances where the BOM is added or removed without actually changing the encoding, or where various encodings are concatenated together. Checking whether the text is valid UTF-8 is more reliable than using a BOM.
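
As an illustration (using Python's standard library, which is not mentioned in the article), the BOM can be detected and stripped explicitly; the 'utf-8-sig' codec tolerates a leading BOM, while the plain 'utf-8' codec keeps it as U+FEFF.

    import codecs

    data = codecs.BOM_UTF8 + 'print("hi")'.encode('utf-8')   # a file body starting with EF BB BF

    print(data[:3].hex())                        # 'efbbbf'
    print(data.decode('utf-8')[0] == '\ufeff')   # True: plain 'utf-8' keeps the BOM as U+FEFF
    print(data.decode('utf-8-sig'))              # 'utf-8-sig' drops a leading BOM if present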

Precomposition and Decomposition

Certain accented characters (such as é) can be represented in Unicode by a unique code point (U+00E9, LATIN SMALL LETTER E WITH ACUTE) or with combining characters (U+0065, LATIN SMALL LETTER E and U+0301, COMBINING ACUTE ACCENT). The former is a "precomposed" form and the latter is a "decomposed" form. Most systems accept either but do not consider them equal, and Mac OS X has many components that prefer and/or assume decomposed only (thus decomposed-only Unicode encoded with UTF-8 is also known as "UTF8-MAC"). The combination of OS X errors in handling composed characters with erroneous Linux software (including Samba) that replaces decomposed letters with composed ones when copying file names has led to confusing and data-destroying interoperability problems.[18][19] Correct handling of UTF-8 requires preserving the raw byte sequence to avoid this sort of bug.
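
A short illustration of the two forms (Python's unicodedata module, an example rather than anything the article specifies): the precomposed and decomposed spellings of é compare unequal and have different UTF-8 byte sequences until they are normalized to a common form.

    import unicodedata

    precomposed = '\u00e9'            # é as a single code point
    decomposed  = 'e\u0301'           # e + COMBINING ACUTE ACCENT

    print(precomposed == decomposed)                 # False: the code-point sequences differ
    print(precomposed.encode('utf-8').hex())         # 'c3a9'   (2 bytes)
    print(decomposed.encode('utf-8').hex())          # '65cc81' (3 bytes)

    # Normalizing both to the same form makes comparison meaningful.
    print(unicodedata.normalize('NFC', decomposed) == precomposed)   # True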

Advantages and disadvantages

General

Advantages

- The ASCII characters are represented by themselves as single bytes that do not appear anywhere else, which makes UTF-8 work with the majority of existing APIs that take byte strings but only treat a small number of ASCII codes specially. This removes the need to write a new Unicode version of every API, and makes it much easier to convert existing systems to UTF-8 than any other Unicode encoding.


- UTF-8 is the only encoding for XML entities that does not require a BOM or an indication of the encoding.[20]

- UTF-8 and UTF-16 are the standard encodings for Unicode text in HTML documents, with UTF-8 as the preferred and most used encoding.
- UTF-8 strings can be fairly reliably recognized as such by a simple algorithm[21] (a short sketch follows this list). The chance of a random string of bytes being valid UTF-8 and not pure ASCII is 3.9% for a two-byte sequence, 0.41% for a three-byte sequence and 0.026% for a four-byte sequence.[22] ISO/IEC 8859-1 is even less likely to be mis-recognized as UTF-8: the only non-ASCII characters in it would have to be in sequences starting with either an accented letter or the multiplication symbol and ending with a symbol. This is an advantage that most other encodings do not have, causing errors (mojibake) if the encoding is not stated in the file and wrongly guessed.
- Sorting of UTF-8 strings as arrays of unsigned bytes will produce the same results as sorting them based on Unicode code points.
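
The recognition and sorting points above can be checked directly. The snippet below is illustrative only: the helper name looks_like_utf8 is made up, and a trial decode stands in for the regular-expression validator cited in [21].

    def looks_like_utf8(data: bytes) -> bool:
        """Heuristic: random non-UTF-8 byte strings almost never decode cleanly."""
        try:
            data.decode('utf-8')
            return True
        except UnicodeDecodeError:
            return False

    print(looks_like_utf8('Größe'.encode('utf-8')))      # True
    print(looks_like_utf8('Größe'.encode('latin-1')))    # False: ö and ß as single
                                                         # Latin-1 bytes are not valid UTF-8

    # Byte-wise sorting of UTF-8 strings matches sorting by code point.
    words = ['z', 'é', 'a', '€', '𤭢']
    assert sorted(words) == [w.decode() for w in sorted(w.encode('utf-8') for w in words)]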

Disadvantages

- A UTF-8 parser that is not compliant with current versions of the standard might accept a number of different pseudo-UTF-8 representations and convert them to the same Unicode output. This provides a way for information to leak past validation routines designed to process data in its eight-bit representation.[23]

- One UTF-8 advantage is that other single-byte encodings can pass through the same API. However, failure to identify the encoding later can lead to errors when rendering the string for the user. This is usually caused by software defaulting to the legacy encoding and relying on a BOM or other information to identify UTF-8. Defaulting to UTF-8 and switching to the legacy encoding when invalid sequences are encountered can solve this.

Compared to single-byte encodings

Advantages

- UTF-8 can encode any Unicode character, avoiding the need to figure out and set a "code page" or otherwise indicate what character set is in use, and allowing output in multiple languages at the same time. For many languages there has been more than one single-byte encoding in use, so even knowing the language was insufficient information to display it correctly.
- The bytes 0xFE and 0xFF do not appear, so a valid UTF-8 stream never matches the UTF-16 byte-order mark and thus cannot be confused with it.

Disadvantages

- UTF-8 encoded text is larger than the appropriate single-byte encoding, except for plain ASCII characters. In the case of languages which used 8-bit character sets with non-Latin alphabets encoded in the upper half (such as most Cyrillic and Greek alphabet code pages), letters in UTF-8 will be double the size. For some languages such as Hindi's Devanagari and Thai, letters will be triple the size (this has caused objections in India and other countries).
- Many computer users perceive the encoding Latin-1 (or the Windows-1252 extension) to include all the necessary characters for them and all users they communicate with. They do not see any advantage of using Unicode, as they are unbothered by code pages, and thus no advantage of using UTF-8.
- It is possible in UTF-8 (or any other multi-byte encoding) to split a string in the middle of a character, which may result in an invalid string if the pieces are not concatenated later.
- If the code points are all the same size, measurements of a fixed number of them are easy. This is often mistakenly considered important due to confusion caused by old documentation written for ASCII, where "character" was used as a synonym for "byte". If you measure strings using bytes instead of "characters", most algorithms can be easily and efficiently adapted for UTF-8. (A concrete measurement follows this list.)
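
For example (Python, illustrative only), the same short string measures differently in code points and in UTF-8 bytes, and cutting the byte string at an arbitrary offset can land inside a character:

    s = 'Größe: 5 €'

    print(len(s))                          # 10 code points
    print(len(s.encode('utf-8')))          # 14 bytes: ö and ß take 2 bytes each, € takes 3

    b = s.encode('utf-8')
    print(b[:3])                           # b'Gr\xc3': cut inside ö, not valid UTF-8 on its own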

Compared to other multi-byte encodings

Advantages

- UTF-8 uses the codes 0–127 only for the ASCII characters.
- UTF-8 can encode any Unicode character. Files in different languages can be displayed correctly without having to choose the correct code page or font. Chinese, Korean and Japanese can all be in the same text without special codes inserted to switch the encoding.
- UTF-8 is "self-synchronizing": character boundaries are easily found when searching either forwards or backwards (a sketch follows this list). If bytes are lost due to error or corruption, one can always locate the beginning of the next character and thus limit the damage. Many multi-byte encodings are much harder to resynchronize.
- Any byte-oriented string searching algorithm can be used with UTF-8 data, since the sequence of bytes for a character cannot occur anywhere else. Some older variable-length encodings (such as Shift JIS) did not have this property and thus made string-matching algorithms rather complicated.
- UTF-8 is efficient to encode using simple bit operations, and does not require slower mathematical operations such as multiplication or division (unlike the obsolete UTF-1 encoding).
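
The self-synchronizing property can be demonstrated in a few lines (illustrative Python; the helper name next_boundary is invented): after landing at an arbitrary byte offset, skipping continuation bytes of the form 10xxxxxx finds the next character boundary.

    def next_boundary(data: bytes, pos: int) -> int:
        """Advance pos to the start of the next UTF-8 character (or the end of the data)."""
        while pos < len(data) and (data[pos] & 0xC0) == 0x80:   # 10xxxxxx = continuation byte
            pos += 1
        return pos

    data = 'x€y'.encode('utf-8')       # 78 e2 82 ac 79
    print(next_boundary(data, 2))      # 4: offset 2 is inside '€'; the next boundary is 'y'
    print(data[next_boundary(data, 2):].decode('utf-8'))   # 'y'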

Disadvantages

- UTF-8 often takes more space than an encoding made for one or a few languages. Latin letters with diacritics and characters from other alphabetic scripts typically take one byte per character in the appropriate multi-byte encoding but take two in UTF-8. East Asian scripts generally have two bytes per character in their multi-byte encodings yet take three bytes per character in UTF-8.

Compared to UTF-16

Advantages

- Converting to UTF-16 while maintaining compatibility with existing programs (such as was done with Windows) requires every API and data structure that takes a string to be duplicated. Handling of invalid encodings makes this much more difficult than it may first appear.
- Byte streams containing invalid UTF-8 cannot be losslessly stored as UTF-16. Invalid UTF-16, however, can be stored as UTF-8. This turns out to be surprisingly important in practice.
- Characters outside the basic multilingual plane are not a special case. UTF-16 is often mistaken for the obsolete constant-length UCS-2 encoding, leading to code that works for most text but suddenly fails for non-BMP characters.
- ASCII characters will be half the size in UTF-8. Text in all languages using code points below U+0800 (which includes all modern European languages) will be smaller in UTF-8 due to the presence of ASCII spaces, newlines, numbers, and punctuation.
- Most communication and storage was designed for a stream of bytes. A UTF-16 string must use a pair of bytes for each code unit, which introduces a couple of potential problems:
  - The order of those two bytes becomes an issue and must be added to the protocol, such as with a byte-order mark.
  - If a byte is missing from UTF-16, the whole rest of the string will be meaningless text.

Disadvantages

- A simplistic parser for UTF-16 is unlikely to convert invalid sequences to ASCII. Since the dangerous characters in most situations are ASCII, a simplistic UTF-16 parser is much less dangerous than a simplistic UTF-8 parser.


- Characters U+0800 through U+FFFF use three bytes in UTF-8, but only two in UTF-16. As a result, text in (for example) Chinese, Japanese or Hindi could take more space in UTF-8 if there are more of these characters than there are ASCII characters. Since ASCII includes spaces, numbers, newlines, some punctuation, and most characters used in programming and markup languages, this rarely happens. For example, both the Japanese and the Korean UTF-8 articles on Wikipedia take more space if saved as UTF-16 than the original UTF-8 versions.[24]

- In UCS-2 (but not UTF-16) Unicode code points are all the same size, making measurements of a fixed number of them easy. This is often mistakenly considered important due to confusion caused by old documentation written for ASCII, where "character" was used as a synonym for "byte". If you measure strings using bytes instead of "characters", most algorithms can be easily and efficiently adapted for UTF-8.

See also

- Alt code
- ASCII
- Byte-order mark
- Comparison of e-mail clients#Features
- Comparison of Unicode encodings
- Character encodings in HTML
- ISO/IEC 8859
- iconv, a standardized API used to convert between different character encodings
- GB 18030
- UTF-8 in URIs
- Unicode and e-mail
- Unicode and HTML
- Universal Character Set
- UTF-16/UCS-2
- UTF-9 and UTF-18

References

1. "Moving to Unicode 5.1". Official Google Blog. May 5, 2008. http://googleblog.blogspot.com/2008/05/moving-to-unicode-51.html. Retrieved 2008-05-08.
2. "Usage of character encodings for websites". W3Techs. http://w3techs.com/technologies/overview/character_encoding/all. Retrieved 2009-09-25.
3. Alvestrand, H. (1998). "IETF Policy on Character Sets and Languages". RFC 2277. Internet Engineering Task Force.
4. "Using International Characters in Internet Mail". Internet Mail Consortium. August 1, 1998. http://www.imc.org/mail-i18n.html. Retrieved 2007-11-08.
5. Pike, Rob (2003-04-03). "UTF-8 history". http://www.cl.cam.ac.uk/~mgk25/ucs/utf-8-history.txt.
6. Marin, Marvin (2000-10-17). "Web Server Folder Traversal MS00-078". http://www.sans.org/resources/malwarefaq/wnt-unicode.php.
7. Yergeau, F. (2003). "UTF-8, a transformation format of ISO 10646". RFC 3629. Internet Engineering Task Force.
8. "Non-decodable Bytes in System Character Interfaces". http://www.python.org/dev/peps/pep-0383/.
9. Internet Assigned Numbers Authority: Character Sets. http://www.iana.org/assignments/character-sets
10. W3C: Setting the HTTP charset parameter (http://www.w3.org/International/O-HTTP-charset) notes that the IANA list is used for HTTP.
11. "Java SE 6 documentation for Interface java.io.DataInput, subsection on Modified UTF-8". Sun Microsystems. 2008. http://java.sun.com/javase/6/docs/api/java/io/DataInput.html#modified-utf-8. Retrieved 2009-05-22.
12. "[...] the overlong UTF-8 sequence C0 80 [...]", "[...] the illegal two-octet sequence C0 80 [...]". "Request for Comments 3629: UTF-8, a transformation format of ISO 10646". 2003. http://www.apps.ietf.org/rfc/rfc3629.html#page-5. Retrieved 2009-05-22.
13. "[...] Java virtual machine UTF-8 strings never have embedded nulls." "The Java Virtual Machine Specification, 2nd Edition, section 4.4.7: The CONSTANT_Utf8_info Structure". Sun Microsystems. 1999. http://java.sun.com/docs/books/jvms/second_edition/html/ClassFile.doc.html#7963. Retrieved 2009-05-24.
14. "[...] encoded in modified UTF-8." "Java Object Serialization Specification, chapter 6: Object Serialization Stream Protocol, section 2: Stream Elements". Sun Microsystems. 2005. http://java.sun.com/javase/6/docs/platform/serialization/spec/protocol.html#8299. Retrieved 2009-05-22.
15. "The JNI uses modified UTF-8 strings to represent various string types." "Java Native Interface Specification, chapter 3: JNI Types and Data Structures, section: Modified UTF-8 Strings". Sun Microsystems. 2003. http://java.sun.com/j2se/1.5.0/docs/guide/jni/spec/types.html#wp16542. Retrieved 2009-05-22.
16. "[...] differences between this format and the "standard" UTF-8 format." "The Java Virtual Machine Specification, 2nd Edition, section 4.4.7: The CONSTANT_Utf8_info Structure". Sun Microsystems. 1999. http://java.sun.com/docs/books/jvms/second_edition/html/ClassFile.doc.html#7963. Retrieved 2009-05-23.
17. "In orthodox UTF-8, a NUL byte (\x00) is represented by a NUL byte. [...] But [...] we [...] want NUL bytes inside [...] strings [...]" "Tcler's Wiki: UTF-8 bit by bit (Revision 6)". 2009-04-25. http://wiki.tcl.tk/_/revision?N=1211&V=6. Retrieved 2009-05-22.
18. http://sourceforge.net/tracker/?func=detail&aid=2727174&group_id=8642&atid=108642
19. http://forums.macosxhints.com/archive/index.php/t-99344.html
20. http://www.w3.org/TR/REC-xml/#charencoding
21. W3 FAQ: Multilingual Forms (http://www.w3.org/International/questions/qa-forms-utf-8): a Perl regular expression to validate a UTF-8 string.
22. There are 256 × 256 − 128 × 128 not-pure-ASCII two-byte sequences, and of those only 1,920 encode valid UTF-8 characters (the range U+0080 to U+07FF), so the proportion of valid not-pure-ASCII two-byte sequences is 3.9%. Similarly, there are 256 × 256 × 256 − 128 × 128 × 128 not-pure-ASCII three-byte sequences, and 61,406 valid three-byte UTF-8 sequences (U+0800 to U+FFFF minus surrogate pairs and non-characters), so the proportion is 0.41%. Finally, there are 256^4 − 128^4 non-ASCII four-byte sequences, and 1,048,544 valid four-byte UTF-8 sequences (U+10000 to U+10FFFF minus non-characters), so the proportion is 0.026%. This assumes that control characters pass as ASCII; without the control characters, the proportions drop somewhat.
23. http://tools.ietf.org/html/rfc3629#section-10
24. The version from 2009-04-27 of ja:UTF-8 needed 50 kB when saved as UTF-8, but when converted to UTF-16 (with Notepad) it took 81 kB, with a similar result for the Korean article.

External links

There are several current definitions of UTF-8 in various standards documents:

- RFC 3629 / STD 63 (2003), which establishes UTF-8 as a standard Internet protocol element
- The Unicode Standard, Version 5.0, §3.9 D92, §3.10 D95 (2007)
- The Unicode Standard, Version 4.0, §3.9–§3.10 (2003)
- ISO/IEC 10646:2003 Annex D (2003)

They supersede the definitions given in the following obsolete works:

- ISO/IEC 10646-1:1993 Amendment 2 / Annex R (1996)
- The Unicode Standard, Version 2.0, Appendix A (1996)
- RFC 2044 (1996)
- RFC 2279 (1998)
- The Unicode Standard, Version 3.0, §2.3 (2000) plus Corrigendum #1: UTF-8 Shortest Form (2000)
- Unicode Standard Annex #27: Unicode 3.1 (2001)

They are all the same in their general mechanics, with the main differences being on issues such as allowed range of code point values and safe handling of invalid input.

- Original UTF-8 paper (http://doc.cat-v.org/plan_9/4th_edition/papers/utf), or PDF (http://plan9.bell-labs.com/sys/doc/utf.pdf), for Plan 9 from Bell Labs
- RFC 5198 defines UTF-8 NFC for Network Interchange
- UTF-8 test pages by Andreas Prilop (http://freenet-homepage.de/prilop/multilingual-1.html) and the World Wide Web Consortium (http://www.w3.org/2001/06/utf-8-test/UTF-8-demo.html)
- How to configure e-mail clients to send UTF-8 text (http://dotancohen.com/howto/email-utf8.html)
- Unix/Linux: UTF-8/Unicode FAQ (http://www.cl.cam.ac.uk/~mgk25/unicode.html), Linux Unicode HOWTO (http://www.linux.org/docs/ldp/howto/Unicode-HOWTO.html), UTF-8 and Gentoo (http://www.gentoo.org/doc/en/utf-8.xml)
- The Unicode/UTF-8-character table (http://www.utf8-chartable.de/) displays UTF-8 in a variety of formats (with Unicode and HTML encoding information)
- Online Tool for URL Encoding/Decoding (http://netzreport.googlepages.com/online_tool_for_url_en_decoding.html) according to RFC 3986 / RFC 3629 (JavaScript, GPL)
- Unicode and Multilingual Web Browsers (http://www.alanwood.net/unicode/browsers.html) from Alan Wood's Unicode Resources describes support and additional configuration of Unicode/UTF-8 in modern browsers
- JSP Wiki Browser Compatibility page (http://jspwiki.org/wiki/JSPWikiBrowserCompatibility) details specific problems with UTF-8 in older browsers
- Mathematical Symbols in Unicode (http://tlt.psu.edu/suggestions/international/bylanguage/math.html#browsers)
- Unicode.se (http://en.unicode.se) shows how to set your homepages and databases to UTF-8
- Graphical View of UTF-8 in ICU's Converter Explorer (http://demo.icu-project.org/icu-bin/convexp?conv=UTF-8)
