<pre class='metadata'>
Title: CSS Syntax Module Level 3
Shortname: css-syntax
Level: 3
Status: ED
Work Status: Testing
Group: csswg
ED: https://drafts.csswg.org/css-syntax/
TR: https://www.w3.org/TR/css-syntax-3/
Previous Version: https://www.w3.org/TR/2014/CR-css-syntax-3-20140220/
Previous Version: https://www.w3.org/TR/2013/WD-css-syntax-3-20131105/
Previous Version: https://www.w3.org/TR/2013/WD-css-syntax-3-20130919/
Editor: Tab Atkins Jr., Google, http://xanthir.com/contact/, w3cid 42199
Editor: Simon Sapin, Mozilla, http://exyr.org/about/, w3cid 58001
Abstract: This module describes, in general terms, the basic structure and syntax of CSS stylesheets. It defines, in detail, the syntax and parsing of CSS - how to turn a stream of bytes into a meaningful stylesheet.
Ignored Terms: <keyframes-name>, <keyframe-rule>, <keyframe-selector>, <translation-value>, <media-query-list>, <unicode-range-token>
Ignored Vars: +b, -b, foo
</pre>

<pre class=link-defaults>
spec:css-text-decor-3; type:property; text:text-decoration
spec:css-color-3; type:property; text:color
spec:css-transforms-1; type:function; text:translatex()
spec:html; type:element; text:a
spec:infra; type:dfn;
	text:string
	text:list
</pre>

<h2 id="intro">
Introduction</h2>

	<em>This section is not normative.</em>

	This module defines the abstract syntax and parsing of CSS stylesheets
	and other things which use CSS syntax
	(such as the HTML <code>style</code> attribute).

	It defines algorithms for converting a stream of Unicode <a>code points</a>
	(in other words, text)
	into a stream of CSS tokens,
	and then further into CSS objects
	such as stylesheets, rules, and declarations.

<h3 id="placement">
Module interactions</h3>

	This module defines the syntax and parsing of CSS stylesheets.
	It supersedes the lexical scanner and grammar defined in CSS 2.1.

<h2 id='syntax-description'>
Description of CSS's Syntax</h2>

	<em>This section is not normative.</em>

	A CSS document is a series of <a>style rules</a>--
	which are <a>qualified rules</a> that apply styles to elements in a document--
	and <a>at-rules</a>--
	which define special processing rules or values for the CSS document.

	A <a>qualified rule</a> starts with a prelude
	then has a {}-wrapped block containing a sequence of declarations.
	The meaning of the prelude varies based on the context that the rule appears in--
	for <a>style rules</a>, it's a selector which specifies what elements the declarations will apply to.
	Each declaration has a name,
	followed by a colon and the declaration value.
	Declarations are separated by semicolons.

	<div class='example'>

		A typical rule might look something like this:

		<pre>
			p > a {
				color: blue;
				text-decoration: underline;
			}
		</pre>

		In the above rule, "<code>p > a</code>" is the selector,
		which, if the source document is HTML,
		selects any <{a}> elements that are children of a <{p}> element.

		"<code>color: blue</code>" is a declaration specifying that,
		for the elements that match the selector,
		their 'color' property should have the value ''blue''.
		Similarly, their 'text-decoration' property should have the value ''underline''.
	</div>

	<a>At-rules</a> are all different, but they have a basic structure in common.
	They start with an "@" <a>code point</a> followed by their name as a CSS keyword.
	Some <a>at-rules</a> are simple statements,
	with their name followed by more CSS values to specify their behavior,
	and finally ended by a semicolon.
	Others are blocks;
	they can have CSS values following their name,
	but they end with a {}-wrapped block,
	similar to a <a>qualified rule</a>.
	Even the contents of these blocks are specific to the given <a>at-rule</a>:
	sometimes they contain a sequence of declarations, like a <a>qualified rule</a>;
	other times, they may contain additional blocks, or at-rules, or other structures altogether.

	<div class='example'>

		Here are several examples of <a>at-rules</a> that illustrate the varied syntax they may contain.

		<pre>@import "my-styles.css";</pre>

		The ''@import'' <a>at-rule</a> is a simple statement.
		After its name, it takes a single string or ''url()'' function to indicate the stylesheet that it should import.

		<pre>
			@page :left {
				margin-left: 4cm;
				margin-right: 3cm;
			}
		</pre>

		The ''@page'' <a>at-rule</a> consists of an optional page selector (the '':left'' pseudo-class),
		followed by a block of properties that apply to the page when printed.
		In this way, it's very similar to a normal style rule,
		except that its properties don't apply to any "element",
		but rather to the page itself.

		<pre>
			@media print {
				body { font-size: 10pt }
			}
		</pre>

		The ''@media'' <a>at-rule</a> begins with a media type
		and an optional list of media queries.
		Its block contains entire rules,
		which are only applied when the ''@media''s conditions are fulfilled.
	</div>

	Property names and <a>at-rule</a> names are always <a>identifiers</a>,
	which have to start with a letter or a hyphen followed by a letter,
	and then can contain letters, numbers, hyphens, or underscores.
	You can include any <a>code point</a> at all,
	even ones that CSS uses in its syntax,
	by <a>escaping</a> it.

	The syntax of selectors is defined in the <a href="https://www.w3.org/TR/selectors/">Selectors spec</a>.
	Similarly, the syntax of the wide variety of CSS values is defined in the <a href="https://www.w3.org/TR/css3-values/">Values &amp; Units spec</a>.
	The special syntaxes of individual <a>at-rules</a> can be found in the specs that define them.

<h3 id="escaping">
Escaping</h3>

	<em>This section is not normative.</em>

	Any Unicode <a>code point</a> can be included in an <a>identifier</a> or quoted string
	by <dfn id="escape-codepoint">escaping</dfn> it.
	CSS escape sequences start with a backslash (\), and continue with:

	<ul>
		<li>
			Any Unicode <a>code point</a> that is not a <a>hex digit</a> or a <a>newline</a>.
			The escape sequence is replaced by that <a>code point</a>.
		<li>
			Or one to six <a>hex digits</a>, followed by an optional <a>whitespace</a>.
			The escape sequence is replaced by the Unicode <a>code point</a>
			whose value is given by the hexadecimal digits.
			This optional whitespace allows hexadecimal escape sequences
			to be followed by "real" hex digits.

			<p class=example>
				An <a>identifier</a> with the value "&B"
				could be written as ''\26 B'' or ''\000026B''.

			<p class=note>
				A "real" space after the escape sequence must be doubled.
	</ul>
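
	As a non-normative illustration, the escape rules above can be sketched in Python.
	The helper names and simplifications here are ours, not part of this specification;
	in particular, the sketch assumes well-formed input and ignores error handling.

```python
def consume_escape(s, i):
    """Decode one CSS escape at s[i] (the code point after the backslash).

    Returns (code_point, next_index). A sketch of the rules above, not
    the normative algorithm; assumes a well-formed escape sequence.
    """
    c = s[i]
    if c in "0123456789abcdefABCDEF":
        # One to six hex digits, then one optional whitespace code point.
        j = i
        while j < len(s) and j - i < 6 and s[j] in "0123456789abcdefABCDEF":
            j += 1
        value = int(s[i:j], 16)
        if j < len(s) and s[j] in " \t\n":
            j += 1  # the optional whitespace is consumed, not emitted
        return chr(value), j
    # Any other code point escapes to itself.
    return c, i + 1

def unescape(s):
    """Replace every backslash escape in s with the code point it names."""
    out, i = [], 0
    while i < len(s):
        if s[i] == "\\":
            cp, i = consume_escape(s, i + 1)
            out.append(cp)
        else:
            out.append(s[i])
            i += 1
    return "".join(out)
```

	Both spellings from the example above decode to the same identifier:
	<code>unescape(r'\26 B')</code> and <code>unescape(r'\000026B')</code> each yield "&amp;B".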

<h3 id="error-handling">
Error Handling</h3>

	<em>This section is not normative.</em>

	When errors occur in CSS,
	the parser attempts to recover gracefully,
	throwing away only the minimum amount of content
	before returning to parsing as normal.
	This is because errors aren't always mistakes--
	new syntax looks like an error to an old parser,
	and it's useful to be able to add new syntax to the language
	without worrying about stylesheets that include it being completely broken in older UAs.

	The precise error-recovery behavior is detailed in the parser itself,
	but it's simple enough that a short description is fairly accurate.

	<ul>
		<li>
			At the "top level" of a stylesheet,
			an <<at-keyword-token>> starts an at-rule.
			Anything else starts a qualified rule,
			and is included in the rule's prelude.
			This may produce an invalid selector,
			but that's not the concern of the CSS parser--
			at worst, it means the selector will match nothing.

		<li>
			Once an at-rule starts,
			nothing is invalid from the parser's standpoint;
			it's all part of the at-rule's prelude.
			Encountering a <<semicolon-token>> ends the at-rule immediately,
			while encountering an opening curly-brace <a href="#tokendef-open-curly">&lt;{-token></a> starts the at-rule's body.
			The at-rule seeks forward, matching blocks (content surrounded by (), {}, or [])
			until it finds a closing curly-brace <a href="#tokendef-close-curly">&lt;}-token></a> that isn't matched by anything else
			or inside of another block.
			The contents of the at-rule are then interpreted according to the at-rule's own grammar.

		<li>
			Qualified rules work similarly,
			except that semicolons don't end them;
			instead, they are just taken in as part of the rule's prelude.
			When the first {} block is found,
			the contents are always interpreted as a list of declarations.

		<li>
			When interpreting a list of declarations,
			unknown syntax at any point causes the parser to throw away whatever declaration it's currently building,
			and seek forward until it finds a semicolon (or the end of the block).
			It then starts fresh, trying to parse a declaration again.

		<li>
			If the stylesheet ends while any rule, declaration, function, string, etc. are still open,
			everything is automatically closed.
			This doesn't make them invalid,
			though they may be incomplete
			and thus thrown away when they are verified against their grammar.
	</ul>
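
	The declaration-level recovery described above can be caricatured in a few lines of Python.
	This is a deliberately rough, non-normative sketch (the function name is ours):
	the real parser operates on tokens and nested blocks, not raw strings,
	but the recovery strategy--discard the broken declaration, skip to the next semicolon, start fresh--is the same.

```python
def recover_declarations(block):
    """Rough sketch of declaration-list error recovery: split on
    semicolons, keep entries that look like 'name: value', and silently
    drop anything else, resuming after the next ';'."""
    declarations = {}
    for chunk in block.split(";"):
        name, sep, value = chunk.partition(":")
        if sep and name.strip() and value.strip():
            declarations[name.strip()] = value.strip()
        # Malformed chunks are discarded; parsing resumes after the ';'.
    return declarations
```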

	After each construct (declaration, style rule, at-rule) is parsed,
	the user agent checks it against its expected grammar.
	If it does not match the grammar,
	it's <dfn export for=css>invalid</dfn>,
	and gets <dfn export for=css>ignored</dfn> by the UA,
	which treats it as if it wasn't there at all.

<!--
████████  ███████  ██    ██ ████████ ██    ██ ████ ████████ ████ ██    ██  ██████
   ██    ██     ██ ██   ██  ██       ███   ██  ██       ██   ██  ███   ██ ██    ██
   ██    ██     ██ ██  ██   ██       ████  ██  ██      ██    ██  ████  ██ ██
   ██    ██     ██ █████    ██████   ██ ██ ██  ██     ██     ██  ██ ██ ██ ██   ████
   ██    ██     ██ ██  ██   ██       ██  ████  ██    ██      ██  ██  ████ ██    ██
   ██    ██     ██ ██   ██  ██       ██   ███  ██   ██       ██  ██   ███ ██    ██
   ██     ███████  ██    ██ ████████ ██    ██ ████ ████████ ████ ██    ██  ██████
-->

<h2 id="tokenizing-and-parsing">
Tokenizing and Parsing CSS</h2>

	User agents must use the parsing rules described in this specification
	to generate the CSSOM trees from text/css resources.
	Together, these rules define what is referred to as the CSS parser.

	This specification defines the parsing rules for CSS documents,
	whether they are syntactically correct or not.
	Certain points in the parsing algorithm are said to be <dfn lt="parse error">parse errors</dfn>.
	The error handling for parse errors is well-defined:
	user agents must either act as described below when encountering such problems,
	or must abort processing at the first error that they encounter for which they do not wish to apply the rules described below.

	Conformance checkers must report at least one parse error condition to the user
	if one or more parse error conditions exist in the document
	and must not report parse error conditions
	if none exist in the document.
	Conformance checkers may report more than one parse error condition if more than one parse error condition exists in the document.
	Conformance checkers are not required to recover from parse errors,
	but if they do,
	they must recover in the same way as user agents.

<h3 id="parsing-overview">
Overview of the Parsing Model</h3>

	The input to the CSS parsing process consists of a stream of Unicode <a>code points</a>,
	which is passed through a tokenization stage followed by a tree construction stage.
	The output is a CSSStyleSheet object.

	Note: Implementations that do not support scripting do not have to actually create a CSSOM CSSStyleSheet object,
	but the CSSOM tree in such cases is still used as the model for the rest of the specification.

<h3 id="input-byte-stream">
The input byte stream</h3>

	When parsing a stylesheet,
	the stream of Unicode <a>code points</a> that comprises the input to the tokenization stage
	might be initially seen by the user agent as a stream of bytes
	(typically coming over the network or from the local file system).
	If so, the user agent must decode these bytes into <a>code points</a> according to a particular character encoding.

	To decode the stream of bytes into a stream of <a>code points</a>,
	UAs must use the <dfn><a href="https://encoding.spec.whatwg.org/#decode">decode</a></dfn> algorithm
	defined in [[!ENCODING]],
	with the fallback encoding determined as follows.

	Note: The <a>decode</a> algorithm
	gives precedence to a byte order mark (BOM),
	and only uses the fallback when none is found.

	To <dfn>determine the fallback encoding</dfn>:

	<ol>
		<li>
			If HTTP or equivalent protocol defines an encoding (e.g. via the charset parameter of the Content-Type header),
			<dfn export><a href="https://encoding.spec.whatwg.org/#concept-encoding-get">get an encoding</a></dfn> [[!ENCODING]]
			for the specified value.
			If that does not return failure,
			use the return value as the fallback encoding.

		<li>
			Otherwise, check the byte stream.
			If the first 1024 bytes of the stream begin with the hex sequence

			<pre>40 63 68 61 72 73 65 74 20 22 XX* 22 3B</pre>

			where each <code>XX</code> byte is a value between 0<sub>16</sub> and 21<sub>16</sub> inclusive
			or a value between 23<sub>16</sub> and 7F<sub>16</sub> inclusive,
			then <a>get an encoding</a>
			for the sequence of <code>XX</code> bytes,
			interpreted as <code>ASCII</code>.

			<details class='note'>
				<summary>What does that byte sequence mean?</summary>

				The byte sequence above,
				when decoded as ASCII,
				is the string "<code>@charset "…";</code>",
				where the "…" is the sequence of bytes corresponding to the encoding's label.
			</details>

			If the return value was <code>utf-16be</code> or <code>utf-16le</code>,
			use <code>utf-8</code> as the fallback encoding;
			if it was anything else except failure,
			use the return value as the fallback encoding.

			<details class='note'>
				<summary>Why use utf-8 when the declaration says utf-16?</summary>

				The bytes of the encoding declaration spell out “<code>@charset "…";</code>” in ASCII,
				but UTF-16 is not ASCII-compatible.
				Either you've typed in complete gibberish (like <code>䁣桡牳整•utf-16be∻</code>) to get the right bytes in the document,
				which we don't want to encourage,
				or your document is actually in an ASCII-compatible encoding
				and your encoding declaration is lying.

				Either way, defaulting to UTF-8 is a decent answer.

				As well, this mimics the behavior of HTML's <code>&lt;meta charset></code> attribute.
			</details>

			Note: Note that the syntax of an encoding declaration <em>looks like</em> the syntax of an <a>at-rule</a> named ''@charset'',
			but no such rule actually exists,
			and the rules for how you can write it are much more restrictive than they would normally be for recognizing such a rule.
	A number of things you can do in CSS that would produce a valid ''@charset'' rule (if one existed),
			such as using multiple spaces, comments, or single quotes,
			will cause the encoding declaration to not be recognized.
			This behavior keeps the encoding declaration as simple as possible,
			and thus maximizes the likelihood of it being implemented correctly.

		<li>
			Otherwise, if an <a>environment encoding</a> is provided by the referring document,
			use that as the fallback encoding.

		<li>
			Otherwise, use <code>utf-8</code> as the fallback encoding.
	</ol>
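
	Step 2 above--sniffing the byte stream for an encoding declaration--can be sketched as follows.
	This is a non-normative approximation (the function name is ours): a real UA passes the label
	to the <a>get an encoding</a> algorithm from [[!ENCODING]], which this sketch only gestures at.

```python
def sniff_charset(first_bytes):
    """Look for the byte pattern 40 63 68 61 72 73 65 74 20 22 XX* 22 3B
    ('@charset "..."; in ASCII) in the first 1024 bytes, applying the
    utf-16 -> utf-8 substitution. Returns an encoding label or None."""
    head = first_bytes[:1024]
    prefix = b'@charset "'
    if not head.startswith(prefix):
        return None
    end = head.find(b'";', len(prefix))
    if end == -1:
        return None
    label_bytes = head[len(prefix):end]
    # Each XX byte must be 00-21 or 23-7F (ASCII minus the '"' byte).
    if any(b > 0x7F or b == 0x22 for b in label_bytes):
        return None
    label = label_bytes.decode("ascii").strip().lower()
    if not label:
        return None
    if label in ("utf-16be", "utf-16le"):
        return "utf-8"
    return label
```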

	<div class='note'>

		Though UTF-8 is the default encoding for the web,
		and many newer web-based file formats assume or require UTF-8 encoding,
		CSS was created before it was clear which encoding would win,
		and thus can't automatically assume the stylesheet is UTF-8.

		Stylesheet authors <em>should</em> author their stylesheets in UTF-8,
		and ensure that either an HTTP header (or equivalent method) declares the encoding of the stylesheet to be UTF-8,
		or that the referring document declares its encoding to be UTF-8.
		(In HTML, this is done by adding a <code>&lt;meta charset=utf-8></code> element to the head of the document.)

		If neither of these options are available,
		authors should begin the stylesheet with a UTF-8 BOM
		or the exact characters

		<pre>@charset "utf-8";</pre>
	</div>

	Document languages that refer to CSS stylesheets that are decoded from bytes
	may define an <dfn export>environment encoding</dfn> for each such stylesheet,
	which is used as a fallback when other encoding hints are not available or cannot be used.

	The concept of <a>environment encoding</a> only exists for compatibility with legacy content.
	New formats and new linking mechanisms <b>should not</b> provide an <a>environment encoding</a>,
	so the stylesheet defaults to UTF-8 instead in the absence of more explicit information.

	Note: [[HTML]] defines <a href="https://html.spec.whatwg.org/multipage/links.html#link-type-stylesheet">the environment encoding for <code>&lt;link rel=stylesheet></code></a>.

	Note: [[CSSOM]] defines <a href="https://drafts.csswg.org/cssom/#requirements-on-user-agents-implementing-the-xml-stylesheet-processing-instruction">the environment encoding for <code>&lt;?xml-stylesheet?></code></a>.

	Note: [[CSS-CASCADE-3]] defines <a at-rule lt=@import>the environment encoding for <code>@import</code></a>.


<h3 id="input-preprocessing">
Preprocessing the input stream</h3>

	The <dfn>input stream</dfn> consists of the <a>code points</a>
	pushed into it as the input byte stream is decoded.

	Before sending the input stream to the tokenizer,
	implementations must make the following <a>code point</a> substitutions:

	<ul>
		<li>
			Replace any U+000D CARRIAGE RETURN (CR) <a>code points</a>,
			U+000C FORM FEED (FF) <a>code points</a>,
			or pairs of U+000D CARRIAGE RETURN (CR) followed by U+000A LINE FEED (LF),
			by a single U+000A LINE FEED (LF) <a>code point</a>.

		<li>
			Replace any U+0000 NULL or <a>surrogate</a> <a>code points</a> with U+FFFD REPLACEMENT CHARACTER (�).
	</ul>
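
	The two substitutions above amount to the following non-normative sketch
	(the function name is ours; note that CRLF pairs must collapse to a single LF,
	so the pair is handled before the lone CR):

```python
def preprocess(text):
    """Normalize CRLF, CR, and FF to LF, then replace NULL and surrogate
    code points with U+FFFD. (Well-formed Python strings rarely contain
    lone surrogates; that branch matters for e.g. 'surrogatepass' input.)"""
    text = text.replace("\r\n", "\n").replace("\r", "\n").replace("\f", "\n")
    return "".join(
        "\uFFFD" if ch == "\x00" or "\ud800" <= ch <= "\udfff" else ch
        for ch in text
    )
```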


<h2 id="tokenization">
Tokenization</h2>

	Implementations must act as if they used the following algorithms to tokenize CSS.
	To transform a stream of <a>code points</a> into a stream of tokens,
	repeatedly <a>consume a token</a>
	until an <<EOF-token>> is reached,
	collecting the returned tokens into a stream.
	Each call to the <a>consume a token</a> algorithm
	returns a single token,
	so it can also be used "on-demand" to tokenize a stream of <a>code points</a> <em>during</em> parsing,
	if so desired.

	The output of the tokenization step is a stream of zero or more of the following tokens:
	<dfn>&lt;ident-token></dfn>,
	<dfn>&lt;function-token></dfn>,
	<dfn>&lt;at-keyword-token></dfn>,
	<dfn>&lt;hash-token></dfn>,
	<dfn>&lt;string-token></dfn>,
	<dfn>&lt;bad-string-token></dfn>,
	<dfn>&lt;url-token></dfn>,
	<dfn>&lt;bad-url-token></dfn>,
	<dfn>&lt;delim-token></dfn>,
	<dfn>&lt;number-token></dfn>,
	<dfn>&lt;percentage-token></dfn>,
	<dfn>&lt;dimension-token></dfn>,
	<dfn>&lt;whitespace-token></dfn>,
	<dfn>&lt;CDO-token></dfn>,
	<dfn>&lt;CDC-token></dfn>,
	<dfn>&lt;colon-token></dfn>,
	<dfn>&lt;semicolon-token></dfn>,
	<dfn>&lt;comma-token></dfn>,
	<dfn id="tokendef-open-square">&lt;[-token></dfn>,
	<dfn id="tokendef-close-square">&lt;]-token></dfn>,
	<dfn id="tokendef-open-paren">&lt;(-token></dfn>,
	<dfn id="tokendef-close-paren">&lt;)-token></dfn>,
	<dfn id="tokendef-open-curly">&lt;{-token></dfn>,
	and <dfn id="tokendef-close-curly">&lt;}-token></dfn>.

	<ul>
		<li>
			<<ident-token>>, <<function-token>>, <<at-keyword-token>>, <<hash-token>>, <<string-token>>, and <<url-token>> have a value composed of zero or more <a>code points</a>.
			Additionally, hash tokens have a type flag set to either "id" or "unrestricted".  The type flag defaults to "unrestricted" if not otherwise set.

		<li>
			<<delim-token>> has a value composed of a single <a>code point</a>.

		<li>
			<<number-token>>, <<percentage-token>>, and <<dimension-token>> have a numeric value.
			<<number-token>> and <<dimension-token>> additionally have a type flag set to either "integer" or "number".  The type flag defaults to "integer" if not otherwise set.
			<<dimension-token>> additionally has a unit composed of one or more <a>code points</a>.
	</ul>
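
	The per-token data described above maps naturally onto small record types.
	The following Python sketch is non-normative--the class and field names are ours,
	and only the tokens that carry extra flags are shown:

```python
from dataclasses import dataclass

@dataclass
class HashToken:
    value: str
    type: str = "unrestricted"  # "id" or "unrestricted"

@dataclass
class NumberToken:
    value: float
    type: str = "integer"       # "integer" or "number"

@dataclass
class DimensionToken:
    value: float
    type: str = "integer"       # "integer" or "number"
    unit: str = ""              # one or more code points, e.g. "px"

@dataclass
class DelimToken:
    value: str                  # exactly one code point
```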

	Note: The type flag of hash tokens is used in the Selectors syntax [[SELECT]].
	Only hash tokens with the "id" type are valid <a href="https://www.w3.org/TR/selectors/#id-selectors">ID selectors</a>.


<!--
████████     ███    ████ ██       ████████   ███████     ███    ████████
██     ██   ██ ██    ██  ██       ██     ██ ██     ██   ██ ██   ██     ██
██     ██  ██   ██   ██  ██       ██     ██ ██     ██  ██   ██  ██     ██
████████  ██     ██  ██  ██       ████████  ██     ██ ██     ██ ██     ██
██   ██   █████████  ██  ██       ██   ██   ██     ██ █████████ ██     ██
██    ██  ██     ██  ██  ██       ██    ██  ██     ██ ██     ██ ██     ██
██     ██ ██     ██ ████ ████████ ██     ██  ███████  ██     ██ ████████
-->

<h3 id='token-diagrams'>
Token Railroad Diagrams</h3>

	<em>This section is non-normative.</em>

	This section presents an informative view of the tokenizer,
	in the form of railroad diagrams.
	Railroad diagrams are more compact than an explicit parser,
	but often easier to read than a regular expression.

	These diagrams are <em>informative</em> and <em>incomplete</em>;
	they describe the grammar of "correct" tokens,
	but do not describe error-handling at all.
	They are provided solely to make it easier to get an intuitive grasp of the syntax of each token.

	Diagrams with names such as <em>&lt;foo-token></em> represent tokens.
	The rest are productions referred to by other diagrams.

	<dl>
		<dt id="comment-diagram">comment
		<dd>
			<pre class='railroad'>
			T: /*
			Star:
				N: anything but * followed by /
			T: */
			</pre>

		<dt id="newline-diagram">newline
		<dd>
			<pre class='railroad'>
			Choice:
				T: \n
				T: \r\n
				T: \r
				T: \f
			</pre>

		<dt id="whitespace-diagram">whitespace
		<dd>
			<pre class='railroad'>
			Choice:
				T: space
				T: \t
				N: newline
			</pre>

		<dt id="hex-digit-diagram">hex digit
		<dd>
			<pre class='railroad'>
			N: 0-9 a-f or A-F
			</pre>

		<dt id="escape-diagram">escape
		<dd>
			<pre class='railroad'>
			T: \
			Choice:
				N: not newline or hex digit
				Seq:
					Plus:
						N: hex digit
						C: 1-6 times
					Opt: skip
						N: whitespace
			</pre>

		<dt id="whitespace-token-diagram"><<whitespace-token>>
		<dd>
			<pre class='railroad'>
			Plus:
				N: whitespace
			</pre>

		<dt id="ws*-diagram">ws*
		<dd>
			<pre class='railroad'>
			Star:
				N: <whitespace-token>
			</pre>

		<dt id="ident-token-diagram"><<ident-token>>
		<dd>
			<pre class='railroad'>
			Or: 1
				T: --
				Seq:
					Opt: skip
						T: -
					Or:
						N: a-z A-Z _ or non-ASCII
						N: escape
			Star:
				Or:
					N: a-z A-Z 0-9 _ - or non-ASCII
					N: escape
			</pre>

		<dt id="function-token-diagram"><<function-token>>
		<dd>
			<pre class='railroad'>
			N: <ident-token>
			T: (
			</pre>

		<dt id="at-keyword-token-diagram"><<at-keyword-token>>
		<dd>
			<pre class='railroad'>
			T: @
			N: <ident-token>
			</pre>

		<dt id="hash-token-diagram"><<hash-token>>
		<dd>
			<pre class='railroad'>
			T: #
			Plus:
				Choice:
					N:a-z A-Z 0-9 _ - or non-ASCII
					N: escape
			</pre>

		<dt id="string-token-diagram"><<string-token>>
		<dd>
			<pre class='railroad'>
			Choice:
				Seq:
					T: "
					Star:
						Choice:
							N: not " \ or newline
							N: escape
							Seq:
								T: \
								N: newline
					T: "
				Seq:
					T: '
					Star:
						Choice:
							N: not ' \ or newline
							N: escape
							Seq:
								T: \
								N: newline
					T: '
			</pre>

		<dt id="url-token-diagram"><<url-token>>
		<dd>
			<pre class='railroad'>
			N: <ident-token "url">
			T: (
			N: ws*
			Star:
				Choice:
					N: not " ' ( ) \ ws or non-printable
					N: escape
			N: ws*
			T: )
			</pre>

		<dt id="number-token-diagram"><<number-token>>
		<dd>
			<pre class='railroad'>
			Choice: 1
				T: +
				Skip:
				T: -
			Choice:
				Seq:
					Plus:
						N: digit
					T: .
					Plus:
						N: digit
				Plus:
					N: digit
				Seq:
					T: .
					Plus:
						N: digit
			Opt: skip
				Seq:
					Choice:
						T: e
						T: E
					Choice: 1
						T: +
						S:
						T: -
					Plus:
						N: digit
			</pre>

		<dt id="dimension-token-diagram"><<dimension-token>>
		<dd>
			<pre class='railroad'>
			N: <number-token>
			N: <ident-token>
			</pre>

		<dt id="percentage-token-diagram"><<percentage-token>>
		<dd>
			<pre class='railroad'>
			N: <number-token>
			T: %
			</pre>

		<dt id="CDO-token-diagram"><<CDO-token>>
		<dd>
			<pre class='railroad'>
			T: <<!---->!--
			</pre>

		<dt id="CDC-token-diagram"><<CDC-token>>
		<dd>
			<pre class='railroad'>
			T: -->
			</pre>
	</dl>
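
	For readers who do prefer regular expressions, two of the diagrams above translate
	fairly directly. This is a rough, non-normative Python sketch (the pattern names are ours,
	and like the diagrams it describes only "correct" tokens, with no error handling):

```python
import re

# escape: backslash, then 1-6 hex digits plus optional whitespace,
# or any code point that is not a newline or hex digit.
ESCAPE = r"\\(?:[0-9a-fA-F]{1,6}[ \t\n]?|[^\n0-9a-fA-F])"
NAME_START = r"[a-zA-Z_\u0080-\U0010FFFF]"
NAME = r"[a-zA-Z0-9_\-\u0080-\U0010FFFF]"

# <ident-token>: '--', or optional '-' plus a name-start code point or
# escape, followed by any number of name code points or escapes.
IDENT = rf"(?:--|-?(?:{NAME_START}|{ESCAPE}))(?:{NAME}|{ESCAPE})*"

# <number-token>: optional sign, digits with optional fraction part,
# optional exponent.
NUMBER = r"[+-]?(?:\d+\.\d+|\d+|\.\d+)(?:[eE][+-]?\d+)?"

def is_ident(s):
    return re.fullmatch(IDENT, s) is not None

def is_number(s):
    return re.fullmatch(NUMBER, s) is not None
```

	For example, ''--main-color'' and ''\26 B'' match the ident pattern, while ''-3px'' does not
	(it tokenizes as a dimension instead); likewise ''1.'' is not a single number token.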

<!--
████████  ████████ ██    ██  ██████
██     ██ ██       ███   ██ ██    ██
██     ██ ██       ████  ██ ██
██     ██ ██████   ██ ██ ██  ██████
██     ██ ██       ██  ████       ██
██     ██ ██       ██   ███ ██    ██
████████  ██       ██    ██  ██████
-->

<h3 id="tokenizer-definitions">
Definitions</h3>

	This section defines several terms used during the tokenization phase.

	<dl export>
		<dt><dfn>next input code point</dfn>
		<dd>
			The first <a>code point</a> in the <a>input stream</a> that has not yet been consumed.

		<dt><dfn>current input code point</dfn>
		<dd>
			The last <a>code point</a> to have been consumed.

		<dt><dfn>reconsume the current input code point</dfn>
		<dd>
			Push the <a>current input code point</a> back onto the front of the <a>input stream</a>,
			so that the next time you are instructed to consume the <a>next input code point</a>,
			it will instead reconsume the <a>current input code point</a>.

		<dt><dfn>EOF code point</dfn>
		<dd>
			A conceptual <a>code point</a> representing the end of the <a>input stream</a>.
			Whenever the <a>input stream</a> is empty,
			the <a>next input code point</a> is always an EOF code point.

		<dt><dfn export>digit</dfn>
		<dd>
			A <a>code point</a> between U+0030 DIGIT ZERO (0) and U+0039 DIGIT NINE (9) inclusive.

		<dt><dfn export>hex digit</dfn>
		<dd>
			A <a>digit</a>,
			or a <a>code point</a> between U+0041 LATIN CAPITAL LETTER A (A) and U+0046 LATIN CAPITAL LETTER F (F) inclusive,
			or a <a>code point</a> between U+0061 LATIN SMALL LETTER A (a) and U+0066 LATIN SMALL LETTER F (f) inclusive.

		<dt><dfn export>uppercase letter</dfn>
		<dd>
			A <a>code point</a> between U+0041 LATIN CAPITAL LETTER A (A) and U+005A LATIN CAPITAL LETTER Z (Z) inclusive.

		<dt><dfn export>lowercase letter</dfn>
		<dd>
			A <a>code point</a> between U+0061 LATIN SMALL LETTER A (a) and U+007A LATIN SMALL LETTER Z (z) inclusive.

		<dt><dfn export>letter</dfn>
		<dd>
			An <a>uppercase letter</a>
			or a <a>lowercase letter</a>.

		<dt><dfn export>non-ASCII code point</dfn>
		<dd>
			A <a>code point</a> with a value equal to or greater than U+0080 &lt;control>.

		<dt><dfn export>name-start code point</dfn>
		<dd>
			A <a>letter</a>,
			a <a>non-ASCII code point</a>,
			or U+005F LOW LINE (_).

		<dt><dfn export>name code point</dfn>
		<dd>
			A <a>name-start code point</a>,
			a <a>digit</a>,
			or U+002D HYPHEN-MINUS (-).

		<dt><dfn export>non-printable code point</dfn>
		<dd>
			A <a>code point</a> between U+0000 NULL and U+0008 BACKSPACE inclusive,
			or U+000B LINE TABULATION,
			or a <a>code point</a> between U+000E SHIFT OUT and U+001F INFORMATION SEPARATOR ONE inclusive,
			or U+007F DELETE.

		<dt><dfn export>newline</dfn>
		<dd>
			U+000A LINE FEED.
			<span class='note'>
				Note that U+000D CARRIAGE RETURN and U+000C FORM FEED are not included in this definition,
				as they are converted to U+000A LINE FEED during <a href="#input-preprocessing">preprocessing</a>.
			</span>

		<dt><dfn export>whitespace</dfn>
		<dd>A <a>newline</a>, U+0009 CHARACTER TABULATION, or U+0020 SPACE.

		<dt><dfn export>maximum allowed code point</dfn>
		<dd>The greatest <a>code point</a> defined by Unicode: U+10FFFF.

		<dt><dfn export>identifier</dfn>
		<dd>
			A portion of the CSS source that has the same syntax as an <<ident-token>>.
			Also appears in <<at-keyword-token>>,
			<<function-token>>,
			<<hash-token>> with the "id" type flag,
			and the unit of <<dimension-token>>.

		<dt><dfn>representation</dfn>
		<dd>
			The <a>representation</a> of a token
			is the subsequence of the <a>input stream</a>
			consumed by the invocation of the <a>consume a token</a> algorithm
			that produced it.
			This is preserved for a few algorithms that rely on subtle details of the input text,
			which a simple "re-serialization" of the tokens might disturb.

			The <a>representation</a> is only consumed by internal algorithms,
			and never directly exposed,
			so it's not actually required to preserve the exact text;
			equivalent methods,
			such as associating each token with offsets into the source text,
			also suffice.

			Note: In particular, the <a>representation</a> preserves details
			such as whether .009 was written as ''.009'' or ''9e-3'',
			and whether a character was written literally
			or as a CSS escape.
			The former is necessary to properly parse <<urange>> productions;
			the latter is basically an accidental leak of the tokenizing abstraction,
			but allowed because it makes the implementation easier to define.

			If a token is ever produced by an algorithm directly,
			rather than through the tokenization algorithm in this specification,
			its representation is the empty string.
	</dl>
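
	The code point classes defined above translate directly into small predicates,
	which the tokenizer algorithms in the next section consult constantly.
	A non-normative sketch in Python (the function names are ours):

```python
def is_digit(c):      return "0" <= c <= "9"
def is_hex_digit(c):  return is_digit(c) or "a" <= c <= "f" or "A" <= c <= "F"
def is_letter(c):     return "a" <= c <= "z" or "A" <= c <= "Z"
def is_non_ascii(c):  return ord(c) >= 0x80
def is_name_start(c): return is_letter(c) or is_non_ascii(c) or c == "_"
def is_name(c):       return is_name_start(c) or is_digit(c) or c == "-"
def is_whitespace(c): return c in "\n\t "  # after preprocessing, only these

def is_non_printable(c):
    cp = ord(c)
    return cp <= 0x08 or cp == 0x0B or 0x0E <= cp <= 0x1F or cp == 0x7F
```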

<!--
████████  ███████  ██    ██ ████████ ██    ██ ████ ████████ ████████ ████████
   ██    ██     ██ ██   ██  ██       ███   ██  ██       ██  ██       ██     ██
   ██    ██     ██ ██  ██   ██       ████  ██  ██      ██   ██       ██     ██
   ██    ██     ██ █████    ██████   ██ ██ ██  ██     ██    ██████   ████████
   ██    ██     ██ ██  ██   ██       ██  ████  ██    ██     ██       ██   ██
   ██    ██     ██ ██   ██  ██       ██   ███  ██   ██      ██       ██    ██
   ██     ███████  ██    ██ ████████ ██    ██ ████ ████████ ████████ ██     ██
-->

<h3 id="tokenizer-algorithms">
Tokenizer Algorithms</h3>

	The algorithms defined in this section transform a stream of <a>code points</a> into a stream of tokens.

<h4 id="consume-token">
Consume a token</h4>

	This section describes how to <dfn>consume a token</dfn> from a stream of <a>code points</a>.
	It will return a single token of any type.

	<a>Consume comments</a>.

	Consume the <a>next input code point</a>.

	<dl>
		<dt><a>whitespace</a>
		<dd>
			Consume as much <a>whitespace</a> as possible.
			Return a <<whitespace-token>>.

		<dt>U+0022 QUOTATION MARK (")
		<dd>
			<a>Consume a string token</a>
			and return it.

		<dt>U+0023 NUMBER SIGN (#)
		<dd>
			If the <a>next input code point</a> is a <a>name code point</a>
			or the <a lt="next input code point">next two input code points</a>
			<a>are a valid escape</a>,
			then:

			<ol>
				<li>
					Create a <<hash-token>>.

				<li>
					If the <a lt="next input code point">next 3 input code points</a> <a>would start an identifier</a>,
					set the <<hash-token>>’s type flag to "id".

				<li>
					<a>Consume a name</a>,
					and set the <<hash-token>>’s value to the returned string.

				<li>
					Return the <<hash-token>>.
			</ol>

			Otherwise,
			return a <<delim-token>>
			with its value set to the <a>current input code point</a>.

		<dt>U+0027 APOSTROPHE (&apos;)
		<dd>
			<a>Consume a string token</a>
			and return it.

		<dt>U+0028 LEFT PARENTHESIS (()
		<dd>
			Return a <a href="#tokendef-open-paren">&lt;(-token></a>.

		<dt>U+0029 RIGHT PARENTHESIS ())
		<dd>
			Return a <a href="#tokendef-close-paren">&lt;)-token></a>.

		<dt>U+002B PLUS SIGN (+)
		<dd>
			If the input stream <a>starts with a number</a>,
			<a>reconsume the current input code point</a>,
			<a>consume a numeric token</a>,
			and return it.

			Otherwise,
			return a <<delim-token>>
			with its value set to the <a>current input code point</a>.

		<dt>U+002C COMMA (,)
		<dd>
			Return a <<comma-token>>.

		<dt>U+002D HYPHEN-MINUS (-)
		<dd>
			If the input stream <a>starts with a number</a>,
			<a>reconsume the current input code point</a>,
			<a>consume a numeric token</a>,
			and return it.

			Otherwise,
			if the <a lt="next input code point">next 2 input code points</a> are
			U+002D HYPHEN-MINUS
			U+003E GREATER-THAN SIGN
			(->),
			consume them
			and return a <<CDC-token>>.

			Otherwise,
			if the input stream <a>starts with an identifier</a>,
			<a>reconsume the current input code point</a>,
			<a>consume an ident-like token</a>,
			and return it.

			Otherwise,
			return a <<delim-token>>
			with its value set to the <a>current input code point</a>.

		<dt>U+002E FULL STOP (.)
		<dd>
			If the input stream <a>starts with a number</a>,
			<a>reconsume the current input code point</a>,
			<a>consume a numeric token</a>,
			and return it.

			Otherwise,
			return a <<delim-token>>
			with its value set to the <a>current input code point</a>.

		<dt>U+003A COLON (:)
		<dd>
			Return a <<colon-token>>.

		<dt>U+003B SEMICOLON (;)
		<dd>
			Return a <<semicolon-token>>.

		<dt>U+003C LESS-THAN SIGN (&lt;)
		<dd>
			If the <a lt="next input code point">next 3 input code points</a> are
			U+0021 EXCLAMATION MARK
			U+002D HYPHEN-MINUS
			U+002D HYPHEN-MINUS
			(!--),
			consume them
			and return a <<CDO-token>>.

			Otherwise,
			return a <<delim-token>>
			with its value set to the <a>current input code point</a>.

		<dt>U+0040 COMMERCIAL AT (@)
		<dd>
			If the <a lt="next input code point">next 3 input code points</a>
			<a>would start an identifier</a>,
			<a>consume a name</a>,
			create an <<at-keyword-token>> with its value set to the returned value,
			and return it.

			Otherwise,
			return a <<delim-token>>
			with its value set to the <a>current input code point</a>.

		<dt>U+005B LEFT SQUARE BRACKET ([)
		<dd>
			Return a <a href="#tokendef-open-square">&lt;[-token></a>.

		<dt>U+005C REVERSE SOLIDUS (\)
		<dd>
			If the input stream <a>starts with a valid escape</a>,
			<a>reconsume the current input code point</a>,
			<a>consume an ident-like token</a>,
			and return it.

			Otherwise,
			this is a <a>parse error</a>.
			Return a <<delim-token>>
			with its value set to the <a>current input code point</a>.

		<dt>U+005D RIGHT SQUARE BRACKET (])
		<dd>
			Return a <a href="#tokendef-close-square">&lt;]-token></a>.

		<dt>U+007B LEFT CURLY BRACKET ({)
		<dd>
			Return a <a href="#tokendef-open-curly">&lt;{-token></a>.

		<dt>U+007D RIGHT CURLY BRACKET (})
		<dd>
			Return a <a href="#tokendef-close-curly">&lt;}-token></a>.

		<dt><a>digit</a>
		<dd>
			<a>Reconsume the current input code point</a>,
			<a>consume a numeric token</a>,
			and return it.

		<dt><a>name-start code point</a>
		<dd>
			<a>Reconsume the current input code point</a>,
			<a>consume an ident-like token</a>,
			and return it.

		<dt>EOF
		<dd>
			Return an <<EOF-token>>.

		<dt>anything else
		<dd>
			Return a <<delim-token>>
			with its value set to the <a>current input code point</a>.
	</dl>


<h4 id="consume-comment">
Consume comments</h4>

	This section describes how to <dfn>consume comments</dfn> from a stream of <a>code points</a>.
	It returns nothing.

	If the <a lt="next input code point">next two input code points</a> are
	U+002F SOLIDUS (/) followed by a U+002A ASTERISK (*),
	consume them
	and all following <a>code points</a> up to and including
	the first U+002A ASTERISK (*) followed by a U+002F SOLIDUS (/),
	or up to an EOF code point.
	Return to the start of this step.

	If the preceding paragraph ended by consuming an EOF code point,
	this is a <a>parse error</a>.

	Return nothing.
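	Non-normatively, this loop can be sketched in Python, with the stream modeled as a string and EOF as its end:

```python
def consume_comments(stream):
    """Non-normative sketch of "consume comments".

    Returns the index of the first code point after any leading comments,
    plus a flag indicating an EOF-in-comment parse error.
    """
    i = 0
    parse_error = False
    while stream.startswith('/*', i):
        end = stream.find('*/', i + 2)
        if end == -1:
            # Comment runs to EOF: consume everything; parse error.
            i = len(stream)
            parse_error = True
        else:
            i = end + 2  # consume up to and including the "*/"
    return i, parse_error
```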


<h4 id="consume-numeric-token">
Consume a numeric token</h4>

	This section describes how to <dfn>consume a numeric token</dfn> from a stream of <a>code points</a>.
	It returns either a <<number-token>>, <<percentage-token>>, or <<dimension-token>>.

	<a>Consume a number</a> and let |number| be the result.

	If the <a lt="next input code point">next 3 input code points</a> <a>would start an identifier</a>,
	then:

	<ol>
		<li>Create a <<dimension-token>> with the same value and type flag as |number|,
			and a unit set initially to the empty string.

		<li><a>Consume a name</a>.
			Set the <<dimension-token>>’s unit to the returned value.

		<li>Return the <<dimension-token>>.
	</ol>

	Otherwise,
	if the <a>next input code point</a> is U+0025 PERCENTAGE SIGN (%),
	consume it.
	Create a <<percentage-token>> with the same value as |number|,
	and return it.

	Otherwise,
	create a <<number-token>> with the same value and type flag as |number|,
	and return it.
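	The dispatch above can be sketched non-normatively in Python. For brevity this sketch matches the number with a regular expression and restricts units to ASCII identifiers without escapes, a simplification of the full <a>would start an identifier</a> check:

```python
import re

# Mirrors the shape produced by "consume a number".
NUMBER = re.compile(r'[+-]?(?:[0-9]+(?:\.[0-9]+)?|\.[0-9]+)(?:[eE][+-]?[0-9]+)?')
# Simplified: an ASCII-only approximation of "would start an identifier".
ASCII_UNIT = re.compile(r'-{0,2}[A-Za-z_][A-Za-z0-9_-]*')

def consume_numeric_token(s, i):
    """Non-normative sketch; assumes s[i:] starts with a number."""
    m = NUMBER.match(s, i)
    repr_ = m.group(0)
    i = m.end()
    value = float(repr_)
    type_ = 'number' if ('.' in repr_ or 'e' in repr_ or 'E' in repr_) else 'integer'
    u = ASCII_UNIT.match(s, i)
    if u:  # a unit follows: dimension-token
        return ('dimension', value, type_, u.group(0)), u.end()
    if s[i:i + 1] == '%':  # percentage-token
        return ('percentage', value), i + 1
    return ('number', value, type_), i
```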


<h4 id="consume-ident-like-token">
Consume an ident-like token</h4>

	This section describes how to <dfn>consume an ident-like token</dfn> from a stream of <a>code points</a>.
	It returns an <<ident-token>>, <<function-token>>, <<url-token>>, or <<bad-url-token>>.

	<a>Consume a name</a>, and let |string| be the result.

	If |string|’s value is an <a>ASCII case-insensitive</a> match for "url",
	and the <a>next input code point</a> is U+0028 LEFT PARENTHESIS ((),
	consume it.
	While the <a lt="next input code point">next two input code points</a> are <a>whitespace</a>,
	consume the <a>next input code point</a>.
	If the <a lt="next input code point">next one or two input code points</a> are U+0022 QUOTATION MARK ("),
	U+0027 APOSTROPHE (&apos;),
	or <a>whitespace</a> followed by U+0022 QUOTATION MARK (") or U+0027 APOSTROPHE (&apos;),
	then create a <<function-token>>
	with its value set to |string|
	and return it.
	Otherwise,
	<a>consume a url token</a>,
	and return it.

	Otherwise,
	if the <a>next input code point</a> is U+0028 LEFT PARENTHESIS ((),
	consume it.
	Create a <<function-token>>
	with its value set to |string|
	and return it.

	Otherwise,
	create an <<ident-token>>
	with its value set to |string|
	and return it.


<h4 id="consume-string-token">
Consume a string token</h4>

	This section describes how to <dfn>consume a string token</dfn> from a stream of <a>code points</a>.
	It returns either a <<string-token>> or <<bad-string-token>>.

	This algorithm may be called with an <var>ending code point</var>,
	which denotes the <a>code point</a> that ends the string.
	If an <var>ending code point</var> is not specified,
	the <a>current input code point</a> is used.

	Initially create a <<string-token>> with its value set to the empty string.

	Repeatedly consume the <a>next input code point</a> from the stream:

	<dl>
		<dt><var>ending code point</var>
		<dd>
			Return the <<string-token>>.

		<dt>EOF
		<dd>
			This is a <a>parse error</a>.
			Return the <<string-token>>.

		<dt><a>newline</a>
		<dd>
			This is a <a>parse error</a>.
			<a>Reconsume the current input code point</a>,
			create a <<bad-string-token>>, and return it.

		<dt>U+005C REVERSE SOLIDUS (\)
		<dd>
			If the <a>next input code point</a> is EOF,
			do nothing.

			Otherwise,
			if the <a>next input code point</a> is a newline,
			consume it.

			Otherwise,
			<span class=note>(the stream <a>starts with a valid escape</a>)</span>
			<a>consume an escaped code point</a>
			and append the returned <a>code point</a> to the <<string-token>>’s value.

		<dt>anything else
		<dd>
			Append the <a>current input code point</a> to the <<string-token>>’s value.
	</dl>
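	A non-normative Python sketch of this loop, with EOF modeled as the end of the string. Escapes are simplified to the single-code-point form; the full algorithm defers to <a>consume an escaped code point</a>:

```python
def consume_string(stream, i, ending):
    """Sketch of "consume a string token".

    Assumes the opening quote is already consumed; i points just past it.
    Returns (token, new index).
    """
    value = ''
    while True:
        if i >= len(stream):
            return ('string', value), i          # EOF: parse error, return token
        c = stream[i]
        i += 1
        if c == ending:
            return ('string', value), i          # ending code point
        if c == '\n':
            return ('bad-string', None), i - 1   # parse error; reconsume newline
        if c == '\\':
            if i >= len(stream):
                continue                         # "\" before EOF: do nothing
            if stream[i] == '\n':
                i += 1                           # escaped newline: consume it
            else:
                value += stream[i]               # simplified: "\x" yields "x"
                i += 1
        else:
            value += c
```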


<h4 id="consume-url-token">
Consume a url token</h4>

	This section describes how to <dfn>consume a url token</dfn> from a stream of <a>code points</a>.
	It returns either a <<url-token>> or a <<bad-url-token>>.

	Note: This algorithm assumes that the initial "url(" has already been consumed.
	This algorithm also assumes that it's being called to consume an "unquoted" value,
	like ''url(foo)''.
	A quoted value, like ''url("foo")'',
	is parsed as a <<function-token>>.
	<a>Consume an ident-like token</a> automatically handles this distinction;
	this algorithm shouldn't be called directly otherwise.

	<ol>
		<li>
			Initially create a <<url-token>> with its value set to the empty string.

		<li>
			Consume as much <a>whitespace</a> as possible.

		<li>
			Repeatedly consume the <a>next input code point</a> from the stream:

			<dl>
				<dt>U+0029 RIGHT PARENTHESIS ())
				<dd>
					Return the <<url-token>>.

				<dt>EOF
				<dd>
					This is a <a>parse error</a>.
					Return the <<url-token>>.

				<dt><a>whitespace</a>
				<dd>
					Consume as much <a>whitespace</a> as possible.
					If the <a>next input code point</a> is U+0029 RIGHT PARENTHESIS ()) or EOF,
					consume it and return the <<url-token>>
					(if EOF was encountered, this is a <a>parse error</a>);
					otherwise,
					<a>consume the remnants of a bad url</a>,
					create a <<bad-url-token>>,
					and return it.

				<dt>U+0022 QUOTATION MARK (")
				<dt>U+0027 APOSTROPHE (&apos;)
				<dt>U+0028 LEFT PARENTHESIS (()
				<dt><a>non-printable code point</a>
				<dd>
					This is a <a>parse error</a>.
					<a>Consume the remnants of a bad url</a>,
					create a <<bad-url-token>>,
					and return it.

				<dt>U+005C REVERSE SOLIDUS (\)
				<dd>
					If the stream <a>starts with a valid escape</a>,
					<a>consume an escaped code point</a>
					and append the returned <a>code point</a> to the <<url-token>>’s value.

					Otherwise,
					this is a <a>parse error</a>.
					<a>Consume the remnants of a bad url</a>,
					create a <<bad-url-token>>,
					and return it.

				<dt>anything else
				<dd>
					Append the <a>current input code point</a>
					to the <<url-token>>’s value.
			</dl>
	</ol>


<h4 id="consume-escaped-code-point">
Consume an escaped code point</h4>

	This section describes how to <dfn>consume an escaped code point</dfn>.
	It assumes that the U+005C REVERSE SOLIDUS (\) has already been consumed
	and that the next input code point has already been verified
	to be part of a valid escape.
	It will return a <a>code point</a>.

	Consume the <a>next input code point</a>.

	<dl>
		<dt><a>hex digit</a>
		<dd>
			Consume as many <a>hex digits</a> as possible, but no more than 5.
			<span class='note'>Note that this means 1-6 hex digits have been consumed in total.</span>
			If the <a>next input code point</a> is
			<a>whitespace</a>,
			consume it as well.
			Interpret the <a>hex digits</a> as a hexadecimal number.
			If this number is zero,
			or is for a <a>surrogate</a>,
			or is greater than the <a>maximum allowed code point</a>,
			return U+FFFD REPLACEMENT CHARACTER (�).
			Otherwise, return the <a>code point</a> with that value.

		<dt>EOF
		<dd>
			This is a <a>parse error</a>.
			Return U+FFFD REPLACEMENT CHARACTER (�).

		<dt>anything else
		<dd>
			Return the <a>current input code point</a>.
	</dl>
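	Non-normatively, in Python (with EOF modeled as the end of the string):

```python
def consume_escaped(stream, i):
    """Sketch of "consume an escaped code point".

    The "\\" is assumed already consumed, and stream[i:] verified to be
    part of a valid escape.  Returns (code point, new index).
    """
    HEX = '0123456789abcdefABCDEF'
    if i >= len(stream):
        return '\ufffd', i  # EOF: parse error
    c = stream[i]
    if c in HEX:
        j = i + 1
        while j < len(stream) and j - i < 6 and stream[j] in HEX:
            j += 1  # 1-6 hex digits in total
        n = int(stream[i:j], 16)
        # After preprocessing, whitespace is space, tab, or newline.
        if j < len(stream) and stream[j] in ' \t\n':
            j += 1
        if n == 0 or 0xD800 <= n <= 0xDFFF or n > 0x10FFFF:
            return '\ufffd', j  # zero, surrogate, or out of range
        return chr(n), j
    return c, i + 1
```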


<h4 id="starts-with-a-valid-escape">
Check if two code points are a valid escape</h4>

	This section describes how to <dfn lt="check if two code points are a valid escape|are a valid escape|starts with a valid escape">check if two code points are a valid escape</dfn>.
	The algorithm described here can be called explicitly with two <a>code points</a>,
	or can be called with the input stream itself.
	In the latter case, the two <a>code points</a> in question are
	the <a>current input code point</a>
	and the <a>next input code point</a>,
	in that order.

	Note: This algorithm will not consume any additional <a>code points</a>.

	If the first <a>code point</a> is not U+005C REVERSE SOLIDUS (\),
	return false.

	Otherwise,
	if the second <a>code point</a> is a <a>newline</a>,
	return false.

	Otherwise, return true.
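	Equivalently, in non-normative Python, with EOF modeled as the empty string (after preprocessing, U+000A is the only newline left in the stream):

```python
def are_a_valid_escape(first, second):
    """Two code points are a valid escape iff the first is "\\"
    and the second is not a newline (EOF is not a newline)."""
    return first == '\\' and second != '\n'
```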


<h4 id="would-start-an-identifier">
Check if three code points would start an identifier</h4>

	This section describes how to <dfn lt="check if three code points would start an identifier|starts with an identifier|start with an identifier|would start an identifier">check if three code points would start an <a>identifier</a></dfn>.
	The algorithm described here can be called explicitly with three <a>code points</a>,
	or can be called with the input stream itself.
	In the latter case, the three <a>code points</a> in question are
	the <a>current input code point</a>
	and the <a lt="next input code point">next two input code points</a>,
	in that order.

	Note: This algorithm will not consume any additional <a>code points</a>.

	Look at the first <a>code point</a>:

	<dl>
		<dt>U+002D HYPHEN-MINUS
		<dd>
			If the second <a>code point</a> is a <a>name-start code point</a>
			or a U+002D HYPHEN-MINUS,
			or the second and third <a>code points</a> <a>are a valid escape</a>,
			return true.
			Otherwise, return false.

		<dt><a>name-start code point</a>
		<dd>
			Return true.

		<dt>U+005C REVERSE SOLIDUS (\)
		<dd>
			If the first and second <a>code points</a> <a>are a valid escape</a>,
			return true.
			Otherwise, return false.

		<dt>anything else
		<dd>
			Return false.
	</dl>
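	This check can be sketched non-normatively in Python; the helper encodes the definition of a <a>name-start code point</a> (a letter, a non-ASCII code point, or U+005F LOW LINE), and EOF is modeled as the empty string:

```python
def _is_name_start(cp):
    return bool(cp) and ('A' <= cp <= 'Z' or 'a' <= cp <= 'z'
                         or ord(cp) >= 0x80 or cp == '_')

def _valid_escape(c1, c2):
    return c1 == '\\' and c2 != '\n'

def would_start_identifier(c1, c2, c3):
    """Sketch of "check if three code points would start an identifier"."""
    if c1 == '-':
        return _is_name_start(c2) or c2 == '-' or _valid_escape(c2, c3)
    if _is_name_start(c1):
        return True
    if c1 == '\\':
        return _valid_escape(c1, c2)
    return False
```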

<h4 id="starts-with-a-number">
Check if three code points would start a number</h4>

	This section describes how to <dfn lt="check if three code points would start a number|starts with a number|start with a number|would start a number">check if three code points would start a number</dfn>.
	The algorithm described here can be called explicitly with three <a>code points</a>,
	or can be called with the input stream itself.
	In the latter case, the three <a>code points</a> in question are
	the <a>current input code point</a>
	and the <a lt="next input code point">next two input code points</a>,
	in that order.

	Note: This algorithm will not consume any additional <a>code points</a>.

	Look at the first <a>code point</a>:

	<dl>
		<dt>U+002B PLUS SIGN (+)
		<dt>U+002D HYPHEN-MINUS (-)
		<dd>
			If the second <a>code point</a>
			is a <a>digit</a>,
			return true.

			Otherwise,
			if the second <a>code point</a>
			is a U+002E FULL STOP (.)
			and the third <a>code point</a>
			is a <a>digit</a>,
			return true.

			Otherwise, return false.

		<dt>U+002E FULL STOP (.)
		<dd>
			If the second <a>code point</a>
			is a <a>digit</a>,
			return true.
			Otherwise, return false.

		<dt><a>digit</a>
		<dd>
			Return true.

		<dt>anything else
		<dd>
			Return false.
	</dl>
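	Or, as a non-normative Python sketch (EOF modeled as the empty string; a <a>digit</a> is U+0030 to U+0039 only, so <code>str.isdigit()</code> is deliberately avoided):

```python
def _is_digit(cp):
    return bool(cp) and '0' <= cp <= '9'

def would_start_number(c1, c2, c3):
    """Sketch of "check if three code points would start a number"."""
    if c1 in ('+', '-'):
        return _is_digit(c2) or (c2 == '.' and _is_digit(c3))
    if c1 == '.':
        return _is_digit(c2)
    return _is_digit(c1)
```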


<h4 id="consume-name">
Consume a name</h4>

	This section describes how to <dfn>consume a name</dfn> from a stream of <a>code points</a>.
	It returns a string containing
	the largest name that can be formed from adjacent <a>code points</a> in the stream, starting from the first.

	Note: This algorithm does not do the verification of the first few <a>code points</a>
	that are necessary to ensure the returned <a>code points</a> would constitute an <<ident-token>>.
	If that is the intended use,
	ensure that the stream <a>starts with an identifier</a>
	before calling this algorithm.

	Let <var>result</var> initially be an empty string.

	Repeatedly consume the <a>next input code point</a> from the stream:

	<dl>
		<dt><a>name code point</a>
		<dd>
			Append the <a>code point</a> to <var>result</var>.

		<dt>the stream <a>starts with a valid escape</a>
		<dd>
			<a>Consume an escaped code point</a>.
			Append the returned <a>code point</a> to <var>result</var>.

		<dt>anything else
		<dd>
			<a>Reconsume the current input code point</a>.
			Return <var>result</var>.
	</dl>
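	A non-normative Python sketch, with EOF as the end of the string. Escapes are simplified to the single-code-point form; the full algorithm defers to <a>consume an escaped code point</a>:

```python
def consume_name(stream, i):
    """Sketch of "consume a name".  Returns (name, new index)."""
    result = ''
    while i < len(stream):
        c = stream[i]
        if ('0' <= c <= '9' or 'A' <= c <= 'Z' or 'a' <= c <= 'z'
                or c in '-_' or ord(c) >= 0x80):  # name code point
            result += c
            i += 1
        elif c == '\\' and i + 1 < len(stream) and stream[i + 1] != '\n':
            result += stream[i + 1]  # valid escape, simplified: "\x" yields "x"
            i += 2
        else:
            break  # "reconsume": leave the code point for the caller
    return result, i
```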


<h4 id="consume-number">
Consume a number</h4>

	This section describes how to <dfn>consume a number</dfn> from a stream of <a>code points</a>.
	It returns a numeric |value|,
	and a |type| which is either "integer" or "number".

	Note: This algorithm does not do the verification of the first few <a>code points</a>
	that are necessary to ensure a number can be obtained from the stream.
	Ensure that the stream <a>starts with a number</a>
	before calling this algorithm.

	Execute the following steps in order:

	<ol>
		<li>
			Initially set <var>type</var> to "integer".
			Let |repr| be the empty string.

		<li>
			If the <a>next input code point</a> is U+002B PLUS SIGN (+) or U+002D HYPHEN-MINUS (-),
			consume it and append it to <var>repr</var>.

		<li>
			While the <a>next input code point</a> is a <a>digit</a>,
			consume it and append it to <var>repr</var>.

		<li>
			If the <a lt="next input code point">next 2 input code points</a> are
			U+002E FULL STOP (.) followed by a <a>digit</a>,
			then:

			<ol>
				<li>Consume them.
				<li>Append them to <var>repr</var>.
				<li>Set <var>type</var> to "number".
				<li>While the <a>next input code point</a> is a <a>digit</a>, consume it and append it to <var>repr</var>.
			</ol>

		<li>
			If the <a lt="next input code point">next 2 or 3 input code points</a> are
			U+0045 LATIN CAPITAL LETTER E (E) or U+0065 LATIN SMALL LETTER E (e),
			optionally followed by U+002D HYPHEN-MINUS (-) or U+002B PLUS SIGN (+),
			followed by a <a>digit</a>,
			then:

			<ol>
				<li>Consume them.
				<li>Append them to <var>repr</var>.
				<li>Set <var>type</var> to "number".
				<li>While the <a>next input code point</a> is a <a>digit</a>, consume it and append it to <var>repr</var>.
			</ol>

		<li>
			<a lt="convert a string to a number">Convert <var>repr</var> to a number</a>,
			and set the <var>value</var> to the returned value.

		<li>
			Return <var>value</var> and <var>type</var>.
	</ol>
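	Steps 2 through 5 build up a repetition that can be captured, non-normatively, with a single regular expression in Python (the precondition that the stream <a>starts with a number</a> is assumed, not checked):

```python
import re

NUMBER_RE = re.compile(r"""
    [+-]?                 # step 2: optional sign
    [0-9]*                # step 3: integer digits
    (?:\.[0-9]+)?         # step 4: decimal point plus fraction digits
    (?:[eE][+-]?[0-9]+)?  # step 5: exponent
""", re.VERBOSE)

def consume_number(stream, i):
    """Sketch of "consume a number".  Returns (value, type, new index)."""
    m = NUMBER_RE.match(stream, i)
    repr_ = m.group(0)
    is_int = repr_.lstrip('+-').isdigit()  # no "." or "e" consumed
    value = int(repr_) if is_int else float(repr_)
    return value, 'integer' if is_int else 'number', m.end()
```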


<h4 id="convert-string-to-number">
Convert a string to a number</h4>

	This section describes how to <dfn>convert a string to a number</dfn>.
	It returns a number.

	Note: This algorithm does not do any verification to ensure that the string contains only a number.
	Ensure that the string contains only a valid CSS number
	before calling this algorithm.

	Divide the string into seven components,
	in order from left to right:

	<ol>
		<li>A <b>sign</b>:
			a single U+002B PLUS SIGN (+) or U+002D HYPHEN-MINUS (-),
			or the empty string.
			Let <var>s</var> be the number -1 if the sign is U+002D HYPHEN-MINUS (-);
			otherwise, let <var>s</var> be the number 1.

		<li>An <b>integer part</b>:
			zero or more <a>digits</a>.
			If there is at least one digit,
			let <var>i</var> be the number formed by interpreting the digits as a base-10 integer;
			otherwise, let <var>i</var> be the number 0.

		<li>A <b>decimal point</b>:
			a single U+002E FULL STOP (.),
			or the empty string.

		<li>A <b>fractional part</b>:
			zero or more <a>digits</a>.
			If there is at least one digit,
			let <var>f</var> be the number formed by interpreting the digits as a base-10 integer
			and <var>d</var> be the number of digits;
			otherwise, let <var>f</var> and <var>d</var> be the number 0.

		<li>An <b>exponent indicator</b>:
			a single U+0045 LATIN CAPITAL LETTER E (E) or U+0065 LATIN SMALL LETTER E (e),
			or the empty string.

		<li>An <b>exponent sign</b>:
			a single U+002B PLUS SIGN (+) or U+002D HYPHEN-MINUS (-),
			or the empty string.
			Let <var>t</var> be the number -1 if the sign is U+002D HYPHEN-MINUS (-);
			otherwise, let <var>t</var> be the number 1.

		<li>An <b>exponent</b>:
			zero or more <a>digits</a>.
			If there is at least one digit,
			let <var>e</var> be the number formed by interpreting the digits as a base-10 integer;
			otherwise, let <var>e</var> be the number 0.
	</ol>

	Return the number <code>s·(i + f·10<sup>-d</sup>)·10<sup>te</sup></code>.
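	The seven-way decomposition and the final formula can be expressed non-normatively in Python:

```python
import re

# The seven components, left to right: sign, integer part, decimal point,
# fractional part, exponent indicator, exponent sign, exponent.
NUM_PARTS = re.compile(r'^([+-]?)([0-9]*)(\.?)([0-9]*)([eE]?)([+-]?)([0-9]*)$')

def convert_string_to_number(repr_):
    """Sketch of "convert a string to a number"; assumes a valid CSS number."""
    sign, integer, _, frac, _, exp_sign, exp = NUM_PARTS.match(repr_).groups()
    s = -1 if sign == '-' else 1
    i = int(integer) if integer else 0
    f = int(frac) if frac else 0
    d = len(frac)
    t = -1 if exp_sign == '-' else 1
    e = int(exp) if exp else 0
    return s * (i + f * 10**-d) * 10**(t * e)
```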


<h4 id="consume-remnants-of-bad-url">
Consume the remnants of a bad url</h4>

	This section describes how to <dfn>consume the remnants of a bad url</dfn> from a stream of <a>code points</a>,
	"cleaning up" after the tokenizer realizes that it's in the middle of a <<bad-url-token>> rather than a <<url-token>>.
	It returns nothing;
	its sole use is to consume enough of the input stream to reach a recovery point
	where normal tokenizing can resume.

	Repeatedly consume the <a>next input code point</a> from the stream:

	<dl>
		<dt>U+0029 RIGHT PARENTHESIS ())
		<dt>EOF
		<dd>
			Return.

		<dt>the input stream <a>starts with a valid escape</a>
		<dd>
			<a>Consume an escaped code point</a>.
			<span class='note'>This allows an escaped right parenthesis ("\)") to be encountered without ending the <<bad-url-token>>.
				This is otherwise identical to the "anything else" clause.</span>

		<dt>anything else
		<dd>
			Do nothing.
	</dl>
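	Non-normatively, in Python (EOF modeled as the end of the string):

```python
def consume_bad_url_remnants(stream, i):
    """Sketch of "consume the remnants of a bad url".

    Returns the index at which normal tokenizing can resume.
    """
    while i < len(stream):
        c = stream[i]
        i += 1
        if c == ')':
            return i
        if c == '\\' and i < len(stream) and stream[i] != '\n':
            i += 1  # valid escape: an escaped ")" does not end the bad url
    return i  # EOF
```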


<!--
████████     ███    ████████   ██████  ████████ ████████
██     ██   ██ ██   ██     ██ ██    ██ ██       ██     ██
██     ██  ██   ██  ██     ██ ██       ██       ██     ██
████████  ██     ██ ████████   ██████  ██████   ████████
██        █████████ ██   ██         ██ ██       ██   ██
██        ██     ██ ██    ██  ██    ██ ██       ██    ██
██        ██     ██ ██     ██  ██████  ████████ ██     ██
-->

<h2 id="parsing">
Parsing</h2>

	The input to the parsing stage is a stream or list of tokens from the tokenization stage.
	The output depends on how the parser is invoked,
	as defined by the entry points listed later in this section.
	The parser output can consist of at-rules,
	qualified rules,
	and/or declarations.

	The parser's output is constructed according to the fundamental syntax of CSS,
	without regards for the validity of any specific item.
	Implementations may check the validity of items as they are returned by the various parser algorithms
	and treat the algorithm as returning nothing if the item was invalid according to the implementation's own grammar knowledge,
	or may construct a full tree as specified
	and "clean up" afterwards by removing any invalid items.

	The items that can appear in the tree are:

	<dl export dfn-for=CSS>
		<dt><dfn id="at-rule">at-rule</dfn>
		<dd>
			An at-rule has a name,
			a prelude consisting of a list of component values,
			and an optional block consisting of a simple {} block.

			Note: This specification places no limits on what an at-rule's block may contain.
			Individual at-rules must define whether they accept a block,
			and if so,
			how to parse it
			(preferably using one of the parser algorithms or entry points defined in this specification).

		<dt><dfn id="qualified-rule">qualified rule</dfn>
		<dd>
			A qualified rule has
			a prelude consisting of a list of component values,
			and a block consisting of a simple {} block.

			Note: Most qualified rules will be style rules,
			where the prelude is a selector [[SELECT]]
			and the block a <a lt="parse a list of declarations">list of declarations</a>.

		<dt><dfn id="declaration">declaration</dfn>
		<dd>
			A declaration has a name,
			a value consisting of a list of component values,
			and an <var>important</var> flag which is initially unset.

			Declarations are further categorized as "properties" or "descriptors",
			with the former typically appearing in <a>qualified rules</a>
			and the latter appearing in <a>at-rules</a>.
			(This categorization does not occur at the Syntax level;
			instead, it is a product of where the declaration appears,
			and is defined by the respective specifications defining the given rule.)

		<dt><dfn id="component-value">component value</dfn>
		<dd>
			A component value is one of the [=preserved tokens=],
			a [=function=],
			or a [=simple block=].

		<dt><dfn id="preserved-tokens">preserved tokens</dfn>
		<dd>
			Any token produced by the tokenizer
			except for <<function-token>>s,
			<a href="#tokendef-open-curly">&lt;{-token></a>s,
			<a href="#tokendef-open-paren">&lt;(-token></a>s,
			and <a href="#tokendef-open-square">&lt;[-token></a>s.

			Note: The non-[=preserved tokens=] listed above are always consumed into higher-level objects,
			either functions or simple blocks,
			and so never appear in any parser output themselves.

			Note: The tokens <a href="#tokendef-close-curly">&lt;}-token></a>s, <a href="#tokendef-close-paren">&lt;)-token></a>s, <a href="#tokendef-close-square">&lt;]-token></a>, <<bad-string-token>>, and <<bad-url-token>> are always parse errors,
			but they are preserved in the token stream by this specification to allow other specs,
			such as Media Queries,
			to define more fine-grained error-handling
			than just dropping an entire declaration or block.

		<dt><dfn id="function">function</dfn>
		<dd>
			A function has a name
			and a value consisting of a list of component values.

		<dt><dfn id="simple-block">simple block</dfn>
		<dd>
			A simple block has an associated token (either a <a href="#tokendef-open-square">&lt;[-token></a>, <a href="#tokendef-open-paren">&lt;(-token></a>, or <a href="#tokendef-open-curly">&lt;{-token></a>)
			and a value consisting of a list of component values.
	</dl>

<!--
████████     ███    ████ ██       ████████   ███████     ███    ████████
██     ██   ██ ██    ██  ██       ██     ██ ██     ██   ██ ██   ██     ██
██     ██  ██   ██   ██  ██       ██     ██ ██     ██  ██   ██  ██     ██
████████  ██     ██  ██  ██       ████████  ██     ██ ██     ██ ██     ██
██   ██   █████████  ██  ██       ██   ██   ██     ██ █████████ ██     ██
██    ██  ██     ██  ██  ██       ██    ██  ██     ██ ██     ██ ██     ██
██     ██ ██     ██ ████ ████████ ██     ██  ███████  ██     ██ ████████
-->

<h3 id='parser-diagrams'>
Parser Railroad Diagrams</h3>

	<em>This section is non-normative.</em>

	This section presents an informative view of the parser,
	in the form of railroad diagrams.

	These diagrams are <em>informative</em> and <em>incomplete</em>;
	they describe the grammar of "correct" stylesheets,
	but do not describe error-handling at all.
	They are provided solely to make it easier to get an intuitive grasp of the syntax.

	<dl>
		<dt id="stylesheet-diagram">Stylesheet
		<dd>
			<pre class='railroad'>
			Star:
				Choice: 3
					N: <CDO-token>
					N: <CDC-token>
					N: <whitespace-token>
					N: Qualified rule
					N: At-rule
			</pre>

		<dt id="rule-list-diagram">Rule list
		<dd>
			<pre class='railroad'>
			Star:
				Choice: 1
					N: <whitespace-token>
					N: Qualified rule
					N: At-rule
			</pre>

		<dt id="at-rule-diagram">At-rule
		<dd>
			<pre class='railroad'>
			N: <at-keyword-token>
			Star:
				N: Component value
			Choice:
				N: {} block
				T: ;
			</pre>

		<dt id="qualified-rule-diagram">Qualified rule
		<dd>
			<pre class='railroad'>
			Star:
				N: Component value
			N: {} block
			</pre>

		<dt id="declaration-list-diagram">Declaration list
		<dd>
			<pre class='railroad'>
			N: ws*
			Choice:
				Seq:
					Opt:
						N: Declaration
					Opt:
						Seq:
							T: ;
							N: Declaration list
				Seq:
					N: At-rule
					N: Declaration list
			</pre>

		<dt id="declaration-diagram">Declaration
		<dd>
			<pre class='railroad'>
			N: <ident-token>
			N: ws*
			T: :
			Star:
				N: Component value
			Opt: skip
				N: !important
			</pre>

		<dt id="!important-diagram">!important
		<dd>
			<pre class='railroad'>
			T: !
			N: ws*
			N: <ident-token "important">
			N: ws*
			</pre>

		<dt id="component-value-diagram">Component value
		<dd>
			<pre class='railroad'>
			Choice:
				N: Preserved token
				N: {} block
				N: () block
				N: [] block
				N: Function block
			</pre>


		<dt id="{}-block-diagram">{} block
		<dd>
			<pre class='railroad'>
			T: {
			Star:
				N: Component value
			T: }
			</pre>

		<dt id="()-block-diagram">() block
		<dd>
			<pre class='railroad'>
			T: (
			Star:
				N: Component value
			T: )
			</pre>

		<dt id="[]-block-diagram">[] block
		<dd>
			<pre class='railroad'>
			T: [
			Star:
				N: Component value
			T: ]
			</pre>

		<dt id="function-block-diagram">Function block
		<dd>
			<pre class='railroad'>
			N: <function-token>
			Star:
				N: Component value
			T: )
			</pre>
	</dl>

<!--
████████  ████████ ██    ██  ██████
██     ██ ██       ███   ██ ██    ██
██     ██ ██       ████  ██ ██
██     ██ ██████   ██ ██ ██  ██████
██     ██ ██       ██  ████       ██
██     ██ ██       ██   ███ ██    ██
████████  ██       ██    ██  ██████
-->

<h3 id="parser-definitions">
Definitions</h3>

	<dl>
		<dt><dfn>current input token</dfn>
		<dd>
			The token or <a>component value</a> currently being operated on, from the list of tokens produced by the tokenizer.

		<dt><dfn>next input token</dfn>
		<dd>
			The token or <a>component value</a> following the <a>current input token</a> in the list of tokens produced by the tokenizer.
			If there isn't a token following the <a>current input token</a>,
			the <a>next input token</a> is an <<EOF-token>>.

		<dt><dfn><<EOF-token>></dfn>
		<dd>
			A conceptual token representing the end of the list of tokens.
			Whenever the list of tokens is empty,
			the <a>next input token</a> is always an <<EOF-token>>.

		<dt><dfn>consume the next input token</dfn>
		<dd>
			Let the <a>current input token</a> be the current <a>next input token</a>,
			adjusting the <a>next input token</a> accordingly.

		<dt><dfn>reconsume the current input token</dfn>
		<dd>
			The next time an algorithm instructs you to <a>consume the next input token</a>,
			instead do nothing
			(retain the <a>current input token</a> unchanged).
	</dl>

<!--
████████ ██    ██ ████████ ████████  ██    ██       ████████   ███████  ████ ██    ██ ████████  ██████
██       ███   ██    ██    ██     ██  ██  ██        ██     ██ ██     ██  ██  ███   ██    ██    ██    ██
██       ████  ██    ██    ██     ██   ████         ██     ██ ██     ██  ██  ████  ██    ██    ██
██████   ██ ██ ██    ██    ████████     ██          ████████  ██     ██  ██  ██ ██ ██    ██     ██████
██       ██  ████    ██    ██   ██      ██          ██        ██     ██  ██  ██  ████    ██          ██
██       ██   ███    ██    ██    ██     ██          ██        ██     ██  ██  ██   ███    ██    ██    ██
████████ ██    ██    ██    ██     ██    ██          ██         ███████  ████ ██    ██    ██     ██████
-->

<h3 id="parser-entry-points">
Parser Entry Points</h3>

	The algorithms defined in this section produce high-level CSS objects
	from lower-level objects.
	They assume that they are invoked on a token stream,
	but they may also be invoked on a string;
	if so,
	first perform <a href="#input-preprocessing">input preprocessing</a>
	to produce a <a>code point</a> stream,
	then perform <a href="#tokenization">tokenization</a>
	to produce a token stream.

	"<a>Parse a stylesheet</a>" can also be invoked on a byte stream,
	in which case <a href="#input-byte-stream">The input byte stream</a>
	defines how to decode it into Unicode.

	Note: This specification does not define how a byte stream is decoded for other entry points.

	Note: Other specs can define additional entry points for their own purposes.

	<div class='note'>
		The following notes should probably be translated into normative text in the relevant specs,
		hooking this spec's terms:

		<ul>
			<li>
				"<a>Parse a stylesheet</a>" is intended to be the normal parser entry point,
				for parsing stylesheets.

			<li>
				"<a>Parse a list of rules</a>" is intended for the content of at-rules such as ''@media''.
				It differs from "<a>Parse a stylesheet</a>" in the handling of <<CDO-token>> and <<CDC-token>>.

			<li>
				"<a>Parse a rule</a>" is intended for use by the <code>CSSStyleSheet#insertRule</code> method,
				and similar functions which might exist,
				which parse text into a single rule.

			<li>
				"<a>Parse a declaration</a>" is used in ''@supports'' conditions. [[CSS3-CONDITIONAL]]

			<li>
				"<a>Parse a list of declarations</a>" is for the contents of a <code>style</code> attribute,
				which parses text into the contents of a single style rule.

			<li>
				"<a>Parse a component value</a>" is for things that need to consume a single value,
				like the parsing rules for ''attr()''.

			<li>
				"<a>Parse a list of component values</a>" is for the contents of presentational attributes,
				which parse text into a single declaration's value,
				or for parsing a stand-alone selector [[SELECT]] or list of Media Queries [[MEDIAQ]],
				as in <a href="https://www.w3.org/TR/selectors-api/">Selectors API</a>
				or the <code>media</code> HTML attribute.
		</ul>
	</div>

	All of the algorithms defined in this spec may be called with either a list of tokens or of component values.
	Either way produces an identical result.

<h4 id="parse-grammar">
Parse something according to a CSS grammar</h4>

	It is often desirable to parse a string or token list
	to see if it matches some CSS grammar,
	and if it does,
	to destructure it according to the grammar.
	This section provides a generic hook for this kind of operation.
	It should be invoked like
	<span class="informative">"parse <var>foo</var> as a CSS <<color>>"</span>, or similar.

	Note: As a reminder, this algorithm, along with all the others in this section,
	can be called with a string,
	a stream of CSS tokens,
	or a stream of CSS component values,
	whichever is most convenient.

	This algorithm must be called with <b>some input to be parsed</b>,
	and <b>some CSS grammar specification or term</b>.

	This algorithm returns either failure,
	if the input does not match the provided grammar,
	or the result of parsing the input according to the grammar,
	which is an unspecified structure corresponding to the provided grammar specification.
	The return value must only be interacted with by specification prose,
	where the representation ambiguity is not problematic.
	If it is meant to be exposed outside of spec language,
	the spec using the result must explicitly translate it into a well-specified representation,
	such as, for example, by invoking a CSS serialization algorithm
	<span class=informative>(like "serialize as a CSS <<string>> value").</span>

	To <dfn export lt="parse something according to a CSS grammar|parse" for=CSS>parse something according to a CSS grammar</dfn>:

	<ol>
		<li>
			<a>Parse a list of component values</a> from the input,
			and let <var>result</var> be the return value.

		<li>
			Attempt to match <var>result</var> against the provided grammar.
			If this is successful,
			return the matched result;
			otherwise, return failure.
	</ol>


<h4 id="parse-stylesheet">
Parse a stylesheet</h4>

	To <dfn export>parse a stylesheet</dfn> from a stream of tokens:

	<ol>
		<li>
			Create a new stylesheet.

		<li>
			<a>Consume a list of rules</a> from the stream of tokens, with the top-level flag set.
			Let the return value be <var>rules</var>.

		<li>
			Assign <var>rules</var> to the stylesheet's value.

		<li>
			Return the stylesheet.
	</ol>

<h4 id="parse-list-of-rules">
Parse a list of rules</h4>

	To <dfn export>parse a list of rules</dfn> from a stream of tokens:

	<ol>
		<li>
			<a>Consume a list of rules</a> from the stream of tokens, with the <var>top-level flag</var> unset.

		<li>
			Return the returned list.
	</ol>

<h4 id="parse-rule">
Parse a rule</h4>

	To <dfn export>parse a rule</dfn> from a stream of tokens:

	<ol>
		<li>
			While the <a>next input token</a> is a <<whitespace-token>>,
			<a>consume the next input token</a>.

		<li>
			If the <a>next input token</a> is an <<EOF-token>>,
			return a syntax error.

			Otherwise,
			if the <a>next input token</a> is an <<at-keyword-token>>,
			<a>consume an at-rule</a>,
			and let <var>rule</var> be the return value.

			Otherwise,
			<a>consume a qualified rule</a>
			and let <var>rule</var> be the return value.
			If nothing was returned,
			return a syntax error.

		<li>
			While the <a>next input token</a> is a <<whitespace-token>>,
			<a>consume the next input token</a>.

		<li>
			If the <a>next input token</a> is an <<EOF-token>>,
			return <var>rule</var>.
			Otherwise, return a syntax error.
	</ol>

<h4 id="parse-declaration">
Parse a declaration</h4>

	Note: Unlike "<a>Parse a list of declarations</a>",
	this parses only a declaration and not an at-rule.

	To <dfn export>parse a declaration</dfn>:

	<ol>
		<li>
			While the <a>next input token</a> is a <<whitespace-token>>,
			<a>consume the next input token</a>.

		<li>
			If the <a>next input token</a> is not an <<ident-token>>,
			return a syntax error.

		<li>
			<a>Consume a declaration</a>.
			If anything was returned, return it.
			Otherwise, return a syntax error.
	</ol>

<h4 id="parse-list-of-declarations">
Parse a list of declarations</h4>

	Note: Despite the name,
	this actually parses a mixed list of declarations and at-rules,
	as CSS 2.1 does for ''@page''.
	Unexpected at-rules (which could be all of them, in a given context)
	are invalid and should be ignored by the consumer.

	To <dfn export>parse a list of declarations</dfn>:

	<ol>
		<li>
			<a>Consume a list of declarations</a>.

		<li>
			Return the returned list.
	</ol>

<h4 id="parse-component-value">
Parse a component value</h4>

	To <dfn export>parse a component value</dfn>:

	<ol>
		<li>
			While the <a>next input token</a> is a <<whitespace-token>>,
			<a>consume the next input token</a>.

		<li>
			If the <a>next input token</a> is an <<EOF-token>>,
			return a syntax error.

		<li>
			<a>Consume a component value</a>
			and let <var>value</var> be the return value.

		<li>
			While the <a>next input token</a> is a <<whitespace-token>>,
			<a>consume the next input token</a>.

		<li>
			If the <a>next input token</a> is an <<EOF-token>>,
			return <var>value</var>.
			Otherwise,
			return a syntax error.
	</ol>
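	The "<a>parse a rule</a>" and "parse a component value" entry points share the same wrapper shape:
	skip leading whitespace, consume exactly one construct, skip trailing whitespace, and require an <<EOF-token>>.
	The following non-normative sketch models tokens as plain strings,
	with <code>" "</code> standing in for a <<whitespace-token>>,
	the end of the list playing the role of the <<EOF-token>>,
	and <code>consume_one</code> a hypothetical stand-in for the relevant "consume" algorithm:

```python
def parse_single(tokens, consume_one):
    """Sketch of the shared shape of the single-value entry points."""
    i = 0
    while i < len(tokens) and tokens[i] == " ":
        i += 1                        # skip leading <whitespace-token>s
    if i == len(tokens):
        return "syntax error"         # EOF where a value was expected
    value, i = consume_one(tokens, i)
    while i < len(tokens) and tokens[i] == " ":
        i += 1                        # skip trailing <whitespace-token>s
    if i == len(tokens):
        return value                  # exactly one value, then EOF
    return "syntax error"             # extra tokens after the value
```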

<h4 id="parse-list-of-component-values">
Parse a list of component values</h4>

	To <dfn export>parse a list of component values</dfn>:

	<ol>
		<li>
			Repeatedly <a>consume a component value</a> until an <<EOF-token>> is returned,
			appending the returned values (except the final <<EOF-token>>) into a list.
			Return the list.
	</ol>

<h4 id="parse-comma-separated-list-of-component-values">
Parse a comma-separated list of component values</h4>

	To <dfn export>parse a comma-separated list of component values</dfn>:

	<ol>
		<li>
			Let <var>list of cvls</var> be an initially empty list of component value lists.

		<li>
			Repeatedly <a>consume a component value</a> until an <<EOF-token>> or <<comma-token>> is returned,
			appending the returned values (except the final <<EOF-token>> or <<comma-token>>) into a list.
			Append the list to <var>list of cvls</var>.

			If it was a <<comma-token>> that was returned,
			repeat this step.

		<li>
			Return <var>list of cvls</var>.
	</ol>
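	The comma-splitting above can be sketched non-normatively as follows,
	with tokens modeled as plain strings:
	<code>","</code> stands for a <<comma-token>>,
	any other string for a component value,
	and running off the end of the list plays the role of the <<EOF-token>>:

```python
def parse_comma_separated_cvls(tokens):
    """Sketch of 'parse a comma-separated list of component values'."""
    list_of_cvls = []
    current = []
    for tok in tokens:
        if tok == ",":
            list_of_cvls.append(current)   # a comma ends the current list...
            current = []                   # ...and starts the next one
        else:
            current.append(tok)
    list_of_cvls.append(current)           # EOF ends the final list
    return list_of_cvls
```

	Note that, as in the algorithm above,
	empty input still yields a list containing one (empty) component value list.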

<!--
   ███    ██        ██████    ███████   ██████
  ██ ██   ██       ██    ██  ██     ██ ██    ██
 ██   ██  ██       ██        ██     ██ ██
██     ██ ██       ██   ████ ██     ██  ██████
█████████ ██       ██    ██  ██     ██       ██
██     ██ ██       ██    ██  ██     ██ ██    ██
██     ██ ████████  ██████    ███████   ██████
-->

<h3 id="parser-algorithms">
Parser Algorithms</h3>

	The following algorithms comprise the parser.
	They are called by the parser entry points above.

	These algorithms may be called with a list of either tokens or component values.
	(The difference being that some tokens are replaced by <a>functions</a> and <a>simple blocks</a> in a list of component values.)
	Similar to how the input stream returned EOF code points to represent when it was empty during the tokenization stage,
	the lists in this stage must return an <<EOF-token>> when the next token is requested but they are empty.

	An algorithm may be invoked with a specific list,
	in which case it consumes only that list
	(and when that list is exhausted,
	it begins returning <<EOF-token>>s).
	Otherwise,
	it is implicitly invoked with the same list as the invoking algorithm.


<h4 id="consume-list-of-rules">
Consume a list of rules</h4>

	To <dfn>consume a list of rules</dfn>:

	Create an initially empty list of rules.

	Repeatedly consume the <a>next input token</a>:

	<dl>
		<dt><<whitespace-token>>
		<dd>
			Do nothing.

		<dt><<EOF-token>>
		<dd>
			Return the list of rules.

		<dt><<CDO-token>>
		<dt><<CDC-token>>
		<dd>
			If the <var>top-level flag</var> is set,
			do nothing.

			Otherwise,
			<a>reconsume the current input token</a>.
			<a>Consume a qualified rule</a>.
			If anything is returned,
			append it to the list of rules.

		<dt><<at-keyword-token>>
		<dd>
			<a>Reconsume the current input token</a>.
			<a>Consume an at-rule</a>,
			and append the returned value to the list of rules.

		<dt>anything else
		<dd>
			<a>Reconsume the current input token</a>.
			<a>Consume a qualified rule</a>.
			If anything is returned,
			append it to the list of rules.
	</dl>


<h4 id="consume-at-rule">
Consume an at-rule</h4>

	To <dfn>consume an at-rule</dfn>:

	<a>Consume the next input token</a>.
	Create a new at-rule
	with its name set to the value of the <a>current input token</a>,
	its prelude initially set to an empty list,
	and its value initially set to nothing.

	Repeatedly consume the <a>next input token</a>:

	<dl>
		<dt><<semicolon-token>>
		<dd>
			Return the at-rule.

		<dt><<EOF-token>>
		<dd>
			This is a <a>parse error</a>.
			Return the at-rule.

		<dt><a href="#tokendef-open-curly">&lt;{-token></a>
		<dd>
			<a>Consume a simple block</a>
			and assign it to the at-rule's block.
			Return the at-rule.

		<dt><a>simple block</a> with an associated token of <a href="#tokendef-open-curly">&lt;{-token></a>
		<dd>
			Assign the block to the at-rule's block.
			Return the at-rule.

		<dt>anything else
		<dd>
			<a>Reconsume the current input token</a>.
			<a>Consume a component value</a>.
			Append the returned value to the at-rule's prelude.
	</dl>
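	A non-normative sketch of the loop above, using single-character string tokens.
	The at-keyword is assumed to have been consumed already, with its name passed in;
	<code>consume_block</code> is a deliberately naive stand-in
	for the real, nesting-aware "consume a simple block" algorithm:

```python
def consume_block(tokens, i):
    """Naive stand-in for 'consume a simple block' (ignores nesting)."""
    if "}" in tokens[i + 1:]:
        j = tokens.index("}", i + 1)
        return ("{", tokens[i + 1:j]), j + 1
    return ("{", tokens[i + 1:]), len(tokens)   # EOF here is a parse error

def consume_at_rule(name, tokens, i):
    """Returns ((name, prelude, block), next_index)."""
    prelude, block = [], None
    while i < len(tokens):
        tok = tokens[i]
        if tok == ";":                          # <semicolon-token> ends the rule
            return (name, prelude, block), i + 1
        if tok == "{":                          # <{-token>: the rule's block
            block, i = consume_block(tokens, i)
            return (name, prelude, block), i
        prelude.append(tok)                     # anything else: prelude
        i += 1
    return (name, prelude, block), i            # EOF: parse error, return anyway
```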


<h4 id="consume-qualified-rule">
Consume a qualified rule</h4>

	To <dfn>consume a qualified rule</dfn>:

	Create a new qualified rule
	with its prelude initially set to an empty list,
	and its value initially set to nothing.

	Repeatedly consume the <a>next input token</a>:

	<dl>
		<dt><<EOF-token>>
		<dd>
			This is a <a>parse error</a>.
			Return nothing.

		<dt><a href="#tokendef-open-curly">&lt;{-token></a>
		<dd>
			<a>Consume a simple block</a>
			and assign it to the qualified rule's block.
			Return the qualified rule.

		<dt><a>simple block</a> with an associated token of <a href="#tokendef-open-curly">&lt;{-token></a>
		<dd>
			Assign the block to the qualified rule's block.
			Return the qualified rule.

		<dt>anything else
		<dd>
			<a>Reconsume the current input token</a>.
			<a>Consume a component value</a>.
			Append the returned value to the qualified rule's prelude.
	</dl>


<h4 id="consume-list-of-declarations">
Consume a list of declarations</h4>

	To <dfn>consume a list of declarations</dfn>:

	Create an initially empty list of declarations.

	Repeatedly consume the <a>next input token</a>:

	<dl>
		<dt><<whitespace-token>>
		<dt><<semicolon-token>>
		<dd>
			Do nothing.

		<dt><<EOF-token>>
		<dd>
			Return the list of declarations.

		<dt><<at-keyword-token>>
		<dd>
			<a>Reconsume the current input token</a>.
			<a>Consume an at-rule</a>.
			Append the returned rule to the list of declarations.

		<dt><<ident-token>>
		<dd>
			Create a temporary list initially containing only the <a>current input token</a>.
			As long as the <a>next input token</a> is anything other than a <<semicolon-token>> or <<EOF-token>>,
			<a>consume a component value</a> and append it to the temporary list.
			<a>Consume a declaration</a> from the temporary list.
			If anything was returned,
			append it to the list of declarations.

		<dt>anything else
		<dd>
			This is a <a>parse error</a>.
			<a>Reconsume the current input token</a>.
			As long as the <a>next input token</a> is anything other than a <<semicolon-token>> or <<EOF-token>>,
			<a>consume a component value</a>
			and throw away the returned value.
	</dl>
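	A non-normative sketch of the recovery behavior above, with tokens as plain strings
	(the <<at-keyword-token>> branch is omitted for brevity,
	Python's <code>str.isidentifier()</code> stands in for the <<ident-token>> check,
	and <code>consume_declaration</code> is a caller-supplied stand-in for "consume a declaration"):

```python
def consume_declaration_list(tokens, consume_declaration):
    """Simplified sketch of 'consume a list of declarations'."""
    decls = []
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        if tok in (" ", ";"):                   # whitespace/semicolon: skip
            i += 1
        elif tok.isidentifier():                # stand-in for <ident-token>
            temp = []                           # temporary list, filled up to ";"/EOF
            while i < len(tokens) and tokens[i] != ";":
                temp.append(tokens[i])
                i += 1
            d = consume_declaration(temp)
            if d is not None:                   # only valid declarations are kept
                decls.append(d)
        else:                                   # parse error: discard until ";"/EOF
            while i < len(tokens) and tokens[i] != ";":
                i += 1
    return decls
```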


<h4 id="consume-declaration">
Consume a declaration</h4>

	Note: This algorithm assumes that the <a>next input token</a> has already been checked to be an <<ident-token>>.

	To <dfn>consume a declaration</dfn>:

	<a>Consume the next input token</a>.
	Create a new declaration
	with its name set to the value of the <a>current input token</a>
	and its value initially set to the empty [=list=].

	<ol>
		<li>
			While the <a>next input token</a> is a <<whitespace-token>>,
			<a>consume the next input token</a>.

		<li>
			If the <a>next input token</a> is anything other than a <<colon-token>>,
			this is a <a>parse error</a>.
			Return nothing.

			Otherwise, <a>consume the next input token</a>.

		<li>
			While the <a>next input token</a> is a <<whitespace-token>>,
			<a>consume the next input token</a>.

		<li>
			As long as the <a>next input token</a> is anything other than an <<EOF-token>>,
			<a>consume a component value</a>
			and append it to the declaration's value.

		<li>
			If the last two non-<<whitespace-token>>s in the declaration's value are
			a <<delim-token>> with the value "!"
			followed by an <<ident-token>> with a value that is an <a>ASCII case-insensitive</a> match for "important",
			remove them from the declaration's value
			and set the declaration's <var>important</var> flag to true.

		<li>
			While the last token in the declaration's value is a <<whitespace-token>>,
			[=list/remove=] that token.

		<li>
			Return the declaration.
	</ol>
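	The ''!important'' handling in steps 5 and 6 can be sketched non-normatively as follows,
	with tokens modeled as hypothetical <code>(type, value)</code> tuples
	(Python's <code>str.lower()</code> stands in for the <a>ASCII case-insensitive</a> match):

```python
def strip_important(value):
    """Return (value, important) after steps 5-6 of 'consume a declaration'."""
    value = list(value)               # don't mutate the caller's list
    non_ws = [i for i, (t, _) in enumerate(value) if t != "whitespace"]
    important = False
    if len(non_ws) >= 2:
        i, j = non_ws[-2], non_ws[-1]
        if (value[i] == ("delim", "!")
                and value[j][0] == "ident"
                and value[j][1].lower() == "important"):
            # remove the "!" and "important" tokens
            value = value[:i] + value[i + 1:j] + value[j + 1:]
            important = True
    while value and value[-1][0] == "whitespace":
        value.pop()                   # trim trailing <whitespace-token>s
    return value, important
```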


<h4 id="consume-component-value">
Consume a component value</h4>

	To <dfn>consume a component value</dfn>:

	<a>Consume the next input token</a>.

	If the <a>current input token</a>
	is a <a href="#tokendef-open-curly">&lt;{-token></a>, <a href="#tokendef-open-square">&lt;[-token></a>, or <a href="#tokendef-open-paren">&lt;(-token></a>,
	<a>consume a simple block</a>
	and return it.

	Otherwise, if the <a>current input token</a>
	is a <<function-token>>,
	<a>consume a function</a>
	and return it.

	Otherwise, return the <a>current input token</a>.


<h4 id="consume-simple-block">
Consume a simple block</h4>

	Note: This algorithm assumes that the <a>current input token</a> has already been checked to be an <a href="#tokendef-open-curly">&lt;{-token></a>, <a href="#tokendef-open-square">&lt;[-token></a>, or <a href="#tokendef-open-paren">&lt;(-token></a>.

	To <dfn>consume a simple block</dfn>:

	The <dfn>ending token</dfn> is the mirror variant of the <a>current input token</a>.
	(E.g. if it was called with <a href="#tokendef-open-square">&lt;[-token></a>, the <a>ending token</a> is <a href="#tokendef-close-square">&lt;]-token></a>.)

	Create a <a>simple block</a> with its associated token set to the <a>current input token</a>
	and with a value which is initially an empty list.

	Repeatedly consume the <a>next input token</a> and process it as follows:

	<dl>
		<dt><a>ending token</a>
		<dd>
			Return the block.

		<dt><<EOF-token>>
		<dd>
			This is a <a>parse error</a>.
			Return the block.

		<dt>anything else
		<dd>
			<a>Reconsume the current input token</a>.
			<a>Consume a component value</a>
			and append it to the value of the block.
	</dl>

	Note: CSS has an unfortunate syntactic ambiguity
	between blocks that can contain declarations
	and blocks that can contain qualified rules,
	so any "consume" algorithms that handle rules
	will initially use this more generic algorithm
	rather than the more specific
	[=consume a list of declarations=]
	or [=consume a list of rules=] algorithms.
	These more specific algorithms are instead invoked when grammars are applied,
	depending on whether it contains a <<declaration-list>>
	or a <<rule-list>>/<<stylesheet>>.
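	A non-normative sketch of this algorithm over single-character string tokens,
	showing the mirror-token lookup
	and the recursion into nested blocks that "consume a component value" provides:

```python
MIRROR = {"{": "}", "[": "]", "(": ")"}   # opening token -> its mirror variant

def consume_simple_block(tokens, i):
    """tokens[i] is the opening token; returns ((associated, value), next_index)."""
    opening = tokens[i]
    ending = MIRROR[opening]              # the ending token
    value = []
    i += 1
    while i < len(tokens):                # running out of tokens == <EOF-token>
        tok = tokens[i]
        if tok == ending:
            return (opening, value), i + 1
        if tok in MIRROR:                 # 'consume a component value' recurses
            block, i = consume_simple_block(tokens, i)
            value.append(block)
        else:
            value.append(tok)
            i += 1
    return (opening, value), i            # EOF: parse error, return the block
```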


<h4 id="consume-function">
Consume a function</h4>

	Note: This algorithm assumes that the <a>current input token</a> has already been checked to be a <<function-token>>.

	To <dfn>consume a function</dfn>:

	Create a function with a name equal to the value of the <a>current input token</a>,
	and with a value which is initially an empty list.

	Repeatedly consume the <a>next input token</a> and process it as follows:

	<dl>
		<dt><a href="#tokendef-close-paren">&lt;)-token></a>
		<dd>
			Return the function.

		<dt><<EOF-token>>
		<dd>
			This is a <a>parse error</a>.
			Return the function.

		<dt>anything else
		<dd>
			<a>Reconsume the current input token</a>.
			<a>Consume a component value</a>
			and append the returned value
			to the function's value.
	</dl>

<!--
   ███    ██    ██        ████████
  ██ ██   ███   ██   ██   ██     ██
 ██   ██  ████  ██   ██   ██     ██
██     ██ ██ ██ ██ ██████ ████████
█████████ ██  ████   ██   ██     ██
██     ██ ██   ███   ██   ██     ██
██     ██ ██    ██        ████████
-->

<h2 id="anb-microsyntax">
The <var>An+B</var> microsyntax</h2>

	Several things in CSS,
	such as the <span class=informative>'':nth-child()''</span> pseudoclass,
	need to indicate indexes in a list.
	The <var>An+B</var> microsyntax is useful for this,
	allowing an author to easily indicate single elements
	or all elements at regularly-spaced intervals in a list.

	The <dfn export>An+B</dfn> notation defines an integer step (|A|) and offset (|B|),
	and represents the <var>An+B</var>th elements in a list,
	for every positive integer or zero value of <var>n</var>,
	with the first element in the list having index 1 (not 0).

	For values of <var>A</var> and <var>B</var> greater than 0,
	this effectively divides the list into groups of <var>A</var> elements
	(the last group taking the remainder),
	and selects the <var>B</var>th element of each group.

	The <var>An+B</var> notation also accepts the ''even'' and ''odd'' keywords,
	which have the same meaning as ''2n'' and ''2n+1'', respectively.

	<div class="example">
		<p>Examples:
		<pre><!--
		-->2n+0   /* represents all of the even elements in the list */&#xa;<!--
		-->even   /* same */&#xa;<!--
		-->4n+1   /* represents the 1st, 5th, 9th, 13th, etc. elements in the list */</pre>
	</div>

	The values of <var>A</var> and <var>B</var> can be negative,
	but only the positive results of <var>An+B</var>,
	for <var>n</var> ≥ 0,
	are used.

	<div class="example">
		<p>Example:
		<pre><!--
			-->-1n+6   /* represents the first 6 elements of the list */&#xa;<!--
			-->-4n+10  /* represents the 2nd, 6th, and 10th elements of the list */
		</pre>
	</div>

	If both <var>A</var> and <var>B</var> are 0,
	the pseudo-class represents no element in the list.
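	In other words, an index matches when it can be written as
	<var>A</var>·<var>n</var>+<var>B</var> for some integer <var>n</var> ≥ 0.
	A small non-normative check (the helper name is illustrative):

```python
def matches_anb(a, b, index):
    """True if index == a*n + b for some integer n >= 0 (indexes are 1-based)."""
    if a == 0:
        return index == b             # only the Bth element (if any) matches
    n, rem = divmod(index - b, a)     # Python's divmod floors toward -inf,
    return rem == 0 and n >= 0        # so rem == 0 iff a divides (index - b)
```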

<h3 id='anb-syntax'>
Informal Syntax Description</h3>

	<em>This section is non-normative.</em>

	When <var>A</var> is 0, the <var>An</var> part may be omitted
	(unless the <var>B</var> part is already omitted).
	When <var>An</var> is not included
	and <var>B</var> is non-negative,
	the ''+'' sign before <var>B</var> (when allowed)
	may also be omitted.
	In this case the syntax simplifies to just <var>B</var>.

	<div class="example">
		<p>Examples:
		<pre><!--
		-->0n+5   /* represents the 5th element in the list */&#xa;<!--
		-->5      /* same */</pre>
	</div>

	When <var>A</var> is 1 or -1,
	the <code>1</code> may be omitted from the rule.

	<div class="example">
		<p>Examples:
		<p>The following notations are therefore equivalent:
		<pre><!--
		-->1n+0   /* represents all elements in the list */&#xa;<!--
		-->n+0    /* same */&#xa;<!--
		-->n      /* same */</pre>
	</div>

	If <var>B</var> is 0, then every <var>A</var>th element is picked.
	In such a case,
	the <var>+B</var> (or <var>-B</var>) part may be omitted
	unless the <var>A</var> part is already omitted.

	<div class="example">
		<p>Examples:
		<pre><!--
		-->2n+0   /* represents every even element in the list */&#xa;<!--
		-->2n     /* same */</pre>
	</div>

	When <var>B</var> is negative, its minus sign replaces the ''+'' sign.

	<div class="example">
		<p>Valid example:
		<pre>3n-6</pre>
		<p>Invalid example:
		<pre>3n + -6</pre>
	</div>

	Whitespace is permitted on either side of the ''+'' or ''-''
	that separates the <var>An</var> and <var>B</var> parts when both are present.

	<div class="example">
		<p>Valid Examples with white space:
		<pre><!--
		-->3n + 1&#xa;<!--
		-->+3n - 2&#xa;<!--
		-->-n+ 6&#xa;<!--
		-->+6</pre>
		<p>Invalid Examples with white space:
		<pre><!--
		-->3 n&#xa;<!--
		-->+ 2n&#xa;<!--
		-->+ 2</pre>
	</div>
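	The informal author-level syntax described above can be recognized
	non-normatively with a regular expression
	(the normative token-level grammar follows in the next section;
	the helper name and pattern here are illustrative only):

```python
import re

_ANB = re.compile(r"""^\s*(?:
      (?P<odd>odd) | (?P<even>even)
    | (?P<a>[+-]?\d*) n \s* (?: (?P<sign>[+-]) \s* (?P<b>\d+) )?
    | (?P<bonly>[+-]?\d+)
    )\s*$""", re.X | re.I)

def parse_anb(text):
    """Return (A, B) for an An+B string, or None if it doesn't match."""
    m = _ANB.match(text)
    if not m:
        return None
    if m.group("odd"):
        return (2, 1)
    if m.group("even"):
        return (2, 0)
    if m.group("bonly") is not None:
        return (0, int(m.group("bonly")))      # bare integer: A is 0
    a_text = m.group("a")
    a = {"": 1, "+": 1, "-": -1}.get(a_text)   # omitted 1/-1 before "n"
    if a is None:
        a = int(a_text)
    b = int(m.group("b") or 0)
    if m.group("sign") == "-":
        b = -b
    return (a, b)
```

	Note that, matching the whitespace rules above,
	the pattern rejects ''3 n'' and ''+ 2n'' but accepts ''3n + 1'' and ''-n+ 6''.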


<h3 id="the-anb-type">
The <code>&lt;an+b></code> type</h3>

	The <var>An+B</var> notation was originally defined using a slightly different tokenizer than the rest of CSS,
	resulting in a somewhat odd definition when expressed in terms of CSS tokens.
	This section describes how to recognize the <var>An+B</var> notation in terms of CSS tokens
	(thus defining the <var>&lt;an+b></var> type for CSS grammar purposes),
	and how to interpret the CSS tokens to obtain values for <var>A</var> and <var>B</var>.

	The <var>&lt;an+b></var> type is defined
	(using the <a href="https://www.w3.org/TR/css3-values/#value-defs">Value Definition Syntax in the Values &amp; Units spec</a>)
	as:

	<pre class='prod'>
		<dfn id="anb-production">&lt;an+b></dfn> =
		  odd | even |
		  <var>&lt;integer></var> |

		  <var>&lt;n-dimension></var> |
		  '+'?<sup><a href="#anb-plus">†</a></sup> n |
		  -n |

		  <var>&lt;ndashdigit-dimension></var> |
		  '+'?<sup><a href="#anb-plus">†</a></sup> <var>&lt;ndashdigit-ident></var> |
		  <var>&lt;dashndashdigit-ident></var> |

		  <var>&lt;n-dimension></var> <var>&lt;signed-integer></var> |
		  '+'?<sup><a href="#anb-plus">†</a></sup> n <var>&lt;signed-integer></var> |
		  -n <var>&lt;signed-integer></var> |

		  <var>&lt;ndash-dimension></var> <var>&lt;signless-integer></var> |
		  '+'?<sup><a href="#anb-plus">†</a></sup> n- <var>&lt;signless-integer></var> |
		  -n- <var>&lt;signless-integer></var> |

		  <var>&lt;n-dimension></var> ['+' | '-'] <var>&lt;signless-integer></var> |
		  '+'?<sup><a href="#anb-plus">†</a></sup> n ['+' | '-'] <var>&lt;signless-integer></var> |
		  -n ['+' | '-'] <var>&lt;signless-integer></var>
	</pre>

	where:

	<ul>
		<li><dfn><code>&lt;n-dimension></code></dfn> is a <<dimension-token>> with its type flag set to "integer", and a unit that is an <a>ASCII case-insensitive</a> match for "n"
		<li><dfn><code>&lt;ndash-dimension></code></dfn> is a <<dimension-token>> with its type flag set to "integer", and a unit that is an <a>ASCII case-insensitive</a> match for "n-"
		<li><dfn><code>&lt;ndashdigit-dimension></code></dfn> is a <<dimension-token>> with its type flag set to "integer", and a unit that is an <a>ASCII case-insensitive</a> match for "n-*", where "*" is a series of one or more <a>digits</a>
		<li><dfn><code>&lt;ndashdigit-ident></code></dfn> is an <<ident-token>> whose value is an <a>ASCII case-insensitive</a> match for "n-*", where "*" is a series of one or more <a>digits</a>
		<li><dfn><code>&lt;dashndashdigit-ident></code></dfn> is an <<ident-token>> whose value is an <a>ASCII case-insensitive</a> match for "-n-*", where "*" is a series of one or more <a>digits</a>
		<li><dfn><code>&lt;integer></code></dfn> is a <<number-token>> with its type flag set to "integer"
		<li><dfn><code>&lt;signed-integer></code></dfn> is a <<number-token>> with its type flag set to "integer", and whose <a>representation</a> starts with "+" or "-"
		<li><dfn><code>&lt;signless-integer></code></dfn> is a <<number-token>> with its type flag set to "integer", and whose <a>representation</a> starts with a <a>digit</a>
	</ul>

	<p id="anb-plus">
		<sup>†</sup>: When a plus sign (+) precedes an ident starting with "n", as in the cases marked above,
		there must be no whitespace between the two tokens,
		or else the tokens do not match the above grammar.
		Whitespace is valid (and ignored) between any other two tokens.

	The clauses of the production are interpreted as follows:

	<dl>
		<dt>''odd''
		<dd>
			<var>A</var> is 2, <var>B</var> is 1.

		<dt>''even''
		<dd>
			<var>A</var> is 2, <var>B</var> is 0.

		<dt><code><var>&lt;integer></var></code>
		<dd>
			<var>A</var> is 0, <var>B</var> is the integer’s value.

		<dt><code><var>&lt;n-dimension></var></code>
		<dt><code>'+'? n</code>
		<dt><code>-n</code>
		<dd>
			<var>A</var> is the dimension's value, 1, or -1, respectively.
			<var>B</var> is 0.

		<dt><code><var>&lt;ndashdigit-dimension></var></code>
		<dt><code>'+'? <var>&lt;ndashdigit-ident></var></code>
		<dd>
			<var>A</var> is the dimension's value or 1, respectively.
			<var>B</var> is the dimension's unit or ident's value, respectively,
			with the first <a>code point</a> removed and the remainder interpreted as a base-10 number.
			<span class=note>B is negative.</span>

		<dt><code><var>&lt;dashndashdigit-ident></var></code>
		<dd>
			<var>A</var> is -1.
			<var>B</var> is the ident's value, with the first two <a>code points</a> removed and the remainder interpreted as a base-10 number.
			<span class=note>B is negative.</span>

		<dt><code><var>&lt;n-dimension></var> <var>&lt;signed-integer></var></code>
		<dt><code>'+'? n <var>&lt;signed-integer></var></code>
		<dt><code>-n <var>&lt;signed-integer></var></code>
		<dd>
			<var>A</var> is the dimension's value, 1, or -1, respectively.
			<var>B</var> is the integer’s value.

		<dt><code><var>&lt;ndash-dimension></var> <var>&lt;signless-integer></var></code>
		<dt><code>'+'? n- <var>&lt;signless-integer></var></code>
		<dt><code>-n- <var>&lt;signless-integer></var></code>
		<dd>
			<var>A</var> is the dimension's value, 1, or -1, respectively.
			<var>B</var> is the negation of the integer’s value.

		<dt><code><var>&lt;n-dimension></var> ['+' | '-'] <var>&lt;signless-integer></var></code>
		<dt><code>'+'? n ['+' | '-'] <var>&lt;signless-integer></var></code>
		<dt><code>-n ['+' | '-'] <var>&lt;signless-integer></var></code>
		<dd>
			<var>A</var> is the dimension's value, 1, or -1, respectively.
			<var>B</var> is the integer’s value.
			If a <code>'-'</code> was provided between the two, <var>B</var> is instead the negation of the integer’s value.
	</dl>

<!--
██     ██ ████████     ███    ██    ██  ██████   ████████
██     ██ ██     ██   ██ ██   ███   ██ ██    ██  ██
██     ██ ██     ██  ██   ██  ████  ██ ██        ██
██     ██ ████████  ██     ██ ██ ██ ██ ██   ████ ██████
██     ██ ██   ██   █████████ ██  ████ ██    ██  ██
██     ██ ██    ██  ██     ██ ██   ███ ██    ██  ██
 ███████  ██     ██ ██     ██ ██    ██  ██████   ████████
-->

<h2 id="urange">
The Unicode-Range microsyntax</h2>

	Some constructs,
	<span class=informative>such as the 'unicode-range' descriptor for the ''@font-face'' rule,</span>
	need a way to describe one or more unicode code points.
	The <dfn>&lt;urange></dfn> production represents a range of one or more unicode code points.

	Informally, the <<urange>> production has three forms:

	<dl>
		<dt>U+0001
		<dd>
			Defines a range consisting of a single code point,
			in this case the code point "1".

		<dt>U+0001-00ff
		<dd>
			Defines a range of codepoints between the first and the second value inclusive,
			in this case the range between "1" and "ff" (255 in decimal) inclusive.

		<dt>U+00??
		<dd>
			Defines a range of codepoints where the "?" characters range over all <a>hex digits</a>,
			in this case defining the same as the value ''U+0000-00ff''.
	</dl>

	In each form, a maximum of 6 digits is allowed for each hexadecimal number
	(if you treat "?" as a hexadecimal digit).

<h3 id="urange-syntax">
The <<urange>> type</h3>

	The <<urange>> notation was originally defined as a primitive token in CSS,
	but it is used very rarely,
	and collides with legitimate <<ident-token>>s in confusing ways.
	This section describes how to recognize the <<urange>> notation
	in terms of existing CSS tokens,
	and how to interpret it as a range of unicode codepoints.

	<details class=note>
		<summary>What are the confusing collisions?</summary>

		For example, in the CSS <nobr>''u + a { color: green; }''</nobr>,
		the intended meaning is that an <code>a</code> element
		following a <code>u</code> element
		should be colored green.
		Whitespace is not normally required between combinators
		and the surrounding selectors,
		so it <em>should</em> be equivalent to minify it to
		<nobr>''u+a{color:green;}''</nobr>.

		With any other combinator, the two pieces of CSS would be equivalent,
		but due to the previous existence of a specialized unicode-range token,
		the selector portion of the minified code now contains a unicode-range,
		not two idents and a combinator.
		It thus fails to match the Selectors grammar,
		and the rule is thrown out as invalid.

		(This example is taken from a real-world bug reported to Firefox.)
	</details>

	Note: The syntax described here is intentionally very low-level,
	and geared toward implementors.
	Authors should instead read the informal syntax description in the previous section,
	as it contains all information necessary to use <<urange>>,
	and is actually readable.

	The <<urange>> type is defined
	(using the <a href="https://www.w3.org/TR/css3-values/#value-defs">Value Definition Syntax in the Values &amp; Units spec</a>) as:

	<pre class="prod">
		<<urange>> =
			u '+' <<ident-token>> '?'* |
			u <<dimension-token>> '?'* |
			u <<number-token>> '?'* |
			u <<number-token>> <<dimension-token>> |
			u <<number-token>> <<number-token>> |
			u '+' '?'+
	</pre>

	In this production,
	no whitespace can occur between any of the tokens.

	The <<urange>> production represents a range of one or more contiguous unicode code points
	as a <var>start value</var> and an <var>end value</var>,
	which are non-negative integers.
	To interpret the production above into a range,
	execute the following steps in order:

	1. Skipping the first ''u'' token,
		concatenate the <a>representations</a> of all the tokens in the production together.
		Let this be <var>text</var>.

	2. If the first character of <var>text</var> is U+002B PLUS SIGN,
		consume it.
		Otherwise,
		this is an invalid <<urange>>,
		and this algorithm must exit.

	3. Consume as many <a>hex digits</a> from <var>text</var> as possible,
		then consume as many U+003F QUESTION MARK (?) <a>code points</a> as possible.
		If zero <a>code points</a> were consumed,
		or more than six <a>code points</a> were consumed,
		this is an invalid <<urange>>,
		and this algorithm must exit.

		If any U+003F QUESTION MARK (?) <a>code points</a> were consumed, then:

		1. If there are any <a>code points</a> left in <var>text</var>,
			this is an invalid <<urange>>,
			and this algorithm must exit.

		2. Interpret the consumed <a>code points</a> as a hexadecimal number,
			with the U+003F QUESTION MARK (?) <a>code points</a>
			replaced by U+0030 DIGIT ZERO (0) <a>code points</a>.
			This is the <var>start value</var>.

		3. Interpret the consumed <a>code points</a> as a hexadecimal number again,
			with the U+003F QUESTION MARK (?) <a>code points</a>
			replaced by U+0046 LATIN CAPITAL LETTER F (F) <a>code points</a>.
			This is the <var>end value</var>.

		4. Exit this algorithm.

		Otherwise, interpret the consumed <a>code points</a> as a hexadecimal number.
		This is the <var>start value</var>.

	4. If there are no <a>code points</a> left in <var>text</var>,
		the <var>end value</var> is the same as the <var>start value</var>.
		Exit this algorithm.

	5. If the next <a>code point</a> in <var>text</var> is U+002D HYPHEN-MINUS (-),
		consume it.
		Otherwise,
		this is an invalid <<urange>>,
		and this algorithm must exit.

	6. Consume as many <a>hex digits</a> as possible from <var>text</var>.

		If zero <a>hex digits</a> were consumed,
		or more than 6 <a>hex digits</a> were consumed,
		this is an invalid <<urange>>,
		and this algorithm must exit.
		If there are any <a>code points</a> left in <var>text</var>,
		this is an invalid <<urange>>,
		and this algorithm must exit.

	7. Interpret the consumed <a>code points</a> as a hexadecimal number.
		This is the <var>end value</var>.

	To determine what codepoints the <<urange>> represents:

	1. If <var>end value</var> is greater than the <a>maximum allowed code point</a>,
		the <<urange>> is invalid and a syntax error.

	2. If <var>start value</var> is greater than <var>end value</var>,
		the <<urange>> is invalid and a syntax error.

	3. Otherwise, the <<urange>> represents a contiguous range of codepoints from <var>start value</var> to <var>end value</var>, inclusive.
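	A non-normative transcription of steps 2-7 above,
	operating on the concatenated token <a>representations</a> from step 1
	and returning <code>None</code> wherever the algorithm exits with an invalid <<urange>>:

```python
import re

def interpret_urange(text):
    """Interpret the concatenated <urange> text into (start, end), or None."""
    if not text.startswith("+"):                 # step 2: must begin with "+"
        return None
    text = text[1:]
    m = re.match(r"([0-9a-fA-F]*)(\?*)", text)   # step 3: hex digits, then "?"s
    digits, qmarks = m.group(1), m.group(2)
    if not 1 <= len(digits) + len(qmarks) <= 6:
        return None
    rest = text[len(digits) + len(qmarks):]
    if qmarks:
        if rest:                                 # step 3.1: nothing may follow "?"s
            return None
        start = int(digits + "0" * len(qmarks), 16)   # "?"s -> "0"s
        end = int(digits + "F" * len(qmarks), 16)     # "?"s -> "F"s
        return (start, end)
    start = int(digits, 16)
    if not rest:                                 # step 4: single code point
        return (start, start)
    m = re.fullmatch(r"-([0-9a-fA-F]{1,6})", rest)    # steps 5-6: "-" + 1-6 digits
    if not m:
        return None
    return (start, int(m.group(1), 16))          # step 7
```

	The separate codepoint-determination check above then rejects results
	whose end value exceeds the <a>maximum allowed code point</a>
	or whose start value exceeds the end value.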

	Note: The syntax of <<urange>> is intentionally fairly wide;
	its patterns capture every possible token sequence
	that the informal syntax can generate.
	However, it requires no whitespace between its constituent tokens,
	which renders it fairly safe to use in practice.
	Even grammars which have a <<urange>> followed by a <<number>> or <<dimension>>
	(which might appear to be ambiguous
	if an author specifies the <<urange>> with the ''u <<number>>'' clause)
	are actually quite safe,
	as an author would have to intentionally separate the <<urange>> and the <<number>>/<<dimension>>
	with a comment rather than whitespace
	for it to be ambiguous.
	Thus, while it's <em>possible</em> for authors to write things that are parsed in confusing ways,
	the actual code they'd have to write to cause the confusion is, itself, confusing and rare.
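	The interpretation steps above can be sketched as follows. This is a non-normative sketch, assuming the <var>start value</var> and <var>end value</var> have already been consumed as integers; the <a>maximum allowed code point</a> is U+10FFFF:

```python
MAX_ALLOWED_CODE_POINT = 0x10FFFF

def urange_codepoints(start_value, end_value):
    """Return the inclusive range a <urange> represents, or None if invalid."""
    # Step 1: an end value past the maximum allowed code point is a syntax error.
    if end_value > MAX_ALLOWED_CODE_POINT:
        return None
    # Step 2: a descending range is a syntax error.
    if start_value > end_value:
        return None
    # Step 3: a contiguous range from start value to end value, inclusive.
    return range(start_value, end_value + 1)
```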


<!--
 ██████   ████████     ███    ██     ██ ██     ██    ███    ████████   ██████
██    ██  ██     ██   ██ ██   ███   ███ ███   ███   ██ ██   ██     ██ ██    ██
██        ██     ██  ██   ██  ████ ████ ████ ████  ██   ██  ██     ██ ██
██   ████ ████████  ██     ██ ██ ███ ██ ██ ███ ██ ██     ██ ████████   ██████
██    ██  ██   ██   █████████ ██     ██ ██     ██ █████████ ██   ██         ██
██    ██  ██    ██  ██     ██ ██     ██ ██     ██ ██     ██ ██    ██  ██    ██
 ██████   ██     ██ ██     ██ ██     ██ ██     ██ ██     ██ ██     ██  ██████
-->

<h2 id='rule-defs'>
Defining Grammars for Rules and Other Values</h2>

	The <a href="https://www.w3.org/TR/css3-values/">Values</a> spec defines how to specify a grammar for properties.
	This section does the same, but for rules.

	Just like in property grammars,
	the notation <code>&lt;foo></code> refers to the "foo" grammar term,
	assumed to be defined elsewhere.
	Substituting the <code>&lt;foo></code> for its definition results in a semantically identical grammar.

	Several types of tokens are written literally, without quotes:

	<ul>
		<li><<ident-token>>s (such as <code>auto</code>, <code>disc</code>, etc), which are simply written as their value.
		<li><<at-keyword-token>>s, which are written as an @ character followed by the token's value, like <code>@media</code>.
		<li><<function-token>>s, which are written as the function name followed by a ( character, like <code>translate(</code>.
		<li>The <<colon-token>> (written as <code>:</code>), <<comma-token>> (written as <code>,</code>), <<semicolon-token>> (written as <code>;</code>), <a href="#tokendef-open-paren">&lt;(-token></a>, <a href="#tokendef-close-paren">&lt;)-token></a>, <a href="#tokendef-open-curly">&lt;{-token></a>, and <a href="#tokendef-close-curly">&lt;}-token></a>s.
	</ul>

	Tokens match if their value is a match for the value defined in the grammar.
	Unless otherwise specified, all matches are <a>ASCII case-insensitive</a>.
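	A sketch of this comparison, which lowercases only the code points U+0041 through U+005A and leaves all other code points (including non-ASCII letters) untouched:

```python
def ascii_lower(s):
    # Lowercase only U+0041..U+005A (A-Z); nothing else is affected.
    return "".join(
        chr(ord(c) + 0x20) if "A" <= c <= "Z" else c
        for c in s
    )

def tokens_match(grammar_value, token_value):
    # ASCII case-insensitive match between a grammar term and a token's value.
    return ascii_lower(grammar_value) == ascii_lower(token_value)
```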

	Note: Although it is possible, with <a>escaping</a>,
	to construct an <<ident-token>> whose value ends with <code>(</code> or starts with <code>@</code>,
	such a token is not a <<function-token>> or an <<at-keyword-token>>
	and does not match the corresponding grammar definitions.

	<<delim-token>>s are written with their value enclosed in single quotes.
	For example, a <<delim-token>> containing the "+" <a>code point</a> is written as <code>'+'</code>.
	Similarly, the <a href="#tokendef-open-square">&lt;[-token></a> and <a href="#tokendef-close-square">&lt;]-token></a>s must be written in single quotes,
	as they're used by the syntax of the grammar itself to group clauses.
	<<whitespace-token>> is never indicated in the grammar;
	<<whitespace-token>>s are allowed before, after, and between any two tokens,
	unless explicitly specified otherwise in prose definitions.
	(For example, if the prelude of a rule is a selector,
	whitespace is significant.)

	When defining a function or a block,
	the ending token must be specified in the grammar,
	but if it's not present in the eventual token stream,
	it still matches.

	<div class='example'>
		For example, the syntax of the ''translateX()'' function is:

		<pre>translateX( <<translation-value>> )</pre>

		However, the stylesheet may end with the function unclosed, like:

		<pre>.foo { transform: translateX(50px</pre>

		The CSS parser parses this as a style rule containing one declaration,
		whose value is a function named "translateX".
		This matches the above grammar,
		even though the ending token didn't appear in the token stream,
		because by the time the parser is finished,
		the presence of the ending token is no longer possible to determine;
		all you have is the fact that there's a block and a function.
	</div>

<h3 id='declaration-rule-list'>
Defining Block Contents: the <<declaration-list>>, <<rule-list>>, and <<stylesheet>> productions</h3>

	The CSS parser is agnostic as to the contents of blocks,
	such as those that come at the end of some at-rules.
	Defining the generic grammar of the blocks in terms of tokens is non-trivial,
	but there are dedicated and unambiguous algorithms defined for parsing this.

	The <dfn>&lt;declaration-list></dfn> production represents a list of declarations.
	It may only be used in grammars as the sole value in a block,
	and represents that the contents of the block must be parsed using the <a>consume a list of declarations</a> algorithm.

	Similarly, the <dfn>&lt;rule-list></dfn> production represents a list of rules,
	and may only be used in grammars as the sole value in a block.
	It represents that the contents of the block must be parsed using the <a>consume a list of rules</a> algorithm.

	Finally, the <dfn>&lt;stylesheet></dfn> production represents a list of rules.
	It is identical to <<rule-list>>,
	except that blocks using it default to accepting all rules
	that aren't otherwise limited to a particular context.

	<div class='example'>
		For example, the ''@font-face'' rule is defined to have an empty prelude,
		and to contain a list of declarations.
		This is expressed with the following grammar:

		<pre>@font-face { <<declaration-list>> }</pre>

		This is a complete and sufficient definition of the rule's grammar.

		For another example,
		''@keyframes'' rules are more complex,
		interpreting their prelude as a name and containing keyframes rules in their block.
		Their grammar is:

		<pre>@keyframes <<keyframes-name>> { <<rule-list>> }</pre>
	</div>

	For rules that use <<declaration-list>>,
	the spec for the rule must define which properties, descriptors, and/or at-rules are valid inside the rule;
	this may be as simple as saying "The @foo rule accepts the properties/descriptors defined in this specification/section.",
	and extension specs may simply say "The @foo rule additionally accepts the following properties/descriptors.".
	Any declarations or at-rules found inside the block that are not defined as valid
	must be removed from the rule's value.

	Within a <<declaration-list>>,
	<code>!important</code> is automatically invalid on any descriptors.
	If the rule accepts properties,
	the spec for the rule must define whether the properties interact with the cascade,
	and with what specificity.
	If they don't interact with the cascade,
	properties containing <code>!important</code> are automatically invalid;
	otherwise using <code>!important</code> is valid and has its usual effect on the cascade origin of the property.

	<div class='example'>
		For example, the grammar for ''@font-face'' in the previous example must,
		in addition to what is written there,
		define that the allowed declarations are the descriptors defined in the Fonts spec.
	</div>

	For rules that use <<rule-list>>,
	the spec for the rule must define what types of rules are valid inside the rule,
	same as <<declaration-list>>,
	and unrecognized rules must similarly be removed from the rule's value.

	<div class='example'>
		For example, the grammar for ''@keyframes'' in the previous example must,
		in addition to what is written there,
		define that the only allowed rules are <<keyframe-rule>>s,
		which are defined as:

		<pre><<keyframe-rule>> = <<keyframe-selector>> { <<declaration-list>> }</pre>

		Keyframe rules, then,
		must further define that they accept as declarations all animatable CSS properties,
		plus the 'animation-timing-function' property,
		but that they do not interact with the cascade.
	</div>

	For rules that use <<stylesheet>>,
	all rules are allowed by default,
	but the spec for the rule may define what types of rules are <em>invalid</em> inside the rule.

	<div class='example'>
		For example, the ''@media'' rule accepts anything that can be placed in a stylesheet,
		except more ''@media'' rules.
		As such, its grammar is:

		<pre>@media <<media-query-list>> { <<stylesheet>> }</pre>

		It additionally defines a restriction that the <<stylesheet>> cannot contain ''@media'' rules,
		which causes them to be dropped from the outer rule's value if they appear.
	</div>

<h3 id="any-value">
Defining Arbitrary Contents: the <<declaration-value>> and <<any-value>> productions</h3>

	In some grammars,
	it is useful to accept any reasonable input in the grammar,
	and do more specific error-handling on the contents manually
	(rather than simply invalidating the construct,
	as grammar mismatches tend to do).

	<span class=informative>For example, <a>custom properties</a> allow any reasonable value,
	as they can contain arbitrary pieces of other CSS properties,
	or be used for things that aren't part of existing CSS at all.
	For another example, the <<general-enclosed>> production in Media Queries
	defines the bounds of what future syntax MQs will allow,
	and uses special logic to deal with "unknown" values.</span>

	To aid in this, two additional productions are defined:

	The <dfn>&lt;declaration-value></dfn> production matches <em>any</em> sequence of one or more tokens,
	so long as the sequence does not contain
	<<bad-string-token>>,
	<<bad-url-token>>,
	unmatched <<)-token>>, <<]-token>>, or <<}-token>>,
	or top-level <<semicolon-token>> tokens or <<delim-token>> tokens with a value of "!".
	It represents the entirety of what a valid declaration can have as its value.

	The <dfn>&lt;any-value></dfn> production is identical to <<declaration-value>>,
	but also allows top-level <<semicolon-token>> tokens
	and <<delim-token>> tokens with a value of "!".
	It represents the entirety of what valid CSS can be in any context.
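	As a rough sketch of the two productions (non-normative; tokens are simplified to plain strings here, with the block-opening tokens tracked on a stack to decide what counts as "top-level"):

```python
FORBIDDEN_ANYWHERE = {"bad-string", "bad-url"}
OPENERS = {"(": ")", "[": "]", "{": "}"}

def matches_declaration_value(tokens, any_value=False):
    """Check a token sequence against <declaration-value>,
    or against <any-value> when any_value is True."""
    if not tokens:
        return False  # both productions require one or more tokens
    stack = []
    for t in tokens:
        if t in FORBIDDEN_ANYWHERE:
            return False  # bad-string/bad-url are excluded everywhere
        if t in OPENERS:
            stack.append(OPENERS[t])
        elif t in OPENERS.values():
            if not stack or stack.pop() != t:
                return False  # unmatched ), ], or }
        elif t in (";", "!") and not stack and not any_value:
            return False  # top-level ; and ! are excluded from <declaration-value>
    return True
```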

<!--
 ██████   ██████   ██████
██    ██ ██    ██ ██    ██
██       ██       ██
██        ██████   ██████
██             ██       ██
██    ██ ██    ██ ██    ██
 ██████   ██████   ██████
-->

<h2 id="css-stylesheets">
CSS stylesheets</h2>

	To <dfn export>parse a CSS stylesheet</dfn>,
	first <a>parse a stylesheet</a>.
	Interpret all of the resulting top-level <a>qualified rules</a> as <a>style rules</a>, defined below.

	If any style rule is <a>invalid</a>,
	or any at-rule is not recognized or is invalid according to its grammar or context,
	it's a <a>parse error</a>.
	Discard that rule.

<h3 id="style-rules">
Style rules</h3>

	A <dfn export>style rule</dfn> is a <a>qualified rule</a>
	that associates a <a>selector list</a>
	with a list of property declarations.
	They are also called
	<a href="https://www.w3.org/TR/CSS2/syndata.html#rule-sets">rule sets</a> in [[CSS2]].
	CSS Cascading and Inheritance [[CSS-CASCADE-3]] defines how the declarations inside of style rules participate in the cascade.

	The prelude of the qualified rule is [=CSS/parsed=]
	as a <<selector-list>>.
	If this returns failure,
	the entire style rule is <a>invalid</a>.

	The content of the qualified rule’s block is parsed as a
	<a lt="parse a list of declarations">list of declarations</a>.
	Unless defined otherwise by another specification or a future level of this specification,
	at-rules in that list are <a>invalid</a>
	and must be ignored.
	Declarations for an unknown CSS property
	or whose value does not match the syntax defined by the property are <a>invalid</a>
	and must be ignored.
	The validity of the style rule’s contents has no effect on the validity of the style rule itself.
	Unless otherwise specified, property names are <a>ASCII case-insensitive</a>.

	Note: The names of Custom Properties [[CSS-VARIABLES]] are case-sensitive.

	<a>Qualified rules</a> at the top-level of a CSS stylesheet are style rules.
	Qualified rules in other contexts may or may not be style rules,
	as defined by the context.

	<p class='example'>
		For example, qualified rules inside ''@media'' rules [[CSS3-CONDITIONAL]] are style rules,
		but qualified rules inside ''@keyframes'' rules are not [[CSS3-ANIMATIONS]].

<h3 id='charset-rule'>
The ''@charset'' Rule</h3>

	The algorithm used to <a>determine the fallback encoding</a> for a stylesheet
	looks for a specific byte sequence as the very first few bytes in the file,
	which has the syntactic form of an <a>at-rule</a> named "@charset".

	However, there is no actual <a>at-rule</a> named <dfn>@charset</dfn>.
	When a stylesheet is actually parsed,
	any occurrences of an ''@charset'' rule must be treated as an unrecognized rule,
	and thus dropped as invalid when the stylesheet is grammar-checked.

	Note: In CSS 2.1, ''@charset'' was a valid rule.
	Some legacy specs may still refer to a ''@charset'' rule,
	and explicitly talk about its presence in the stylesheet.

<!--
 ██████  ████████ ████████  ████    ███    ██
██    ██ ██       ██     ██  ██    ██ ██   ██
██       ██       ██     ██  ██   ██   ██  ██
 ██████  ██████   ████████   ██  ██     ██ ██
      ██ ██       ██   ██    ██  █████████ ██
██    ██ ██       ██    ██   ██  ██     ██ ██
 ██████  ████████ ██     ██ ████ ██     ██ ████████
-->

<h2 id="serialization">
Serialization</h2>

	The tokenizer described in this specification does not produce tokens for comments,
	or otherwise preserve them in any way.
	Implementations may preserve the contents of comments and their location in the token stream.
	If they do, this preserved information must have no effect on the parsing step.

	This specification does not define how to serialize CSS in general,
	leaving that task to the CSSOM and individual feature specifications.
	In particular, the serialization of comments and whitespace is not defined.

	The only requirement for serialization is that it must "round-trip" with parsing,
	that is, parsing the stylesheet must produce the same data structures as
	parsing, serializing, and parsing again,
	except for consecutive <<whitespace-token>>s,
	which may be collapsed into a single token.

	Note: This exception can exist because
	CSS grammars always interpret any amount of whitespace as identical to a single space.

	<div class=note id='serialization-tables'>
		To satisfy this requirement:

		<ul>
			<li>
				A <<delim-token>> containing U+005C REVERSE SOLIDUS (\)
				must be serialized as U+005C REVERSE SOLIDUS
				followed by a <a>newline</a>.
				(The tokenizer only ever emits such a token followed by a <<whitespace-token>>
				that starts with a newline.)

			<li>
				A <<hash-token>> with the "unrestricted" type flag may not need
				as much escaping as the same token with the "id" type flag.

			<li>
				The unit of a <<dimension-token>> may need escaping
				to disambiguate with scientific notation.

			<li>
				For any consecutive pair of tokens,
				if the first token shows up in the row headings of the following table,
				and the second token shows up in the column headings,
				and there's a ✗ in the cell denoted by the intersection of the chosen row and column,
				the pair of tokens must be serialized with a comment between them.

				If the tokenizer preserves comments,
				the preserved comment should be used;
				otherwise, an empty comment (<code>/**/</code>) must be inserted.
				(Preserved comments may be reinserted even if the following tables don't require a comment between two tokens.)

				Single characters in the row and column headings represent a <<delim-token>> with that value,
				except for "<code>(</code>",
				which represents a <a href=#tokendef-open-paren>(-token</a>.
		</ul>

		<style>
			#serialization-tables th, #serialization-tables td {
				font-size: 80%;
				line-height: normal;
				padding: 0.5em;
				text-align: center;
			}
		</style>

		<table class='data'>
			<tr>
				<td>
				<th>ident
				<th>function
				<th>url
				<th>bad url
				<th>-
				<th>number
				<th>percentage
				<th>dimension
				<th>CDC
				<th>(
				<th>*
				<th>%
			<tr>
				<th>ident
				<td>✗<td>✗<td>✗<td>✗<td>✗<td>✗<td>✗<td>✗<td>✗<td>✗<td> <td>
			<tr>
				<th>at-keyword
				<td>✗<td>✗<td>✗<td>✗<td>✗<td>✗<td>✗<td>✗<td>✗<td> <td> <td>
			<tr>
				<th>hash
				<td>✗<td>✗<td>✗<td>✗<td>✗<td>✗<td>✗<td>✗<td>✗<td> <td> <td>
			<tr>
				<th>dimension
				<td>✗<td>✗<td>✗<td>✗<td>✗<td>✗<td>✗<td>✗<td>✗<td> <td> <td>
			<tr>
				<th>#
				<td>✗<td>✗<td>✗<td>✗<td>✗<td>✗<td>✗<td>✗<td> <td> <td> <td>
			<tr>
				<th>-
				<td>✗<td>✗<td>✗<td>✗<td>✗<td>✗<td>✗<td>✗<td> <td> <td> <td>
			<tr>
				<th>number
				<td>✗<td>✗<td>✗<td>✗<td> <td>✗<td>✗<td>✗<td> <td> <td> <td>✗
			<tr>
				<th>@
				<td>✗<td>✗<td>✗<td>✗<td>✗<td> <td> <td> <td> <td> <td> <td>
			<tr>
				<th>.
				<td> <td> <td> <td> <td> <td>✗<td>✗<td>✗<td> <td> <td> <td>
			<tr>
				<th>+
				<td> <td> <td> <td> <td> <td>✗<td>✗<td>✗<td> <td> <td> <td>
			<tr>
				<th>/
				<td> <td> <td> <td> <td> <td> <td> <td> <td> <td> <td>✗<td>
		</table>
	</div>
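	The table above can be encoded compactly. A non-normative sketch, with each row transcribed as a string of "x" (✗) and "." (empty) cells in column order:

```python
# Column order matches the table: ident, function, url, bad url, -, number,
# percentage, dimension, CDC, (, *, %
COLUMNS = ["ident", "function", "url", "bad url", "-", "number",
           "percentage", "dimension", "CDC", "(", "*", "%"]

# One string per row heading; "x" marks a ✗ cell, "." an empty cell.
ROWS = {
    "ident":      "xxxxxxxxxx..",
    "at-keyword": "xxxxxxxxx...",
    "hash":       "xxxxxxxxx...",
    "dimension":  "xxxxxxxxx...",
    "#":          "xxxxxxxx....",
    "-":          "xxxxxxxx....",
    "number":     "xxxx.xxx...x",
    "@":          "xxxxx.......",
    ".":          ".....xxx....",
    "+":          ".....xxx....",
    "/":          "..........x.",
}

def needs_comment_between(first, second):
    """True if serializing `first` directly before `second` requires
    a comment (e.g. an empty /**/) between them to round-trip."""
    row = ROWS.get(first)
    return row is not None and row[COLUMNS.index(second)] == "x"
```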

<h3 id='serializing-anb'>
Serializing <var>&lt;an+b></var></h3>

	<div algorithm>
		To <dfn export>serialize an <<an+b>> value</dfn>,
		with integer values |A| and |B|:

		1. If |A| is zero,
			return the serialization of |B|.

		2. Otherwise, let |result| initially be an empty [=string=].

		3.
			<dl class=switch>
				: |A| is <code>1</code>
				:: Append "n" to |result|.

				: |A| is <code>-1</code>
				:: Append "-n" to |result|.

				: |A| is non-zero
				:: Serialize |A| and append it to |result|,
					then append "n" to |result|.
			</dl>

		4.
			<dl class=switch>
				: |B| is greater than zero
				:: Append "+" to |result|,
					then append the serialization of |B| to |result|.

				: |B| is less than zero
				:: Append the serialization of |B| to |result|.
			</dl>

		5. Return |result|.
	</div>
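	The algorithm above can be sketched directly (non-normative; integer-to-string serialization is assumed to produce a leading minus sign for negative values):

```python
def serialize_an_plus_b(a, b):
    """Serialize an <an+b> value from integers A and B."""
    if a == 0:
        return str(b)          # step 1: no "n" part at all
    if a == 1:
        result = "n"           # step 3: the digit is omitted when A is 1
    elif a == -1:
        result = "-n"          # ...or -1
    else:
        result = f"{a}n"
    if b > 0:
        result += f"+{b}"      # step 4: positive B gets an explicit plus sign
    elif b < 0:
        result += str(b)       # str(b) already carries the minus sign
    return result              # B of zero is omitted entirely
```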


<h2 id="priv-sec">
Privacy and Security Considerations</h2>

	This specification introduces no new privacy concerns.

	This specification improves security, in that CSS parsing is now unambiguously defined for all inputs.

	Insofar as old parsers, such as whitelists/filters, parse differently from this specification,
	they are somewhat insecure,
	but the previous parsing specification left a lot of ambiguous corner cases which browsers interpreted differently,
	so those filters were potentially insecure already,
	and this specification does not worsen the situation.

<!--
 ██████  ██     ██    ███    ██    ██  ██████   ████████  ██████
██    ██ ██     ██   ██ ██   ███   ██ ██    ██  ██       ██    ██
██       ██     ██  ██   ██  ████  ██ ██        ██       ██
██       █████████ ██     ██ ██ ██ ██ ██   ████ ██████    ██████
██       ██     ██ █████████ ██  ████ ██    ██  ██             ██
██    ██ ██     ██ ██     ██ ██   ███ ██    ██  ██       ██    ██
 ██████  ██     ██ ██     ██ ██    ██  ██████   ████████  ██████
-->

<h2 id="changes" class=non-normative>
Changes</h2>

	<em>This section is non-normative.</em>


<h3 id="changes-CR-20140220">
Changes from the 20 February 2014 Candidate Recommendation</h3>

	The following substantive changes were made:

	* Removed <<unicode-range-token>>, in favor of creating a <<urange>> production.

	* url() functions that contain a string are now parsed as normal <<function-token>>s.
		url() functions that contain "raw" URLs are still specially parsed as <<url-token>>s.

	* Fixed a bug in the "Consume a URL token" algorithm,
		where it didn't consume the quote character starting a string before attempting to consume the string.

	* Fixed a bug in several of the parser algorithms
		related to the current/next input token and things getting consumed early/late.

	* Fix several bugs in the tokenization and parsing algorithms.

	* Change the definition of ident-like tokens to allow "--" to start an ident.
		As part of this, rearrange the ordering of the clauses in the "-" step of <a>consume a token</a>
		so that <<CDC-token>>s are recognized as such instead of becoming a ''--'' <<ident-token>>.

	* Don't serialize the digit in an <<an+b>> when A is 1 or -1.

	* Define all tokens to have a <a>representation</a>.

	* Fixed minor bug in <a>check if two code points are a valid escape</a>--
		a <code>\</code> followed by an EOF is now correctly reported as <em>not</em> a valid escape.
		A final <code>\</code> in a stylesheet now just emits itself as a <<delim-token>>.

	* @charset is no longer a valid CSS rule (there's just an encoding declaration that <em>looks</em> like a rule named @charset)

	* Trimmed whitespace from the beginning/ending of a declaration's value during parsing.

	* Removed the Selectors-specific tokens, per WG resolution.

	* Filtered <a>surrogates</a> from the input stream, per WG resolution.
		Now the entire specification operates only on <a>scalar values</a>.

	The following editorial changes were made:

	* The "Consume a string token" algorithm was changed to allow calling it without specifying an explicit ending token,
		so that it uses the current input token instead.
		The three call-sites of the algorithm were changed to use that form.

	* Minor editorial restructuring of algorithms.

	* Added the [=CSS/parse=] and [=parse a comma-separated list of component values=] API entry points.

	* Added the <<declaration-value>> and <<any-value>> productions.

	* Removed "code point" and "surrogate code point" in favor of the identical definitions in the Infra Standard.

	* Clarified on every range that they are inclusive.

	* Added a column to the comment-insertion table to handle a number token appearing next to a "%" delim token.

	<a href="https://github.com/w3c/csswg-drafts/milestone/5?closed=1">A Disposition of Comments is available.</a>


<h3 id="changes-WD-20131105">
Changes from the 5 November 2013 Last Call Working Draft</h3>

	<ul>
		<li>
			The <a href="#serialization">Serialization</a> section has been rewritten
			to make only the "round-trip" requirement normative,
			and move the details of how to achieve it into a note.
			Some corner cases in these details have been fixed.
		<li>
			[[ENCODING]] has been added to the list of normative references.
			It was already referenced in normative text before,
			just not listed as such.
		<li>
			In the algorithm to <a>determine the fallback encoding</a> of a stylesheet,
			limit the <code>@charset</code> byte sequence to 1024 bytes.
			This aligns with what HTML does for <code>&lt;meta charset></code>
			and makes sure the size of the sequence is bounded.
			This only makes a difference with leading or trailing whitespace
			in the encoding label:

			<pre>@charset "   <em>(lots of whitespace)</em>   utf-8";</pre>
	</ul>

<h3 id="changes-WD-20130919">
Changes from the 19 September 2013 Working Draft</h3>

	<ul>
		<li>
			The concept of <a>environment encoding</a> was added.
			The behavior does not change,
			but some of the definitions should be moved to the relevant specs.
	</ul>

<h3 id="changes-css21">
Changes from CSS 2.1 and Selectors Level 3</h3>

	Note: The point of this spec is to match reality;
	changes from CSS2.1 are nearly always because CSS 2.1 specified something that doesn't match actual browser behavior,
	or left something unspecified.
	If some detail doesn't match browsers,
	please let me know
	as it's almost certainly unintentional.

	Changes in decoding from a byte stream:

	<ul>
		<li>
			Only detect ''@charset'' rules in ASCII-compatible byte patterns.

		<li>
			Ignore ''@charset'' rules that specify an ASCII-incompatible encoding,
			as that would cause the rule itself to not decode properly.

		<li>
			Refer to [[!ENCODING]]
			rather than the IANA registry for character encodings.

	</ul>

	Tokenization changes:

	<ul>
		<li>
			Any U+0000 NULL <a>code point</a> in the CSS source is replaced with U+FFFD REPLACEMENT CHARACTER.

		<li>
			Any hexadecimal escape sequence such as ''\0'' that evaluates to zero
			produces U+FFFD REPLACEMENT CHARACTER rather than U+0000 NULL.
			<!--
				This covers a security issue:
				https://bugzilla.mozilla.org/show_bug.cgi?id=228856
			-->

		<li>
			The definition of <a>non-ASCII code point</a> was changed
			to be consistent with every definition of ASCII.
			This affects <a>code points</a> U+0080 to U+009F,
			which are now <a>name code points</a> rather than <<delim-token>>s,
			like the rest of <a>non-ASCII code points</a>.

		<li>
			Tokenization does not emit COMMENT or BAD_COMMENT tokens anymore.
			BAD_COMMENT is now considered the same as a normal token (not an error).
			<a href="#serialization">Serialization</a> is responsible
			for inserting comments as necessary between tokens that need to be separated,
			e.g. two consecutive <<ident-token>>s.

		<li>
			The <<unicode-range-token>> was removed,
			as it was low value and occasionally actively harmful.
			(''u+a { font-weight: bold; }'' was an invalid selector, for example...)

			Instead, a <<urange>> production was added,
			based on token patterns.
			It is technically looser than what 2.1 allowed
			(any number of digits and ? characters),
			but not in any way that should impact its use in practice.

		<li>
			Apply the <a href="https://www.w3.org/TR/CSS2/syndata.html#unexpected-eof">EOF error handling rule</a> in the tokenizer
			and emit normal <<string-token>> and <<url-token>> rather than BAD_STRING or BAD_URI
			on EOF.

		<li>
			The BAD_URI token (now <<bad-url-token>>) is "self-contained".
			In other words, once the tokenizer realizes it's in a <<bad-url-token>> rather than a <<url-token>>,
			it just seeks forward to look for the closing ),
			ignoring everything else.
			This behavior is simpler than treating it like a <<function-token>>
			and paying attention to opened blocks and such.
			Only WebKit exhibits this behavior,
			but it doesn't appear that we've gotten any compat bugs from it.

		<li>
			The <<comma-token>> has been added.

		<li>
			<<number-token>>, <<percentage-token>>, and <<dimension-token>> have been changed
			to include the preceding +/- sign as part of their value
			(rather than as a separate <<delim-token>> that needs to be manually handled every time the token is mentioned in other specs).
			The only consequence of this is that comments can no longer be inserted between the sign and the number.

		<li>
			Scientific notation is supported for numbers/percentages/dimensions to match SVG,
			per WG resolution.

		<li>
			Hexadecimal escapes for <a>surrogates</a> now emit a replacement character rather than the surrogate.
			This allows implementations to safely use UTF-16 internally.

	</ul>

	Parsing changes:

	<ul>
		<li>
			Any list of declarations now also accepts at-rules, like ''@page'',
			per WG resolution.
			This makes a difference in error handling
			even if no such at-rules are defined yet:
			an at-rule, valid or not, ends at a {} block without a <<semicolon-token>>
			and lets the next declaration begin.

		<li>
			The handling of some miscellaneous "special" tokens
			(like an unmatched <a href="#tokendef-close-curly">&lt;}-token></a>)
			showing up in various places in the grammar
			has been specified with some reasonable behavior shown by at least one browser.
			Previously, stylesheets with those tokens in those places just didn't match the stylesheet grammar at all,
			so their handling was totally undefined.
			Specifically:

			<ul>
				<li>
					[] blocks, () blocks and functions can now contain {} blocks, <<at-keyword-token>>s or <<semicolon-token>>s

				<li>
					Qualified rule preludes can now contain semicolons

				<li>
					Qualified rule and at-rule preludes can now contain <<at-keyword-token>>s
			</ul>

	</ul>

	<var>An+B</var> changes from Selectors Level 3 [[SELECT]]:

	<ul>
		<li>
			The <var>An+B</var> microsyntax has now been formally defined in terms of CSS tokens,
			rather than with a separate tokenizer.
			This has resulted in minor differences:

			<ul>
				<li>
					In some cases, minus signs or digits can be escaped
					(when they appear as part of the unit of a <<dimension-token>> or <<ident-token>>).
			</ul>
	</ul>

<h2 class=no-num id="acknowledgments">
Acknowledgments</h2>

	Thanks for feedback and contributions from
	Anne van Kesteren,
	David Baron,
	Henri Sivonen,
	Johannes Koch,
	呂康豪 (Kang-Hao Lu),
	Marc O'Morain,
	Raffaello Giulietti,
	Simon Pieters,
	Tyler Karaszewski,
	and Zack Weinberg.
