[lex.phases] Use preprocessing token consistently
Prior to converting preprocessing tokens to tokens in phase 7,
all tokens are strictly preprocessing tokens.  Add the missing
qualification, as appropriate, through the phases of translation
up to that point of conversion.
AlisdairM committed Oct 29, 2024
1 parent bf43925 commit 4ca51f4
Showing 1 changed file with 7 additions and 7 deletions.
source/lex.tex
@@ -169,19 +169,19 @@
 All preprocessing directives are then deleted.

 \item
-For a sequence of two or more adjacent \grammarterm{string-literal} tokens,
+For a sequence of two or more adjacent \grammarterm{string-literal} preprocessing tokens,
 a common \grammarterm{encoding-prefix} is determined
 as specified in \ref{lex.string}.
-Each such \grammarterm{string-literal} token is then considered to have
+Each such \grammarterm{string-literal} preprocessing token is then considered to have
 that common \grammarterm{encoding-prefix}.

 \item
-Adjacent \grammarterm{string-literal} tokens are concatenated\iref{lex.string}.
+Adjacent \grammarterm{string-literal} preprocessing tokens are concatenated\iref{lex.string}.

-\item Whitespace characters separating tokens are no longer
-significant. Each preprocessing token is converted into a
-token\iref{lex.token}. The resulting tokens
-constitute a \defn{translation unit} and
+\item
+Each preprocessing token is converted into a token\iref{lex.token}.
+Whitespace characters separating tokens are no longer significant.
+The resulting tokens constitute a \defn{translation unit} and
 are syntactically and
 semantically analyzed and translated.
 \begin{note}
