[lex.phases] Use preprocessing token consistently
Prior to converting preprocessing tokens to tokens in phase 7,
all tokens are strictly preprocessing tokens.  Add the missing
qualification where appropriate through the phases of translation
up to that point of conversion.
AlisdairM committed Oct 29, 2024
1 parent bf43925 commit 712fc29
Showing 1 changed file with 4 additions and 4 deletions.
8 changes: 4 additions & 4 deletions source/lex.tex
@@ -169,16 +169,16 @@ source/lex.tex
 All preprocessing directives are then deleted.

 \item
-For a sequence of two or more adjacent \grammarterm{string-literal} tokens,
+For a sequence of two or more adjacent \grammarterm{string-literal} preprocessing tokens,
 a common \grammarterm{encoding-prefix} is determined
 as specified in \ref{lex.string}.
-Each such \grammarterm{string-literal} token is then considered to have
+Each such \grammarterm{string-literal} preprocessing token is then considered to have
 that common \grammarterm{encoding-prefix}.

 \item
-Adjacent \grammarterm{string-literal} tokens are concatenated\iref{lex.string}.
+Adjacent \grammarterm{string-literal} preprocessing tokens are concatenated\iref{lex.string}.

-\item Whitespace characters separating tokens are no longer
+\item Whitespace characters separating preprocessing tokens are no longer
 significant. Each preprocessing token is converted into a
 token\iref{lex.token}. The resulting tokens
 constitute a \defn{translation unit} and
