From 712fc29ef0116b7aa4ef1ec8a8dc12903b5542ff Mon Sep 17 00:00:00 2001
From: Alisdair Meredith
Date: Tue, 29 Oct 2024 09:49:44 -0400
Subject: [PATCH] [lex.phases] Use preprocessing token consistently

Prior to converting preprocessing tokens to tokens in phase 7, all
tokens are strictly preprocessing tokens. Adding the missing
qualification is appropriate through the phases of translation up to
that point of conversion.
---
 source/lex.tex | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/source/lex.tex b/source/lex.tex
index d034d35f94..aab4c4131a 100644
--- a/source/lex.tex
+++ b/source/lex.tex
@@ -169,16 +169,16 @@
 All preprocessing directives are then deleted.
 
 \item
-For a sequence of two or more adjacent \grammarterm{string-literal} tokens,
+For a sequence of two or more adjacent \grammarterm{string-literal} preprocessing tokens,
 a common \grammarterm{encoding-prefix} is determined as specified in \ref{lex.string}.
-Each such \grammarterm{string-literal} token is then considered to have
+Each such \grammarterm{string-literal} preprocessing token is then considered to have
 that common \grammarterm{encoding-prefix}.
 
 \item
-Adjacent \grammarterm{string-literal} tokens are concatenated\iref{lex.string}.
+Adjacent \grammarterm{string-literal} preprocessing tokens are concatenated\iref{lex.string}.
 
-\item Whitespace characters separating tokens are no longer
+\item Whitespace characters separating preprocessing tokens are no longer
 significant.
 Each preprocessing token is converted into a token\iref{lex.token}.
 The resulting tokens constitute a \defn{translation unit} and