[lex.phases] Use preprocessing token consistently #7361

Open · wants to merge 1 commit into base: main
14 changes: 7 additions & 7 deletions source/lex.tex
@@ -169,19 +169,19 @@
 All preprocessing directives are then deleted.

 \item
-For a sequence of two or more adjacent \grammarterm{string-literal} tokens,
+For a sequence of two or more adjacent \grammarterm{string-literal} preprocessing tokens,
 a common \grammarterm{encoding-prefix} is determined
 as specified in \ref{lex.string}.
-Each such \grammarterm{string-literal} token is then considered to have
+Each such \grammarterm{string-literal} preprocessing token is then considered to have
 that common \grammarterm{encoding-prefix}.

 \item
-Adjacent \grammarterm{string-literal} tokens are concatenated\iref{lex.string}.
+Adjacent \grammarterm{string-literal} preprocessing tokens are concatenated\iref{lex.string}.

-\item Whitespace characters separating tokens are no longer
-significant. Each preprocessing token is converted into a
-token\iref{lex.token}. The resulting tokens
-constitute a \defn{translation unit} and
+\item
+Each preprocessing token is converted into a token\iref{lex.token}.
+Whitespace characters separating tokens are no longer significant.
+The resulting tokens constitute a \defn{translation unit} and
 are syntactically and
 semantically analyzed and translated.
 \begin{note}