I am using shiki to highlight code blocks from LLM streams, using shiki's `highlighterCoreSync` and manually loading languages as needed, and it is working pretty well. I'm using the `highlighter.codeToTokens` API (`shiki/packages/core/src/highlight/code-to-tokens.ts`, line 13 at `6f4ba4d`) and rendering the tokens with a custom component.

Since the code block is streamed in, I'd like to cache the work it's done so far so that the entire highlight isn't recomputed every time more of the code string is appended.

Digging into the code, I noticed that `codeToTokens` takes a `grammarState` option:
```ts
/**
 * Represent the state of the grammar, allowing to continue tokenizing
 * from an intermediate grammar state.
 *
 * You can get the grammar state from `getLastGrammarState`.
 */
grammarState?: GrammarState;
```
As the code block is streamed in from the LLM, I'm passing the `grammarState` from the previously computed `TokensResult` back in. Am I using the `grammarState` option in `codeToTokens` correctly? I want to cache the work the highlighter has already done, so that each chunk the LLM appends doesn't recompute tokens for the entire code block.

When I compare `TokensResult.tokens` with the previous computation, the token arrays are never identical (`===`), so I can't tell whether passing the `GrammarState` is actually reusing any work.
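For reference, here is a minimal, self-contained sketch of the caching pattern I'm trying to achieve. Note the heavy assumptions: `fakeTokenize` is a mock that stands in for shiki's real `codeToTokens`, `GrammarState` is reduced to a plain object, and `IncrementalHighlighter` is my own hypothetical wrapper, not a shiki API. The idea is that tokens for completed lines are cached, and only the newly appended text is tokenized, continuing from the saved grammar state.

```typescript
// Mock stand-ins so the sketch runs without shiki. In the real code,
// tokenizing would be `highlighter.codeToTokens(code, { lang, theme, grammarState })`.
interface GrammarState { lineCount: number } // opaque in shiki; simplified here
interface TokenizeResult { tokens: string[][]; grammarState: GrammarState }

// Mock tokenizer: splits each line into whitespace-delimited "tokens".
function fakeTokenize(code: string, state?: GrammarState): TokenizeResult {
  const lines = code.split("\n");
  const tokens = lines.map(line => line.split(/\s+/).filter(Boolean));
  return { tokens, grammarState: { lineCount: (state?.lineCount ?? 0) + lines.length } };
}

class IncrementalHighlighter {
  private doneTokens: string[][] = []; // cached tokens for completed lines
  private stableState?: GrammarState;  // grammar state at the end of the completed lines
  private pending = "";                // text after the last newline (still incomplete)

  // Append a streamed chunk and return tokens for the whole block so far.
  push(chunk: string): string[][] {
    const text = this.pending + chunk;
    const cut = text.lastIndexOf("\n");
    if (cut >= 0) {
      // Tokenize only the newly completed lines, continuing from the
      // cached grammar state, and cache their tokens permanently.
      const res = fakeTokenize(text.slice(0, cut), this.stableState);
      this.doneTokens.push(...res.tokens);
      this.stableState = res.grammarState;
      this.pending = text.slice(cut + 1);
    } else {
      this.pending = text;
    }
    // Only the still-incomplete last line is re-tokenized on every chunk.
    const tail = this.pending
      ? fakeTokenize(this.pending, this.stableState).tokens
      : [];
    return [...this.doneTokens, ...tail];
  }
}

// Usage: feed chunks as they stream in.
const h = new IncrementalHighlighter();
h.push("const a");
const tokens = h.push(" = 1;\nlet b = 2");
```

With this shape, the cached rows in `doneTokens` keep their identity across chunks, so a renderer comparing rows with `===` can skip re-rendering completed lines.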