I noticed that the maximum length of the dataset used in the paper is 85K, and Llama 3.1 supports 130K. If I want to use Qwen 2.5, which only supports 30K, does that mean I cannot reuse the data?
Thank you for your feedback! While it is possible to adapt the dataset for models with shorter context lengths like Qwen 2.5 by truncating or splitting the reference text, we strongly recommend using an LLM that supports longer contexts. The ability to handle extended inputs is a core requirement for the Cache-Augmented Generation (CAG) approach, as it relies on preloading the entire reference text into the model's context.
To maximize the benefits of CAG and preserve the methodology as designed, we recommend selecting an LLM capable of handling the full context size of your dataset. This ensures that the core properties of CAG, such as efficiency and retrieval-free operation, are fully realized.
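If the data must nevertheless be adapted to a shorter context window such as Qwen 2.5's ~30K tokens, the splitting workaround mentioned above can be sketched as follows. This is a minimal illustration only, with whitespace splitting standing in for the model's real tokenizer (`split_reference` and its token budget are hypothetical names, not part of the paper's code):

```python
def split_reference(text: str, max_tokens: int) -> list[str]:
    """Greedily pack whitespace-delimited tokens into chunks of at most
    max_tokens each. A real pipeline should count tokens with the target
    model's own tokenizer instead of str.split()."""
    tokens = text.split()
    return [
        " ".join(tokens[start:start + max_tokens])
        for start in range(0, len(tokens), max_tokens)
    ]

if __name__ == "__main__":
    # Toy "reference text" of 100 pseudo-tokens, split for a 30-token budget.
    doc = " ".join(f"word{i}" for i in range(100))
    parts = split_reference(doc, max_tokens=30)
    print(len(parts))  # → 4 (chunks of 30 + 30 + 30 + 10 tokens)
```

Note that each chunk would then need to be preloaded and cached separately, which sacrifices the single-preload property that makes CAG efficient, hence the recommendation to use a long-context model instead.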