Commit
faithfulness meaning was wrong
bubl-ai authored May 8, 2024
1 parent 7e1f679 commit e9ff2f5
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion _posts/2024-03-23-RAG-Design-Tradeoffs.md
@@ -56,7 +56,7 @@ Several factors should be taken into account when making chunking decisions:
- **Cost and response time:** Smaller chunks require less processing, resulting in shorter response times. Synthesis time and cost should also be considered, as both tend to increase with larger chunk sizes.

Is there a method to empirically determine the optimal chunk size? The [LlamaIndex Response Evaluation](https://www.llamaindex.ai/blog/evaluating-the-ideal-chunk-size-for-a-rag-system-using-llamaindex-6207e5d3fec5) addresses this question. They emphasize that finding the best chunk size for a RAG system involves both intuition and empirical evidence. With this module, you can experiment with different sizes and base your decisions on concrete data, evaluating the efficiency and accuracy of your RAG system using the following criteria:
-- **Faithfulness:** Evaluate the absence of 'hallucinations' by comparing the query with the retrieved contexts. Measure whether the response from a query engine matches any source nodes.
+- **Faithfulness:** Evaluate the absence of 'hallucinations' by comparing the retrieved documents with the response.
- **Relevancy:** Determine if the response effectively answers the query and if it aligns with the source nodes.
- **Response Generation Time:** Larger chunk sizes increase the volume of information processed by the LLM to generate an answer, slowing down the system and affecting its responsiveness.
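The empirical comparison described above can be sketched as a simple loop over candidate chunk sizes. This is an illustrative stand-in, not LlamaIndex's actual API: `chunk`, `faithfulness`, `evaluate_chunk_sizes`, and `query_fn` are all hypothetical names, and the word-overlap proxy below is far cruder than the LLM-judged faithfulness evaluation the linked post uses.

```python
import time

def chunk(text, size):
    """Split text into fixed-size character chunks (a toy chunking strategy;
    real systems usually chunk by tokens or sentences)."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def faithfulness(response, contexts):
    """Toy faithfulness proxy: the fraction of response words that appear in
    the retrieved contexts. LlamaIndex's evaluator instead asks an LLM judge
    whether the response is supported by the source nodes."""
    context_words = set(" ".join(contexts).split())
    words = response.split()
    return sum(w in context_words for w in words) / max(len(words), 1)

def evaluate_chunk_sizes(text, query_fn, sizes):
    """Score each candidate chunk size on faithfulness and response time.
    `query_fn` stands in for the whole retrieval + synthesis pipeline."""
    results = {}
    for size in sizes:
        contexts = chunk(text, size)
        start = time.perf_counter()
        response = query_fn(contexts)
        elapsed = time.perf_counter() - start
        results[size] = {
            "faithfulness": faithfulness(response, contexts),
            "response_time": elapsed,
        }
    return results
```

Picking the size with the best faithfulness/latency trade-off from such a table mirrors the data-driven approach the post advocates, rather than choosing a chunk size by intuition alone.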

