# How Faithful are RAG Models?

import {Bleed} from 'nextra-theme-docs'

<Bleed>
  <iframe width="100%"
    height="415px"
    src="https://www.youtube.com/embed/eEU1dWVE8QQ?si=b-qgCU8nibBCSX8H"
    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
    allowFullScreen
  />
</Bleed>

This new paper by [Wu et al. (2024)](https://arxiv.org/abs/2404.10198) aims to quantify the tug-of-war between the information retrieved through RAG and the LLM's internal prior (i.e., its parametric knowledge). The analysis focuses on GPT-4 and other LLMs performing question answering. It finds that providing the correct retrieved information fixes most of the model's mistakes (94% accuracy).

!["RAG Faithfulness"](../../img/research/rag-faith.png)
*Source: [Wu et al. (2024)](https://arxiv.org/abs/2404.10198)*
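
To make the setup concrete, below is a minimal sketch of how one might probe this behavior: ask the model a question closed-book (so the answer reflects its internal prior) and then again with a correct reference document in the prompt. The OpenAI Python SDK usage, the `gpt-4o` model name, and the example question and reference value are assumptions for illustration, not the paper's actual evaluation harness.

```python
# Minimal sketch: compare the model's closed-book answer (its prior)
# with its answer when a correct reference document is supplied.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(question: str, document: str | None = None, model: str = "gpt-4o") -> str:
    """Query the model with or without a retrieved document in the prompt."""
    if document is None:
        prompt = question
    else:
        prompt = (
            "Answer the question using the reference below.\n\n"
            f"Reference: {document}\n\nQuestion: {question}"
        )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

# Illustrative example (question and reference value are hypothetical)
question = "What is the maximum recommended daily dose of ibuprofen for adults?"
reference = "The maximum recommended daily dose of ibuprofen for adults is 3200 mg."

prior_answer = ask(question)            # reflects the model's internal prior
rag_answer = ask(question, reference)   # should follow the correct reference
print(prior_answer, rag_answer, sep="\n---\n")
```
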
When the retrieved documents contain incorrect values and the LLM's internal prior is weak, the LLM is more likely to recite the incorrect information. However, LLMs are found to be more resistant to such misinformation when they have a stronger prior.
The paper also reports that "the more the modified information deviates from the model's prior, the less likely the model is to prefer it."
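
This deviation effect can be probed in the same way: perturb the value in the reference document by increasingly large factors and check whether the model adopts the modified value or falls back on its prior. The sketch below reuses the hypothetical `ask` helper from the previous example; the perturbation factors and the simple string-matching check are simplifications of the paper's procedure.

```python
# Sketch: scale the reference value by increasing factors and check
# whether the model follows the perturbed document or keeps its prior.
true_value = 3200  # mg, the value the model's prior presumably agrees with

for factor in [1.5, 3, 10, 100]:
    modified_value = int(true_value * factor)
    perturbed_doc = (
        "The maximum recommended daily dose of ibuprofen "
        f"for adults is {modified_value} mg."
    )
    answer = ask(question, perturbed_doc)
    # Crude stand-in for the paper's answer-matching step.
    followed_context = str(modified_value) in answer
    print(f"x{factor}: model followed the perturbed document -> {followed_context}")
```
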
Many developers and companies are using RAG systems in production. This work highlights the importance of assessing risk when using LLMs with different kinds of contextual information, which may be supporting, contradicting, or completely incorrect.