diff --git a/img/research/rag-faith.png b/img/research/rag-faith.png
new file mode 100644
index 0000000..dbccb25
Binary files /dev/null and b/img/research/rag-faith.png differ
diff --git a/pages/models/llama-3.en.mdx b/pages/models/llama-3.en.mdx
index f379b4b..3d57665 100644
--- a/pages/models/llama-3.en.mdx
+++ b/pages/models/llama-3.en.mdx
@@ -1,5 +1,7 @@
# Llama 3
+import {Bleed} from 'nextra-theme-docs'
+
Meta recently [introduced](https://llama.meta.com/llama3/) their new family of large language models (LLMs) called Llama 3. This release includes 8B and 70B parameters pre-trained and instruction-tuned models.
## Llama 3 Architecture Details
@@ -29,4 +31,19 @@ The pretrained models also outperform other models on several benchmarks like AG
Meta also reported that they will be releasing a 400B parameter model which is still training and coming soon! There are also efforts around multimodal support, multilingual capabilities, and longer context windows in the pipeline. The current checkpoint for Llama 3 400B (as of April 15, 2024) produces the following results on the common benchmarks like MMLU and Big-Bench Hard:
-The licensing information for the Llama 3 models can be found on the [model card](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md).
\ No newline at end of file
+!["Llama 3 400B"](../../img/llama3/llama-400b.png)
+*Source: [Meta AI](https://ai.meta.com/blog/meta-llama-3/)*
+
+The licensing information for the Llama 3 models can be found on the [model card](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md).
+
+## Extended Review of Llama 3
+
+Here is a longer review of Llama 3:
+
+
+
+
\ No newline at end of file
diff --git a/pages/research/_meta.en.json b/pages/research/_meta.en.json
index 09c81a4..2e8b35c 100644
--- a/pages/research/_meta.en.json
+++ b/pages/research/_meta.en.json
@@ -2,6 +2,7 @@
"llm-agents": "LLM Agents",
"rag": "RAG for LLMs",
"llm-reasoning": "LLM Reasoning",
+ "rag-faithfulness": "RAG Faithfulness",
"llm-recall": "LLM In-Context Recall",
"rag_hallucinations": "RAG Reduces Hallucination",
"synthetic_data": "Synthetic Data",
diff --git a/pages/research/rag-faithfulness.en.mdx b/pages/research/rag-faithfulness.en.mdx
new file mode 100644
index 0000000..143ecee
--- /dev/null
+++ b/pages/research/rag-faithfulness.en.mdx
@@ -0,0 +1,26 @@
+# How Faithful are RAG Models?
+
+import {Bleed} from 'nextra-theme-docs'
+
+
+
+
+
+This new paper by [Wu et al. (2024)](https://arxiv.org/abs/2404.10198) aims to quantify the tug-of-war between RAG and LLMs' internal prior.
+
+The analysis focuses on question answering with GPT-4 and other LLMs.
+
+It finds that providing the correct retrieved information corrects most of the model's mistakes (94% accuracy).
+
+!["RAG Faithfulness"](../../img/research/rag-faith.png)
+*Source: [Wu et al. (2024)](https://arxiv.org/abs/2404.10198)*
+
+When the documents contain more incorrect values and the LLM's internal prior is weak, the LLM is more likely to recite the incorrect information. However, LLMs are more resistant to incorrect retrieved information when they have a stronger prior.
+
+The paper also reports that "the more the modified information deviates from the model's prior, the less likely the model is to prefer it."
+
+Many developers and companies are using RAG systems in production. This work highlights the importance of assessing risk when using LLMs with different kinds of contextual information that may contain supporting, contradicting, or completely incorrect information.
\ No newline at end of file