Compare commits

...

2 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Elvis Saravia | 8dcc7bffd6 | rag-faithfulness | 1 month ago |
| Elvis Saravia | ac2b46c623 | llama 3 | 1 month ago |

[Binary files not shown: four new image files added (142 KiB, 143 KiB, 141 KiB, and 60 KiB).]

@@ -10,6 +10,7 @@
"gpt-4": "GPT-4",
"grok-1": "Grok-1",
"llama": "LLaMA",
"llama-3": "Llama 3",
"mistral-7b": "Mistral 7B",
"mistral-large": "Mistral Large",
"mixtral": "Mixtral",

@@ -0,0 +1,49 @@
# Llama 3
import {Bleed} from 'nextra-theme-docs'
Meta recently [introduced](https://llama.meta.com/llama3/) their new family of large language models (LLMs) called Llama 3. This release includes pre-trained and instruction-tuned models with 8B and 70B parameters.
## Llama 3 Architecture Details
Here is a summary of the technical details Meta shared about Llama 3:
- It uses a standard decoder-only transformer.
- The vocabulary is 128K tokens.
- It is trained on sequences of 8K tokens.
- It applies grouped query attention (GQA).
- It is pretrained on over 15T tokens.
- It involves post-training that includes a combination of SFT, rejection sampling, PPO, and DPO.
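For a more concrete view of these choices, here is a minimal sketch that inspects the published configuration of the 8B checkpoint through the Hugging Face `transformers` library; the model ID, the gated-access requirement, and the exact values printed are assumptions to verify against the official model card rather than details taken from Meta's announcement.

```python
# Minimal sketch: read Llama 3's architecture settings from its Hugging Face config.
# Assumes `transformers` is installed and that you have been granted access to the
# gated "meta-llama/Meta-Llama-3-8B" repository (an assumed model ID).
from transformers import AutoConfig

config = AutoConfig.from_pretrained("meta-llama/Meta-Llama-3-8B")

print(config.vocab_size)               # ~128K-token vocabulary
print(config.max_position_embeddings)  # 8K-token training context
print(config.num_attention_heads)      # number of query heads
print(config.num_key_value_heads)      # fewer KV heads than query heads -> GQA
```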
## Performance
Notably, Llama 3 8B (instruction-tuned) outperforms [Gemma 7B](https://www.promptingguide.ai/models/gemma) and [Mistral 7B Instruct](https://www.promptingguide.ai/models/mistral-7b). Llama 3 70B broadly outperforms [Gemini Pro 1.5](https://www.promptingguide.ai/models/gemini-pro) and [Claude 3 Sonnet](https://www.promptingguide.ai/models/claude-3), while falling slightly behind Gemini Pro 1.5 on the MATH benchmark.
!["Llama 3 Performance"](../../img/llama3/llama-instruct-performance.png)
*Source: [Meta AI](https://ai.meta.com/blog/meta-llama-3/)*
The pretrained models also outperform other models on several benchmarks like AGIEval (English), MMLU, and Big-Bench Hard.
!["Llama 3 Performance"](../../img/llama3/llama3-pretrained-results.png)
*Source: [Meta AI](https://ai.meta.com/blog/meta-llama-3/)*
## Llama 3 400B
Meta also reported that it will be releasing a 400B-parameter model, which is still training and coming soon. There are also efforts around multimodal support, multilingual capabilities, and longer context windows in the pipeline. The current checkpoint of Llama 3 400B (as of April 15, 2024) produces the following results on common benchmarks such as MMLU and Big-Bench Hard:
!["Llama 3 400B"](../../img/llama3/llama-400b.png)
*Source: [Meta AI](https://ai.meta.com/blog/meta-llama-3/)*
The licensing information for the Llama 3 models can be found on the [model card](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md).
## Extended Review of Llama 3
Here is a longer review of Llama 3:
<Bleed>
<iframe width="100%"
height="415px"
src="https://www.youtube.com/embed/h2aEmciRd6U?si=m7-xXu5IWpB-6mE0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowFullScreen
/>
</Bleed>

@@ -2,6 +2,7 @@
"llm-agents": "LLM Agents",
"rag": "RAG for LLMs",
"llm-reasoning": "LLM Reasoning",
"rag-faithfulness": "RAG Faithfulness",
"llm-recall": "LLM In-Context Recall",
"rag_hallucinations": "RAG Reduces Hallucination",
"synthetic_data": "Synthetic Data",

@@ -0,0 +1,26 @@
# How Faithful are RAG Models?
import {Bleed} from 'nextra-theme-docs'
<Bleed>
<iframe width="100%"
height="415px"
src="https://www.youtube.com/embed/eEU1dWVE8QQ?si=b-qgCU8nibBCSX8H" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowFullScreen
/>
</Bleed>
This new paper by [Wu et al. (2024)](https://arxiv.org/abs/2404.10198) aims to quantify the tug-of-war between RAG and LLMs' internal prior.
The analysis focuses on question answering with GPT-4 and other LLMs.
It finds that providing correct retrieved information fixes most of the model mistakes (94% accuracy).
!["RAG Faithfulness"](../../img/research/rag-faith.png)
*Source: [Wu et al. (2024)](https://arxiv.org/abs/2404.10198)*
When the documents contain more incorrect values and the LLM's internal prior is weak, the LLM is more likely to recite incorrect information. However, the LLMs are found to be more resistant when they have a stronger prior.
The paper also reports that "the more the modified information deviates from the model's prior, the less likely the model is to prefer it."
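As a rough sketch of how one might probe this prior-versus-context tension (an illustration under assumptions, not the authors' exact protocol), you can ask a model the same question with and without a deliberately perturbed "retrieved" document and compare the answers; the model name, prompts, and example fact below are placeholders.

```python
# Rough sketch of a prior-vs-context probe, loosely inspired by Wu et al. (2024).
# Uses the OpenAI Python client (v1+); the model name and perturbed fact are placeholders.
from openai import OpenAI

client = OpenAI()

question = "What is the boiling point of water at sea level in degrees Celsius?"
perturbed_doc = "Reference document: water boils at 150 degrees Celsius at sea level."

def ask(messages):
    resp = client.chat.completions.create(model="gpt-4", messages=messages)
    return resp.choices[0].message.content

# 1) Prior: the model answers from its internal knowledge alone.
prior_answer = ask([{"role": "user", "content": question}])

# 2) RAG-style: the same question, preceded by an incorrect retrieved document.
rag_answer = ask([
    {"role": "system", "content": "Answer using only the provided document."},
    {"role": "user", "content": f"{perturbed_doc}\n\nQuestion: {question}"},
])

print("Prior answer:", prior_answer)
print("Answer with perturbed context:", rag_answer)
# Repeating this with values that deviate more and more from the true answer gives a
# crude measure of how readily the model abandons its prior for the retrieved context.
```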
Many developers and companies use RAG systems in production. This work highlights the importance of assessing risk when using LLMs with different kinds of contextual information, which may be supporting, contradicting, or completely incorrect.