From ee5bf661779279a9d76f8e9fc09c8514fd3deab6 Mon Sep 17 00:00:00 2001
From: Maxime Labonne <81252890+mlabonne@users.noreply.github.com>
Date: Fri, 19 Apr 2024 16:56:14 +0100
Subject: [PATCH] Fix colab link

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 79fdedc..e3ca296 100644
--- a/README.md
+++ b/README.md
@@ -43,7 +43,7 @@ A list of notebooks and articles related to large language models.
 | Fine-tune CodeLlama using Axolotl | End-to-end guide to the state-of-the-art tool for fine-tuning. | [Article](https://mlabonne.github.io/blog/posts/A_Beginners_Guide_to_LLM_Finetuning.html) | Open In Colab |
 | Fine-tune Mistral-7b with SFT | Supervised fine-tune Mistral-7b in a free-tier Google Colab with TRL. | [Article](https://mlabonne.github.io/blog/posts/Fine_Tune_Your_Own_Llama_2_Model_in_a_Colab_Notebook.html) | Open In Colab |
 | Fine-tune Mistral-7b with DPO | Boost the performance of supervised fine-tuned models with DPO. | [Article](https://mlabonne.github.io/blog/posts/Fine_tune_Mistral_7b_with_DPO.html) | Open In Colab |
-| Fine-tune Llama-3-8b with ORPO | Cheaper and faster fine-tuning in a single stage with ORPO. | [Article](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) | Open In Colab |
+| Fine-tune Llama-3-8b with ORPO | Cheaper and faster fine-tuning in a single stage with ORPO. | [Article](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) | Open In Colab |

 ### Quantization