From 139e343da2ea244d593ede0e5281b12d92cc3e1a Mon Sep 17 00:00:00 2001
From: Maxime Labonne <81252890+mlabonne@users.noreply.github.com>
Date: Mon, 28 Aug 2023 14:30:51 +0100
Subject: [PATCH] Update README.md

---
 README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 7f7c58b..518b3f3 100644
--- a/README.md
+++ b/README.md
@@ -7,12 +7,12 @@ A list of notebooks and articles related to large language models.
 
 | Notebook | Description | Article | Notebook |
 |---------------------------------------|-------------------------------------------------------------------------|---------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------|
 | Decoding Strategies in Large Language Models | A guide to text generation from beam search to nucleus sampling | [Article](https://mlabonne.github.io/blog/posts/2022-06-07-Decoding_strategies.html) | Open In Colab |
-| Introduction to Weight Quantization | Large language model optimization using 8-bit quantization. | [Article](https://mlabonne.github.io/blog/posts/Introduction_to_Weight_Quantization.html) | Open In Colab |
 | Visualizing GPT-2's Loss Landscape | 3D plot of the loss landscape based on weight pertubations. | [Tweet](https://twitter.com/maximelabonne/status/1667618081844219904) | Open In Colab |
 | Improve ChatGPT with Knowledge Graphs | Augment ChatGPT's answers with knowledge graphs. | [Article](https://mlabonne.github.io/blog/posts/Article_Improve_ChatGPT_with_Knowledge_Graphs.html) | Open In Colab |
-| 4-bit LLM Quantization using GPTQ | Quantize your own open-source LLMs to run them on consumer hardware. | [Article](https://mlabonne.github.io/blog/4bit_quantization/) | Open In Colab |
 | Fine-tune Llama 2 in Google Colab | Fine-tune a Llama 2 model on an HF dataset and upload it to the HF Hub. | [Tweet](https://twitter.com/maximelabonne/status/1681791164083576833) | Open In Colab |
-
+| Introduction to Weight Quantization | Large language model optimization using 8-bit quantization. | [Article](https://mlabonne.github.io/blog/posts/Introduction_to_Weight_Quantization.html) | Open In Colab |
+| 4-bit LLM Quantization using GPTQ | Quantize your own open-source LLMs to run them on consumer hardware. | [Article](https://mlabonne.github.io/blog/4bit_quantization/) | Open In Colab |
+| Quantize Llama 2 models using ggml | Quantize Llama 2 models with ggml and upload it to the HF Hub. | [Tweet](https://twitter.com/maximelabonne/status/1696151994568741365) | Open In Colab |
 
 ## Roadmap