From 6a332ace3d7f8e0bcba939345c95d26c7ba050ec Mon Sep 17 00:00:00 2001
From: Andriy Mulyar
Date: Tue, 28 Mar 2023 16:21:09 -0400
Subject: [PATCH] Update README.md

---
 README.md | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 030fad76..2d05fddf 100644
--- a/README.md
+++ b/README.md
@@ -16,15 +16,20 @@ You can download pre-compiled LLaMa C++ Interactive Chat binaries here:
 - [Intel/Windows]()
 
 and the model
-- [gpt4all-quantized]()
+- [gpt4all-quantized](https://s3.amazonaws.com/static.nomic.ai/gpt4all/models/gpt4all-lora-quantized.bin)
 
 # Reproducibility
 
-You can find trained LoRA model weights at:
-- gpt4all-lora https://huggingface.co/nomic-ai/gpt4all-lora
+Trained LoRA Weights:
+- gpt4all-lora: https://huggingface.co/nomic-ai/gpt4all-lora
+- gpt4all-lora-epoch-2: https://huggingface.co/nomic-ai/gpt4all-lora-epoch-2
+
+Raw Data:
+- [Training Data Without P3](https://s3.amazonaws.com/static.nomic.ai/gpt4all/2022_03_27/gpt4all_curated_data_without_p3_2022_03_27.tar.gz)
+- [Full Dataset with P3](https://s3.amazonaws.com/static.nomic.ai/gpt4all/2022_03_27/gpt4all_curated_data_full_2022_03_27.tar.gz)
 
 We are not distributing a LLaMa 7B checkpoint.
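The patch publishes the LoRA adapter weights but, as its last line notes, no LLaMA 7B base checkpoint is distributed. Below is a minimal sketch of how the `nomic-ai/gpt4all-lora` adapter could be applied on top of a base model using the Hugging Face `transformers` and `peft` libraries. This is not part of the patch or the repo's documented workflow: the `BASE_MODEL` path is a hypothetical placeholder for your own converted LLaMA 7B checkpoint, which you must obtain separately.

```python
# Sketch: applying the published gpt4all LoRA adapter to a local LLaMA 7B
# checkpoint via transformers + peft. Assumes you already have converted
# HF-format LLaMA weights; the repo does not distribute them.
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

BASE_MODEL = "path/to/your/llama-7b-hf"  # hypothetical local path, not provided here

tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)
model = LlamaForCausalLM.from_pretrained(BASE_MODEL)

# Load the LoRA adapter weights published in the patch above from the HF Hub.
model = PeftModel.from_pretrained(model, "nomic-ai/gpt4all-lora")

prompt = "Explain what a quantized model is."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```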