diff --git a/README.md b/README.md
index 41bd676d..d7b76489 100644
--- a/README.md
+++ b/README.md
@@ -10,7 +10,7 @@
 # Reproducibility

 You can find trained LoRa model weights at:
-- gpt4all-lora https://huggingface.co/nomic-ai/vicuna-lora-multi-turn
+- gpt4all-lora https://huggingface.co/nomic-ai/gpt4all-lora

 We are not distributing LLaMa 7B checkpoint they need to be used in association with.