Mirror of https://github.com/nomic-ai/gpt4all, synced 2024-11-08 07:10:32 +00:00
Update README.md (commit f2d0e5e17b, parent 19fc709cd1)

@@ -8,7 +8,14 @@
# Reproducibility
To reproduce our trained assistant model with LoRA, do the following:
You can find the trained LoRA model weights at:
- gpt4all-lora https://huggingface.co/nomic-ai/vicuna-lora-multi-turn
We are not distributing the LLaMA 7B checkpoint that these weights need to be used in association with.
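One possible way to apply these LoRA weights, sketched with the Hugging Face `transformers` and `peft` libraries; the local path `./llama-7b-hf` is a placeholder for wherever you keep your own converted LLaMA 7B checkpoint, which is not provided here:

```python
# Sketch: layer the gpt4all-lora adapter onto a locally obtained LLaMA 7B base.
# Assumes `transformers` and `peft` are installed, and that `./llama-7b-hf`
# is a placeholder path to your own converted LLaMA weights (not distributed).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("./llama-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("./llama-7b-hf")

# Apply the LoRA adapter published at the Hugging Face repo above.
model = PeftModel.from_pretrained(base, "nomic-ai/vicuna-lora-multi-turn")
model.eval()
```

Because the base checkpoint must be obtained separately, this snippet only runs once those weights are in place.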
To reproduce our LoRA training run, do the following:
## Setup