# GPT4All

Demo, data, and code to train an assistant-style large language model.

## Try it yourself

-- TODO LLAMA C++ code

## Reproducibility

You can find trained LoRA model weights at:

We are not distributing the LLaMA 7B checkpoint.
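
Once you have both pieces, the adapter can be applied on top of the base checkpoint with the vendored `peft` library. A minimal sketch, assuming hypothetical local paths for the base LLaMA 7B weights and the downloaded LoRA adapter:

```python
# Minimal sketch: apply a trained LoRA adapter to a base LLaMA checkpoint.
# Both paths below are placeholders; neither file ships with this repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_path = "path/to/llama-7b"        # placeholder: your own LLaMA 7B weights
lora_adapter_path = "path/to/lora-weights"  # placeholder: the trained LoRA adapter

tokenizer = AutoTokenizer.from_pretrained(base_model_path)
model = AutoModelForCausalLM.from_pretrained(base_model_path, torch_dtype=torch.float16)
# Wrap the base model so the LoRA weights are applied during the forward pass.
model = PeftModel.from_pretrained(model, lora_adapter_path)
model.eval()
```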

To reproduce our LoRA training run, do the following:

### Setup

Clone the repo (with `--recurse-submodules`, which also pulls in the pinned `transformers` and `peft` forks the training code depends on):

```bash
git clone --recurse-submodules git@github.com:nomic-ai/gpt4all.git
git submodule update --init
```

Setup the environment:

```bash
python -m pip install -r requirements.txt

cd transformers
pip install -e .

cd ../peft
pip install -e .
```

If you use conda, the repo also ships an `env.yaml` that should provide an equivalent environment (`conda env create -f env.yaml`).
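
Because both libraries are installed in editable mode from the submodules, an optional sanity check is to confirm that Python resolves them to the checked-out forks rather than any previously installed releases:

```python
# Optional sanity check: make sure the editable installs from the submodules
# shadow any transformers/peft versions installed elsewhere.
import transformers
import peft

print(transformers.__version__, transformers.__file__)  # should point into ./transformers
print(peft.__version__, peft.__file__)                   # should point into ./peft
```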

### Generate

```bash
python generate.py --config configs/generate/generate.yaml --prompt "Write a script to reverse a string in Python"
```
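
In essence this wraps a standard `transformers` generation call. A rough sketch, reusing the model and tokenizer from the loading snippet above; the sampling parameters are illustrative, not the values from `configs/generate/generate.yaml`:

```python
# Rough sketch of the generation step; assumes `model` and `tokenizer` from
# the loading sketch above. Sampling parameters are illustrative only.
prompt = "Write a script to reverse a string in Python"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,  # illustrative cap on response length
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```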

### Train

This launches a single-machine, 8-process DeepSpeed run with bf16 mixed precision:

```bash
accelerate launch --dynamo_backend=inductor --num_processes=8 --num_machines=1 --machine_rank=0 --deepspeed_multinode_launcher standard --mixed_precision=bf16 --use_deepspeed --deepspeed_config_file=configs/deepspeed/ds_config.json train.py --config configs/train/finetune-7b.yaml
```
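
For reference, the core of a LoRA fine-tune with `peft` looks roughly like the following; the hyperparameters here are illustrative placeholders, not the values in `configs/train/finetune-7b.yaml`:

```python
# Illustrative sketch of wiring up LoRA for training with peft; the
# hyperparameters are placeholders, not the repo's actual config values.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("path/to/llama-7b")  # placeholder path
lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling applied to the LoRA update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter parameters are trainable
```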

If you use this repository, its models, or its data in a downstream project, please consider citing it with:

```bibtex
@misc{gpt4all,
  author = {Yuvanesh Anand and Zach Nussbaum and Brandon Duderstadt and Benjamin Schmidt and Andriy Mulyar},
  title = {GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/nomic-ai/gpt4all}},
}
```