GPT4All

Demo, data, and code to train an assistant-style large language model on ~440k GPT-3.5-Turbo generations.

📗 Technical Report

(demo GIF: gpt4all-lora-demo.gif)

Try it yourself

You can download pre-compiled LLaMA C++ interactive chat binaries here:

  • OSX
  • Intel/Windows

and the quantized model here.

Place the binary and quantized model in the same directory and start chatting!

To compile for custom hardware, see our fork of the Alpaca C++ repo.

Reproducibility

Trained LoRA Weights:

Raw Data:

We are not distributing a LLaMA 7B checkpoint.
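
If you already have a LLaMA 7B checkpoint converted to Hugging Face format, a minimal sketch of applying the trained LoRA weights with the peft submodule might look like the following. All paths are placeholders, and the exact class names can vary across transformers versions:

import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

# Placeholder paths: point these at your converted LLaMA 7B checkpoint
# and the downloaded LoRA adapter directory.
tokenizer = LlamaTokenizer.from_pretrained("path/to/llama-7b-hf")
base = LlamaForCausalLM.from_pretrained("path/to/llama-7b-hf", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "path/to/gpt4all-lora-adapter")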

You can reproduce our trained model by doing the following:

Setup

Clone the repo

git clone --recurse-submodules git@github.com:nomic-ai/gpt4all.git

git submodule init && git submodule update

Setup the environment

python -m pip install -r requirements.txt

cd transformers
pip install -e . 

cd ../peft
pip install -e .
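
As an optional sanity check that Python picks up the editable submodule installs rather than PyPI releases, you can inspect the import locations:

# The printed paths should point into the cloned transformers/ and peft/
# submodules, not into site-packages copies installed from PyPI.
import transformers, peft
print(transformers.__version__, transformers.__file__)
print(peft.__version__, peft.__file__)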

Training

accelerate launch --dynamo_backend=inductor --num_processes=8 --num_machines=1 --machine_rank=0 --deepspeed_multinode_launcher standard --mixed_precision=bf16  --use_deepspeed --deepspeed_config_file=configs/deepspeed/ds_config.json train.py --config configs/train/finetune-7b.yaml
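
For orientation, here is a rough sketch of the kind of LoRA setup a fine-tuning script performs with peft. The repo's actual hyperparameters live in configs/train/finetune-7b.yaml, so every value below is illustrative rather than the real configuration:

from transformers import LlamaForCausalLM
from peft import LoraConfig, get_peft_model

model = LlamaForCausalLM.from_pretrained("path/to/llama-7b-hf")  # placeholder path
lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling factor for the LoRA updates
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # only the adapter weights train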

Generate

python generate.py --config configs/generate/generate.yaml --prompt "Write a script to reverse a string in Python."
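
If you prefer to generate interactively instead of through the config-driven script, a self-contained sketch (reusing the placeholder paths from the reproducibility example above; sampling settings here are illustrative) might look like:

import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = LlamaTokenizer.from_pretrained("path/to/llama-7b-hf")
base = LlamaForCausalLM.from_pretrained(
    "path/to/llama-7b-hf",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)
model = PeftModel.from_pretrained(base, "path/to/gpt4all-lora-adapter")

prompt = "Write a script to reverse a string in Python."
inputs = tokenizer(prompt, return_tensors="pt").to(device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))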

If you use this repository, models, or data in a downstream project, please consider citing it with:

@misc{gpt4all,
  author = {Yuvanesh Anand and Zach Nussbaum and Brandon Duderstadt and Benjamin Schmidt and Andriy Mulyar},
  title = {GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/nomic-ai/gpt4all}},
}