Training GPT4All-J

Technical Reports

📗 Technical Report 3: GPT4All Snoozy and Groovy

📗 Technical Report 2: GPT4All-J

📗 Technical Report 1: GPT4All

GPT4All-J Training Data

We have released updated versions of our GPT4All-J model and training data.

  • v1.0: The original model, trained on the v1.0 dataset
  • v1.1-breezy: Trained on a filtered dataset from which we removed all responses containing the phrase "AI language model"
  • v1.2-jazzy: Trained on a further filtered dataset from which we also removed responses such as "I'm sorry, I can't answer..." and "AI language model" (a filtering sketch follows this list)
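
The exact filters used to produce the breezy and jazzy datasets are defined by the data-cleaning code in this repository; purely as a hypothetical illustration of the idea, a filter of this kind could be written with the datasets library as follows (the phrase list and the "response" column name are assumptions, not the actual pipeline):

from datasets import load_dataset

# Phrases treated as refusal/boilerplate markers.
# Illustrative only -- not the exact list used for the released datasets.
BANNED_PHRASES = [
    "as an ai language model",
    "i'm sorry, i can't answer",
]

def keep_example(example):
    # Assumes the generated text lives in a "response" column.
    text = example["response"].lower()
    return not any(phrase in text for phrase in BANNED_PHRASES)

dataset = load_dataset("nomic-ai/gpt4all-j-prompt-generations", revision="v1.0")
filtered = dataset["train"].filter(keep_example)
print(f"Kept {len(filtered)} of {len(dataset['train'])} examples")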

The models and data versions can be specified by passing a revision argument.

For example, to load the v1.2-jazzy model and dataset, run:

from datasets import load_dataset
from transformers import AutoModelForCausalLM

# Pin the dataset and the model to the same revision tag.
dataset = load_dataset("nomic-ai/gpt4all-j-prompt-generations", revision="v1.2-jazzy")
model = AutoModelForCausalLM.from_pretrained("nomic-ai/gpt4all-j", revision="v1.2-jazzy")
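
To generate text with the loaded model, the matching tokenizer should be pulled from the same repository and revision. A minimal sketch (the prompt wording and generation settings are arbitrary examples, not a prescribed format):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nomic-ai/gpt4all-j", revision="v1.2-jazzy")

prompt = "Explain instruction tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
# Keep the completion short; decoding parameters are left at their defaults.
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))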

GPT4All-J Training Instructions

accelerate launch \
  --dynamo_backend=inductor \
  --num_processes=8 \
  --num_machines=1 \
  --machine_rank=0 \
  --deepspeed_multinode_launcher standard \
  --mixed_precision=bf16 \
  --use_deepspeed \
  --deepspeed_config_file=configs/deepspeed/ds_config_gptj.json \
  train.py --config configs/train/finetune_gptj.yaml
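
This launch line assumes a single node with 8 GPUs, bf16 mixed precision, and the DeepSpeed config shipped under configs/deepspeed/; the actual training logic lives in train.py and its YAML config under configs/train/. Purely for orientation, a stripped-down, hypothetical equivalent using the transformers Trainer might look like the sketch below; the base checkpoint, column names, tokenization scheme, and hyperparameters are all assumptions rather than what train.py actually does:

from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Assumed base checkpoint for the fine-tune (GPT4All-J starts from GPT-J).
BASE_MODEL = "EleutherAI/gpt-j-6B"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # GPT-J defines no pad token by default
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

dataset = load_dataset("nomic-ai/gpt4all-j-prompt-generations", revision="v1.2-jazzy")

def tokenize(example):
    # Naive prompt/response concatenation; the real pipeline may format differently.
    text = example["prompt"] + "\n" + example["response"]
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = dataset["train"].map(tokenize, remove_columns=dataset["train"].column_names)

args = TrainingArguments(
    output_dir="gptj-finetune",      # hypothetical output directory
    per_device_train_batch_size=4,   # illustrative hyperparameters only
    num_train_epochs=1,
    bf16=True,
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()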