From fb2d36da6c6395693b4e26f48116a1b0b16f9110 Mon Sep 17 00:00:00 2001
From: Zach Nussbaum
Date: Thu, 13 Apr 2023 20:55:49 +0000
Subject: [PATCH] fix: readme

---
 README.md | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/README.md b/README.md
index 9af511b9..73525c12 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@

GPT4All

Demo, data, and code to train an open-source assistant-style large language model based on GPT-J and LLaMa

-:green_book: Technical Report 2: GPT4All-J
+:green_book: Technical Report 2: GPT4All-J

@@ -17,7 +17,7 @@

-:speech_balloon: Official Chat Interface
+:speech_balloon: Official Web Chat Interface

@@ -204,6 +204,8 @@ Feel free to convert this to a more structured table.
 Trained LoRa Weights:
 - gpt4all-lora (four full epochs of training): https://huggingface.co/nomic-ai/gpt4all-lora
 - gpt4all-lora-epoch-2 (three full epochs of training) https://huggingface.co/nomic-ai/gpt4all-lora-epoch-2
+- gpt4all-j (one full epoch of training) (https://huggingface.co/nomic-ai/gpt4all-j)
+- gpt4all-j-lora (one full epoch of training) (https://huggingface.co/nomic-ai/gpt4all-j-lora)
 
 Raw Data:
 - [Training Data Without P3](https://huggingface.co/datasets/nomic-ai/gpt4all_prompt_generations)
@@ -229,9 +231,6 @@ Setup the environment
 ```
 python -m pip install -r requirements.txt
 
-cd transformers
-pip install -e .
-
 cd ../peft
 pip install -e .
 ```
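
As a usage note for the checkpoints this patch adds to the README, the sketch below shows one plausible way to load them once the environment above is installed. It is a minimal, hedged example rather than the project's documented loading path: only the Hugging Face model IDs come from the README, while the `transformers`/`peft` calls, the prompt, and the use of `EleutherAI/gpt-j-6b` as the base model for the `gpt4all-j-lora` adapter are assumptions.

```python
# Minimal sketch (assumptions noted below), not part of the patch itself.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Full GPT4All-J checkpoint listed in the README (one full epoch of training).
tokenizer = AutoTokenizer.from_pretrained("nomic-ai/gpt4all-j")
model = AutoModelForCausalLM.from_pretrained("nomic-ai/gpt4all-j")

# Alternatively, apply the gpt4all-j-lora adapter on top of a GPT-J base model.
# Using EleutherAI/gpt-j-6b as the base is an assumption, not stated in the README.
base = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6b")
lora_model = PeftModel.from_pretrained(base, "nomic-ai/gpt4all-j-lora")

# Quick generation check with the full checkpoint.
prompt = "Explain what GPT4All is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```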