diff --git a/README.md b/README.md
index 9af511b9..73525c12 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@
 Demo, data, and code to train open-source assistant-style large language model based on GPT-J and LLaMa
-:green_book: Technical Report 2: GPT4All-J
+:green_book: Technical Report 2: GPT4All-J
@@ -17,7 +17,7 @@
-:speech_balloon: Official Chat Interface
+:speech_balloon: Official Web Chat Interface
@@ -204,6 +204,8 @@ Feel free to convert this to a more structured table.
 Trained LoRa Weights:
 - gpt4all-lora (four full epochs of training): https://huggingface.co/nomic-ai/gpt4all-lora
 - gpt4all-lora-epoch-2 (three full epochs of training) https://huggingface.co/nomic-ai/gpt4all-lora-epoch-2
+- gpt4all-j (one full epoch of training) (https://huggingface.co/nomic-ai/gpt4all-j)
+- gpt4all-j-lora (one full epoch of training) (https://huggingface.co/nomic-ai/gpt4all-j-lora)
 
 Raw Data:
 - [Training Data Without P3](https://huggingface.co/datasets/nomic-ai/gpt4all_prompt_generations)
@@ -229,9 +231,6 @@
 Setup the environment
 ```
 python -m pip install -r requirements.txt
-cd transformers
-pip install -e .
-
 cd ../peft
 pip install -e .
 ```
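For reference, the environment setup that remains after this change might look like the following. This is a minimal sketch assembled from the unremoved context lines of the diff, assuming `requirements.txt` is in the current working directory and a `peft` checkout sits one level above it (the relative paths are taken from the diff, not verified against the repository layout):

```shell
# Install the project's Python dependencies
python -m pip install -r requirements.txt

# Install peft in editable mode from a sibling directory,
# as the remaining lines of the setup block indicate
cd ../peft
pip install -e .
```

Note that with the `cd transformers` step removed, the surviving `cd ../peft` now resolves relative to wherever the previous command ran, so the setup assumes a specific directory layout.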