Merge pull request #16 from mazzzystar/main

Clarify the model bin file download instructions in the README.
Andriy Mulyar 2023-03-28 22:43:03 -04:00 committed by GitHub
commit 4120210d9d

README.md

@@ -16,10 +16,9 @@ Run on M1 Mac (not sped up!)
 # Try it yourself
-Clone this repository down and download the CPU quantized gpt4all model.
-- [gpt4all-quantized](https://s3.amazonaws.com/static.nomic.ai/gpt4all/models/gpt4all-lora-quantized.bin)
+Download the CPU quantized gpt4all model checkpoint: [gpt4all-lora-quantized.bin](https://s3.amazonaws.com/static.nomic.ai/gpt4all/models/gpt4all-lora-quantized.bin)
-Place the quantized model in the `chat` directory and start chatting by running:
+Clone this repository down and place the quantized model in the `chat` directory and start chatting by running:
 - `cd chat;./gpt4all-lora-quantized-OSX-m1` on M1 Mac/OSX
 - `cd chat;./gpt4all-lora-quantized-linux-x86` on Windows/Linux
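
Read end to end, the updated README amounts to the following shell steps on an M1 Mac. This is a minimal sketch, not part of the diff: the repository URL and the use of curl for the download are assumptions, while the checkpoint URL, the chat directory, and the binary name are taken from the README text above.

# Sketch only: repository URL and curl usage are assumptions; the model URL,
# chat directory, and binary name come from the README diff above.
git clone https://github.com/nomic-ai/gpt4all.git
cd gpt4all
# download the quantized checkpoint straight into the chat directory
curl -L -o chat/gpt4all-lora-quantized.bin \
  https://s3.amazonaws.com/static.nomic.ai/gpt4all/models/gpt4all-lora-quantized.bin
# start chatting with the M1 Mac binary
cd chat; ./gpt4all-lora-quantized-OSX-m1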