diff --git a/README.md b/README.md
index f2bb72ed..d933c2ec 100644
--- a/README.md
+++ b/README.md
@@ -16,7 +16,7 @@ Run on M1 Mac (not sped up!)
 
 # Try it yourself
 
-Download the CPU quantized gpt4all model checkpoint: [gpt4all-lora-quantized.bin](https://the-eye.eu/public/AI/models/nomic-ai/gpt4all/gpt4all-lora-quantized.bin) ( [torrent](magnet:?xt=urn:btih:b8289dbfa08ddd0951241bb7011041d929802db5&xt=urn:btmh:12206e4b6214d331ae37338c4584f6f4d99a892d8af57f288be068a80a60e89cd4e0&dn=gpt4all-lora-quantized.bin))
+Download the CPU quantized gpt4all model checkpoint: [gpt4all-lora-quantized.bin](https://the-eye.eu/public/AI/models/nomic-ai/gpt4all/gpt4all-lora-quantized.bin).
 
 Clone this repository down and place the quantized model in the `chat` directory and start chatting by running:
 
@@ -161,3 +161,6 @@ If you utilize this reposistory, models or data in a downstream project, please
 }
 ```
 
+### Alternative Download Locations
+#### gpt4all-lora-quantized.bin Backup Torrent Link
+magnet:?xt=urn:btih:1F11A9691EE06C18F0040E359361DCA0479BCB5A&dn=gpt4all-lora-quantized.bin&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce