diff --git a/README.md b/README.md
index 097a8e65..3a0c201e 100644
--- a/README.md
+++ b/README.md
@@ -17,8 +17,8 @@
 
 Clone this repository down and download the CPU quantized gpt4all model.
 Place the quantized model in the `chat` directory and start chatting by running:
 
-- `./gpt4all-lora-quantized-OSX-m1` on Mac/OSX
-- `./gpt4all-lora-quantized-linux-x86` on Windows/Linux
+- `./chat/gpt4all-lora-quantized-OSX-m1` on Mac/OSX
+- `./chat/gpt4all-lora-quantized-linux-x86` on Windows/Linux
 
 To compile for custom hardware, see our fork of the [Alpaca C++](https://github.com/zanussbaum/gpt4all.cpp) repo.
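
For context, here is a minimal sketch of the end-to-end workflow these corrected paths assume, run from the repository root. The clone URL and the model filename `gpt4all-lora-quantized.bin` are assumptions not stated in this patch; adjust them to your setup:

```sh
# Clone the repository (URL assumed; the patch does not name the remote)
git clone https://github.com/nomic-ai/gpt4all.git
cd gpt4all

# Place the downloaded CPU-quantized model in the `chat` directory
# (filename is an assumption about the binaries' expected default)
mv ~/Downloads/gpt4all-lora-quantized.bin chat/

# Start chatting using the corrected relative paths, from the repo root
./chat/gpt4all-lora-quantized-linux-x86   # Linux x86
# ./chat/gpt4all-lora-quantized-OSX-m1    # Mac/OSX (M1)
```

This is the point of the patch: the chat binaries live under `chat/`, so the run commands should be invoked with the `./chat/` prefix from the repository root rather than assuming the user has changed into the `chat` directory.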