diff --git a/README.md b/README.md
index f91de4a0..b2285a60 100644
--- a/README.md
+++ b/README.md
@@ -4,9 +4,6 @@

 :green_book: Technical Report
 
-
-Discord
-
 
@@ -21,8 +18,8 @@
 Clone this repository down and download the CPU quantized gpt4all model.
 
 Place the quantized model in the `chat` directory and start chatting by running:
 
-- `./chat/gpt4all-lora-quantized-OSX-m1` on M1 Mac/OSX
-- `./chat/gpt4all-lora-quantized-linux-x86` on Windows/Linux
+- `cd chat;./gpt4all-lora-quantized-OSX-m1` on M1 Mac/OSX
+- `cd chat;./gpt4all-lora-quantized-linux-x86` on Windows/Linux
 
 To compile for custom hardware, see our fork of the [Alpaca C++](https://github.com/zanussbaum/gpt4all.cpp) repo.
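
For reference, a minimal sketch of the updated invocation on an M1 Mac, assuming the quantized model has already been downloaded into the `chat` directory as the README instructs (changing into `chat` first presumably lets the binary resolve the model file from its working directory):

```sh
# enter the directory that holds both the binary and the quantized model
cd chat
# launch the interactive chat session
./gpt4all-lora-quantized-OSX-m1
```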