Update README.md

pull/913/head
Andriy Mulyar 1 year ago committed by GitHub
parent 2d41cfc5ca
commit 88a09e4fee

@@ -22,7 +22,7 @@ Place the quantized model in the `chat` directory and start chatting by running:
To compile for custom hardware, see our fork of the [Alpaca C++](https://github.com/zanussbaum/gpt4all.cpp) repo.
-Note: the full model on GPU (16GB of RAM required) perform much better in our qualitative evaluations.
+Note: the full model on GPU (16GB of RAM required) performs much better in our qualitative evaluations.
# Reproducibility
