diff --git a/gpt4all-backend/README.md b/gpt4all-backend/README.md
index 37e917b6..bb8a6727 100644
--- a/gpt4all-backend/README.md
+++ b/gpt4all-backend/README.md
@@ -33,9 +33,9 @@ A few things. Number one, we are maintaining compatibility with our current mode
 In newer versions of llama.cpp, there has been some added support for NVIDIA GPU's for inference. We're investigating how to incorporate this into our downloadable installers.
 
-# Ok, so bottom line... how do I make my model on huggingface compatible with GPT4All ecosystem right now?
+# Ok, so bottom line... how do I make my model on Hugging Face compatible with GPT4All ecosystem right now?
 
-1. Check to make sure the huggingface model is available in one of our three supported architectures
+1. Check to make sure the Hugging Face model is available in one of our three supported architectures
 2. If it is, then you can use the conversion script inside of our pinned llama.cpp submodule for GPTJ and LLAMA based models
 3. Or if your model is an MPT model you can use the conversion script located directly in this backend directory under the scripts subdirectory