<p align="center">GPT4All runs large language models (LLMs) privately on everyday desktops & laptops. <br><br> No API calls or GPUs required - you can just download the application and <a href="https://docs.gpt4all.io/gpt4all_desktop/quickstart.html#quickstart">get started</a></p>
Our backend supports models with a `llama.cpp` implementation that have been uploaded to [HuggingFace](https://huggingface.co/).
### Which embedding models are supported?
We support SBert and Nomic Embed Text v1 & v1.5. These embedding models can be used within the application and with the `Embed4All` class from the `gpt4all` Python library. The default context length of the GGUF files is 2048 tokens but can be [extended](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5-GGUF#description).
| Name | Using with `nomic` | `Embed4All` model name | Context Length | # Embedding Dimensions | File Size |
|------|--------------------|------------------------|----------------|------------------------|-----------|
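Embedding vectors produced by these models are most often compared with cosine similarity, for example to rank documents against a query. Below is a minimal pure-Python sketch of that comparison; the commented-out `Embed4All` call is an assumption about your setup and requires a one-time model download.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# With a real model (requires downloading an embedding model first):
#   from gpt4all import Embed4All
#   vecs = [Embed4All().embed(t) for t in ["hello", "hi there"]]
# Toy vectors stand in for real embeddings here:
v1 = [0.1, 0.3, 0.5]
v2 = [0.2, 0.6, 1.0]  # parallel to v1, so similarity is 1.0
print(round(cosine_similarity(v1, v2), 6))
```

Scores range from -1 to 1, with higher values indicating more semantically similar texts.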
Most of the language models you will be able to access from HuggingFace have been trained as assistants. This training guides language models to answer not just with relevant text, but with *helpful* text.
If you want your LLM's responses to be helpful in the typical sense, we recommend applying the chat template the model was trained with.
Directly calling `model.generate()` prompts the model without applying any templates.
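To make the difference concrete, here is a hedged sketch of what "applying a template" means: the raw prompt is wrapped in the control tokens the model was finetuned on before being sent for generation. The ChatML-style tokens below are an illustrative assumption; the actual template varies by model.

```python
def apply_chat_template(user_message: str, system_prompt: str = "") -> str:
    """Wrap a raw prompt in ChatML-style control tokens.

    Illustrative only -- real templates differ per model, and in practice the
    library applies the correct one for you inside a chat session.
    """
    parts = []
    if system_prompt:
        parts.append(f"<|im_start|>system\n{system_prompt}<|im_end|>\n")
    parts.append(f"<|im_start|>user\n{user_message}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # cue the model to respond
    return "".join(parts)

# Untemplated vs. templated prompting (model calls shown as comments only):
#   model.generate("Why is the sky blue?")                       # raw text, no template
#   model.generate(apply_chat_template("Why is the sky blue?"))  # assistant-style prompt
print(apply_chat_template("Why is the sky blue?"))
```

Without the template, the model sees the prompt as plain text to continue; with it, the model recognizes the assistant format it was finetuned on and responds accordingly.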
The easiest way to run the text embedding model locally uses the `nomic` Python library.
![Nomic embed text local inference](../assets/local_embed.gif)
To learn more about making embeddings locally with `nomic`, visit our [embeddings guide](https://docs.nomic.ai/atlas/guides/embeddings#local-inference).