From 69102a2859d2de3cffc74d5e1dde16c603383be2 Mon Sep 17 00:00:00 2001
From: mcembalest <70534565+mcembalest@users.noreply.github.com>
Date: Tue, 2 Jul 2024 11:41:39 -0400
Subject: [PATCH] small edits and placeholder gif (#2513)

* small edits and placeholder gif

Signed-off-by: Max Cembalest

* jul2 docs updates

Signed-off-by: Max Cembalest

* added video

Signed-off-by: mcembalest <70534565+mcembalest@users.noreply.github.com>
Signed-off-by: Max Cembalest

* quantization nits

Signed-off-by: Max Cembalest

---------

Signed-off-by: Max Cembalest
Signed-off-by: mcembalest <70534565+mcembalest@users.noreply.github.com>
---
 README.md                                 | 45 ++++++-------------
 .../python/docs/assets/ubuntu.svg         |  5 +++
 .../python/docs/gpt4all_desktop/models.md | 14 +++---
 .../python/docs/gpt4all_help/faq.md       | 10 +----
 .../python/docs/gpt4all_python/home.md    | 27 ++++++-----
 gpt4all-training/README.md                | 14 +++++-
 roadmap.md                                |  6 +--
 7 files changed, 61 insertions(+), 60 deletions(-)
 create mode 100644 gpt4all-bindings/python/docs/assets/ubuntu.svg

diff --git a/README.md b/README.md
index 833f85f3..9c668e9c 100644
--- a/README.md
+++ b/README.md
@@ -2,6 +2,7 @@

GPT4All runs large language models (LLMs) privately on everyday desktops & laptops.

 No API calls or GPUs required - you can just download the application and get started
+https://github.com/nomic-ai/gpt4all/assets/70534565/513a0f15-4964-4109-89e4-4f9a9011f311

@@ -12,15 +13,15 @@

-
+
Download for MacOS

-
- Download for Linux
+
+ Download for Ubuntu

@@ -37,8 +38,6 @@ GPT4All is made possible by our compute partner phorm.ai

-
-
 ## Install GPT4All Python
 
 `gpt4all` gives you access to LLMs with our Python client around [`llama.cpp`](https://github.com/ggerganov/llama.cpp) implementations.
@@ -57,10 +56,17 @@ with model.chat_session():
 ```
 
-### Release History
+## Integrations
+
+:parrot::link: [Langchain](https://python.langchain.com/v0.2/docs/integrations/providers/gpt4all/)
+:card_file_box: [Weaviate Vector Database](https://github.com/weaviate/weaviate) - [module docs](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules/text2vec-gpt4all)
+:telescope: [OpenLIT (OTel-native Monitoring)](https://github.com/openlit/openlit) - [Docs](https://docs.openlit.io/latest/integrations/gpt4all)
+
+## Release History
 - **July 2nd, 2024**: V3.0.0 Release
-  - New UI/UX: fresh redesign of the chat application GUI and user experience
-  - LocalDocs: bring information from files on-device into chats
+  - Fresh redesign of the chat application UI
+  - Improved user workflow for LocalDocs
+  - Expanded access to more model architectures
 - **October 19th, 2023**: GGUF Support Launches with Support for:
   - Mistral 7b base model, an updated model gallery on [gpt4all.io](https://gpt4all.io), several new local code models including Rift Coder v1.5
   - [Nomic Vulkan](https://blog.nomic.ai/posts/gpt4all-gpu-inference-with-vulkan) support for Q4\_0 and Q4\_1 quantizations in GGUF.
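The release notes above reference Q4\_0 and Q4\_1 quantizations in GGUF. For intuition only, here is a simplified pure-Python sketch of 4-bit block quantization in the spirit of Q4\_0: one float scale per 32-weight block plus a 4-bit integer per weight. The 32-weight block size matches Q4\_0, but the rounding and (absent) bit packing here are illustrative assumptions, not ggml's exact layout.

```python
# Illustrative sketch of 4-bit block quantization, Q4_0-style: weights are
# split into fixed-size blocks, and each block stores one float scale plus
# a 4-bit integer (0..15, offset by 8) per weight. Simplified for clarity;
# not ggml's actual rounding or bit packing.

BLOCK_SIZE = 32  # Q4_0 quantizes 32 weights per block

def quantize_block(block):
    """Map a block of floats to (scale, list of 4-bit ints in [0, 15])."""
    max_abs = max(abs(x) for x in block) or 1.0  # avoid divide-by-zero
    scale = max_abs / 7.5
    qs = [min(15, max(0, round(x / scale) + 8)) for x in block]
    return scale, qs

def dequantize_block(scale, qs):
    """Recover approximate floats from the packed representation."""
    return [(q - 8) * scale for q in qs]

weights = [0.05 * (i - 16) for i in range(BLOCK_SIZE)]
scale, qs = quantize_block(weights)
restored = dequantize_block(scale, qs)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
assert max_err < scale  # reconstruction error stays within one step
```

The storage win is the point: 32 floats (128 bytes) shrink to one scale plus 32 half-byte codes, which is why a 7B-parameter model fits in a ~4 GB file.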
@@ -71,13 +77,6 @@ with model.chat_session():
 
 [Docker-based API server]: https://github.com/nomic-ai/gpt4all/tree/cef74c2be20f5b697055d5b8b506861c7b997fab/gpt4all-api
 
-### Integrations
-
-* :parrot::link: [Langchain](https://python.langchain.com/v0.2/docs/integrations/providers/gpt4all/)
-* :card_file_box: [Weaviate Vector Database](https://github.com/weaviate/weaviate) - [module docs](https://weaviate.io/developers/weaviate/modules/retriever-vectorizer-modules/text2vec-gpt4all)
-* :telescope: [OpenLIT (OTel-native Monitoring)](https://github.com/openlit/openlit) - [Docs](https://docs.openlit.io/latest/integrations/gpt4all)
-
-
 ## Contributing
 
 GPT4All welcomes contributions, involvement, and discussion from the open source community!
 Please see CONTRIBUTING.md and follow the issues, bug reports, and PR markdown templates.
 
@@ -86,22 +85,6 @@ Check project discord, with project owners, or through existing issues/PRs to av
 Please make sure to tag all of the above with relevant project identifiers or your contribution could potentially get lost.
 Example tags: `backend`, `bindings`, `python-bindings`, `documentation`, etc.
 
-
-## Technical Reports
-
-

-:green_book: Technical Report 3: GPT4All Snoozy and Groovy
-
-
-:green_book: Technical Report 2: GPT4All-J
-
-
-:green_book: Technical Report 1: GPT4All
-

-
-
 ## Citation
 
 If you utilize this repository, models or data in a downstream project, please consider citing it with:

diff --git a/gpt4all-bindings/python/docs/assets/ubuntu.svg b/gpt4all-bindings/python/docs/assets/ubuntu.svg
new file mode 100644
index 00000000..60e90e08
--- /dev/null
+++ b/gpt4all-bindings/python/docs/assets/ubuntu.svg
@@ -0,0 +1,5 @@
+
+
+
+
+
diff --git a/gpt4all-bindings/python/docs/gpt4all_desktop/models.md b/gpt4all-bindings/python/docs/gpt4all_desktop/models.md
index f936bb22..c94c6dc4 100644
--- a/gpt4all-bindings/python/docs/gpt4all_desktop/models.md
+++ b/gpt4all-bindings/python/docs/gpt4all_desktop/models.md
@@ -56,13 +56,13 @@
 Many LLMs are available at various sizes, quantizations, and licenses.
 
 Here are a few examples:
 
-| Model| Filesize| RAM Required| Parameters| Developer| License| MD5 Sum (Unique Hash)|
-|------|---------|-------------|-----------|----------|--------|----------------------|
-| Llama 3 Instruct | 4.66 GB| 8 GB| 8 Billion| Meta| [Llama 3 License](https://llama.meta.com/llama3/license/)| c87ad09e1e4c8f9c35a5fcef52b6f1c9|
-| Nous Hermes 2 Mistral DPO| 4.21 GB| 8 GB| 7 Billion| Mistral & Nous Research | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)| Coa5f6b4eabd3992da4d7fb7f020f921eb|
-| Phi-3 Mini Instruct | 2.03 GB| 4 GB| 4 billion| Microsoft| [MIT](https://opensource.org/license/mit)| f8347badde9bfc2efbe89124d78ddaf5|
-| Mini Orca (Small)| 1.84 GB| 4 GB| 3 billion| Microsoft | [CC-BY-NC-SA-4.0](https://spdx.org/licenses/CC-BY-NC-SA-4.0)| 0e769317b90ac30d6e09486d61fefa26|
-| GPT4All Snoozy| 7.36 GB| 16 GB| 13 billion| Nomic AI| [GPL](https://www.gnu.org/licenses/gpl-3.0.en.html)| 40388eb2f8d16bb5d08c96fdfaac6b2c|
+| Model| Filesize| RAM Required| Parameters| Quantization| Developer| License| MD5 Sum (Unique Hash)|
+|------|---------|-------------|-----------|-------------|----------|--------|----------------------|
+| Llama 3 Instruct | 4.66 GB| 8 GB| 8 Billion| q4_0| Meta| [Llama 3 License](https://llama.meta.com/llama3/license/)| c87ad09e1e4c8f9c35a5fcef52b6f1c9|
+| Nous Hermes 2 Mistral DPO| 4.11 GB| 8 GB| 7 Billion| q4_0| Mistral & Nous Research | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)| Coa5f6b4eabd3992da4d7fb7f020f921eb|
+| Phi-3 Mini Instruct | 2.18 GB| 4 GB| 4 billion| q4_0| Microsoft| [MIT](https://opensource.org/license/mit)| f8347badde9bfc2efbe89124d78ddaf5|
+| Mini Orca (Small)| 1.98 GB| 4 GB| 3 billion| q4_0| Microsoft | [CC-BY-NC-SA-4.0](https://spdx.org/licenses/CC-BY-NC-SA-4.0)| 0e769317b90ac30d6e09486d61fefa26|
+| GPT4All Snoozy| 7.37 GB| 16 GB| 13 billion| q4_0| Nomic AI| [GPL](https://www.gnu.org/licenses/gpl-3.0.en.html)| 40388eb2f8d16bb5d08c96fdfaac6b2c|
 
 ### Search Results

diff --git a/gpt4all-bindings/python/docs/gpt4all_help/faq.md b/gpt4all-bindings/python/docs/gpt4all_help/faq.md
index 57e5eb89..943710f2 100644
--- a/gpt4all-bindings/python/docs/gpt4all_help/faq.md
+++ b/gpt4all-bindings/python/docs/gpt4all_help/faq.md
@@ -4,17 +4,11 @@
 
 ### Which language models are supported?
 
-Our backend supports models with a `llama.cpp` implementation which have been uploaded to [HuggingFace](https://huggingface.co/).
+We support models with a `llama.cpp` implementation which have been uploaded to [HuggingFace](https://huggingface.co/).
 
 ### Which embedding models are supported?
 
-The following embedding models can be used within the application and with the `Embed4All` class from the `gpt4all` Python library. The default context length as GGUF files is 2048 but can be [extended](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5-GGUF#description).
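The model tables in these docs list an MD5 sum for each model file. A short sketch of verifying a download against its listed hash, streaming the file so multi-GB models never load into memory (the commented-out path and comparison at the end are a hypothetical example, not a required step):

```python
# Sketch: check a downloaded model file against the MD5 sum listed in the
# model table. Reads in chunks so multi-gigabyte .gguf files are hashed
# without being loaded into RAM.
import hashlib

def md5sum(path, chunk_size=1 << 20):
    """Return the hex MD5 digest of the file at `path`, read in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage with a model from the table above:
# assert md5sum("Meta-Llama-3-8B-Instruct.Q4_0.gguf") == "c87ad09e1e4c8f9c35a5fcef52b6f1c9"
```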
-
-| Name | Initializing with `Embed4All` | Context Length | Embedding Length | File Size |
-|--------------------|------------------------------------------------------|---------------:|-----------------:|----------:|
-| [SBert](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)| ```python emb = Embed4All("all-MiniLM-L6-v2.gguf2.f16.gguf")```| 512 | 384 | 44 MiB |
-| [Nomic Embed v1](https://huggingface.co/nomic-ai/nomic-embed-text-v1-GGUF) | nomic‑embed‑text‑v1.f16.gguf| 2048 | 768 | 262 MiB |
-| [Nomic Embed v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5-GGUF) | nomic‑embed‑text‑v1.5.f16.gguf| 2048 | 64-768 | 262 MiB |
+We support SBert and Nomic Embed Text v1 & v1.5.
 
 ## Software

diff --git a/gpt4all-bindings/python/docs/gpt4all_python/home.md b/gpt4all-bindings/python/docs/gpt4all_python/home.md
index b25d7819..f77f7a00 100644
--- a/gpt4all-bindings/python/docs/gpt4all_python/home.md
+++ b/gpt4all-bindings/python/docs/gpt4all_python/home.md
@@ -23,6 +23,15 @@ Models are loaded by name via the `GPT4All` class. If it's your first time loadi
     print(model.generate("How can I run LLMs efficiently on my laptop?", max_tokens=1024))
 ```
+
+| `GPT4All` model name| Filesize| RAM Required| Parameters| Quantization| Developer| License| MD5 Sum (Unique Hash)|
+|------|---------|-------|-------|-----------|----------|--------|----------------------|
+| `Meta-Llama-3-8B-Instruct.Q4_0.gguf`| 4.66 GB| 8 GB| 8 Billion| q4_0| Meta| [Llama 3 License](https://llama.meta.com/llama3/license/)| c87ad09e1e4c8f9c35a5fcef52b6f1c9|
+| `Nous-Hermes-2-Mistral-7B-DPO.Q4_0.gguf`| 4.11 GB| 8 GB| 7 Billion| q4_0| Mistral & Nous Research | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)| Coa5f6b4eabd3992da4d7fb7f020f921eb|
+| `Phi-3-mini-4k-instruct.Q4_0.gguf` | 2.18 GB| 4 GB| 3.8 billion| q4_0| Microsoft| [MIT](https://opensource.org/license/mit)| f8347badde9bfc2efbe89124d78ddaf5|
+| `orca-mini-3b-gguf2-q4_0.gguf`| 1.98 GB| 4 GB| 3 billion| q4_0| Microsoft | [CC-BY-NC-SA-4.0](https://spdx.org/licenses/CC-BY-NC-SA-4.0)| 0e769317b90ac30d6e09486d61fefa26|
+| `gpt4all-13b-snoozy-q4_0.gguf`| 7.37 GB| 16 GB| 13 billion| q4_0| Nomic AI| [GPL](https://www.gnu.org/licenses/gpl-3.0.en.html)| 40388eb2f8d16bb5d08c96fdfaac6b2c|
+
+
 ## Chat Session Generation
 
 Most of the language models you will be able to access from HuggingFace have been trained as assistants. This guides language models to not just answer with relevant text, but *helpful* text.
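The docs above distinguish chat-session generation (templated, history-aware) from direct `model.generate()` calls. A toy, offline-runnable sketch of what a chat session adds on top of direct generation: it accumulates turns and renders them through a prompt template before each call. The `<|role|>` markers, `ToyChatSession` class, and the stand-in generator are invented for illustration; they are not GPT4All's actual template or API.

```python
# Minimal sketch of the chat-session idea: keep a running history and wrap
# each turn in a prompt template before the model sees it. The template and
# the echo "model" below are illustrative stand-ins, not GPT4All internals.

class ToyChatSession:
    def __init__(self, generate_fn, system="You are a helpful assistant."):
        self.generate_fn = generate_fn  # e.g. a wrapped model.generate
        self.history = [("system", system)]

    def render_prompt(self):
        """Flatten the full history into one templated prompt string."""
        parts = [f"<|{role}|>\n{text}" for role, text in self.history]
        return "\n".join(parts) + "\n<|assistant|>\n"

    def ask(self, user_text):
        """Append the user turn, generate, and record the assistant turn."""
        self.history.append(("user", user_text))
        reply = self.generate_fn(self.render_prompt())
        self.history.append(("assistant", reply))
        return reply

# Stand-in for a real model so the sketch runs offline:
echo = lambda prompt: f"(reply to {prompt.count('<|user|>')} user turn(s))"
session = ToyChatSession(echo)
first = session.ask("How can I run LLMs efficiently on my laptop?")
assert session.history[-1] == ("assistant", first)
```

Direct generation skips all of this: the raw string goes straight to the model, with no system message, no history, and no role markers.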
@@ -75,16 +84,6 @@ If you want your LLM's responses to be helpful in the typical sense, we recommen
     b = 5
 ```
 
-## Example Models
-
-| Model| Filesize| RAM Required| Parameters| Developer| License| MD5 Sum (Unique Hash)|
-|------|---------|-------------|-----------|----------|--------|----------------------|
-| `Meta-Llama-3-8B-Instruct.Q4_0.gguf` | 4.66 GB| 8 GB| 8 Billion| Meta| [Llama 3 License](https://llama.meta.com/llama3/license/)| c87ad09e1e4c8f9c35a5fcef52b6f1c9|
-| Nous Hermes 2 Mistral DPO| 4.21 GB| 8 GB| 7 Billion| Mistral & Nous Research | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)| Coa5f6b4eabd3992da4d7fb7f020f921eb|
-| Phi-3 Mini Instruct | 2.03 GB| 4 GB| 4 billion| Microsoft| [MIT](https://opensource.org/license/mit)| f8347badde9bfc2efbe89124d78ddaf5|
-| Mini Orca (Small)| 1.84 GB| 4 GB| 3 billion| Microsoft | [CC-BY-NC-SA-4.0](https://spdx.org/licenses/CC-BY-NC-SA-4.0)| 0e769317b90ac30d6e09486d61fefa26|
-| GPT4All Snoozy| 7.36 GB| 16 GB| 13 billion| Nomic AI| [GPL](https://www.gnu.org/licenses/gpl-3.0.en.html)| 40388eb2f8d16bb5d08c96fdfaac6b2c|
-
 ## Direct Generation
 
 Directly calling `model.generate()` prompts the model without applying any templates.
 
@@ -150,3 +149,11 @@ The easiest way to run the text embedding model locally uses the [`nomic`](https
 
 ![Nomic embed text local inference](../assets/local_embed.gif)
 
 To learn more about making embeddings locally with `nomic`, visit our [embeddings guide](https://docs.nomic.ai/atlas/guides/embeddings#local-inference).
+
+The following embedding models can be used within the application and with the `Embed4All` class from the `gpt4all` Python library. The default context length as GGUF files is 2048 but can be [extended](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5-GGUF#description).
+
+| Name| Using with `nomic`| `Embed4All` model name| Context Length| # Embedding Dimensions| File Size|
+|--------------------|-|------------------------------------------------------|---------------:|-----------------:|----------:|
+| [Nomic Embed v1](https://huggingface.co/nomic-ai/nomic-embed-text-v1-GGUF) | ```embed.text(strings, model="nomic-embed-text-v1", inference_mode="local")```| ```Embed4All("nomic-embed-text-v1.f16.gguf")```| 2048 | 768 | 262 MiB |
+| [Nomic Embed v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5-GGUF) | ```embed.text(strings, model="nomic-embed-text-v1.5", inference_mode="local")```| ```Embed4All("nomic-embed-text-v1.5.f16.gguf")``` | 2048| 64-768 | 262 MiB |
+| [SBert](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)| n/a| ```Embed4All("all-MiniLM-L6-v2.gguf2.f16.gguf")```| 512 | 384 | 44 MiB |

diff --git a/gpt4all-training/README.md b/gpt4all-training/README.md
index d595478c..dd8bca23 100644
--- a/gpt4all-training/README.md
+++ b/gpt4all-training/README.md
@@ -1,6 +1,18 @@
 ## Training GPT4All-J
 
-Please see [GPT4All-J Technical Report](https://static.nomic.ai/gpt4all/2023_GPT4All-J_Technical_Report_2.pdf) for details.
+### Technical Reports
+

+:green_book: Technical Report 3: GPT4All Snoozy and Groovy
+
+
+:green_book: Technical Report 2: GPT4All-J
+
+
+:green_book: Technical Report 1: GPT4All
+
 
 ### GPT4All-J Training Data

diff --git a/roadmap.md b/roadmap.md
index 0920556c..1b9cacab 100644
--- a/roadmap.md
+++ b/roadmap.md
@@ -11,15 +11,15 @@ Each item should have an issue link below.
   - [ ] Portuguese
   - [ ] Your native language here.
 - UI Redesign: an internal effort at Nomic to improve the UI/UX of gpt4all for all users.
-  - [ ] Design new user interface and gather community feedback
-  - [ ] Implement the new user interface and experience.
+  - [x] Design new user interface and gather community feedback
+  - [x] Implement the new user interface and experience.
 - Installer and Update Improvements
   - [ ] Seamless native installation and update process on OSX
   - [ ] Seamless native installation and update process on Windows
   - [ ] Seamless native installation and update process on Linux
 - Model discoverability improvements:
   - [x] Support huggingface model discoverability
-  - [ ] Support Nomic hosted model discoverability
+  - [x] Support Nomic hosted model discoverability
 - LocalDocs (towards a local perplexity)
   - Multilingual LocalDocs Support
     - [ ] Create a multilingual experience