From 9b10a8028d19c7d102a540d894c2bb0a7dcc41fa Mon Sep 17 00:00:00 2001
From: Yash-sudo-web
Date: Wed, 25 Oct 2023 15:38:19 +0530
Subject: [PATCH] Enhanced How-to-use-different-LLM.md

---
 docs/pages/Guides/How-to-use-different-LLM.md | 54 ++++++++++---------
 1 file changed, 30 insertions(+), 24 deletions(-)

diff --git a/docs/pages/Guides/How-to-use-different-LLM.md b/docs/pages/Guides/How-to-use-different-LLM.md
index 8d7cccce..c300bef3 100644
--- a/docs/pages/Guides/How-to-use-different-LLM.md
+++ b/docs/pages/Guides/How-to-use-different-LLM.md
@@ -1,36 +1,42 @@
-Fortunately, there are many providers for LLMs, and some of them can even be run locally.
+# Setting Up Local Language Models for Your App

-There are two models used in the app:
-1. Embeddings.
-2. Text generation.
+Your app relies on two essential models: Embeddings and Text Generation. While OpenAI's default models work seamlessly, you have the flexibility to switch providers or even run the models locally.

-By default, we use OpenAI's models, but if you want to change it or even run it locally, it's very simple!
+## Step 1: Configure Environment Variables

-### Go to .env file or set environment variables:
+Navigate to the `.env` file or set the following environment variables:

-`LLM_NAME=`
+```env
+LLM_NAME=
+API_KEY=
+EMBEDDINGS_NAME=
+EMBEDDINGS_KEY=
+VITE_API_STREAMING=
+```

-`API_KEY=`
+You can omit the API keys if you are happy for users to provide their own, but make sure you still set `LLM_NAME` and `EMBEDDINGS_NAME`.
-`EMBEDDINGS_NAME=`
+## Step 2: Choose Your Models

-`EMBEDDINGS_KEY=`
+**Options for `LLM_NAME`:**
+- openai
+- manifest
+- cohere
+- Arc53/docsgpt-14b
+- Arc53/docsgpt-7b-falcon
+- llama.cpp

-`VITE_API_STREAMING=`
+**Options for `EMBEDDINGS_NAME`:**
+- openai_text-embedding-ada-002
+- huggingface_sentence-transformers/all-mpnet-base-v2
+- huggingface_hkunlp/instructor-large
+- cohere_medium

-You don't need to provide keys if you are happy with users providing theirs, so make sure you set `LLM_NAME` and `EMBEDDINGS_NAME`.
+If using Llama, set `EMBEDDINGS_NAME` to `huggingface_sentence-transformers/all-mpnet-base-v2`, and be sure to download [this model](https://d3dg1063dc54p9.cloudfront.net/models/docsgpt-7b-f16.gguf) into the `models/` folder.

-Options:
-LLM_NAME (openai, manifest, cohere, Arc53/docsgpt-14b, Arc53/docsgpt-7b-falcon, llama.cpp)
-EMBEDDINGS_NAME (openai_text-embedding-ada-002, huggingface_sentence-transformers/all-mpnet-base-v2, huggingface_hkunlp/instructor-large, cohere_medium)
+Alternatively, for a local Llama setup, run `setup.sh` and choose option 1 when prompted; the script downloads the DocsGPT model for you, so there is no need to add it to `models/` manually.

-If using Llama, set the `EMBEDDINGS_NAME` to `huggingface_sentence-transformers/all-mpnet-base-v2` and be sure to download [this model](https://d3dg1063dc54p9.cloudfront.net/models/docsgpt-7b-f16.gguf) into the `models/` folder: `https://d3dg1063dc54p9.cloudfront.net/models/docsgpt-7b-f16.gguf`.
+## Step 3: Local Hosting for Privacy

-Alternatively, if you wish to run Llama locally, you can run `setup.sh` and choose option 1 when prompted. You do not need to manually add the DocsGPT model mentioned above to your `models/` folder if you use `setup.sh`, as the script will manage that step for you.

-That's it!

-### Hosting everything locally and privately (for using our optimised open-source models)
-If you are working with critical data and don't want anything to leave your premises.
-
-Make sure you set `SELF_HOSTED_MODEL` as true in your `.env` variable, and for your `LLM_NAME`, you can use anything that is on Hugging Face.
+If you are working with sensitive data, host everything locally by setting `SELF_HOSTED_MODEL` to true in your `.env` file. For `LLM_NAME`, you can use any model available on Hugging Face.
+That's it! Your app is now configured for local, private hosting, so your data never has to leave your premises.
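
As a note for reviewers, the variables touched by this patch fit together as in the following sketch of a fully local `.env`. The values are illustrative assumptions, not part of the patch: `llama.cpp` and the embeddings model are two of the options the guide lists, and the `true` values simply show the flags enabled.

```env
# Illustrative local setup — adjust values to your deployment
LLM_NAME=llama.cpp
EMBEDDINGS_NAME=huggingface_sentence-transformers/all-mpnet-base-v2
VITE_API_STREAMING=true
SELF_HOSTED_MODEL=true
# API_KEY and EMBEDDINGS_KEY are omitted; users supply their own
```

With a configuration like this, no external provider keys are required and inference runs against the model in the `models/` folder.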