# Setting Up Local Language Models for Your App
Your app relies on two essential models: Embeddings and Text Generation. While OpenAI's default models work seamlessly, you have the flexibility to switch providers or even run the models locally.
## Step 1: Configure Environment Variables
Open the `.env` file and set the following environment variables (or export them directly in your environment):
```env
LLM_NAME=<your Text Generation model>
API_KEY=<API key for Text Generation>
EMBEDDINGS_NAME=<LLM for Embeddings>
EMBEDDINGS_KEY=<API key for Embeddings>
VITE_API_STREAMING=<true or false>
```
You can omit the keys if users provide their own. Ensure you set `LLM_NAME` and `EMBEDDINGS_NAME`.
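
For illustration, a default OpenAI-backed configuration might look like the following (placeholder values shown; this assumes the same OpenAI key can be used for both text generation and embeddings):

```env
LLM_NAME=openai
API_KEY=<your OpenAI API key>
EMBEDDINGS_NAME=openai_text-embedding-ada-002
EMBEDDINGS_KEY=<your OpenAI API key>
VITE_API_STREAMING=true
```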
## Step 2: Choose Your Models
**Options for `LLM_NAME`:**
- openai ([More details](https://platform.openai.com/docs/models))
- anthropic ([More details](https://docs.anthropic.com/claude/reference/selecting-a-model))
- manifest ([More details](https://python.langchain.com/docs/integrations/llms/manifest))
- cohere ([More details](https://docs.cohere.com/docs/llmu))
- llama.cpp ([More details](https://python.langchain.com/docs/integrations/llms/llamacpp))
- huggingface (Arc53/DocsGPT-7B by default)
- sagemaker ([More details](https://aws.amazon.com/sagemaker/))

Note: for huggingface you can use any of the models defined in `application/llm/huggingface.py`, or pass `llm_name` on init to load a different one.
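
For example, switching text generation to Anthropic should only require changing the provider and key; the option names above are assumed to be used verbatim as the `LLM_NAME` value:

```env
LLM_NAME=anthropic
API_KEY=<your Anthropic API key>
```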
**Options for `EMBEDDINGS_NAME`:**
- openai_text-embedding-ada-002
- huggingface_sentence-transformers/all-mpnet-base-v2
- huggingface_hkunlp/instructor-large
- cohere_medium

If you want to be completely local, set `EMBEDDINGS_NAME` to `huggingface_sentence-transformers/all-mpnet-base-v2`.
For llama.cpp, download the required model and place it in the `models/` folder.
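
As a sketch, a fully local configuration might look like the following; the API keys are omitted here on the assumption that locally hosted models do not need them:

```env
LLM_NAME=llama.cpp
EMBEDDINGS_NAME=huggingface_sentence-transformers/all-mpnet-base-v2
VITE_API_STREAMING=true
```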

Alternatively, for a local Llama setup, run `setup.sh` and choose option 1. The script handles adding the DocsGPT model for you.
## Step 3: Local Hosting for Privacy
If you are working with sensitive data, host everything locally by setting `LLM_NAME` to `llama.cpp` or `huggingface`. You can use any model available on Hugging Face; for llama.cpp, the model must first be converted to the GGUF format.
That's it! Your app is now configured for local and private hosting, ensuring optimal security for critical data.