Fortunately, there are many providers for LLMs, and some of them can even be run locally.

There are two models used in the app:

1. Embeddings.
2. Text generation.

By default, we use OpenAI's models, but if you want to change them or even run them locally, it's very simple!

### Go to the `.env` file or set environment variables:

`LLM_NAME=<your text generation model>`

`API_KEY=<API key for text generation>`

`EMBEDDINGS_NAME=<LLM for embeddings>`

`EMBEDDINGS_KEY=<API key for embeddings>`

`VITE_API_STREAMING=<true or false (true if using OpenAI, false for all others)>`

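For example, a `.env` set up to use OpenAI for both models might look like the sketch below (the key values are placeholders for your own keys):

```
LLM_NAME=openai
API_KEY=your-openai-api-key
EMBEDDINGS_NAME=openai_text-embedding-ada-002
EMBEDDINGS_KEY=your-openai-api-key
VITE_API_STREAMING=true
```
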
You don't need to provide API keys if you are happy with users providing their own, but make sure you still set `LLM_NAME` and `EMBEDDINGS_NAME`.

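In that case, a minimal keyless `.env` could be as small as the following sketch (users then supply their own keys):

```
LLM_NAME=openai
EMBEDDINGS_NAME=openai_text-embedding-ada-002
VITE_API_STREAMING=true
```
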
Options:

`LLM_NAME` (openai, manifest, cohere, Arc53/docsgpt-14b, Arc53/docsgpt-7b-falcon, llama.cpp)

`EMBEDDINGS_NAME` (openai_text-embedding-ada-002, huggingface_sentence-transformers/all-mpnet-base-v2, huggingface_hkunlp/instructor-large, cohere_medium)

If using Llama, set `EMBEDDINGS_NAME` to `huggingface_sentence-transformers/all-mpnet-base-v2`.

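Putting those settings together, a local llama.cpp configuration might look like this sketch (no API keys are assumed here, since the model runs locally):

```
LLM_NAME=llama.cpp
EMBEDDINGS_NAME=huggingface_sentence-transformers/all-mpnet-base-v2
VITE_API_STREAMING=false
```
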
Alternatively, if you wish to run Llama locally, you can run `setup.sh` and choose option 1 when prompted.

That's it!

### Hosting everything locally and privately (for using our optimised open-source models)

If you are working with important data and don't want anything to leave your premises, you can host everything locally. Make sure you set `SELF_HOSTED_MODEL` to true in your `.env` file, and for `LLM_NAME` you can use any model that's on Hugging Face.

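For example, to serve one of the open-source DocsGPT models listed above, the relevant `.env` entries might look like this sketch (swap in whichever Hugging Face model you want to run; the embeddings model shown is the local option suggested earlier):

```
SELF_HOSTED_MODEL=true
LLM_NAME=Arc53/docsgpt-7b-falcon
EMBEDDINGS_NAME=huggingface_sentence-transformers/all-mpnet-base-v2
VITE_API_STREAMING=false
```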