Update How-to-use-different-LLM.md

Corrected grammatical errors to remove ambiguity and improve professionalism.
This commit is contained in:
Ayush-Prabhu 2023-10-18 16:21:15 +05:30 committed by GitHub
parent 49a4b119e1
commit d93266fee2


@@ -1,10 +1,10 @@
- Fortunately, there are many providers for LLM's and some of them can even be run locally
+ Fortunately, there are many providers for LLMs, and some of them can even be run locally.
There are two models used in the app:
1. Embeddings.
2. Text generation.
- By default, we use OpenAI's models but if you want to change it or even run it locally, it's very simple!
+ By default, we use OpenAI's models, but if you want to change it or even run it locally, it's very simple!
### Go to .env file or set environment variables:
@@ -31,6 +31,6 @@ Alternatively, if you wish to run Llama locally, you can run `setup.sh` and choo
That's it!
### Hosting everything locally and privately (for using our optimised open-source models)
- If you are working with important data and don't want anything to leave your premises.
+ If you are working with critical data and don't want anything to leave your premises.
- Make sure you set `SELF_HOSTED_MODEL` as true in your `.env` variable and for your `LLM_NAME` you can use anything that's on Hugging Face.
+ Make sure you set `SELF_HOSTED_MODEL` as true in your `.env` variable, and for your `LLM_NAME`, you can use anything that is on Hugging Face.
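As a sketch, the self-hosted setup described in the changed lines above amounts to two entries in the `.env` file. `SELF_HOSTED_MODEL` and `LLM_NAME` are the variable names from the document; the model identifier shown is only an illustrative placeholder for a Hugging Face model id, not a requirement of the app.

```shell
# .env — example configuration for hosting everything locally
# (sketch based on the variables named in the doc; the model id is a placeholder)
SELF_HOSTED_MODEL=true
LLM_NAME=gpt2   # any model id available on Hugging Face
```

With these set, the app should pull the named model from Hugging Face instead of calling OpenAI, so no data leaves your premises.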