It uses [PGVector](https://github.com/pgvector/pgvector) extension as shown in the [RAG empowered SQL cookbook](https://github.com/langchain-ai/langchain/blob/master/cookbook/retrieval_in_sql.ipynb)
If you are using `ChatOpenAI` as your LLM, make sure `OPENAI_API_KEY` is set in your environment. You can change both the LLM and embeddings model inside `chain.py`.
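For example, in your shell:

```bash
export OPENAI_API_KEY="sk-..."  # replace with your actual key
```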
You can also configure the following environment variables for use by the template (defaults are in parentheses):
- `POSTGRES_USER` (postgres)
- `POSTGRES_PASSWORD` (test)
- `POSTGRES_DB` (vectordb)
- `POSTGRES_HOST` (localhost)
- `POSTGRES_PORT` (5432)
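For example, to point the template at a different instance (the host and port here are purely illustrative):

```bash
export POSTGRES_HOST="db.internal.example"  # hypothetical host
export POSTGRES_PORT="5433"                 # hypothetical non-default port
```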
If you don't have a Postgres instance, you can run one locally in Docker. The `pgvector/pgvector` image used below ships with the `pgvector` extension pre-installed:
```bash
# pgvector/pgvector bundles the pgvector extension; the stock postgres image does not
docker run \
  --name some-postgres \
  -e POSTGRES_PASSWORD=test \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_DB=vectordb \
  -p 5432:5432 \
  pgvector/pgvector:pg16
```
To start the container again later, use the `--name` defined above:
```bash
docker start some-postgres
```
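To verify the database is reachable, you can connect with `psql` (a quick check, assuming the defaults above):

```bash
psql "postgresql://postgres:test@localhost:5432/vectordb" -c "SELECT version();"
```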
### PostgreSQL Database setup
Apart from having the `pgvector` extension enabled, you will need to do some setup before you can run semantic search within your SQL queries.
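If the extension has not been enabled yet, you can do so with a single statement (assuming the connection defaults above; the `pgvector/pgvector` image includes the required extension files):

```bash
psql "postgresql://postgres:test@localhost:5432/vectordb" \
  -c "CREATE EXTENSION IF NOT EXISTS vector;"
```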
In order to run RAG over your PostgreSQL database, you will need to generate embeddings for the specific columns you want to search over.
This process is covered in the [RAG empowered SQL cookbook](https://github.com/langchain-ai/langchain/blob/master/cookbook/retrieval_in_sql.ipynb), but the overall approach consists of: