# Vectara Integration

This PR provides integration with Vectara. Implemented here are:
* langchain/vectorstore/vectara.py
* tests/integration_tests/vectorstores/test_vectara.py
* langchain/retrievers/vectara_retriever.py
Plus two IPYNB notebooks for additional testing:
* docs/modules/chains/index_examples/vectara_text_generation.ipynb
* docs/modules/indexes/vectorstores/examples/vectara.ipynb

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>

@@ -0,0 +1,40 @@
# Vectara

What is Vectara?

**Vectara Overview:**
- Vectara is a developer-first API platform for building conversational search applications
- To use Vectara, first [sign up](https://console.vectara.com/signup) and create an account, then create a corpus and an API key for indexing and searching.
- You can use Vectara's [indexing API](https://docs.vectara.com/docs/indexing-apis/indexing) to add documents to Vectara's index, as in the sketch after this list
- You can use Vectara's [Search API](https://docs.vectara.com/docs/search-apis/search) to query Vectara's index (which also implicitly supports hybrid search)
- You can use Vectara's integration with LangChain as a vector store or via the retriever abstraction
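
As a rough sketch of what the indexing API looks like (this is separate from the LangChain integration itself), the snippet below posts a single document to the same REST endpoint the LangChain wrapper uses internally; the `customer_id`, `corpus_id`, and `api_key` values are placeholders you obtain from the Vectara console:

```python
import json

import requests

customer_id = "<YOUR_CUSTOMER_ID>"  # placeholder
corpus_id = "<YOUR_CORPUS_ID>"  # placeholder
api_key = "<YOUR_API_KEY>"  # placeholder

# Index a single document into the corpus.
request = {
    "customer_id": customer_id,
    "corpus_id": corpus_id,
    "document": {
        "document_id": "doc-1",
        "metadataJson": json.dumps({"source": "example"}),
        "section": [{"text": "Vectara is a conversational search platform."}],
    },
}
response = requests.post(
    "https://api.vectara.io/v1/index",
    headers={
        "x-api-key": api_key,
        "customer-id": customer_id,
        "Content-Type": "application/json",
    },
    data=json.dumps(request),
    timeout=30,
)
print(response.status_code)
```
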
## Installation and Setup

To use Vectara with LangChain, no special installation steps are required. You just have to provide your customer ID, corpus ID, and an API key created within the Vectara console to enable indexing and searching.

### VectorStore

There exists a wrapper around the Vectara platform, allowing you to use it as a vectorstore, whether for semantic search or example selection.

To import this vectorstore:
```python
from langchain.vectorstores import Vectara
```
To create an instance of the Vectara vectorstore:
```python
vectara = Vectara(
    vectara_customer_id=customer_id,
    vectara_corpus_id=corpus_id,
    vectara_api_key=api_key
)
```
The `customer_id`, `corpus_id` and `api_key` are optional; if they are not supplied, they will be read from the environment variables `VECTARA_CUSTOMER_ID`, `VECTARA_CORPUS_ID` and `VECTARA_API_KEY`, respectively.
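
For example, assuming the three environment variables are already set (placeholders shown below), a minimal end-to-end sketch looks like this:

```python
import os

from langchain.vectorstores import Vectara

# Placeholder credentials; in practice, export these in your shell.
os.environ["VECTARA_CUSTOMER_ID"] = "<YOUR_CUSTOMER_ID>"
os.environ["VECTARA_CORPUS_ID"] = "<YOUR_CORPUS_ID>"
os.environ["VECTARA_API_KEY"] = "<YOUR_API_KEY>"

# No arguments needed: credentials are read from the environment, and
# Vectara computes the embeddings on its side.
vectara = Vectara()
vectara.add_texts(["Vectara is a conversational search platform."])
docs = vectara.similarity_search("What is Vectara?", k=5)
```
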
For a more detailed walkthrough of the Vectara wrapper, see one of the two example notebooks:
* [Chat Over Documents with Vectara](./vectara/vectara_chat.html)
* [Vectara Text Generation](./vectara/vectara_text_generation.html)

@@ -0,0 +1,726 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "134a0785",
"metadata": {},
"source": [
"# Chat Over Documents with Vectara\n",
"\n",
"This notebook is based on the [chat_vector_db](https://github.com/hwchase17/langchain/blob/master/docs/modules/chains/index_examples/chat_vector_db.ipynb) notebook, but using Vectara as the vector database."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "70c4e529",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"from langchain.vectorstores import Vectara\n",
"from langchain.vectorstores.vectara import VectaraRetriever\n",
"from langchain.llms import OpenAI\n",
"from langchain.chains import ConversationalRetrievalChain"
]
},
{
"cell_type": "markdown",
"id": "cdff94be",
"metadata": {},
"source": [
"Load in documents. You can replace this with a loader for whatever type of data you want"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "01c46e92",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.document_loaders import TextLoader\n",
"loader = TextLoader(\"../../modules/state_of_the_union.txt\")\n",
"documents = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "239475d2",
"metadata": {},
"source": [
"We now split the documents, create embeddings for them, and put them in a vectorstore. This allows us to do semantic search over them."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "a8930cf7",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"vectorstore = Vectara.from_documents(documents, embedding=None)"
]
},
{
"cell_type": "markdown",
"id": "898b574b",
"metadata": {},
"source": [
"We can now create a memory object, which is neccessary to track the inputs/outputs and hold a conversation."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "af803fee",
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory import ConversationBufferMemory\n",
"memory = ConversationBufferMemory(memory_key=\"chat_history\", return_messages=True)"
]
},
{
"cell_type": "markdown",
"id": "3c96b118",
"metadata": {},
"source": [
"We now initialize the `ConversationalRetrievalChain`"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "7b4110f3",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'langchain.vectorstores.vectara.Vectara'>\n"
]
}
],
"source": [
"openai_api_key = os.environ['OPENAI_API_KEY']\n",
"llm = OpenAI(openai_api_key=openai_api_key, temperature=0)\n",
"retriever = VectaraRetriever(vectorstore, alpha=0.025, k=5, filter=None)\n",
"\n",
"print(type(vectorstore))\n",
"d = retriever.get_relevant_documents('What did the president say about Ketanji Brown Jackson')\n",
"\n",
"qa = ConversationalRetrievalChain.from_llm(llm, retriever, memory=memory)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "e8ce4fe9",
"metadata": {},
"outputs": [],
"source": [
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = qa({\"question\": query})"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "4c79862b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, and a former federal public defender.\""
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result[\"answer\"]"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "c697d9d1",
"metadata": {},
"outputs": [],
"source": [
"query = \"Did he mention who she suceeded\"\n",
"result = qa({\"question\": query})"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "ba0678f3",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' Justice Stephen Breyer.'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result['answer']"
]
},
{
"cell_type": "markdown",
"id": "b3308b01-5300-4999-8cd3-22f16dae757e",
"metadata": {},
"source": [
"## Pass in chat history\n",
"\n",
"In the above example, we used a Memory object to track chat history. We can also just pass it in explicitly. In order to do this, we need to initialize a chain without any memory object."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "1b41a10b-bf68-4689-8f00-9aed7675e2ab",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever())"
]
},
{
"cell_type": "markdown",
"id": "83f38c18-ac82-45f4-a79e-8b37ce1ae115",
"metadata": {},
"source": [
"Here's an example of asking a question with no chat history"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "bc672290-8a8b-4828-a90c-f1bbdd6b3920",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "6b62d758-c069-4062-88f0-21e7ea4710bf",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"\" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, and a former federal public defender.\""
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result[\"answer\"]"
]
},
{
"cell_type": "markdown",
"id": "8c26a83d-c945-4458-b54a-c6bd7f391303",
"metadata": {},
"source": [
"Here's an example of asking a question with some chat history"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "9c95460b-7116-4155-a9d2-c0fb027ee592",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat_history = [(query, result[\"answer\"])]\n",
"query = \"Did he mention who she suceeded\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "698ac00c-cadc-407f-9423-226b2d9258d0",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"' Justice Stephen Breyer.'"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result['answer']"
]
},
{
"cell_type": "markdown",
"id": "0eaadf0f",
"metadata": {},
"source": [
"## Return Source Documents\n",
"You can also easily return source documents from the ConversationalRetrievalChain. This is useful for when you want to inspect what documents were returned."
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "562769c6",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"qa = ConversationalRetrievalChain.from_llm(llm, vectorstore.as_retriever(), return_source_documents=True)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "ea478300",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "4cb75b4e",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence. A former top litigator in private practice. A former federal public defender.', metadata={'source': '../../modules/state_of_the_union.txt'})"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result['source_documents'][0]"
]
},
{
"cell_type": "markdown",
"id": "669ede2f-d69f-4960-8468-8a768ce1a55f",
"metadata": {},
"source": [
"## ConversationalRetrievalChain with `search_distance`\n",
"If you are using a vector store that supports filtering by search distance, you can add a threshold value parameter."
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "f4f32c6f-8e49-44af-9116-8830b1fcc5f2",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"vectordbkwargs = {\"search_distance\": 0.9}"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "1e251775-31e7-4679-b744-d4a57937f93a",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), return_source_documents=True)\n",
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history, \"vectordbkwargs\": vectordbkwargs})"
]
},
{
"cell_type": "markdown",
"id": "99b96dae",
"metadata": {},
"source": [
"## ConversationalRetrievalChain with `map_reduce`\n",
"We can also use different types of combine document chains with the ConversationalRetrievalChain chain."
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "e53a9d66",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain.chains.question_answering import load_qa_chain\n",
"from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "bf205e35",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)\n",
"doc_chain = load_qa_chain(llm, chain_type=\"map_reduce\")\n",
"\n",
"chain = ConversationalRetrievalChain(\n",
" retriever=vectorstore.as_retriever(),\n",
" question_generator=question_generator,\n",
" combine_docs_chain=doc_chain,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "78155887",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = chain({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "e54b5fa2",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"' The president did not mention Ketanji Brown Jackson.'"
]
},
"execution_count": 23,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result['answer']"
]
},
{
"cell_type": "markdown",
"id": "a2fe6b14",
"metadata": {},
"source": [
"## ConversationalRetrievalChain with Question Answering with sources\n",
"\n",
"You can also use this chain with the question answering with sources chain."
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "d1058fd2",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chains.qa_with_sources import load_qa_with_sources_chain"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "a6594482",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"\n",
"question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)\n",
"doc_chain = load_qa_with_sources_chain(llm, chain_type=\"map_reduce\")\n",
"\n",
"chain = ConversationalRetrievalChain(\n",
" retriever=vectorstore.as_retriever(),\n",
" question_generator=question_generator,\n",
" combine_docs_chain=doc_chain,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "e2badd21",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = chain({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "edb31fe5",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"' The president did not mention Ketanji Brown Jackson.\\nSOURCES: ../../modules/state_of_the_union.txt'"
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result['answer']"
]
},
{
"cell_type": "markdown",
"id": "2324cdc6-98bf-4708-b8cd-02a98b1e5b67",
"metadata": {},
"source": [
"## ConversationalRetrievalChain with streaming to `stdout`\n",
"\n",
"Output from the chain will be streamed to `stdout` token by token in this example."
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "2efacec3-2690-4b05-8de3-a32fd2ac3911",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chains.llm import LLMChain\n",
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
"from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT, QA_PROMPT\n",
"from langchain.chains.question_answering import load_qa_chain\n",
"\n",
"# Construct a ConversationalRetrievalChain with a streaming llm for combine docs\n",
"# and a separate, non-streaming llm for question generation\n",
"llm = OpenAI(temperature=0, openai_api_key=openai_api_key)\n",
"streaming_llm = OpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0, openai_api_key=openai_api_key)\n",
"\n",
"question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)\n",
"doc_chain = load_qa_chain(streaming_llm, chain_type=\"stuff\", prompt=QA_PROMPT)\n",
"\n",
"qa = ConversationalRetrievalChain(\n",
" retriever=vectorstore.as_retriever(), combine_docs_chain=doc_chain, question_generator=question_generator)"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "fd6d43f4-7428-44a4-81bc-26fe88a98762",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, and a former federal public defender."
]
}
],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "5ab38978-f3e8-4fa7-808c-c79dec48379a",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Justice Stephen Breyer."
]
}
],
"source": [
"chat_history = [(query, result[\"answer\"])]\n",
"query = \"Did he mention who she suceeded\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})\n"
]
},
{
"cell_type": "markdown",
"id": "f793d56b",
"metadata": {},
"source": [
"## get_chat_history Function\n",
"You can also specify a `get_chat_history` function, which can be used to format the chat_history string."
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "a7ba9d8c",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"def get_chat_history(inputs) -> str:\n",
" res = []\n",
" for human, ai in inputs:\n",
" res.append(f\"Human:{human}\\nAI:{ai}\")\n",
" return \"\\n\".join(res)\n",
"qa = ConversationalRetrievalChain.from_llm(llm, vectorstore.as_retriever(), get_chat_history=get_chat_history)"
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "a3e33c0d",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 33,
"id": "936dc62f",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"\" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, and a former federal public defender.\""
]
},
"execution_count": 33,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result['answer']"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b8c26901",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@@ -0,0 +1,199 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Vectara Text Generation\n",
"\n",
"This notebook is based on [chat_vector_db](https://github.com/hwchase17/langchain/blob/master/docs/modules/chains/index_examples/question_answering.ipynb) and adapted to Vectara."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prepare Data\n",
"\n",
"First, we prepare the data. For this example, we fetch a documentation site that consists of markdown files hosted on Github and split them into small enough Documents."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain.docstore.document import Document\n",
"import requests\n",
"from langchain.vectorstores import Vectara\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.prompts import PromptTemplate\n",
"import pathlib\n",
"import subprocess\n",
"import tempfile"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Cloning into '.'...\n"
]
}
],
"source": [
"def get_github_docs(repo_owner, repo_name):\n",
" with tempfile.TemporaryDirectory() as d:\n",
" subprocess.check_call(\n",
" f\"git clone --depth 1 https://github.com/{repo_owner}/{repo_name}.git .\",\n",
" cwd=d,\n",
" shell=True,\n",
" )\n",
" git_sha = (\n",
" subprocess.check_output(\"git rev-parse HEAD\", shell=True, cwd=d)\n",
" .decode(\"utf-8\")\n",
" .strip()\n",
" )\n",
" repo_path = pathlib.Path(d)\n",
" markdown_files = list(repo_path.glob(\"*/*.md\")) + list(\n",
" repo_path.glob(\"*/*.mdx\")\n",
" )\n",
" for markdown_file in markdown_files:\n",
" with open(markdown_file, \"r\") as f:\n",
" relative_path = markdown_file.relative_to(repo_path)\n",
" github_url = f\"https://github.com/{repo_owner}/{repo_name}/blob/{git_sha}/{relative_path}\"\n",
" yield Document(page_content=f.read(), metadata={\"source\": github_url})\n",
"\n",
"sources = get_github_docs(\"yirenlu92\", \"deno-manual-forked\")\n",
"\n",
"source_chunks = []\n",
"splitter = CharacterTextSplitter(separator=\" \", chunk_size=1024, chunk_overlap=0)\n",
"for source in sources:\n",
" for chunk in splitter.split_text(source.page_content):\n",
" source_chunks.append(chunk)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set Up Vector DB\n",
"\n",
"Now that we have the documentation content in chunks, let's put all this information in a vector index for easy retrieval."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"search_index = Vectara.from_texts(source_chunks, embedding=None)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set Up LLM Chain with Custom Prompt\n",
"\n",
"Next, let's set up a simple LLM chain but give it a custom prompt for blog post generation. Note that the custom prompt is parameterized and takes two inputs: `context`, which will be the documents fetched from the vector search, and `topic`, which is given by the user."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import LLMChain\n",
"prompt_template = \"\"\"Use the context below to write a 400 word blog post about the topic below:\n",
" Context: {context}\n",
" Topic: {topic}\n",
" Blog post:\"\"\"\n",
"\n",
"PROMPT = PromptTemplate(\n",
" template=prompt_template, input_variables=[\"context\", \"topic\"]\n",
")\n",
"\n",
"llm = OpenAI(openai_api_key=os.environ['OPENAI_API_KEY'], temperature=0)\n",
"\n",
"chain = LLMChain(llm=llm, prompt=PROMPT)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Generate Text\n",
"\n",
"Finally, we write a function to apply our inputs to the chain. The function takes an input parameter `topic`. We find the documents in the vector index that correspond to that `topic`, and use them as additional context in our simple LLM chain."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"def generate_blog_post(topic):\n",
" docs = search_index.similarity_search(topic, k=4)\n",
" inputs = [{\"context\": doc.page_content, \"topic\": topic} for doc in docs]\n",
" print(chain.apply(inputs))"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'text': '\\n\\nEnvironment variables are an essential part of any development workflow. They provide a way to store and access information that is specific to the environment in which the code is running. This can be especially useful when working with different versions of a language or framework, or when running code on different machines.\\n\\nThe Deno CLI tasks extension provides a way to easily manage environment variables when running Deno commands. This extension provides a task definition for allowing you to create tasks that execute the `deno` CLI from within the editor. The template for the Deno CLI tasks has the following interface, which can be configured in a `tasks.json` within your workspace:\\n\\nThe task definition includes the `type` field, which should be set to `deno`, and the `command` field, which is the `deno` command to run (e.g. `run`, `test`, `cache`, etc.). Additionally, you can specify additional arguments to pass on the command line, the current working directory to execute the command, and any environment variables.\\n\\nUsing environment variables with the Deno CLI tasks extension is a great way to ensure that your code is running in the correct environment. For example, if you are running a test suite,'}, {'text': '\\n\\nEnvironment variables are an important part of any programming language, and they can be used to store and access data in a variety of ways. In this blog post, we\\'ll be taking a look at environment variables specifically for the shell.\\n\\nShell variables are similar to environment variables, but they won\\'t be exported to spawned commands. They are defined with the following syntax:\\n\\n```sh\\nVAR_NAME=value\\n```\\n\\nShell variables can be used to store and access data in a variety of ways. For example, you can use them to store values that you want to re-use, but don\\'t want to be available in any spawned processes.\\n\\nFor example, if you wanted to store a value and then use it in a command, you could do something like this:\\n\\n```sh\\nVAR=hello && echo $VAR && deno eval \"console.log(\\'Deno: \\' + Deno.env.get(\\'VAR\\'))\"\\n```\\n\\nThis would output the following:\\n\\n```\\nhello\\nDeno: undefined\\n```\\n\\nAs you can see, the value stored in the shell variable is not available in the spawned process.\\n\\n'}, {'text': '\\n\\nWhen it comes to developing applications, environment variables are an essential part of the process. Environment variables are used to store information that can be used by applications and scripts to customize their behavior. This is especially important when it comes to developing applications with Deno, as there are several environment variables that can impact the behavior of Deno.\\n\\nThe most important environment variable for Deno is `DENO_AUTH_TOKENS`. This environment variable is used to store authentication tokens that are used to access remote resources. This is especially important when it comes to accessing remote APIs or databases. Without the proper authentication tokens, Deno will not be able to access the remote resources.\\n\\nAnother important environment variable for Deno is `DENO_DIR`. This environment variable is used to store the directory where Deno will store its files. This includes the Deno executable, the Deno cache, and the Deno configuration files. By setting this environment variable, you can ensure that Deno will always be able to find the files it needs.\\n\\nFinally, there is the `DENO_PLUGINS` environment variable. 
This environment variable is used to store the list of plugins that Deno will use. This is important for customizing the'}, {'text': '\\n\\nEnvironment variables are a great way to store and access sensitive information in your Deno applications. Deno offers built-in support for environment variables with `Deno.env`, and you can also use a `.env` file to store and access environment variables. In this blog post, we\\'ll explore both of these options and how to use them in your Deno applications.\\n\\n## Built-in `Deno.env`\\n\\nThe Deno runtime offers built-in support for environment variables with [`Deno.env`](https://deno.land/api@v1.25.3?s=Deno.env). `Deno.env` has getter and setter methods. Here is example usage:\\n\\n```ts\\nDeno.env.set(\"FIREBASE_API_KEY\", \"examplekey123\");\\nDeno.env.set(\"FIREBASE_AUTH_DOMAIN\", \"firebasedomain.com\");\\n\\nconsole.log(Deno.env.get(\"FIREBASE_API_KEY\")); // examplekey123\\nconsole.log(Deno.env.get(\"FIREBASE_AUTH_'}]\n"
]
}
],
"source": [
"generate_blog_post(\"environment variables\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

@@ -0,0 +1,318 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "683953b3",
"metadata": {},
"source": [
"# Vectara\n",
"\n",
">[Vectara](https://Vectara.com/docs/) is a API platform for building LLM-powered applications. It provides a simple to use API for document indexing and query that is managed by Vectara and is optimized for performance and accuracy. \n",
"\n",
"\n",
"This notebook shows how to use functionality related to the `Vectara` vector database. \n",
"\n",
"See the [Vectara API documentation ](https://Vectara.com/docs/) for more information on how to use the API."
]
},
{
"cell_type": "markdown",
"id": "7b2f111b-357a-4f42-9730-ef0603bdc1b5",
"metadata": {},
"source": [
"We want to use `OpenAIEmbeddings` so we have to get the OpenAI API Key."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "082e7e8b-ac52-430c-98d6-8f0924457642",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI API Key:········\n"
]
}
],
"source": [
"import os\n",
"import getpass\n",
"\n",
"os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "aac9563e",
"metadata": {
"ExecuteTime": {
"end_time": "2023-04-04T10:51:22.282884Z",
"start_time": "2023-04-04T10:51:21.408077Z"
},
"tags": []
},
"outputs": [],
"source": [
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.vectorstores import Vectara\n",
"from langchain.document_loaders import TextLoader"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "a3c3999a",
"metadata": {
"ExecuteTime": {
"end_time": "2023-04-04T10:51:22.520144Z",
"start_time": "2023-04-04T10:51:22.285826Z"
},
"tags": []
},
"outputs": [],
"source": [
"loader = TextLoader('../../../state_of_the_union.txt')\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"docs = text_splitter.split_documents(documents)\n",
"\n",
"embeddings = OpenAIEmbeddings()"
]
},
{
"cell_type": "markdown",
"id": "eeead681",
"metadata": {},
"source": [
"## Connecting to Vectara from LangChain\n",
"\n",
"The Vectara API provides simple API endpoints for indexing and querying."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "8429667e",
"metadata": {
"ExecuteTime": {
"end_time": "2023-04-04T10:51:22.525091Z",
"start_time": "2023-04-04T10:51:22.522015Z"
},
"tags": []
},
"outputs": [],
"source": [
"vectara = Vectara.from_documents(docs, embedding=None)"
]
},
{
"cell_type": "markdown",
"id": "1f9215c8",
"metadata": {
"ExecuteTime": {
"end_time": "2023-04-04T09:27:29.920258Z",
"start_time": "2023-04-04T09:27:29.913714Z"
}
},
"source": [
"## Similarity search\n",
"\n",
"The simplest scenario for using Vectara is to perform a similarity search. "
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "a8c513ab",
"metadata": {
"ExecuteTime": {
"end_time": "2023-04-04T10:51:25.204469Z",
"start_time": "2023-04-04T10:51:24.855618Z"
},
"tags": []
},
"outputs": [],
"source": [
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"found_docs = vectara.similarity_search(query)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "fc516993",
"metadata": {
"ExecuteTime": {
"end_time": "2023-04-04T10:51:25.220984Z",
"start_time": "2023-04-04T10:51:25.213943Z"
},
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence. A former top litigator in private practice. A former federal public defender.\n"
]
}
],
"source": [
"print(found_docs[0].page_content)"
]
},
{
"cell_type": "markdown",
"id": "1bda9bf5",
"metadata": {},
"source": [
"## Similarity search with score\n",
"\n",
"Sometimes we might want to perform the search, but also obtain a relevancy score to know how good is a particular result."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "8804a21d",
"metadata": {
"ExecuteTime": {
"end_time": "2023-04-04T10:51:25.631585Z",
"start_time": "2023-04-04T10:51:25.227384Z"
}
},
"outputs": [],
"source": [
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"found_docs = vectara.similarity_search_with_score(query)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "756a6887",
"metadata": {
"ExecuteTime": {
"end_time": "2023-04-04T10:51:25.642282Z",
"start_time": "2023-04-04T10:51:25.635947Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence. A former top litigator in private practice. A former federal public defender.\n",
"\n",
"Score: 1.0046461\n"
]
}
],
"source": [
"document, score = found_docs[0]\n",
"print(document.page_content)\n",
"print(f\"\\nScore: {score}\")"
]
},
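{
"cell_type": "markdown",
"id": "5f109dc1",
"metadata": {},
"source": [
"As a sketch of the other search parameters: `similarity_search` and `similarity_search_with_score` also accept an `alpha` parameter (the hybrid search \"lambda\" in Vectara's documentation) and a `filter` expression over document metadata. The cell below is not executed here, and `doc.rating` is a hypothetical filterable metadata attribute that you would define on your own corpus."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5f109dc2",
"metadata": {},
"outputs": [],
"source": [
"# Hybrid search with a metadata filter (hypothetical \"doc.rating\" attribute)\n",
"found_docs = vectara.similarity_search(\n",
"    query, k=5, alpha=0.025, filter=\"doc.rating > 3.0\"\n",
")"
]
},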
{
"cell_type": "markdown",
"id": "691a82d6",
"metadata": {},
"source": [
"## Vectara as a Retriever\n",
"\n",
"Vectara, as all the other vector stores, is a LangChain Retriever, by using cosine similarity. "
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "9427195f",
"metadata": {
"ExecuteTime": {
"end_time": "2023-04-04T10:51:26.031451Z",
"start_time": "2023-04-04T10:51:26.018763Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"VectorStoreRetriever(vectorstore=<langchain.vectorstores.vectara.Vectara object at 0x156d3e830>, search_type='similarity', search_kwargs={})"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever = vectara.as_retriever()\n",
"retriever"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "f3c70c31",
"metadata": {
"ExecuteTime": {
"end_time": "2023-04-04T10:51:26.495652Z",
"start_time": "2023-04-04T10:51:26.046407Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence. A former top litigator in private practice. A former federal public defender.', metadata={'source': '../../modules/state_of_the_union.txt'})"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"retriever.get_relevant_documents(query)[0]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2300e785",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@@ -18,6 +18,7 @@ from langchain.vectorstores.redis import Redis
from langchain.vectorstores.supabase import SupabaseVectorStore
from langchain.vectorstores.tair import Tair
from langchain.vectorstores.typesense import Typesense
from langchain.vectorstores.vectara import Vectara
from langchain.vectorstores.weaviate import Weaviate
from langchain.vectorstores.zilliz import Zilliz
@@ -40,6 +41,7 @@ __all__ = [
"MyScaleSettings",
"SupabaseVectorStore",
"AnalyticDB",
"Vectara",
"Tair",
"LanceDB",
"DocArrayHnswSearch",

@@ -0,0 +1,309 @@
"""Wrapper around Vectara vector database."""
from __future__ import annotations
import json
import logging
import os
from hashlib import md5
from typing import Any, Iterable, List, Optional, Tuple, Type
import requests
from pydantic import Field
from langchain.embeddings.base import Embeddings
from langchain.schema import Document
from langchain.vectorstores.base import VectorStore, VectorStoreRetriever
class Vectara(VectorStore):
"""Implementation of Vector Store using Vectara (https://vectara.com).
Example:
.. code-block:: python
from langchain.vectorstores import Vectara
vectorstore = Vectara(
vectara_customer_id=vectara_customer_id,
vectara_corpus_id=vectara_corpus_id,
vectara_api_key=vectara_api_key
)
"""
    def __init__(
        self,
        vectara_customer_id: Optional[str] = None,
        vectara_corpus_id: Optional[str] = None,
        vectara_api_key: Optional[str] = None,
    ):
        """Initialize with Vectara API."""
        self._vectara_customer_id = vectara_customer_id or os.environ.get(
            "VECTARA_CUSTOMER_ID"
        )
        self._vectara_corpus_id = vectara_corpus_id or os.environ.get(
            "VECTARA_CORPUS_ID"
        )
        self._vectara_api_key = vectara_api_key or os.environ.get("VECTARA_API_KEY")
        if (
            self._vectara_customer_id is None
            or self._vectara_corpus_id is None
            or self._vectara_api_key is None
        ):
            logging.warning(
                "Can't find Vectara credentials, customer_id or corpus_id in "
                "environment."
            )
        else:
            logging.debug(f"Using corpus id {self._vectara_corpus_id}")
        self._session = requests.Session()  # to reuse connections
    def _get_post_headers(self) -> dict:
        """Returns headers that should be attached to each post request."""
        return {
            "x-api-key": self._vectara_api_key,
            "customer-id": self._vectara_customer_id,
            "Content-Type": "application/json",
        }

    def _delete_doc(self, doc_id: str) -> bool:
        """
        Delete a document from the Vectara corpus.

        Args:
            doc_id (str): ID of the document to delete.

        Returns:
            bool: True if deletion was successful, False otherwise.
        """
        body = {
            "customer_id": self._vectara_customer_id,
            "corpus_id": self._vectara_corpus_id,
            "document_id": doc_id,
        }
        response = self._session.post(
            "https://api.vectara.io/v1/delete-doc",
            data=json.dumps(body),
            verify=True,
            headers=self._get_post_headers(),
        )
        if response.status_code != 200:
            logging.error(
                f"Delete request failed for doc_id = {doc_id} with status code "
                f"{response.status_code}, reason {response.reason}, text "
                f"{response.text}"
            )
            return False
        return True
    def _index_doc(self, doc_id: str, text: str, metadata: dict) -> bool:
        request: dict[str, Any] = {}
        request["customer_id"] = self._vectara_customer_id
        request["corpus_id"] = self._vectara_corpus_id
        request["document"] = {
            "document_id": doc_id,
            "metadataJson": json.dumps(metadata),
            "section": [{"text": text, "metadataJson": json.dumps(metadata)}],
        }
        response = self._session.post(
            headers=self._get_post_headers(),
            url="https://api.vectara.io/v1/index",
            data=json.dumps(request),
            timeout=30,
            verify=True,
        )
        status_code = response.status_code
        result = response.json()
        status_str = result["status"]["code"] if "status" in result else None
        if status_code == 409 or (status_str and status_str == "ALREADY_EXISTS"):
            # Indexing is rejected when a document with the same id already exists.
            return False
        else:
            return True
    def add_texts(
        self,
        texts: Iterable[str],
        metadatas: Optional[List[dict]] = None,
        **kwargs: Any,
    ) -> List[str]:
        """Run more texts through the embeddings and add to the vectorstore.

        Args:
            texts: Iterable of strings to add to the vectorstore.
            metadatas: Optional list of metadatas associated with the texts.

        Returns:
            List of ids from adding the texts into the vectorstore.
        """
        # Document ids are derived from the text content, so re-adding the
        # same text maps to the same id.
        ids = [md5(text.encode("utf-8")).hexdigest() for text in texts]
        for i, doc in enumerate(texts):
            doc_id = ids[i]
            metadata = metadatas[i] if metadatas else {}
            succeeded = self._index_doc(doc_id, doc, metadata)
            if not succeeded:
                # The document already exists: delete the old copy and re-index.
                self._delete_doc(doc_id)
                self._index_doc(doc_id, doc, metadata)
        return ids
    def similarity_search_with_score(
        self,
        query: str,
        k: int = 5,
        alpha: float = 0.025,
        filter: Optional[str] = None,
        **kwargs: Any,
    ) -> List[Tuple[Document, float]]:
        """Return Vectara documents most similar to query, along with scores.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 5.
            alpha: parameter for hybrid search (called "lambda" in Vectara
                documentation).
            filter: Expression to filter on metadata. For example, a filter
                can be "doc.rating > 3.0 and part.lang = 'deu'"; see
                https://docs.vectara.com/docs/search-apis/sql/filter-overview
                for more details.

        Returns:
            List of Documents most similar to the query and score for each.
        """
        response = self._session.post(
            headers=self._get_post_headers(),
            url="https://api.vectara.io/v1/query",
            data=json.dumps(
                {
                    "query": [
                        {
                            "query": query,
                            "start": 0,
                            "num_results": k,
                            "context_config": {
                                "sentences_before": 3,
                                "sentences_after": 3,
                            },
                            "corpus_key": [
                                {
                                    "customer_id": self._vectara_customer_id,
                                    "corpus_id": self._vectara_corpus_id,
                                    "metadataFilter": filter,
                                    "lexical_interpolation_config": {"lambda": alpha},
                                }
                            ],
                        }
                    ]
                }
            ),
            timeout=10,
        )
        if response.status_code != 200:
            logging.error(
                "Query failed %s",
                f"(code {response.status_code}, reason {response.reason}, details "
                f"{response.text})",
            )
            return []

        result = response.json()
        responses = result["responseSet"][0]["response"]
        vectara_default_metadata = ["lang", "len", "offset"]
        docs = [
            (
                Document(
                    page_content=x["text"],
                    metadata={
                        m["name"]: m["value"]
                        for m in x["metadata"]
                        if m["name"] not in vectara_default_metadata
                    },
                ),
                x["score"],
            )
            for x in responses
        ]
        return docs
    def similarity_search(
        self,
        query: str,
        k: int = 5,
        alpha: float = 0.025,
        filter: Optional[str] = None,
        **kwargs: Any,
    ) -> List[Document]:
        """Return Vectara documents most similar to query.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 5.
            alpha: parameter for hybrid search (called "lambda" in Vectara
                documentation).
            filter: Expression to filter on metadata. For example, a filter
                can be "doc.rating > 3.0 and part.lang = 'deu'"; see
                https://docs.vectara.com/docs/search-apis/sql/filter-overview
                for more details.

        Returns:
            List of Documents most similar to the query.
        """
        docs_and_scores = self.similarity_search_with_score(
            query, k=k, alpha=alpha, filter=filter, **kwargs
        )
        return [doc for doc, _ in docs_and_scores]
    @classmethod
    def from_texts(
        cls: Type[Vectara],
        texts: List[str],
        embedding: Optional[Embeddings] = None,
        metadatas: Optional[List[dict]] = None,
        **kwargs: Any,
    ) -> Vectara:
        """Construct Vectara wrapper from raw documents.

        This is intended to be a quick way to get started.

        Example:
            .. code-block:: python

                from langchain import Vectara

                vectara = Vectara.from_texts(
                    texts,
                    vectara_customer_id=customer_id,
                    vectara_corpus_id=corpus_id,
                    vectara_api_key=api_key,
                )
        """
        # Note: Vectara generates its own embeddings, so we ignore the provided
        # embeddings (required by interface)
        vectara = cls(**kwargs)
        vectara.add_texts(texts, metadatas)
        return vectara

    def as_retriever(self, **kwargs: Any) -> VectaraRetriever:
        return VectaraRetriever(vectorstore=self, **kwargs)
class VectaraRetriever(VectorStoreRetriever):
    vectorstore: Vectara
    search_kwargs: dict = Field(default_factory=lambda: {"alpha": 0.025, "k": 5})
    """Search params.
    k: Number of Documents to return. Defaults to 5.
    alpha: parameter for hybrid search (called "lambda" in Vectara
        documentation).
    filter: Expression to filter on metadata. For example, a filter
        can be "doc.rating > 3.0 and part.lang = 'deu'"; see
        https://docs.vectara.com/docs/search-apis/sql/filter-overview
        for more details.
    """

    def add_texts(
        self, texts: List[str], metadatas: Optional[List[dict]] = None
    ) -> None:
        """Add text to the Vectara vectorstore.

        Args:
            texts (List[str]): The text
            metadatas (List[dict]): Metadata dicts, must line up with existing store
        """
        self.vectorstore.add_texts(texts, metadatas)

@@ -0,0 +1,34 @@
from langchain.docstore.document import Document
from langchain.vectorstores.vectara import Vectara
from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings


def get_abbr(s: str) -> str:
    words = s.split(" ")  # Split the string into words
    first_letters = [word[0] for word in words]  # Extract the first letter of each word
    return "".join(first_letters)  # Join the first letters into a single string


def test_vectara_add_documents() -> None:
    """Test end to end construction and search."""
    # start with some initial documents
    texts = ["grounded generation", "retrieval augmented generation", "data privacy"]
    docsearch: Vectara = Vectara.from_texts(
        texts,
        embedding=FakeEmbeddings(),
        metadatas=[{"abbr": "gg"}, {"abbr": "rag"}, {"abbr": "dp"}],
    )

    # then add some additional documents
    new_texts = ["large language model", "information retrieval", "question answering"]
    docsearch.add_documents(
        [Document(page_content=t, metadata={"abbr": get_abbr(t)}) for t in new_texts]
    )

    # finally do a similarity search to see if all works okay
    output = docsearch.similarity_search("large language model", k=2)
    assert output[0].page_content == "large language model"
    assert output[0].metadata == {"abbr": "llm"}
    assert output[1].page_content == "information retrieval"
    assert output[1].metadata == {"abbr": "ir"}