There are still a few broken links:

- some in the chains docs, which I will delete soon :)
- some pointing to a sqlite tool, which we should add
pull/15568/head
Harrison Chase 9 months ago committed by GitHub
parent 7c4fe58f55
commit fd5fbb507d

@ -656,6 +656,6 @@ agent.run("Who directed the 2023 film Oppenheimer and what is their age? What is
## Other callbacks
`Callbacks` are what we use to execute any functionality within a component outside the primary component logic. All of the above solutions use `Callbacks` under the hood to log intermediate steps of components. There are a number of `Callbacks` relevant for debugging that come with LangChain out of the box, like the [FileCallbackHandler](/docs/modules/callbacks/how_to/filecallbackhandler). You can also implement your own callbacks to execute custom functionality.
`Callbacks` are what we use to execute any functionality within a component outside the primary component logic. All of the above solutions use `Callbacks` under the hood to log intermediate steps of components. There are a number of `Callbacks` relevant for debugging that come with LangChain out of the box, like the [FileCallbackHandler](/docs/modules/callbacks/filecallbackhandler). You can also implement your own callbacks to execute custom functionality.
See here for more info on [Callbacks](/docs/modules/callbacks/), how to use them, and customize them.
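
As a minimal, hedged sketch of a custom callback (the handler name and printed messages here are our own, not part of the library):

```python
from langchain.callbacks.base import BaseCallbackHandler


class PrintingHandler(BaseCallbackHandler):
    """Toy handler that logs when an LLM call starts and ends."""

    def on_llm_start(self, serialized, prompts, **kwargs):
        print(f"LLM started with {len(prompts)} prompt(s)")

    def on_llm_end(self, response, **kwargs):
        print("LLM call finished")


# Attach it to any component via the `callbacks` argument, e.g.:
# llm = OpenAI(callbacks=[PrintingHandler()])
```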

@ -20,11 +20,11 @@ This guide aims to provide a comprehensive overview of the requirements for depl
Understanding these components is crucial when assessing serving systems. LangChain integrates with several open-source projects designed to tackle these issues, providing a robust framework for productionizing your LLM applications. Some notable frameworks include:
- [Ray Serve](/docs/ecosystem/integrations/ray_serve)
- [Ray Serve](/docs/integrations/providers/ray_serve)
- [BentoML](https://github.com/bentoml/BentoML)
- [OpenLLM](/docs/ecosystem/integrations/openllm)
- [Modal](/docs/ecosystem/integrations/modal)
- [Jina](/docs/ecosystem/integrations/jina#deployment)
- [OpenLLM](/docs/integrations/providers/openllm)
- [Modal](/docs/integrations/providers/modal)
- [Jina](/docs/integrations/providers/jina)
These links will provide further information on each ecosystem, assisting you in finding the best fit for your LLM deployment needs.

@ -28,9 +28,7 @@
"cell_type": "code",
"execution_count": null,
"id": "9bdbfdc7c949a9c1",
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [],
"source": [
"!pip install \"optimum[onnxruntime]\""
@ -44,8 +42,7 @@
"ExecuteTime": {
"end_time": "2023-12-18T11:41:24.738278Z",
"start_time": "2023-12-18T11:41:20.842567Z"
},
"collapsed": false
}
},
"outputs": [],
"source": [
@ -80,7 +77,9 @@
"outputs": [
{
"data": {
"text/plain": "'hugging_face_injection_identifier'"
"text/plain": [
"'hugging_face_injection_identifier'"
]
},
"execution_count": 10,
"metadata": {},
@ -119,7 +118,9 @@
"outputs": [
{
"data": {
"text/plain": "'Name 5 cities with the biggest number of inhabitants'"
"text/plain": [
"'Name 5 cities with the biggest number of inhabitants'"
]
},
"execution_count": 11,
"metadata": {},
@ -374,7 +375,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.1"
}
},
"nbformat": 4,

@ -4,6 +4,6 @@ One of the key concerns with using LLMs is that they may generate harmful or une
- [Amazon Comprehend moderation chain](/docs/guides/safety/amazon_comprehend_chain): Use [Amazon Comprehend](https://aws.amazon.com/comprehend/) to detect and handle Personally Identifiable Information (PII) and toxicity.
- [Constitutional chain](/docs/guides/safety/constitutional_chain): Prompt the model with a set of principles which should guide the model behavior.
- [Hugging Face prompt injection identification](/docs/guides/safety/huggingface_prompt_injection_identification): Detect and handle prompt injection attacks.
- [Hugging Face prompt injection identification](/docs/guides/safety/hugging_face_prompt_injection): Detect and handle prompt injection attacks.
- [Logical Fallacy chain](/docs/guides/safety/logical_fallacy_chain): Checks the model output against logical fallacies to correct any deviation.
- [Moderation chain](/docs/guides/safety/moderation): Check if any output text is harmful and flag it.

@ -7,7 +7,7 @@
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/streamlit-agent?quickstart=1)
In this guide we will demonstrate how to use `StreamlitCallbackHandler` to display the thoughts and actions of an agent in an
interactive Streamlit app. Try it out with the running app below using the [MRKL agent](/docs/modules/agents/how_to/mrkl/):
interactive Streamlit app. Try it out with the running app below using the MRKL agent:
<iframe loading="lazy" src="https://langchain-mrkl.streamlit.app/?embed=true&embed_options=light_theme"
style={{ width: 100 + '%', border: 'none', marginBottom: 1 + 'rem', height: 600 }}

@ -346,7 +346,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We can use a [self-querying retriever](/docs/modules/data_connection/retrievers/how_to/self_query/) to improve our query accuracy, using this additional metadata:"
"We can use a [self-querying retriever](/docs/modules/data_connection/retrievers/self_query/) to improve our query accuracy, using this additional metadata:"
]
},
{
@ -656,7 +656,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
"version": "3.10.1"
}
},
"nbformat": 4,

@ -5,7 +5,7 @@
"metadata": {},
"source": [
"# Psychic\n",
"This notebook covers how to load documents from `Psychic`. See [here](/docs/ecosystem/integrations/psychic) for more details.\n",
"This notebook covers how to load documents from `Psychic`. See [here](/docs/integrations/providers/psychic) for more details.\n",
"\n",
"## Prerequisites\n",
"1. Follow the Quick Start section in [this document](/docs/ecosystem/integrations/psychic)\n",
@ -118,7 +118,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.10.1"
},
"vscode": {
"interpreter": {

@ -1,218 +1,218 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "959300d4",
"metadata": {},
"source": [
"# Hugging Face Local Pipelines\n",
"\n",
"Hugging Face models can be run locally through the `HuggingFacePipeline` class.\n",
"\n",
"The [Hugging Face Model Hub](https://huggingface.co/models) hosts over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.\n",
"\n",
"These can be called from LangChain either through this local pipeline wrapper or by calling their hosted inference endpoints through the HuggingFaceHub class. For more information on the hosted pipelines, see the [HuggingFaceHub](huggingface_hub.html) notebook."
]
},
{
"cell_type": "markdown",
"id": "4c1b8450-5eaf-4d34-8341-2d785448a1ff",
"metadata": {
"tags": []
},
"source": [
"To use, you should have the ``transformers`` python [package installed](https://pypi.org/project/transformers/), as well as [pytorch](https://pytorch.org/get-started/locally/). You can also install `xformer` for a more memory-efficient attention implementation."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d772b637-de00-4663-bd77-9bc96d798db2",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%pip install transformers --quiet"
]
},
{
"cell_type": "markdown",
"id": "91ad075f-71d5-4bc8-ab91-cc0ad5ef16bb",
"metadata": {},
"source": [
"### Model Loading\n",
"\n",
"Models can be loaded by specifying the model parameters using the `from_model_id` method."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "165ae236-962a-4763-8052-c4836d78a5d2",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline\n",
"\n",
"hf = HuggingFacePipeline.from_model_id(\n",
" model_id=\"gpt2\",\n",
" task=\"text-generation\",\n",
" pipeline_kwargs={\"max_new_tokens\": 10},\n",
")"
]
},
{
"cell_type": "markdown",
"id": "00104b27-0c15-4a97-b198-4512337ee211",
"metadata": {},
"source": [
"They can also be loaded by passing in an existing `transformers` pipeline directly"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline\n",
"from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\n",
"\n",
"model_id = \"gpt2\"\n",
"tokenizer = AutoTokenizer.from_pretrained(model_id)\n",
"model = AutoModelForCausalLM.from_pretrained(model_id)\n",
"pipe = pipeline(\"text-generation\", model=model, tokenizer=tokenizer, max_new_tokens=10)\n",
"hf = HuggingFacePipeline(pipeline=pipe)"
],
"id": "7f426a4f"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Chain\n",
"\n",
"With the model loaded into memory, you can compose it with a prompt to\n",
"form a chain."
],
"id": "60e7ba8d"
},
{
"cell_type": "code",
"execution_count": null,
"id": "3acf0069",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"\n",
"template = \"\"\"Question: {question}\n",
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"prompt = PromptTemplate.from_template(template)\n",
"\n",
"chain = prompt | hf\n",
"\n",
"question = \"What is electroencephalography?\"\n",
"\n",
"print(chain.invoke({\"question\": question}))"
]
},
{
"cell_type": "markdown",
"id": "dbbc3a37",
"metadata": {},
"source": [
"### GPU Inference\n",
"\n",
"When running on a machine with GPU, you can specify the `device=n` parameter to put the model on the specified device.\n",
"Defaults to `-1` for CPU inference.\n",
"\n",
"If you have multiple-GPUs and/or the model is too large for a single GPU, you can specify `device_map=\"auto\"`, which requires and uses the [Accelerate](https://huggingface.co/docs/accelerate/index) library to automatically determine how to load the model weights. \n",
"\n",
"*Note*: both `device` and `device_map` should not be specified together and can lead to unexpected behavior."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"gpu_llm = HuggingFacePipeline.from_model_id(\n",
" model_id=\"gpt2\",\n",
" task=\"text-generation\",\n",
" device=0, # replace with device_map=\"auto\" to use the accelerate library.\n",
" pipeline_kwargs={\"max_new_tokens\": 10},\n",
")\n",
"\n",
"gpu_chain = prompt | gpu_llm\n",
"\n",
"question = \"What is electroencephalography?\"\n",
"\n",
"print(gpu_chain.invoke({\"question\": question}))"
],
"id": "703c91c8"
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Batch GPU Inference\n",
"\n",
"If running on a device with GPU, you can also run inference on the GPU in batch mode."
],
"id": "59276016"
},
{
"cell_type": "code",
"execution_count": null,
"id": "097ba62f",
"metadata": {},
"outputs": [],
"source": [
"gpu_llm = HuggingFacePipeline.from_model_id(\n",
" model_id=\"bigscience/bloom-1b7\",\n",
" task=\"text-generation\",\n",
" device=0, # -1 for CPU\n",
" batch_size=2, # adjust as needed based on GPU map and model size.\n",
" model_kwargs={\"temperature\": 0, \"max_length\": 64},\n",
")\n",
"\n",
"gpu_chain = prompt | gpu_llm.bind(stop=[\"\\n\\n\"])\n",
"\n",
"questions = []\n",
"for i in range(4):\n",
" questions.append({\"question\": f\"What is the number {i} in french?\"})\n",
"\n",
"answers = gpu_chain.batch(questions)\n",
"for answer in answers:\n",
" print(answer)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
}
"cells": [
{
"cell_type": "markdown",
"id": "959300d4",
"metadata": {},
"source": [
"# Hugging Face Local Pipelines\n",
"\n",
"Hugging Face models can be run locally through the `HuggingFacePipeline` class.\n",
"\n",
"The [Hugging Face Model Hub](https://huggingface.co/models) hosts over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.\n",
"\n",
"These can be called from LangChain either through this local pipeline wrapper or by calling their hosted inference endpoints through the HuggingFaceHub class. For more information on the hosted pipelines, see the [HuggingFaceHub](./huggingface_hub) notebook."
]
},
"nbformat": 4,
"nbformat_minor": 5
}
{
"cell_type": "markdown",
"id": "4c1b8450-5eaf-4d34-8341-2d785448a1ff",
"metadata": {
"tags": []
},
"source": [
"To use, you should have the ``transformers`` python [package installed](https://pypi.org/project/transformers/), as well as [pytorch](https://pytorch.org/get-started/locally/). You can also install `xformer` for a more memory-efficient attention implementation."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d772b637-de00-4663-bd77-9bc96d798db2",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%pip install transformers --quiet"
]
},
{
"cell_type": "markdown",
"id": "91ad075f-71d5-4bc8-ab91-cc0ad5ef16bb",
"metadata": {},
"source": [
"### Model Loading\n",
"\n",
"Models can be loaded by specifying the model parameters using the `from_model_id` method."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "165ae236-962a-4763-8052-c4836d78a5d2",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline\n",
"\n",
"hf = HuggingFacePipeline.from_model_id(\n",
" model_id=\"gpt2\",\n",
" task=\"text-generation\",\n",
" pipeline_kwargs={\"max_new_tokens\": 10},\n",
")"
]
},
{
"cell_type": "markdown",
"id": "00104b27-0c15-4a97-b198-4512337ee211",
"metadata": {},
"source": [
"They can also be loaded by passing in an existing `transformers` pipeline directly"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7f426a4f",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline\n",
"from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\n",
"\n",
"model_id = \"gpt2\"\n",
"tokenizer = AutoTokenizer.from_pretrained(model_id)\n",
"model = AutoModelForCausalLM.from_pretrained(model_id)\n",
"pipe = pipeline(\"text-generation\", model=model, tokenizer=tokenizer, max_new_tokens=10)\n",
"hf = HuggingFacePipeline(pipeline=pipe)"
]
},
{
"cell_type": "markdown",
"id": "60e7ba8d",
"metadata": {},
"source": [
"### Create Chain\n",
"\n",
"With the model loaded into memory, you can compose it with a prompt to\n",
"form a chain."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3acf0069",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"\n",
"template = \"\"\"Question: {question}\n",
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"prompt = PromptTemplate.from_template(template)\n",
"\n",
"chain = prompt | hf\n",
"\n",
"question = \"What is electroencephalography?\"\n",
"\n",
"print(chain.invoke({\"question\": question}))"
]
},
{
"cell_type": "markdown",
"id": "dbbc3a37",
"metadata": {},
"source": [
"### GPU Inference\n",
"\n",
"When running on a machine with GPU, you can specify the `device=n` parameter to put the model on the specified device.\n",
"Defaults to `-1` for CPU inference.\n",
"\n",
"If you have multiple-GPUs and/or the model is too large for a single GPU, you can specify `device_map=\"auto\"`, which requires and uses the [Accelerate](https://huggingface.co/docs/accelerate/index) library to automatically determine how to load the model weights. \n",
"\n",
"*Note*: both `device` and `device_map` should not be specified together and can lead to unexpected behavior."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "703c91c8",
"metadata": {},
"outputs": [],
"source": [
"gpu_llm = HuggingFacePipeline.from_model_id(\n",
" model_id=\"gpt2\",\n",
" task=\"text-generation\",\n",
" device=0, # replace with device_map=\"auto\" to use the accelerate library.\n",
" pipeline_kwargs={\"max_new_tokens\": 10},\n",
")\n",
"\n",
"gpu_chain = prompt | gpu_llm\n",
"\n",
"question = \"What is electroencephalography?\"\n",
"\n",
"print(gpu_chain.invoke({\"question\": question}))"
]
},
{
"cell_type": "markdown",
"id": "59276016",
"metadata": {},
"source": [
"### Batch GPU Inference\n",
"\n",
"If running on a device with GPU, you can also run inference on the GPU in batch mode."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "097ba62f",
"metadata": {},
"outputs": [],
"source": [
"gpu_llm = HuggingFacePipeline.from_model_id(\n",
" model_id=\"bigscience/bloom-1b7\",\n",
" task=\"text-generation\",\n",
" device=0, # -1 for CPU\n",
" batch_size=2, # adjust as needed based on GPU map and model size.\n",
" model_kwargs={\"temperature\": 0, \"max_length\": 64},\n",
")\n",
"\n",
"gpu_chain = prompt | gpu_llm.bind(stop=[\"\\n\\n\"])\n",
"\n",
"questions = []\n",
"for i in range(4):\n",
" questions.append({\"question\": f\"What is the number {i} in french?\"})\n",
"\n",
"answers = gpu_chain.batch(questions)\n",
"for answer in answers:\n",
" print(answer)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -318,7 +318,7 @@
"metadata": {},
"source": [
"### Standard Cache\n",
"Use [Redis](/docs/ecosystem/integrations/redis) to cache prompts and responses."
"Use [Redis](/docs/integrations/partners/redis) to cache prompts and responses."
]
},
{
@ -404,7 +404,7 @@
"metadata": {},
"source": [
"### Semantic Cache\n",
"Use [Redis](/docs/ecosystem/integrations/redis) to cache prompts and responses and evaluate hits based on semantic similarity."
"Use [Redis](/docs/integrations/partners/redis) to cache prompts and responses and evaluate hits based on semantic similarity."
]
},
{
@ -728,7 +728,7 @@
},
"source": [
"## `Momento` Cache\n",
"Use [Momento](/docs/ecosystem/integrations/momento) to cache prompts and responses.\n",
"Use [Momento](/docs/integrations/partners/momento) to cache prompts and responses.\n",
"\n",
"Requires momento to use, uncomment below to install:"
]
@ -1588,7 +1588,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.1"
}
},
"nbformat": 4,

@ -191,7 +191,7 @@ We need to install the `boto3` library.
pip install boto3
```
See a [usage example](/docs/integrations/retrievers/amazon_bedrock_knowledge_bases).
See a [usage example](/docs/integrations/retrievers/bedrock).
```python
from langchain.retrievers import AmazonKnowledgeBasesRetriever
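# Hypothetical usage sketch: the knowledge base ID below is a placeholder,
# not a value from this repo.
retriever = AmazonKnowledgeBasesRetriever(
    knowledge_base_id="<knowledge-base-id>",
    retrieval_config={"vectorSearchConfiguration": {"numberOfResults": 4}},
)
docs = retriever.get_relevant_documents("What did the president say?")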

@ -196,7 +196,7 @@ We need to install several python packages.
pip install tensorflow google-cloud-aiplatform tensorflow-hub tensorflow-text
```
See a [usage example](/docs/integrations/vectorstores/matchingengine).
See a [usage example](/docs/integrations/vectorstores/google_vertex_ai_vector_search).
```python
from langchain_community.vectorstores import MatchingEngine
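# Hypothetical sketch: every identifier below is a placeholder.
vector_store = MatchingEngine.from_components(
    project_id="<project-id>",
    region="<region>",
    gcs_bucket_name="<bucket>",
    index_id="<index-id>",
    endpoint_id="<endpoint-id>",
)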

@ -41,7 +41,7 @@ from langchain_community.embeddings import AzureOpenAIEmbeddings
## LLMs
### Azure OpenAI
See a [usage example](/docs/integrations/llms/azure_openai_example).
See a [usage example](/docs/integrations/llms/azure_openai).
```python
from langchain_community.llms import AzureOpenAI
@ -165,7 +165,7 @@ The `UnstructuredExcelLoader` is used to load `Microsoft Excel` files. The loade
The page content will be the raw text of the Excel file. If you use the loader in `"elements"` mode, an HTML
representation of the Excel file will be available in the document metadata under the `text_as_html` key.
See a [usage example](/docs/integrations/document_loaders/excel).
See a [usage example](/docs/integrations/document_loaders/microsoft_excel).
```python
from langchain_community.document_loaders import UnstructuredExcelLoader
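# A minimal sketch; "example.xlsx" is a placeholder path.
loader = UnstructuredExcelLoader("example.xlsx", mode="elements")
docs = loader.load()
html = docs[0].metadata["text_as_html"]  # HTML view, available in "elements" mode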
@ -203,7 +203,7 @@ First, let's install dependencies:
pip install bs4 msal
```
See a [usage example](/docs/integrations/document_loaders/onenote).
See a [usage example](/docs/integrations/document_loaders/microsoft_onenote).
```python
from langchain_community.document_loaders.onenote import OneNoteLoader
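# Hypothetical sketch: the notebook, section, and page names are placeholders.
loader = OneNoteLoader(
    notebook_name="NotebookName",
    section_name="SectionName",
    page_title="PageTitle",
)
docs = loader.load()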

@ -36,7 +36,7 @@ If you are using a model hosted on `Azure`, you should use different wrapper for
```python
from langchain_community.llms import AzureOpenAI
```
For a more detailed walkthrough of the `Azure` wrapper, see [here](/docs/integrations/llms/azure_openai_example)
For a more detailed walkthrough of the `Azure` wrapper, see [here](/docs/integrations/llms/azure_openai)
## Chat model
@ -73,7 +73,7 @@ You can also use it to count tokens when splitting documents with
from langchain.text_splitter import CharacterTextSplitter
CharacterTextSplitter.from_tiktoken_encoder(...)
```
For a more detailed walkthrough of this, see [this notebook](/docs/modules/data_connection/document_transformers/text_splitters/split_by_token#tiktoken)
For a more detailed walkthrough of this, see [this notebook](/docs/modules/data_connection/document_transformers/split_by_token#tiktoken)
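For instance, a hedged sketch of token-based splitting (the chunk sizes are illustrative, and `long_text` stands in for your input string):

```python
from langchain.text_splitter import CharacterTextSplitter

text_splitter = CharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=100, chunk_overlap=0
)
chunks = text_splitter.split_text(long_text)
```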
## Document Loader

@ -8,13 +8,12 @@ This page covers how to use the Deep Lake ecosystem within LangChain.
Activeloop Deep Lake supports SelfQuery Retrieval:
[Activeloop Deep Lake Self Query Retrieval](/docs/extras/modules/data_connection/retrievers/self_query/activeloop_deeplake_self_query)
[Activeloop Deep Lake Self Query Retrieval](/docs/integrations/retrievers/self_query/activeloop_deeplake_self_query)
## More Resources
1. [Ultimate Guide to LangChain & Deep Lake: Build ChatGPT to Answer Questions on Your Financial Data](https://www.activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/)
2. [Twitter the-algorithm codebase analysis with Deep Lake](/docs/use_cases/question_answering/code/twitter-the-algorithm-analysis-deeplake)
3. [Code Understanding](/docs/modules/data_connection/retrievers/self_query/activeloop_deeplake_self_query)
4. Here are the [whitepaper](https://www.deeplake.ai/whitepaper) and the [academic paper](https://arxiv.org/pdf/2209.10785.pdf) for Deep Lake
5. Here is a set of additional resources available for review: [Deep Lake](https://github.com/activeloopai/deeplake), [Get started](https://docs.activeloop.ai/getting-started) and [Tutorials](https://docs.activeloop.ai/hub-tutorials)

@ -18,11 +18,11 @@ whether for semantic search or example selection.
from langchain_community.vectorstores import Chroma
```
For a more detailed walkthrough of the Chroma wrapper, see [this notebook](/docs/integrations/vectorstores/chroma)
For a more detailed walkthrough of the Chroma wrapper, see [this notebook](/docs/integrations/vectorstores/chroma_self_query)
## Retriever
See a [usage example](/docs/modules/data_connection/retrievers/how_to/self_query/chroma_self_query).
See a [usage example](/docs/integrations/retrievers/self_query/chroma).
```python
from langchain.retrievers import SelfQueryRetriever
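# Hypothetical sketch: assumes `llm`, a Chroma `vectorstore`,
# `document_content_description`, and `metadata_field_info` are defined.
retriever = SelfQueryRetriever.from_llm(
    llm, vectorstore, document_content_description, metadata_field_info
)
docs = retriever.get_relevant_documents("a query that includes metadata filters")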

@ -15,11 +15,11 @@ Get a [Cohere api key](https://dashboard.cohere.ai/) and set it as an environmen
|API|Description|Endpoint docs|Import|Example usage|
|---|---|---|---|---|
|Chat|Build chat bots|[chat](https://docs.cohere.com/reference/chat)|`from langchain_community.chat_models import ChatCohere`|[cohere.ipynb](/docs/docs/integrations/chat/cohere.ipynb)|
|LLM|Generate text|[generate](https://docs.cohere.com/reference/generate)|`from langchain_community.llms import Cohere`|[cohere.ipynb](/docs/docs/integrations/llms/cohere.ipynb)|
|RAG Retriever|Connect to external data sources|[chat + rag](https://docs.cohere.com/reference/chat)|`from langchain.retrievers import CohereRagRetriever`|[cohere.ipynb](/docs/docs/integrations/retrievers/cohere.ipynb)|
|Text Embedding|Embed strings to vectors|[embed](https://docs.cohere.com/reference/embed)|`from langchain_community.embeddings import CohereEmbeddings`|[cohere.ipynb](/docs/docs/integrations/text_embedding/cohere.ipynb)|
|Rerank Retriever|Rank strings based on relevance|[rerank](https://docs.cohere.com/reference/rerank)|`from langchain.retrievers.document_compressors import CohereRerank`|[cohere.ipynb](/docs/docs/integrations/retrievers/cohere-reranker.ipynb)|
|Chat|Build chat bots|[chat](https://docs.cohere.com/reference/chat)|`from langchain_community.chat_models import ChatCohere`|[cohere.ipynb](/docs/integrations/chat/cohere)|
|LLM|Generate text|[generate](https://docs.cohere.com/reference/generate)|`from langchain_community.llms import Cohere`|[cohere.ipynb](/docs/integrations/llms/cohere)|
|RAG Retriever|Connect to external data sources|[chat + rag](https://docs.cohere.com/reference/chat)|`from langchain.retrievers import CohereRagRetriever`|[cohere.ipynb](/docs/integrations/retrievers/cohere)|
|Text Embedding|Embed strings to vectors|[embed](https://docs.cohere.com/reference/embed)|`from langchain_community.embeddings import CohereEmbeddings`|[cohere.ipynb](/docs/integrations/text_embedding/cohere)|
|Rerank Retriever|Rank strings based on relevance|[rerank](https://docs.cohere.com/reference/rerank)|`from langchain.retrievers.document_compressors import CohereRerank`|[cohere.ipynb](/docs/integrations/retrievers/cohere-reranker)|
## Quick copy examples

@ -13,7 +13,7 @@ Databricks embraces the LangChain ecosystem in various ways:
Databricks connector for the SQLDatabase Chain
----------------------------------------------
You can connect to [Databricks runtimes](https://docs.databricks.com/runtime/index.html) and [Databricks SQL](https://www.databricks.com/product/databricks-sql) using the SQLDatabase wrapper of LangChain.
See the notebook [Connect to Databricks](/docs/use_cases/qa_structured/integrations/databricks) for details.
Databricks MLflow integrates with LangChain
-------------------------------------------

@ -11,7 +11,7 @@ Get a [Minimax group id](https://api.minimax.chat/user-center/basic-information)
## LLM
There exists a Minimax LLM wrapper, which you can access with
See a [usage example](/docs/modules/model_io/llms/integrations/minimax).
See a [usage example](/docs/integrations/llms/minimax).
```python
from langchain_community.llms import Minimax
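# Hypothetical sketch: assumes MINIMAX_API_KEY and MINIMAX_GROUP_ID are set
# in the environment.
llm = Minimax()
print(llm.invoke("What is the difference between a panda and a bear?"))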
@ -19,7 +19,7 @@ from langchain_community.llms import Minimax
## Chat Models
See a [usage example](/docs/modules/model_io/chat/integrations/minimax)
See a [usage example](/docs/integrations/chat/minimax)
```python
from langchain_community.chat_models import MiniMaxChat

@ -46,6 +46,6 @@ eng = sqlalchemy.create_engine(conn_str)
set_llm_cache(SQLAlchemyCache(engine=eng))
```
From here, see the [LLM Caching](/docs/modules/model_io/llms/how_to/llm_caching) documentation on how to use.
From here, see the [LLM Caching](/docs/integrations/llms/llm_caching) documentation on how to use it.
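
As a quick, hedged sketch of the cache in action (the model name and prompt are illustrative, and an OpenAI API key is assumed):

```python
from langchain_community.llms import OpenAI

llm = OpenAI(model_name="gpt-3.5-turbo-instruct")
llm.invoke("Tell me a joke")  # first call: hits the API and populates the cache
llm.invoke("Tell me a joke")  # identical second call: served from the cache
```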

@ -6,7 +6,7 @@
"source": [
"# Log, Trace, and Monitor\n",
"\n",
"When building apps or agents using Langchain, you end up making multiple API calls to fulfill a single user request. However, these requests are not chained when you want to analyse them. With [**Portkey**](/docs/ecosystem/integrations/portkey), all the embeddings, completion, and other requests from a single user request will get logged and traced to a common ID, enabling you to gain full visibility of user interactions.\n",
"When building apps or agents using Langchain, you end up making multiple API calls to fulfill a single user request. However, these requests are not chained when you want to analyse them. With [**Portkey**](/docs/integrations/providers/portkey), all the embeddings, completion, and other requests from a single user request will get logged and traced to a common ID, enabling you to gain full visibility of user interactions.\n",
"\n",
"This notebook serves as a step-by-step guide on how to log, trace, and monitor Langchain LLM calls using `Portkey` in your Langchain app."
]
@ -229,7 +229,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.1"
}
},
"nbformat": 4,

@ -1,6 +1,6 @@
# Reddit
>[Reddit](www.reddit.com) is an American social news aggregation, content rating, and discussion website.
>[Reddit](https://www.reddit.com) is an American social news aggregation, content rating, and discussion website.
## Installation and Setup

@ -13,7 +13,7 @@ pip install spacy
## Text Splitter
See a [usage example](/docs/modules/data_connection/document_transformers/text_splitters/split_by_token#spacy).
See a [usage example](/docs/modules/data_connection/document_transformers/split_by_token#spacy).
```python
from langchain.text_splitter import SpacyTextSplitter
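# A minimal sketch; assumes the spaCy `en_core_web_sm` pipeline is installed
# and `long_text` is your input string.
text_splitter = SpacyTextSplitter(chunk_size=1000)
chunks = text_splitter.split_text(long_text)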

@ -83,10 +83,10 @@ To use this pipeline, you can specify the `summary_config` argument in `similari
## Example Notebooks
For more detailed examples of using Vectara, see the following notebooks:
* [this notebook](/docs/integrations/vectorstores/vectara.html) shows how to use Vectara as a vectorstore for semantic search
* [this notebook](/docs/integrations/providers/vectara/vectara_chat.html) shows how to build a chatbot with Langchain and Vectara
* [this notebook](/docs/integrations/providers/vectara/vectara_summary.html) shows how to use the full Vectara RAG pipeline, including generative summarization
* [this notebook](/docs/integrations/retrievers/self_query/vectara_self_query.html) shows the self-query capability with Vectara.
* [this notebook](/docs/integrations/vectorstores/vectara) shows how to use Vectara as a vectorstore for semantic search
* [this notebook](/docs/integrations/providers/vectara/vectara_chat) shows how to build a chatbot with Langchain and Vectara
* [this notebook](/docs/integrations/providers/vectara/vectara_summary) shows how to use the full Vectara RAG pipeline, including generative summarization
* [this notebook](/docs/integrations/retrievers/self_query/vectara_self_query) shows the self-query capability with Vectara.

@ -41,7 +41,7 @@ Both of these approaches may be useful, with the first providing the LLM with co
from langchain.memory import ZepMemory
```
See a [RAG App Example here](/docs/docs/integrations/memory/zep_memory).
See a [RAG App Example here](/docs/integrations/memory/zep_memory).
## Memory Retriever

@ -13,7 +13,7 @@
"source": [
">[Activeloop Deep Memory](https://docs.activeloop.ai/performance-features/deep-memory) is a suite of tools that enables you to optimize your Vector Store for your use-case and achieve higher accuracy in your LLM apps.\n",
"\n",
"`Retrieval-Augmented Generatation` (`RAG`) has recently gained significant attention. As advanced RAG techniques and agents emerge, they expand the potential of what RAGs can accomplish. However, several challenges may limit the integration of RAGs into production. The primary factors to consider when implementing RAGs in production settings are accuracy (recall), cost, and latency. For basic use cases, OpenAI's Ada model paired with a naive similarity search can produce satisfactory results. Yet, for higher accuracy or recall during searches, one might need to employ advanced retrieval techniques. These methods might involve varying data chunk sizes, rewriting queries multiple times, and more, potentially increasing latency and costs. [Activeloop's](https://activeloop.ai/) [Deep Memory](https://www.activeloop.ai/resources/use-deep-memory-to-boost-rag-apps-accuracy-by-up-to-22/) a feature available to `Activeloop Deep Lake` users, addresses these issuea by introducing a tiny neural network layer trained to match user queries with relevant data from a corpus. While this addition incurs minimal latency during search, it can boost retrieval accuracy by up to 27\n",
"`Retrieval-Augmented Generatation` (`RAG`) has recently gained significant attention. As advanced RAG techniques and agents emerge, they expand the potential of what RAGs can accomplish. However, several challenges may limit the integration of RAGs into production. The primary factors to consider when implementing RAGs in production settings are accuracy (recall), cost, and latency. For basic use cases, OpenAI's Ada model paired with a naive similarity search can produce satisfactory results. Yet, for higher accuracy or recall during searches, one might need to employ advanced retrieval techniques. These methods might involve varying data chunk sizes, rewriting queries multiple times, and more, potentially increasing latency and costs. Activeloop's [Deep Memory](https://www.activeloop.ai/resources/use-deep-memory-to-boost-rag-apps-accuracy-by-up-to-22/) a feature available to `Activeloop Deep Lake` users, addresses these issuea by introducing a tiny neural network layer trained to match user queries with relevant data from a corpus. While this addition incurs minimal latency during search, it can boost retrieval accuracy by up to 27\n",
"% and remains cost-effective and simple to use, without requiring any additional advanced rag techniques.\n"
]
},
@ -253,10 +253,10 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Here above we showed the overall schema how deep_memory works. So as you can see, in order to train it you need relevence, queries together with corpus data (data that we want to query). Corpus data was already populated in the previous section, here we will be generating questions and relevance. \n",
"Here above we showed the overall schema how deep_memory works. So as you can see, in order to train it you need relevance, queries together with corpus data (data that we want to query). Corpus data was already populated in the previous section, here we will be generating questions and relevance. \n",
"\n",
"1. `questions` - is a text of strings, where each string represents a query\n",
"2. `relevence` - contains links to the ground truth for each question. There might be several docs that contain answer to the given question. Because of this relevenve is `List[List[tuple[str, float]]]`, where outer list represents queries and inner list relevent documents. Tuple contains str, float pair where string represent the id of the source doc (corresponds to the `id` tensor in the dataset), while float corresponds to how much current document is related to the question. "
"2. `relevance` - contains links to the ground truth for each question. There might be several docs that contain answer to the given question. Because of this relevenve is `List[List[tuple[str, float]]]`, where outer list represents queries and inner list relevant documents. Tuple contains str, float pair where string represent the id of the source doc (corresponds to the `id` tensor in the dataset), while float corresponds to how much current document is related to the question. "
]
},
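
A toy illustration of the shapes described above (the ids and scores are made up):

```python
questions = ["What is deamination?"]  # one string per query
relevance = [[("doc_id_42", 1.0)]]  # per query: (source doc id, relatedness score) pairs
```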
{
@ -632,13 +632,13 @@
}
],
"source": [
"retriver = db.as_retriever()\n",
"retriver.search_kwargs[\"deep_memory\"] = True\n",
"retriver.search_kwargs[\"k\"] = 10\n",
"retriever = db.as_retriever()\n",
"retriever.search_kwargs[\"deep_memory\"] = True\n",
"retriever.search_kwargs[\"k\"] = 10\n",
"\n",
"query = \"Deamination of cytidine to uridine on the minus strand of viral DNA results in catastrophic G-to-A mutations in the viral genome.\"\n",
"qa = RetrievalQA.from_chain_type(\n",
" llm=OpenAIChat(model=\"gpt-4\"), chain_type=\"stuff\", retriever=retriver\n",
" llm=OpenAIChat(model=\"gpt-4\"), chain_type=\"stuff\", retriever=retriever\n",
")\n",
"print(qa.run(query))"
]
@ -664,13 +664,13 @@
}
],
"source": [
"retriver = db.as_retriever()\n",
"retriver.search_kwargs[\"deep_memory\"] = False\n",
"retriver.search_kwargs[\"k\"] = 10\n",
"retriever = db.as_retriever()\n",
"retriever.search_kwargs[\"deep_memory\"] = False\n",
"retriever.search_kwargs[\"k\"] = 10\n",
"\n",
"query = \"Deamination of cytidine to uridine on the minus strand of viral DNA results in catastrophic G-to-A mutations in the viral genome.\"\n",
"qa = RetrievalQA.from_chain_type(\n",
" llm=OpenAIChat(model=\"gpt-4\"), chain_type=\"stuff\", retriever=retriver\n",
" llm=OpenAIChat(model=\"gpt-4\"), chain_type=\"stuff\", retriever=retriever\n",
")\n",
"qa.run(query)"
]
@ -706,7 +706,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.1"
}
},
"nbformat": 4,

@ -11,8 +11,6 @@
"\n",
"This notebook demonstrates a sample composition of the `Speak`, `Klarna`, and `Spoonacluar` APIs.\n",
"\n",
"For a detailed walkthrough of the OpenAPI chains wrapped within the NLAToolkit, see the [OpenAPI Operation Chain](/docs/use_cases/apis/openapi) notebook.\n",
"\n",
"### First, import dependencies and load the LLM"
]
},
@ -418,7 +416,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.1"
}
},
"nbformat": 4,

@ -6,7 +6,7 @@
"source": [
"# Apify\n",
"\n",
"This notebook shows how to use the [Apify integration](/docs/ecosystem/integrations/apify) for LangChain.\n",
"This notebook shows how to use the [Apify integration](/docs/integrations/providers/apify) for LangChain.\n",
"\n",
"[Apify](https://apify.com) is a cloud platform for web scraping and data extraction,\n",
"which provides an [ecosystem](https://apify.com/store) of more than a thousand\n",
@ -160,7 +160,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.10.1"
}
},
"nbformat": 4,

@ -45,13 +45,12 @@
"\n",
"Read the [help document](https://www.alibabacloud.com/help/en/opensearch/latest/vector-search) to quickly familiarize and configure OpenSearch Vector Search Edition instance.\n",
"\n",
"If you encounter any problems during use, please feel free to contact [xingshaomin.xsm@alibaba-inc.com](xingshaomin.xsm@alibaba-inc.com), and we will do our best to provide you with assistance and support."
"If you encounter any problems during use, please feel free to contact xingshaomin.xsm@alibaba-inc.com, and we will do our best to provide you with assistance and support."
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
@ -63,7 +62,6 @@
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
@ -84,7 +82,6 @@
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
@ -97,7 +94,6 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
@ -124,7 +120,6 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
@ -153,7 +148,6 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
@ -188,7 +182,6 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
@ -251,7 +244,6 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
@ -278,7 +270,6 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
@ -303,7 +294,6 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
@ -333,7 +323,6 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
@ -359,7 +348,6 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
},
@ -408,7 +396,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.1"
}
},
"nbformat": 4,

@ -346,7 +346,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.1"
}
},
"nbformat": 4,

@ -32,7 +32,7 @@
"* Stemming-based query expansion in [many languages](https://redis.io/docs/stack/search/reference/stemming/) (using [Snowball](http://snowballstem.org/))\n",
"* Support for Chinese-language tokenization and querying (using [Friso](https://github.com/lionsoul2014/friso))\n",
"* Numeric filters and ranges\n",
"* Geospatial searches using [Redis geospatial indexing](/commands/georadius)\n",
"* Geospatial searches using Redis geospatial indexing\n",
"* A powerful aggregations engine\n",
"* Supports for all utf-8 encoded text\n",
"* Retrieve full documents, selected fields, or only the document IDs\n",
@ -1269,7 +1269,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.1"
}
},
"nbformat": 4,

@ -46,7 +46,7 @@
"- [Building a custom agent](/docs/modules/agents/how_to/custom_agent)\n",
"- [Streaming (of both intermediate steps and tokens](/docs/modules/agents/how_to/streaming)\n",
"- [Building an agent that returns structured output](/docs/modules/agents/how_to/agent_structured)\n",
"- Lots functionality around using AgentExecutor, including: [using it as an iterator](/docs/modules/agents/how_to/agent_iter), [handle parsing errors](/docs/modules/agents/how_to/handle_parsing_errors), [returning intermediate steps](/docs/modules/agents/how_to/itermediate_steps), [capping the max number of iterations](/docs/modules/agents/how_to/max_iterations), and [timeouts for agents](/docs/modules/agents/how_to/max_time_limit)"
"- Lots functionality around using AgentExecutor, including: [using it as an iterator](/docs/modules/agents/how_to/agent_iter), [handle parsing errors](/docs/modules/agents/how_to/handle_parsing_errors), [returning intermediate steps](/docs/modules/agents/how_to/intermediate_steps), [capping the max number of iterations](/docs/modules/agents/how_to/max_iterations), and [timeouts for agents](/docs/modules/agents/how_to/max_time_limit)"
]
},
{
@ -74,7 +74,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.1"
}
},
"nbformat": 4,

@ -9,7 +9,7 @@
"\n",
"This notebook goes over adding memory to an Agent. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:\n",
"\n",
"- [Memory in LLMChain](/docs/modules/memory/how_to/adding_memory)\n",
"- [Memory in LLMChain](/docs/modules/memory/adding_memory)\n",
"- [Custom Agents](/docs/modules/agents/how_to/custom_agent)\n",
"\n",
"In order to add a memory to an agent we are going to perform the following steps:\n",
@ -318,7 +318,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.1"
}
},
"nbformat": 4,

@ -9,9 +9,9 @@
"\n",
"This notebook goes over adding memory to an Agent where the memory uses an external message store. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:\n",
"\n",
"- [Memory in LLMChain](/docs/modules/memory/how_to/adding_memory)\n",
"- [Memory in LLMChain](/docs/modules/memory/adding_memory)\n",
"- [Custom Agents](/docs/modules/agents/how_to/custom_agent)\n",
"- [Memory in Agent](/docs/modules/memory/how_to/agent_with_memory)\n",
"- [Memory in Agent](/docs/modules/memory/agent_with_memory)\n",
"\n",
"In order to add a memory with an external message store to an agent we are going to do the following steps:\n",
"\n",
@ -348,7 +348,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.1"
}
},
"nbformat": 4,

@ -101,11 +101,11 @@
"metadata": {},
"source": [
"You can create custom prompt templates that format the prompt in any way you want.\n",
"For more information, see [Custom Prompt Templates](./custom_prompt_template.html).\n",
"For more information, see [Custom Prompt Templates](./custom_prompt_template).\n",
"\n",
"## `ChatPromptTemplate`\n",
"\n",
"The prompt to [chat models](../models/chat) is a list of chat messages.\n",
"The prompt to [chat models](../chat) is a list of chat messages.\n",
"\n",
"Each chat message is associated with content, and an additional parameter called `role`.\n",
"For example, in the OpenAI [Chat Completions API](https://platform.openai.com/docs/guides/chat/introduction), a chat message can be associated with an AI assistant, a human or a system role.\n",

@ -29,7 +29,7 @@
"\n",
"The pipeline for QA over code follows [the steps we do for document question answering](/docs/use_cases/question_answering), with some differences:\n",
"\n",
"In particular, we can employ a [splitting strategy](https://python.langchain.com/docs/integrations/document_loaders/source_code) that does a few things:\n",
"In particular, we can employ a [splitting strategy](/docs/integrations/document_loaders/source_code) that does a few things:\n",
"\n",
"* Keeps each top-level function and class in the code is loaded into separate documents. \n",
"* Puts remaining into a separate document.\n",
@ -183,7 +183,7 @@
"\n",
"When setting up the vectorstore retriever:\n",
"\n",
"* We test [max marginal relevance](/docs/docs/use_cases/question_answering) for retrieval\n",
"* We test [max marginal relevance](/docs/use_cases/question_answering) for retrieval\n",
"* And 8 documents returned\n",
"\n",
"#### Go deeper\n",
@ -216,12 +216,12 @@
"source": [
"### Chat\n",
"\n",
"Test chat, just as we do for [chatbots](/docs/docs/use_cases/chatbots).\n",
"Test chat, just as we do for [chatbots](/docs/use_cases/chatbots).\n",
"\n",
"#### Go deeper\n",
"\n",
"- Browse the > 55 LLM and chat model integrations [here](https://integrations.langchain.com/).\n",
"- See further documentation on LLMs and chat models [here](/docs/modules/model_io/models/).\n",
"- See further documentation on LLMs and chat models [here](/docs/modules/model_io/).\n",
"- Use local LLMS: The popularity of [PrivateGPT](https://github.com/imartinez/privateGPT) and [GPT4All](https://github.com/nomic-ai/gpt4all) underscore the importance of running LLMs locally."
]
},
@ -1063,7 +1063,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.1"
}
},
"nbformat": 4,

@ -1269,7 +1269,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.1"
}
},
"nbformat": 4,

@ -39,7 +39,7 @@
"**Note**: Here we focus on Q&A for unstructured data. Two RAG use cases which we cover elsewhere are:\n",
"\n",
"- [Q&A over structured data](/docs/use_cases/qa_structured/sql) (e.g., SQL)\n",
"- [Q&A over code](/docs/use_cases/question_answering/code_understanding) (e.g., Python)"
"- [Q&A over code](/docs/use_cases/code_understanding) (e.g., Python)"
]
},
{
@ -65,7 +65,7 @@
"\n",
"#### Retrieval and generation\n",
"4. **Retrieve**: Given a user input, relevant splits are retrieved from storage using a [Retriever](/docs/modules/data_connection/retrievers/).\n",
"5. **Generate**: A [ChatModel](/docs/modules/model_io/chat_models) / [LLM](/docs/modules/model_io/llms/) produces an answer using a prompt that includes the question and the retrieved data\n",
"5. **Generate**: A [ChatModel](/docs/modules/model_io/chat) / [LLM](/docs/modules/model_io/llms/) produces an answer using a prompt that includes the question and the retrieved data\n",
"\n",
"![retrieval_diagram](../../../static/img/rag_retrieval_generation.png)"
]
@ -89,9 +89,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "poetry-venv"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@ -103,7 +103,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.1"
}
},
"nbformat": 4,

@ -55,7 +55,7 @@
"\n",
"#### Retrieval and generation\n",
"4. **Retrieve**: Given a user input, relevant splits are retrieved from storage using a [Retriever](/docs/modules/data_connection/retrievers/).\n",
"5. **Generate**: A [ChatModel](/docs/modules/model_io/chat_models) / [LLM](/docs/modules/model_io/llms/) produces an answer using a prompt that includes the question and the retrieved data"
"5. **Generate**: A [ChatModel](/docs/modules/model_io/chat) / [LLM](/docs/modules/model_io/llms/) produces an answer using a prompt that includes the question and the retrieved data"
]
},
{
@ -356,7 +356,7 @@
"\n",
"To handle this we'll split the `Document` into chunks for embedding and vector storage. This should help us retrieve only the most relevant bits of the blog post at run time.\n",
"\n",
"In this case we'll split our documents into chunks of 1000 characters with 200 characters of overlap between chunks. The overlap helps mitigate the possibility of separating a statement from important context related to it. We use the [RecursiveCharacterTextSplitter](/docs/modules/data_connection/document_transformers/text_splitters/recursive_text_splitter), which will recursively split the document using common separators like new lines until each chunk is the appropriate size. This is the recommended text splitter for generic text use cases.\n",
"In this case we'll split our documents into chunks of 1000 characters with 200 characters of overlap between chunks. The overlap helps mitigate the possibility of separating a statement from important context related to it. We use the [RecursiveCharacterTextSplitter](/docs/modules/data_connection/document_transformers/recursive_text_splitter), which will recursively split the document using common separators like new lines until each chunk is the appropriate size. This is the recommended text splitter for generic text use cases.\n",
"\n",
"We set `add_start_index=True` so that the character index at which each split Document starts within the initial Document is preserved as metadata attribute \"start_index\"."
]
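
A sketch matching the parameters described above (assumes `docs` holds the loaded Documents):

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=200, add_start_index=True
)
all_splits = text_splitter.split_documents(docs)
```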
@ -449,7 +449,7 @@
"\n",
"`TextSplitter`: Object that splits a list of `Document`s into smaller chunks. Subclass of `DocumentTransformer`s.\n",
"- Explore `Context-aware splitters`, which keep the location (\"context\") of each split in the original `Document`:\n",
" - [Markdown files](/docs/use_cases/question_answering/document-context-aware-QA)\n",
" - [Markdown files](/docs/modules/data_connection/document_transformers/markdown_header_metadata)\n",
" - [Code (py or js)](docs/integrations/document_loaders/source_code)\n",
" - [Scientific papers](/docs/integrations/document_loaders/grobid)\n",
"- [Interface](https://api.python.langchain.com/en/latest/text_splitter/langchain.text_splitter.TextSplitter.html): API reference for the base interface.\n",
@ -852,9 +852,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "poetry-venv"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@ -866,7 +866,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.1"
}
},
"nbformat": 4,
