docs/fix links (#6498)

master
Davis Chase 11 months ago committed by GitHub
parent ae6196507d
commit 3298bf4f00

@ -54,7 +54,7 @@ Learn best practices for developing with LangChain.
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/ecosystem/integrations/) and [dependent repos](/docs/ecosystem/dependents.html).
### [Additional resources](/docs/additional_resources/)
Our community is full of prolific developers, creative builders, and fantastic teachers. Check out [YouTube tutorials](/docs/ecosystem/youtube.html) for great tutorials from folks in the community, and [Gallery](https://github.com/kyrolabs/awesome-langchain) for a list of awesome LangChain projects, compiled by the folks at [KyroLabs](https://kyrolabs.com).
Our community is full of prolific developers, creative builders, and fantastic teachers. Check out [YouTube tutorials](/docs/additional_resources/youtube.html) for great tutorials from folks in the community, and [Gallery](https://github.com/kyrolabs/awesome-langchain) for a list of awesome LangChain projects, compiled by the folks at [KyroLabs](https://kyrolabs.com).
<h3><span style={{color:"#2e8555"}}> Support </span></h3>

@ -1,6 +1,6 @@
# Document QA
Here we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our [Document chains](../document.html).
Here we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our [Document chains](/docs/modules/chains/document/).
import Example from "@snippets/modules/chains/additional/question_answering.mdx"
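For example, a minimal sketch of this pattern (assuming `docs` is a list of already-loaded documents and an OpenAI key is configured):

```python
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI

# "stuff" puts all documents into a single prompt; see the Document chains page
# for the other combine strategies (map_reduce, refine, map_rerank)
chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
chain.run(input_documents=docs, question="What did the author say about X?")
```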

@ -23,8 +23,8 @@ const config = {
// For GitHub pages deployment, it is often '/<projectName>/'
baseUrl: "/",
onBrokenLinks: "ignore",
onBrokenMarkdownLinks: "ignore",
onBrokenLinks: "warn",
onBrokenMarkdownLinks: "throw",
plugins: [
() => ({

@ -18,4 +18,4 @@ whether for semantic search or example selection.
from langchain.vectorstores import AwaDB
```
For a more detailed walkthrough of the AwaDB wrapper, see [this notebook](../modules/indexes/vectorstores/examples/awadb.ipynb)
For a more detailed walkthrough of the AwaDB wrapper, see [here](/docs/modules/data_connection/vectorstores/integrations/awadb.html).
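For example, a rough sketch following the generic vectorstore interface (the embedding model below is an assumption; AwaDB's exact constructor options may differ):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import AwaDB

# build a small store from raw texts and query it
db = AwaDB.from_texts(
    texts=["AwaDB is an AI-native vector database.", "LangChain supports many vectorstores."],
    embedding=OpenAIEmbeddings(),  # assumed; AwaDB may also embed texts itself
)
db.similarity_search("What is AwaDB?", k=1)
```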

@ -12,12 +12,12 @@ Databricks embraces the LangChain ecosystem in various ways:
Databricks connector for the SQLDatabase Chain
----------------------------------------------
You can connect to [Databricks runtimes](https://docs.databricks.com/runtime/index.html) and [Databricks SQL](https://www.databricks.com/product/databricks-sql) using the SQLDatabase wrapper of LangChain. See the notebook [Connect to Databricks](./databricks/databricks.html) for details.
You can connect to [Databricks runtimes](https://docs.databricks.com/runtime/index.html) and [Databricks SQL](https://www.databricks.com/product/databricks-sql) using the SQLDatabase wrapper of LangChain. See the notebook [Connect to Databricks](/docs/ecosystem/integrations/databricks/databricks.html) for details.
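For example, a minimal sketch (inside a Databricks notebook the host and credentials can typically be inferred from the runtime context; the `samples.nyctaxi` catalog/schema below is just an illustrative assumption):

```python
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain

db = SQLDatabase.from_databricks(catalog="samples", schema="nyctaxi")
db_chain = SQLDatabaseChain.from_llm(OpenAI(temperature=0), db, verbose=True)
db_chain.run("What is the average trip distance?")
```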
Databricks-managed MLflow integrates with LangChain
---------------------------------------------------
MLflow is an open source platform to manage the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry. See the notebook [MLflow Callback Handler](./mlflow_tracking.ipynb) for details about MLflow's integration with LangChain.
MLflow is an open source platform to manage the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry. See the notebook [MLflow Callback Handler](/docs/ecosystem/integrations/mlflow_tracking.ipynb) for details about MLflow's integration with LangChain.
Databricks provides a fully managed and hosted version of MLflow integrated with enterprise security features, high availability, and other Databricks workspace features such as experiment and run management and notebook revision capture. MLflow on Databricks offers an integrated experience for tracking and securing machine learning model training runs and running machine learning projects. See [MLflow guide](https://docs.databricks.com/mlflow/index.html) for more details.
@ -26,11 +26,11 @@ Databricks-managed MLflow makes it more convenient to develop LangChain applicat
Databricks as an LLM provider
-----------------------------
The notebook [Wrap Databricks endpoints as LLMs](../modules/models/llms/integrations/databricks.html) illustrates the method to wrap Databricks endpoints as LLMs in LangChain. It supports two types of endpoints: the serving endpoint, which is recommended for both production and development, and the cluster driver proxy app, which is recommended for interactive development.
The notebook [Wrap Databricks endpoints as LLMs](/docs/modules/model_io/models/llms/integrations/databricks.html) illustrates the method to wrap Databricks endpoints as LLMs in LangChain. It supports two types of endpoints: the serving endpoint, which is recommended for both production and development, and the cluster driver proxy app, which is recommended for interactive development.
Databricks endpoints support Dolly, but are also great for hosting models like MPT-7B or any other models from the Hugging Face ecosystem. Databricks endpoints can also be used with proprietary models like OpenAI to provide a governance layer for enterprises.
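For example, a minimal sketch of wrapping a serving endpoint (the endpoint name `dolly` below is an assumption; replace it with one from your workspace):

```python
from langchain.llms import Databricks

llm = Databricks(endpoint_name="dolly")  # assumed endpoint name
llm("How are you?")
```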
Databricks Dolly
----------------
Databricks Dolly is an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. The model is available on Hugging Face Hub as databricks/dolly-v2-12b. See the notebook [Hugging Face Hub](../modules/models/llms/integrations/huggingface_hub.html) for instructions to access it through the Hugging Face Hub integration with LangChain.
Databricks Dolly is an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. The model is available on Hugging Face Hub as databricks/dolly-v2-12b. See the notebook [Hugging Face Hub](/docs/modules/model_io/models/llms/integrations/huggingface_hub.html) for instructions to access it through the Hugging Face Hub integration with LangChain.

@ -29,4 +29,4 @@ from langchain.agents import load_tools
tools = load_tools(["google-search"])
```
For more information on this, see [this page](/docs/modules/agents/tools/getting_started.md)
For more information on tools, see [this page](/docs/modules/agents/tools/).

@ -70,4 +70,4 @@ from langchain.agents import load_tools
tools = load_tools(["google-serper"])
```
For more information on this, see [this page](/docs/modules/agents/tools/getting_started.md)
For more information on tools, see [this page](/docs/modules/agents/tools/).

@ -66,4 +66,4 @@ For a more detailed walkthrough of this, see [this notebook](/docs/modules/data_
The Hugging Face Hub has lots of great [datasets](https://huggingface.co/datasets) that can be used to evaluate your LLM chains.
For a detailed walkthrough of how to use them to do so, see [this notebook](../use_cases/evaluation/huggingface_datasets.html)
For a detailed walkthrough of how to use them to do so, see [this notebook](/docs/use_cases/evaluation/huggingface_datasets.html)

@ -41,4 +41,4 @@ from langchain.agents import load_tools
tools = load_tools(["openweathermap-api"])
```
For more information on this, see [this page](/docs/modules/agents/tools/getting_started.md)
For more information on tools, see [this page](/docs/modules/agents/tools/).

@ -40,7 +40,7 @@ for res in llm_results.generations:
```
You can use the PromptLayer request ID to add a prompt, score, or other metadata to your request. [Read more about it here](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).
This LLM is identical to the [OpenAI LLM](./openai.md), except that
This LLM is identical to the [OpenAI](/docs/ecosystem/integrations/openai.html) LLM, except that
- all your requests will be logged to your PromptLayer account
- you can add `pl_tags` when instantiating to tag your requests on PromptLayer
- you can add `return_pl_id` when instantiating to return a PromptLayer request id to use [while tracking requests](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).
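Putting these options together, a short sketch (assuming `PROMPTLAYER_API_KEY` and `OPENAI_API_KEY` are set in your environment):

```python
from langchain.llms import PromptLayerOpenAI

llm = PromptLayerOpenAI(pl_tags=["langchain"], return_pl_id=True)
llm_results = llm.generate(["Tell me a joke"])

for res in llm_results.generations:
    # with return_pl_id=True the request id is attached to generation_info
    pl_request_id = res[0].generation_info["pl_request_id"]
```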

@ -87,4 +87,4 @@ arxiv_tool = SearxSearchResults(name="Arxiv", wrapper=wrapper,
})
```
For more information on tools, see [this page](../modules/agents/tools/getting_started.md)
For more information on tools, see [this page](/docs/modules/agents/tools/).

@ -54,7 +54,7 @@ The results are returned as a list of relevant documents, and a relevance score
For more detailed examples of using the Vectara wrapper, see one of these two sample notebooks:
* [Chat Over Documents with Vectara](./vectara/vectara_chat.html)
* [Vectara Text Generation](./vectara/vectara_text_generation.html)
* [Chat Over Documents with Vectara](./vectara_chat.html)
* [Vectara Text Generation](./vectara_text_generation.html)
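For a quick sketch of the wrapper itself (the credentials below are placeholders; they can typically also be supplied via environment variables):

```python
from langchain.vectorstores import Vectara

vectara = Vectara(
    vectara_customer_id="<CUSTOMER_ID>",
    vectara_corpus_id="<CORPUS_ID>",
    vectara_api_key="<API_KEY>",
)
# returns (document, relevance score) pairs
results = vectara.similarity_search_with_score("What does the corpus say about LangChain?")
```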

@ -36,4 +36,4 @@ from langchain.agents import load_tools
tools = load_tools(["wolfram-alpha"])
```
For more information on this, see [this page](/docs/modules/agents/tools/getting_started.md)
For more information on tools, see [this page](/docs/modules/agents/tools/).

@ -7,7 +7,7 @@
"source": [
"# Evaluating an OpenAPI Chain\n",
"\n",
"This notebook goes over ways to semantically evaluate an [OpenAPI Chain](openapi.html), which calls an endpoint defined by the OpenAPI specification using purely natural language."
"This notebook goes over ways to semantically evaluate an [OpenAPI Chain](/docs/modules/chains/additiona/openapi.html), which calls an endpoint defined by the OpenAPI specification using purely natural language."
]
},
{
@ -967,7 +967,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.11.3"
}
},
"nbformat": 4,

@ -1,474 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "5371a9bb",
"metadata": {},
"source": [
"# Tracing Walkthrough\n",
"\n",
"There are two recommended ways to trace your LangChains:\n",
"\n",
"1. Setting the `LANGCHAIN_TRACING` environment variable to \"true\".\n",
"1. Using a context manager with tracing_enabled() to trace a particular block of code.\n",
"\n",
"**Note** if the environment variable is set, all code will be traced, regardless of whether or not it's within the context manager."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "17c04cc6-c93d-4b6c-a033-e897577f4ed1",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"LANGCHAIN_TRACING\"] = \"true\"\n",
"\n",
"## Uncomment below if using hosted setup.\n",
"# os.environ[\"LANGCHAIN_ENDPOINT\"] = \"https://langchain-api-gateway-57eoxz8z.uc.gateway.dev\"\n",
"\n",
"## Uncomment below if you want traces to be recorded to \"my_session\" instead of \"default\".\n",
"# os.environ[\"LANGCHAIN_SESSION\"] = \"my_session\"\n",
"\n",
"## Better to set this environment variable in the terminal\n",
"## Uncomment below if using hosted version. Replace \"my_api_key\" with your actual API Key.\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = \"my_api_key\"\n",
"\n",
"import langchain\n",
"from langchain.agents import Tool, initialize_agent, load_tools\n",
"from langchain.agents import AgentType\n",
"from langchain.callbacks import tracing_enabled\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.llms import OpenAI"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "1b62cd48",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Agent run with tracing. Ensure that OPENAI_API_KEY is set appropriately to run this example.\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"tools = load_tools([\"llm-math\"], llm=llm)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "bfa16b79-aa4b-4d41-a067-70d1f593f667",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to use a calculator to solve this.\n",
"Action: Calculator\n",
"Action Input: 2^.123243\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 1.0891804557407723\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: 1.0891804557407723\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'1.0891804557407723'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent = initialize_agent(\n",
" tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n",
")\n",
"\n",
"agent.run(\"What is 2 raised to .123243 power?\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "4829eb1d",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mI need to use a calculator to solve this.\n",
"Action: Calculator\n",
"Action Input: 2 ^ .123243\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 1.0891804557407723\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI now know the answer to the question. \n",
"Final Answer: 1.0891804557407723\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'1.0891804557407723'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Agent run with tracing using a chat model\n",
"agent = initialize_agent(\n",
" tools,\n",
" ChatOpenAI(temperature=0),\n",
" agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n",
" verbose=True,\n",
")\n",
"\n",
"agent.run(\"What is 2 raised to .123243 power?\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "76abfd82",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mI need to use a calculator to solve this.\n",
"Action: Calculator\n",
"Action Input: 2 ^ .123243\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 1.0891804557407723\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI now know the answer to the question. \n",
"Final Answer: 1.0891804557407723\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mI need to use a calculator to solve this.\n",
"Action: Calculator\n",
"Action Input: 5 ^ .123243\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 1.2193914912400514\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI now know the answer to the question. \n",
"Final Answer: 1.2193914912400514\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
}
],
"source": [
"# Both of the agent runs will be traced because the environment variable is set\n",
"agent.run(\"What is 2 raised to .123243 power?\")\n",
"with tracing_enabled() as session:\n",
" agent.run(\"What is 5 raised to .123243 power?\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "fe833c33-033f-4806-be0c-cc3d147db13d",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mI need to use a calculator to solve this.\n",
"Action: Calculator\n",
"Action Input: 5 ^ .123243\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 1.2193914912400514\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI now know the answer to the question. \n",
"Final Answer: 1.2193914912400514\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mI need to use a calculator to solve this.\n",
"Action: Calculator\n",
"Action Input: 2 ^ .123243\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 1.0891804557407723\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI now know the answer to the question. \n",
"Final Answer: 1.0891804557407723\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'1.0891804557407723'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Now, we unset the environment variable and use a context manager.\n",
"if \"LANGCHAIN_TRACING\" in os.environ:\n",
" del os.environ[\"LANGCHAIN_TRACING\"]\n",
"\n",
"# here, we are writing traces to \"my_test_session\"\n",
"with tracing_enabled(\"my_session\") as session:\n",
" assert session\n",
" agent.run(\"What is 5 raised to .123243 power?\") # this should be traced\n",
"\n",
"agent.run(\"What is 2 raised to .123243 power?\") # this should not be traced"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "b34105a4-be8e-46e4-8abe-01adba3ba727",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\n",
"\u001b[32;1m\u001b[1;3mI need to use a calculator to solve this.\n",
"Action: Calculator\n",
"Action Input: 3^0.123\u001b[0m\u001b[32;1m\u001b[1;3mI need to use a calculator to solve this.\n",
"Action: Calculator\n",
"Action Input: 2^0.123\u001b[0m\u001b[32;1m\u001b[1;3mAny number raised to the power of 0 is 1, but I'm not sure about a decimal power.\n",
"Action: Calculator\n",
"Action Input: 1^.123\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 1.1446847956963533\u001b[0m\n",
"Thought:\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 1.0889970153361064\u001b[0m\n",
"Thought:\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 1.0\u001b[0m\n",
"Thought:\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'1.0'"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# The context manager is concurrency safe:\n",
"import asyncio\n",
"\n",
"if \"LANGCHAIN_TRACING\" in os.environ:\n",
" del os.environ[\"LANGCHAIN_TRACING\"]\n",
"\n",
"questions = [f\"What is {i} raised to .123 power?\" for i in range(1, 4)]\n",
"\n",
"# start a background task\n",
"task = asyncio.create_task(agent.arun(questions[0])) # this should not be traced\n",
"with tracing_enabled() as session:\n",
" assert session\n",
" tasks = [agent.arun(q) for q in questions[1:3]] # these should be traced\n",
" await asyncio.gather(*tasks)\n",
"\n",
"await task"
]
},
{
"cell_type": "markdown",
"id": "c552a5dd-cbca-48b9-90e6-930076006f78",
"metadata": {},
"source": [
"## [Beta] Tracing V2\n",
"\n",
"We are rolling out a newer version of our tracing service with more features coming soon. Here are the instructions on how to use it to trace your runs.\n",
"\n",
"To use, you can use the `tracing_v2_enabled` context manager or set `LANGCHAIN_TRACING_V2 = 'true'`\n",
"\n",
"**Option 1 (Local)**: \n",
"* Run the local LangChainPlus Server\n",
"```\n",
"pip install --upgrade langchain\n",
"langchain plus start\n",
"```\n",
"\n",
"**Option 2 (Hosted)**:\n",
"* After making an account an grabbing a LangChainPlus API Key, set the `LANGCHAIN_ENDPOINT` and `LANGCHAIN_API_KEY` environment variables"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "87027b0d-3a61-47cf-8a65-3002968be7f9",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_ENDPOINT\"] = \"https://api.langchain.plus\" # Uncomment this line if you want to use the hosted version\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = \"<YOUR-LANGCHAINPLUS-API-KEY>\" # Uncomment this line if you want to use the hosted version."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "5b4f49a2-7d09-4601-a8ba-976f0517c64c",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import langchain\n",
"from langchain.agents import Tool, initialize_agent, load_tools\n",
"from langchain.agents import AgentType\n",
"from langchain.callbacks import tracing_enabled\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.llms import OpenAI"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "029b4a57-dc49-49de-8f03-53c292144e09",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Agent run with tracing. Ensure that OPENAI_API_KEY is set appropriately to run this example.\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"tools = load_tools([\"llm-math\"], llm=llm)\n",
"agent = initialize_agent(\n",
" tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "91a85fb2-6027-4bd0-b1fe-2a3b3b79e2dd",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to use a calculator to solve this.\n",
"Action: Calculator\n",
"Action Input: 2^.123243\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 1.0891804557407723\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: 1.0891804557407723\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'1.0891804557407723'"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"What is 2 raised to .123243 power?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f2291e9f-02f3-4b55-bd3d-d719de815df1",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

Six binary image files (73–348 KiB) removed; not shown.

@ -1,57 +0,0 @@
# Tracing
By enabling tracing in your LangChain runs, you'll be able to more effectively visualize, step through, and debug your chains and agents.
First, you should install tracing and set up your environment properly.
You can use either a locally hosted version of this (uses Docker) or a cloud hosted version (in closed alpha).
If you're interested in using the hosted platform, please fill out the form [here](https://forms.gle/tRCEMSeopZf6TE3b6).
- [Locally Hosted Setup](./local_installation.md)
- [Cloud Hosted Setup](./hosted_installation.md)
## Tracing Walkthrough
When you first access the UI, you should see a page with your tracing sessions.
An initial one "default" should already be created for you.
A session is just a way to group traces together.
If you click on a session, it will take you to a page with no recorded traces that says "No Runs."
You can create a new session with the new session form.
![](./homepage.png)
If we click on the `default` session, we can see that to start we have no traces stored.
![](./default_empty.png)
If we now start running chains and agents with tracing enabled, we will see data show up here.
To do so, we can run [this notebook](./agent_with_tracing.html) as an example.
After running it, we will see an initial trace show up.
![](./first_trace.png)
From here we can explore the trace at a high level by clicking on the arrow to show nested runs.
We can keep on clicking further and further down to explore deeper and deeper.
![](./explore.png)
We can also click on the "Explore" button of the top level run to dive even deeper.
Here, we can see the inputs and outputs in full, as well as all the nested traces.
![](./explore_trace.png)
We can keep on exploring each of these nested traces in more detail.
For example, here is the lowest level trace with the exact inputs/outputs to the LLM.
![](./explore_llm.png)
## Changing Sessions
1. To initially record traces to a session other than `"default"`, you can set the `LANGCHAIN_SESSION` environment variable to the name of the session you want to record to:
```python
import os
os.environ["LANGCHAIN_TRACING"] = "true"
os.environ["LANGCHAIN_SESSION"] = "my_session" # Make sure this session actually exists. You can create a new session in the UI.
```
2. To switch sessions mid-script or mid-notebook, do NOT set the `LANGCHAIN_SESSION` environment variable. Instead: `langchain.set_tracing_callback_manager(session_name="my_session")`

@ -1,196 +0,0 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "9502d5b0",
"metadata": {},
"source": [
"# OpenAI Functions Agent\n",
"\n",
"This notebook showcases using an agent that uses the OpenAI functions ability to respond to the prompts of the user using a Large Language Model\n",
"\n",
"Install openai,google-search-results packages which are required as the langchain packages call them internally\n",
"\n",
">pip install openai google-search-results\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "c0a83623",
"metadata": {},
"outputs": [],
"source": [
"from langchain import (\n",
" LLMMathChain,\n",
" OpenAI,\n",
" SerpAPIWrapper,\n",
" SQLDatabase,\n",
" SQLDatabaseChain,\n",
")\n",
"from langchain.agents import initialize_agent, Tool\n",
"from langchain.agents import AgentType\n",
"from langchain.chat_models import ChatOpenAI"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "86198d9c",
"metadata": {},
"source": [
"The agent is given ability to perform 3 search functionalities with the respective tools\n",
"\n",
"SerpAPIWrapper:\n",
">This initializes the SerpAPIWrapper for search functionality (search).\n",
"\n",
"LLMMathChain Initialization:\n",
">This component provides math-related functionality.\n",
"\n",
"SQL Database Initialization:\n",
This component provides">
">This component enables the agent to query a custom database.\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "6fefaba2",
"metadata": {},
"outputs": [],
"source": [
"# Initialize the OpenAI language model\n",
"#Replace <your_api_key> in openai_api_key=\"<your_api_key>\" with your actual OpenAI key.\n",
"llm = ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo-0613\",openai_api_key=\"<your_api_key>\")\n",
"\n",
"# Initialize the SerpAPIWrapper for search functionality\n",
"#Replace <your_api_key> in openai_api_key=\"<your_api_key>\" with your actual SerpAPI key.\n",
"search = SerpAPIWrapper(serpapi_api_key=\"<your_api_key>\")\n",
"\n",
"# Initialize the LLMMathChain\n",
"llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)\n",
"\n",
"# Initialize the SQL database using the Chinook database file\n",
"# Replace the file location to the custom Data Base\n",
"db = SQLDatabase.from_uri(\"sqlite:///../../../../../notebooks/Chinook.db\")\n",
"\n",
"# Initialize the SQLDatabaseChain with the OpenAI language model and SQL database\n",
"db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)\n",
"\n",
"# Define a list of tools offered by the agent\n",
"tools = [\n",
" Tool(\n",
" name=\"Search\",\n",
" func=search.run,\n",
" description=\"Useful when you need to answer questions about current events. You should ask targeted questions.\"\n",
" ),\n",
" Tool(\n",
" name=\"Calculator\",\n",
" func=llm_math_chain.run,\n",
" description=\"Useful when you need to answer questions about math.\"\n",
" ),\n",
" Tool(\n",
" name=\"FooBar-DB\",\n",
" func=db_chain.run,\n",
" description=\"Useful when you need to answer questions about FooBar. Input should be in the form of a question containing full context.\"\n",
" )\n",
"]\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "9ff6cee9",
"metadata": {},
"outputs": [],
"source": [
"mrkl = initialize_agent(tools, llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "ba8e4cbe",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `Search` with `{'query': 'Leo DiCaprio girlfriend'}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mAmidst his casual romance with Gigi, Leo allegedly entered a relationship with 19-year old model, Eden Polani, in February 2023.\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `Calculator` with `{'expression': '19^0.43'}`\n",
"\n",
"\n",
"\u001b[0m\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"19^0.43\u001b[32;1m\u001b[1;3m```text\n",
"19**0.43\n",
"```\n",
"...numexpr.evaluate(\"19**0.43\")...\n",
"\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3m3.547023357958959\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[33;1m\u001b[1;3mAnswer: 3.547023357958959\u001b[0m\u001b[32;1m\u001b[1;3mLeo DiCaprio's girlfriend is reportedly Eden Polani. Her current age raised to the power of 0.43 is approximately 3.55.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"Leo DiCaprio's girlfriend is reportedly Eden Polani. Her current age raised to the power of 0.43 is approximately 3.55.\""
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"mrkl.run(\n",
" \"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9f5f6743",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -7,7 +7,7 @@
"source": [
"# Custom agent with tool retrieval\n",
"\n",
"This notebook builds off of [this notebook](custom_llm_agent.html) and assumes familiarity with how agents work.\n",
"This notebook builds off of [this notebook](/docs/modules/agents/how_to/custom_llm_agent.html) and assumes familiarity with how agents work.\n",
"\n",
"The novel idea introduced in this notebook is the idea of using retrieval to select the set of tools to use to answer an agent query. This is useful when you have many many tools to select from. You cannot put the description of all the tools in the prompt (because of context length issues) so instead you dynamically select the N tools you do want to consider using at run time.\n",
"\n",

@ -9,8 +9,8 @@
"\n",
"This notebook goes over adding memory to **both** of an Agent and its tools. Before going through this notebook, please walk through the following notebooks, as this will build on top of both of them:\n",
"\n",
"- [Adding memory to an LLM Chain](../../../memory/integrations/adding_memory.html)\n",
"- [Custom Agents](../../agents/custom_agent.html)\n",
"- [Adding memory to an LLM Chain](/docs/modules/memory/integrations/adding_memory.html)\n",
"- [Custom Agents](/docs/modules/agents/how_to/custom_agent.html)\n",
"\n",
"We are going to create a custom Agent. The agent has access to a conversation memory, search tool, and a summarization tool. And, the summarization tool also needs access to the conversation memory."
]

@ -9,7 +9,7 @@
"\n",
"Natural Language API Toolkits (NLAToolkits) permit LangChain Agents to efficiently plan and combine calls across endpoints. This notebook demonstrates a sample composition of the Speak, Klarna, and Spoonacluar APIs.\n",
"\n",
"For a detailed walkthrough of the OpenAPI chains wrapped within the NLAToolkit, see the [OpenAPI Operation Chain](openapi.html) notebook.\n",
"For a detailed walkthrough of the OpenAPI chains wrapped within the NLAToolkit, see the [OpenAPI Operation Chain](/docs/modules/chains/additional/openapi.html) notebook.\n",
"\n",
"### First, import dependencies and load the LLM"
]
@ -420,7 +420,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.11.3"
}
},
"nbformat": 4,

@ -6,7 +6,7 @@
"source": [
"# Apify\n",
"\n",
"This notebook shows how to use the [Apify integration](../../../../ecosystem/apify.md) for LangChain.\n",
"This notebook shows how to use the [Apify integration](/docs/ecosystem/integrations/apify.html) for LangChain.\n",
"\n",
"[Apify](https://apify.com) is a cloud platform for web scraping and data extraction,\n",
"which provides an [ecosystem](https://apify.com/store) of more than a thousand\n",
@ -72,7 +72,7 @@
"source": [
"Then run the Actor, wait for it to finish, and fetch its results from the Apify dataset into a LangChain document loader.\n",
"\n",
"Note that if you already have some results in an Apify dataset, you can load them directly using `ApifyDatasetLoader`, as shown in [this notebook](../../../data_connection/document_loaders/examples/apify_dataset.html). In that notebook, you'll also find the explanation of the `dataset_mapping_function`, which is used to map fields from the Apify dataset records to LangChain `Document` fields."
"Note that if you already have some results in an Apify dataset, you can load them directly using `ApifyDatasetLoader`, as shown in [this notebook](/docs/modules/data_connection/document_loaders/integrations/apify_dataset.html). In that notebook, you'll also find the explanation of the `dataset_mapping_function`, which is used to map fields from the Apify dataset records to LangChain `Document` fields."
]
},
{
@ -160,7 +160,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
"version": "3.11.3"
}
},
"nbformat": 4,

@ -9,7 +9,7 @@
"\n",
"LangChain provides async support for Chains by leveraging the [asyncio](https://docs.python.org/3/library/asyncio.html) library.\n",
"\n",
"Async methods are currently supported in `LLMChain` (through `arun`, `apredict`, `acall`) and `LLMMathChain` (through `arun` and `acall`), `ChatVectorDBChain`, and [QA chains](../index_examples/question_answering.html). Async support for other chains is on the roadmap."
"Async methods are currently supported in `LLMChain` (through `arun`, `apredict`, `acall`) and `LLMMathChain` (through `arun` and `acall`), `ChatVectorDBChain`, and [QA chains](/docs/modules/chains/additional/question_answering.html). Async support for other chains is on the roadmap."
]
},
{

@ -147,8 +147,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Tips: You can easily integrate a `Chain` object as a `Tool` in your `Agent` via its `run` method. See an example [here](../agents/tools/custom_tools.html)."
"Tips: You can easily integrate a `Chain` object as a `Tool` in your `Agent` via its `run` method. See an example [here](/docs/modules/agents/tools/how_to/custom_tools.html)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {

@ -13,7 +13,7 @@
"\n",
"## Prerequisites\n",
"\n",
"You need to have an existing dataset on the Apify platform. If you don't have one, please first check out [this notebook](../../../agents/tools/integrations/apify.html) on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs."
"You need to have an existing dataset on the Apify platform. If you don't have one, please first check out [this notebook](/docs/modules/agents/tools/integrations/apify.html) on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs."
]
},
{
@ -175,7 +175,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
"version": "3.11.3"
}
},
"nbformat": 4,

@ -36,7 +36,7 @@
"3. Create an access token via the Developer Playground for your workspace. [Detailed instructions](https://help.docugami.com/home/docugami-api)\n",
"4. Explore the [Docugami API](https://api-docs.docugami.com) to get a list of your processed docset IDs, or just the document IDs for a particular docset. \n",
"6. Use the DocugamiLoader as detailed below, to get rich semantic chunks for your documents.\n",
"7. Optionally, build and publish one or more [reports or abstracts](https://help.docugami.com/home/reports). This helps Docugami improve the semantic XML with better tags based on your preferences, which are then added to the DocugamiLoader output as metadata. Use techniques like [self-querying retriever](https://python.langchain.com/en/latest/modules/data_connection/retrievers/examples/self_query_retriever.html) to do high accuracy Document QA.\n",
"7. Optionally, build and publish one or more [reports or abstracts](https://help.docugami.com/home/reports). This helps Docugami improve the semantic XML with better tags based on your preferences, which are then added to the DocugamiLoader output as metadata. Use techniques like [self-querying retriever](/docs/modules/data_connection/retrievers/how_to/self_query_retriever/) to do high accuracy Document QA.\n",
"\n",
"## Advantages vs Other Chunking Techniques\n",
"\n",
@ -317,7 +317,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We can use a [self-querying retriever](../../retrievers/examples/self_query.html) to improve our query accuracy, using this additional metadata:"
"We can use a [self-querying retriever](/docs/modules/data_connection/retrievers/how_to/self_query/) to improve our query accuracy, using this additional metadata:"
]
},
{
@ -423,7 +423,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
"version": "3.11.3"
}
},
"nbformat": 4,

@ -1,21 +1,19 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Psychic\n",
"This notebook covers how to load documents from `Psychic`. See [here](../../../../ecosystem/psychic.md) for more details.\n",
"This notebook covers how to load documents from `Psychic`. See [here](/docs/ecosystem/integrations/psychic.html) for more details.\n",
"\n",
"## Prerequisites\n",
"1. Follow the Quick Start section in [this document](../../../../ecosystem/psychic.md)\n",
"1. Follow the Quick Start section in [this document](/docs/ecosystem/integrations/psychic.html)\n",
"2. Log into the [Psychic dashboard](https://dashboard.psychic.dev/) and get your secret key\n",
"3. Install the frontend react library into your web app and have a user authenticate a connection. The connection will be created using the connection id that you specify."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@ -65,7 +63,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@ -107,7 +104,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@ -121,9 +118,8 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.8"
"version": "3.11.3"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "aee8b7b246df8f9039afb4144a1f6fd8d2ca17a180786b69acc140d282b71a49"

@ -6,7 +6,7 @@
"source": [
"# Reddit\n",
"\n",
">[Reddit](www.reddit.com) is an American social news aggregation, content rating, and discussion website.\n",
">[Reddit](https://www.reddit.com) is an American social news aggregation, content rating, and discussion website.\n",
"\n",
"\n",
"This loader fetches the text from the Posts of Subreddits or Reddit users, using the `praw` Python package.\n",
@ -108,7 +108,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
"version": "3.11.3"
}
},
"nbformat": 4,

@ -1,7 +1,6 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "13afcae7",
"metadata": {},
@ -18,13 +17,12 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "68e75fb9",
"metadata": {},
"source": [
"## Creating a MyScale vectorstore\n",
"MyScale has already been integrated to LangChain for a while. So you can follow [this notebook](../../vectorstores/examples/myscale.ipynb) to create your own vectorstore for a self-query retriever.\n",
"MyScale has already been integrated to LangChain for a while. So you can follow [this notebook](/docs/modules/data_connection/vectorstores/integrations/myscale.ipynb) to create your own vectorstore for a self-query retriever.\n",
"\n",
"NOTE: All self-query retrievers requires you to have `lark` installed (`pip install lark`). We use `lark` for grammar definition. Before you proceed to the next step, we also want to remind you that `clickhouse-connect` is also needed to interact with your MyScale backend."
]
@ -42,7 +40,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "83811610-7df3-4ede-b268-68a6a83ba9e2",
"metadata": {},
@ -86,7 +83,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "bf7f6fc4",
"metadata": {},
@ -121,7 +117,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "5ecaab6d",
"metadata": {},
@ -179,7 +174,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "ea9df8d4",
"metadata": {},
@ -246,7 +240,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "86371ac8",
"metadata": {},
@ -301,7 +294,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "39bd1de1-b9fe-4a98-89da-58d8a7a6ae51",
"metadata": {},
@ -362,7 +354,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
"version": "3.11.3"
}
},
"nbformat": 4,

@ -9,7 +9,7 @@
"\n",
">[Cohere](https://cohere.ai/about) is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.\n",
"\n",
"This notebook shows how to use [Cohere's rerank endpoint](https://docs.cohere.com/docs/reranking) in a retriever. This builds on top of ideas in the [ContextualCompressionRetriever](contextual-compression.html)."
"This notebook shows how to use [Cohere's rerank endpoint](https://docs.cohere.com/docs/reranking) in a retriever. This builds on top of ideas in the [ContextualCompressionRetriever](/docs/modules/data_connection/retrievers/contextual_compression/)."
]
},
{
@ -479,7 +479,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
"version": "3.11.3"
}
},
"nbformat": 4,

@ -9,8 +9,8 @@
"\n",
"This notebook goes over adding memory to an Agent. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:\n",
"\n",
"- [Adding memory to an LLM Chain](adding_memory.html)\n",
"- [Custom Agents](../../agents/agents/custom_agent.html)\n",
"- [Adding memory to an LLM Chain](/docs/modules/memory/how_to/adding_memory.html)\n",
"- [Custom Agents](/docs/modules/agents/how_to/custom_agent.html)\n",
"\n",
"In order to add a memory to an agent we are going to the the following steps:\n",
"\n",
@ -317,7 +317,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.11.3"
}
},
"nbformat": 4,

@ -9,9 +9,9 @@
"\n",
"This notebook goes over adding memory to an Agent where the memory uses an external message store. Before going through this notebook, please walkthrough the following notebooks, as this will build on top of both of them:\n",
"\n",
"- [Adding memory to an LLM Chain](adding_memory.html)\n",
"- [Custom Agents](../../agents/agents/custom_agent.html)\n",
"- [Agent with Memory](agent_with_memory.html)\n",
"- [Adding memory to an LLM Chain](/docs/modules/memory/how_to/adding_memory.html)\n",
"- [Custom Agents](/docs/modules/agents/how_to/custom_agent.html)\n",
"- [Agent with Memory](/docs/modules/memory/how_to/agent_with_memory.html)\n",
"\n",
"In order to add a memory with an external message store to an agent we are going to do the following steps:\n",
"\n",
@ -90,17 +90,17 @@
},
{
"cell_type": "markdown",
"id": "6d60bbd5",
"metadata": {},
"source": [
"Now we can create the ChatMessageHistory backed by the database."
],
"metadata": {
"collapsed": false
},
"id": "6d60bbd5"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "17638dc7",
"metadata": {},
"outputs": [],
"source": [
"message_history = RedisChatMessageHistory(\n",
@ -110,11 +110,7 @@
"memory = ConversationBufferMemory(\n",
" memory_key=\"chat_history\", chat_memory=message_history\n",
")"
],
"metadata": {
"collapsed": false
},
"id": "17638dc7"
]
},
{
"cell_type": "markdown",
@ -352,9 +348,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
}

@ -214,7 +214,7 @@
"metadata": {},
"source": [
"### Standard Cache\n",
"Use [Redis](../../../../ecosystem/redis.md) to cache prompts and responses."
"Use [Redis](/docs/ecosystem/integrations/redis.html) to cache prompts and responses."
]
},
{
@ -300,7 +300,7 @@
"metadata": {},
"source": [
"### Semantic Cache\n",
"Use [Redis](../../../../ecosystem/redis.md) to cache prompts and responses and evaluate hits based on semantic similarity."
"Use [Redis](/docs/ecosystem/integrations/redis.html) to cache prompts and responses and evaluate hits based on semantic similarity."
]
},
{
@ -617,7 +617,7 @@
"metadata": {},
"source": [
"## Momento Cache\n",
"Use [Momento](../../../../integrations/momento.md) to cache prompts and responses.\n",
"Use [Momento](/docs/ecosystem/integrations/momento.html) to cache prompts and responses.\n",
"\n",
"Requires momento to use, uncomment below to install:"
]

@ -10,7 +10,7 @@ An `ExampleSelector` must implement two methods:
Let's implement a custom `ExampleSelector` that just selects two examples at random.
:::{note}
Take a look at the current set of example selector implementations supported in LangChain [here](../../prompt_templates/getting_started.md).
Take a look at the current set of example selector implementations supported in LangChain [here](/docs/modules/model_io/prompts/example_selectors/).
:::
<!-- TODO(shreya): Add the correct link. -->
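A minimal sketch of such a selector, implementing the two required methods:

```python
import random
from typing import Dict, List

from langchain.prompts.example_selector.base import BaseExampleSelector


class CustomExampleSelector(BaseExampleSelector):
    def __init__(self, examples: List[Dict[str, str]]):
        self.examples = examples

    def add_example(self, example: Dict[str, str]) -> None:
        """Store a new example."""
        self.examples.append(example)

    def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
        """Select two examples at random, ignoring the input."""
        return random.sample(self.examples, 2)
```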

@ -13,7 +13,7 @@
"\n",
"LangChain provides a set of default prompt templates that can be used to generate prompts for a variety of tasks. However, there may be cases where the default prompt templates do not meet your needs. For example, you may want to create a prompt template with specific dynamic instructions for your language model. In such cases, you can create a custom prompt template.\n",
"\n",
"Take a look at the current set of default prompt templates [here](../getting_started.md)."
"Take a look at the current set of default prompt templates [here](/docs/modules/model_io/prompts/prompt_templates/)."
]
},
{

@ -89,7 +89,7 @@
"\n",
" Memories are retrieved using a weighted sum of salience, recency, and importance.\n",
"\n",
"You can review the definitions of the `GenerativeAgent` and `GenerativeAgentMemory` in the [reference documentation](\"../../reference/modules/experimental\") for the following imports, focusing on `add_memory` and `summarize_related_memories` methods."
"You can review the definitions of the `GenerativeAgent` and `GenerativeAgentMemory` in the [reference documentation](\"https://api.python.langchain.com/en/latest/modules/experimental.html\") for the following imports, focusing on `add_memory` and `summarize_related_memories` methods."
]
},
{
@ -986,7 +986,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
"version": "3.11.3"
}
},
"nbformat": 4,

@ -9,8 +9,8 @@
"\n",
"This notebook combines two concepts in order to build a custom agent that can interact with AI Plugins:\n",
"\n",
"1. [Custom Agent with Retrieval](../../modules/agents/agents/custom_agent_with_plugin_retrieval.html): This introduces the concept of retrieving many tools, which is useful when trying to work with arbitrarily many plugins.\n",
"2. [Natural Language API Chains](../../modules/chains/examples/openapi.html): This creates Natural Language wrappers around OpenAPI endpoints. This is useful because (1) plugins use OpenAPI endpoints under the hood, (2) wrapping them in an NLAChain allows the router agent to call it more easily.\n",
"1. [Custom Agent with Tool Retrieval](/docs/modules/agents/how_to/custom_agent_with_tool_retrieval.html): This introduces the concept of retrieving many tools, which is useful when trying to work with arbitrarily many plugins.\n",
"2. [Natural Language API Chains](/docs/modules/chains/additional/openapi.html): This creates Natural Language wrappers around OpenAPI endpoints. This is useful because (1) plugins use OpenAPI endpoints under the hood, (2) wrapping them in an NLAChain allows the router agent to call it more easily.\n",
"\n",
"The novel idea introduced in this notebook is the idea of using retrieval to select not the tools explicitly, but the set of OpenAPI specs to use. We can then generate tools from those OpenAPI specs. The use case for this is when trying to get agents to use plugins. It may be more efficient to choose plugins first, then the endpoints, rather than the endpoints directly. This is because the plugins may contain more useful information for selection."
]
@ -540,7 +540,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.11.3"
},
"vscode": {
"interpreter": {

@ -7,7 +7,7 @@
"source": [
"# Plug-and-Plai\n",
"\n",
"This notebook builds upon the idea of [tool retrieval](custom_agent_with_plugin_retrieval.html), but pulls all tools from `plugnplai` - a directory of AI Plugins."
"This notebook builds upon the idea of [plugin retrieval](./custom_agent_with_plugin_retrieval.html), but pulls all tools from `plugnplai` - a directory of AI Plugins."
]
},
{
@ -564,7 +564,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.11.3"
},
"vscode": {
"interpreter": {

@ -17,4 +17,4 @@ Agents are more complex, and involve multiple queries to the LLM to understand w
The downside of agents is that you have less control. The upside is that they are more powerful,
which allows you to use them on larger and more complex schemas.
- [OpenAPI Agent](/docs/modules/agents/tools/toolkits/examples/openapi.html)
- [OpenAPI Agent](/docs/modules/agents/toolkits/openapi.html)

@ -11,13 +11,13 @@ usage of LangChain's collection of tools.
## Baby AGI ([Original Repo](https://github.com/yoheinakajima/babyagi))
- [Baby AGI](./autonomous_agents/baby_agi.html): a notebook implementing BabyAGI as LLM Chains
- [Baby AGI with Tools](./autonomous_agents/baby_agi_with_agent.html): building off the above notebook, this example substitutes in an agent with tools as the execution tools, allowing it to actually take actions.
- [Baby AGI](/docs/use_cases/autonomous_agents/baby_agi.html): a notebook implementing BabyAGI as LLM Chains
- [Baby AGI with Tools](/docs/use_cases/autonomous_agents/baby_agi_with_agent.html): building off the above notebook, this example substitutes in an agent with tools as the execution tools, allowing it to actually take actions.
## AutoGPT ([Original Repo](https://github.com/Significant-Gravitas/Auto-GPT))
- [AutoGPT](./autonomous_agents/autogpt.html): a notebook implementing AutoGPT in LangChain primitives
- [WebSearch Research Assistant](./autonomous_agents/marathon_times.html): a notebook showing how to use AutoGPT plus specific tools to act as research assistant that can use the web.
- [AutoGPT](/docs/use_cases/autonomous_agents/autogpt.html): a notebook implementing AutoGPT in LangChain primitives
- [WebSearch Research Assistant](/docs/use_cases/autonomous_agents/marathon_times.html): a notebook showing how to use AutoGPT plus specific tools to act as research assistant that can use the web.
## MetaPrompt ([Original Repo](https://github.com/ngoodman/metaprompt))
- [Meta-Prompt](./autonomous_agents/meta_prompt.html): a notebook implementing Meta-Prompt in LangChain primitives
- [Meta-Prompt](/docs/use_cases/autonomous_agents/meta_prompt.html): a notebook implementing Meta-Prompt in LangChain primitives

@ -6,13 +6,11 @@ Most chat based applications rely on remembering what happened in previous inter
The following resources exist:
- [ChatGPT Clone](/docs/modules/agents/how_to/chatgpt_clone.html): A notebook walking through how to recreate a ChatGPT-like experience with LangChain.
- [Conversation Memory](/docs/modules/memory.html): A notebook walking through how to use different types of conversational memory.
- [Conversation Agent](/docs/modules/agents/agent_types/conversational_agent.html): A notebook walking through how to create an agent optimized for conversation.
- [Conversation Agent](/docs/modules/agents/agent_types/chat_conversation_agent.html): A notebook walking through how to create an agent optimized for conversation.
Additional related resources include:
- [Memory Key Concepts](/docs/modules/memory.html): Explanation of key concepts related to memory.
- [Memory Examples](/docs/modules/memory/how_to_guides.html): A collection of how-to examples for working with memory.
- [Memory concepts and examples](/docs/modules/memory/): Explanation of key concepts related to memory along with how-to's and examples.
More end-to-end examples include:
- [Voice Assistant](./voice_assistant.html): A notebook walking through how to create a voice assistant using LangChain.

@ -8,7 +8,7 @@ Examples of this include:
- Extracting multiple rows to insert into a database from a long document
- Extracting the correct API parameters from a user query
This work is extremely related to [output parsing](/docs/modules/prompts/output_parsers.html).
This work is extremely related to [output parsing](/docs/modules/model_io/output_parsers/).
Output parsers are responsible for instructing the LLM to respond in a specific format.
In this case, the output parsers specify the format of the data you would like to extract from the document.
Then, in addition to the output format instructions, the prompt should also contain the data you would like to extract information from.
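For example, a brief sketch using the structured output parser (the fields below are purely illustrative):

```python
from langchain.output_parsers import ResponseSchema, StructuredOutputParser
from langchain.prompts import PromptTemplate

response_schemas = [
    ResponseSchema(name="name", description="the person's name"),
    ResponseSchema(name="age", description="the person's age"),
]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)

# the format instructions tell the LLM exactly which fields to emit
prompt = PromptTemplate(
    template="Extract the requested fields from the text.\n{format_instructions}\n{text}",
    input_variables=["text"],
    partial_variables={"format_instructions": output_parser.get_format_instructions()},
)
```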

@ -1,7 +1,7 @@
# Question answering over documents
Question answering in this context refers to question answering over your document data.
For question answering over other types of data, please see other sources documentation like [SQL database Question Answering](../tabular) or [Interacting with APIs](../apis).
For question answering over other types of data, please see the documentation for those sources, like [SQL database Question Answering](/docs/use_cases/tabular.html) or [Interacting with APIs](/docs/use_cases/apis.html).
For question answering over many documents, you almost always want to create an index over the data.
This can be used to smartly access the most relevant documents for a given question, allowing you to avoid having to pass all the documents to the LLM (saving you time and money).
@ -10,10 +10,10 @@ This can be used to smartly access the most relevant documents for a given quest
```python
from langchain.document_loaders import TextLoader
loader = TextLoader('../state_of_the_union.txt')
loader = TextLoader('../../modules/state_of_the_union.txt')
```
See [here](/docs/modules/data_connection/document_loaders) for more information on how to get started with document loading.
See [here](/docs/modules/data_connection/document_loaders/) for more information on how to get started with document loading.
**Create Your Index**
@ -38,7 +38,7 @@ query = "What did the president say about Ketanji Brown Jackson"
index.query_with_sources(query)
```
Again, these high level interfaces obfuscate a lot of what is going on under the hood, so please see [this notebook](/docs/modules/data_connection/getting_started.html) for a lower level walkthrough.
Again, these high level interfaces obfuscate a lot of what is going on under the hood, so please see [this notebook](/docs/modules/data_connection/) for a more thorough introduction to data modules.
## Document Question Answering
@ -55,8 +55,8 @@ chain.run(input_documents=docs, question=query)
The following resources exist:
- [Question Answering Notebook](/docs/modules/chains/index_examples/question_answering.html): A notebook walking through how to accomplish this task.
- [VectorDB Question Answering Notebook](/docs/modules/chains/index_examples/vector_db_qa.html): A notebook walking through how to do question answering over a vector database. This can often be useful for when you have a LOT of documents, and you don't want to pass them all to the LLM, but rather first want to do some semantic search over embeddings.
- [Question Answering Notebook](/docs/modules/chains/additional/question_answering.html): A notebook walking through how to accomplish this task.
- [VectorDB Question Answering Notebook](/docs/modules/chains/popular/vector_db_qa.html): A notebook walking through how to do question answering over a vector database. This can often be useful when you have a LOT of documents and you don't want to pass them all to the LLM, but rather first want to do some semantic search over embeddings.
## Adding in sources
@ -70,21 +70,16 @@ chain = load_qa_with_sources_chain(llm, chain_type="stuff")
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
```
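For reference, a self-contained, hedged version of the snippet above might look like this (it assumes `docs` and `query` were prepared in earlier steps, e.g. via a similarity search):

```python
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
chain = load_qa_with_sources_chain(llm, chain_type="stuff")
# `docs` is a list of Documents and `query` a question string, assumed from earlier.
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
```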
The following resources exist:
- [QA With Sources Notebook](/docs/modules/chains/index_examples/qa_with_sources.html): A notebook walking through how to accomplish this task.
- [VectorDB QA With Sources Notebook](/docs/modules/chains/index_examples/vector_db_qa_with_sources.html): A notebook walking through how to do question answering with sources over a vector database. This can often be useful when you have a LOT of documents and you don't want to pass them all to the LLM, but rather first want to do some semantic search over embeddings.
## Additional Related Resources
Additional related resources include:
- [Building blocks for working with Documents](/docs/modules/data_connection): Guides on how to use several of the utilities which will prove helpful for this task, including Text Splitters (for splitting up long documents) and Embeddings & Vectorstores (useful for the above Vector DB example).
- [CombineDocuments Chains](/docs/modules/chains/documents): A conceptual overview of specific types of chains by which you can accomplish this task.
- [Building blocks for working with Documents](/docs/modules/data_connection/): Guides on how to use several of the utilities which will prove helpful for this task, including Text Splitters (for splitting up long documents) and Embeddings & Vectorstores (useful for the above Vector DB example).
- [CombineDocuments Chains](/docs/modules/chains/documents/): A conceptual overview of specific types of chains by which you can accomplish this task.
## End-to-end examples
For examples of this done in an end-to-end manner, please see the following resources:
- [Semantic search over a group chat with Sources Notebook](./semantic-search-over-chat.html): A notebook that semantically searches over a group chat conversation.
- [Document context aware text splitting and QA](./document-context-aware-QA.html): A notebook that shows context aware splitting on markdown files and SelfQueryRetriever for QA using the resulting metadata.
- [Semantic search over a group chat with Sources Notebook](/docs/use_cases/question_answering/semantic-search-over-chat.html): A notebook that semantically searches over a group chat conversation.
- [Document context aware text splitting and QA](/docs/use_cases/question_answering/document-context-aware-QA.html): A notebook that shows context aware splitting on markdown files and SelfQueryRetriever for QA using the resulting metadata.

@ -15,4 +15,4 @@ The following resources exist:
- [Summarization notebook](/docs/modules/chains/popular/summarize.html): A notebook walking through how to accomplish this task.
Additional related resources include:
- [Utilities for working with documents](/api_reference/utils.html): Guides on how to use several of the utilities which will prove helpful for this task, including Text Splitters (for splitting up long documents).
- [Modules for working with documents](/docs/modules/data_connection): Core components for working with documents.

@ -26,6 +26,6 @@ Agents are more complex, and involve multiple queries to the LLM to understand w
The downside of agents is that you have less control. The upside is that they are more powerful,
which allows you to use them on larger databases and more complex schemas; a minimal sketch follows the links below.
- [SQL Agent](/docs/modules/agents/toolkits/integrations/sql_database.html)
- [Pandas Agent](/docs/modules/agents/toolkits/integrations/pandas.html)
- [CSV Agent](/docs/modules/agents/toolkits/integrations/csv.html)
- [SQL Agent](/docs/modules/agents/toolkits/sql_database.html)
- [Pandas Agent](/docs/modules/agents/toolkits/pandas.html)
- [CSV Agent](/docs/modules/agents/toolkits/csv.html)
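As the sketch promised above (the import paths and the Chinook database here are assumptions based on this era of LangChain, not taken from this page):

```python
from langchain import OpenAI, SQLDatabase
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit

# Point the toolkit at a database and let the agent plan its own queries.
db = SQLDatabase.from_uri("sqlite:///Chinook.db")
llm = OpenAI(temperature=0)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)

agent_executor = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
agent_executor.run("How many employees are there?")
```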

@ -1,3 +1,7 @@
Install the openai and google-search-results packages, which are required because the langchain packages call them internally.
>pip install openai google-search-results
```python
from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, SQLDatabase, SQLDatabaseChain
from langchain.agents import initialize_agent, Tool
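# A hedged sketch of how these imports are typically wired together; the
# tool names, the agent type string, and the SerpAPI key expected in the
# environment are assumptions, not part of the original snippet.
llm = OpenAI(temperature=0)
search = SerpAPIWrapper()
llm_math_chain = LLMMathChain(llm=llm, verbose=True)

tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for answering questions about current events",
    ),
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for answering math questions",
    ),
]

agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
```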

@ -21,5 +21,5 @@ conversation.run("And the next 4?")
</CodeOutputBlock>
Essentially, `BaseMemory` defines an interface for how `langchain` stores memory. It allows reading of stored data through the `load_memory_variables` method and storing new data through the `save_context` method. You can learn more about it in the [Memory](../memory.html) section.
Essentially, `BaseMemory` defines an interface for how `langchain` stores memory. It allows reading of stored data through the `load_memory_variables` method and storing new data through the `save_context` method. You can learn more about it in the [Memory](/docs/modules/memory/) section.
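A quick, hedged illustration with the stock `ConversationBufferMemory`, which implements this interface:

```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
# save_context stores one input/output exchange...
memory.save_context({"input": "hi"}, {"output": "hello there"})
# ...and load_memory_variables reads it back for use in the next prompt.
memory.load_memory_variables({})
# -> {'history': 'Human: hi\nAI: hello there'}
```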

@ -1,4 +1,4 @@
Under the hood, LangChain uses SQLAlchemy to connect to SQL databases. The `SQLDatabaseChain` can therefore be used with any SQL dialect supported by SQLAlchemy, such as MS SQL, MySQL, MariaDB, PostgreSQL, Oracle SQL, [Databricks](../../../ecosystem/integrations/databricks.html) and SQLite. Please refer to the SQLAlchemy documentation for more information about requirements for connecting to your database. For example, a connection to MySQL requires an appropriate connector such as PyMySQL. A URI for a MySQL connection might look like: `mysql+pymysql://user:pass@some_mysql_db_address/db_name`.
Under the hood, LangChain uses SQLAlchemy to connect to SQL databases. The `SQLDatabaseChain` can therefore be used with any SQL dialect supported by SQLAlchemy, such as MS SQL, MySQL, MariaDB, PostgreSQL, Oracle SQL, [Databricks](/docs/ecosystem/integrations/databricks.html) and SQLite. Please refer to the SQLAlchemy documentation for more information about requirements for connecting to your database. For example, a connection to MySQL requires an appropriate connector such as PyMySQL. A URI for a MySQL connection might look like: `mysql+pymysql://user:pass@some_mysql_db_address/db_name`.
This demonstration uses SQLite and the example Chinook database.
To set it up, follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the `.db` file in a notebooks folder at the root of this repository.
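Once the `.db` file is in place, a minimal hedged sketch of wiring it up (the URI and the question are illustrative):

```python
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain

db = SQLDatabase.from_uri("sqlite:///../notebooks/Chinook.db")
llm = OpenAI(temperature=0)

# verbose=True prints the generated SQL alongside the answer.
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
db_chain.run("How many employees are there?")
```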

@ -35,7 +35,7 @@ qa.run(query)
</CodeOutputBlock>
## Chain Type
You can easily specify different chain types to load and use in the RetrievalQA chain. For a more detailed walkthrough of these types, please see [this notebook](question_answering.html).
You can easily specify different chain types to load and use in the RetrievalQA chain. For a more detailed walkthrough of these types, please see [this notebook](/docs/modules/chains/additional/question_answering.html).
There are two ways to load different chain types. First, you can specify the chain type argument in the `from_chain_type` method. This allows you to pass in the name of the chain type you want to use. For example, below we change the chain type to `map_reduce`.
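A hedged sketch of that first approach (it assumes an existing vectorstore named `docsearch` and a `query` string from earlier in the walkthrough):

```python
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# Swap the combine-documents strategy by name via chain_type.
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="map_reduce",
    retriever=docsearch.as_retriever(),
)
qa.run(query)
```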
@ -58,7 +58,7 @@ qa.run(query)
</CodeOutputBlock>
The above way allows you to easily change the chain_type, but it doesn't provide a ton of flexibility over the parameters to that chain type. If you want to control those parameters, you can load the chain directly (as you did in [this notebook](question_answering.html)) and then pass that directly to the RetrievalQA chain with the `combine_documents_chain` parameter. For example:
The above way allows you to easily change the chain_type, but it doesn't provide a ton of flexibility over the parameters to that chain type. If you want to control those parameters, you can load the chain directly (as you did in [this notebook](/docs/modules/chains/additional/question_answering.html)) and then pass that directly to the RetrievalQA chain with the `combine_documents_chain` parameter. For example:
```python
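# A hedged sketch of the direct-loading approach described above; the helper
# and parameter names are assumed from this era of LangChain rather than
# copied from this page.
from langchain.chains.question_answering import load_qa_chain

qa_chain = load_qa_chain(OpenAI(temperature=0), chain_type="map_reduce")
qa = RetrievalQA(
    combine_documents_chain=qa_chain,
    retriever=docsearch.as_retriever(),
)
qa.run(query)
```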
@ -82,7 +82,7 @@ qa.run(query)
</CodeOutputBlock>
## Custom Prompts
You can pass in custom prompts to do question answering. These prompts are the same prompts as you can pass into the [base question answering chain](./question_answering.html)
You can pass in custom prompts to do question answering. These prompts are the same prompts as you can pass into the [base question answering chain](/docs/modules/chains/additional/question_answering.html)
```python
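# A hedged sketch of a custom QA prompt; the template text is illustrative.
from langchain.prompts import PromptTemplate

prompt_template = """Use the following pieces of context to answer the question at the end.

{context}

Question: {question}
Answer:"""
PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="stuff",
    retriever=docsearch.as_retriever(),
    chain_type_kwargs={"prompt": PROMPT},
)
```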

@ -1,4 +1,4 @@
Under the hood, by default this uses the [UnstructuredLoader](./unstructured_file.html)
Under the hood, by default this uses the [UnstructuredLoader](/docs/modules/data_connection/document_loaders/integrations/unstructured_file.html)
```python
from langchain.document_loaders import DirectoryLoader
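
# Hedged usage sketch: the directory path and glob pattern are illustrative.
loader = DirectoryLoader('../', glob="**/*.md")
docs = loader.load()
```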

@ -24,7 +24,7 @@ Of course, we also help construct what we think useful Retrievers are. The main
In order to understand what a vectorstore retriever is, it's important to understand what a Vectorstore is. So let's look at that.
By default, LangChain uses [Chroma](../../ecosystem/chroma.md) as the vectorstore to index and search embeddings. To walk through this tutorial, we'll first need to install `chromadb`.
By default, LangChain uses [Chroma](/docs/ecosystem/integrations/chroma.html) as the vectorstore to index and search embeddings. To walk through this tutorial, we'll first need to install `chromadb`.
```
pip install chromadb
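```

With `chromadb` installed, a hedged sketch of turning a vectorstore into a retriever (it assumes `texts` is a list of already-split documents):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

# Embed the documents into Chroma, then expose it through the retriever interface.
db = Chroma.from_documents(texts, OpenAIEmbeddings())
retriever = db.as_retriever()
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")
```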
