docs: make links internal (#19063)

So they can be properly link-checked
pull/19085/head
Bagatur committed 7 months ago via GitHub
parent ae73b9d839
commit 0ae39ab30e

@ -55,7 +55,7 @@
"id": "9eb73e8b",
"metadata": {},
"source": [
"We will show examples of streaming using the chat model from [Anthropic](https://python.langchain.com/docs/integrations/platforms/anthropic). To use the model, you will need to install the `langchain-anthropic` package. You can do this with the following command:"
"We will show examples of streaming using the chat model from [Anthropic](/docs/integrations/platforms/anthropic). To use the model, you will need to install the `langchain-anthropic` package. You can do this with the following command:"
]
},
{
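A minimal sketch of the streaming call this notebook builds toward, assuming `langchain-anthropic` is installed and `ANTHROPIC_API_KEY` is set; the model name is chosen for illustration:

```python
from langchain_anthropic import ChatAnthropic

# Stream tokens as they are generated instead of waiting for the full reply.
chat = ChatAnthropic(model="claude-3-sonnet-20240229")
for chunk in chat.stream("Write me a one-line poem about goldfish."):
    print(chunk.content, end="", flush=True)
```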

@ -7,7 +7,7 @@
"source": [
"# JSON Evaluators\n",
"\n",
"Evaluating [extraction](https://python.langchain.com/docs/use_cases/extraction) and function calling applications often comes down to validation that the LLM's string output can be parsed correctly and how it compares to a reference object. The following `JSON` validators provide functionality to check your model's output consistently.\n",
"Evaluating [extraction](/docs/use_cases/extraction) and function calling applications often comes down to validation that the LLM's string output can be parsed correctly and how it compares to a reference object. The following `JSON` validators provide functionality to check your model's output consistently.\n",
"\n",
"## JsonValidityEvaluator\n",
"\n",

@ -24,7 +24,7 @@
"<img src=\"/img/qa_privacy_protection.png\" width=\"900\"/>\n",
"\n",
"\n",
"In the following notebook, we will not go into the details of how the anonymizer works. If you are interested, please visit [this part of the documentation](https://python.langchain.com/docs/guides/privacy/presidio_data_anonymization/).\n",
"In the following notebook, we will not go into the details of how the anonymizer works. If you are interested, please visit [this part of the documentation](/docs/guides/privacy/presidio_data_anonymization/).\n",
"\n",
"## Quickstart\n",
"\n",

@ -9,7 +9,7 @@
"\n",
">[PromptLayer](https://docs.promptlayer.com/introduction) is a platform for prompt engineering. It also helps with the LLM observability to visualize requests, version prompts, and track usage.\n",
">\n",
">While `PromptLayer` does have LLMs that integrate directly with LangChain (e.g. [`PromptLayerOpenAI`](https://python.langchain.com/docs/integrations/llms/promptlayer_openai)), using a callback is the recommended way to integrate `PromptLayer` with LangChain.\n",
">While `PromptLayer` does have LLMs that integrate directly with LangChain (e.g. [`PromptLayerOpenAI`](/docs/integrations/llms/promptlayer_openai)), using a callback is the recommended way to integrate `PromptLayer` with LangChain.\n",
"\n",
"In this guide, we will go over how to setup the `PromptLayerCallbackHandler`. \n",
"\n",

@ -124,7 +124,7 @@
"tags": []
},
"source": [
"Here are two examples of how to use the `TrubricsCallbackHandler` with Langchain [LLMs](https://python.langchain.com/docs/modules/model_io/llms/) or [Chat Models](https://python.langchain.com/docs/modules/model_io/chat/). We will use OpenAI models, so set your `OPENAI_API_KEY` key here:"
"Here are two examples of how to use the `TrubricsCallbackHandler` with Langchain [LLMs](/docs/modules/model_io/llms/) or [Chat Models](/docs/modules/model_io/chat/). We will use OpenAI models, so set your `OPENAI_API_KEY` key here:"
]
},
{
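The shape both examples share, sketched with placeholder values:

```python
import os

from langchain.callbacks import TrubricsCallbackHandler
from langchain_openai import ChatOpenAI

os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder; use your real key

# The handler records prompts and generations to Trubrics.
llm = ChatOpenAI(callbacks=[TrubricsCallbackHandler()])
llm.invoke("What is Trubrics?")
```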

@ -10,7 +10,7 @@
"\n",
"In particular, we will:\n",
"1. Utilize the [HuggingFaceTextGenInference](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/huggingface_text_gen_inference.py), [HuggingFaceEndpoint](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/huggingface_endpoint.py), or [HuggingFaceHub](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/huggingface_hub.py) integrations to instantiate an `LLM`.\n",
"2. Utilize the `ChatHuggingFace` class to enable any of these LLMs to interface with LangChain's [Chat Messages](https://python.langchain.com/docs/modules/model_io/chat/#messages) abstraction.\n",
"2. Utilize the `ChatHuggingFace` class to enable any of these LLMs to interface with LangChain's [Chat Messages](/docs/modules/model_io/chat/#messages) abstraction.\n",
"3. Demonstrate how to use an open-source LLM to power an `ChatAgent` pipeline\n",
"\n",
"\n",
@ -280,7 +280,7 @@
"source": [
"## 3. Take it for a spin as an agent!\n",
"\n",
"Here we'll test out `Zephyr-7B-beta` as a zero-shot `ReAct` Agent. The example below is taken from [here](https://python.langchain.com/docs/modules/agents/agent_types/react#using-chat-models).\n",
"Here we'll test out `Zephyr-7B-beta` as a zero-shot `ReAct` Agent. The example below is taken from [here](/docs/modules/agents/agent_types/react#using-chat-models).\n",
"\n",
"> Note: To run this section, you'll need to have a [SerpAPI Token](https://serpapi.com/) saved as an environment variable: `SERPAPI_API_KEY`"
]

@ -17,9 +17,9 @@
"source": [
"# Llama2Chat\n",
"\n",
"This notebook shows how to augment Llama-2 `LLM`s with the `Llama2Chat` wrapper to support the [Llama-2 chat prompt format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). Several `LLM` implementations in LangChain can be used as interface to Llama-2 chat models. These include [HuggingFaceTextGenInference](https://python.langchain.com/docs/integrations/llms/huggingface_textgen_inference), [LlamaCpp](https://python.langchain.com/docs/use_cases/question_answering/how_to/local_retrieval_qa), [GPT4All](https://python.langchain.com/docs/integrations/llms/gpt4all), ..., to mention a few examples. \n",
"This notebook shows how to augment Llama-2 `LLM`s with the `Llama2Chat` wrapper to support the [Llama-2 chat prompt format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). Several `LLM` implementations in LangChain can be used as interface to Llama-2 chat models. These include [ChatHuggingFace](/docs/integrations/chat/huggingface), [LlamaCpp](/docs/use_cases/question_answering/local_retrieval_qa), [GPT4All](/docs/integrations/llms/gpt4all), ..., to mention a few examples. \n",
"\n",
"`Llama2Chat` is a generic wrapper that implements `BaseChatModel` and can therefore be used in applications as [chat model](https://python.langchain.com/docs/modules/model_io/models/chat/). `Llama2Chat` converts a list of [chat messages](https://python.langchain.com/docs/modules/model_io/models/chat/#messages) into the [required chat prompt format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2) and forwards the formatted prompt as `str` to the wrapped `LLM`."
"`Llama2Chat` is a generic wrapper that implements `BaseChatModel` and can therefore be used in applications as [chat model](/docs/modules/model_io/chat/). `Llama2Chat` converts a list of Messages into the [required chat prompt format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2) and forwards the formatted prompt as `str` to the wrapped `LLM`."
]
},
{
@ -77,7 +77,7 @@
"id": "2ff99380",
"metadata": {},
"source": [
"A [HuggingFaceTextGenInference](https://python.langchain.com/docs/integrations/llms/huggingface_textgen_inference) LLM encapsulates access to a [text-generation-inference](https://github.com/huggingface/text-generation-inference) server. In the following example, the inference server serves a [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) model. It can be started locally with:\n",
"A HuggingFaceTextGenInference LLM encapsulates access to a [text-generation-inference](https://github.com/huggingface/text-generation-inference) server. In the following example, the inference server serves a [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) model. It can be started locally with:\n",
"\n",
"```bash\n",
"docker run \\\n",
@ -220,7 +220,7 @@
"id": "52c1a0b9",
"metadata": {},
"source": [
"For using a Llama-2 chat model with a [LlamaCPP](https://python.langchain.com/docs/integrations/llms/llamacpp) `LMM`, install the `llama-cpp-python` library using [these installation instructions](https://python.langchain.com/docs/integrations/llms/llamacpp#installation). The following example uses a quantized [llama-2-7b-chat.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/resolve/main/llama-2-7b-chat.Q4_0.gguf) model stored locally at `~/Models/llama-2-7b-chat.Q4_0.gguf`. \n",
"For using a Llama-2 chat model with a [LlamaCPP](/docs/integrations/llms/llamacpp) `LMM`, install the `llama-cpp-python` library using [these installation instructions](/docs/integrations/llms/llamacpp#installation). The following example uses a quantized [llama-2-7b-chat.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/resolve/main/llama-2-7b-chat.Q4_0.gguf) model stored locally at `~/Models/llama-2-7b-chat.Q4_0.gguf`. \n",
"\n",
"After creating a `LlamaCpp` instance, the `llm` is again wrapped into `Llama2Chat`"
]
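The LlamaCpp path described above, condensed into a sketch (the model path comes from the text; other parameters are assumptions):

```python
from langchain_community.llms import LlamaCpp
from langchain_core.messages import HumanMessage
from langchain_experimental.chat_models import Llama2Chat

llm = LlamaCpp(model_path="~/Models/llama-2-7b-chat.Q4_0.gguf", n_ctx=4096)
# Llama2Chat formats chat messages into the Llama-2 prompt format for the LLM.
model = Llama2Chat(llm=llm)
model.invoke([HumanMessage(content="Tell me a joke.")])
```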
@ -731,7 +731,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.9.1"
}
},
"nbformat": 4,

@ -1005,7 +1005,7 @@
"id": "79efa62d"
},
"source": [
"Like any other integration, ChatNVIDIA is fine to support chat utilities like conversation buffers by default. Below, we show the [LangChain ConversationBufferMemory](https://python.langchain.com/docs/modules/memory/types/buffer) example applied to the `mixtral_8x7b` model."
"Like any other integration, ChatNVIDIA is fine to support chat utilities like conversation buffers by default. Below, we show the [LangChain ConversationBufferMemory](/docs/modules/memory/types/buffer) example applied to the `mixtral_8x7b` model."
]
},
{
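The conversation-buffer pattern referenced above, sketched with the model id from the text and an otherwise generic LangChain memory recipe:

```python
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain_nvidia_ai_endpoints import ChatNVIDIA

conversation = ConversationChain(
    llm=ChatNVIDIA(model="mixtral_8x7b"),
    memory=ConversationBufferMemory(),  # replays prior turns into each prompt
)
conversation.invoke({"input": "Hi, I'm learning LangChain."})
conversation.invoke({"input": "What did I just say I was learning?"})
```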

@ -107,7 +107,7 @@
"\n",
"# using LangChain Expressive Language chain syntax\n",
"# learn more about the LCEL on\n",
"# https://python.langchain.com/docs/expression_language/why\n",
"# /docs/expression_language/why\n",
"chain = prompt | llm | StrOutputParser()\n",
"\n",
"# for brevity, response is printed in terminal\n",
@ -235,7 +235,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Take a look at the [LangChain Expressive Language (LCEL) Interface](https://python.langchain.com/docs/expression_language/interface) for the other available interfaces for use when a chain is created.\n",
"Take a look at the [LangChain Expressive Language (LCEL) Interface](/docs/expression_language/interface) for the other available interfaces for use when a chain is created.\n",
"\n",
"## Building from source\n",
"\n",
@ -250,7 +250,7 @@
" \n",
"Use the latest version of Ollama and supply the [`format`](https://github.com/jmorganca/ollama/blob/main/docs/api.md#json-mode) flag. The `format` flag will force the model to produce the response in JSON.\n",
"\n",
"> **Note:** You can also try out the experimental [OllamaFunctions](https://python.langchain.com/docs/integrations/chat/ollama_functions) wrapper for convenience."
"> **Note:** You can also try out the experimental [OllamaFunctions](/docs/integrations/chat/ollama_functions) wrapper for convenience."
]
},
{
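The `format` flag in action, as a sketch (the model name is an assumption, and a local Ollama server is required):

```python
from langchain_community.chat_models import ChatOllama

# format="json" forces the model to emit valid JSON.
chat = ChatOllama(model="llama2", format="json")
chat.invoke("Return a JSON object with the name and colors of three fruits.")
```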

@ -118,7 +118,7 @@
"\n",
"1. You can set min and max chunk size, which the system tries to adhere to with minimal truncation. You can set `loader.min_text_length` and `loader.max_text_length` to control these.\n",
"2. By default, only the text for chunks is returned. However, Docugami's XML knowledge graph has additional rich information including semantic tags for entities inside the chunk. Set `loader.include_xml_tags = True` if you want the additional xml metadata on the returned chunks.\n",
"3. In addition, you can set `loader.parent_hierarchy_levels` if you want Docugami to return parent chunks in the chunks it returns. The child chunks point to the parent chunks via the `loader.parent_id_key` value. This is useful e.g. with the [MultiVector Retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/multi_vector) for [small-to-big](https://www.youtube.com/watch?v=ihSiRrOUwmg) retrieval. See detailed example later in this notebook."
"3. In addition, you can set `loader.parent_hierarchy_levels` if you want Docugami to return parent chunks in the chunks it returns. The child chunks point to the parent chunks via the `loader.parent_id_key` value. This is useful e.g. with the [MultiVector Retriever](/docs/modules/data_connection/retrievers/multi_vector) for [small-to-big](https://www.youtube.com/watch?v=ihSiRrOUwmg) retrieval. See detailed example later in this notebook."
]
},
{
@ -457,7 +457,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Documents are inherently semi-structured and the DocugamiLoader is able to navigate the semantic and structural contours of the document to provide parent chunk references on the chunks it returns. This is useful e.g. with the [MultiVector Retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/multi_vector) for [small-to-big](https://www.youtube.com/watch?v=ihSiRrOUwmg) retrieval.\n",
"Documents are inherently semi-structured and the DocugamiLoader is able to navigate the semantic and structural contours of the document to provide parent chunk references on the chunks it returns. This is useful e.g. with the [MultiVector Retriever](/docs/modules/data_connection/retrievers/multi_vector) for [small-to-big](https://www.youtube.com/watch?v=ihSiRrOUwmg) retrieval.\n",
"\n",
"To get parent chunk references, you can set `loader.parent_hierarchy_levels` to a non-zero value."
]
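The knobs described above, gathered into one sketch (values are illustrative; a Docugami access token is assumed to be configured):

```python
from langchain_community.document_loaders import DocugamiLoader

loader = DocugamiLoader(docset_id="YOUR_DOCSET_ID")  # placeholder id
loader.min_text_length = 64
loader.max_text_length = 1024
loader.include_xml_tags = True        # richer XML metadata on each chunk
loader.parent_hierarchy_levels = 2    # include parent chunk references
docs = loader.load()
```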

@ -47,7 +47,7 @@
"id": "04981332",
"metadata": {},
"source": [
"Create a GeoPandas dataframe from [`Open City Data`](https://python.langchain.com/docs/integrations/document_loaders/open_city_data) as an example input."
"Create a GeoPandas dataframe from [`Open City Data`](/docs/integrations/document_loaders/open_city_data) as an example input."
]
},
{

@ -8,7 +8,7 @@
"\n",
"> [Bigtable](https://cloud.google.com/bigtable) is a key-value and wide-column store, ideal for fast access to structured, semi-structured, or unstructured data. Extend your database application to build AI-powered experiences leveraging Bigtable's Langchain integrations.\n",
"\n",
"This notebook goes over how to use [Bigtable](https://cloud.google.com/bigtable) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `BigtableLoader` and `BigtableSaver`.\n",
"This notebook goes over how to use [Bigtable](https://cloud.google.com/bigtable) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `BigtableLoader` and `BigtableSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-bigtable-python/).\n",
"\n",

@ -8,7 +8,7 @@
"\n",
"> [Cloud SQL](https://cloud.google.com/sql) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. It offers [MySQL](https://cloud.google.com/sql/mysql), [PostgreSQL](https://cloud.google.com/sql/postgres), and [SQL Server](https://cloud.google.com/sql/sqlserver) database engines. Extend your database application to build AI-powered experiences leveraging Cloud SQL's Langchain integrations.\n",
"\n",
"This notebook goes over how to use [Cloud SQL for SQL server](https://cloud.google.com/sql/sqlserver) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `MSSQLLoader` and `MSSQLDocumentSaver`.\n",
"This notebook goes over how to use [Cloud SQL for SQL server](https://cloud.google.com/sql/sqlserver) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `MSSQLLoader` and `MSSQLDocumentSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-cloud-sql-mssql-python/).\n",
"\n",

@ -8,7 +8,7 @@
"\n",
"> [Cloud SQL](https://cloud.google.com/sql) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. It offers [MySQL](https://cloud.google.com/sql/mysql), [PostgreSQL](https://cloud.google.com/sql/postgresql), and [SQL Server](https://cloud.google.com/sql/sqlserver) database engines. Extend your database application to build AI-powered experiences leveraging Cloud SQL's Langchain integrations.\n",
"\n",
"This notebook goes over how to use [Cloud SQL for MySQL](https://cloud.google.com/sql/mysql) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `MySQLLoader` and `MySQLDocumentSaver`.\n",
"This notebook goes over how to use [Cloud SQL for MySQL](https://cloud.google.com/sql/mysql) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `MySQLLoader` and `MySQLDocumentSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-cloud-sql-mysql-python/).\n",
"\n",

@ -8,7 +8,7 @@
"\n",
"> [Firestore in Datastore Mode](https://cloud.google.com/datastore) is a NoSQL document database built for automatic scaling, high performance and ease of application development. Extend your database application to build AI-powered experiences leveraging Datastore's Langchain integrations.\n",
"\n",
"This notebook goes over how to use [Firestore in Datastore Mode](https://cloud.google.com/datastore) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `DatastoreLoader` and `DatastoreSaver`.\n",
"This notebook goes over how to use [Firestore in Datastore Mode](https://cloud.google.com/datastore) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `DatastoreLoader` and `DatastoreSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-datastore-python/).\n",
"\n",

@ -18,7 +18,7 @@
"by leveraging the El Carro Langchain integration.\n",
"\n",
"This guide goes over how to use El Carro Langchain integration to\n",
"[save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/)\n",
"[save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/)\n",
"with `ElCarroLoader` and `ElCarroDocumentSaver`. This integration works for any Oracle database, regardless of where it is running.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-el-carro-python/).\n",

@ -8,7 +8,7 @@
"\n",
"> [Firestore](https://cloud.google.com/firestore) is a serverless document-oriented database that scales to meet any demand. Extend your database application to build AI-powered experiences leveraging Firestore's Langchain integrations.\n",
"\n",
"This notebook goes over how to use [Firestore](https://cloud.google.com/firestore) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `FirestoreLoader` and `FirestoreSaver`.\n",
"This notebook goes over how to use [Firestore](https://cloud.google.com/firestore) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `FirestoreLoader` and `FirestoreSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-firestore-python/).\n",
"\n",

@ -10,7 +10,7 @@
"\n",
"> [Google Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis/memorystore-for-redis-overview) is a fully-managed service that is powered by the Redis in-memory data store to build application caches that provide sub-millisecond data access. Extend your database application to build AI-powered experiences leveraging Memorystore for Redis's Langchain integrations.\n",
"\n",
"This notebook goes over how to use [Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis/memorystore-for-redis-overview) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `MemorystoreDocumentLoader` and `MemorystoreDocumentSaver`.\n",
"This notebook goes over how to use [Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis/memorystore-for-redis-overview) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `MemorystoreDocumentLoader` and `MemorystoreDocumentSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-memorystore-redis-python/).\n",
"\n",

@ -8,7 +8,7 @@
"\n",
"> [Spanner](https://cloud.google.com/spanner) is a highly scalable database that combines unlimited scalability with relational semantics, such as secondary indexes, strong consistency, schemas, and SQL providing 99.999% availability in one easy solution.\n",
"\n",
"This notebook goes over how to use [Spanner](https://cloud.google.com/spanner) to [save, load and delete langchain documents](https://python.langchain.com/docs/modules/data_connection/document_loaders/) with `SpannerLoader` and `SpannerDocumentSaver`.\n",
"This notebook goes over how to use [Spanner](https://cloud.google.com/spanner) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `SpannerLoader` and `SpannerDocumentSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-spanner-python/).\n",
"\n",

@ -16,7 +16,7 @@
"---\n",
"The best approach is to install Grobid via docker, see https://grobid.readthedocs.io/en/latest/Grobid-docker/. \n",
"\n",
"(Note: additional instructions can be found [here](https://python.langchain.com/docs/docs/integrations/providers/grobid.mdx).)\n",
"(Note: additional instructions can be found [here](/docs/integrations/providers/grobid).)\n",
"\n",
"Once grobid is up-and-running you can interact as described below. \n"
]
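A sketch of that interaction, assuming Grobid is reachable and some PDFs sit in `./papers`:

```python
from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers import GrobidParser

loader = GenericLoader.from_filesystem(
    "./papers",
    glob="*",
    suffixes=[".pdf"],
    parser=GrobidParser(segment_sentences=False),  # keep paragraph-level chunks
)
docs = loader.load()
```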

@ -39,9 +39,7 @@
"metadata": {},
"outputs": [],
"source": [
"loader = ToMarkdownLoader(\n",
" url=\"https://python.langchain.com/docs/get_started/introduction\", api_key=api_key\n",
")"
"loader = ToMarkdownLoader(url=\"/docs/get_started/introduction\", api_key=api_key)"
]
},
{
@ -72,9 +70,9 @@
"This framework consists of several parts.\n",
"\n",
"- **LangChain Libraries**: The Python and JavaScript libraries. Contains interfaces and integrations for a myriad of components, a basic run time for combining these components into chains and agents, and off-the-shelf implementations of chains and agents.\n",
"- **[LangChain Templates](https://python.langchain.com/docs/templates)**: A collection of easily deployable reference architectures for a wide variety of tasks.\n",
"- **[LangServe](https://python.langchain.com/docs/langserve)**: A library for deploying LangChain chains as a REST API.\n",
"- **[LangSmith](https://python.langchain.com/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.\n",
"- **[LangChain Templates](/docs/templates)**: A collection of easily deployable reference architectures for a wide variety of tasks.\n",
"- **[LangServe](/docs/langserve)**: A library for deploying LangChain chains as a REST API.\n",
"- **[LangSmith](/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.\n",
"\n",
"![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](https://python.langchain.com/assets/images/langchain_stack-f21828069f74484521f38199910007c1.svg)\n",
"\n",
@ -101,11 +99,11 @@
"\n",
"## Get started [](\\#get-started \"Direct link to Get started\")\n",
"\n",
"[Heres](https://python.langchain.com/docs/get_started/installation) how to install LangChain, set up your environment, and start building.\n",
"[Heres](/docs/get_started/installation) how to install LangChain, set up your environment, and start building.\n",
"\n",
"We recommend following our [Quickstart](https://python.langchain.com/docs/get_started/quickstart) guide to familiarize yourself with the framework by building your first LangChain application.\n",
"We recommend following our [Quickstart](/docs/get_started/quickstart) guide to familiarize yourself with the framework by building your first LangChain application.\n",
"\n",
"Read up on our [Security](https://python.langchain.com/docs/security) best practices to make sure you're developing safely with LangChain.\n",
"Read up on our [Security](/docs/security) best practices to make sure you're developing safely with LangChain.\n",
"\n",
"note\n",
"\n",
@ -115,43 +113,43 @@
"\n",
"LCEL is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains.\n",
"\n",
"- **[Overview](https://python.langchain.com/docs/expression_language/)**: LCEL and its benefits\n",
"- **[Interface](https://python.langchain.com/docs/expression_language/interface)**: The standard interface for LCEL objects\n",
"- **[How-to](https://python.langchain.com/docs/expression_language/how_to)**: Key features of LCEL\n",
"- **[Cookbook](https://python.langchain.com/docs/expression_language/cookbook)**: Example code for accomplishing common tasks\n",
"- **[Overview](/docs/expression_language/)**: LCEL and its benefits\n",
"- **[Interface](/docs/expression_language/interface)**: The standard interface for LCEL objects\n",
"- **[How-to](/docs/expression_language/how_to)**: Key features of LCEL\n",
"- **[Cookbook](/docs/expression_language/cookbook)**: Example code for accomplishing common tasks\n",
"\n",
"## Modules [](\\#modules \"Direct link to Modules\")\n",
"\n",
"LangChain provides standard, extendable interfaces and integrations for the following modules:\n",
"\n",
"#### [Model I/O](https://python.langchain.com/docs/modules/model_io/) [](\\#model-io \"Direct link to model-io\")\n",
"#### [Model I/O](/docs/modules/model_io/) [](\\#model-io \"Direct link to model-io\")\n",
"\n",
"Interface with language models\n",
"\n",
"#### [Retrieval](https://python.langchain.com/docs/modules/data_connection/) [](\\#retrieval \"Direct link to retrieval\")\n",
"#### [Retrieval](/docs/modules/data_connection/) [](\\#retrieval \"Direct link to retrieval\")\n",
"\n",
"Interface with application-specific data\n",
"\n",
"#### [Agents](https://python.langchain.com/docs/modules/agents/) [](\\#agents \"Direct link to agents\")\n",
"#### [Agents](/docs/modules/agents/) [](\\#agents \"Direct link to agents\")\n",
"\n",
"Let models choose which tools to use given high-level directives\n",
"\n",
"## Examples, ecosystem, and resources [](\\#examples-ecosystem-and-resources \"Direct link to Examples, ecosystem, and resources\")\n",
"\n",
"### [Use cases](https://python.langchain.com/docs/use_cases/question_answering/) [](\\#use-cases \"Direct link to use-cases\")\n",
"### [Use cases](/docs/use_cases/question_answering/) [](\\#use-cases \"Direct link to use-cases\")\n",
"\n",
"Walkthroughs and techniques for common end-to-end use cases, like:\n",
"\n",
"- [Document question answering](https://python.langchain.com/docs/use_cases/question_answering/)\n",
"- [Chatbots](https://python.langchain.com/docs/use_cases/chatbots/)\n",
"- [Analyzing structured data](https://python.langchain.com/docs/use_cases/sql/)\n",
"- [Document question answering](/docs/use_cases/question_answering/)\n",
"- [Chatbots](/docs/use_cases/chatbots/)\n",
"- [Analyzing structured data](/docs/use_cases/sql/)\n",
"- and much more...\n",
"\n",
"### [Integrations](https://python.langchain.com/docs/integrations/providers/) [](\\#integrations \"Direct link to integrations\")\n",
"### [Integrations](/docs/integrations/providers/) [](\\#integrations \"Direct link to integrations\")\n",
"\n",
"LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](https://python.langchain.com/docs/integrations/providers/).\n",
"LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/integrations/providers/).\n",
"\n",
"### [Guides](https://python.langchain.com/docs/guides/debugging) [](\\#guides \"Direct link to guides\")\n",
"### [Guides](/docs/guides/debugging) [](\\#guides \"Direct link to guides\")\n",
"\n",
"Best practices for developing with LangChain.\n",
"\n",
@ -159,11 +157,11 @@
"\n",
"Head to the reference section for full documentation of all classes and methods in the LangChain and LangChain Experimental Python packages.\n",
"\n",
"### [Developer's guide](https://python.langchain.com/docs/contributing) [](\\#developers-guide \"Direct link to developers-guide\")\n",
"### [Developer's guide](/docs/contributing) [](\\#developers-guide \"Direct link to developers-guide\")\n",
"\n",
"Check out the developer's guide for guidelines on contributing and help getting your dev environment set up.\n",
"\n",
"Head to the [Community navigator](https://python.langchain.com/docs/community) to find places to ask questions, share feedback, meet other developers, and dream about the future of LLMs.\n"
"Head to the [Community navigator](/docs/community) to find places to ask questions, share feedback, meet other developers, and dream about the future of LLMs.\n"
]
}
],

@ -7,7 +7,7 @@
"source": [
"# Baseten\n",
"\n",
"[Baseten](https://baseten.co) is a [Provider](https://python.langchain.com/docs/integrations/providers/baseten) in the LangChain ecosystem that implements the LLMs component.\n",
"[Baseten](https://baseten.co) is a [Provider](/docs/integrations/providers/baseten) in the LangChain ecosystem that implements the LLMs component.\n",
"\n",
"This example demonstrates using an LLM — Mistral 7B hosted on Baseten — with LangChain."
]
@ -83,7 +83,7 @@
"\n",
"We can chain together multiple calls to one or multiple models, which is the whole point of Langchain!\n",
"\n",
"For example, we can replace GPT with Mistral in this [demo of terminal emulation](https://python.langchain.com/docs/modules/agents/how_to/chatgpt_clone)."
"For example, we can replace GPT with Mistral in this demo of terminal emulation."
]
},
{
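A sketch of such a chain around the Baseten-hosted model; the model id is a placeholder for your deployed model:

```python
from langchain_community.llms import Baseten
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate

mistral = Baseten(model="MODEL_VERSION_ID")  # placeholder id
chain = (
    PromptTemplate.from_template("Answer in one sentence: {question}")
    | mistral
    | StrOutputParser()
)
chain.invoke({"question": "What is Mistral 7B?"})
```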
@ -167,7 +167,7 @@
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@ -181,10 +181,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
},
"orig_nbformat": 4
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

@ -25,7 +25,7 @@
"id": "bead5ede-d9cc-44b9-b062-99c90a10cf40",
"metadata": {},
"source": [
"A guide on using [Google Generative AI](https://developers.generativeai.google/) models with Langchain. Note: It's separate from Google Cloud Vertex AI [integration](https://python.langchain.com/docs/integrations/llms/google_vertex_ai_palm)."
"A guide on using [Google Generative AI](https://developers.generativeai.google/) models with Langchain. Note: It's separate from Google Cloud Vertex AI [integration](/docs/integrations/llms/google_vertex_ai_palm)."
]
},
{
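A minimal sketch, assuming a `GOOGLE_API_KEY` is set and with the model name chosen for illustration:

```python
from langchain_google_genai import GoogleGenerativeAI

llm = GoogleGenerativeAI(model="gemini-pro")
print(llm.invoke("What are some tips for improving code readability?"))
```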

@ -105,7 +105,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"To learn more about the LangChain Expressive Language and the available methods on an LLM, see the [LCEL Interface](https://python.langchain.com/docs/expression_language/interface)"
"To learn more about the LangChain Expressive Language and the available methods on an LLM, see the [LCEL Interface](/docs/expression_language/interface)"
]
}
],
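The core LCEL interface methods, sketched on a toy chain (assumes `langchain-openai` and an `OPENAI_API_KEY`):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

chain = (
    ChatPromptTemplate.from_template("Tell me a joke about {topic}")
    | ChatOpenAI()
    | StrOutputParser()
)

chain.invoke({"topic": "bears"})                      # one input, one output
chain.batch([{"topic": "bears"}, {"topic": "cats"}])  # many inputs in parallel
for chunk in chain.stream({"topic": "bears"}):        # incremental output
    print(chunk, end="")
```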

@ -175,7 +175,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"To learn more about the LangChain Expressive Language and the available methods on an LLM, see the [LCEL Interface](https://python.langchain.com/docs/expression_language/interface)"
"To learn more about the LangChain Expressive Language and the available methods on an LLM, see the [LCEL Interface](/docs/expression_language/interface)"
]
},
{

@ -14,7 +14,7 @@
"\n",
"This example showcases how to connect to [PromptLayer](https://www.promptlayer.com) to start recording your OpenAI requests.\n",
"\n",
"Another example is [here](https://python.langchain.com/docs/integrations/providers/promptlayer)."
"Another example is [here](/docs/integrations/providers/promptlayer)."
]
},
{

@ -290,7 +290,7 @@
"metadata": {},
"source": [
"## Streaming Response\n",
"You can optionally stream the response as it is produced, which is helpful to show interactivity to users for time-consuming generations. See detailed docs on [Streaming](https://python.langchain.com/docs/modules/model_io/llms/how_to/streaming_llm) for more information."
"You can optionally stream the response as it is produced, which is helpful to show interactivity to users for time-consuming generations. See detailed docs on [Streaming](/docs/modules/model_io/llms/streaming_llm) for more information."
]
},
{
@ -540,9 +540,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "poetry-venv"
"name": "python3"
},
"language_info": {
"codemirror_mode": {

@ -288,7 +288,7 @@
"\n",
"We can easily combine this message history class with [LCEL Runnables](/docs/expression_language/how_to/message_history)\n",
"\n",
"To do this we will use one of [Google's Vertex AI chat models](https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm) which requires that you [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) in your Google Cloud Project.\n"
"To do this we will use one of [Google's Vertex AI chat models](/docs/integrations/chat/google_vertex_ai_palm) which requires that you [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) in your Google Cloud Project.\n"
]
},
{
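One way that combination can look, sketched with an in-memory history standing in for the notebook's history class:

```python
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_google_vertexai import ChatVertexAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{question}"),
])
chain = prompt | ChatVertexAI()

store = {}  # session_id -> history; swap in the notebook's history class here

def get_history(session_id: str):
    if session_id not in store:
        store[session_id] = ChatMessageHistory()
    return store[session_id]

chain_with_history = RunnableWithMessageHistory(
    chain,
    get_history,
    input_messages_key="question",
    history_messages_key="history",
)
```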

@ -293,7 +293,7 @@
"\n",
"We can easily combine this message history class with [LCEL Runnables](/docs/expression_language/how_to/message_history)\n",
"\n",
"To do this we will use one of [Google's Vertex AI chat models](https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm) which requires that you [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) in your Google Cloud Project.\n"
"To do this we will use one of [Google's Vertex AI chat models](/docs/integrations/chat/google_vertex_ai_palm) which requires that you [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) in your Google Cloud Project.\n"
]
},
{

@ -393,7 +393,7 @@
"\n",
"We can easily combine this message history class with [LCEL Runnables](/docs/expression_language/how_to/message_history)\n",
"\n",
"To do this we will use one of [Google's Vertex AI chat models](https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm) which requires that you [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) in your Google Cloud Project.\n"
"To do this we will use one of [Google's Vertex AI chat models](/docs/integrations/chat/google_vertex_ai_palm) which requires that you [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) in your Google Cloud Project.\n"
]
},
{

@ -394,7 +394,7 @@
"\n",
"We can easily combine this message history class with [LCEL Runnables](/docs/expression_language/how_to/message_history)\n",
"\n",
"To do this we will use one of [Google's Vertex AI chat models](https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm) which requires that you [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) in your Google Cloud Project.\n"
"To do this we will use one of [Google's Vertex AI chat models](/docs/integrations/chat/google_vertex_ai_palm) which requires that you [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) in your Google Cloud Project.\n"
]
},
{

@ -394,7 +394,7 @@
"\n",
"We can easily combine this message history class with [LCEL Runnables](/docs/expression_language/how_to/message_history)\n",
"\n",
"To do this we will use one of [Google's Vertex AI chat models](https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm) which requires that you [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) in your Google Cloud Project.\n"
"To do this we will use one of [Google's Vertex AI chat models](/docs/integrations/chat/google_vertex_ai_palm) which requires that you [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) in your Google Cloud Project.\n"
]
},
{

@ -61,7 +61,7 @@
"id": "b60dc735",
"metadata": {},
"source": [
"We can easily combine this message history class with [LCEL Runnables](https://python.langchain.com/docs/expression_language/how_to/message_history).\n",
"We can easily combine this message history class with [LCEL Runnables](/docs/expression_language/how_to/message_history).\n",
"\n",
"The history will be persisted across re-runs of the Streamlit app within a given user session. A given `StreamlitChatMessageHistory` will NOT be persisted or shared across user sessions."
]
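A sketch of the class itself; the session-state key is illustrative:

```python
from langchain_community.chat_message_histories import StreamlitChatMessageHistory

history = StreamlitChatMessageHistory(key="chat_messages")
if not history.messages:          # first run of this user session
    history.add_ai_message("How can I help you?")
for msg in history.messages:      # survives Streamlit re-runs, not new sessions
    print(msg.type, msg.content)
```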

@ -7,7 +7,7 @@
>It optimizes setup and configuration details, including GPU usage.
>For a complete list of supported models and model variants, see the [Ollama model library](https://ollama.ai/library).
See [this guide](https://python.langchain.com/docs/guides/local_llms#quickstart) for more details
See [this guide](/docs/guides/local_llms#quickstart) for more details
on how to use `Ollama` with LangChain.
## Installation and Setup

@ -22,7 +22,7 @@
"See the [Vectara API documentation](https://docs.vectara.com/docs/) for more information on how to use the API.\n",
"\n",
"This notebook shows how to use functionality related to the `Vectara`'s integration with langchain.\n",
"Specificaly we will demonstrate how to use chaining with [LangChain's Expression Language](https://python.langchain.com/docs/expression_language/) and using Vectara's integrated summarization capability."
"Specificaly we will demonstrate how to use chaining with [LangChain's Expression Language](/docs/expression_language/) and using Vectara's integrated summarization capability."
]
},
{
@ -219,7 +219,7 @@
"id": "8f16bf8d",
"metadata": {},
"source": [
"Vectara's \"RAG as a service\" does a lot of the heavy lifting in creating question answering or chatbot chains. The integration with LangChain provides the option to use additional capabilities such as query pre-processing like `SelfQueryRetriever` or `MultiQueryRetriever`. Let's look at an example of using the [MultiQueryRetriever](https://python.langchain.com/docs/modules/data_connection/retrievers/MultiQueryRetriever).\n",
"Vectara's \"RAG as a service\" does a lot of the heavy lifting in creating question answering or chatbot chains. The integration with LangChain provides the option to use additional capabilities such as query pre-processing like `SelfQueryRetriever` or `MultiQueryRetriever`. Let's look at an example of using the [MultiQueryRetriever](/docs/modules/data_connection/retrievers/MultiQueryRetriever).\n",
"\n",
"Since MQR uses an LLM we have to set that up - here we choose `ChatOpenAI`:"
]
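A sketch of that wrapping, where `vectara` stands in for the vector store configured earlier in the notebook:

```python
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain_openai import ChatOpenAI

# MQR uses the LLM to rephrase the query several ways, then merges results.
mqr = MultiQueryRetriever.from_llm(
    retriever=vectara.as_retriever(),
    llm=ChatOpenAI(temperature=0),
)
docs = mqr.get_relevant_documents("What does Vectara's summarization do?")
```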

@ -7,7 +7,7 @@
"source": [
"# Fleet AI Libraries Context\n",
"\n",
"The Fleet AI team is on a mission to embed the world's most important data. They've started by embedding the top 1200 Python libraries to enable code generation with up-to-date knowledge. They've been kind enough to share their embeddings of the [LangChain docs](https://python.langchain.com/docs/get_started/introduction) and [API reference](https://api.python.langchain.com/en/latest/api_reference.html).\n",
"The Fleet AI team is on a mission to embed the world's most important data. They've started by embedding the top 1200 Python libraries to enable code generation with up-to-date knowledge. They've been kind enough to share their embeddings of the [LangChain docs](/docs/get_started/introduction) and [API reference](https://api.python.langchain.com/en/latest/api_reference.html).\n",
"\n",
"Let's take a look at how we can use these embeddings to power a docs retrieval system and ultimately a simple code generating chain!"
]

@ -100,7 +100,7 @@
"text/plain": [
"[Document(page_content='This walkthrough demonstrates how to use an agent optimized for conversation. Other agents are often optimized for using tools to figure out the best response, which is not ideal in a conversational setting where you may want the agent to be able to chat with the user as well.\\n\\nIf we compare it to the standard ReAct agent, the main difference is the prompt. We want it to be much more conversational.\\n\\nfrom langchain.agents import AgentType, Tool, initialize_agent\\n\\nfrom langchain_openai import OpenAI\\n\\nfrom langchain.memory import ConversationBufferMemory\\n\\nfrom langchain_community.utilities import SerpAPIWrapper\\n\\nsearch = SerpAPIWrapper() tools = \\\\[ Tool( name=\"Current Search\", func=search.run, description=\"useful for when you need to answer questions about current events or the current state of the world\", ), \\\\]\\n\\n\\\\\\nllm = OpenAI(temperature=0)\\n\\nUsing LCEL\\n\\nWe will first show how to create this agent using LCEL\\n\\nfrom langchain import hub\\n\\nfrom langchain.agents.format_scratchpad import format_log_to_str\\n\\nfrom langchain.agents.output_parsers import ReActSingleInputOutputParser\\n\\nfrom langchain.tools.render import render_text_description\\n\\nprompt = hub.pull(\"hwchase17/react-chat\")\\n\\nprompt = prompt.partial( tools=render_text_description(tools), tool_names=\", \".join(\\\\[[t.name](http://t.name) for t in tools\\\\]), )\\n\\nllm_with_stop = llm.bind(stop=\\\\[\"\\\\nObservation\"\\\\])\\n\\nagent = ( { \"input\": lambda x: x\\\\[\"input\"\\\\], \"agent_scratchpad\": lambda x: format_log_to_str(x\\\\[\"intermediate_steps\"\\\\]), \"chat_history\": lambda x: x\\\\[\"chat_history\"\\\\], } | prompt | llm_with_stop | ReActSingleInputOutputParser() )\\n\\nfrom langchain.agents import AgentExecutor\\n\\nmemory = ConversationBufferMemory(memory_key=\"chat_history\") agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, memory=memory)\\n\\nagent_executor.invoke({\"input\": \"hi, i am bob\"})\\\\[\"output\"\\\\]\\n\\n```\\n> Entering new AgentExecutor chain...\\n\\nThought: Do I need to use a tool? No\\nFinal Answer: Hi Bob, nice to meet you! How can I help you today?\\n\\n> Finished chain.\\n```\\n\\n\\\\\\n\\'Hi Bob, nice to meet you! How can I help you today?\\'\\n\\nagent_executor.invoke({\"input\": \"whats my name?\"})\\\\[\"output\"\\\\]\\n\\n```\\n> Entering new AgentExecutor chain...\\n\\nThought: Do I need to use a tool? No\\nFinal Answer: Your name is Bob.\\n\\n> Finished chain.\\n```\\n\\n\\\\\\n\\'Your name is Bob.\\'\\n\\nagent_executor.invoke({\"input\": \"what are some movies showing 9/21/2023?\"})\\\\[\"output\"\\\\]\\n\\n```\\n> Entering new AgentExecutor chain...\\n\\nThought: Do I need to use a tool? Yes\\nAction: Current Search\\nAction Input: Movies showing 9/21/2023[\\'September 2023 Movies: The Creator • Dumb Money • Expend4bles • The Kill Room • The Inventor • The Equalizer 3 • PAW Patrol: The Mighty Movie, ...\\'] Do I need to use a tool? 
No\\nFinal Answer: According to current search, some movies showing on 9/21/2023 are The Creator, Dumb Money, Expend4bles, The Kill Room, The Inventor, The Equalizer 3, and PAW Patrol: The Mighty Movie.\\n\\n> Finished chain.\\n```\\n\\n\\\\\\n\\'According to current search, some movies showing on 9/21/2023 are The Creator, Dumb Money, Expend4bles, The Kill Room, The Inventor, The Equalizer 3, and PAW Patrol: The Mighty Movie.\\'\\n\\n\\\\\\nUse the off-the-shelf agent\\n\\nWe can also create this agent using the off-the-shelf agent class\\n\\nagent_executor = initialize_agent( tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory, )\\n\\nUse a chat model\\n\\nWe can also use a chat model here. The main difference here is in the prompts used.\\n\\nfrom langchain import hub\\n\\nfrom langchain_openai import ChatOpenAI\\n\\nprompt = hub.pull(\"hwchase17/react-chat-json\") chat_model = ChatOpenAI(temperature=0, model=\"gpt-4\")\\n\\nprompt = prompt.partial( tools=render_text_description(tools), tool_names=\", \".join(\\\\[[t.name](http://t.name) for t in tools\\\\]), )\\n\\nchat_model_with_stop = chat_model.bind(stop=\\\\[\"\\\\nObservation\"\\\\])\\n\\nfrom langchain.agents.format_scratchpad import format_log_to_messages\\n\\nfrom langchain.agents.output_parsers import JSONAgentOutputParser\\n\\n# We need some extra steering, or the c', metadata={'title': 'Conversational', 'source': 'https://d01.getoutline.com/doc/conversational-B5dBkUgQ4b'}),\n",
" Document(page_content='Quickstart\\n\\nIn this quickstart we\\'ll show you how to:\\n\\nGet setup with LangChain, LangSmith and LangServe\\n\\nUse the most basic and common components of LangChain: prompt templates, models, and output parsers\\n\\nUse LangChain Expression Language, the protocol that LangChain is built on and which facilitates component chaining\\n\\nBuild a simple application with LangChain\\n\\nTrace your application with LangSmith\\n\\nServe your application with LangServe\\n\\nThat\\'s a fair amount to cover! Let\\'s dive in.\\n\\nSetup\\n\\nInstallation\\n\\nTo install LangChain run:\\n\\nPip\\n\\nConda\\n\\npip install langchain\\n\\nFor more details, see our Installation guide.\\n\\nEnvironment\\n\\nUsing LangChain will usually require integrations with one or more model providers, data stores, APIs, etc. For this example, we\\'ll use OpenAI\\'s model APIs.\\n\\nFirst we\\'ll need to install their Python package:\\n\\npip install openai\\n\\nAccessing the API requires an API key, which you can get by creating an account and heading here. Once we have a key we\\'ll want to set it as an environment variable by running:\\n\\nexport OPENAI_API_KEY=\"...\"\\n\\nIf you\\'d prefer not to set an environment variable you can pass the key in directly via the openai_api_key named parameter when initiating the OpenAI LLM class:\\n\\nfrom langchain_openai import ChatOpenAI\\n\\nllm = ChatOpenAI(openai_api_key=\"...\")\\n\\nLangSmith\\n\\nMany of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with LangSmith.\\n\\nNote that LangSmith is not needed, but it is helpful. If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces:\\n\\nexport LANGCHAIN_TRACING_V2=\"true\" export LANGCHAIN_API_KEY=...\\n\\nLangServe\\n\\nLangServe helps developers deploy LangChain chains as a REST API. You do not need to use LangServe to use LangChain, but in this guide we\\'ll show how you can deploy your app with LangServe.\\n\\nInstall with:\\n\\npip install \"langserve\\\\[all\\\\]\"\\n\\nBuilding with LangChain\\n\\nLangChain provides many modules that can be used to build language model applications. Modules can be used as standalones in simple applications and they can be composed for more complex use cases. Composition is powered by LangChain Expression Language (LCEL), which defines a unified Runnable interface that many modules implement, making it possible to seamlessly chain components.\\n\\nThe simplest and most common chain contains three things:\\n\\nLLM/Chat Model: The language model is the core reasoning engine here. In order to work with LangChain, you need to understand the different types of language models and how to work with them. Prompt Template: This provides instructions to the language model. This controls what the language model outputs, so understanding how to construct prompts and different prompting strategies is crucial. Output Parser: These translate the raw response from the language model to a more workable format, making it easy to use the output downstream. In this guide we\\'ll cover those three components individually, and then go over how to combine them. 
Understanding these concepts will set you up well for being able to use and customize LangChain applications. Most LangChain applications allow you to configure the model and/or the prompt, so knowing how to take advantage of this will be a big enabler.\\n\\nLLM / Chat Model\\n\\nThere are two types of language models:\\n\\nLLM: underlying model takes a string as input and returns a string\\n\\nChatModel: underlying model takes a list of messages as input and returns a message\\n\\nStrings are simple, but what exactly are messages? The base message interface is defined by BaseMessage, which has two required attributes:\\n\\ncontent: The content of the message. Usually a string. role: The entity from which the BaseMessage is coming. LangChain provides several ob', metadata={'title': 'Quick Start', 'source': 'https://d01.getoutline.com/doc/quick-start-jGuGGGOTuL'}),\n",
" Document(page_content='This walkthrough showcases using an agent to implement the [ReAct](https://react-lm.github.io/) logic.\\n\\n```javascript\\nfrom langchain.agents import AgentType, initialize_agent, load_tools\\nfrom langchain_openai import OpenAI\\n```\\n\\nFirst, let\\'s load the language model we\\'re going to use to control the agent.\\n\\n```javascript\\nllm = OpenAI(temperature=0)\\n```\\n\\nNext, let\\'s load some tools to use. Note that the llm-math tool uses an LLM, so we need to pass that in.\\n\\n```javascript\\ntools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)\\n```\\n\\n## Using LCEL[\\u200b](https://python.langchain.com/docs/modules/agents/agent_types/react#using-lcel \"Direct link to Using LCEL\")\\n\\nWe will first show how to create the agent using LCEL\\n\\n```javascript\\nfrom langchain import hub\\nfrom langchain.agents.format_scratchpad import format_log_to_str\\nfrom langchain.agents.output_parsers import ReActSingleInputOutputParser\\nfrom langchain.tools.render import render_text_description\\n```\\n\\n```javascript\\nprompt = hub.pull(\"hwchase17/react\")\\nprompt = prompt.partial(\\n tools=render_text_description(tools),\\n tool_names=\", \".join([t.name for t in tools]),\\n)\\n```\\n\\n```javascript\\nllm_with_stop = llm.bind(stop=[\"\\\\nObservation\"])\\n```\\n\\n```javascript\\nagent = (\\n {\\n \"input\": lambda x: x[\"input\"],\\n \"agent_scratchpad\": lambda x: format_log_to_str(x[\"intermediate_steps\"]),\\n }\\n | prompt\\n | llm_with_stop\\n | ReActSingleInputOutputParser()\\n)\\n```\\n\\n```javascript\\nfrom langchain.agents import AgentExecutor\\n```\\n\\n```javascript\\nagent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)\\n```\\n\\n```javascript\\nagent_executor.invoke(\\n {\\n \"input\": \"Who is Leo DiCaprio\\'s girlfriend? What is her current age raised to the 0.43 power?\"\\n }\\n)\\n```\\n\\n```javascript\\n \\n \\n > Entering new AgentExecutor chain...\\n I need to find out who Leo DiCaprio\\'s girlfriend is and then calculate her age raised to the 0.43 power.\\n Action: Search\\n Action Input: \"Leo DiCaprio girlfriend\"model Vittoria Ceretti I need to find out Vittoria Ceretti\\'s age\\n Action: Search\\n Action Input: \"Vittoria Ceretti age\"25 years I need to calculate 25 raised to the 0.43 power\\n Action: Calculator\\n Action Input: 25^0.43Answer: 3.991298452658078 I now know the final answer\\n Final Answer: Leo DiCaprio\\'s girlfriend is Vittoria Ceretti and her current age raised to the 0.43 power is 3.991298452658078.\\n \\n > Finished chain.\\n\\n\\n\\n\\n\\n {\\'input\\': \"Who is Leo DiCaprio\\'s girlfriend? What is her current age raised to the 0.43 power?\",\\n \\'output\\': \"Leo DiCaprio\\'s girlfriend is Vittoria Ceretti and her current age raised to the 0.43 power is 3.991298452658078.\"}\\n```\\n\\n## Using ZeroShotReactAgent[\\u200b](https://python.langchain.com/docs/modules/agents/agent_types/react#using-zeroshotreactagent \"Direct link to Using ZeroShotReactAgent\")\\n\\nWe will now show how to use the agent with an off-the-shelf agent implementation\\n\\n```javascript\\nagent_executor = initialize_agent(\\n tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\\n)\\n```\\n\\n```javascript\\nagent_executor.invoke(\\n {\\n \"input\": \"Who is Leo DiCaprio\\'s girlfriend? 
What is her current age raised to the 0.43 power?\"\\n }\\n)\\n```\\n\\n```javascript\\n \\n \\n > Entering new AgentExecutor chain...\\n I need to find out who Leo DiCaprio\\'s girlfriend is and then calculate her age raised to the 0.43 power.\\n Action: Search\\n Action Input: \"Leo DiCaprio girlfriend\"\\n Observation: model Vittoria Ceretti\\n Thought: I need to find out Vittoria Ceretti\\'s age\\n Action: Search\\n Action Input: \"Vittoria Ceretti age\"\\n Observation: 25 years\\n Thought: I need to calculate 25 raised to the 0.43 power\\n Action: Calculator\\n Action Input: 25^0.43\\n Observation: Answer: 3.991298452658078\\n Thought: I now know the final answer\\n Final Answer: Leo DiCaprio\\'s girlfriend is Vittoria Ceretti and her current age raised to the 0.43 power is 3.991298452658078.\\n \\n > Finished chain.\\n\\n\\n\\n\\n\\n {\\'input\\': \"Who is L', metadata={'title': 'ReAct', 'source': 'https://d01.getoutline.com/doc/react-d6rxRS1MHk'})]"
" Document(page_content='This walkthrough showcases using an agent to implement the [ReAct](https://react-lm.github.io/) logic.\\n\\n```javascript\\nfrom langchain.agents import AgentType, initialize_agent, load_tools\\nfrom langchain_openai import OpenAI\\n```\\n\\nFirst, let\\'s load the language model we\\'re going to use to control the agent.\\n\\n```javascript\\nllm = OpenAI(temperature=0)\\n```\\n\\nNext, let\\'s load some tools to use. Note that the llm-math tool uses an LLM, so we need to pass that in.\\n\\n```javascript\\ntools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)\\n```\\n\\n## Using LCEL[\\u200b](/docs/modules/agents/agent_types/react#using-lcel \"Direct link to Using LCEL\")\\n\\nWe will first show how to create the agent using LCEL\\n\\n```javascript\\nfrom langchain import hub\\nfrom langchain.agents.format_scratchpad import format_log_to_str\\nfrom langchain.agents.output_parsers import ReActSingleInputOutputParser\\nfrom langchain.tools.render import render_text_description\\n```\\n\\n```javascript\\nprompt = hub.pull(\"hwchase17/react\")\\nprompt = prompt.partial(\\n tools=render_text_description(tools),\\n tool_names=\", \".join([t.name for t in tools]),\\n)\\n```\\n\\n```javascript\\nllm_with_stop = llm.bind(stop=[\"\\\\nObservation\"])\\n```\\n\\n```javascript\\nagent = (\\n {\\n \"input\": lambda x: x[\"input\"],\\n \"agent_scratchpad\": lambda x: format_log_to_str(x[\"intermediate_steps\"]),\\n }\\n | prompt\\n | llm_with_stop\\n | ReActSingleInputOutputParser()\\n)\\n```\\n\\n```javascript\\nfrom langchain.agents import AgentExecutor\\n```\\n\\n```javascript\\nagent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)\\n```\\n\\n```javascript\\nagent_executor.invoke(\\n {\\n \"input\": \"Who is Leo DiCaprio\\'s girlfriend? What is her current age raised to the 0.43 power?\"\\n }\\n)\\n```\\n\\n```javascript\\n \\n \\n > Entering new AgentExecutor chain...\\n I need to find out who Leo DiCaprio\\'s girlfriend is and then calculate her age raised to the 0.43 power.\\n Action: Search\\n Action Input: \"Leo DiCaprio girlfriend\"model Vittoria Ceretti I need to find out Vittoria Ceretti\\'s age\\n Action: Search\\n Action Input: \"Vittoria Ceretti age\"25 years I need to calculate 25 raised to the 0.43 power\\n Action: Calculator\\n Action Input: 25^0.43Answer: 3.991298452658078 I now know the final answer\\n Final Answer: Leo DiCaprio\\'s girlfriend is Vittoria Ceretti and her current age raised to the 0.43 power is 3.991298452658078.\\n \\n > Finished chain.\\n\\n\\n\\n\\n\\n {\\'input\\': \"Who is Leo DiCaprio\\'s girlfriend? What is her current age raised to the 0.43 power?\",\\n \\'output\\': \"Leo DiCaprio\\'s girlfriend is Vittoria Ceretti and her current age raised to the 0.43 power is 3.991298452658078.\"}\\n```\\n\\n## Using ZeroShotReactAgent[\\u200b](/docs/modules/agents/agent_types/react#using-zeroshotreactagent \"Direct link to Using ZeroShotReactAgent\")\\n\\nWe will now show how to use the agent with an off-the-shelf agent implementation\\n\\n```javascript\\nagent_executor = initialize_agent(\\n tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\\n)\\n```\\n\\n```javascript\\nagent_executor.invoke(\\n {\\n \"input\": \"Who is Leo DiCaprio\\'s girlfriend? 
What is her current age raised to the 0.43 power?\"\\n }\\n)\\n```\\n\\n```javascript\\n \\n \\n > Entering new AgentExecutor chain...\\n I need to find out who Leo DiCaprio\\'s girlfriend is and then calculate her age raised to the 0.43 power.\\n Action: Search\\n Action Input: \"Leo DiCaprio girlfriend\"\\n Observation: model Vittoria Ceretti\\n Thought: I need to find out Vittoria Ceretti\\'s age\\n Action: Search\\n Action Input: \"Vittoria Ceretti age\"\\n Observation: 25 years\\n Thought: I need to calculate 25 raised to the 0.43 power\\n Action: Calculator\\n Action Input: 25^0.43\\n Observation: Answer: 3.991298452658078\\n Thought: I now know the final answer\\n Final Answer: Leo DiCaprio\\'s girlfriend is Vittoria Ceretti and her current age raised to the 0.43 power is 3.991298452658078.\\n \\n > Finished chain.\\n\\n\\n\\n\\n\\n {\\'input\\': \"Who is L', metadata={'title': 'ReAct', 'source': 'https://d01.getoutline.com/doc/react-d6rxRS1MHk'})]"
]
},
"execution_count": 4,

@ -6,7 +6,7 @@
"source": [
"# Spark SQL\n",
"\n",
"This notebook shows how to use agents to interact with `Spark SQL`. Similar to [SQL Database Agent](https://python.langchain.com/docs/integrations/toolkits/sql_database), it is designed to address general inquiries about `Spark SQL` and facilitate error recovery.\n",
"This notebook shows how to use agents to interact with `Spark SQL`. Similar to [SQL Database Agent](/docs/integrations/toolkits/sql_database), it is designed to address general inquiries about `Spark SQL` and facilitate error recovery.\n",
"\n",
"**NOTE: Note that, as this agent is in active development, all answers might not be correct. Additionally, it is not guaranteed that the agent won't perform DML statements on your Spark cluster given certain questions. Be careful running it on sensitive data!**"
]
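For reference, a minimal sketch of the set-up this cell describes, assuming an active Spark session, the `pyspark` package, and an OpenAI key in the environment (the schema name is illustrative):

```python
from langchain_community.agent_toolkits import SparkSQLToolkit, create_spark_sql_agent
from langchain_community.utilities.spark_sql import SparkSQL
from langchain_openai import ChatOpenAI

# SparkSQL() attaches to the active Spark session; "langchain_example" is a hypothetical schema.
spark_sql = SparkSQL(schema="langchain_example")
llm = ChatOpenAI(temperature=0)
toolkit = SparkSQLToolkit(db=spark_sql, llm=llm)
agent_executor = create_spark_sql_agent(llm=llm, toolkit=toolkit, verbose=True)

agent_executor.run("Describe the titanic table")
```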

@ -9,8 +9,6 @@
"\n",
"This notebook showcases an agent designed to interact with a `SQL` databases. \n",
"\n",
"The agent builds from [SQLDatabaseChain](https://python.langchain.com/docs/use_cases/tabular/sqlite). \n",
"\n",
"It is designed to answer more general questions about a database, as well as recover from errors.\n",
"\n",
"Note that, as this agent is in active development, all answers might not be correct. \n",
@ -70,7 +68,7 @@
"source": [
"## Quickstart\n",
"\n",
"Following the [SQL use case docs](https://python.langchain.com/docs/use_cases/sql/agents), we can use the `create_sql_agent` helper."
"Following the [SQL use case docs](/docs/use_cases/sql/agents), we can use the `create_sql_agent` helper."
]
},
{
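A hedged sketch of the quickstart pattern this cell points to, assuming the Chinook SQLite sample database is on disk and an OpenAI key is set:

```python
from langchain_community.agent_toolkits import create_sql_agent
from langchain_community.utilities import SQLDatabase
from langchain_openai import ChatOpenAI

db = SQLDatabase.from_uri("sqlite:///Chinook.db")  # assumes Chinook.db exists locally
llm = ChatOpenAI(temperature=0)
agent_executor = create_sql_agent(llm, db=db, agent_type="openai-tools", verbose=True)

agent_executor.invoke({"input": "How many employees are there?"})
```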
@ -698,7 +696,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
"version": "3.9.1"
}
},
"nbformat": 4,

@ -156,7 +156,7 @@
"source": [
"## Using tool with an agent chain\n",
"\n",
"Reddit search functionality is also provided as a multi-input tool. In this example, we adapt [existing code from the docs](https://python.langchain.com/docs/modules/agents/how_to/sharedmemory_for_tools), and use ChatOpenAI to create an agent chain with memory. This agent chain is able to pull information from Reddit and use these posts to respond to subsequent input. \n",
"Reddit search functionality is also provided as a multi-input tool. In this example, we adapt [existing code from the docs](/docs/modules/memory/agent_with_memory), and use ChatOpenAI to create an agent chain with memory. This agent chain is able to pull information from Reddit and use these posts to respond to subsequent input. \n",
"\n",
"To run the example, add your reddit API access information and also get an OpenAI key from the [OpenAI API](https://help.openai.com/en/articles/4936850-where-do-i-find-my-api-key)."
]
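Before the adapted code below, note the memory wiring the example relies on. A sketch, assuming the agent's prompt uses a `chat_history` variable:

```python
from langchain.memory import ConversationBufferMemory

# "chat_history" must match the variable name referenced in the agent's prompt.
memory = ConversationBufferMemory(memory_key="chat_history")
```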
@ -167,7 +167,7 @@
"metadata": {},
"outputs": [],
"source": [
"# Adapted code from https://python.langchain.com/docs/modules/agents/how_to/sharedmemory_for_tools\n",
"# Adapted code from /docs/modules/agents/how_to/sharedmemory_for_tools\n",
"\n",
"from langchain.agents import AgentExecutor, StructuredChatAgent, Tool\n",
"from langchain.chains import LLMChain\n",
@ -235,7 +235,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.11.5 64-bit ('langchaindev')",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@ -249,7 +249,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
"version": "3.9.1"
},
"vscode": {
"interpreter": {
@ -258,5 +258,5 @@
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

@ -14,7 +14,7 @@
"This notebook shows how to use functionality related to the `FAISS` vector database using `asyncio`.\n",
"LangChain implemented the synchronous and asynchronous vector store functions.\n",
"\n",
"See `synchronous` version [here](https://python.langchain.com/docs/integrations/vectorstores/faiss)."
"See `synchronous` version [here](/docs/integrations/vectorstores/faiss)."
]
},
{
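A short sketch of the async counterparts, assuming `faiss-cpu`, `langchain-openai`, and an `OPENAI_API_KEY` are available:

```python
import asyncio

from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

async def main() -> None:
    # afrom_texts / asimilarity_search mirror the sync from_texts / similarity_search.
    db = await FAISS.afrom_texts(["harrison worked at kensho"], OpenAIEmbeddings())
    docs = await db.asimilarity_search("Where did harrison work?")
    print(docs[0].page_content)

asyncio.run(main())
```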

@ -263,7 +263,7 @@
"source": [
"### Create an embedding class instance\n",
"\n",
"You can use any [LangChain embeddings model](https://python.langchain.com/docs/integrations/text_embedding/).\n",
"You can use any [LangChain embeddings model](/docs/integrations/text_embedding/).\n",
"You may need to enable Vertex AI API to use `VertexAIEmbeddings`. We recommend setting the embedding model's version for production, learn more about the [Text embeddings models](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text-embeddings)."
]
},
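For instance, a sketch with one published Vertex AI model version pinned, as the cell recommends (credentials and project are assumed to come from the environment):

```python
from langchain_google_vertexai import VertexAIEmbeddings

# Pin a concrete model version for production rather than a floating alias.
embeddings = VertexAIEmbeddings(model_name="textembedding-gecko@003")
vector = embeddings.embed_query("hello, world!")
```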

@ -183,7 +183,7 @@
"`gcloud services enable aiplatform.googleapis.com --project {PROJECT_ID}`\n",
"(replace `{PROJECT_ID}` with the name of your project).\n",
"\n",
"You can use any [LangChain embeddings model](https://python.langchain.com/docs/integrations/text_embedding/)."
"You can use any [LangChain embeddings model](/docs/integrations/text_embedding/)."
]
},
{

@ -264,7 +264,7 @@
"source": [
"### Create an embedding class instance\n",
"\n",
"You can use any [LangChain embeddings model](https://python.langchain.com/docs/integrations/text_embedding/).\n",
"You can use any [LangChain embeddings model](/docs/integrations/text_embedding/).\n",
"You may need to enable Vertex AI API to use `VertexAIEmbeddings`. We recommend setting the embedding model's version for production, learn more about the [Text embeddings models](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text-embeddings)."
]
},

@ -211,7 +211,7 @@
"source": [
"### Create an embedding class instance\n",
"\n",
"You can use any [LangChain embeddings model](https://python.langchain.com/docs/integrations/text_embedding/).\n",
"You can use any [LangChain embeddings model](/docs/integrations/text_embedding/).\n",
"You may need to enable Vertex AI API to use `VertexAIEmbeddings`. We recommend setting the embedding model's version for production, learn more about the [Text embeddings models](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text-embeddings)."
]
},

@ -123,7 +123,7 @@
"\n",
"In the realm of multi-modal data analysis, the integration of diverse information types like images and text has become increasingly crucial. One powerful tool facilitating such integration is [CLIP](https://openai.com/research/clip), a cutting-edge model capable of embedding both images and text into a shared semantic space. By doing so, CLIP enables the retrieval of relevant content across different modalities through similarity search.\n",
"\n",
"To illustrate, let's consider an application scenario where we aim to effectively analyze multi-modal data. In this example, we harness the capabilities of [OpenClip multimodal embeddings](https://python.langchain.com/docs/integrations/text_embedding/open_clip), which leverage CLIP's framework. With OpenClip, we can seamlessly embed textual descriptions alongside corresponding images, enabling comprehensive analysis and retrieval tasks. Whether it's identifying visually similar images based on textual queries or finding relevant text passages associated with specific visual content, OpenClip empowers users to explore and extract insights from multi-modal data with remarkable efficiency and accuracy."
"To illustrate, let's consider an application scenario where we aim to effectively analyze multi-modal data. In this example, we harness the capabilities of [OpenClip multimodal embeddings](/docs/integrations/text_embedding/open_clip), which leverage CLIP's framework. With OpenClip, we can seamlessly embed textual descriptions alongside corresponding images, enabling comprehensive analysis and retrieval tasks. Whether it's identifying visually similar images based on textual queries or finding relevant text passages associated with specific visual content, OpenClip empowers users to explore and extract insights from multi-modal data with remarkable efficiency and accuracy."
]
},
{
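A compact sketch of that shared embedding space using the `OpenCLIPEmbeddings` wrapper; the model and checkpoint names are common OpenCLIP choices rather than prescriptions, and the image path is hypothetical:

```python
from langchain_experimental.open_clip import OpenCLIPEmbeddings

clip_embd = OpenCLIPEmbeddings(model_name="ViT-B-32", checkpoint="laion2b_s34b_b79k")
text_vec = clip_embd.embed_documents(["a photo of a cat"])[0]  # text -> shared space
img_vec = clip_embd.embed_image(["path/to/cat.jpg"])[0]        # image -> shared space
```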

@ -307,7 +307,7 @@
"metadata": {},
"source": [
"### Using a Timescale Vector as a Retriever\n",
"After initializing a TimescaleVector store, you can use it as a [retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/)."
"After initializing a TimescaleVector store, you can use it as a [retriever](/docs/modules/data_connection/retrievers/)."
]
},
{
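In code that amounts to roughly the following, where `db` is assumed to be the TimescaleVector store initialized earlier and the query is illustrative:

```python
retriever = db.as_retriever(search_kwargs={"k": 3})
docs = retriever.get_relevant_documents("What subsystems were changed most?")
```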
@ -342,7 +342,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's look at an example of using Timescale Vector as a retriever with the [RetrievalQA chain](https://python.langchain.com/docs/use_cases/question_answering/vector_db_qa) and the [stuff chain](https://python.langchain.com/docs/modules/chains/document/stuff).\n",
"Let's look at an example of using Timescale Vector as a retriever with the RetrievalQA chain and the stuff documents chain.\n",
"\n",
"In this example, we'll ask the same query as above, but this time we'll pass the relevant documents returned from Timescale Vector to an LLM to use as context to answer our question.\n",
"\n",
@ -477,7 +477,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we'll define a metadata function to extract the relevant metadata from the JSON record. We'll pass this function to the JSONLoader. See the [JSON document loader docs](https://python.langchain.com/docs/modules/data_connection/document_loaders/json) for more details."
"Next, we'll define a metadata function to extract the relevant metadata from the JSON record. We'll pass this function to the JSONLoader. See the [JSON document loader docs](/docs/modules/data_connection/document_loaders/json) for more details."
]
},
{
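A hedged sketch of a metadata function passed to `JSONLoader` (requires the `jq` package); the file path, jq schema, and field names here are illustrative, not from the original notebook:

```python
from langchain_community.document_loaders import JSONLoader

def metadata_func(record: dict, metadata: dict) -> dict:
    # Copy selected fields from each JSON record onto the Document's metadata.
    metadata["author"] = record.get("author")
    metadata["date"] = record.get("date")
    return metadata

loader = JSONLoader(
    file_path="commits.json",   # hypothetical path
    jq_schema=".commits[]",     # hypothetical jq schema
    content_key="message",      # hypothetical content field
    metadata_func=metadata_func,
)
docs = loader.load()
```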
@ -1222,7 +1222,7 @@
"\n",
"Timescale Vector also supports the self-querying retriever functionality, which gives it the ability to query itself. Given a natural language query with a query statement and filters (single or composite), the retriever uses a query constructing LLM chain to write a SQL query and then applies it to the underlying PostgreSQL database in the Timescale Vector vectorstore.\n",
"\n",
"For more on self-querying, [see the docs](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query/)."
"For more on self-querying, [see the docs](/docs/modules/data_connection/retrievers/self_query/)."
]
},
{
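A sketch of the self-querying set-up, assuming `db` is the TimescaleVector store; the attribute names and descriptions are illustrative:

```python
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import OpenAI

metadata_field_info = [
    AttributeInfo(name="author", description="The commit author", type="string"),
]
retriever = SelfQueryRetriever.from_llm(
    OpenAI(temperature=0),
    db,                      # the TimescaleVector store
    "Git commit messages",   # one-line description of the document contents
    metadata_field_info,
)
retriever.get_relevant_documents("commits by Alice that mention bug fixes")
```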
@ -1728,7 +1728,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.9.1"
}
},
"nbformat": 4,

@ -388,7 +388,7 @@
"### As retriever\n",
"\n",
"To use this vector store as a\n",
"[LangChain retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/)\n",
"[LangChain retriever](/docs/modules/data_connection/retrievers/)\n",
"simply call the `as_retriever` function, which is a standard vector store\n",
"method:"
]
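That is, roughly (`vectorstore` stands for whatever store instance the notebook built above):

```python
retriever = vectorstore.as_retriever()
docs = retriever.get_relevant_documents("your query here")
```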

@ -338,7 +338,7 @@
"metadata": {},
"source": [
"### Hybrid search\n",
"Weaviate also offers hybrid search. See [`WeaviateHybridSearchRetriever`](https://python.langchain.com/docs/integrations/retrievers/weaviate-hybrid) for reference."
"Weaviate also offers hybrid search. See [`WeaviateHybridSearchRetriever`](/docs/integrations/retrievers/weaviate-hybrid) for reference."
]
},
{
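A sketch of the hybrid retriever referenced here, assuming a reachable Weaviate instance and the v3 `weaviate-client`; the index name is illustrative:

```python
import weaviate
from langchain_community.retrievers import WeaviateHybridSearchRetriever

client = weaviate.Client(url="http://localhost:8080")  # assumes a running instance
retriever = WeaviateHybridSearchRetriever(
    client=client,
    index_name="LangChain",
    text_key="text",
    attributes=[],
    create_schema_if_missing=True,
)
retriever.get_relevant_documents("the eiffel tower")
```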

@ -605,7 +605,7 @@
"\n",
"#### Using astream_log\n",
"\n",
"**Note** You can also use the [astream_log](https://python.langchain.com/docs/expression_language/interface#async-stream-intermediate-steps) API. This API produces a granular log of all events that occur during execution. The log format is based on the [JSONPatch](https://jsonpatch.com/) standard. It's granular, but requires effort to parse. For this reason, we created the `astream_events` API instead."
"**Note** You can also use the [astream_log](/docs/expression_language/interface#async-stream-intermediate-steps) API. This API produces a granular log of all events that occur during execution. The log format is based on the [JSONPatch](https://jsonpatch.com/) standard. It's granular, but requires effort to parse. For this reason, we created the `astream_events` API instead."
]
},
{
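A sketch of consuming that API inside a notebook cell, where `chain` is assumed to be any runnable (e.g. `prompt | model`) and the input key is an assumption that depends on the chain:

```python
# Each yielded item is a RunLogPatch made of JSONPatch-style operations.
async for patch in chain.astream_log({"topic": "parrots"}):
    print(patch)
```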

@ -6,7 +6,7 @@
"metadata": {},
"source": [
"# Logging to file\n",
"This example shows how to print logs to file. It shows how to use the `FileCallbackHandler`, which does the same thing as [`StdOutCallbackHandler`](https://python.langchain.com/docs/modules/callbacks/#get-started), but instead writes the output to file. It also uses the `loguru` library to log other outputs that are not captured by the handler."
"This example shows how to print logs to file. It shows how to use the `FileCallbackHandler`, which does the same thing as [`StdOutCallbackHandler`](/docs/modules/callbacks/#get-started), but instead writes the output to file. It also uses the `loguru` library to log other outputs that are not captured by the handler."
]
},
{
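A minimal sketch of the handler in use, assuming an `OPENAI_API_KEY` in the environment:

```python
from langchain.callbacks import FileCallbackHandler
from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI

handler = FileCallbackHandler("output.log")  # everything StdOut would print goes here
chain = LLMChain(llm=OpenAI(), prompt=PromptTemplate.from_template("1 + {number} = "))
chain.invoke({"number": 2}, config={"callbacks": [handler]})
```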

@ -153,7 +153,7 @@ chain = prompt | chain
chain.invoke({"number":2}, config=config)
```
**Note:** `chain = prompt | chain` is equivalent to `chain = LLMChain(llm=llm, prompt=prompt)` (check [LangChain Expression Language (LCEL) documentation](https://python.langchain.com/docs/expression_language/) for more details)
**Note:** `chain = prompt | chain` is equivalent to `chain = LLMChain(llm=llm, prompt=prompt)` (check [LangChain Expression Language (LCEL) documentation](/docs/expression_language/) for more details)
The `verbose` argument is available on most objects throughout the API (Chains, Models, Tools, Agents, etc.) as a constructor argument, e.g. `LLMChain(verbose=True)`, and it is equivalent to passing a `ConsoleCallbackHandler` to the `callbacks` argument of that object and all child objects. This is useful for debugging, as it will log all events to the console.
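A sketch of that equivalence, reusing the `llm` and `prompt` from above:

```python
from langchain.callbacks.tracers import ConsoleCallbackHandler
from langchain.chains import LLMChain

# These two invocations log the same events to the console:
chain = LLMChain(llm=llm, prompt=prompt, verbose=True)
chain.invoke({"number": 2}, config={"callbacks": [ConsoleCallbackHandler()]})
```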

@ -370,7 +370,7 @@
"\n",
"We can set up and use a [`Retriever`](/docs/modules/data_connection/retrievers/) to pull domain-specific knowledge for our chatbot. To show this, let's expand the simple chatbot we created above to be able to answer questions about LangSmith.\n",
"\n",
"We'll use [the LangSmith documentation](https://docs.smith.langchain.com/overview) as source material and store it in a vectorstore for later retrieval. Note that this example will gloss over some of the specifics around parsing and storing a data source - you can see more [in-depth documentation on creating retrieval systems here](https://python.langchain.com/docs/use_cases/question_answering/).\n",
"We'll use [the LangSmith documentation](https://docs.smith.langchain.com/overview) as source material and store it in a vectorstore for later retrieval. Note that this example will gloss over some of the specifics around parsing and storing a data source - you can see more [in-depth documentation on creating retrieval systems here](/docs/use_cases/question_answering/).\n",
"\n",
"Let's set up our retriever. First, we'll install some required deps:"
]

@ -80,7 +80,7 @@
"source": [
"## Creating a retriever\n",
"\n",
"We'll use [the LangSmith documentation](https://docs.smith.langchain.com/overview) as source material and store the content in a vectorstore for later retrieval. Note that this example will gloss over some of the specifics around parsing and storing a data source - you can see more [in-depth documentation on creating retrieval systems here](https://python.langchain.com/docs/use_cases/question_answering/).\n",
"We'll use [the LangSmith documentation](https://docs.smith.langchain.com/overview) as source material and store the content in a vectorstore for later retrieval. Note that this example will gloss over some of the specifics around parsing and storing a data source - you can see more [in-depth documentation on creating retrieval systems here](/docs/use_cases/question_answering/).\n",
"\n",
"Let's use a document loader to pull text from the docs:"
]
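The loader step referenced above is, roughly (a sketch, assuming `beautifulsoup4` is installed):

```python
from langchain_community.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://docs.smith.langchain.com/overview")
data = loader.load()
```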

@ -56,7 +56,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We'll follow the structure of [this notebook](https://github.com/cristobalcl/LearningLangChain/blob/master/notebooks/04%20-%20QA%20with%20code.ipynb) and employ [context aware code splitting](https://python.langchain.com/docs/integrations/document_loaders/source_code)."
"We'll follow the structure of [this notebook](https://github.com/cristobalcl/LearningLangChain/blob/master/notebooks/04%20-%20QA%20with%20code.ipynb) and employ [context aware code splitting](/docs/integrations/document_loaders/source_code)."
]
},
{
@ -98,7 +98,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We load the py code using [`LanguageParser`](https://python.langchain.com/docs/integrations/document_loaders/source_code), which will:\n",
"We load the py code using [`LanguageParser`](/docs/integrations/document_loaders/source_code), which will:\n",
"\n",
"* Keep top-level functions and classes together (into a single document)\n",
"* Put remaining code into a separate document\n",

@ -64,8 +64,8 @@
"\n",
"* The [output parser](/docs/modules/model_io/output_parsers/) documentation includes various parser examples for specific types (e.g., lists, datetime, enum, etc).\n",
"* LangChain [document loaders](/docs/modules/data_connection/document_loaders/) to load content from files. Please see list of [integrations](/docs/integrations/document_loaders).\n",
"* The experimental [Anthropic function calling](https://python.langchain.com/docs/integrations/chat/anthropic_functions) support provides similar functionality to Anthropic chat models.\n",
"* [LlamaCPP](https://python.langchain.com/docs/integrations/llms/llamacpp#grammars) natively supports constrained decoding using custom grammars, making it easy to output structured content using local LLMs \n",
"* The experimental [Anthropic function calling](/docs/integrations/chat/anthropic_functions) support provides similar functionality to Anthropic chat models.\n",
"* [LlamaCPP](/docs/integrations/llms/llamacpp#grammars) natively supports constrained decoding using custom grammars, making it easy to output structured content using local LLMs \n",
"* [JSONFormer](/docs/integrations/llms/jsonformer_experimental) offers another way for structured decoding of a subset of the JSON Schema.\n",
"* [Kor](https://eyurtsev.github.io/kor/) is another library for extraction where schema and examples can be provided to the LLM. Kor is optimized to work for a parsing approach.\n",
"* [OpenAI's function and tool calling](https://platform.openai.com/docs/guides/function-calling)\n",

@ -577,7 +577,7 @@
"id": "8edb9976",
"metadata": {},
"source": [
"To address this, we can adjust the initial Cypher prompt of the QA chain. This involves adding guidance to the LLM on how users can refer to specific platforms, such as PS5 in our case. We achieve this using the LangChain [PromptTemplate](https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/), creating a modified initial prompt. This modified prompt is then supplied as an argument to our refined Memgraph-LangChain instance."
"To address this, we can adjust the initial Cypher prompt of the QA chain. This involves adding guidance to the LLM on how users can refer to specific platforms, such as PS5 in our case. We achieve this using the LangChain [PromptTemplate](/docs/modules/model_io/prompts/), creating a modified initial prompt. This modified prompt is then supplied as an argument to our refined Memgraph-LangChain instance."
]
},
{
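A sketch of the modified prompt described here; the template text is illustrative, and `{schema}` / `{question}` are the variables the QA chain fills in:

```python
from langchain_core.prompts import PromptTemplate

CYPHER_GENERATION_TEMPLATE = """Task: Generate a Cypher statement to query a graph database.
If a user asks about PS5, Play Station 5 or PS 5, that is the platform called PlayStation 5.
Schema:
{schema}
Question: {question}"""

cypher_prompt = PromptTemplate(
    input_variables=["schema", "question"],
    template=CYPHER_GENERATION_TEMPLATE,
)
# ...then pass cypher_prompt=cypher_prompt when constructing the QA chain.
```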
@ -686,7 +686,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
"version": "3.9.1"
}
},
"nbformat": 4,

@ -195,7 +195,7 @@
"![graph_chain.webp](../../../static/img/graph_chain.webp)\n",
"\n",
"\n",
"LangChain comes with a built-in chain for this workflow that is designed to work with Neo4j: [GraphCypherQAChain](https://python.langchain.com/docs/use_cases/graph/graph_cypher_qa)"
"LangChain comes with a built-in chain for this workflow that is designed to work with Neo4j: [GraphCypherQAChain](/docs/use_cases/graph/integrations/graph_cypher_qa)"
]
},
{
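A minimal sketch of the built-in chain named above, assuming a reachable Neo4j instance (the credentials are placeholders) and an OpenAI key in the environment:

```python
from langchain.chains import GraphCypherQAChain
from langchain_community.graphs import Neo4jGraph
from langchain_openai import ChatOpenAI

graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="password")
chain = GraphCypherQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)
chain.invoke({"query": "Who played in Casino?"})
```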
@ -329,7 +329,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
"version": "3.9.1"
}
},
"nbformat": 4,

@ -864,7 +864,7 @@
"id": "c5470a79-258a-4108-8ceb-dfe8180160ca",
"metadata": {},
"source": [
"For more on how to stream intermediate steps check out the [LCEL Interface](https://python.langchain.com/docs/expression_language/interface#async-stream-intermediate-steps) docs."
"For more on how to stream intermediate steps check out the [LCEL Interface](/docs/expression_language/interface#async-stream-intermediate-steps) docs."
]
}
],

@ -393,7 +393,7 @@
"source": [
"### Going deeper\n",
"\n",
"* You can use the [metadata tagger](https://python.langchain.com/docs/integrations/document_transformers/openai_metadata_tagger) document transformer to extract metadata from a LangChain `Document`. \n",
"* You can use the [metadata tagger](/docs/integrations/document_transformers/openai_metadata_tagger) document transformer to extract metadata from a LangChain `Document`. \n",
"* This covers the same basic functionality as the tagging chain, only applied to a LangChain `Document`."
]
}
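A sketch of the tagger transformer mentioned above; the schema is illustrative and an OpenAI key is assumed:

```python
from langchain_community.document_transformers.openai_functions import create_metadata_tagger
from langchain_core.documents import Document
from langchain_openai import ChatOpenAI

schema = {
    "properties": {"sentiment": {"type": "string"}, "tone": {"type": "string"}},
    "required": ["sentiment"],
}
tagger = create_metadata_tagger(schema, ChatOpenAI(model="gpt-3.5-turbo"))
docs = tagger.transform_documents([Document(page_content="I love LangChain!")])
```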

@ -460,7 +460,7 @@
"\n",
"Related to scraping, we may want to answer specific questions using searched content.\n",
"\n",
"We can automate the process of [web research](https://blog.langchain.dev/automating-web-research/) using a retriever, such as the `WebResearchRetriever` ([docs](https://python.langchain.com/docs/modules/data_connection/retrievers/web_research)).\n",
"We can automate the process of [web research](https://blog.langchain.dev/automating-web-research/) using a retriever, such as the `WebResearchRetriever`.\n",
"\n",
"![Image description](../../static/img/web_research.png)\n",
"\n",
@ -635,9 +635,7 @@
"# Call the Actor to obtain text from the crawled webpages\n",
"loader = apify.call_actor(\n",
" actor_id=\"apify/website-content-crawler\",\n",
" run_input={\n",
" \"startUrls\": [{\"url\": \"https://python.langchain.com/docs/integrations/chat/\"}]\n",
" },\n",
" run_input={\"startUrls\": [{\"url\": \"/docs/integrations/chat/\"}]},\n",
" dataset_mapping_function=lambda item: Document(\n",
" page_content=item[\"text\"] or \"\", metadata={\"source\": item[\"url\"]}\n",
" ),\n",
