docs: reorg and visual refresh (#19765)

- put use cases in main sidebar
- move modules to own sidebar, rename components
- cleanup lcel section
- cleanup guides
- update font, cell highlighting

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>

@ -34,34 +34,40 @@ conda install langchain -c conda-forge
## 🤔 What is LangChain?
**LangChain** is a framework for developing applications powered by language models. It enables applications that:
- **Are context-aware**: connect a language model to sources of context (prompt instructions, few shot examples, content to ground its response in, etc.)
- **Reason**: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.)
This framework consists of several parts.
- **LangChain Libraries**: The Python and JavaScript libraries. Contains interfaces and integrations for a myriad of components, a basic runtime for combining these components into chains and agents, and off-the-shelf implementations of chains and agents.
- **[LangChain Templates](templates)**: A collection of easily deployable reference architectures for a wide variety of tasks.
- **[LangServe](https://github.com/langchain-ai/langserve)**: A library for deploying LangChain chains as a REST API.
- **[LangSmith](https://smith.langchain.com)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
- **[LangGraph](https://python.langchain.com/docs/langgraph)**: LangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain. It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner.
The LangChain libraries themselves are made up of several different packages.
- **[`langchain-core`](libs/core)**: Base abstractions and LangChain Expression Language.
- **[`langchain-community`](libs/community)**: Third party integrations.
- **[`langchain`](libs/langchain)**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
**LangChain** is a framework for developing applications powered by large language models (LLMs).
For these applications, LangChain simplifies the entire application lifecycle:
- **Open-source libraries**: Build your applications using LangChain's [modular building blocks](https://python.langchain.com/docs/expression_language/) and [components](https://python.langchain.com/docs/modules/). Integrate with hundreds of [third-party providers](https://python.langchain.com/docs/integrations/platforms/).
- **Productionization**: Inspect, monitor, and evaluate your apps with [LangSmith](https://python.langchain.com/docs/langsmith/) so that you can constantly optimize and deploy with confidence.
- **Deployment**: Turn any chain into a REST API with [LangServe](https://python.langchain.com/docs/langserve).
### Open-source libraries
- **`langchain-core`**: Base abstractions and LangChain Expression Language.
- **`langchain-community`**: Third party integrations.
- Some integrations have been further split into **partner packages** that only rely on **`langchain-core`**. Examples include **`langchain_openai`** and **`langchain_anthropic`**.
- **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture (a short sketch of how these packages fit together follows this list).
- **[LangGraph](https://python.langchain.com/docs/langgraph)**: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
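A minimal sketch of how these packages layer (imports are illustrative; assumes the relevant packages are installed):

```python
# langchain-core: base abstractions and LangChain Expression Language
from langchain_core.prompts import ChatPromptTemplate

# partner package: a concrete integration that relies only on langchain-core
from langchain_openai import ChatOpenAI

# langchain: chains and agents that make up an app's cognitive architecture
from langchain.agents import AgentExecutor
```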
### Productionization
- **[LangSmith](https://python.langchain.com/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
### Deployment
- **[LangServe](https://python.langchain.com/docs/langserve)**: A library for deploying LangChain chains as REST APIs.
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack.svg "LangChain Architecture Overview")
## 🧱 What can you build with LangChain?
**❓ Retrieval augmented generation**
**❓ Question answering with RAG**
- [Documentation](https://python.langchain.com/docs/use_cases/question_answering/)
- End-to-end Example: [Chat LangChain](https://chat.langchain.com) and [repo](https://github.com/langchain-ai/chat-langchain)
**💬 Analyzing structured data**
**🧱 Extracting structured output**
- [Documentation](https://python.langchain.com/docs/use_cases/qa_structured/sql)
- End-to-end Example: [SQL Llama2 Template](https://github.com/langchain-ai/langchain/tree/master/templates/sql-llama2)
- [Documentation](https://python.langchain.com/docs/use_cases/extraction/)
- End-to-end Example: [LangChain Extract](https://github.com/langchain-ai/langchain-extract/)
**🤖 Chatbots**
@ -72,34 +78,51 @@ And much more! Head to the [Use cases](https://python.langchain.com/docs/use_cas
## 🚀 How does LangChain help?
The main value props of the LangChain libraries are:
1. **Components**: composable tools and integrations for working with language models. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not
1. **Components**: composable building blocks, tools, and integrations for working with language models. Components are modular and easy to use, whether you are using the rest of the LangChain framework or not
2. **Off-the-shelf chains**: built-in assemblages of components for accomplishing higher-level tasks
Off-the-shelf chains make it easy to get started. Components make it easy to customize existing chains and build new ones.
## LangChain Expression Language (LCEL)
LCEL is the foundation of many of LangChain's components, and is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains.
- **[Overview](https://python.langchain.com/docs/expression_language/)**: LCEL and its benefits
- **[Interface](https://python.langchain.com/docs/expression_language/interface)**: The standard interface for LCEL objects
- **[Primitives](https://python.langchain.com/docs/expression_language/primitives)**: More on the primitives LCEL includes
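For example, a minimal LCEL chain composes a prompt, a model, and an output parser (a sketch assuming `langchain-openai` is installed and an OpenAI API key is configured):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
chain = prompt | ChatOpenAI() | StrOutputParser()

# Every composed chain exposes the same Runnable interface: invoke, stream, batch
chain.invoke({"text": "LangChain is a framework for building LLM applications."})
```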
## Components
Components fall into the following **modules**:
**📃 Model I/O:**
This includes prompt management, prompt optimization, a generic interface for all LLMs, and common utilities for working with LLMs.
This includes [prompt management](https://python.langchain.com/docs/modules/model_io/prompts/), [prompt optimization](https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/), a generic interface for [chat models](https://python.langchain.com/docs/modules/model_io/chat/) and [LLMs](https://python.langchain.com/docs/modules/model_io/llms/), and common utilities for working with [model outputs](https://python.langchain.com/docs/modules/model_io/output_parsers/).
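For instance, a prompt template formats user input into messages, and an output parser turns a model reply back into a plain string (a small sketch using only `langchain-core`; the message content is made up):

```python
from langchain_core.messages import AIMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [("system", "You are a helpful assistant."), ("human", "{question}")]
)
messages = prompt.format_messages(question="What is LangChain?")

# Output parsers post-process model outputs into plain values
text = StrOutputParser().invoke(AIMessage(content="LangChain is a framework..."))
```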
**📚 Retrieval:**
Data Augmented Generation involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Examples include summarization of long pieces of text and question/answering over specific data sources.
Retrieval Augmented Generation involves [loading data](https://python.langchain.com/docs/modules/data_connection/document_loaders/) from a variety of sources, [preparing it](https://python.langchain.com/docs/modules/data_connection/document_transformers/), [then retrieving it](https://python.langchain.com/docs/modules/data_connection/retrievers/) for use in the generation step.
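A compressed sketch of that flow, reusing the FAISS example from the notebooks further down (assumes `faiss-cpu` and `langchain-openai` are installed and an OpenAI API key is set):

```python
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

# Load and prepare: here the "corpus" is a single in-memory string
vectorstore = FAISS.from_texts(
    ["harrison worked at kensho"], embedding=OpenAIEmbeddings()
)

# Retrieve: fetch the documents most relevant to a query for the generation step
retriever = vectorstore.as_retriever()
docs = retriever.invoke("where did harrison work?")
```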
**🤖 Agents:**
Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents.
Agents allow an LLM autonomy over how a task is accomplished. Agents make decisions about which Actions to take, then take that Action, observe the result, and repeat until the task is complete. LangChain provides a [standard interface for agents](https://python.langchain.com/docs/modules/agents/), a [selection of agents](https://python.langchain.com/docs/modules/agents/agent_types/) to choose from, and examples of end-to-end agents.
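A small sketch of the pieces involved (mirroring the agent notebook further down in this diff): a tool is just a decorated function, and `AgentExecutor` runs the decide/act/observe loop:

```python
from langchain.agents import AgentExecutor, tool


@tool
def search(query: str) -> str:
    """Search things about current events."""
    return "32 degrees"


# Given an `agent` runnable (prompt | model | output parser, built as in the
# notebook below), the executor loops: pick an Action, run the tool, feed the
# Observation back in, and repeat until the agent emits a final answer.
# agent_executor = AgentExecutor(agent=agent, tools=[search], verbose=True)
```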
## 📖 Documentation
Please see [here](https://python.langchain.com) for full documentation, which includes:
- [Getting started](https://python.langchain.com/docs/get_started/introduction): installation, setting up the environment, simple examples
- Overview of the [interfaces](https://python.langchain.com/docs/expression_language/), [modules](https://python.langchain.com/docs/modules/), and [integrations](https://python.langchain.com/docs/integrations/providers)
- [Use case](https://python.langchain.com/docs/use_cases/qa_structured/sql) walkthroughs and best practice [guides](https://python.langchain.com/docs/guides/adapters/openai)
- [LangSmith](https://python.langchain.com/docs/langsmith/), [LangServe](https://python.langchain.com/docs/langserve), and [LangChain Template](https://python.langchain.com/docs/templates/) overviews
- [Reference](https://api.python.langchain.com): full API docs
- [Use case](https://python.langchain.com/docs/use_cases/) walkthroughs and best practice [guides](https://python.langchain.com/docs/guides/)
- Overviews of the [interfaces](https://python.langchain.com/docs/expression_language/), [components](https://python.langchain.com/docs/modules/), and [integrations](https://python.langchain.com/docs/integrations/providers)
You can also check out the full [API Reference docs](https://api.python.langchain.com).
## 🌐 Ecosystem
- [🦜🛠️ LangSmith](https://python.langchain.com/docs/langsmith/): Tracing and evaluating your language model applications and intelligent agents to help you move from prototype to production.
- [🦜🕸️ LangGraph](https://python.langchain.com/docs/langgraph): Creating stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain primitives.
- [🦜🏓 LangServe](https://python.langchain.com/docs/langserve): Deploying LangChain runnables and chains as REST APIs.
- [LangChain Templates](https://python.langchain.com/docs/templates/): Example applications hosted with LangServe.
## 💁 Contributing

@ -38,9 +38,9 @@
"\n",
"To run locally, we use Ollama.ai. \n",
"\n",
"See [here](https://python.langchain.com/docs/integrations/chat/ollama) for details on installation and setup.\n",
"See [here](/docs/integrations/chat/ollama) for details on installation and setup.\n",
"\n",
"Also, see [here](https://python.langchain.com/docs/guides/local_llms) for our full guide on local LLMs.\n",
"Also, see [here](/docs/guides/development/local_llms) for our full guide on local LLMs.\n",
" \n",
"To use an external API, which is not private, we can use Replicate."
]

@ -191,15 +191,15 @@
"source": [
"## Multi-vector retriever\n",
"\n",
"Use [multi-vector-retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/multi_vector#summary).\n",
"Use [multi-vector-retriever](/docs/modules/data_connection/retrievers/multi_vector#summary).\n",
"\n",
"Summaries are used to retrieve raw tables and / or raw chunks of text.\n",
"\n",
"### Text and Table summaries\n",
"\n",
"Here, we use ollama.ai to run LLaMA2 locally. \n",
"Here, we use Ollama to run LLaMA2 locally. \n",
"\n",
"See details on installation [here](https://python.langchain.com/docs/guides/local_llms)."
"See details on installation [here](/docs/guides/development/local_llms)."
]
},
{

@ -21,10 +21,10 @@
### Featured courses on Deeplearning.AI
- [LangChain for LLM Application Development](https://learn.deeplearning.ai/langchain)
- [LangChain Chat with Your Data](https://learn.deeplearning.ai/langchain-chat-with-your-data)
- [Functions, Tools and Agents with LangChain](https://learn.deeplearning.ai/functions-tools-agents-langchain)
- [Build LLM Apps with LangChain.js](https://learn.deeplearning.ai/courses/build-llm-apps-with-langchain-js)
- [LangChain for LLM Application Development](https://www.deeplearning.ai/short-courses/langchain-for-llm-application-development/)
- [LangChain Chat with Your Data](https://www.deeplearning.ai/short-courses/langchain-chat-with-your-data/)
- [Functions, Tools and Agents with LangChain](https://www.deeplearning.ai/short-courses/functions-tools-agents-langchain/)
- [Build LLM Apps with LangChain.js](https://www.deeplearning.ai/short-courses/build-llm-apps-with-langchain-js/)
### Online courses

@ -0,0 +1,2 @@
label: 'Documentation'
position: 3

@ -0,0 +1,138 @@
---
sidebar_label: "Style guide"
---
# LangChain Documentation Style Guide
## Introduction
As LangChain continues to grow, the surface area of documentation required to cover it continues to grow too.
This page provides guidelines for anyone writing documentation for LangChain, as well as some of our philosophies around
organization and structure.
## Philosophy
LangChain's documentation aspires to follow the [Diataxis framework](https://diataxis.fr).
Under this framework, all documentation falls under one of four categories:
- **Tutorials**: Lessons that take the reader by the hand through a series of conceptual steps to complete a project.
- An example of this is our [LCEL streaming guide](/docs/expression_language/streaming).
- Our guide on [custom components](/docs/modules/model_io/chat/custom_chat_model) is another example.
- **How-to guides**: Guides that take the reader through the steps required to solve a real-world problem.
- The clearest examples of this are our [Use case](/docs/use_cases/) quickstart pages.
- **Reference**: Technical descriptions of the machinery and how to operate it.
- Our [Runnable interface](/docs/expression_language/interface) page is an example of this.
- The [API reference pages](https://api.python.langchain.com/) are another.
- **Explanation**: Explanations that clarify and illuminate a particular topic.
- The [LCEL primitives pages](/docs/expression_language/primitives/sequence) are an example of this.
Each category serves a distinct purpose and requires a specific approach to writing and structuring the content.
## Taxonomy
Keeping the above in mind, we have sorted LangChain's docs into categories. It is helpful to think in these terms
when contributing new documentation:
### Getting started
The [getting started section](/docs/get_started/introduction) includes a high-level introduction to LangChain, a quickstart that
tours LangChain's various features, and logistical instructions around installation and project setup.
It contains elements of **How-to guides** and **Explanations**.
### Use cases
[Use cases](/docs/use_cases/) are guides that are meant to show how to use LangChain to accomplish a specific task (RAG, information extraction, etc.).
The quickstarts should be good entrypoints for first-time LangChain developers who prefer to learn by getting something practical prototyped,
then taking the pieces apart retrospectively. These should mirror what LangChain is good at.
The quickstart pages here should fit the **How-to guide** category, with the other pages intended to be **Explanations** of more
in-depth concepts and strategies that accompany the main happy paths.
:::note
The sections below are listed roughly in order of increasing abstraction.
:::
### Expression Language
[LangChain Expression Language (LCEL)](/docs/expression_language/) is the fundamental way that most LangChain components fit together, and this section is designed to teach
developers how to use it to build with LangChain's primitives effectively.
This section should contain **Tutorials** that teach how to stream and use LCEL primitives for more abstract tasks, **Explanations** of specific behaviors,
and some **References** for how to use different methods in the Runnable interface.
### Components
The [components section](/docs/modules) covers concepts one level of abstraction higher than LCEL.
Abstract base classes like `BaseChatModel` and `BaseRetriever` should be covered here, as well as core implementations of these base classes,
such as `ChatPromptTemplate` and `RecursiveCharacterTextSplitter`. Customization guides belong here too.
This section should contain mostly conceptual **Tutorials**, **References**, and **Explanations** of the components it covers.
:::note
As a general rule of thumb, everything covered in the `Expression Language` and `Components` sections (with the exception of the `Composition` section of components) should
cover only components that exist in `langchain_core`.
:::
### Integrations
The [integrations](/docs/integrations/platforms/) are specific implementations of components. These often involve third-party APIs and services.
If this is the case, as a general rule, these are maintained by the third-party partner.
This section should contain mostly **Explanations** and **References**, though the actual content here is more flexible than other sections and more at the
discretion of the third-party provider.
:::note
Concepts covered in `Integrations` should generally exist in `langchain_community` or specific partner packages.
:::
### Guides and Ecosystem
The [Guides](/docs/guides) and [Ecosystem](/docs/langsmith/) sections should contain guides that address higher-level problems than the sections above.
This includes, but is not limited to, considerations around productionization and development workflows.
These should contain mostly **How-to guides**, **Explanations**, and **Tutorials**.
### API references
LangChain's API references should act as **References** (as the name implies), with some **Explanation**-focused content as well.
## Sample developer journey
We have set up our docs to assist a developer new to LangChain. Let's walk through the intended path:
- The developer lands on https://python.langchain.com, and reads through the introduction and the diagram.
- If they are just curious, they may be drawn to the [Quickstart](/docs/get_started/quickstart) to get a high-level tour of what LangChain contains.
- If they have a specific task in mind that they want to accomplish, they will be drawn to the Use-Case section. The use-case should provide a good, concrete hook that shows the value LangChain can provide them and be a good entrypoint to the framework.
- They can then move to learn more about the fundamentals of LangChain through the Expression Language sections.
- Next, they can learn about LangChain's various components and integrations.
- Finally, they can get additional knowledge through the Guides.
This is only an ideal, of course; sections will inevitably reference lower- or higher-level concepts that are documented in other sections.
## Guidelines
Here are some other guidelines you should think about when writing and organizing documentation.
### Linking to other sections
Because sections of the docs do not exist in a vacuum, it is important to link to other sections as often as possible
to allow a developer to learn more about an unfamiliar topic inline.
This includes linking to the API references as well as conceptual sections!
### Conciseness
In general, take a less-is-more approach. If a section with a good explanation of a concept already exists, you should link to it rather than
re-explain it, unless the concept you are documenting presents some new wrinkle.
Be concise, including in code samples.
### General style
- Use active voice and present tense whenever possible.
- Use examples and code snippets to illustrate concepts and usage.
- Use appropriate header levels (`#`, `##`, `###`, etc.) to organize the content hierarchically.
- Use bullet points and numbered lists to break down information into easily digestible chunks.
- Use tables (especially for **Reference** sections) and diagrams often to present information visually.
- Include the table of contents for longer documentation pages to help readers navigate the content, but hide it for shorter pages.

@ -1,7 +1,4 @@
---
sidebar_position: 3
---
# Contribute Documentation
# Technical logistics
LangChain documentation consists of two components:

@ -12,7 +12,7 @@ As an open-source project in a rapidly developing field, we are extremely open t
There are many ways to contribute to LangChain. Here are some common ways people contribute:
- [**Documentation**](./documentation.mdx): Help improve our docs, including this one!
- [**Documentation**](/docs/contributing/documentation/style_guide): Help improve our docs, including this one!
- [**Code**](./code.mdx): Help us write code, fix bugs, or improve our infrastructure.
- [**Integrations**](integrations.mdx): Help us integrate with your favorite vendors and tools.
- [**Discussions**](https://github.com/langchain-ai/langchain/discussions): Help answer usage questions and discuss issues with users.

@ -41,7 +41,7 @@ There are other files in the root directory level, but their presence should be
The `/docs` directory contains the content for the documentation that is shown
at https://python.langchain.com/ and the associated API Reference https://api.python.langchain.com/en/latest/langchain_api_reference.html.
See the [documentation](./documentation) guidelines to learn how to contribute to the documentation.
See the [documentation](/docs/contributing/documentation/style_guide) guidelines to learn how to contribute to the documentation.
## Code

@ -1,205 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e89f490d",
"metadata": {},
"source": [
"# Agents\n",
"\n",
"You can pass a Runnable into an agent. Make sure you have `langchainhub` installed: `pip install langchainhub`"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "af4381de",
"metadata": {},
"outputs": [],
"source": [
"from langchain import hub\n",
"from langchain.agents import AgentExecutor, tool\n",
"from langchain.agents.output_parsers import XMLAgentOutputParser\n",
"from langchain_community.chat_models import ChatAnthropic"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "24cc8134",
"metadata": {},
"outputs": [],
"source": [
"model = ChatAnthropic(model=\"claude-2\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "67c0b0e4",
"metadata": {},
"outputs": [],
"source": [
"@tool\n",
"def search(query: str) -> str:\n",
" \"\"\"Search things about current events.\"\"\"\n",
" return \"32 degrees\""
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "7203b101",
"metadata": {},
"outputs": [],
"source": [
"tool_list = [search]"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "b68e756d",
"metadata": {},
"outputs": [],
"source": [
"# Get the prompt to use - you can modify this!\n",
"prompt = hub.pull(\"hwchase17/xml-agent-convo\")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "61ab3e9a",
"metadata": {},
"outputs": [],
"source": [
"# Logic for going from intermediate steps to a string to pass into model\n",
"# This is pretty tied to the prompt\n",
"def convert_intermediate_steps(intermediate_steps):\n",
" log = \"\"\n",
" for action, observation in intermediate_steps:\n",
" log += (\n",
" f\"<tool>{action.tool}</tool><tool_input>{action.tool_input}\"\n",
" f\"</tool_input><observation>{observation}</observation>\"\n",
" )\n",
" return log\n",
"\n",
"\n",
"# Logic for converting tools to string to go in prompt\n",
"def convert_tools(tools):\n",
" return \"\\n\".join([f\"{tool.name}: {tool.description}\" for tool in tools])"
]
},
{
"cell_type": "markdown",
"id": "260f5988",
"metadata": {},
"source": [
"Building an agent from a runnable usually involves a few things:\n",
"\n",
"1. Data processing for the intermediate steps. These need to be represented in a way that the language model can recognize them. This should be pretty tightly coupled to the instructions in the prompt\n",
"\n",
"2. The prompt itself\n",
"\n",
"3. The model, complete with stop tokens if needed\n",
"\n",
"4. The output parser - should be in sync with how the prompt specifies things to be formatted."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "e92f1d6f",
"metadata": {},
"outputs": [],
"source": [
"agent = (\n",
" {\n",
" \"input\": lambda x: x[\"input\"],\n",
" \"agent_scratchpad\": lambda x: convert_intermediate_steps(\n",
" x[\"intermediate_steps\"]\n",
" ),\n",
" }\n",
" | prompt.partial(tools=convert_tools(tool_list))\n",
" | model.bind(stop=[\"</tool_input>\", \"</final_answer>\"])\n",
" | XMLAgentOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "6ce6ec7a",
"metadata": {},
"outputs": [],
"source": [
"agent_executor = AgentExecutor(agent=agent, tools=tool_list, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "fb5cb2e3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m <tool>search</tool><tool_input>weather in New York\u001b[0m\u001b[36;1m\u001b[1;3m32 degrees\u001b[0m\u001b[32;1m\u001b[1;3m <tool>search</tool>\n",
"<tool_input>weather in New York\u001b[0m\u001b[36;1m\u001b[1;3m32 degrees\u001b[0m\u001b[32;1m\u001b[1;3m <final_answer>The weather in New York is 32 degrees\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'input': 'whats the weather in New york?',\n",
" 'output': 'The weather in New York is 32 degrees'}"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.invoke({\"input\": \"whats the weather in New york?\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bce86dd8",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -1,5 +1,15 @@
{
"cells": [
{
"cell_type": "raw",
"id": "1e997ab7",
"metadata": {},
"source": [
"---\n",
"sidebar_class_name: hidden\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "f09fd305",

@ -1,163 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "cf4fb76d-c534-485b-8b51-a0714ee3b82e",
"metadata": {},
"source": [
"# Routing by semantic similarity\n",
"\n",
"With LCEL you can easily add [custom routing logic](/docs/expression_language/how_to/routing#using-a-custom-function) to your chain to dynamically determine the chain logic based on user input. All you need to do is define a function that given an input returns a `Runnable`.\n",
"\n",
"One especially useful technique is to use embeddings to route a query to the most relevant prompt. Here's a very simple example."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b793a0aa",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-core langchain langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "eef9020a-5f7c-4291-98eb-fa73f17d4b92",
"metadata": {},
"outputs": [],
"source": [
"from langchain.utils.math import cosine_similarity\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
"\n",
"physics_template = \"\"\"You are a very smart physics professor. \\\n",
"You are great at answering questions about physics in a concise and easy to understand manner. \\\n",
"When you don't know the answer to a question you admit that you don't know.\n",
"\n",
"Here is a question:\n",
"{query}\"\"\"\n",
"\n",
"math_template = \"\"\"You are a very good mathematician. You are great at answering math questions. \\\n",
"You are so good because you are able to break down hard problems into their component parts, \\\n",
"answer the component parts, and then put them together to answer the broader question.\n",
"\n",
"Here is a question:\n",
"{query}\"\"\"\n",
"\n",
"embeddings = OpenAIEmbeddings()\n",
"prompt_templates = [physics_template, math_template]\n",
"prompt_embeddings = embeddings.embed_documents(prompt_templates)\n",
"\n",
"\n",
"def prompt_router(input):\n",
" query_embedding = embeddings.embed_query(input[\"query\"])\n",
" similarity = cosine_similarity([query_embedding], prompt_embeddings)[0]\n",
" most_similar = prompt_templates[similarity.argmax()]\n",
" print(\"Using MATH\" if most_similar == math_template else \"Using PHYSICS\")\n",
" return PromptTemplate.from_template(most_similar)\n",
"\n",
"\n",
"chain = (\n",
" {\"query\": RunnablePassthrough()}\n",
" | RunnableLambda(prompt_router)\n",
" | ChatOpenAI()\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "4d22b0f3-24f2-4a47-9440-065b57ebcdbd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Using PHYSICS\n",
"A black hole is a region in space where gravity is extremely strong, so strong that nothing, not even light, can escape its gravitational pull. It is formed when a massive star collapses under its own gravity during a supernova explosion. The collapse causes an incredibly dense mass to be concentrated in a small volume, creating a gravitational field that is so intense that it warps space and time. Black holes have a boundary called the event horizon, which marks the point of no return for anything that gets too close. Beyond the event horizon, the gravitational pull is so strong that even light cannot escape, hence the name \"black hole.\" While we have a good understanding of black holes, there is still much to learn, especially about what happens inside them.\n"
]
}
],
"source": [
"print(chain.invoke(\"What's a black hole\"))"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "f261910d-1de1-4a01-8c8a-308db02b81de",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Using MATH\n",
"Thank you for your kind words! I will do my best to break down the concept of a path integral for you.\n",
"\n",
"In mathematics and physics, a path integral is a mathematical tool used to calculate the probability amplitude or wave function of a particle or system of particles. It was introduced by Richard Feynman and is an integral over all possible paths that a particle can take to go from an initial state to a final state.\n",
"\n",
"To understand the concept better, let's consider an example. Suppose we have a particle moving from point A to point B in space. Classically, we would describe this particle's motion using a definite trajectory, but in quantum mechanics, particles can simultaneously take multiple paths from A to B.\n",
"\n",
"The path integral formalism considers all possible paths that the particle could take and assigns a probability amplitude to each path. These probability amplitudes are then added up, taking into account the interference effects between different paths.\n",
"\n",
"To calculate a path integral, we need to define an action, which is a mathematical function that describes the behavior of the system. The action is usually expressed in terms of the particle's position, velocity, and time.\n",
"\n",
"Once we have the action, we can write down the path integral as an integral over all possible paths. Each path is weighted by a factor determined by the action and the principle of least action, which states that a particle takes a path that minimizes the action.\n",
"\n",
"Mathematically, the path integral is expressed as:\n",
"\n",
"∫ e^(iS/ħ) D[x(t)]\n",
"\n",
"Here, S is the action, ħ is the reduced Planck's constant, and D[x(t)] represents the integration over all possible paths x(t) of the particle.\n",
"\n",
"By evaluating this integral, we can obtain the probability amplitude for the particle to go from the initial state to the final state. The absolute square of this amplitude gives us the probability of finding the particle in a particular state.\n",
"\n",
"Path integrals have proven to be a powerful tool in various areas of physics, including quantum mechanics, quantum field theory, and statistical mechanics. They allow us to study complex systems and calculate probabilities that are difficult to obtain using other methods.\n",
"\n",
"I hope this explanation helps you understand the concept of a path integral. If you have any further questions, feel free to ask!\n"
]
}
],
"source": [
"print(chain.invoke(\"What's a path integral\"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f0c1732a-01ca-4d10-977c-29ed7480972b",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -1,11 +0,0 @@
---
sidebar_position: 3
---
# Cookbook
import DocCardList from "@theme/DocCardList";
Example code for accomplishing common tasks with the LangChain Expression Language (LCEL). These examples show how to compose different `Runnable` components (the core LCEL interface) to achieve various tasks. If you're just getting acquainted with LCEL, the [Prompt + LLM](/docs/expression_language/cookbook/prompt_llm_parser) page is a good place to start.
<DocCardList />

@ -1,194 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "5062941a",
"metadata": {},
"source": [
"# Adding memory\n",
"\n",
"This shows how to add memory to an arbitrary chain. Right now, you can use the memory classes but need to hook it up manually"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "18753dee",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "7998efd8",
"metadata": {},
"outputs": [],
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI()\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"You are a helpful chatbot\"),\n",
" MessagesPlaceholder(variable_name=\"history\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "fa0087f3",
"metadata": {},
"outputs": [],
"source": [
"memory = ConversationBufferMemory(return_messages=True)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "06b531ae",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'history': []}"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"memory.load_memory_variables({})"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "d9437af6",
"metadata": {},
"outputs": [],
"source": [
"chain = (\n",
" RunnablePassthrough.assign(\n",
" history=RunnableLambda(memory.load_memory_variables) | itemgetter(\"history\")\n",
" )\n",
" | prompt\n",
" | model\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "bed1e260",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Hello Bob! How can I assist you today?', additional_kwargs={}, example=False)"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"inputs = {\"input\": \"hi im bob\"}\n",
"response = chain.invoke(inputs)\n",
"response"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "890475b4",
"metadata": {},
"outputs": [],
"source": [
"memory.save_context(inputs, {\"output\": response.content})"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "e8fcb77f",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'history': [HumanMessage(content='hi im bob', additional_kwargs={}, example=False),\n",
" AIMessage(content='Hello Bob! How can I assist you today?', additional_kwargs={}, example=False)]}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"memory.load_memory_variables({})"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "d837d5c3",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Your name is Bob.', additional_kwargs={}, example=False)"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"inputs = {\"input\": \"whats my name\"}\n",
"response = chain.invoke(inputs)\n",
"response"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -34,7 +34,7 @@
"from langchain.agents import AgentExecutor, load_tools\n",
"from langchain.agents.format_scratchpad import format_to_openai_function_messages\n",
"from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser\n",
"from langchain.tools import WikipediaQueryRun\n",
"from langchain_community.tools import WikipediaQueryRun\n",
"from langchain_community.utilities import WikipediaAPIWrapper\n",
"from langchain_core.prompt_values import ChatPromptValue\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",

@ -1,492 +0,0 @@
{
"cells": [
{
"cell_type": "raw",
"id": "abe47592-909c-4844-bf44-9e55c2fb4bfa",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 1\n",
"title: RAG\n",
"---\n"
]
},
{
"cell_type": "markdown",
"id": "91c5ef3d",
"metadata": {},
"source": [
"Let's look at adding in a retrieval step to a prompt and LLM, which adds up to a \"retrieval-augmented generation\" chain"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "7f25d9e9-d192-42e9-af50-5660a4bfb0d9",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai faiss-cpu tiktoken"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "33be32af",
"metadata": {},
"outputs": [],
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "bfc47ec1",
"metadata": {},
"outputs": [],
"source": [
"vectorstore = FAISS.from_texts(\n",
" [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n",
")\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"model = ChatOpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "eae31755",
"metadata": {},
"outputs": [],
"source": [
"chain = (\n",
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | model\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "f3040b0c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Harrison worked at Kensho.'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\"where did harrison work?\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "e1d20c7c",
"metadata": {},
"outputs": [],
"source": [
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\n",
"Answer in the following language: {language}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"chain = (\n",
" {\n",
" \"context\": itemgetter(\"question\") | retriever,\n",
" \"question\": itemgetter(\"question\"),\n",
" \"language\": itemgetter(\"language\"),\n",
" }\n",
" | prompt\n",
" | model\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "7ee8b2d4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Harrison ha lavorato a Kensho.'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"question\": \"where did harrison work\", \"language\": \"italian\"})"
]
},
{
"cell_type": "markdown",
"id": "f007669c",
"metadata": {},
"source": [
"## Conversational Retrieval Chain\n",
"\n",
"We can easily add in conversation history. This primarily means adding in chat_message_history"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "3f30c348",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import AIMessage, HumanMessage, get_buffer_string\n",
"from langchain_core.prompts import format_document\n",
"from langchain_core.runnables import RunnableParallel"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "64ab1dbf",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts.prompt import PromptTemplate\n",
"\n",
"_template = \"\"\"Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\n",
"\n",
"Chat History:\n",
"{chat_history}\n",
"Follow Up Input: {question}\n",
"Standalone question:\"\"\"\n",
"CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "7d628c97",
"metadata": {},
"outputs": [],
"source": [
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"ANSWER_PROMPT = ChatPromptTemplate.from_template(template)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "f60a5d0f",
"metadata": {},
"outputs": [],
"source": [
"DEFAULT_DOCUMENT_PROMPT = PromptTemplate.from_template(template=\"{page_content}\")\n",
"\n",
"\n",
"def _combine_documents(\n",
" docs, document_prompt=DEFAULT_DOCUMENT_PROMPT, document_separator=\"\\n\\n\"\n",
"):\n",
" doc_strings = [format_document(doc, document_prompt) for doc in docs]\n",
" return document_separator.join(doc_strings)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "5c32cc89",
"metadata": {},
"outputs": [],
"source": [
"_inputs = RunnableParallel(\n",
" standalone_question=RunnablePassthrough.assign(\n",
" chat_history=lambda x: get_buffer_string(x[\"chat_history\"])\n",
" )\n",
" | CONDENSE_QUESTION_PROMPT\n",
" | ChatOpenAI(temperature=0)\n",
" | StrOutputParser(),\n",
")\n",
"_context = {\n",
" \"context\": itemgetter(\"standalone_question\") | retriever | _combine_documents,\n",
" \"question\": lambda x: x[\"standalone_question\"],\n",
"}\n",
"conversational_qa_chain = _inputs | _context | ANSWER_PROMPT | ChatOpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "135c8205",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Harrison was employed at Kensho.')"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversational_qa_chain.invoke(\n",
" {\n",
" \"question\": \"where did harrison work?\",\n",
" \"chat_history\": [],\n",
" }\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "424e7e7a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Harrison worked at Kensho.')"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversational_qa_chain.invoke(\n",
" {\n",
" \"question\": \"where did he work?\",\n",
" \"chat_history\": [\n",
" HumanMessage(content=\"Who wrote this notebook?\"),\n",
" AIMessage(content=\"Harrison\"),\n",
" ],\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "c5543183",
"metadata": {},
"source": [
"### With Memory and returning source documents\n",
"\n",
"This shows how to use memory with the above. For memory, we need to manage that outside at the memory. For returning the retrieved documents, we just need to pass them through all the way."
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "e31dd17c",
"metadata": {},
"outputs": [],
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain.memory import ConversationBufferMemory"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "d4bffe94",
"metadata": {},
"outputs": [],
"source": [
"memory = ConversationBufferMemory(\n",
" return_messages=True, output_key=\"answer\", input_key=\"question\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "733be985",
"metadata": {},
"outputs": [],
"source": [
"# First we add a step to load memory\n",
"# This adds a \"memory\" key to the input object\n",
"loaded_memory = RunnablePassthrough.assign(\n",
" chat_history=RunnableLambda(memory.load_memory_variables) | itemgetter(\"history\"),\n",
")\n",
"# Now we calculate the standalone question\n",
"standalone_question = {\n",
" \"standalone_question\": {\n",
" \"question\": lambda x: x[\"question\"],\n",
" \"chat_history\": lambda x: get_buffer_string(x[\"chat_history\"]),\n",
" }\n",
" | CONDENSE_QUESTION_PROMPT\n",
" | ChatOpenAI(temperature=0)\n",
" | StrOutputParser(),\n",
"}\n",
"# Now we retrieve the documents\n",
"retrieved_documents = {\n",
" \"docs\": itemgetter(\"standalone_question\") | retriever,\n",
" \"question\": lambda x: x[\"standalone_question\"],\n",
"}\n",
"# Now we construct the inputs for the final prompt\n",
"final_inputs = {\n",
" \"context\": lambda x: _combine_documents(x[\"docs\"]),\n",
" \"question\": itemgetter(\"question\"),\n",
"}\n",
"# And finally, we do the part that returns the answers\n",
"answer = {\n",
" \"answer\": final_inputs | ANSWER_PROMPT | ChatOpenAI(),\n",
" \"docs\": itemgetter(\"docs\"),\n",
"}\n",
"# And now we put it all together!\n",
"final_chain = loaded_memory | standalone_question | retrieved_documents | answer"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "806e390c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'answer': AIMessage(content='Harrison was employed at Kensho.'),\n",
" 'docs': [Document(page_content='harrison worked at kensho')]}"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"inputs = {\"question\": \"where did harrison work?\"}\n",
"result = final_chain.invoke(inputs)\n",
"result"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "977399fd",
"metadata": {},
"outputs": [],
"source": [
"# Note that the memory does not save automatically\n",
"# This will be improved in the future\n",
"# For now you need to save it yourself\n",
"memory.save_context(inputs, {\"answer\": result[\"answer\"].content})"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "f94f7de4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'history': [HumanMessage(content='where did harrison work?'),\n",
" AIMessage(content='Harrison was employed at Kensho.')]}"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"memory.load_memory_variables({})"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "88f2b7cd",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'answer': AIMessage(content='Harrison actually worked at Kensho.'),\n",
" 'docs': [Document(page_content='harrison worked at kensho')]}"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"inputs = {\"question\": \"but where did he really work?\"}\n",
"result = final_chain.invoke(inputs)\n",
"result"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "207a2782",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -1,225 +0,0 @@
{
"cells": [
{
"cell_type": "raw",
"id": "c14da114-1a4a-487d-9cff-e0e8c30ba366",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 3\n",
"title: Querying a SQL DB\n",
"---\n"
]
},
{
"cell_type": "markdown",
"id": "506e9636",
"metadata": {},
"source": [
"We can replicate our SQLDatabaseChain with Runnables."
]
},
{
"cell_type": "code",
"id": "b3121aa8",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "7a927516",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"template = \"\"\"Based on the table schema below, write a SQL query that would answer the user's question:\n",
"{schema}\n",
"\n",
"Question: {question}\n",
"SQL Query:\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "3f51f386",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.utilities import SQLDatabase"
]
},
{
"cell_type": "markdown",
"id": "7c3449d6-684b-416e-ba16-90a035835a88",
"metadata": {},
"source": [
"We'll need the Chinook sample DB for this example. There's many places to download it from, e.g. https://database.guide/2-sample-databases-sqlite/"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "2ccca6fc",
"metadata": {},
"outputs": [],
"source": [
"db = SQLDatabase.from_uri(\"sqlite:///./Chinook.db\")"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "05ba88ee",
"metadata": {},
"outputs": [],
"source": [
"def get_schema(_):\n",
" return db.get_table_info()"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "a4eda902",
"metadata": {},
"outputs": [],
"source": [
"def run_query(query):\n",
" return db.run(query)"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "5046cb17",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI()\n",
"\n",
"sql_response = (\n",
" RunnablePassthrough.assign(schema=get_schema)\n",
" | prompt\n",
" | model.bind(stop=[\"\\nSQLResult:\"])\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "a5552039",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'SELECT COUNT(*) FROM Employee'"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"sql_response.invoke({\"question\": \"How many employees are there?\"})"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "d6fee130",
"metadata": {},
"outputs": [],
"source": [
"template = \"\"\"Based on the table schema below, question, sql query, and sql response, write a natural language response:\n",
"{schema}\n",
"\n",
"Question: {question}\n",
"SQL Query: {query}\n",
"SQL Response: {response}\"\"\"\n",
"prompt_response = ChatPromptTemplate.from_template(template)"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "923aa634",
"metadata": {},
"outputs": [],
"source": [
"full_chain = (\n",
" RunnablePassthrough.assign(query=sql_response).assign(\n",
" schema=get_schema,\n",
" response=lambda x: db.run(x[\"query\"]),\n",
" )\n",
" | prompt_response\n",
" | model\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "e94963d8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='There are 8 employees.', additional_kwargs={}, example=False)"
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"full_chain.invoke({\"question\": \"How many employees are there?\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4f358d7b-a721-4db3-9f92-f06913428afc",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -1,122 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "29781123",
"metadata": {},
"source": [
"# Using tools\n",
"\n",
"You can use any Tools with Runnables easily."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a5c579dd-2e22-41b0-a789-346dfdecb5a2",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai duckduckgo-search"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "9232d2a9",
"metadata": {},
"outputs": [],
"source": [
"from langchain.tools import DuckDuckGoSearchRun\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a0c64d2c",
"metadata": {},
"outputs": [],
"source": [
"search = DuckDuckGoSearchRun()"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "391969b6",
"metadata": {},
"outputs": [],
"source": [
"template = \"\"\"turn the following user input into a search query for a search engine:\n",
"\n",
"{input}\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"model = ChatOpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "e3d9d20d",
"metadata": {},
"outputs": [],
"source": [
"chain = prompt | model | StrOutputParser() | search"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "55f2967d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'What sports games are on TV today & tonight? Watch and stream live sports on TV today, tonight, tomorrow. Today\\'s 2023 sports TV schedule includes football, basketball, baseball, hockey, motorsports, soccer and more. Watch on TV or stream online on ESPN, FOX, FS1, CBS, NBC, ABC, Peacock, Paramount+, fuboTV, local channels and many other networks. MLB Games Tonight: How to Watch on TV, Streaming & Odds - Thursday, September 7. Seattle Mariners\\' Julio Rodriguez greets teammates in the dugout after scoring against the Oakland Athletics in a ... Circle - Country Music and Lifestyle. Live coverage of all the MLB action today is available to you, with the information provided below. The Brewers will look to pick up a road win at PNC Park against the Pirates on Wednesday at 12:35 PM ET. Check out the latest odds and with BetMGM Sportsbook. Use bonus code \"GNPLAY\" for special offers! MLB Games Tonight: How to Watch on TV, Streaming & Odds - Tuesday, September 5. Houston Astros\\' Kyle Tucker runs after hitting a double during the fourth inning of a baseball game against the Los Angeles Angels, Sunday, Aug. 13, 2023, in Houston. (AP Photo/Eric Christian Smith) (APMedia) The Houston Astros versus the Texas Rangers is one of ... The second half of tonight\\'s college football schedule still has some good games remaining to watch on your television.. We\\'ve already seen an exciting one when Colorado upset TCU. And we saw some ...'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"input\": \"I'd like to figure out what games are tonight\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a16949cf-00ea-43c6-a6aa-797ad4f6918d",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv",
"language": "python",
"name": "poetry-venv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -509,7 +509,7 @@
"source": [
"## Next steps\n",
"\n",
"We recommend reading our [Why use LCEL](/docs/expression_language/why) section next to see a side-by-side comparison of the code needed to produce common functionality with and without LCEL."
"We recommend reading our [Advantages of LCEL](/docs/expression_language/why) section next to see a side-by-side comparison of the code needed to produce common functionality with and without LCEL."
]
}
],

@ -5,9 +5,9 @@
"id": "b45110ef",
"metadata": {},
"source": [
"# Create a runnable with the `@chain` decorator\n",
"# Create a runnable with the @chain decorator\n",
"\n",
"You can also turn an arbitrary function into a chain by adding a `@chain` decorator. This is functionaly equivalent to wrapping in a [`RunnableLambda`](./functions).\n",
"You can also turn an arbitrary function into a chain by adding a `@chain` decorator. This is functionaly equivalent to wrapping in a [`RunnableLambda`](/docs/expression_language/primitives/functions).\n",
"\n",
"This will have the benefit of improved observability by tracing your chain correctly. Any calls to runnables inside this function will be traced as nested childen.\n",
"\n",

@ -1,310 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "19c9cbd6",
"metadata": {},
"source": [
"# Add fallbacks\n",
"\n",
"There are many possible points of failure in an LLM application, whether that be issues with LLM API's, poor model outputs, issues with other integrations, etc. Fallbacks help you gracefully handle and isolate these issues.\n",
"\n",
"Crucially, fallbacks can be applied not only on the LLM level but on the whole runnable level."
]
},
{
"cell_type": "markdown",
"id": "a6bb9ba9",
"metadata": {},
"source": [
"## Handling LLM API Errors\n",
"\n",
"This is maybe the most common use case for fallbacks. A request to an LLM API can fail for a variety of reasons - the API could be down, you could have hit rate limits, any number of things. Therefore, using fallbacks can help protect against these types of things.\n",
"\n",
"IMPORTANT: By default, a lot of the LLM wrappers catch errors and retry. You will most likely want to turn those off when working with fallbacks. Otherwise the first wrapper will keep on retrying and not failing."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ebb61b1f",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "d3e893bf",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatAnthropic\n",
"from langchain_openai import ChatOpenAI"
]
},
{
"cell_type": "markdown",
"id": "4847c82d",
"metadata": {},
"source": [
"First, let's mock out what happens if we hit a RateLimitError from OpenAI"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "dfdd8bf5",
"metadata": {},
"outputs": [],
"source": [
"from unittest.mock import patch\n",
"\n",
"import httpx\n",
"from openai import RateLimitError\n",
"\n",
"request = httpx.Request(\"GET\", \"/\")\n",
"response = httpx.Response(200, request=request)\n",
"error = RateLimitError(\"rate limit\", response=response, body=\"\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "e6fdffc1",
"metadata": {},
"outputs": [],
"source": [
"# Note that we set max_retries = 0 to avoid retrying on RateLimits, etc\n",
"openai_llm = ChatOpenAI(max_retries=0)\n",
"anthropic_llm = ChatAnthropic()\n",
"llm = openai_llm.with_fallbacks([anthropic_llm])"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "584461ab",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Hit error\n"
]
}
],
"source": [
"# Let's use just the OpenAI LLm first, to show that we run into an error\n",
"with patch(\"openai.resources.chat.completions.Completions.create\", side_effect=error):\n",
" try:\n",
" print(openai_llm.invoke(\"Why did the chicken cross the road?\"))\n",
" except RateLimitError:\n",
" print(\"Hit error\")"
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "4fc1e673",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=' I don\\'t actually know why the chicken crossed the road, but here are some possible humorous answers:\\n\\n- To get to the other side!\\n\\n- It was too chicken to just stand there. \\n\\n- It wanted a change of scenery.\\n\\n- It wanted to show the possum it could be done.\\n\\n- It was on its way to a poultry farmers\\' convention.\\n\\nThe joke plays on the double meaning of \"the other side\" - literally crossing the road to the other side, or the \"other side\" meaning the afterlife. So it\\'s an anti-joke, with a silly or unexpected pun as the answer.' additional_kwargs={} example=False\n"
]
}
],
"source": [
"# Now let's try with fallbacks to Anthropic\n",
"with patch(\"openai.resources.chat.completions.Completions.create\", side_effect=error):\n",
" try:\n",
" print(llm.invoke(\"Why did the chicken cross the road?\"))\n",
" except RateLimitError:\n",
" print(\"Hit error\")"
]
},
{
"cell_type": "markdown",
"id": "f00bea25",
"metadata": {},
"source": [
"We can use our \"LLM with Fallbacks\" as we would a normal LLM."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "4f8eaaa0",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\" I don't actually know why the kangaroo crossed the road, but I'm happy to take a guess! Maybe the kangaroo was trying to get to the other side to find some tasty grass to eat. Or maybe it was trying to get away from a predator or other danger. Kangaroos do need to cross roads and other open areas sometimes as part of their normal activities. Whatever the reason, I'm sure the kangaroo looked both ways before hopping across!\" additional_kwargs={} example=False\n"
]
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You're a nice assistant who always includes a compliment in your response\",\n",
" ),\n",
" (\"human\", \"Why did the {animal} cross the road\"),\n",
" ]\n",
")\n",
"chain = prompt | llm\n",
"with patch(\"openai.resources.chat.completions.Completions.create\", side_effect=error):\n",
" try:\n",
" print(chain.invoke({\"animal\": \"kangaroo\"}))\n",
" except RateLimitError:\n",
" print(\"Hit error\")"
]
},
{
"cell_type": "markdown",
"id": "ef9f0f39-0b9f-4723-a394-f61c98c75d41",
"metadata": {},
"source": [
"### Specifying errors to handle\n",
"\n",
"We can also specify the errors to handle if we want to be more specific about when the fallback is invoked:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "e4069ca4-1c16-4915-9a8c-b2732869ae27",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Hit error\n"
]
}
],
"source": [
"llm = openai_llm.with_fallbacks(\n",
" [anthropic_llm], exceptions_to_handle=(KeyboardInterrupt,)\n",
")\n",
"\n",
"chain = prompt | llm\n",
"with patch(\"openai.resources.chat.completions.Completions.create\", side_effect=error):\n",
" try:\n",
" print(chain.invoke({\"animal\": \"kangaroo\"}))\n",
" except RateLimitError:\n",
" print(\"Hit error\")"
]
},
{
"cell_type": "markdown",
"id": "8d62241b",
"metadata": {},
"source": [
"## Fallbacks for Sequences\n",
"\n",
"We can also create fallbacks for sequences, that are sequences themselves. Here we do that with two different models: ChatOpenAI and then normal OpenAI (which does not use a chat model). Because OpenAI is NOT a chat model, you likely want a different prompt."
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "6d0b8056",
"metadata": {},
"outputs": [],
"source": [
"# First let's create a chain with a ChatModel\n",
"# We add in a string output parser here so the outputs between the two are the same type\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"\n",
"chat_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You're a nice assistant who always includes a compliment in your response\",\n",
" ),\n",
" (\"human\", \"Why did the {animal} cross the road\"),\n",
" ]\n",
")\n",
"# Here we're going to use a bad model name to easily create a chain that will error\n",
"chat_model = ChatOpenAI(model_name=\"gpt-fake\")\n",
"bad_chain = chat_prompt | chat_model | StrOutputParser()"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "8d1fc2a5",
"metadata": {},
"outputs": [],
"source": [
"# Now lets create a chain with the normal OpenAI model\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_openai import OpenAI\n",
"\n",
"prompt_template = \"\"\"Instructions: You should always include a compliment in your response.\n",
"\n",
"Question: Why did the {animal} cross the road?\"\"\"\n",
"prompt = PromptTemplate.from_template(prompt_template)\n",
"llm = OpenAI()\n",
"good_chain = prompt | llm"
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "283bfa44",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\nAnswer: The turtle crossed the road to get to the other side, and I have to say he had some impressive determination.'"
]
},
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# We can now create a final chain which combines the two\n",
"chain = bad_chain.with_fallbacks([good_chain])\n",
"chain.invoke({\"animal\": \"turtle\"})"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -1,206 +0,0 @@
{
"cells": [
{
"cell_type": "raw",
"id": "ce0e08fd",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 2\n",
"title: \"RunnableLambda: Run Custom Functions\"\n",
"keywords: [RunnableLambda, LCEL]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "fbc4bf6e",
"metadata": {},
"source": [
"# Run custom functions\n",
"\n",
"You can use arbitrary functions in the pipeline.\n",
"\n",
"Note that all inputs to these functions need to be a SINGLE argument. If you have a function that accepts multiple arguments, you should write a wrapper that accepts a single input and unpacks it into multiple argument."
]
},
{
"cell_type": "raw",
"id": "9a5fe916",
"metadata": {},
"source": [
"%pip install --upgrade --quiet langchain langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "6bb221b3",
"metadata": {},
"outputs": [],
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnableLambda\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"\n",
"def length_function(text):\n",
" return len(text)\n",
"\n",
"\n",
"def _multiple_length_function(text1, text2):\n",
" return len(text1) * len(text2)\n",
"\n",
"\n",
"def multiple_length_function(_dict):\n",
" return _multiple_length_function(_dict[\"text1\"], _dict[\"text2\"])\n",
"\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\"what is {a} + {b}\")\n",
"model = ChatOpenAI()\n",
"\n",
"chain1 = prompt | model\n",
"\n",
"chain = (\n",
" {\n",
" \"a\": itemgetter(\"foo\") | RunnableLambda(length_function),\n",
" \"b\": {\"text1\": itemgetter(\"foo\"), \"text2\": itemgetter(\"bar\")}\n",
" | RunnableLambda(multiple_length_function),\n",
" }\n",
" | prompt\n",
" | model\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "5488ec85",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='3 + 9 equals 12.')"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"foo\": \"bar\", \"bar\": \"gah\"})"
]
},
{
"cell_type": "markdown",
"id": "4728ddd9-914d-42ce-ae9b-72c9ce8ec940",
"metadata": {},
"source": [
"## Accepting a Runnable Config\n",
"\n",
"Runnable lambdas can optionally accept a [RunnableConfig](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.config.RunnableConfig.html#langchain_core.runnables.config.RunnableConfig), which they can use to pass callbacks, tags, and other configuration information to nested runs."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "80b3b5f6-5d58-44b9-807e-cce9a46bf49f",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnableConfig"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "ff0daf0c-49dd-4d21-9772-e5fa133c5f36",
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"\n",
"def parse_or_fix(text: str, config: RunnableConfig):\n",
" fixing_chain = (\n",
" ChatPromptTemplate.from_template(\n",
" \"Fix the following text:\\n\\n```text\\n{input}\\n```\\nError: {error}\"\n",
" \" Don't narrate, just respond with the fixed data.\"\n",
" )\n",
" | ChatOpenAI()\n",
" | StrOutputParser()\n",
" )\n",
" for _ in range(3):\n",
" try:\n",
" return json.loads(text)\n",
" except Exception as e:\n",
" text = fixing_chain.invoke({\"input\": text, \"error\": e}, config)\n",
" return \"Failed to parse\""
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "1a5e709e-9d75-48c7-bb9c-503251990505",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'foo': 'bar'}\n",
"Tokens Used: 65\n",
"\tPrompt Tokens: 56\n",
"\tCompletion Tokens: 9\n",
"Successful Requests: 1\n",
"Total Cost (USD): $0.00010200000000000001\n"
]
}
],
"source": [
"from langchain.callbacks import get_openai_callback\n",
"\n",
"with get_openai_callback() as cb:\n",
" output = RunnableLambda(parse_or_fix).invoke(\n",
" \"{foo: bar}\", {\"tags\": [\"my-tag\"], \"callbacks\": [cb]}\n",
" )\n",
" print(output)\n",
" print(cb)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "29f55c38",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -1,9 +0,0 @@
---
sidebar_position: 2
---
# How to
import DocCardList from "@theme/DocCardList";
<DocCardList />

@ -30,9 +30,9 @@
"outputs": [],
"source": [
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.vectorstores import FAISS\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings"
]
},

@ -7,7 +7,7 @@
"source": [
"---\n",
"sidebar_position: 3\n",
"title: \"RunnableBranch: Dynamically route logic based on input\"\n",
"title: \"Route logic based on input\"\n",
"keywords: [RunnableBranch, LCEL]\n",
"---"
]
@ -25,7 +25,7 @@
"\n",
"There are two ways to perform routing:\n",
"\n",
"1. Conditionally return runnables from a [`RunnableLambda`](./functions) (recommended)\n",
"1. Conditionally return runnables from a [`RunnableLambda`](/docs/expression_language/primitives/functions) (recommended)\n",
"2. Using a `RunnableBranch`.\n",
"\n",
"We'll illustrate both methods using a two step sequence where the first step classifies an input question as being about `LangChain`, `Anthropic`, or `Other`, then routes to a corresponding prompt chain."
@ -42,22 +42,23 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "8a8a1967",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' Anthropic'"
"'Anthropic'"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "display_data"
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.chat_models import ChatAnthropic\n",
"from langchain_anthropic import ChatAnthropic\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
@ -73,7 +74,7 @@
"\n",
"Classification:\"\"\"\n",
" )\n",
" | ChatAnthropic()\n",
" | ChatAnthropic(model_name=\"claude-3-haiku-20240307\")\n",
" | StrOutputParser()\n",
")\n",
"\n",
@ -90,42 +91,33 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 3,
"id": "89d7722d",
"metadata": {},
"outputs": [],
"source": [
"langchain_chain = (\n",
" PromptTemplate.from_template(\n",
" \"\"\"You are an expert in langchain. \\\n",
"langchain_chain = PromptTemplate.from_template(\n",
" \"\"\"You are an expert in langchain. \\\n",
"Always answer questions starting with \"As Harrison Chase told me\". \\\n",
"Respond to the following question:\n",
"\n",
"Question: {question}\n",
"Answer:\"\"\"\n",
" )\n",
" | ChatAnthropic()\n",
")\n",
"anthropic_chain = (\n",
" PromptTemplate.from_template(\n",
" \"\"\"You are an expert in anthropic. \\\n",
") | ChatAnthropic(model_name=\"claude-3-haiku-20240307\")\n",
"anthropic_chain = PromptTemplate.from_template(\n",
" \"\"\"You are an expert in anthropic. \\\n",
"Always answer questions starting with \"As Dario Amodei told me\". \\\n",
"Respond to the following question:\n",
"\n",
"Question: {question}\n",
"Answer:\"\"\"\n",
" )\n",
" | ChatAnthropic()\n",
")\n",
"general_chain = (\n",
" PromptTemplate.from_template(\n",
" \"\"\"Respond to the following question:\n",
") | ChatAnthropic(model_name=\"claude-3-haiku-20240307\")\n",
"general_chain = PromptTemplate.from_template(\n",
" \"\"\"Respond to the following question:\n",
"\n",
"Question: {question}\n",
"Answer:\"\"\"\n",
" )\n",
" | ChatAnthropic()\n",
")"
") | ChatAnthropic(model_name=\"claude-3-haiku-20240307\")"
]
},
{
@ -140,7 +132,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 4,
"id": "687492da",
"metadata": {},
"outputs": [],
@ -156,7 +148,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 5,
"id": "02a33c86",
"metadata": {},
"outputs": [],
@ -170,17 +162,17 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 6,
"id": "c2e977a4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' As Dario Amodei told me, to use Anthropic IPC you first need to import it:\\n\\n```python\\nfrom anthroipc import ic\\n```\\n\\nThen you can create a client and connect to the server:\\n\\n```python \\nclient = ic.connect()\\n```\\n\\nAfter that, you can call methods on the client and get responses:\\n\\n```python\\nresponse = client.ask(\"What is the meaning of life?\")\\nprint(response)\\n```\\n\\nYou can also register callbacks to handle events: \\n\\n```python\\ndef on_poke(event):\\n print(\"Got poked!\")\\n\\nclient.on(\\'poke\\', on_poke)\\n```\\n\\nAnd that\\'s the basics of using the Anthropic IPC client library for Python! Let me know if you have any other questions!', additional_kwargs={}, example=False)"
"AIMessage(content=\"As Dario Amodei told me, to use Anthropic, you can start by exploring the company's website and learning about their mission, values, and the different services and products they offer. Anthropic is focused on developing safe and ethical AI systems, so they have a strong emphasis on transparency and responsible AI development. \\n\\nDepending on your specific needs, you can look into Anthropic's AI research and development services, which cover areas like natural language processing, computer vision, and reinforcement learning. They also offer consulting and advisory services to help organizations navigate the challenges and opportunities of AI integration.\\n\\nAdditionally, Anthropic has released some open-source AI models and tools that you can explore and experiment with. These can be a great way to get hands-on experience with Anthropic's approach to AI development.\\n\\nOverall, Anthropic aims to be a reliable and trustworthy partner in the AI space, so I'd encourage you to reach out to them directly to discuss how they can best support your specific requirements.\", response_metadata={'id': 'msg_01CtLFgFSwvTaJomrihE87Ra', 'content': [ContentBlock(text=\"As Dario Amodei told me, to use Anthropic, you can start by exploring the company's website and learning about their mission, values, and the different services and products they offer. Anthropic is focused on developing safe and ethical AI systems, so they have a strong emphasis on transparency and responsible AI development. \\n\\nDepending on your specific needs, you can look into Anthropic's AI research and development services, which cover areas like natural language processing, computer vision, and reinforcement learning. They also offer consulting and advisory services to help organizations navigate the challenges and opportunities of AI integration.\\n\\nAdditionally, Anthropic has released some open-source AI models and tools that you can explore and experiment with. These can be a great way to get hands-on experience with Anthropic's approach to AI development.\\n\\nOverall, Anthropic aims to be a reliable and trustworthy partner in the AI space, so I'd encourage you to reach out to them directly to discuss how they can best support your specific requirements.\", type='text')], 'model': 'claude-3-haiku-20240307', 'role': 'assistant', 'stop_reason': 'end_turn', 'stop_sequence': None, 'type': 'message', 'usage': Usage(input_tokens=53, output_tokens=219)})"
]
},
"execution_count": 12,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
@ -191,17 +183,17 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 7,
"id": "48913dc6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' As Harrison Chase told me, to use LangChain you first need to sign up for an API key at platform.langchain.com. Once you have your API key, you can install the Python library and write a simple Python script to call the LangChain API. Here is some sample code to get started:\\n\\n```python\\nimport langchain\\n\\napi_key = \"YOUR_API_KEY\"\\n\\nlangchain.set_key(api_key)\\n\\nresponse = langchain.ask(\"What is the capital of France?\")\\n\\nprint(response.response)\\n```\\n\\nThis will send the question \"What is the capital of France?\" to the LangChain API and print the response. You can customize the request by providing parameters like max_tokens, temperature, etc. The LangChain Python library documentation has more details on the available options. The key things are getting an API key and calling langchain.ask() with your question text. Let me know if you have any other questions!', additional_kwargs={}, example=False)"
"AIMessage(content=\"As Harrison Chase told me, using LangChain involves a few key steps:\\n\\n1. **Set up your environment**: Install the necessary Python packages, including the LangChain library itself, as well as any other dependencies your application might require, such as language models or other integrations.\\n\\n2. **Understand the core concepts**: LangChain revolves around a few core concepts, like Agents, Chains, and Tools. Familiarize yourself with these concepts and how they work together to build powerful language-based applications.\\n\\n3. **Identify your use case**: Determine what kind of task or application you want to build using LangChain, such as a chatbot, a question-answering system, or a document summarization tool.\\n\\n4. **Choose the appropriate components**: Based on your use case, select the right LangChain components, such as agents, chains, and tools, to build your application.\\n\\n5. **Integrate with language models**: LangChain is designed to work seamlessly with various language models, such as OpenAI's GPT-3 or Anthropic's models. Connect your chosen language model to your LangChain application.\\n\\n6. **Implement your application logic**: Use LangChain's building blocks to implement the specific functionality of your application, such as prompting the language model, processing the response, and integrating with other services or data sources.\\n\\n7. **Test and iterate**: Thoroughly test your application, gather feedback, and iterate on your design and implementation to improve its performance and user experience.\\n\\nAs Harrison Chase emphasized, LangChain provides a flexible and powerful framework for building language-based applications, making it easier to leverage the capabilities of modern language models. By following these steps, you can get started with LangChain and create innovative solutions tailored to your specific needs.\", response_metadata={'id': 'msg_01H3UXAAHG4TwxJLpxwuuVU7', 'content': [ContentBlock(text=\"As Harrison Chase told me, using LangChain involves a few key steps:\\n\\n1. **Set up your environment**: Install the necessary Python packages, including the LangChain library itself, as well as any other dependencies your application might require, such as language models or other integrations.\\n\\n2. **Understand the core concepts**: LangChain revolves around a few core concepts, like Agents, Chains, and Tools. Familiarize yourself with these concepts and how they work together to build powerful language-based applications.\\n\\n3. **Identify your use case**: Determine what kind of task or application you want to build using LangChain, such as a chatbot, a question-answering system, or a document summarization tool.\\n\\n4. **Choose the appropriate components**: Based on your use case, select the right LangChain components, such as agents, chains, and tools, to build your application.\\n\\n5. **Integrate with language models**: LangChain is designed to work seamlessly with various language models, such as OpenAI's GPT-3 or Anthropic's models. Connect your chosen language model to your LangChain application.\\n\\n6. **Implement your application logic**: Use LangChain's building blocks to implement the specific functionality of your application, such as prompting the language model, processing the response, and integrating with other services or data sources.\\n\\n7. 
**Test and iterate**: Thoroughly test your application, gather feedback, and iterate on your design and implementation to improve its performance and user experience.\\n\\nAs Harrison Chase emphasized, LangChain provides a flexible and powerful framework for building language-based applications, making it easier to leverage the capabilities of modern language models. By following these steps, you can get started with LangChain and create innovative solutions tailored to your specific needs.\", type='text')], 'model': 'claude-3-haiku-20240307', 'role': 'assistant', 'stop_reason': 'end_turn', 'stop_sequence': None, 'type': 'message', 'usage': Usage(input_tokens=50, output_tokens=400)})"
]
},
"execution_count": 13,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@ -212,17 +204,17 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 8,
"id": "a14d0dca",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' 4', additional_kwargs={}, example=False)"
"AIMessage(content='4', response_metadata={'id': 'msg_01UAKP81jTZu9fyiyFYhsbHc', 'content': [ContentBlock(text='4', type='text')], 'model': 'claude-3-haiku-20240307', 'role': 'assistant', 'stop_reason': 'end_turn', 'stop_sequence': None, 'type': 'message', 'usage': Usage(input_tokens=28, output_tokens=5)})"
]
},
"execution_count": 14,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
@ -249,18 +241,19 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 9,
"id": "2a101418",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\" As Dario Amodei told me, here are some ways to use Anthropic:\\n\\n- Sign up for an account on Anthropic's website to access tools like Claude, Constitutional AI, and Writer. \\n\\n- Use Claude for tasks like email generation, customer service chat, and QA. Claude can understand natural language prompts and provide helpful responses.\\n\\n- Use Constitutional AI if you need an AI assistant that is harmless, honest, and helpful. It is designed to be safe and aligned with human values.\\n\\n- Use Writer to generate natural language content for things like marketing copy, stories, reports, and more. Give it a topic and prompt and it will create high-quality written content.\\n\\n- Check out Anthropic's documentation and blog for tips, tutorials, examples, and announcements about new capabilities as they continue to develop their AI technology.\\n\\n- Follow Anthropic on social media or subscribe to their newsletter to stay up to date on new features and releases.\\n\\n- For most people, the easiest way to leverage Anthropic's technology is through their website - just create an account to get started!\", additional_kwargs={}, example=False)"
"AIMessage(content=\"As Dario Amodei told me, to use Anthropic, you should first familiarize yourself with our mission and principles. Anthropic is committed to developing safe and beneficial artificial intelligence that can help solve important problems facing humanity. \\n\\nTo get started, I recommend exploring the resources on our website, which cover our research, products, and approach to AI development. You can also reach out to our team to learn more about how Anthropic's technology and services can support your specific needs.\\n\\nThe key is to engage with us in a way that aligns with our values of transparency, ethical AI, and a commitment to the wellbeing of humanity. We're here to collaborate and help you harness the power of advanced AI responsibly.\", response_metadata={'id': 'msg_0187BVnpniPDJnVvwf3M1LdY', 'content': [ContentBlock(text=\"As Dario Amodei told me, to use Anthropic, you should first familiarize yourself with our mission and principles. Anthropic is committed to developing safe and beneficial artificial intelligence that can help solve important problems facing humanity. \\n\\nTo get started, I recommend exploring the resources on our website, which cover our research, products, and approach to AI development. You can also reach out to our team to learn more about how Anthropic's technology and services can support your specific needs.\\n\\nThe key is to engage with us in a way that aligns with our values of transparency, ethical AI, and a commitment to the wellbeing of humanity. We're here to collaborate and help you harness the power of advanced AI responsibly.\", type='text')], 'model': 'claude-3-haiku-20240307', 'role': 'assistant', 'stop_reason': 'end_turn', 'stop_sequence': None, 'type': 'message', 'usage': Usage(input_tokens=53, output_tokens=160)})"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "display_data"
"output_type": "execute_result"
}
],
"source": [
@ -277,18 +270,19 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 10,
"id": "8d8caf9b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' As Harrison Chase told me, here is how you use LangChain:\\n\\nLangChain is an AI assistant that can have conversations, answer questions, and generate text. To use LangChain, you simply type or speak your input and LangChain will respond. \\n\\nYou can ask LangChain questions, have discussions, get summaries or explanations about topics, and request it to generate text on a subject. Some examples of interactions:\\n\\n- Ask general knowledge questions and LangChain will try to answer factually. For example \"What is the capital of France?\"\\n\\n- Have conversations on topics by taking turns speaking. You can prompt the start of a conversation by saying something like \"Let\\'s discuss machine learning\"\\n\\n- Ask for summaries or high-level explanations on subjects. For example \"Can you summarize the main themes in Shakespeare\\'s Hamlet?\" \\n\\n- Give creative writing prompts or requests to have LangChain generate text in different styles. For example \"Write a short children\\'s story about a mouse\" or \"Generate a poem in the style of Robert Frost about nature\"\\n\\n- Correct LangChain if it makes an inaccurate statement and provide the right information. This helps train it.\\n\\nThe key is interacting naturally and giving it clear prompts and requests', additional_kwargs={}, example=False)"
"AIMessage(content=\"As Harrison Chase told me, using LangChain involves several key steps. First, you'll need to install the LangChain library and import the necessary modules. Then, you'll want to define your language model, any data sources you plan to use, and the specific tasks you want to accomplish, such as question answering, text generation, or agent-based reasoning. \\n\\nLangChain provides a flexible framework for building applications that leverage large language models. It includes abstractions for things like retrievers, prompts, and chains, which allow you to compose different components together to create powerful workflows. \\n\\nThe documentation on the LangChain website is excellent and covers many common use cases in detail. I'd recommend starting there to get a solid understanding of the core concepts and how to apply them to your specific needs. And of course, feel free to reach out if you have any other questions - I'm always happy to share more insights from my conversations with Harrison.\", response_metadata={'id': 'msg_01T1naS99wGPkEAP4LME8iAv', 'content': [ContentBlock(text=\"As Harrison Chase told me, using LangChain involves several key steps. First, you'll need to install the LangChain library and import the necessary modules. Then, you'll want to define your language model, any data sources you plan to use, and the specific tasks you want to accomplish, such as question answering, text generation, or agent-based reasoning. \\n\\nLangChain provides a flexible framework for building applications that leverage large language models. It includes abstractions for things like retrievers, prompts, and chains, which allow you to compose different components together to create powerful workflows. \\n\\nThe documentation on the LangChain website is excellent and covers many common use cases in detail. I'd recommend starting there to get a solid understanding of the core concepts and how to apply them to your specific needs. And of course, feel free to reach out if you have any other questions - I'm always happy to share more insights from my conversations with Harrison.\", type='text')], 'model': 'claude-3-haiku-20240307', 'role': 'assistant', 'stop_reason': 'end_turn', 'stop_sequence': None, 'type': 'message', 'usage': Usage(input_tokens=50, output_tokens=205)})"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "display_data"
"output_type": "execute_result"
}
],
"source": [
@ -297,23 +291,150 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 11,
"id": "26159af7",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' 2 + 2 = 4', additional_kwargs={}, example=False)"
"AIMessage(content='4', response_metadata={'id': 'msg_01T6T3TS6hRCtU8JayN93QEi', 'content': [ContentBlock(text='4', type='text')], 'model': 'claude-3-haiku-20240307', 'role': 'assistant', 'stop_reason': 'end_turn', 'stop_sequence': None, 'type': 'message', 'usage': Usage(input_tokens=28, output_tokens=5)})"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "display_data"
"output_type": "execute_result"
}
],
"source": [
"full_chain.invoke({\"question\": \"whats 2 + 2\"})"
]
},
{
"cell_type": "markdown",
"id": "fa0f589d",
"metadata": {},
"source": [
"# Routing by semantic similarity\n",
"\n",
"One especially useful technique is to use embeddings to route a query to the most relevant prompt. Here's an example."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "a23457d7",
"metadata": {},
"outputs": [],
"source": [
"from langchain.utils.math import cosine_similarity\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"physics_template = \"\"\"You are a very smart physics professor. \\\n",
"You are great at answering questions about physics in a concise and easy to understand manner. \\\n",
"When you don't know the answer to a question you admit that you don't know.\n",
"\n",
"Here is a question:\n",
"{query}\"\"\"\n",
"\n",
"math_template = \"\"\"You are a very good mathematician. You are great at answering math questions. \\\n",
"You are so good because you are able to break down hard problems into their component parts, \\\n",
"answer the component parts, and then put them together to answer the broader question.\n",
"\n",
"Here is a question:\n",
"{query}\"\"\"\n",
"\n",
"embeddings = OpenAIEmbeddings()\n",
"prompt_templates = [physics_template, math_template]\n",
"prompt_embeddings = embeddings.embed_documents(prompt_templates)\n",
"\n",
"\n",
"def prompt_router(input):\n",
" query_embedding = embeddings.embed_query(input[\"query\"])\n",
" similarity = cosine_similarity([query_embedding], prompt_embeddings)[0]\n",
" most_similar = prompt_templates[similarity.argmax()]\n",
" print(\"Using MATH\" if most_similar == math_template else \"Using PHYSICS\")\n",
" return PromptTemplate.from_template(most_similar)\n",
"\n",
"\n",
"chain = (\n",
" {\"query\": RunnablePassthrough()}\n",
" | RunnableLambda(prompt_router)\n",
" | ChatAnthropic(model_name=\"claude-3-haiku-20240307\")\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "664bb851",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Using PHYSICS\n",
"As a physics professor, I would be happy to provide a concise and easy-to-understand explanation of what a black hole is.\n",
"\n",
"A black hole is an incredibly dense region of space-time where the gravitational pull is so strong that nothing, not even light, can escape from it. This means that if you were to get too close to a black hole, you would be pulled in and crushed by the intense gravitational forces.\n",
"\n",
"The formation of a black hole occurs when a massive star, much larger than our Sun, reaches the end of its life and collapses in on itself. This collapse causes the matter to become extremely dense, and the gravitational force becomes so strong that it creates a point of no return, known as the event horizon.\n",
"\n",
"Beyond the event horizon, the laws of physics as we know them break down, and the intense gravitational forces create a singularity, which is a point of infinite density and curvature in space-time.\n",
"\n",
"Black holes are fascinating and mysterious objects, and there is still much to be learned about their properties and behavior. If I were unsure about any specific details or aspects of black holes, I would readily admit that I do not have a complete understanding and would encourage further research and investigation.\n"
]
}
],
"source": [
"print(chain.invoke(\"What's a black hole\"))"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "df34e469",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Using MATH\n",
"A path integral is a powerful mathematical concept in physics, particularly in the field of quantum mechanics. It was developed by the renowned physicist Richard Feynman as an alternative formulation of quantum mechanics.\n",
"\n",
"In a path integral, instead of considering a single, definite path that a particle might take from one point to another, as in classical mechanics, the particle is considered to take all possible paths simultaneously. Each path is assigned a complex-valued weight, and the total probability amplitude for the particle to go from one point to another is calculated by summing (integrating) over all possible paths.\n",
"\n",
"The key ideas behind the path integral formulation are:\n",
"\n",
"1. Superposition principle: In quantum mechanics, particles can exist in a superposition of multiple states or paths simultaneously.\n",
"\n",
"2. Probability amplitude: The probability amplitude for a particle to go from one point to another is calculated by summing the complex-valued weights of all possible paths.\n",
"\n",
"3. Weighting of paths: Each path is assigned a weight based on the action (the time integral of the Lagrangian) along that path. Paths with lower action have a greater weight.\n",
"\n",
"4. Feynman's approach: Feynman developed the path integral formulation as an alternative to the traditional wave function approach in quantum mechanics, providing a more intuitive and conceptual understanding of quantum phenomena.\n",
"\n",
"The path integral approach is particularly useful in quantum field theory, where it provides a powerful framework for calculating transition probabilities and understanding the behavior of quantum systems. It has also found applications in various areas of physics, such as condensed matter, statistical mechanics, and even in finance (the path integral approach to option pricing).\n",
"\n",
"The mathematical construction of the path integral involves the use of advanced concepts from functional analysis and measure theory, making it a powerful and sophisticated tool in the physicist's arsenal.\n"
]
}
],
"source": [
"print(chain.invoke(\"What's a path integral\"))"
]
},
{
"cell_type": "markdown",
"id": "927b7498",
"metadata": {},
"source": []
}
],
"metadata": {
@ -332,7 +453,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.5"
}
},
"nbformat": 4,

@ -7,27 +7,27 @@ sidebar_class_name: hidden
LangChain Expression Language, or LCEL, is a declarative way to easily compose chains together.
LCEL was designed from day 1 to **support putting prototypes in production, with no code changes**, from the simplest “prompt + LLM” chain to the most complex chains (we've seen folks successfully run LCEL chains with 100s of steps in production). To highlight a few of the reasons you might want to use LCEL:
**Streaming support**
[**First-class streaming support**](/docs/expression_language/streaming)
When you build your chains with LCEL you get the best possible time-to-first-token (time elapsed until the first chunk of output comes out). For some chains this means, e.g., that we stream tokens straight from an LLM to a streaming output parser, and you get back parsed, incremental chunks of output at the same rate as the LLM provider outputs the raw tokens.
**Async support**
[**Async support**](/docs/expression_language/interface)
Any chain built with LCEL can be called both with the synchronous API (e.g. in your Jupyter notebook while prototyping) as well as with the asynchronous API (e.g. in a [LangServe](/docs/langserve) server). This enables using the same code for prototypes and in production, with great performance, and the ability to handle many concurrent requests in the same server.
**Optimized parallel execution**
[**Optimized parallel execution**](/docs/expression_language/primitives/parallel)
Whenever your LCEL chains have steps that can be executed in parallel (e.g. if you fetch documents from multiple retrievers) we automatically do it, both in the sync and the async interfaces, for the smallest possible latency.
**Retries and fallbacks**
[**Retries and fallbacks**](/docs/guides/productionization/fallbacks)
Configure retries and fallbacks for any part of your LCEL chain. This is a great way to make your chains more reliable at scale. We're currently working on adding streaming support for retries/fallbacks, so you can get the added reliability without any latency cost.
**Access intermediate results**
[**Access intermediate results**](/docs/expression_language/interface#async-stream-events-beta)
For more complex chains it's often very useful to access the results of intermediate steps even before the final output is produced. This can be used to let end-users know something is happening, or even just to debug your chain. You can stream intermediate results, and it's available on every [LangServe](/docs/langserve) server.
**Input and output schemas**
[**Input and output schemas**](/docs/expression_language/interface#input-schema)
Input and output schemas give every LCEL chain Pydantic and JSONSchema schemas inferred from the structure of your chain. This can be used for validation of inputs and outputs, and is an integral part of LangServe.
**Seamless LangSmith tracing integration**
[**Seamless LangSmith tracing**](/docs/langsmith)
As your chains get more and more complex, it becomes increasingly important to understand what exactly is happening at every step.
With LCEL, **all** steps are automatically logged to [LangSmith](/docs/langsmith/) for maximum observability and debuggability.
**Seamless LangServe deployment integration**
[**Seamless LangServe deployment**](/docs/langserve)
Any chain created with LCEL can be easily deployed using [LangServe](/docs/langserve).
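For example, here is a minimal sketch of a chain that gets all of the above for free (assuming `langchain-openai` is installed and an OpenAI API key is configured):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Compose the chain declaratively with the | operator
prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
chain = prompt | ChatOpenAI() | StrOutputParser()

chain.invoke({"topic": "bears"})  # synchronous call

# Streaming: parsed output chunks arrive as the model emits tokens
for chunk in chain.stream({"topic": "bears"}):
    print(chunk, end="", flush=True)

# Async: the same chain exposes ainvoke/astream, e.g. for use in a LangServe server
# await chain.ainvoke({"topic": "bears"})
```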

@ -7,7 +7,7 @@
"source": [
"---\n",
"sidebar_position: 1\n",
"title: Interface\n",
"title: Runnable interface\n",
"---"
]
},
@ -16,7 +16,8 @@
"id": "9a9acd2e",
"metadata": {},
"source": [
"To make it as easy as possible to create custom chains, we've implemented a [\"Runnable\"](https://api.python.langchain.com/en/stable/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable) protocol. The `Runnable` protocol is implemented for most components. \n",
"To make it as easy as possible to create custom chains, we've implemented a [\"Runnable\"](https://api.python.langchain.com/en/stable/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable) protocol. Many LangChain components implement the `Runnable` protocol, including chat models, LLMs, output parsers, retrievers, prompt templates, and more. There are also several useful primitives for working with runnables, which you can read about [in this section](/docs/expression_language/primitives).\n",
"\n",
"This is a standard interface, which makes it easy to define custom chains as well as invoke them in a standard way. \n",
"The standard interface includes:\n",
"\n",
@ -52,9 +53,11 @@
]
},
{
"cell_type": "raw",
"cell_type": "code",
"execution_count": null,
"id": "57768739",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-core langchain-community langchain-openai"
]

@ -0,0 +1,180 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 6\n",
"title: \"Assign: Add values to state\"\n",
"keywords: [RunnablePassthrough, assign, LCEL]\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Adding values to chain state\n",
"\n",
"The `RunnablePassthrough.assign(...)` static method takes an input value and adds the extra arguments passed to the assign function.\n",
"\n",
"This is useful when additively creating a dictionary to use as input to a later step, which is a common LCEL pattern.\n",
"\n",
"Here's an example:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[33mWARNING: You are using pip version 22.0.4; however, version 24.0 is available.\n",
"You should consider upgrading via the '/Users/jacoblee/.pyenv/versions/3.10.5/bin/python -m pip install --upgrade pip' command.\u001b[0m\u001b[33m\n",
"\u001b[0mNote: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'extra': {'num': 1, 'mult': 3}, 'modified': 2}"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.runnables import RunnableParallel, RunnablePassthrough\n",
"\n",
"runnable = RunnableParallel(\n",
" extra=RunnablePassthrough.assign(mult=lambda x: x[\"num\"] * 3),\n",
" modified=lambda x: x[\"num\"] + 1,\n",
")\n",
"\n",
"runnable.invoke({\"num\": 1})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's break down what's happening here.\n",
"\n",
"- The input to the chain is `{\"num\": 1}`. This is passed into a `RunnableParallel`, which invokes the runnables it is passed in parallel with that input.\n",
"- The value under the `extra` key is invoked. `RunnablePassthrough.assign()` keeps the original keys in the input dict (`{\"num\": 1}`), and assigns a new key called `mult`. The value is `lambda x: x[\"num\"] * 3)`, which is `3`. Thus, the result is `{\"num\": 1, \"mult\": 3}`.\n",
"- `{\"num\": 1, \"mult\": 3}` is returned to the `RunnableParallel` call, and is set as the value to the key `extra`.\n",
"- At the same time, the `modified` key is called. The result is `2`, since the lambda extracts a key called `\"num\"` from its input and adds one.\n",
"\n",
"Thus, the result is `{'extra': {'num': 1, 'mult': 3}, 'modified': 2}`.\n",
"\n",
"## Streaming\n",
"\n",
"One nice feature of this method is that it allows values to pass through as soon as they are available. To show this off, we'll use `RunnablePassthrough.assign()` to immediately return source docs in a retrieval chain:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'question': 'where did harrison work?'}\n",
"{'context': [Document(page_content='harrison worked at kensho')]}\n",
"{'output': ''}\n",
"{'output': 'H'}\n",
"{'output': 'arrison'}\n",
"{'output': ' worked'}\n",
"{'output': ' at'}\n",
"{'output': ' Kens'}\n",
"{'output': 'ho'}\n",
"{'output': '.'}\n",
"{'output': ''}\n"
]
}
],
"source": [
"from langchain_community.vectorstores import FAISS\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
"\n",
"vectorstore = FAISS.from_texts(\n",
" [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n",
")\n",
"retriever = vectorstore.as_retriever()\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"model = ChatOpenAI()\n",
"\n",
"generation_chain = prompt | model | StrOutputParser()\n",
"\n",
"retrieval_chain = {\n",
" \"context\": retriever,\n",
" \"question\": RunnablePassthrough(),\n",
"} | RunnablePassthrough.assign(output=generation_chain)\n",
"\n",
"stream = retrieval_chain.stream(\"where did harrison work?\")\n",
"\n",
"for chunk in stream:\n",
" print(chunk)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see that the first chunk contains the original `\"question\"` since that is immediately available. The second chunk contains `\"context\"` since the retriever finishes second. Finally, the output from the `generation_chain` streams in chunks as soon as it is available."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

@ -1,13 +1,25 @@
{
"cells": [
{
"cell_type": "raw",
"id": "fe63ffaf",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 2\n",
"title: \"Binding: Attach runtime args\"\n",
"keywords: [RunnableBinding, LCEL]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "711752cb-4f15-42a3-9838-a0c67f397771",
"metadata": {},
"source": [
"# Bind runtime args\n",
"# Binding: Attach runtime args\n",
"\n",
"Sometimes we want to invoke a Runnable within a Runnable sequence with constant arguments that are not part of the output of the preceding Runnable in the sequence, and which are not part of the user input. We can use `Runnable.bind()` to easily pass these arguments in.\n",
"Sometimes we want to invoke a Runnable within a Runnable sequence with constant arguments that are not part of the output of the preceding Runnable in the sequence, and which are not part of the user input. We can use `Runnable.bind()` to pass these arguments in.\n",
"\n",
"Suppose we have a simple prompt + model sequence:"
]

@ -1,5 +1,17 @@
{
"cells": [
{
"cell_type": "raw",
"id": "9ede5870",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 7\n",
"title: \"Configure runtime chain internals\"\n",
"keywords: [ConfigurableField, configurable_fields, ConfigurableAlternatives, configurable_alternatives, LCEL]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "39eaf61b",

@ -1,52 +1,207 @@
{
"cells": [
{
"cell_type": "raw",
"id": "ce0e08fd",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 3\n",
"title: \"Lambda: Run custom functions\"\n",
"keywords: [RunnableLambda, LCEL]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "fbc4bf6e",
"metadata": {},
"source": [
"# Stream custom generator functions\n",
"# Run custom functions\n",
"\n",
"You can use generator functions (ie. functions that use the `yield` keyword, and behave like iterators) in a LCEL pipeline.\n",
"You can use arbitrary functions in the pipeline.\n",
"\n",
"The signature of these generators should be `Iterator[Input] -> Iterator[Output]`. Or for async generators: `AsyncIterator[Input] -> AsyncIterator[Output]`.\n",
"Note that all inputs to these functions need to be a SINGLE argument. If you have a function that accepts multiple arguments, you should write a wrapper that accepts a single input and unpacks it into multiple argument."
]
},
{
"cell_type": "raw",
"id": "9a5fe916",
"metadata": {},
"source": [
"%pip install --upgrade --quiet langchain langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "6bb221b3",
"metadata": {},
"outputs": [],
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnableLambda\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"\n",
"def length_function(text):\n",
" return len(text)\n",
"\n",
"\n",
"def _multiple_length_function(text1, text2):\n",
" return len(text1) * len(text2)\n",
"\n",
"These are useful for:\n",
"- implementing a custom output parser\n",
"- modifying the output of a previous step, while preserving streaming capabilities\n",
"\n",
"Let's implement a custom output parser for comma-separated lists."
"def multiple_length_function(_dict):\n",
" return _multiple_length_function(_dict[\"text1\"], _dict[\"text2\"])\n",
"\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\"what is {a} + {b}\")\n",
"model = ChatOpenAI()\n",
"\n",
"chain1 = prompt | model\n",
"\n",
"chain = (\n",
" {\n",
" \"a\": itemgetter(\"foo\") | RunnableLambda(length_function),\n",
" \"b\": {\"text1\": itemgetter(\"foo\"), \"text2\": itemgetter(\"bar\")}\n",
" | RunnableLambda(multiple_length_function),\n",
" }\n",
" | prompt\n",
" | model\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "5488ec85",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='3 + 9 = 12', response_metadata={'token_usage': {'completion_tokens': 7, 'prompt_tokens': 14, 'total_tokens': 21}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_b28b39ffa8', 'finish_reason': 'stop', 'logprobs': None}, id='run-bd204541-81fd-429a-ad92-dd1913af9b1c-0')"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"foo\": \"bar\", \"bar\": \"gah\"})"
]
},
{
"cell_type": "markdown",
"id": "4728ddd9-914d-42ce-ae9b-72c9ce8ec940",
"metadata": {},
"source": [
"## Sync version"
"## Accepting a Runnable Config\n",
"\n",
"Runnable lambdas can optionally accept a [RunnableConfig](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.config.RunnableConfig.html#langchain_core.runnables.config.RunnableConfig), which they can use to pass callbacks, tags, and other configuration information to nested runs."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 3,
"id": "80b3b5f6-5d58-44b9-807e-cce9a46bf49f",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai"
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnableConfig"
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 4,
"id": "ff0daf0c-49dd-4d21-9772-e5fa133c5f36",
"metadata": {},
"outputs": [],
"source": [
"from typing import Iterator, List\n",
"import json\n",
"\n",
"from langchain.prompts.chat import ChatPromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"def parse_or_fix(text: str, config: RunnableConfig):\n",
" fixing_chain = (\n",
" ChatPromptTemplate.from_template(\n",
" \"Fix the following text:\\n\\n```text\\n{input}\\n```\\nError: {error}\"\n",
" \" Don't narrate, just respond with the fixed data.\"\n",
" )\n",
" | ChatOpenAI()\n",
" | StrOutputParser()\n",
" )\n",
" for _ in range(3):\n",
" try:\n",
" return json.loads(text)\n",
" except Exception as e:\n",
" text = fixing_chain.invoke({\"input\": text, \"error\": e}, config)\n",
" return \"Failed to parse\""
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "1a5e709e-9d75-48c7-bb9c-503251990505",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'foo': 'bar'}\n",
"Tokens Used: 62\n",
"\tPrompt Tokens: 56\n",
"\tCompletion Tokens: 6\n",
"Successful Requests: 1\n",
"Total Cost (USD): $9.6e-05\n"
]
}
],
"source": [
"from langchain_community.callbacks import get_openai_callback\n",
"\n",
"with get_openai_callback() as cb:\n",
" output = RunnableLambda(parse_or_fix).invoke(\n",
" \"{foo: bar}\", {\"tags\": [\"my-tag\"], \"callbacks\": [cb]}\n",
" )\n",
" print(output)\n",
" print(cb)"
]
},
{
"cell_type": "markdown",
"id": "922b48bd",
"metadata": {},
"source": [
"# Streaming\n",
"\n",
"You can use generator functions (ie. functions that use the `yield` keyword, and behave like iterators) in a LCEL pipeline.\n",
"\n",
"The signature of these generators should be `Iterator[Input] -> Iterator[Output]`. Or for async generators: `AsyncIterator[Input] -> AsyncIterator[Output]`.\n",
"\n",
"These are useful for:\n",
"- implementing a custom output parser\n",
"- modifying the output of a previous step, while preserving streaming capabilities\n",
"\n",
"Here's an example of a custom output parser for comma-separated lists:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "29f55c38",
"metadata": {},
"outputs": [],
"source": [
"from typing import Iterator, List\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\n",
" \"Write a comma-separated list of 5 animals similar to: {animal}\"\n",
" \"Write a comma-separated list of 5 animals similar to: {animal}. Do not include numbers\"\n",
")\n",
"model = ChatOpenAI(temperature=0.0)\n",
"\n",
@ -55,7 +210,8 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 7,
"id": "75aa946b",
"metadata": {},
"outputs": [
{
@ -73,7 +229,8 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 8,
"id": "d002a7fe",
"metadata": {},
"outputs": [
{
@ -82,7 +239,7 @@
"'lion, tiger, wolf, gorilla, panda'"
]
},
"execution_count": 3,
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
@ -93,7 +250,8 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 9,
"id": "f08b8a5b",
"metadata": {},
"outputs": [],
"source": [
@ -119,7 +277,8 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 10,
"id": "02e414aa",
"metadata": {},
"outputs": [],
"source": [
@ -128,7 +287,8 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 11,
"id": "7ed8799d",
"metadata": {},
"outputs": [
{
@ -150,16 +310,17 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 12,
"id": "9ea4ddc6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['lion', 'tiger', 'wolf', 'gorilla', 'panda']"
"['lion', 'tiger', 'wolf', 'gorilla', 'elephant']"
]
},
"execution_count": 7,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
@ -170,6 +331,7 @@
},
{
"cell_type": "markdown",
"id": "96e320ed",
"metadata": {},
"source": [
"## Async version"
@ -177,7 +339,8 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 13,
"id": "569dbbef",
"metadata": {},
"outputs": [],
"source": [
@ -204,7 +367,8 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 14,
"id": "7a76b713",
"metadata": {},
"outputs": [
{
@ -226,7 +390,8 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 15,
"id": "3a650482",
"metadata": {},
"outputs": [
{
@ -235,7 +400,7 @@
"['lion', 'tiger', 'wolf', 'gorilla', 'panda']"
]
},
"execution_count": 10,
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
@ -261,9 +426,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
"version": "3.10.5"
}
},
"nbformat": 4,
"nbformat_minor": 4
"nbformat_minor": 5
}

@ -0,0 +1,15 @@
---
sidebar_class_name: hidden
---
# Primitives
In addition to various [components](/docs/modules) that are usable with LCEL, LangChain also includes various primitives
that help pass around and format data, bind arguments, invoke custom logic, and more.
This section goes into greater depth on where and how some of these components are useful.
import DocCardList from "@theme/DocCardList";
import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items.filter((item) => item.href !== "/docs/expression_language/primitives/")} />

@ -6,8 +6,8 @@
"metadata": {},
"source": [
"---\n",
"sidebar_position: 0\n",
"title: \"RunnableParallel: Manipulating data\"\n",
"sidebar_position: 1\n",
"title: \"Parallel: Format data\"\n",
"keywords: [RunnableParallel, RunnableMap, LCEL]\n",
"---"
]
@ -17,13 +17,13 @@
"id": "b022ab74-794d-4c54-ad47-ff9549ddb9d2",
"metadata": {},
"source": [
"# Manipulating inputs & output\n",
"# Formatting inputs & output\n",
"\n",
"RunnableParallel can be useful for manipulating the output of one Runnable to match the input format of the next Runnable in a sequence.\n",
"The `RunnableParallel` primitive is essentially a dict whose values are runnables (or things that can be coerced to runnables, like functions). It runs all of its values in parallel, and each value is called with the overall input of the `RunnableParallel`. The final return value is a dict with the results of each value under its appropriate key.\n",
"\n",
"Here the input to prompt is expected to be a map with keys \"context\" and \"question\". The user input is just the question. So we need to get the context using our retriever and passthrough the user input under the \"question\" key.\n",
"It is useful for parallelizing operations, but can also be useful for manipulating the output of one Runnable to match the input format of the next Runnable in a sequence.\n",
"\n",
"\n"
"Here the input to prompt is expected to be a map with keys \"context\" and \"question\". The user input is just the question. So we need to get the context using our retriever and passthrough the user input under the \"question\" key.\n"
]
},
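{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch (not part of the original notebook; the key names are illustrative), a `RunnableParallel` built from two plain functions runs both on the same input and collects the results under their keys:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnableParallel\n",
"\n",
"# Both values receive the overall input {\"num\": 3}\n",
"runnable = RunnableParallel(\n",
"    doubled=lambda x: x[\"num\"] * 2,\n",
"    squared=lambda x: x[\"num\"] ** 2,\n",
")\n",
"\n",
"runnable.invoke({\"num\": 3})  # expected: {'doubled': 6, 'squared': 9}"
]
},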
{

@ -1,14 +1,14 @@
{
"cells": [
{
"cell_type": "markdown",
"cell_type": "raw",
"id": "d35de667-0352-4bfb-a890-cebe7f676fe7",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 1\n",
"title: \"RunnablePassthrough: Passing data through\"\n",
"keywords: [RunnablePassthrough, RunnableParallel, LCEL]\n",
"sidebar_position: 5\n",
"title: \"Passthrough: Pass through inputs\"\n",
"keywords: [RunnablePassthrough, LCEL]\n",
"---"
]
},
@ -19,11 +19,7 @@
"source": [
"# Passing data through\n",
"\n",
"RunnablePassthrough allows to pass inputs unchanged or with the addition of extra keys. This typically is used in conjuction with RunnableParallel to assign data to a new key in the map. \n",
"\n",
"RunnablePassthrough() called on it's own, will simply take the input and pass it through. \n",
"\n",
"RunnablePassthrough called with assign (`RunnablePassthrough.assign(...)`) will take the input, and will add the extra arguments passed to the assign function. \n",
"RunnablePassthrough on its own allows you to pass inputs unchanged. This typically is used in conjuction with RunnableParallel to pass data through to a new key in the map. \n",
"\n",
"See the example below:"
]
@ -60,7 +56,6 @@
"\n",
"runnable = RunnableParallel(\n",
" passed=RunnablePassthrough(),\n",
" extra=RunnablePassthrough.assign(mult=lambda x: x[\"num\"] * 3),\n",
" modified=lambda x: x[\"num\"] + 1,\n",
")\n",
"\n",
@ -74,9 +69,7 @@
"source": [
"As seen above, `passed` key was called with `RunnablePassthrough()` and so it simply passed on `{'num': 1}`. \n",
"\n",
"In the second line, we used `RunnablePastshrough.assign` with a lambda that multiplies the numerical value by 3. In this cased, `extra` was set with `{'num': 1, 'mult': 3}` which is the original value with the `mult` key added. \n",
"\n",
"Finally, we also set a third key in the map with `modified` which uses a lambda to set a single value adding 1 to the num, which resulted in `modified` key with the value of `2`."
"We also set a second key in the map with `modified`. This uses a lambda to set a single value adding 1 to the num, which resulted in `modified` key with the value of `2`."
]
},
{
@ -86,7 +79,7 @@
"source": [
"## Retrieval Example\n",
"\n",
"In the example below, we see a use case where we use RunnablePassthrough along with RunnableMap. "
"In the example below, we see a use case where we use `RunnablePassthrough` along with `RunnableParallel`. "
]
},
{

@ -0,0 +1,243 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 0\n",
"title: \"Sequences: Chaining runnables\"\n",
"keywords: [Runnable, Runnables, LCEL]\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Chaining runnables\n",
"\n",
"One key advantage of the `Runnable` interface is that any two runnables can be \"chained\" together into sequences. The output of the previous runnable's `.invoke()` call is passed as input to the next runnable. This can be done using the pipe operator (`|`), or the more explicit `.pipe()` method, which does the same thing. The resulting `RunnableSequence` is itself a runnable, which means it can be invoked, streamed, or piped just like any other runnable.\n",
"\n",
"## The pipe operator\n",
"\n",
"To show off how this works, let's go through an example. We'll walk through a common pattern in LangChain: using a [prompt template](/docs/modules/model_io/prompts/) to format input into a [chat model](/docs/modules/model_io/chat/), and finally converting the chat message output into a string with an [output parser](/docs/modules/model_io/output_parsers/)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-anthropic"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_anthropic import ChatAnthropic\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\"tell me a joke about {topic}\")\n",
"model = ChatAnthropic(model_name=\"claude-3-haiku-20240307\")\n",
"\n",
"chain = prompt | model | StrOutputParser()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Prompts and models are both runnable, and the output type from the prompt call is the same as the input type of the chat model, so we can chain them together. We can then invoke the resulting sequence like any other runnable:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Here's a bear joke for you:\\n\\nWhy don't bears wear socks? \\nBecause they have bear feet!\\n\\nHow's that? I tried to keep it light and silly. Bears can make for some fun puns and jokes. Let me know if you'd like to hear another one!\""
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"topic\": \"bears\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Coercion\n",
"\n",
"We can even combine this chain with more runnables to create another chain. This may involve some input/output formatting using other types of runnables, depending on the required inputs and outputs of the chain components.\n",
"\n",
"For example, let's say we wanted to compose the joke generating chain with another chain that evaluates whether or not the generated joke was funny.\n",
"\n",
"We would need to be careful with how we format the input into the next chain. In the below example, the dict in the chain is automatically parsed and converted into a [`RunnableParallel`](/docs/expression_language/primitives/parallel), which runs all of its values in parallel and returns a dict with the results.\n",
"\n",
"This happens to be the same format the next prompt template expects. Here it is in action:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"\n",
"analysis_prompt = ChatPromptTemplate.from_template(\"is this a funny joke? {joke}\")\n",
"\n",
"composed_chain = {\"joke\": chain} | analysis_prompt | model | StrOutputParser()"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"That's a pretty classic and well-known bear pun joke. Whether it's considered funny is quite subjective, as humor is very personal. Some people may find that type of pun-based joke amusing, while others may not find it that humorous. Ultimately, the funniness of a joke is in the eye (or ear) of the beholder. If you enjoyed the joke and got a chuckle out of it, then that's what matters most.\""
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"composed_chain.invoke({\"topic\": \"bears\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Functions will also be coerced into runnables, so you can add custom logic to your chains too. The below chain results in the same logical flow as before:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"composed_chain_with_lambda = (\n",
" chain\n",
" | (lambda input: {\"joke\": input})\n",
" | analysis_prompt\n",
" | model\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'I appreciate the effort, but I have to be honest - I didn\\'t find that joke particularly funny. Beet-themed puns can be quite hit-or-miss, and this one falls more on the \"miss\" side for me. The premise is a bit too straightforward and predictable. While I can see the logic behind it, the punchline just doesn\\'t pack much of a comedic punch. \\n\\nThat said, I do admire your willingness to explore puns and wordplay around vegetables. Cultivating a good sense of humor takes practice, and not every joke is going to land. The important thing is to keep experimenting and finding what works. Maybe try for a more unexpected or creative twist on beet-related humor next time. But thanks for sharing - I always appreciate when humans test out jokes on me, even if they don\\'t always make me laugh out loud.'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"composed_chain_with_lambda.invoke({\"topic\": \"beets\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"However, keep in mind that using functions like this may interfere with operations like streaming. See [this section](/docs/expression_language/primitives/functions) for more information."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## The `.pipe()` method\n",
"\n",
"We could also compose the same sequence using the `.pipe()` method. Here's what that looks like:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnableParallel\n",
"\n",
"composed_chain_with_pipe = (\n",
" RunnableParallel({\"joke\": chain})\n",
" .pipe(analysis_prompt)\n",
" .pipe(model)\n",
" .pipe(StrOutputParser())\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'That\\'s a pretty good Battlestar Galactica-themed pun! I appreciated the clever play on words with \"Centurion\" and \"center on.\" It\\'s the kind of nerdy, science fiction-inspired humor that fans of the show would likely enjoy. The joke is clever and demonstrates a good understanding of the Battlestar Galactica universe. I\\'d be curious to hear any other Battlestar-related jokes you might have up your sleeve. As long as they don\\'t reproduce copyrighted material, I\\'m happy to provide my thoughts on the humor and appeal for fans of the show.'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"composed_chain_with_pipe.invoke({\"topic\": \"battlestar galactica\"})"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

@ -201,13 +201,23 @@
" print(chunk, end=\"|\", flush=True)"
]
},
{
"cell_type": "markdown",
"id": "868bc412",
"metadata": {},
"source": [
"You might notice above that `parser` actually doesn't block the streaming output from the model, and instead processes each chunk individually. Many of the [LCEL primitives](/docs/expression_language/primitives) also support this kind of transform-style passthrough streaming, which can be very convenient when constructing apps.\n",
"\n",
"Certain runnables, like [prompt templates](/docs/modules/model_io/prompts) and [chat models](/docs/modules/model_io/chat), cannot process individual chunks and instead aggregate all previous steps. This will interrupt the streaming process. Custom functions can be [designed to return generators](/docs/expression_language/primitives/functions#streaming), which"
]
},
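{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch (assuming the `model` defined earlier in this notebook), a generator function piped into the chain can transform each chunk as it arrives without breaking streaming:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from typing import Iterator\n",
"\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"\n",
"\n",
"def upper_case_chunks(chunks: Iterator[str]) -> Iterator[str]:\n",
"    # Yield each incoming chunk immediately, so streaming is preserved\n",
"    for chunk in chunks:\n",
"        yield chunk.upper()\n",
"\n",
"\n",
"upper_chain = model | StrOutputParser() | upper_case_chunks\n",
"\n",
"for chunk in upper_chain.stream(\"hello. tell me something about yourself\"):\n",
"    print(chunk, end=\"|\", flush=True)"
]
},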
{
"cell_type": "markdown",
"id": "1b399fb4-5e3c-4581-9570-6df9b42b623d",
"metadata": {},
"source": [
":::{.callout-note}\n",
"You do not have to use the `LangChain Expression Language` to use LangChain and can instead rely on a standard **imperative** programming approach by\n",
"If the above functionality is not relevant to what you're building, you do not have to use the `LangChain Expression Language` to use LangChain and can instead rely on a standard **imperative** programming approach by\n",
calling `invoke`, `batch` or `stream` on each component individually, assigning the results to variables and then using them downstream as you see fit.\n",
"\n",
"If that works for your needs, then that's fine by us 👌!\n",


@ -1,3 +1,7 @@
---
sidebar_position: 2
---
# Installation
## Official release
@ -29,18 +33,18 @@ If you want to install from source, you can do so by cloning the repo and be sur
pip install -e .
```
## LangChain community
The `langchain-community` package contains third-party integrations. It is automatically installed by `langchain`, but can also be used separately. Install with:
## LangChain core
The `langchain-core` package contains base abstractions that the rest of the LangChain ecosystem uses, along with the LangChain Expression Language. It is automatically installed by `langchain`, but can also be used separately. Install with:
```bash
pip install langchain-community
pip install langchain-core
```
## LangChain core
The `langchain-core` package contains base abstractions that the rest of the LangChain ecosystem uses, along with the LangChain Expression Language. It is automatically installed by `langchain`, but can also be used separately. Install with:
## LangChain community
The `langchain-community` package contains third-party integrations. It is automatically installed by `langchain`, but can also be used separately. Install with:
```bash
pip install langchain-core
pip install langchain-community
```
## LangChain experimental
@ -51,6 +55,13 @@ Install with:
pip install langchain-experimental
```
## LangGraph
`langgraph` is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain.
Install with:
```bash
pip install langgraph
```
## LangServe
LangServe helps developers deploy LangChain runnables and chains as a REST API.
LangServe is automatically installed by the LangChain CLI.

@ -1,18 +1,16 @@
---
sidebar_position: 0
sidebar_class_name: hidden
---
# Introduction
**LangChain** is a framework for developing applications powered by language models. It enables applications that:
- **Are context-aware**: connect a language model to sources of context (prompt instructions, few shot examples, content to ground its response in, etc.)
- **Reason**: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.)
**LangChain** is a framework for developing applications powered by large language models (LLMs).
This framework consists of several parts.
- **LangChain Libraries**: The Python and JavaScript libraries. Contains interfaces and integrations for a myriad of components, a basic run time for combining these components into chains and agents, and off-the-shelf implementations of chains and agents.
- **[LangChain Templates](/docs/templates)**: A collection of easily deployable reference architectures for a wide variety of tasks.
- **[LangServe](/docs/langserve)**: A library for deploying LangChain chains as a REST API.
- **[LangSmith](/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
LangChain simplifies every stage of the LLM application lifecycle:
- **Development**: Build your applications using LangChain's open-source [building blocks](/docs/expression_language/) and [components](/docs/modules/). Hit the ground running using [third-party integrations](/docs/integrations/platforms/) and [Templates](/docs/templates).
- **Productionization**: Use [LangSmith](/docs/langsmith/) to inspect, monitor and evaluate your chains, so that you can continuously optimize and deploy with confidence.
- **Deployment**: Turn any chain into an API with [LangServe](/docs/langserve).
import ThemedImage from '@theme/ThemedImage';
@ -25,31 +23,24 @@ import ThemedImage from '@theme/ThemedImage';
title="LangChain Framework Overview"
/>
Together, these products simplify the entire application lifecycle:
- **Develop**: Write your applications in LangChain/LangChain.js. Hit the ground running using Templates for reference.
- **Productionize**: Use LangSmith to inspect, test and monitor your chains, so that you can constantly improve and deploy with confidence.
- **Deploy**: Turn any chain into an API with LangServe.
Concretely, the framework consists of the following open-source libraries:
## LangChain Libraries
The main value props of the LangChain packages are:
1. **Components**: composable tools and integrations for working with language models. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not
2. **Off-the-shelf chains**: built-in assemblages of components for accomplishing higher-level tasks
Off-the-shelf chains make it easy to get started. Components make it easy to customize existing chains and build new ones.
The LangChain libraries themselves are made up of several different packages.
- **`langchain-core`**: Base abstractions and LangChain Expression Language.
- **`langchain-community`**: Third party integrations.
- Partner packages (e.g. **`langchain-openai`**, **`langchain-anthropic`**, etc.): Some integrations have been further split into their own lightweight packages that only depend on **`langchain-core`**.
- **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
- **[langgraph](/docs/langgraph)**: Build robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
- **[langserve](/docs/langserve)**: Deploy LangChain chains as REST APIs.
## Get started
The broader ecosystem includes:
[Heres](/docs/get_started/installation) how to install LangChain, set up your environment, and start building.
- **[LangSmith](/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor LLM applications and seamlessly integrates with LangChain.
## Get started
We recommend following our [Quickstart](/docs/get_started/quickstart) guide to familiarize yourself with the framework by building your first LangChain application.
Read up on our [Security](/docs/security) best practices to make sure you're developing safely with LangChain.
[See here](/docs/get_started/installation) for instructions on how to install LangChain, set up your environment, and start building.
:::note
@ -57,48 +48,53 @@ These docs focus on the Python LangChain library. [Head here](https://js.langcha
:::
## LangChain Expression Language (LCEL)
## Use cases
LCEL is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains.
If you're looking to build something specific or are more of a hands-on learner, check out our [use-cases](/docs/use_cases).
They're walkthroughs and techniques for common end-to-end tasks, such as:
- **[Overview](/docs/expression_language/)**: LCEL and its benefits
- **[Interface](/docs/expression_language/interface)**: The standard interface for LCEL objects
- **[How-to](/docs/expression_language/how_to)**: Key features of LCEL
- **[Cookbook](/docs/expression_language/cookbook)**: Example code for accomplishing common tasks
- [Question answering with RAG](/docs/use_cases/question_answering/)
- [Extracting structured output](/docs/use_cases/extraction/)
- [Chatbots](/docs/use_cases/chatbots/)
- and more!
## Modules
## Expression Language
LangChain provides standard, extendable interfaces and integrations for the following modules:
LangChain Expression Language (LCEL) is the foundation of many of LangChain's components, and is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains.
#### [Model I/O](/docs/modules/model_io/)
Interface with language models
- **[Get started](/docs/expression_language/)**: LCEL and its benefits
- **[Runnable interface](/docs/expression_language/interface)**: The standard interface for LCEL objects
- **[Primitives](/docs/expression_language/primitives)**: More on the primitives LCEL includes
- and more!
#### [Retrieval](/docs/modules/data_connection/)
Interface with application-specific data
## Ecosystem
#### [Agents](/docs/modules/agents/)
Let models choose which tools to use given high-level directives
### [🦜🛠️ LangSmith](/docs/langsmith)
Trace and evaluate your language model applications and intelligent agents to help you move from prototype to production.
### [🦜🕸️ LangGraph](/docs/langgraph)
Build stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain primitives.
## Examples, ecosystem, and resources
### [🦜🏓 LangServe](/docs/langserve)
Deploy LangChain runnables and chains as REST APIs.
### [Use cases](/docs/use_cases/question_answering/)
Walkthroughs and techniques for common end-to-end use cases, like:
- [Document question answering](/docs/use_cases/question_answering/)
- [Chatbots](/docs/use_cases/chatbots/)
- [Analyzing structured data](/docs/use_cases/sql/)
- and much more...
## [Security](/docs/security)
Read up on our [Security](/docs/security) best practices to make sure you're developing safely with LangChain.
## Additional resources
### [Components](/docs/modules/)
LangChain provides standard, extendable interfaces and integrations for many different components, including:
### [Integrations](/docs/integrations/providers/)
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/integrations/providers/).
### [Guides](../guides/debugging.md)
### [Guides](/docs/guides/)
Best practices for developing with LangChain.
### [API reference](https://api.python.langchain.com)
Head to the reference section for full documentation of all classes and methods in the LangChain and LangChain Experimental Python packages.
### [Developer's guide](/docs/contributing)
### [Contributing](/docs/contributing)
Check out the developer's guide for guidelines on contributing and help getting your dev environment set up.

@ -1,3 +1,7 @@
---
sidebar_position: 1
---
# Quickstart
In this quickstart we'll show you how to:

@ -8,11 +8,11 @@ Here are a few different tools and functionalities to aid in debugging.
## Tracing
Platforms with tracing capabilities like [LangSmith](/docs/langsmith/) and [WandB](/docs/integrations/providers/wandb_tracing) are the most comprehensive solutions for debugging. These platforms make it easy to not only log and visualize LLM apps, but also to actively debug, test and refine them.
Platforms with tracing capabilities like [LangSmith](/docs/langsmith/) are the most comprehensive solutions for debugging. These platforms make it easy to not only log and visualize LLM apps, but also to actively debug, test and refine them.
For anyone building production-grade LLM applications, we highly recommend using a platform like this.
When building production-grade LLM applications, platforms like this are essential.
![Screenshot of the LangSmith debugging interface showing an AgentExecutor run with input and output details, and a run tree visualization.](../../static/img/run_details.png "LangSmith Debugging Interface")
![Screenshot of the LangSmith debugging interface showing an AgentExecutor run with input and output details, and a run tree visualization.](../../../static/img/run_details.png "LangSmith Debugging Interface")
## `set_debug` and `set_verbose`
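Below is a minimal sketch of how these globals are toggled (assuming the `set_debug` and `set_verbose` helpers exported from `langchain.globals`):

```python
from langchain.globals import set_debug, set_verbose

set_debug(True)    # log full inputs/outputs for every component, including raw payloads
set_verbose(True)  # log important events in a more readable, less noisy form

# ... invoke your chain or agent here and watch the console output ...

set_debug(False)   # turn global debug logging back off
set_verbose(False)
```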

@ -0,0 +1,13 @@
---
sidebar_position: 1
sidebar_class_name: hidden
---
# Development
This section contains guides with general information around building apps with LangChain.
import DocCardList from "@theme/DocCardList";
import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items.filter((item) => item.href !== "/docs/guides/development/")} />

@ -9,7 +9,7 @@
"\n",
"## Use case\n",
"\n",
"The popularity of projects like [PrivateGPT](https://github.com/imartinez/privateGPT), [llama.cpp](https://github.com/ggerganov/llama.cpp), [GPT4All](https://github.com/nomic-ai/gpt4all), and [llamafile](https://github.com/Mozilla-Ocho/llamafile) underscore the demand to run LLMs locally (on your own device).\n",
"The popularity of projects like [PrivateGPT](https://github.com/imartinez/privateGPT), [llama.cpp](https://github.com/ggerganov/llama.cpp), [Ollama](https://github.com/ollama/ollama), [GPT4All](https://github.com/nomic-ai/gpt4all), [llamafile](https://github.com/Mozilla-Ocho/llamafile), and others underscore the demand to run LLMs locally (on your own device).\n",
"\n",
"This has at least two important benefits:\n",
"\n",
@ -32,7 +32,7 @@
"1. `Base model`: What is the base-model and how was it trained?\n",
"2. `Fine-tuning approach`: Was the base-model fine-tuned and, if so, what [set of instructions](https://cameronrwolfe.substack.com/p/beyond-llama-the-power-of-open-llms#%C2%A7alpaca-an-instruction-following-llama-model) was used?\n",
"\n",
"![Image description](../../static/img/OSS_LLM_overview.png)\n",
"![Image description](../../../static/img/OSS_LLM_overview.png)\n",
"\n",
"The relative performance of these models can be assessed using several leaderboards, including:\n",
"\n",
@ -56,7 +56,7 @@
"\n",
"In particular, see [this excellent post](https://finbarr.ca/how-is-llama-cpp-possible/) on the importance of quantization.\n",
"\n",
"![Image description](../../static/img/llama-memory-weights.png)\n",
"![Image description](../../../static/img/llama-memory-weights.png)\n",
"\n",
"With less precision, we radically decrease the memory needed to store the LLM in memory.\n",
"\n",
@ -64,7 +64,7 @@
"\n",
"A Mac M2 Max is 5-6x faster than a M1 for inference due to the larger GPU memory bandwidth.\n",
"\n",
"![Image description](../../static/img/llama_t_put.png)\n",
"![Image description](../../../static/img/llama_t_put.png)\n",
"\n",
"## Quickstart\n",
"\n",

@ -0,0 +1,3 @@
# Guides
This section contains deeper dives into the LangChain framework and how to apply it.

@ -1,283 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "920a3c1a",
"metadata": {},
"source": [
"# Model comparison\n",
"\n",
"Constructing your language model application will likely involved choosing between many different options of prompts, models, and even chains to use. When doing so, you will want to compare these different options on different inputs in an easy, flexible, and intuitive way. \n",
"\n",
"LangChain provides the concept of a ModelLaboratory to test out and try different models."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "12ebae56",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "ab9e95ad",
"metadata": {},
"outputs": [],
"source": [
"from langchain.model_laboratory import ModelLaboratory\n",
"from langchain_community.llms import Cohere, HuggingFaceHub\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_openai import OpenAI"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3dd69cb4",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"# get a new token: https://dashboard.cohere.ai/\n",
"os.environ[\"COHERE_API_KEY\"] = getpass.getpass(\"Cohere API Key:\")\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"Open API Key:\")\n",
"os.environ[\"HUGGINGFACEHUB_API_TOKEN\"] = getpass.getpass(\"Hugging Face API Key:\")"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "32cb94e6",
"metadata": {},
"outputs": [],
"source": [
"llms = [\n",
" OpenAI(temperature=0),\n",
" Cohere(temperature=0),\n",
" HuggingFaceHub(repo_id=\"google/flan-t5-xl\", model_kwargs={\"temperature\": 1}),\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "14cde09d",
"metadata": {},
"outputs": [],
"source": [
"model_lab = ModelLaboratory.from_llms(llms)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "f186c741",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[1mInput:\u001b[0m\n",
"What color is a flamingo?\n",
"\n",
"\u001b[1mOpenAI\u001b[0m\n",
"Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}\n",
"\u001b[36;1m\u001b[1;3m\n",
"\n",
"Flamingos are pink.\u001b[0m\n",
"\n",
"\u001b[1mCohere\u001b[0m\n",
"Params: {'model': 'command-xlarge-20221108', 'max_tokens': 20, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0}\n",
"\u001b[33;1m\u001b[1;3m\n",
"\n",
"Pink\u001b[0m\n",
"\n",
"\u001b[1mHuggingFaceHub\u001b[0m\n",
"Params: {'repo_id': 'google/flan-t5-xl', 'temperature': 1}\n",
"\u001b[38;5;200m\u001b[1;3mpink\u001b[0m\n",
"\n"
]
}
],
"source": [
"model_lab.compare(\"What color is a flamingo?\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "248b652a",
"metadata": {},
"outputs": [],
"source": [
"prompt = PromptTemplate(\n",
" template=\"What is the capital of {state}?\", input_variables=[\"state\"]\n",
")\n",
"model_lab_with_prompt = ModelLaboratory.from_llms(llms, prompt=prompt)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "f64377ac",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[1mInput:\u001b[0m\n",
"New York\n",
"\n",
"\u001b[1mOpenAI\u001b[0m\n",
"Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}\n",
"\u001b[36;1m\u001b[1;3m\n",
"\n",
"The capital of New York is Albany.\u001b[0m\n",
"\n",
"\u001b[1mCohere\u001b[0m\n",
"Params: {'model': 'command-xlarge-20221108', 'max_tokens': 20, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0}\n",
"\u001b[33;1m\u001b[1;3m\n",
"\n",
"The capital of New York is Albany.\u001b[0m\n",
"\n",
"\u001b[1mHuggingFaceHub\u001b[0m\n",
"Params: {'repo_id': 'google/flan-t5-xl', 'temperature': 1}\n",
"\u001b[38;5;200m\u001b[1;3mst john s\u001b[0m\n",
"\n"
]
}
],
"source": [
"model_lab_with_prompt.compare(\"New York\")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "54336dbf",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents.self_ask_with_search.base import SelfAskWithSearchChain\n",
"from langchain_community.utilities import SerpAPIWrapper\n",
"\n",
"open_ai_llm = OpenAI(temperature=0)\n",
"search = SerpAPIWrapper()\n",
"self_ask_with_search_openai = SelfAskWithSearchChain(\n",
" llm=open_ai_llm, search_chain=search, verbose=True\n",
")\n",
"\n",
"cohere_llm = Cohere(temperature=0)\n",
"search = SerpAPIWrapper()\n",
"self_ask_with_search_cohere = SelfAskWithSearchChain(\n",
" llm=cohere_llm, search_chain=search, verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "6a50a9f1",
"metadata": {},
"outputs": [],
"source": [
"chains = [self_ask_with_search_openai, self_ask_with_search_cohere]\n",
"names = [str(open_ai_llm), str(cohere_llm)]"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "d3549e99",
"metadata": {},
"outputs": [],
"source": [
"model_lab = ModelLaboratory(chains, names=names)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "362f7f57",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[1mInput:\u001b[0m\n",
"What is the hometown of the reigning men's U.S. Open champion?\n",
"\n",
"\u001b[1mOpenAI\u001b[0m\n",
"Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}\n",
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"What is the hometown of the reigning men's U.S. Open champion?\n",
"Are follow up questions needed here:\u001b[32;1m\u001b[1;3m Yes.\n",
"Follow up: Who is the reigning men's U.S. Open champion?\u001b[0m\n",
"Intermediate answer: \u001b[33;1m\u001b[1;3mCarlos Alcaraz.\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Follow up: Where is Carlos Alcaraz from?\u001b[0m\n",
"Intermediate answer: \u001b[33;1m\u001b[1;3mEl Palmar, Spain.\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"So the final answer is: El Palmar, Spain\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[36;1m\u001b[1;3m\n",
"So the final answer is: El Palmar, Spain\u001b[0m\n",
"\n",
"\u001b[1mCohere\u001b[0m\n",
"Params: {'model': 'command-xlarge-20221108', 'max_tokens': 256, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0}\n",
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"What is the hometown of the reigning men's U.S. Open champion?\n",
"Are follow up questions needed here:\u001b[32;1m\u001b[1;3m Yes.\n",
"Follow up: Who is the reigning men's U.S. Open champion?\u001b[0m\n",
"Intermediate answer: \u001b[33;1m\u001b[1;3mCarlos Alcaraz.\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"So the final answer is:\n",
"\n",
"Carlos Alcaraz\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[33;1m\u001b[1;3m\n",
"So the final answer is:\n",
"\n",
"Carlos Alcaraz\u001b[0m\n",
"\n"
]
}
],
"source": [
"model_lab.compare(\"What is the hometown of the reigning men's U.S. Open champion?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -7,18 +7,19 @@ Building applications with language models involves many moving parts. One of th
The guides in this section review the APIs and functionality LangChain provides to help you better evaluate your applications. Evaluation and testing are both critical when thinking about deploying LLM applications, since production environments require repeatable and useful outcomes.
LangChain offers various types of evaluators to help you measure performance and integrity on diverse data, and we hope to encourage the community to create and share other useful evaluators so everyone can improve. These docs will introduce the evaluator types, how to use them, and provide some examples of their use in real-world scenarios.
These built-in evaluators all integrate smoothly with [LangSmith](/docs/langsmith), and allow you to create feedback loops that improve your application over time and prevent regressions.
Each evaluator type in LangChain comes with ready-to-use implementations and an extensible API that allows for customization according to your unique requirements. Here are some of the types of evaluators we offer:
- [String Evaluators](/docs/guides/evaluation/string/): These evaluators assess the predicted string for a given input, usually comparing it against a reference string.
- [Trajectory Evaluators](/docs/guides/evaluation/trajectory/): These are used to evaluate the entire trajectory of agent actions.
- [Comparison Evaluators](/docs/guides/evaluation/comparison/): These evaluators are designed to compare predictions from two runs on a common input.
- [String Evaluators](/docs/guides/productionization/evaluation/string/): These evaluators assess the predicted string for a given input, usually comparing it against a reference string.
- [Trajectory Evaluators](/docs/guides/productionization/evaluation/trajectory/): These are used to evaluate the entire trajectory of agent actions.
- [Comparison Evaluators](/docs/guides/productionization/evaluation/comparison/): These evaluators are designed to compare predictions from two runs on a common input.
These evaluators can be used across various scenarios and can be applied to different chain and LLM implementations in the LangChain library.
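As a quick sketch (assuming the `load_evaluator` entry point in `langchain.evaluation` and an LLM API key in your environment), loading and running a string evaluator looks roughly like this:

```python
from langchain.evaluation import load_evaluator

# A criteria evaluator grades a prediction against a named criterion;
# it uses an LLM under the hood, so a model API key must be configured.
evaluator = load_evaluator("criteria", criteria="conciseness")

result = evaluator.evaluate_strings(
    prediction="Paris is the capital of France.",
    input="What is the capital of France?",
)
print(result)  # a dict including a score and the grader's reasoning
```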
We are also working to share guides and cookbooks that demonstrate how to use these evaluators in real-world scenarios, such as:
- [Chain Comparisons](/docs/guides/evaluation/examples/comparisons): This example uses a comparison evaluator to predict the preferred output. It reviews ways to measure confidence intervals to select statistically significant differences in aggregate preference scores across different models or prompts.
- [Chain Comparisons](/docs/guides/productionization/evaluation/examples/comparisons): This example uses a comparison evaluator to predict the preferred output. It reviews ways to measure confidence intervals to select statistically significant differences in aggregate preference scores across different models or prompts.
## LangSmith Evaluation

@ -0,0 +1,15 @@
---
sidebar_position: 1
sidebar_class_name: hidden
---
# Productionization
After you've developed a prototype of your language model application, the next step is to prepare it for production.
This section contains guides on best practices for making your application production-ready
and keeping it reliable in real-world use.
import DocCardList from "@theme/DocCardList";
import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items.filter((item) => item.href !== "/docs/guides/productionization/")} />

@ -0,0 +1,11 @@
# Privacy & Safety
One of the key concerns with using LLMs is that they may misuse private data or generate harmful or unethical text. This is an area of active research in the field. Here we present some built-in chains inspired by this research, which are intended to make the outputs of LLMs safer.
- [Amazon Comprehend moderation chain](/docs/guides/productionization/safety/amazon_comprehend_chain): Use [Amazon Comprehend](https://aws.amazon.com/comprehend/) to detect and handle Personally Identifiable Information (PII) and toxicity.
- [Constitutional chain](/docs/guides/productionization/safety/constitutional_chain): Prompt the model with a set of principles which should guide the model behavior.
- [Hugging Face prompt injection identification](/docs/guides/productionization/safety/hugging_face_prompt_injection): Detect and handle prompt injection attacks.
- [Layerup Security](/docs/guides/productionization/safety/layerup_security): Easily mask PII & sensitive data, detect and mitigate 10+ LLM-based threat vectors, including PII & sensitive data, prompt injection, hallucination, abuse, and more.
- [Logical Fallacy chain](/docs/guides/productionization/safety/logical_fallacy_chain): Checks the model output against logical fallacies to correct any deviation.
- [Moderation chain](/docs/guides/productionization/safety/moderation): Check if any output text is harmful and flag it.
- [Presidio data anonymization](/docs/guides/productionization/safety/presidio_data_anonymization): Helps to ensure sensitive data is properly managed and governed.

@ -5,9 +5,19 @@
"id": "4927a727-b4c8-453c-8c83-bd87b4fcac14",
"metadata": {},
"source": [
"# Adding moderation\n",
"# Moderation chain\n",
"\n",
"This shows how to add in moderation (or other safeguards) around your LLM application."
"This notebook walks through examples of how to use a moderation chain, and several common ways for doing so. \n",
"Moderation chains are useful for detecting text that could be hateful, violent, etc. This can be useful to apply on both user input, but also on the output of a Language Model. \n",
"Some API providers specifically prohibit you, or your end users, from generating some \n",
"types of harmful content. To comply with this (and to just generally prevent your application from being harmful) \n",
"you may want to add a moderation chain to your sequences in order to make sure any output \n",
"the LLM generates is not harmful.\n",
"\n",
"If the content passed into the moderation chain is harmful, there is not one best way to handle it.\n",
"It probably depends on your application. Sometimes you may want to throw an error \n",
"(and have your application handle that). Other times, you may want to return something to \n",
"the user explaining that the text was harmful."
]
},
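{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch (assuming `OpenAIModerationChain` from `langchain.chains` and an OpenAI API key in the environment), running text through a moderation chain looks like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import OpenAIModerationChain\n",
"\n",
"moderation_chain = OpenAIModerationChain()\n",
"\n",
"# Benign text is passed through unchanged;\n",
"# flagged text is replaced with a policy-violation message\n",
"moderation_chain.invoke({\"input\": \"This is okay\"})"
]
},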
{

@ -24,7 +24,7 @@
"<img src=\"/img/qa_privacy_protection.png\" width=\"900\"/>\n",
"\n",
"\n",
"In the following notebook, we will not go into the details of how the anonymizer works. If you are interested, please visit [this part of the documentation](/docs/guides/privacy/presidio_data_anonymization/).\n",
"In the following notebook, we will not go into the details of how the anonymizer works. If you are interested, please visit [this part of the documentation](/docs/guides/productionization/safety/presidio_data_anonymization/).\n",
"\n",
"## Quickstart\n",
"\n",

@ -1,10 +0,0 @@
# Safety
One of the key concerns with using LLMs is that they may generate harmful or unethical text. This is an area of active research in the field. Here we present some built-in chains inspired by this research, which are intended to make the outputs of LLMs safer.
- [Amazon Comprehend moderation chain](/docs/guides/safety/amazon_comprehend_chain): Use [Amazon Comprehend](https://aws.amazon.com/comprehend/) to detect and handle Personally Identifiable Information (PII) and toxicity.
- [Constitutional chain](/docs/guides/safety/constitutional_chain): Prompt the model with a set of principles which should guide the model behavior.
- [Hugging Face prompt injection identification](/docs/guides/safety/hugging_face_prompt_injection): Detect and handle prompt injection attacks.
- [Layerup Security](/docs/guides/safety/layerup_security): Easily mask PII & sensitive data, detect and mitigate 10+ LLM-based threat vectors, including PII & sensitive data, prompt injection, hallucination, abuse, and more.
- [Logical Fallacy chain](/docs/guides/safety/logical_fallacy_chain): Checks the model output against logical fallacies to correct any deviation.
- [Moderation chain](/docs/guides/safety/moderation): Check if any output text is harmful and flag it.

@ -1,267 +0,0 @@
# Moderation chain
This notebook walks through examples of how to use a moderation chain, and several common ways for doing so.
Moderation chains are useful for detecting text that could be hateful, violent, etc. This can be useful to apply on both user input, but also on the output of a Language Model.
Some API providers, like OpenAI, [specifically prohibit](https://beta.openai.com/docs/usage-policies/use-case-policy) you, or your end users, from generating some
types of harmful content. To comply with this (and to just generally prevent your application from being harmful)
you may often want to append a moderation chain to any LLMChains, in order to make sure any output
the LLM generates is not harmful.
If the content passed into the moderation chain is harmful, there is not one best way to handle it,
it probably depends on your application. Sometimes you may want to throw an error in the Chain
(and have your application handle that). Other times, you may want to return something to
the user explaining that the text was harmful. There could be other ways to handle it.
We will cover all these ways in this walkthrough.
We'll show:
1. How to run any piece of text through a moderation chain.
2. How to append a Moderation chain to an LLMChain.
```python
from langchain_openai import OpenAI
from langchain.chains import OpenAIModerationChain, SequentialChain, LLMChain, SimpleSequentialChain
from langchain_core.prompts import PromptTemplate
```
## How to use the moderation chain
Here's an example of using the moderation chain with default settings (will return a string
explaining stuff was flagged).
```python
moderation_chain = OpenAIModerationChain()
moderation_chain.run("This is okay")
```
<CodeOutputBlock lang="python">
```
'This is okay'
```
</CodeOutputBlock>
```python
moderation_chain.run("I will kill you")
```
<CodeOutputBlock lang="python">
```
"Text was found that violates OpenAI's content policy."
```
</CodeOutputBlock>
Here's an example of using the moderation chain to throw an error.
```python
moderation_chain_error = OpenAIModerationChain(error=True)
moderation_chain_error.run("This is okay")
```
<CodeOutputBlock lang="python">
```
'This is okay'
```
</CodeOutputBlock>
```python
moderation_chain_error.run("I will kill you")
```
<CodeOutputBlock lang="python">
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[7], line 1
----> 1 moderation_chain_error.run("I will kill you")
File ~/workplace/langchain/langchain/chains/base.py:138, in Chain.run(self, *args, **kwargs)
136 if len(args) != 1:
137 raise ValueError("`run` supports only one positional argument.")
--> 138 return self(args[0])[self.output_keys[0]]
140 if kwargs and not args:
141 return self(kwargs)[self.output_keys[0]]
File ~/workplace/langchain/langchain/chains/base.py:112, in Chain.__call__(self, inputs, return_only_outputs)
108 if self.verbose:
109 print(
110 f"\n\n\033[1m> Entering new {self.__class__.__name__} chain...\033[0m"
111 )
--> 112 outputs = self._call(inputs)
113 if self.verbose:
114 print(f"\n\033[1m> Finished {self.__class__.__name__} chain.\033[0m")
File ~/workplace/langchain/langchain/chains/moderation.py:81, in OpenAIModerationChain._call(self, inputs)
79 text = inputs[self.input_key]
80 results = self.client.create(text)
---> 81 output = self._moderate(text, results["results"][0])
82 return {self.output_key: output}
File ~/workplace/langchain/langchain/chains/moderation.py:73, in OpenAIModerationChain._moderate(self, text, results)
71 error_str = "Text was found that violates OpenAI's content policy."
72 if self.error:
---> 73 raise ValueError(error_str)
74 else:
75 return error_str
ValueError: Text was found that violates OpenAI's content policy.
```
</CodeOutputBlock>
## How to create a custom Moderation chain
Here's an example of creating a custom moderation chain with a custom error message.
It requires some knowledge of OpenAI's moderation endpoint results. See [docs here](https://beta.openai.com/docs/api-reference/moderations).
```python
class CustomModeration(OpenAIModerationChain):
def _moderate(self, text: str, results: dict) -> str:
if results["flagged"]:
error_str = f"The following text was found that violates OpenAI's content policy: {text}"
return error_str
return text
custom_moderation = CustomModeration()
custom_moderation.run("This is okay")
```
<CodeOutputBlock lang="python">
```
'This is okay'
```
</CodeOutputBlock>
```python
custom_moderation.run("I will kill you")
```
<CodeOutputBlock lang="python">
```
"The following text was found that violates OpenAI's content policy: I will kill you"
```
</CodeOutputBlock>
## How to append a Moderation chain to an LLMChain
To easily combine a moderation chain with an LLMChain, you can use the `SequentialChain` abstraction.
Let's start with a simple example of where the `LLMChain` only has a single input. For this purpose,
we will prompt the model, so it says something harmful.
```python
prompt = PromptTemplate.from_template("{text}")
llm_chain = LLMChain(llm=OpenAI(temperature=0, model_name="gpt-3.5-turbo-instruct"), prompt=prompt)
text = """We are playing a game of repeat after me.
Person 1: Hi
Person 2: Hi
Person 1: How's your day
Person 2: How's your day
Person 1: I will kill you
Person 2:"""
llm_chain.run(text)
```
<CodeOutputBlock lang="python">
```
' I will kill you'
```
</CodeOutputBlock>
```python
chain = SimpleSequentialChain(chains=[llm_chain, moderation_chain])
chain.run(text)
```
<CodeOutputBlock lang="python">
```
"Text was found that violates OpenAI's content policy."
```
</CodeOutputBlock>
Now let's walk through an example of using it with an LLMChain which has multiple inputs (a bit more tricky because we can't use the SimpleSequentialChain)
```python
prompt = PromptTemplate.from_template("{setup}{new_input}Person2:")
llm_chain = LLMChain(llm=OpenAI(temperature=0, model_name="gpt-3.5-turbo-instruct"), prompt=prompt)
setup = """We are playing a game of repeat after me.
Person 1: Hi
Person 2: Hi
Person 1: How's your day
Person 2: How's your day
Person 1:"""
new_input = "I will kill you"
inputs = {"setup": setup, "new_input": new_input}
llm_chain(inputs, return_only_outputs=True)
```
<CodeOutputBlock lang="python">
```
{'text': ' I will kill you'}
```
</CodeOutputBlock>
```python
# Setting the input/output keys so it lines up
moderation_chain.input_key = "text"
moderation_chain.output_key = "sanitized_text"
chain = SequentialChain(chains=[llm_chain, moderation_chain], input_variables=["setup", "new_input"])
chain(inputs, return_only_outputs=True)
```
<CodeOutputBlock lang="python">
```
{'sanitized_text': "Text was found that violates OpenAI's content policy."}
```
</CodeOutputBlock>

@ -72,7 +72,7 @@
"source": [
"## Structured Output\n",
"\n",
"`ChatAnthropicTools` also implements the [`with_structured_output` spec](/docs/guides/structured_output) for extracting values. Note: this may not be as stable as with models that explicitly offer tool calling."
"`ChatAnthropicTools` also implements the [`with_structured_output` spec](/docs/modules/model_io/chat/structured_output) for extracting values. Note: this may not be as stable as with models that explicitly offer tool calling."
]
},
{

@ -283,7 +283,7 @@ We need to install the `boto3` and `nltk` libraries.
pip install boto3 nltk
```
See a [usage example](/docs/guides/safety/amazon_comprehend_chain).
See a [usage example](/docs/guides/productionization/safety/amazon_comprehend_chain).
```python
from langchain_experimental.comprehend_moderation import AmazonComprehendModerationChain

@ -346,7 +346,7 @@ pip install langchain-experimental openai presidio-analyzer presidio-anonymizer
python -m spacy download en_core_web_lg
```
See [usage examples](/docs/guides/privacy/presidio_data_anonymization/).
See [usage examples](/docs/guides/productionization/safety/presidio_data_anonymization/).
```python
from langchain_experimental.data_anonymizer import PresidioAnonymizer, PresidioReversibleAnonymizer

@ -111,7 +111,7 @@ For a more detailed walkthrough of this, see [this notebook](/docs/modules/data_
## Chain
See a [usage example](/docs/guides/safety/moderation).
See a [usage example](/docs/guides/productionization/safety/moderation).
```python
from langchain.chains import OpenAIModerationChain

@ -31,4 +31,4 @@ from langchain.agents import load_tools
tools = load_tools(["golden-query"])
```
For more information on tools, see [this page](/docs/modules/agents/tools/).
For more information on tools, see [this page](/docs/modules/tools/).

@ -70,4 +70,4 @@ from langchain.agents import load_tools
tools = load_tools(["google-serper"])
```
For more information on tools, see [this page](/docs/modules/agents/tools/).
For more information on tools, see [this page](/docs/modules/tools/).

@ -7,7 +7,7 @@
>It optimizes setup and configuration details, including GPU usage.
>For a complete list of supported models and model variants, see the [Ollama model library](https://ollama.ai/library).
See [this guide](/docs/guides/local_llms#quickstart) for more details
See [this guide](/docs/guides/development/local_llms#quickstart) for more details
on how to use `Ollama` with LangChain.
## Installation and Setup

@ -41,4 +41,4 @@ from langchain.agents import load_tools
tools = load_tools(["openweathermap-api"])
```
For more information on tools, see [this page](/docs/modules/agents/tools/).
For more information on tools, see [this page](/docs/modules/tools/).

@ -77,4 +77,4 @@ from langchain.agents import load_tools
tools = load_tools(["searchapi"])
```
For more information on tools, see [this page](/docs/modules/agents/tools/).
For more information on tools, see [this page](/docs/modules/tools/).

@ -87,4 +87,4 @@ arxiv_tool = SearxSearchResults(name="Arxiv", wrapper=wrapper,
})
```
For more information on tools, see [this page](/docs/modules/agents/tools/).
For more information on tools, see [this page](/docs/modules/tools/).

@ -28,4 +28,4 @@ from langchain.agents import load_tools
tools = load_tools(["serpapi"])
```
For more information on this, see [this page](/docs/modules/agents/tools)
For more information on this, see [this page](/docs/modules/tools)

@ -33,4 +33,4 @@ from langchain.agents import load_tools
tools = load_tools(["stackexchange"])
```
For more information on tools, see [this page](/docs/modules/agents/tools/).
For more information on tools, see [this page](/docs/modules/tools/).

@ -36,4 +36,4 @@ from langchain.agents import load_tools
tools = load_tools(["wolfram-alpha"])
```
For more information on tools, see [this page](/docs/modules/agents/tools/).
For more information on tools, see [this page](/docs/modules/tools/).

@ -2,7 +2,7 @@
sidebar_class_name: hidden
---
# LangSmith
# 🦜🛠️ LangSmith
[LangSmith](https://smith.langchain.com) helps you trace and evaluate your language model applications and intelligent agents to help you
move from prototype to production.
