Add LCEL to chain doc (#11895)

Bagatur 10 months ago committed by GitHub
parent 52bf03d786
commit d62369f478

@@ -7,5 +7,5 @@ The LangChain Expression Language was designed from day 1 to **support putting p
- optimised parallel execution: whenever your LCEL chains have steps that can be executed in parallel (e.g. if you fetch documents from multiple retrievers) we automatically do it, both in the sync and the async interfaces, for the smallest possible latency.
- support for retries and fallbacks: more recently we've added support for configuring retries and fallbacks for any part of your LCEL chain. This is a great way to make your chains more reliable at scale (a sketch follows this list). We're currently working on adding streaming support for retries/fallbacks, so you can get the added reliability without any latency cost.
- accessing intermediate results: for more complex chains it's often very useful to access the results of intermediate steps even before the final output is produced. This can be used to let end-users know something is happening, or even just to debug your chain. We've added support for [streaming intermediate results](https://x.com/LangChainAI/status/1711806009097044193?s=20), and it's available on every LangServe server.
- [input and output schemas](https://x.com/LangChainAI/status/1711805322195861934?s=20): this week we launched input and output schemas for LCEL, giving every LCEL chain Pydantic and JSONSchema schemas inferred from the structure of your chain. This can be used for validation of inputs and outputs, and is an integral part of LangServe.
- [input and output schemas](https://x.com/LangChainAI/status/1711805322195861934?s=20): input and output schemas give every LCEL chain Pydantic and JSONSchema schemas inferred from the structure of your chain. This can be used for validation of inputs and outputs, and is an integral part of LangServe.
- tracing with LangSmith: all chains built with LCEL have first-class tracing support, which can be used to debug your chains, or to understand what's happening in production. To enable this all you have to do is add your [LangSmith](https://www.langchain.com/langsmith) API key as an environment variable.
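
A minimal sketch of wiring these features together (the fallback model, retry count, and prompt below are illustrative assumptions, not part of this diff):

```python
from langchain.chat_models import ChatAnthropic, ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema import StrOutputParser

prompt = ChatPromptTemplate.from_messages([("human", "{question}")])

# Retry the primary model on transient failures, then fall back to a
# different model entirely (ChatOpenAI as the backup is an assumed choice).
model = ChatAnthropic().with_retry(stop_after_attempt=2).with_fallbacks(
    [ChatOpenAI()]
)
chain = prompt | model | StrOutputParser()

# Pydantic/JSONSchema schemas are inferred from the chain's structure.
print(chain.input_schema.schema())
print(chain.output_schema.schema())
```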

@@ -0,0 +1,190 @@
{
"cells": [
{
"cell_type": "raw",
"id": "bcb4ca40-c3cb-4f23-b09f-4d6c3c46999f",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 2\n",
"title: Chains\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "b872d874-ad6e-49b5-9435-66063a64d1a8",
"metadata": {},
"source": [
"Using an LLM in isolation is fine for simple applications, but more complex applications require chaining LLMs - either with each other or with other components.\n",
"\n",
"LangChain provides two high-level frameworks for \"chaining\" components. The legacy approach is to use the `Chain` interface. The updated approach is to use the [LangChain Expression Language (LCEL)](/docs/expression_language/). When building new applications we recommend using LCEL for chain composition. But there are a number of useful, built-in `Chain`'s that we continue to support, so we document both frameworks here. As we'll touch on below, `Chain`'s can also themselves be used in LCEL, so the two are not mutually exclusive."
]
},
{
"cell_type": "markdown",
"id": "6aedf9f6-b53f-4456-90cb-be3cfec04b4e",
"metadata": {},
"source": [
"## LCEL\n",
"\n",
"The most visible part of LCEL is that it provides an intuitive and readable syntax for composition. But more importantly, it also provides first-class support for:\n",
"* [streaming](/docs/expression_language/interface#stream),\n",
"* [async calls](/docs/expression_language/interface#async-stream),\n",
"* [batching](/docs/expression_language/interface#batch),\n",
"* [parallelization](/docs/expression_language/interface#parallelism),\n",
"* retries,\n",
"* [fallbacks](/docs/expression_language/how_to/fallbacks),\n",
"* tracing,\n",
"* [and more.](/docs/expression_language/why)\n",
"\n",
"As a simple and common example, we can see what it's like to combine a prompt, model and output parser:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "beb2a555-866d-4837-bfe5-988dd4ab09a5",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatAnthropic\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.schema import StrOutputParser\n",
"\n",
"model = ChatAnthropic()\n",
"prompt = ChatPromptTemplate.from_messages([\n",
" (\"system\", \"You're a very knowledgeable historian who provides accurate and eloquent answers to historical questions.\"),\n",
" (\"human\", \"{question}\"),\n",
"])\n",
"runnable = prompt | model | StrOutputParser()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "2c14e850-32fa-4de7-9a9d-9ed0a3fb5e99",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Mansa Musa was the emperor of the Mali Empire in West Africa during the 14th century. He accumulated immense wealth through several means:\n",
"\n",
"- Gold mining - Mali contained very rich gold deposits, especially in the region of Bambuk. Gold mining and gold trade was a major source of wealth for the empire.\n",
"\n",
"- Control of trade routes - Mali dominated the trans-Saharan trade routes connecting West Africa to North Africa and beyond. By taxing the goods that passed through its territory, Mali profited greatly.\n",
"\n",
"- Tributary states - Many lands surrounding Mali paid tribute to the empire. This came in the form of gold, slaves, and other valuable resources.\n",
"\n",
"- Agriculture - Mali also had extensive agricultural lands irrigated by the Niger River. Surplus food produced could be sold or traded. \n",
"\n",
"- Royal monopolies - The emperor claimed monopoly rights over the production and sale of certain goods like salt from the Taghaza mines. This added to his personal wealth.\n",
"\n",
"- Inheritance - As an emperor, Mansa Musa inherited a wealthy state. His predecessors had already consolidated lands and accumulated riches which fell to Musa.\n",
"\n",
"So in summary, mining, trade, taxes,"
]
}
],
"source": [
"for chunk in runnable.stream({\"question\": \"How did Mansa Musa accumulate his wealth?\"}):\n",
" print(chunk, end=\"\", flush=True)"
]
},
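{
"cell_type": "markdown",
"id": "8f2a1c3e-9b4d-4e5f-a6b7-c8d9e0f1a2b3",
"metadata": {},
"source": [
"The same chain also supports [batching](/docs/expression_language/interface#batch) and [async calls](/docs/expression_language/interface#async-stream) with no extra code. A minimal sketch (outputs omitted; the second question is an illustrative placeholder):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3d4e5f6a-7b8c-4d9e-8f0a-1b2c3d4e5f6a",
"metadata": {},
"outputs": [],
"source": [
"# Batch over multiple inputs; results come back in input order.\n",
"runnable.batch(\n",
"    [\n",
"        {\"question\": \"How did Mansa Musa accumulate his wealth?\"},\n",
"        {\"question\": \"Who succeeded Mansa Musa?\"},  # illustrative input\n",
"    ]\n",
")\n",
"\n",
"# Async invocation (top-level await works in a notebook).\n",
"await runnable.ainvoke({\"question\": \"How did Mansa Musa accumulate his wealth?\"})"
]
},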
{
"cell_type": "markdown",
"id": "b09e115c-ca2d-4ec8-9676-5a37bd2692ab",
"metadata": {},
"source": [
"For more head to the [LCEL section](/docs/expression_language/)."
]
},
{
"cell_type": "markdown",
"id": "e0cc6c2c-bc35-4415-b894-9ef88014ba33",
"metadata": {},
"source": [
"## [Legacy] `Chain` interface\n",
"\n",
"**Chain**'s are the legacy interface for \"chained\" applications. We define a Chain very generically as a sequence of calls to components, which can include other chains. The base interface is simple:\n",
"\n",
"```python\n",
"class Chain(BaseModel, ABC):\n",
" \"\"\"Base interface that all chains should implement.\"\"\"\n",
"\n",
" memory: BaseMemory\n",
" callbacks: Callbacks\n",
"\n",
" def __call__(\n",
" self,\n",
" inputs: Any,\n",
" return_only_outputs: bool = False,\n",
" callbacks: Callbacks = None,\n",
" ) -> Dict[str, Any]:\n",
" ...\n",
"```\n",
"\n",
"We can recreate the LCEL runnable we made above using the built-in `LLMChain`:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "1026a508-b2c7-4567-8ecf-487628737c16",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\" Mansa Musa was the emperor of the Mali Empire in West Africa in the early 14th century. He accumulated his vast wealth through several means:\\n\\n- Gold mining - Mali contained very rich gold deposits, especially in the southern part of the empire. Gold mining and trade was a major source of wealth.\\n\\n- Control of trade routes - Mali dominated the trans-Saharan trade routes connecting West Africa to North Africa and beyond. By taxing and controlling this lucrative trade, Mansa Musa reaped great riches.\\n\\n- Tributes from conquered lands - The Mali Empire expanded significantly under Mansa Musa's rule. As new lands were conquered, they paid tribute to the mansa in the form of gold, salt, and slaves.\\n\\n- Inheritance - Mansa Musa inherited a wealthy empire from his predecessor. He continued to build the wealth of Mali through the factors above.\\n\\n- Sound fiscal management - Musa is considered to have managed the empire and its finances very effectively, including keeping taxes reasonable and promoting a robust economy. This allowed him to accumulate and maintain wealth.\\n\\nSo in summary, conquest, trade, taxes, mining, and inheritance all contributed to Mansa Musa growing the M\""
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import LLMChain\n",
"\n",
"\n",
"chain = LLMChain(llm=model, prompt=prompt, output_parser=StrOutputParser())\n",
"chain.run(question=\"How did Mansa Musa accumulate his wealth?\")"
]
},
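{
"cell_type": "markdown",
"id": "5a6b7c8d-9e0f-4a1b-8c2d-3e4f5a6b7c8d",
"metadata": {},
"source": [
"And because `Chain`s can themselves be used in LCEL, the legacy chain can be piped into further LCEL steps. A minimal sketch (assuming `LLMChain`'s default `\"text\"` output key):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9e0f1a2b-3c4d-4e5f-9a6b-7c8d9e0f1a2b",
"metadata": {},
"outputs": [],
"source": [
"# LLMChain.invoke returns a dict; pull the parsed answer out of the\n",
"# \"text\" key (LLMChain's default output key) in a follow-up LCEL step.\n",
"composed = chain | (lambda output: output[\"text\"].strip())\n",
"composed.invoke({\"question\": \"How did Mansa Musa accumulate his wealth?\"})"
]
},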
{
"cell_type": "markdown",
"id": "c1767c9e-eebd-4e07-9805-cf445920c38d",
"metadata": {},
"source": [
"For more specifics check out:\n",
"- [How-to](/docs/modules/chains/how_to/) for walkthroughs of different chain features\n",
"- [Foundational](/docs/modules/chains/foundational/) to get acquainted with core building block chains\n",
"- [Document](/docs/modules/chains/document/) to learn how to incorporate documents into chains\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv",
"language": "python",
"name": "poetry-venv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@@ -1,127 +0,0 @@
---
sidebar_position: 2
---
# Chains
Using an LLM in isolation is fine for simple applications,
but more complex applications require chaining LLMs - either with each other or with other components.
LangChain provides the **Chain** interface for such "chained" applications. We define a Chain very generically as a sequence of calls to components, which can include other chains. The base interface is simple:
```python
class Chain(BaseModel, ABC):
    """Base interface that all chains should implement."""

    memory: BaseMemory
    callbacks: Callbacks

    def __call__(
        self,
        inputs: Any,
        return_only_outputs: bool = False,
        callbacks: Callbacks = None,
    ) -> Dict[str, Any]:
        ...
```
This idea of composing components together in a chain is simple but powerful. It drastically simplifies the implementation of complex applications and makes it more modular, which in turn makes it much easier to debug, maintain, and improve your applications.
For more specifics check out:
- [How-to](/docs/modules/chains/how_to/) for walkthroughs of different chain features
- [Foundational](/docs/modules/chains/foundational/) to get acquainted with core building block chains
- [Document](/docs/modules/chains/document/) to learn how to incorporate documents into chains
## Why do we need chains?
Chains allow us to combine multiple components together to create a single, coherent application. For example, we can create a chain that takes user input, formats it with a PromptTemplate, and then passes the formatted response to an LLM. We can build more complex chains by combining multiple chains together, or by combining chains with other components.
## Get started
#### Using `LLMChain`
The `LLMChain` is the most basic building block chain. It takes in a prompt template, formats it with the user input and returns the response from an LLM.
To use the `LLMChain`, first create a prompt template.
```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
```
We can now create a very simple chain that will take user input, format the prompt with it, and then send it to the LLM.
```python
from langchain.chains import LLMChain
chain = LLMChain(llm=llm, prompt=prompt)
# Run the chain only specifying the input variable.
print(chain.run("colorful socks"))
```
<CodeOutputBlock lang="python">
```
Colorful Toes Co.
```
</CodeOutputBlock>
If there are multiple variables, you can input them all at once using a dictionary.
```python
prompt = PromptTemplate(
    input_variables=["company", "product"],
    template="What is a good name for {company} that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run({
    'company': "ABC Startup",
    'product': "colorful socks"
}))
```
<CodeOutputBlock lang="python">
```
Socktopia Colourful Creations.
```
</CodeOutputBlock>
You can use a chat model in an `LLMChain` as well:
```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
)

human_message_prompt = HumanMessagePromptTemplate(
    prompt=PromptTemplate(
        template="What is a good name for a company that makes {product}?",
        input_variables=["product"],
    )
)
chat_prompt_template = ChatPromptTemplate.from_messages([human_message_prompt])
chat = ChatOpenAI(temperature=0.9)
chain = LLMChain(llm=chat, prompt=chat_prompt_template)
print(chain.run("colorful socks"))
```
<CodeOutputBlock lang="python">
```
Rainbow Socks Co.
```
</CodeOutputBlock>