Docs: Add LCEL to chains/foundational/llm (#12213)

Bagatur 9 months ago committed by GitHub
parent 922193475a
commit 44dae6936b

@@ -0,0 +1,371 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "7d46647d-638f-497e-b51a-52bf8dd76e39",
"metadata": {},
"source": [
"# LLM\n",
"\n",
"The most common type of chaining in any LLM application is combining a prompt template with an LLM and optionally an output parser.\n",
"\n",
"The recommended way to do this is using LangChain Expression Language. We also continue to support the legacy `LLMChain`, which is a single class for composing these three components."
]
},
{
"cell_type": "markdown",
"id": "0ad20b88-f2e8-4ba0-b8e6-1892ab4d2190",
"metadata": {},
"source": [
"## Using LCEL\n",
"\n",
"`BasePromptTemplate`, `BaseLanguageModel` and `BaseOutputParser` all implement the `Runnable` interface and are designed to be piped into one another, making LCEL composition very easy:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "92ad7c9d-a1d2-49bd-a4a3-0f6f0fd1656b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'VibrantSocks'"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.prompts import PromptTemplate\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.schema import StrOutputParser\n",
"\n",
"prompt = PromptTemplate.from_template(\"What is a good name for a company that makes {product}?\")\n",
"runnable = prompt | ChatOpenAI() | StrOutputParser()\n",
"runnable.invoke({\"product\": \"colorful socks\"})"
]
},
{
"cell_type": "markdown",
"id": "784d8083-a2c8-4172-92b8-0bd0d74f032a",
"metadata": {},
"source": [
"Head to the [LCEL](/docs/expression_language) section for more on the interface, built-in features, and cookbook examples."
]
},
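{
"cell_type": "markdown",
"id": "3c2d1f6e-8a4b-4c9d-9e2f-1a5b6c7d8e9f",
"metadata": {},
"source": [
"Because the composed `runnable` implements the full `Runnable` interface, it also supports methods like `batch` and `stream` out of the box. A minimal sketch (outputs will vary by model):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5f4e3d2c-1b0a-4f9e-8d7c-6b5a4e3d2c1b",
"metadata": {},
"outputs": [],
"source": [
"# `batch` runs the chain over a list of inputs and returns the outputs in order.\n",
"runnable.batch([{\"product\": \"colorful socks\"}, {\"product\": \"artisanal soap\"}])\n",
"\n",
"# `stream` yields chunks of output as they are generated.\n",
"for chunk in runnable.stream({\"product\": \"colorful socks\"}):\n",
"    print(chunk, end=\"\", flush=True)"
]
},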
{
"cell_type": "markdown",
"id": "efee07bb-fc45-4e06-999f-a776e6d53333",
"metadata": {},
"source": [
"## [Legacy] LLMChain\n",
"An `LLMChain` is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents.\n",
"\n",
"An `LLMChain` consists of a `PromptTemplate` and a language model (either an LLM or chat model). It formats the prompt template using the input key values provided (and also memory key values, if available), passes the formatted string to LLM and returns the LLM output.\n",
"\n",
"### Get started"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "fc0b7d6c-b808-48d9-bdb5-818ab4a1ccca",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'product': 'colorful socks', 'text': '\\n\\nSocktastic!'}"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.prompts import PromptTemplate\n",
"from langchain.llms import OpenAI\n",
"from langchain.chains import LLMChain\n",
"\n",
"prompt_template = \"What is a good name for a company that makes {product}?\"\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"llm_chain = LLMChain(\n",
" llm=llm,\n",
" prompt=PromptTemplate.from_template(prompt_template)\n",
")\n",
"llm_chain(\"colorful socks\")"
]
},
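{
"cell_type": "markdown",
"id": "9b8a7c6d-5e4f-4a3b-8c2d-1e0f9a8b7c6d",
"metadata": {},
"source": [
"As noted above, an `LLMChain` can also fill prompt variables from memory. A minimal sketch using `ConversationBufferMemory`, whose default memory key is `history` (the `memory_chain` and `memory_prompt` names are just illustrative):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2e3f4a5b-6c7d-4e8f-9a0b-1c2d3e4f5a6b",
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory import ConversationBufferMemory\n",
"\n",
"# \"history\" is filled from memory on each call; \"question\" is a regular input key.\n",
"memory_prompt = PromptTemplate.from_template(\"{history}\\nHuman: {question}\\nAI:\")\n",
"memory_chain = LLMChain(\n",
"    llm=llm,\n",
"    prompt=memory_prompt,\n",
"    memory=ConversationBufferMemory()\n",
")\n",
"memory_chain.predict(question=\"What is a good name for a company that makes colorful socks?\")"
]
},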
{
"cell_type": "markdown",
"id": "040634f0-fe60-4b0e-b3f6-e9c15146e2cd",
"metadata": {},
"source": [
"### Additional ways of running `LLMChain`\n",
"\n",
"Aside from `__call__` and `run` methods shared by all `Chain` object, `LLMChain` offers a few more ways of calling the chain logic:\n",
"\n",
"- `apply` allows you run the chain against a list of inputs:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "8cd8dd72-6d5a-488f-80a6-1a9324c743e8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'text': '\\n\\nSocktastic!'},\n",
" {'text': '\\n\\nTechCore Solutions.'},\n",
" {'text': '\\n\\nFootwear Factory.'}]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"input_list = [\n",
" {\"product\": \"socks\"},\n",
" {\"product\": \"computer\"},\n",
" {\"product\": \"shoes\"}\n",
"]\n",
"llm_chain.apply(input_list)"
]
},
{
"cell_type": "markdown",
"id": "18624d04-474a-425e-bcf3-58748b747e08",
"metadata": {},
"source": [
"- `generate` is similar to `apply`, except it return an `LLMResult` instead of string. `LLMResult` often contains useful generation such as token usages and finish reason."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "67e72139-686d-40eb-9c1e-4342d3b1abfe",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"LLMResult(generations=[[Generation(text='\\n\\nSocktastic!', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nTechCore Solutions.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nFootwear Factory.', generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 36, 'total_tokens': 55}, 'model_name': 'text-davinci-003'}, run=[RunInfo(run_id=UUID('9a423a43-6d35-4e8f-9aca-cacfc8e0dc49')), RunInfo(run_id=UUID('a879c077-b521-461c-8f29-ba63adfc327c')), RunInfo(run_id=UUID('40b892fa-e8c2-47d0-a309-4f7a4ed5b64a'))])"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_chain.generate(input_list)"
]
},
{
"cell_type": "markdown",
"id": "0480da3a-865d-4ec5-9366-e29c3967fef3",
"metadata": {},
"source": [
"- `predict` is similar to `run` method except that the input keys are specified as keyword arguments instead of a Python dict."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "f4afb8a4-9113-4082-85cb-55a2d406c99a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\nSocktastic!'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Single input example\n",
"llm_chain.predict(product=\"colorful socks\")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "e58eaab1-4db4-43cb-b523-7b3380332cad",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\nQ: What did the duck say when his friend died?\\nA: Quack, quack, goodbye.'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Multiple inputs example\n",
"template = \"\"\"Tell me a {adjective} joke about {subject}.\"\"\"\n",
"prompt = PromptTemplate(template=template, input_variables=[\"adjective\", \"subject\"])\n",
"llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0))\n",
"\n",
"llm_chain.predict(adjective=\"sad\", subject=\"ducks\")"
]
},
{
"cell_type": "markdown",
"id": "63f02d9e-6470-41d3-b91c-b064baf84733",
"metadata": {},
"source": [
"### Parsing the outputs\n",
"\n",
"By default, `LLMChain` does not parse the output even if the underlying `prompt` object has an output parser. If you would like to apply that output parser on the LLM output, use `predict_and_parse` instead of `predict` and `apply_and_parse` instead of `apply`.\n",
"\n",
"With `predict`:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "134126ca-2f1c-4829-94ba-810d91c92138",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\nRed, orange, yellow, green, blue, indigo, violet'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.output_parsers import CommaSeparatedListOutputParser\n",
"\n",
"output_parser = CommaSeparatedListOutputParser()\n",
"template = \"\"\"List all the colors in a rainbow\"\"\"\n",
"prompt = PromptTemplate(template=template, input_variables=[], output_parser=output_parser)\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"\n",
"llm_chain.predict()"
]
},
{
"cell_type": "markdown",
"id": "7a46f1e8-daaf-43d6-8045-9b187655631b",
"metadata": {},
"source": [
"With `predict_and_parse`:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "7ef9b74d-7ef5-4b80-80cc-f8226f79259b",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/bagatur/langchain/libs/langchain/langchain/chains/llm.py:280: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.\n",
" warnings.warn(\n"
]
},
{
"data": {
"text/plain": [
"['Red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_chain.predict_and_parse()"
]
},
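{
"cell_type": "markdown",
"id": "6d5c4b3a-2f1e-4d0c-9b8a-7f6e5d4c3b2a",
"metadata": {},
"source": [
"As the deprecation warning suggests, you can instead pass the output parser directly to the `LLMChain`, in which case `predict` returns the parsed result. A minimal sketch reusing the `prompt` and `output_parser` defined above (`parsing_chain` is just an illustrative name):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8a9b0c1d-2e3f-4a5b-8c7d-6e5f4a3b2c1d",
"metadata": {},
"outputs": [],
"source": [
"# Passing the parser to the chain itself applies it to every output.\n",
"parsing_chain = LLMChain(prompt=prompt, llm=llm, output_parser=output_parser)\n",
"parsing_chain.predict()"
]
},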
{
"cell_type": "markdown",
"id": "93446f7f-0a2d-4fc5-99a1-a26cc0605b4b",
"metadata": {},
"source": [
"### Initialize from string\n",
"\n",
"You can also construct an `LLMChain` from a string template directly."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "7e324174-e8ab-4095-87cb-17874a058da9",
"metadata": {},
"outputs": [],
"source": [
"template = \"\"\"Tell me a {adjective} joke about {subject}.\"\"\"\n",
"llm_chain = LLMChain.from_string(llm=llm, template=template)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "a4f10407-6519-4174-89fe-e7507765f1ae",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\nQ: What did the duck say when his friend died?\\nA: Quack, quack, goodbye.'"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_chain.predict(adjective=\"sad\", subject=\"ducks\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@@ -1,171 +0,0 @@
# LLM
An `LLMChain` is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents.
An `LLMChain` consists of a `PromptTemplate` and a language model (either an LLM or chat model). It formats the prompt template using the input key values provided (and memory key values, if available), passes the formatted string to the LLM, and returns the LLM output.
## Get started
```python
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
from langchain.chains import LLMChain
prompt_template = "What is a good name for a company that makes {product}?"
llm = OpenAI(temperature=0)
llm_chain = LLMChain(
llm=llm,
prompt=PromptTemplate.from_template(prompt_template)
)
llm_chain("colorful socks")
```
<CodeOutputBlock lang="python">
```
{'product': 'colorful socks', 'text': '\n\nSocktastic!'}
```
</CodeOutputBlock>
## Additional ways of running `LLMChain`
Aside from the `__call__` and `run` methods shared by all `Chain` objects, `LLMChain` offers a few more ways of calling the chain logic:
- `apply` allows you to run the chain against a list of inputs:
```python
input_list = [
{"product": "socks"},
{"product": "computer"},
{"product": "shoes"}
]
llm_chain.apply(input_list)
```
<CodeOutputBlock lang="python">
```
[{'text': '\n\nSocktastic!'},
{'text': '\n\nTechCore Solutions.'},
{'text': '\n\nFootwear Factory.'}]
```
</CodeOutputBlock>
- `generate` is similar to `apply`, except it returns an `LLMResult` instead of a string. An `LLMResult` often contains useful generation metadata, such as token usage and the finish reason.
```python
llm_chain.generate(input_list)
```
<CodeOutputBlock lang="python">
```
LLMResult(generations=[[Generation(text='\n\nSocktastic!', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nTechCore Solutions.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nFootwear Factory.', generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'prompt_tokens': 36, 'total_tokens': 55, 'completion_tokens': 19}, 'model_name': 'text-davinci-003'})
```
</CodeOutputBlock>
- `predict` is similar to the `run` method, except that the input keys are specified as keyword arguments instead of a Python dict.
```python
# Single input example
llm_chain.predict(product="colorful socks")
```
<CodeOutputBlock lang="python">
```
'\n\nSocktastic!'
```
</CodeOutputBlock>
```python
# Multiple inputs example
template = """Tell me a {adjective} joke about {subject}."""
prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"])
llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0))
llm_chain.predict(adjective="sad", subject="ducks")
```
<CodeOutputBlock lang="python">
```
'\n\nQ: What did the duck say when his friend died?\nA: Quack, quack, goodbye.'
```
</CodeOutputBlock>
## Parsing the outputs
By default, `LLMChain` does not parse the output even if the underlying `prompt` object has an output parser. If you would like to apply that output parser to the LLM output, use `predict_and_parse` instead of `predict` and `apply_and_parse` instead of `apply`.
With `predict`:
```python
from langchain.output_parsers import CommaSeparatedListOutputParser
output_parser = CommaSeparatedListOutputParser()
template = """List all the colors in a rainbow"""
prompt = PromptTemplate(template=template, input_variables=[], output_parser=output_parser)
llm_chain = LLMChain(prompt=prompt, llm=llm)
llm_chain.predict()
```
<CodeOutputBlock lang="python">
```
'\n\nRed, orange, yellow, green, blue, indigo, violet'
```
</CodeOutputBlock>
With `predict_and_parse`:
```python
llm_chain.predict_and_parse()
```
<CodeOutputBlock lang="python">
```
['Red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']
```
</CodeOutputBlock>
## Initialize from string
You can also construct an `LLMChain` from a string template directly.
```python
template = """Tell me a {adjective} joke about {subject}."""
llm_chain = LLMChain.from_string(llm=llm, template=template)
```
```python
llm_chain.predict(adjective="sad", subject="ducks")
```
<CodeOutputBlock lang="python">
```
'\n\nQ: What did the duck say when his friend died?\nA: Quack, quack, goodbye.'
```
</CodeOutputBlock>