Improve `llm_chain.ipynb` and `getting_started.ipynb` for chains docs (#3380)

My attempt at improving the `Chain` `Getting Started` docs and the
`LLMChain` docs. Might need some proofreading, as English is not my
first language.

In the LLM examples, I replaced the example use case with a simpler one
(shorter LLM output) to reduce cognitive load.

@@ -2,59 +2,90 @@
"cells": [
{
"cell_type": "markdown",
"id": "d8a5c5d4",
"id": "da7d0df7-f07c-462f-bd46-d0426f11f311",
"metadata": {},
"source": [
"# LLM Chain\n",
"\n",
"This notebook showcases a simple LLM chain."
"## LLM Chain"
]
},
{
"cell_type": "markdown",
"id": "3a55e9a1-becf-4357-889e-f365d23362ff",
"metadata": {},
"source": [
"`LLMChain` is perhaps one of the most popular ways of querying an LLM object. It formats the prompt template using the input key values provided (and also memory key values, if available), passes the formatted string to LLM and returns the LLM output. Below we show additional functionalities of `LLMChain` class."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "835e6978",
"id": "0e720e34-a0f0-4f1a-9732-43bc1460053a",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"{'product': 'colorful socks', 'text': '\\n\\nSocktastic!'}"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain import PromptTemplate, OpenAI, LLMChain\n",
"\n",
"prompt_template = \"What is a good name for a company that makes {product}?\"\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"llm_chain = LLMChain(\n",
" llm=llm,\n",
" prompt=PromptTemplate.from_template(prompt_template)\n",
")\n",
"llm_chain(\"colorful socks\")"
]
},
{
"cell_type": "markdown",
"id": "94304332-6398-4280-a61e-005ba29b5e1e",
"metadata": {},
"outputs": [],
"source": [
"from langchain import PromptTemplate, OpenAI, LLMChain"
"## Additional ways of running LLM Chain"
]
},
{
"cell_type": "markdown",
"id": "06bcb078",
"id": "4e51981f-cde9-4c05-99e1-446c27994e99",
"metadata": {},
"source": [
"## Single Input\n",
"\n",
"First, lets go over an example using a single input"
"Aside from `__call__` and `run` methods shared by all `Chain` object (see [Getting Started](../getting_started.ipynb) to learn more), `LLMChain` offers a few more ways of calling the chain logic:"
]
},
{
"cell_type": "markdown",
"id": "c08d2356-412d-4327-b8a0-233dcc443e30",
"metadata": {},
"source": [
"- `apply` allows you run the chain against a list of inputs:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "51a54c4d",
"metadata": {},
"id": "cf519eb6-2358-4db7-a28a-27433435181e",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mQuestion: What NFL team won the Super Bowl in the year Justin Beiber was born?\n",
"\n",
"Answer: Let's think step by step.\u001B[0m\n",
"\n",
"\u001B[1m> Finished LLMChain chain.\u001B[0m\n"
]
},
{
"data": {
"text/plain": [
"' Justin Bieber was born in 1994, so the NFL team that won the Super Bowl in 1994 was the Dallas Cowboys.'"
"[{'text': '\\n\\nSocktastic!'},\n",
" {'text': '\\n\\nTechCore Solutions.'},\n",
" {'text': '\\n\\nFootwear Factory.'}]"
]
},
"execution_count": 2,
@@ -63,112 +94,247 @@
}
],
"source": [
"template = \"\"\"Question: {question}\n",
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
"llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0), verbose=True)\n",
"\n",
"question = \"What NFL team won the Super Bowl in the year Justin Beiber was born?\"\n",
"input_list = [\n",
" {\"product\": \"socks\"},\n",
" {\"product\": \"computer\"},\n",
" {\"product\": \"shoes\"}\n",
"]\n",
"\n",
"llm_chain.predict(question=question)"
"llm_chain.apply(input_list)"
]
},
{
"cell_type": "markdown",
"id": "79c3ec4d",
"metadata": {},
"id": "add442fb-baf6-40d9-ae8e-4ac1d8251ad0",
"metadata": {
"tags": []
},
"source": [
"## Multiple Inputs\n",
"Now lets go over an example using multiple inputs."
"- `generate` is similar to `apply`, except it return an `LLMResult` instead of string. `LLMResult` often contains useful generation such as token usages and finish reason."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "03dd6918",
"id": "85cbff83-a5cc-40b7-823c-47274ae4117d",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"LLMResult(generations=[[Generation(text='\\n\\nSocktastic!', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nTechCore Solutions.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nFootwear Factory.', generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'prompt_tokens': 36, 'total_tokens': 55, 'completion_tokens': 19}, 'model_name': 'text-davinci-003'})"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_chain.generate(input_list)"
]
},
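The fields of the `LLMResult` above can be accessed directly. A minimal sketch (not part of the original notebook), reusing the `llm_chain` and `input_list` defined in the cells above:

```python
# Sketch: inspecting the LLMResult returned by generate()
# (assumes the llm_chain and input_list defined in the cells above).
result = llm_chain.generate(input_list)

print(result.generations[0][0].text)     # '\n\nSocktastic!'
print(result.llm_output["token_usage"])  # {'prompt_tokens': 36, 'total_tokens': 55, ...}
```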
{
"cell_type": "markdown",
"id": "a178173b-b183-432a-a517-250fe3191173",
"metadata": {},
"source": [
"- `predict` is similar to `run` method except in 2 ways:\n",
" - Input key is specified as keyword argument instead of a Python dict\n",
" - It supports multiple input keys."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "787d9f55-b080-4123-bed2-0598a9cb0466",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
"Prompt after formatting:\n",
"\u001B[32;1m\u001B[1;3mWrite a sad poem about ducks.\u001B[0m\n",
"\n",
"\u001B[1m> Finished LLMChain chain.\u001B[0m\n"
]
},
"data": {
"text/plain": [
"'\\n\\nSocktastic!'"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Single input example\n",
"llm_chain.predict(product=\"colorful socks\")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "092a769f-9661-42a0-9da1-19d09ccbc4a7",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"\"\\n\\nThe ducks swim in the pond,\\nTheir feathers so soft and warm,\\nBut they can't help but feel so forlorn.\\n\\nTheir quacks echo in the air,\\nBut no one is there to hear,\\nFor they have no one to share.\\n\\nThe ducks paddle around in circles,\\nTheir heads hung low in despair,\\nFor they have no one to care.\\n\\nThe ducks look up to the sky,\\nBut no one is there to see,\\nFor they have no one to be.\\n\\nThe ducks drift away in the night,\\nTheir hearts filled with sorrow and pain,\\nFor they have no one to gain.\""
"'\\n\\nQ: What did the duck say when his friend died?\\nA: Quack, quack, goodbye.'"
]
},
"execution_count": 3,
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"template = \"\"\"Write a {adjective} poem about {subject}.\"\"\"\n",
"# Multiple inputs example\n",
"\n",
"template = \"\"\"Tell me a {adjective} joke about {subject}.\"\"\"\n",
"prompt = PromptTemplate(template=template, input_variables=[\"adjective\", \"subject\"])\n",
"llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0), verbose=True)\n",
"llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0))\n",
"\n",
"llm_chain.predict(adjective=\"sad\", subject=\"ducks\")"
]
},
{
"cell_type": "markdown",
"id": "672f59d4",
"id": "4b72ad22-0a5d-4ca7-9e3f-8c46dc17f722",
"metadata": {},
"source": [
"## From string\n",
"You can also construct an LLMChain from a string template directly."
"## Parsing the outputs"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "f8bc262e",
"cell_type": "markdown",
"id": "85a77662-d028-4048-be4b-aa496e2dde22",
"metadata": {},
"outputs": [],
"source": [
"template = \"\"\"Write a {adjective} poem about {subject}.\"\"\"\n",
"llm_chain = LLMChain.from_string(llm=OpenAI(temperature=0), template=template)\n"
"By default, `LLMChain` does not parse the output even if the underlying `prompt` object has an output parser. If you would like to apply that output parser on the LLM output, use `predict_and_parse` instead of `predict` and `apply_and_parse` instead of `apply`. "
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "cb164a76",
"cell_type": "markdown",
"id": "b83977f1-847c-45de-b840-f1aff6725f83",
"metadata": {},
"source": [
"With `predict`:"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "5feb5177-c20b-4909-890b-a64d7e551f55",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"\"\\n\\nThe ducks swim in the pond,\\nTheir feathers so soft and warm,\\nBut they can't help but feel so forlorn.\\n\\nTheir quacks echo in the air,\\nBut no one is there to hear,\\nFor they have no one to share.\\n\\nThe ducks paddle around in circles,\\nTheir heads hung low in despair,\\nFor they have no one to care.\\n\\nThe ducks look up to the sky,\\nBut no one is there to see,\\nFor they have no one to be.\\n\\nThe ducks drift away in the night,\\nTheir hearts filled with sorrow and pain,\\nFor they have no one to gain.\""
"'\\n\\nRed, orange, yellow, green, blue, indigo, violet'"
]
},
"execution_count": 4,
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_chain.predict(adjective=\"sad\", subject=\"ducks\")"
"from langchain.output_parsers import CommaSeparatedListOutputParser\n",
"\n",
"output_parser = CommaSeparatedListOutputParser()\n",
"template = \"\"\"List all the colors in a rainbow\"\"\"\n",
"prompt = PromptTemplate(template=template, input_variables=[], output_parser=output_parser)\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"\n",
"llm_chain.predict()"
]
},
{
"cell_type": "markdown",
"id": "7b931615-804b-4f34-8086-7bbc2f96b3b2",
"metadata": {},
"source": [
"With `predict_and_parser`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9f0adbc7",
"execution_count": 25,
"id": "43a374cd-a179-43e5-9aa0-62f3cbdf510d",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"['Red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_chain.predict_and_parse()"
]
},
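`apply_and_parse` follows the same pattern over a batch of inputs. A minimal sketch, reusing the parsing `llm_chain` above (its prompt takes no input variables, so each input is an empty dict):

```python
# Sketch: parse the output of every call in a batch
# (assumes the llm_chain with CommaSeparatedListOutputParser above).
results = llm_chain.apply_and_parse([{}, {}])
# -> [['Red', 'orange', 'yellow', ...], ['Red', 'orange', 'yellow', ...]]
```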
{
"cell_type": "markdown",
"id": "8176f619-4e5c-4a02-91ba-e96ebe2aabda",
"metadata": {},
"source": [
"## Initialize from string"
]
},
{
"cell_type": "markdown",
"id": "9813ac87-e118-413b-b448-2fefdf2319b8",
"metadata": {},
"source": [
"You can also construct an LLMChain from a string template directly."
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "ca88ccb1-974e-41c1-81ce-753e3f1234fa",
"metadata": {
"tags": []
},
"outputs": [],
"source": []
"source": [
"template = \"\"\"Tell me a {adjective} joke about {subject}.\"\"\"\n",
"llm_chain = LLMChain.from_string(llm=llm, template=template)"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "4703d1bc-f4fc-44bc-9ea1-b4498835833d",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\nQ: What did the duck say when his friend died?\\nA: Quack, quack, goodbye.'"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_chain.predict(adjective=\"sad\", subject=\"ducks\")"
]
}
],
"metadata": {
@@ -187,7 +353,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.10.10"
}
},
"nbformat": 4,

@@ -22,7 +22,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Query an LLM with the `LLMChain`\n",
"## Quick start: Using `LLMChain`\n",
"\n",
"The `LLMChain` is a simple chain that takes in a prompt template, formats it with the user input and returns the response from an LLM.\n",
"\n",
@@ -31,7 +31,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 2,
"metadata": {
"tags": []
},
@@ -56,7 +56,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"metadata": {
"tags": []
},
@@ -67,7 +67,7 @@
"text": [
"\n",
"\n",
"Rainbow Socks Co.\n"
"Cheerful Toes.\n"
]
}
],
@@ -88,7 +88,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 4,
"metadata": {
"tags": []
},
@@ -97,9 +97,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"Rainbow Threads\n"
"Rainbow Footwear Co.\n"
]
}
],
@@ -125,7 +123,231 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This is one of the simpler types of chains, but understanding how it works will set you up well for working with more complex chains."
"## Different ways of calling chains\n",
"\n",
"All classes inherited from `Chain` offer a few ways of running chain logic. The most direct one is by using `__call__`:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'adjective': 'lame',\n",
" 'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat = ChatOpenAI(temperature=0)\n",
"prompt_template = \"Tell me a {adjective} joke\"\n",
"llm_chain = LLMChain(\n",
" llm=chat,\n",
" prompt=PromptTemplate.from_template(prompt_template)\n",
")\n",
"\n",
"llm_chain(inputs={\"adjective\":\"lame\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"By default, `__call__` returns both the input and output key values. You can configure it to only return output key values by setting `return_only_outputs` to `True`."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_chain(\"lame\", return_only_outputs=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If the `Chain` only takes one input key (i.e. only has one element in its `input_variables`), you can use `run` method. Note that `run` outputs a string instead of a dictionary."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Why did the tomato turn red? Because it saw the salad dressing!'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_chain.run({\"adjective\":\"lame\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Besides, in the case of one input key, you can input the string directly without specifying the input mapping."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'adjective': 'lame',\n",
" 'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# These two are equivalent\n",
"llm_chain.run({\"adjective\":\"lame\"})\n",
"llm_chain.run(\"lame\")\n",
"\n",
"# These two are also equivalent\n",
"llm_chain(\"lame\")\n",
"llm_chain({\"adjective\":\"lame\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Tips: You can easily integrate a `Chain` object as a `Tool` in your `Agent` via its `run` method. See an example [here](../agents/tools/custom_tools.ipynb)."
]
},
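A minimal sketch of that pattern, assuming the `llm_chain` defined above; the tool name and description here are illustrative, not taken from the linked example:

```python
# Sketch: wrap a chain's run method as an agent Tool
# (tool name and description are made up for illustration).
from langchain.agents import Tool

joke_tool = Tool(
    name="Joke Generator",
    func=llm_chain.run,
    description="Tells a joke about the topic given as input."
)

print(joke_tool.run("lame"))
```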
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Add memory to chains\n",
"\n",
"`Chain` supports taking a `BaseMemory` object as its `memory` argument, allowing `Chain` object to persist data across multiple calls. In other words, it makes `Chain` a stateful object."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'The next four colors of a rainbow are green, blue, indigo, and violet.'"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import ConversationChain\n",
"from langchain.memory import ConversationBufferMemory\n",
"\n",
"conversation = ConversationChain(\n",
" llm=chat,\n",
" memory=ConversationBufferMemory()\n",
")\n",
"\n",
"conversation.run(\"Answer briefly. What are the first 3 colors of a rainbow?\")\n",
"# -> The first three colors of a rainbow are red, orange, and yellow.\n",
"conversation.run(\"And the next 4?\")\n",
"# -> The next four colors of a rainbow are green, blue, indigo, and violet."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Essentially, `BaseMemory` defines an interface of how `langchain` stores memory. It allows reading of stored data through `load_memory_variables` method and storing new data through `save_context` method. You can learn more about it in [Memory](../memory/getting_started.ipynb) section."
]
},
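A minimal sketch of that interface in isolation, using the `ConversationBufferMemory` imported above:

```python
# Sketch: the BaseMemory read/write interface outside of any chain.
memory = ConversationBufferMemory()

# save_context stores an input/output pair...
memory.save_context({"input": "Hi there"}, {"output": "Hello! How can I help?"})

# ...and load_memory_variables reads it back as memory variables.
print(memory.load_memory_variables({}))
# -> {'history': 'Human: Hi there\nAI: Hello! How can I help?'}
```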
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Debug Chain\n",
"\n",
"It can be hard to debug `Chain` object solely from its output as most `Chain` objects involve a fair amount of input prompt preprocessing and LLM output post-processing. Setting `verbose` to `True` will print out some internal states of the `Chain` object while it is being ran."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThe following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n",
"\n",
"Current conversation:\n",
"\n",
"Human: What is ChatGPT?\n",
"AI:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'ChatGPT is an AI language model developed by OpenAI. It is based on the GPT-3 architecture and is capable of generating human-like responses to text prompts. ChatGPT has been trained on a massive amount of text data and can understand and respond to a wide range of topics. It is often used for chatbots, virtual assistants, and other conversational AI applications.'"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation = ConversationChain(\n",
" llm=chat,\n",
" memory=ConversationBufferMemory(),\n",
" verbose=True\n",
")\n",
"conversation.run(\"What is ChatGPT?\")"
]
},
{
@@ -143,7 +365,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
@@ -163,7 +385,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 15,
"metadata": {},
"outputs": [
{
@@ -173,17 +395,15 @@
"\n",
"\n",
"\u001b[1m> Entering new SimpleSequentialChain chain...\u001b[0m\n",
"\u001b[36;1m\u001b[1;3m\n",
"\n",
"Cheerful Toes.\u001b[0m\n",
"\u001b[36;1m\u001b[1;3mRainbow Socks Co.\u001b[0m\n",
"\u001b[33;1m\u001b[1;3m\n",
"\n",
"\"Spread smiles from your toes!\"\u001b[0m\n",
"\"Step into Color with Rainbow Socks Co!\"\u001b[0m\n",
"\n",
"\u001b[1m> Finished SimpleSequentialChain chain.\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\n",
"\"Spread smiles from your toes!\"\n"
"\"Step into Color with Rainbow Socks Co!\"\n"
]
}
],
@@ -214,7 +434,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
@@ -253,7 +473,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 17,
"metadata": {},
"outputs": [
{
@@ -263,9 +483,9 @@
"Concatenated output:\n",
"\n",
"\n",
"Rainbow Socks Co.\n",
"Kaleidoscope Socks.\n",
"\n",
"\"Step Into Colorful Comfort!\"\n"
"\"Put Some Color in Your Step!\"\n"
]
}
],
@@ -311,7 +531,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.10.10"
},
"vscode": {
"interpreter": {
