{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "da7d0df7-f07c-462f-bd46-d0426f11f311",
   "metadata": {},
   "source": [
    "## LLM Chain"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3a55e9a1-becf-4357-889e-f365d23362ff",
   "metadata": {},
   "source": [
    "`LLMChain` is one of the most popular ways of querying an LLM object. It formats the prompt template using the input key values provided (and memory key values, if available), passes the formatted string to the LLM, and returns the LLM output. Below we show additional functionality of the `LLMChain` class."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "0e720e34-a0f0-4f1a-9732-43bc1460053a",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'product': 'colorful socks', 'text': '\\n\\nSocktastic!'}"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain import PromptTemplate, OpenAI, LLMChain\n",
    "\n",
    "prompt_template = \"What is a good name for a company that makes {product}?\"\n",
    "\n",
    "llm = OpenAI(temperature=0)\n",
    "llm_chain = LLMChain(\n",
    "    llm=llm,\n",
    "    prompt=PromptTemplate.from_template(prompt_template)\n",
    ")\n",
    "llm_chain(\"colorful socks\")"
   ]
  },
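  {
   "cell_type": "markdown",
   "id": "f3f9d7f1-7c3e-4b9a-9c2e-1a2b3c4d5e6f",
   "metadata": {},
   "source": [
    "For comparison, the `run` method returns just the output string rather than a dict (a quick sketch, shown without a saved output; with `temperature=0` it should produce the same text as above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1c2d3e4f-5a6b-4c7d-8e9f-0a1b2c3d4e5a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# `run` accepts the single input positionally or as a keyword argument\n",
    "# and returns only the output string, e.g. '\\n\\nSocktastic!'\n",
    "llm_chain.run(product=\"colorful socks\")"
   ]
  },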
  {
   "cell_type": "markdown",
   "id": "94304332-6398-4280-a61e-005ba29b5e1e",
   "metadata": {},
   "source": [
    "## Additional ways of running LLM Chain"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4e51981f-cde9-4c05-99e1-446c27994e99",
   "metadata": {},
   "source": [
    "Aside from the `__call__` and `run` methods shared by all `Chain` objects (see [Getting Started](../getting_started.ipynb) to learn more), `LLMChain` offers a few more ways of calling the chain logic:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c08d2356-412d-4327-b8a0-233dcc443e30",
   "metadata": {},
   "source": [
    "- `apply` allows you to run the chain against a list of inputs:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "cf519eb6-2358-4db7-a28a-27433435181e",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'text': '\\n\\nSocktastic!'},\n",
       " {'text': '\\n\\nTechCore Solutions.'},\n",
       " {'text': '\\n\\nFootwear Factory.'}]"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "input_list = [\n",
    "    {\"product\": \"socks\"},\n",
    "    {\"product\": \"computer\"},\n",
    "    {\"product\": \"shoes\"}\n",
    "]\n",
    "\n",
    "llm_chain.apply(input_list)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "add442fb-baf6-40d9-ae8e-4ac1d8251ad0",
   "metadata": {
    "tags": []
   },
   "source": [
    "- `generate` is similar to `apply`, except it returns an `LLMResult` instead of a string. An `LLMResult` often contains useful generation info, such as token usage and finish reason."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "85cbff83-a5cc-40b7-823c-47274ae4117d",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "LLMResult(generations=[[Generation(text='\\n\\nSocktastic!', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nTechCore Solutions.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nFootwear Factory.', generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'prompt_tokens': 36, 'total_tokens': 55, 'completion_tokens': 19}, 'model_name': 'text-davinci-003'})"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "llm_chain.generate(input_list)"
   ]
  },
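  {
   "cell_type": "markdown",
   "id": "7e0c3ad1-2b4f-4a8d-9e5c-6f7a8b9c0d1e",
   "metadata": {},
   "source": [
    "As a minimal sketch (not executed), the fields shown above can be read off the result: each inner list of `generations` holds the `Generation`s for one input, and for OpenAI the `llm_output` dict reports token usage:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2f3a4b5c-6d7e-4f8a-9b0c-1d2e3f4a5b6c",
   "metadata": {},
   "outputs": [],
   "source": [
    "result = llm_chain.generate(input_list)\n",
    "\n",
    "# One inner list per input; take the first Generation's text from each.\n",
    "texts = [generations[0].text for generations in result.generations]\n",
    "print(texts)\n",
    "\n",
    "# llm_output is provider-specific; the OpenAI LLM reports token usage here.\n",
    "print(result.llm_output[\"token_usage\"])"
   ]
  },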
  {
   "cell_type": "markdown",
   "id": "a178173b-b183-432a-a517-250fe3191173",
   "metadata": {},
   "source": [
    "- `predict` is similar to the `run` method, except that the input keys are specified as keyword arguments instead of a Python dict."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "787d9f55-b080-4123-bed2-0598a9cb0466",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'\\n\\nSocktastic!'"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Single input example\n",
    "llm_chain.predict(product=\"colorful socks\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "092a769f-9661-42a0-9da1-19d09ccbc4a7",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'\\n\\nQ: What did the duck say when his friend died?\\nA: Quack, quack, goodbye.'"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Multiple inputs example\n",
    "\n",
    "template = \"\"\"Tell me a {adjective} joke about {subject}.\"\"\"\n",
    "prompt = PromptTemplate(template=template, input_variables=[\"adjective\", \"subject\"])\n",
    "llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0))\n",
    "\n",
    "llm_chain.predict(adjective=\"sad\", subject=\"ducks\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4b72ad22-0a5d-4ca7-9e3f-8c46dc17f722",
   "metadata": {},
   "source": [
    "## Parsing the outputs"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "85a77662-d028-4048-be4b-aa496e2dde22",
   "metadata": {},
   "source": [
    "By default, `LLMChain` does not parse the output even if the underlying `prompt` object has an output parser. If you would like to apply that output parser to the LLM output, use `predict_and_parse` instead of `predict` and `apply_and_parse` instead of `apply`."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b83977f1-847c-45de-b840-f1aff6725f83",
   "metadata": {},
   "source": [
    "With `predict`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "5feb5177-c20b-4909-890b-a64d7e551f55",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'\\n\\nRed, orange, yellow, green, blue, indigo, violet'"
      ]
     },
     "execution_count": 24,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain.output_parsers import CommaSeparatedListOutputParser\n",
    "\n",
    "output_parser = CommaSeparatedListOutputParser()\n",
    "template = \"\"\"List all the colors in a rainbow\"\"\"\n",
    "prompt = PromptTemplate(template=template, input_variables=[], output_parser=output_parser)\n",
    "llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
    "\n",
    "llm_chain.predict()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7b931615-804b-4f34-8086-7bbc2f96b3b2",
   "metadata": {},
   "source": [
    "With `predict_and_parse`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "43a374cd-a179-43e5-9aa0-62f3cbdf510d",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['Red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "llm_chain.predict_and_parse()"
   ]
  },
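  {
   "cell_type": "markdown",
   "id": "3d4e5f6a-7b8c-4d9e-0f1a-2b3c4d5e6f7a",
   "metadata": {},
   "source": [
    "`apply_and_parse` works the same way over a list of inputs (a sketch, not executed; since this prompt has no input variables, each input is an empty dict):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4e5f6a7b-8c9d-4e0f-1a2b-3c4d5e6f7a8b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Returns one parsed output per input, e.g. two lists of color names.\n",
    "llm_chain.apply_and_parse([{}, {}])"
   ]
  },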
  {
   "cell_type": "markdown",
   "id": "8176f619-4e5c-4a02-91ba-e96ebe2aabda",
   "metadata": {},
   "source": [
    "## Initialize from string"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9813ac87-e118-413b-b448-2fefdf2319b8",
   "metadata": {},
   "source": [
    "You can also construct an `LLMChain` from a string template directly."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "ca88ccb1-974e-41c1-81ce-753e3f1234fa",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "template = \"\"\"Tell me a {adjective} joke about {subject}.\"\"\"\n",
    "llm_chain = LLMChain.from_string(llm=llm, template=template)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "4703d1bc-f4fc-44bc-9ea1-b4498835833d",
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'\\n\\nQ: What did the duck say when his friend died?\\nA: Quack, quack, goodbye.'"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "llm_chain.predict(adjective=\"sad\", subject=\"ducks\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}