{
"cells": [
{
"cell_type": "markdown",
"id": "da7d0df7-f07c-462f-bd46-d0426f11f311",
"metadata": {},
"source": [
"## LLM Chain"
]
},
{
"cell_type": "markdown",
"id": "3a55e9a1-becf-4357-889e-f365d23362ff",
"metadata": {},
"source": [
"`LLMChain` is perhaps one of the most popular ways of querying an LLM object. It formats the prompt template using the input key values provided (and also memory key values, if available), passes the formatted string to LLM and returns the LLM output. Below we show additional functionalities of `LLMChain` class."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "0e720e34-a0f0-4f1a-9732-43bc1460053a",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"{'product': 'colorful socks', 'text': '\\n\\nSocktastic!'}"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain import PromptTemplate, OpenAI, LLMChain\n",
"\n",
"prompt_template = \"What is a good name for a company that makes {product}?\"\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"llm_chain = LLMChain(\n",
" llm=llm,\n",
" prompt=PromptTemplate.from_template(prompt_template)\n",
")\n",
"llm_chain(\"colorful socks\")"
]
},
{
"cell_type": "markdown",
"id": "94304332-6398-4280-a61e-005ba29b5e1e",
"metadata": {},
"source": [
"## Additional ways of running LLM Chain"
]
},
{
"cell_type": "markdown",
"id": "4e51981f-cde9-4c05-99e1-446c27994e99",
"metadata": {},
"source": [
"Aside from `__call__` and `run` methods shared by all `Chain` object (see [Getting Started](../getting_started.ipynb) to learn more), `LLMChain` offers a few more ways of calling the chain logic:"
]
},
{
"cell_type": "markdown",
"id": "c08d2356-412d-4327-b8a0-233dcc443e30",
"metadata": {},
"source": [
"- `apply` allows you run the chain against a list of inputs:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "cf519eb6-2358-4db7-a28a-27433435181e",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"[{'text': '\\n\\nSocktastic!'},\n",
" {'text': '\\n\\nTechCore Solutions.'},\n",
" {'text': '\\n\\nFootwear Factory.'}]"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"input_list = [\n",
" {\"product\": \"socks\"},\n",
" {\"product\": \"computer\"},\n",
" {\"product\": \"shoes\"}\n",
"]\n",
"\n",
"llm_chain.apply(input_list)"
]
},
{
"cell_type": "markdown",
"id": "add442fb-baf6-40d9-ae8e-4ac1d8251ad0",
"metadata": {
"tags": []
},
"source": [
"- `generate` is similar to `apply`, except it return an `LLMResult` instead of string. `LLMResult` often contains useful generation such as token usages and finish reason."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "85cbff83-a5cc-40b7-823c-47274ae4117d",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"LLMResult(generations=[[Generation(text='\\n\\nSocktastic!', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nTechCore Solutions.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nFootwear Factory.', generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'prompt_tokens': 36, 'total_tokens': 55, 'completion_tokens': 19}, 'model_name': 'text-davinci-003'})"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_chain.generate(input_list)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "a178173b-b183-432a-a517-250fe3191173",
"metadata": {},
"source": [
"- `predict` is similar to `run` method except that the input keys are specified as keyword arguments instead of a Python dict."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "787d9f55-b080-4123-bed2-0598a9cb0466",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\nSocktastic!'"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Single input example\n",
"llm_chain.predict(product=\"colorful socks\")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "092a769f-9661-42a0-9da1-19d09ccbc4a7",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\nQ: What did the duck say when his friend died?\\nA: Quack, quack, goodbye.'"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Multiple inputs example\n",
"\n",
"template = \"\"\"Tell me a {adjective} joke about {subject}.\"\"\"\n",
"prompt = PromptTemplate(template=template, input_variables=[\"adjective\", \"subject\"])\n",
"llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0))\n",
"\n",
"llm_chain.predict(adjective=\"sad\", subject=\"ducks\")"
]
},
{
"cell_type": "markdown",
"id": "4b72ad22-0a5d-4ca7-9e3f-8c46dc17f722",
"metadata": {},
"source": [
"## Parsing the outputs"
]
},
{
"cell_type": "markdown",
"id": "85a77662-d028-4048-be4b-aa496e2dde22",
"metadata": {},
"source": [
"By default, `LLMChain` does not parse the output even if the underlying `prompt` object has an output parser. If you would like to apply that output parser on the LLM output, use `predict_and_parse` instead of `predict` and `apply_and_parse` instead of `apply`. "
]
},
{
"cell_type": "markdown",
"id": "b83977f1-847c-45de-b840-f1aff6725f83",
"metadata": {},
"source": [
"With `predict`:"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "5feb5177-c20b-4909-890b-a64d7e551f55",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\nRed, orange, yellow, green, blue, indigo, violet'"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.output_parsers import CommaSeparatedListOutputParser\n",
"\n",
"output_parser = CommaSeparatedListOutputParser()\n",
"template = \"\"\"List all the colors in a rainbow\"\"\"\n",
"prompt = PromptTemplate(template=template, input_variables=[], output_parser=output_parser)\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"\n",
"llm_chain.predict()"
]
},
{
"cell_type": "markdown",
"id": "7b931615-804b-4f34-8086-7bbc2f96b3b2",
"metadata": {},
"source": [
"With `predict_and_parser`:"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "43a374cd-a179-43e5-9aa0-62f3cbdf510d",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"['Red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_chain.predict_and_parse()"
]
},
{
"cell_type": "markdown",
"id": "8176f619-4e5c-4a02-91ba-e96ebe2aabda",
"metadata": {},
"source": [
"## Initialize from string"
]
},
{
"cell_type": "markdown",
"id": "9813ac87-e118-413b-b448-2fefdf2319b8",
"metadata": {},
"source": [
"You can also construct an LLMChain from a string template directly."
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "ca88ccb1-974e-41c1-81ce-753e3f1234fa",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"template = \"\"\"Tell me a {adjective} joke about {subject}.\"\"\"\n",
"llm_chain = LLMChain.from_string(llm=llm, template=template)"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "4703d1bc-f4fc-44bc-9ea1-b4498835833d",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\nQ: What did the duck say when his friend died?\\nA: Quack, quack, goodbye.'"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_chain.predict(adjective=\"sad\", subject=\"ducks\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}