{
"cells": [
{
"cell_type": "raw",
"id": "bc346658-6820-413a-bd8f-11bd3082fe43",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 0.5\n",
"title: Why use LCEL\n",
"---\n",
"\n",
"import { ColumnContainer, Column } from \\\"@theme/Columns\\\";"
]
},
{
"cell_type": "markdown",
"id": "919a5ae2-ed21-4923-b98f-723c111bac67",
"metadata": {},
"source": [
":::tip \n",
"We recommend reading the LCEL [Get started](/docs/expression_language/get_started) section first.\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "f331037f-be3f-4782-856f-d55dab952488",
"metadata": {},
"source": [
"LCEL makes it easy to build complex chains from basic components. It does this by providing:\n",
"1. **A unified interface**: Every LCEL object implements the `Runnable` interface, which defines a common set of invocation methods (`invoke`, `batch`, `stream`, `ainvoke`, ...). This makes it possible for chains of LCEL objects to also automatically support these invocations. That is, every chain of LCEL objects is itself an LCEL object.\n",
"2. **Composition primitives**: LCEL provides a number of primitives that make it easy to compose chains, parallelize components, add fallbacks, dynamically configure chain internal, and more.\n",
"\n",
"To better understand the value of LCEL, it's helpful to see it in action and think about how we might recreate similar functionality without it. In this walkthrough we'll do just that with our [basic example](/docs/expression_language/get_started#basic_example) from the get started section. We'll take our simple prompt + model chain, which under the hood already defines a lot of functionality, and see what it would take to recreate all of it."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dbeac2b8-c441-4d8d-b313-1de0ab9c7e51",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\"Tell me a short joke about {topic}\")\n",
"model = ChatOpenAI(model=\"gpt-3.5-turbo\")\n",
"output_parser = StrOutputParser()\n",
"\n",
"chain = prompt | model | output_parser"
]
},
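{
"cell_type": "markdown",
"id": "1f7a3c2e-5d4b-4a6f-8e9c-aa11bb22cc01",
"metadata": {},
"source": [
"Because the composed `chain` is itself a `Runnable`, it can be dropped into a larger chain like any other LCEL component. Here's a minimal sketch of that idea (the follow-up prompt is made up purely for illustration):\n",
"\n",
"```python\n",
"analysis_prompt = ChatPromptTemplate.from_template(\"Is this a funny joke? {joke}\")\n",
"\n",
"# The dict runs `chain` on the input and passes {\"joke\": ...} to the next prompt\n",
"composed_chain = {\"joke\": chain} | analysis_prompt | model | output_parser\n",
"\n",
"composed_chain.invoke(\"ice cream\")\n",
"```"
]
},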
{
"cell_type": "markdown",
"id": "e3621b62-a037-42b8-8faa-59575608bb8b",
"metadata": {},
"source": [
"## Invoke\n",
"In the simplest case, we just want to pass in a topic string and get back a joke string:\n",
"\n",
"<ColumnContainer>\n",
"\n",
"<Column>\n",
"\n",
"#### Without LCEL\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e628905c-430e-4e4a-9d7c-c91d2f42052e",
"metadata": {},
"outputs": [],
"source": [
"from typing import List\n",
"\n",
"import openai\n",
"\n",
"\n",
"prompt_template = \"Tell me a short joke about {topic}\"\n",
"client = openai.OpenAI()\n",
"\n",
"def call_chat_model(messages: List[dict]) -> str:\n",
" response = client.chat.completions.create(\n",
" model=\"gpt-3.5-turbo\", \n",
" messages=messages,\n",
" )\n",
" return response.choices[0].message.content\n",
"\n",
"def invoke_chain(topic: str) -> str:\n",
" prompt_value = prompt_template.format(topic=topic)\n",
" messages = [{\"role\": \"user\", \"content\": prompt_value}]\n",
" return call_chat_model(messages)\n",
"\n",
"invoke_chain(\"ice cream\")"
]
},
{
"cell_type": "markdown",
"id": "cdc3b527-c09e-4c77-9711-c3cc4506cd95",
"metadata": {},
"source": [
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"#### LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0d2a7cf8-1bc7-405c-bb0d-f2ab2ba3b6ab",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\n",
" \"Tell me a short joke about {topic}\"\n",
")\n",
"output_parser = StrOutputParser()\n",
"model = ChatOpenAI(model=\"gpt-3.5-turbo\")\n",
"chain = (\n",
" {\"topic\": RunnablePassthrough()} \n",
" | prompt\n",
" | model\n",
" | output_parser\n",
")\n",
"\n",
"chain.invoke(\"ice cream\")"
]
},
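{
"cell_type": "markdown",
"id": "2b8c4d3f-6e5a-4b7c-9f0d-aa11bb22cc02",
"metadata": {},
"source": [
"Note the `{\"topic\": RunnablePassthrough()}` step: a dict used inside a chain is coerced into a runnable that invokes each value on the input and emits a dict of results, so the raw string passed to `invoke` ends up in the prompt's `topic` variable. A minimal sketch of what that first step does on its own:\n",
"\n",
"```python\n",
"from langchain_core.runnables import RunnableParallel\n",
"\n",
"# Equivalent to the dict shorthand used in the chain above\n",
"RunnableParallel(topic=RunnablePassthrough()).invoke(\"ice cream\")\n",
"# -> {\"topic\": \"ice cream\"}\n",
"```"
]
},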
{
"cell_type": "markdown",
"id": "3c0b0513-77b8-4371-a20e-3e487cec7e7f",
"metadata": {},
"source": [
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"## Stream\n",
"If we want to stream results instead, we'll need to change our function:\n",
"\n",
"<ColumnContainer>\n",
"<Column>\n",
"\n",
"#### Without LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4f2cc6dc-d70a-4c13-9258-452f14290da6",
"metadata": {},
"outputs": [],
"source": [
"from typing import Iterator\n",
"\n",
"\n",
"def stream_chat_model(messages: List[dict]) -> Iterator[str]:\n",
" stream = client.chat.completions.create(\n",
" model=\"gpt-3.5-turbo\",\n",
" messages=messages,\n",
" stream=True,\n",
" )\n",
" for response in stream:\n",
" content = response.choices[0].delta.content\n",
" if content is not None:\n",
" yield content\n",
"\n",
"def stream_chain(topic: str) -> Iterator[str]:\n",
" prompt_value = prompt.format(topic=topic)\n",
" return stream_chat_model([{\"role\": \"user\", \"content\": prompt_value}])\n",
"\n",
"\n",
"for chunk in stream_chain(\"ice cream\"):\n",
" print(chunk, end=\"\", flush=True)"
]
},
{
"cell_type": "markdown",
"id": "f8e36b0e-c7dc-4130-a51b-189d4b756c7f",
"metadata": {},
"source": [
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"#### LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "173e1a9c-2a18-4669-b0de-136f39197786",
"metadata": {},
"outputs": [],
"source": [
"for chunk in chain.stream(\"ice cream\"):\n",
" print(chunk, end=\"\", flush=True)"
]
},
{
"cell_type": "markdown",
"id": "b9b41e78-ddeb-44d0-a58b-a0ea0c99a761",
"metadata": {},
"source": [
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"## Batch\n",
"\n",
"If we want to run on a batch of inputs in parallel, we'll again need a new function:\n",
"\n",
"<ColumnContainer>\n",
"<Column>\n",
"\n",
"#### Without LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6b492f13-73a6-48ed-8d4f-9ad634da9988",
"metadata": {},
"outputs": [],
"source": [
"from concurrent.futures import ThreadPoolExecutor\n",
"\n",
"\n",
"def batch_chain(topics: list) -> list:\n",
" with ThreadPoolExecutor(max_workers=5) as executor:\n",
" return list(executor.map(invoke_chain, topics))\n",
"\n",
"batch_chain([\"ice cream\", \"spaghetti\", \"dumplings\"])"
]
},
{
"cell_type": "markdown",
"id": "9b3e9d34-6775-43c1-93d8-684b58e341ab",
"metadata": {},
"source": [
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"#### LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8f55b292-4e97-4d09-8e71-c71b4d853526",
"metadata": {},
"outputs": [],
"source": [
"chain.batch([\"ice cream\", \"spaghetti\", \"dumplings\"])"
]
},
{
"cell_type": "markdown",
"id": "cc5ba36f-eec1-4fc1-8cfe-fa242a7f7809",
"metadata": {},
"source": [
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"## Async\n",
"\n",
"If we need an asynchronous version:\n",
"\n",
"<ColumnContainer>\n",
"<Column>\n",
"\n",
"#### Without LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "eabe6621-e815-41e3-9c9d-5aa561a69835",
"metadata": {},
"outputs": [],
"source": [
"async_client = openai.AsyncOpenAI()\n",
"\n",
"async def acall_chat_model(messages: List[dict]) -> str:\n",
" response = await async_client.chat.completions.create(\n",
" model=\"gpt-3.5-turbo\", \n",
" messages=messages,\n",
" )\n",
" return response.choices[0].message.content\n",
"\n",
"async def ainvoke_chain(topic: str) -> str:\n",
" prompt_value = prompt_template.format(topic=topic)\n",
" messages = [{\"role\": \"user\", \"content\": prompt_value}]\n",
" return await acall_chat_model(messages)"
]
},
{
"cell_type": "markdown",
"id": "2f209290-498c-4c17-839e-ee9002919846",
"metadata": {},
"source": [
"```python\n",
"await ainvoke_chain(\"ice cream\")\n",
"```\n",
"\n",
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"#### LCEL\n",
"\n",
"```python\n",
"chain.ainvoke(\"ice cream\")\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "f6888245-1ebe-4768-a53b-e1fef6a8b379",
"metadata": {},
"source": [
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"## LLM instead of chat model\n",
"\n",
"If we want to use a completion endpoint instead of a chat endpoint: \n",
"\n",
"<ColumnContainer>\n",
"<Column>\n",
"\n",
"#### Without LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9aca946b-acaa-4f7e-a3d0-ad8e3225e7f2",
"metadata": {},
"outputs": [],
"source": [
"def call_llm(prompt_value: str) -> str:\n",
" response = client.completions.create(\n",
" model=\"gpt-3.5-turbo-instruct\",\n",
" prompt=prompt_value,\n",
" )\n",
" return response.choices[0].text\n",
"\n",
"def invoke_llm_chain(topic: str) -> str:\n",
" prompt_value = prompt_template.format(topic=topic)\n",
" return call_llm(prompt_value)\n",
"\n",
"invoke_llm_chain(\"ice cream\")"
]
},
{
"cell_type": "markdown",
"id": "45342cd6-58c2-4543-9392-773e05ef06e7",
"metadata": {},
"source": [
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"#### LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d56efc0c-88e0-4cf8-a46a-e8e9b9cd6805",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import OpenAI\n",
"\n",
"llm = OpenAI(model=\"gpt-3.5-turbo-instruct\")\n",
"llm_chain = (\n",
" {\"topic\": RunnablePassthrough()} \n",
" | prompt\n",
" | llm\n",
" | output_parser\n",
")\n",
"\n",
"llm_chain.invoke(\"ice cream\")"
]
},
{
"cell_type": "markdown",
"id": "ca115eaf-59ef-45c1-aac1-e8b0ce7db250",
"metadata": {},
"source": [
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"## Different model provider\n",
"\n",
"If we want to use Anthropic instead of OpenAI: \n",
"\n",
"<ColumnContainer>\n",
"<Column>\n",
"\n",
"#### Without LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cde2ceb0-f65e-487b-9a32-137b0e9d79d5",
"metadata": {},
"outputs": [],
"source": [
"import anthropic\n",
"\n",
"anthropic_template = f\"Human:\\n\\n{prompt_template}\\n\\nAssistant:\"\n",
"anthropic_client = anthropic.Anthropic()\n",
"\n",
"def call_anthropic(prompt_value: str) -> str:\n",
" response = anthropic_client.completions.create(\n",
" model=\"claude-2\",\n",
" prompt=prompt_value,\n",
" max_tokens_to_sample=256,\n",
" )\n",
" return response.completion \n",
"\n",
"def invoke_anthropic_chain(topic: str) -> str:\n",
" prompt_value = anthropic_template.format(topic=topic)\n",
" return call_anthropic(prompt_value)\n",
"\n",
"invoke_anthropic_chain(\"ice cream\")"
]
},
{
"cell_type": "markdown",
"id": "52a0c9f8-e316-42e1-af85-cabeba4b7059",
"metadata": {},
"source": [
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"#### LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b3b800d1-5954-41a4-80b0-f00a7908961e",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatAnthropic\n",
"\n",
"anthropic = ChatAnthropic(model=\"claude-2\")\n",
"anthropic_chain = (\n",
" {\"topic\": RunnablePassthrough()} \n",
" | prompt \n",
" | anthropic\n",
" | output_parser\n",
")\n",
"\n",
"anthropic_chain.invoke(\"ice cream\")"
]
},
{
"cell_type": "markdown",
"id": "d7a91eee-d017-420d-b215-f663dcbf8ed2",
"metadata": {},
"source": [
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"## Runtime configurability\n",
"\n",
"If we wanted to make the choice of chat model or LLM configurable at runtime:\n",
"\n",
"<ColumnContainer>\n",
"<Column>\n",
"\n",
"#### Without LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d0ef10e4-8e8e-463a-bd0f-59b0715e79b6",
"metadata": {},
"outputs": [],
"source": [
"def invoke_configurable_chain(\n",
" topic: str, \n",
" *, \n",
" model: str = \"chat_openai\"\n",
") -> str:\n",
" if model == \"chat_openai\":\n",
" return invoke_chain(topic)\n",
" elif model == \"openai\":\n",
" return invoke_llm_chain(topic)\n",
" elif model == \"anthropic\":\n",
" return invoke_anthropic_chain(topic)\n",
" else:\n",
" raise ValueError(\n",
" f\"Received invalid model '{model}'.\"\n",
" \" Expected one of chat_openai, openai, anthropic\"\n",
" )\n",
"\n",
"def stream_configurable_chain(\n",
" topic: str, \n",
" *, \n",
" model: str = \"chat_openai\"\n",
") -> Iterator[str]:\n",
" if model == \"chat_openai\":\n",
" return stream_chain(topic)\n",
" elif model == \"openai\":\n",
" # Note we haven't implemented this yet.\n",
" return stream_llm_chain(topic)\n",
" elif model == \"anthropic\":\n",
" # Note we haven't implemented this yet\n",
" return stream_anthropic_chain(topic)\n",
" else:\n",
" raise ValueError(\n",
" f\"Received invalid model '{model}'.\"\n",
" \" Expected one of chat_openai, openai, anthropic\"\n",
" )\n",
"\n",
"def batch_configurable_chain(\n",
" topics: List[str], \n",
" *, \n",
" model: str = \"chat_openai\"\n",
") -> List[str]:\n",
" # You get the idea\n",
" ...\n",
"\n",
"async def abatch_configurable_chain(\n",
" topics: List[str], \n",
" *, \n",
" model: str = \"chat_openai\"\n",
") -> List[str]:\n",
" ...\n",
"\n",
"invoke_configurable_chain(\"ice cream\", model=\"openai\")\n",
"stream = stream_configurable_chain(\n",
" \"ice_cream\", \n",
" model=\"anthropic\"\n",
")\n",
"for chunk in stream:\n",
" print(chunk, end=\"\", flush=True)\n",
"\n",
"# batch_configurable_chain([\"ice cream\", \"spaghetti\", \"dumplings\"])\n",
"# await ainvoke_configurable_chain(\"ice cream\")"
]
},
{
"cell_type": "markdown",
"id": "d1530c5c-6635-4599-9483-6df357ca2d64",
"metadata": {},
"source": [
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"#### With LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "76809d14-e77a-4125-a2ea-efbebf0b47cc",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import ConfigurableField\n",
"\n",
"\n",
"configurable_model = model.configurable_alternatives(\n",
" ConfigurableField(id=\"model\"), \n",
" default_key=\"chat_openai\", \n",
" openai=llm,\n",
" anthropic=anthropic,\n",
")\n",
"configurable_chain = (\n",
" {\"topic\": RunnablePassthrough()} \n",
" | prompt \n",
" | configurable_model \n",
" | output_parser\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4a3d94d0-cd42-4195-80b8-ef2e12503d6f",
"metadata": {},
"outputs": [],
"source": [
"configurable_chain.invoke(\n",
" \"ice cream\", \n",
" config={\"model\": \"openai\"}\n",
")\n",
"stream = configurable_chain.stream(\n",
" \"ice cream\", \n",
" config={\"model\": \"anthropic\"}\n",
")\n",
"for chunk in stream:\n",
" print(chunk, end=\"\", flush=True)\n",
"\n",
"configurable_chain.batch([\"ice cream\", \"spaghetti\", \"dumplings\"])\n",
"\n",
"# await configurable_chain.ainvoke(\"ice cream\")"
]
},
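{
"cell_type": "markdown",
"id": "3c9d5e4a-7f6b-4c8d-a01e-aa11bb22cc03",
"metadata": {},
"source": [
"The chosen alternative can also be bound ahead of time with `with_config`, so it doesn't need to be passed on every call. A minimal sketch (the variable name is just illustrative):\n",
"\n",
"```python\n",
"anthropic_configured_chain = configurable_chain.with_config(\n",
"    configurable={\"model\": \"anthropic\"}\n",
")\n",
"anthropic_configured_chain.invoke(\"ice cream\")\n",
"```"
]
},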
{
"cell_type": "markdown",
"id": "370dd4d7-b825-40c4-ae3c-2693cba2f22a",
"metadata": {},
"source": [
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"## Logging\n",
"\n",
"If we want to log our intermediate results:\n",
"\n",
"<ColumnContainer>\n",
"<Column>\n",
"\n",
"#### Without LCEL\n",
"\n",
"We'll `print` intermediate steps for illustrative purposes\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "383a3c51-926d-48c6-b9ae-42bf8f14ecc8",
"metadata": {},
"outputs": [],
"source": [
"def invoke_anthropic_chain_with_logging(topic: str) -> str:\n",
" print(f\"Input: {topic}\")\n",
" prompt_value = anthropic_template.format(topic=topic)\n",
" print(f\"Formatted prompt: {prompt_value}\")\n",
" output = call_anthropic(prompt_value)\n",
" print(f\"Output: {output}\")\n",
" return output\n",
"\n",
"invoke_anthropic_chain_with_logging(\"ice cream\")"
]
},
{
"cell_type": "markdown",
"id": "16bd20fd-43cd-4aaf-866f-a53d1f20312d",
"metadata": {},
"source": [
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"#### LCEL\n",
"Every component has built-in integrations with LangSmith. If we set the following two environment variables, all chain traces are logged to LangSmith.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d6204f21-d2e7-4ac6-871f-b60b34e5bd36",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"LANGCHAIN_API_KEY\"] = \"...\"\n",
"os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"\n",
"anthropic_chain.invoke(\"ice cream\")"
]
},
{
"cell_type": "markdown",
"id": "db37c922-e641-45e4-86fe-9ed7ef468fd8",
"metadata": {},
"source": [
"Here's what our LangSmith trace looks like: https://smith.langchain.com/public/e4de52f8-bcd9-4732-b950-deee4b04e313/r"
]
},
{
"cell_type": "markdown",
"id": "e25ce3c5-27a7-4954-9f0e-b94313597135",
"metadata": {},
"source": [
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"## Fallbacks\n",
"\n",
"If we wanted to add fallback logic, in case one model API is down:\n",
"\n",
"\n",
"<ColumnContainer>\n",
"<Column>\n",
"\n",
"#### Without LCEL\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2e49d512-bc83-4c5f-b56e-934b8343b0fe",
"metadata": {},
"outputs": [],
"source": [
"def invoke_chain_with_fallback(topic: str) -> str:\n",
" try:\n",
" return invoke_chain(topic)\n",
" except Exception:\n",
" return invoke_anthropic_chain(topic)\n",
"\n",
"async def ainvoke_chain_with_fallback(topic: str) -> str:\n",
" try:\n",
" return await ainvoke_chain(topic)\n",
" except Exception:\n",
" # Note: we haven't actually implemented this.\n",
" return ainvoke_anthropic_chain(topic)\n",
"\n",
"async def batch_chain_with_fallback(topics: List[str]) -> str:\n",
" try:\n",
" return batch_chain(topics)\n",
" except Exception:\n",
" # Note: we haven't actually implemented this.\n",
" return batch_anthropic_chain(topics)\n",
"\n",
"invoke_chain_with_fallback(\"ice cream\")\n",
"# await ainvoke_chain_with_fallback(\"ice cream\")\n",
"batch_chain_with_fallback([\"ice cream\", \"spaghetti\", \"dumplings\"]))"
]
},
{
"cell_type": "markdown",
"id": "f7ef59b5-2ce3-479e-a7ac-79e1e2f30e9c",
"metadata": {},
"source": [
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"#### LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3d0d8a0f-66eb-4c35-9529-74bec44ce4b8",
"metadata": {},
"outputs": [],
"source": [
"fallback_chain = chain.with_fallbacks([anthropic_chain])\n",
"\n",
"fallback_chain.invoke(\"ice cream\")\n",
"# await fallback_chain.ainvoke(\"ice cream\")\n",
"fallback_chain.batch([\"ice cream\", \"spaghetti\", \"dumplings\"])"
]
},
{
"cell_type": "markdown",
"id": "3af52d36-37c6-4d89-b515-95d7270bb96a",
"metadata": {},
"source": [
"</Column>\n",
"</ColumnContainer>"
]
},
{
"cell_type": "markdown",
"id": "f58af836-26bd-4eab-97a0-76dd56d53430",
"metadata": {},
"source": [
"## Full code comparison\n",
"\n",
"Even in this simple case, our LCEL chain succinctly packs in a lot of functionality. As chains become more complex, this becomes especially valuable.\n",
"\n",
"<ColumnContainer>\n",
"<Column>\n",
"\n",
"#### Without LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8684690a-e450-4ba7-8509-e9815a42ff1c",
"metadata": {},
"outputs": [],
"source": [
"from concurrent.futures import ThreadPoolExecutor\n",
"from typing import Iterator, List, Tuple\n",
"\n",
"import anthropic\n",
"import openai\n",
"\n",
"\n",
"prompt_template = \"Tell me a short joke about {topic}\"\n",
"anthropic_template = f\"Human:\\n\\n{prompt_template}\\n\\nAssistant:\"\n",
"client = openai.OpenAI()\n",
"async_client = openai.AsyncOpenAI()\n",
"anthropic_client = anthropic.Anthropic()\n",
"\n",
"def call_chat_model(messages: List[dict]) -> str:\n",
" response = client.chat.completions.create(\n",
" model=\"gpt-3.5-turbo\", \n",
" messages=messages,\n",
" )\n",
" return response.choices[0].message.content\n",
"\n",
"def invoke_chain(topic: str) -> str:\n",
" print(f\"Input: {topic}\")\n",
" prompt_value = prompt_template.format(topic=topic)\n",
" print(f\"Formatted prompt: {prompt_value}\")\n",
" messages = [{\"role\": \"user\", \"content\": prompt_value}]\n",
" output = call_chat_model(messages)\n",
" print(f\"Output: {output}\")\n",
" return output\n",
"\n",
"def stream_chat_model(messages: List[dict]) -> Iterator[str]:\n",
" stream = client.chat.completions.create(\n",
" model=\"gpt-3.5-turbo\",\n",
" messages=messages,\n",
" stream=True,\n",
" )\n",
" for response in stream:\n",
" content = response.choices[0].delta.content\n",
" if content is not None:\n",
" yield content\n",
"\n",
"def stream_chain(topic: str) -> Iterator[str]:\n",
" print(f\"Input: {topic}\")\n",
" prompt_value = prompt.format(topic=topic)\n",
" print(f\"Formatted prompt: {prompt_value}\")\n",
" stream = stream_chat_model([{\"role\": \"user\", \"content\": prompt_value}])\n",
" for chunk in stream:\n",
" print(f\"Token: {chunk}\", end=\"\")\n",
" yield chunk\n",
"\n",
"def batch_chain(topics: list) -> list:\n",
" with ThreadPoolExecutor(max_workers=5) as executor:\n",
" return list(executor.map(invoke_chain, topics))\n",
"\n",
"def call_llm(prompt_value: str) -> str:\n",
" response = client.completions.create(\n",
" model=\"gpt-3.5-turbo-instruct\",\n",
" prompt=prompt_value,\n",
" )\n",
" return response.choices[0].text\n",
"\n",
"def invoke_llm_chain(topic: str) -> str:\n",
" print(f\"Input: {topic}\")\n",
" prompt_value = promtp_template.format(topic=topic)\n",
" print(f\"Formatted prompt: {prompt_value}\")\n",
" output = call_llm(prompt_value)\n",
" print(f\"Output: {output}\")\n",
" return output\n",
"\n",
"def call_anthropic(prompt_value: str) -> str:\n",
" response = anthropic_client.completions.create(\n",
" model=\"claude-2\",\n",
" prompt=prompt_value,\n",
" max_tokens_to_sample=256,\n",
" )\n",
" return response.completion \n",
"\n",
"def invoke_anthropic_chain(topic: str) -> str:\n",
" print(f\"Input: {topic}\")\n",
" prompt_value = anthropic_template.format(topic=topic)\n",
" print(f\"Formatted prompt: {prompt_value}\")\n",
" output = call_anthropic(prompt_value)\n",
" print(f\"Output: {output}\")\n",
" return output\n",
"\n",
"async def ainvoke_anthropic_chain(topic: str) -> str:\n",
" ...\n",
"\n",
"def stream_anthropic_chain(topic: str) -> Iterator[str]:\n",
" ...\n",
"\n",
"def batch_anthropic_chain(topics: List[str]) -> List[str]:\n",
" ...\n",
"\n",
"def invoke_configurable_chain(\n",
" topic: str, \n",
" *, \n",
" model: str = \"chat_openai\"\n",
") -> str:\n",
" if model == \"chat_openai\":\n",
" return invoke_chain(topic)\n",
" elif model == \"openai\":\n",
" return invoke_llm_chain(topic)\n",
" elif model == \"anthropic\":\n",
" return invoke_anthropic_chain(topic)\n",
" else:\n",
" raise ValueError(\n",
" f\"Received invalid model '{model}'.\"\n",
" \" Expected one of chat_openai, openai, anthropic\"\n",
" )\n",
"\n",
"def stream_configurable_chain(\n",
" topic: str, \n",
" *, \n",
" model: str = \"chat_openai\"\n",
") -> Iterator[str]:\n",
" if model == \"chat_openai\":\n",
" return stream_chain(topic)\n",
" elif model == \"openai\":\n",
" # Note we haven't implemented this yet.\n",
" return stream_llm_chain(topic)\n",
" elif model == \"anthropic\":\n",
" # Note we haven't implemented this yet\n",
" return stream_anthropic_chain(topic)\n",
" else:\n",
" raise ValueError(\n",
" f\"Received invalid model '{model}'.\"\n",
" \" Expected one of chat_openai, openai, anthropic\"\n",
" )\n",
"\n",
"def batch_configurable_chain(\n",
" topics: List[str], \n",
" *, \n",
" model: str = \"chat_openai\"\n",
") -> List[str]:\n",
" ...\n",
"\n",
"async def abatch_configurable_chain(\n",
" topics: List[str], \n",
" *, \n",
" model: str = \"chat_openai\"\n",
") -> List[str]:\n",
" ...\n",
"\n",
"def invoke_chain_with_fallback(topic: str) -> str:\n",
" try:\n",
" return invoke_chain(topic)\n",
" except Exception:\n",
" return invoke_anthropic_chain(topic)\n",
"\n",
"async def ainvoke_chain_with_fallback(topic: str) -> str:\n",
" try:\n",
" return await ainvoke_chain(topic)\n",
" except Exception:\n",
" return ainvoke_anthropic_chain(topic)\n",
"\n",
"async def batch_chain_with_fallback(topics: List[str]) -> str:\n",
" try:\n",
" return batch_chain(topics)\n",
" except Exception:\n",
" return batch_anthropic_chain(topics)"
]
},
{
"cell_type": "markdown",
"id": "9fb3d71d-8c69-4dc4-81b7-95cd46b271c2",
"metadata": {},
"source": [
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"#### LCEL\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "715c469a-545e-434e-bd6e-99745dd880a7",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"from langchain_community.chat_models import ChatAnthropic\n",
"from langchain_openai import ChatOpenAI\n",
"from langchain_openai import OpenAI\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"os.environ[\"LANGCHAIN_API_KEY\"] = \"...\"\n",
"os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\n",
" \"Tell me a short joke about {topic}\"\n",
")\n",
"chat_openai = ChatOpenAI(model=\"gpt-3.5-turbo\")\n",
"openai = OpenAI(model=\"gpt-3.5-turbo-instruct\")\n",
"anthropic = ChatAnthropic(model=\"claude-2\")\n",
"model = (\n",
" chat_openai\n",
" .with_fallbacks([anthropic])\n",
" .configurable_alternatives(\n",
" ConfigurableField(id=\"model\"),\n",
" default_key=\"chat_openai\",\n",
" openai=openai,\n",
" anthropic=anthropic,\n",
" )\n",
")\n",
"\n",
"chain = (\n",
" {\"topic\": RunnablePassthrough()} \n",
" | prompt \n",
" | model \n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "markdown",
"id": "e3637d39",
"metadata": {},
"source": [
"</Column>\n",
"</ColumnContainer>"
]
},
{
"cell_type": "markdown",
"id": "5e47e773-d0f1-42b5-b509-896807b65c9c",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"To continue learning about LCEL, we recommend:\n",
"- Reading up on the full LCEL [Interface](/docs/expression_language/interface), which we've only partially covered here.\n",
"- Exploring the [How-to](/docs/expression_language/how_to) section to learn about additional composition primitives that LCEL provides.\n",
"- Looking through the [Cookbook](/docs/expression_language/cookbook) section to see LCEL in action for common use cases. A good next use case to look at would be [Retrieval-augmented generation](/docs/expression_language/cookbook/retrieval)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv",
"language": "python",
"name": "poetry-venv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}