**Description:** Update to the streaming tutorial notebook in the LCEL
documentation
**Issue:** Fixes an import and makes minor changes to the documentation language
**Dependencies:** None
"1. sync `stream` and async `astream`: a **default implementation** of streaming that streams the **final output** from the chain.\n",
"1. sync `stream` and async `astream`: a **default implementation** of streaming that streams the **final output** from the chain.\n",
"2. async `astream_events` and async `astream_log`: these provide a way to stream both **intermediate steps** and **final output** from the chain.\n",
"2. async `astream_events` and async `astream_log`: these provide a way to stream both **intermediate steps** and **final output** from the chain.\n",
"\n",
"\n",
"Let's take a look at both approaches, and try to understand a how to use them. 🥷\n",
"Let's take a look at both approaches, and try to understand how to use them. 🥷\n",
"\n",
"\n",
"## Using Stream\n",
"## Using Stream\n",
"\n",
"\n",
@@ -48,7 +47,25 @@
     "\n",
     "Large language models can take **several seconds** to generate a complete response to a query. This is far slower than the **~200-300 ms** threshold at which an application feels responsive to an end user.\n",
     "\n",
-    "The key strategy to make the application feel more responsive is to show intermediate progress; e.g., to stream the output from the model **token by token**."
+    "The key strategy to make the application feel more responsive is to show intermediate progress; viz., to stream the output from the model **token by token**."
    ]
   },
+  {
+   "cell_type": "markdown",
+   "id": "9eb73e8b",
+   "metadata": {},
+   "source": [
+    "We will show examples of streaming using the chat model from [Anthropic](https://python.langchain.com/docs/integrations/platforms/anthropic). To use the model, you will need to install the `langchain-anthropic` package. You can do this with the following command:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "cd351cf4",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "pip install -qU langchain-anthropic"
+   ]
+  },
   {
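With `langchain-anthropic` installed per the cell added above, approach 2 (`astream_events`) can be sketched as well; it surfaces **intermediate steps**, not just the final output. This sketch assumes the beta `version="v1"` event schema, and the model name is again illustrative:

```python
import asyncio

from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-3-haiku-20240307")  # illustrative model name

async def main() -> None:
    async for event in model.astream_events("Write a haiku about streams", version="v1"):
        # Each event names the step that produced it; individual tokens
        # arrive as `on_chat_model_stream` events.
        if event["event"] == "on_chat_model_stream":
            print(event["data"]["chunk"].content, end="|", flush=True)

asyncio.run(main())
```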
@@ -68,7 +85,7 @@
    "source": [
     "# Showing the example using anthropic, but you can use\n",
     "We will use `StrOutputParser` to parse the output from the model. This is a simple parser that extracts the `content` field from an `AIMessageChunk`, giving us the `token` returned by the model.\n",
     "\n",
     ":::{.callout-tip}\n",
-    "LCEL is a *declarative* way to specify a \"program\" by chainining together different LangChain primitives. Chains created using LCEL benefit from an automatic implementation of `stream`, and `astream` allowing streaming of the final output. In fact, chains created with LCEL implement the entire standard Runnable interface.\n",
+    "LCEL is a *declarative* way to specify a \"program\" by chaining together different LangChain primitives. Chains created using LCEL benefit from an automatic implementation of `stream` and `astream`, allowing streaming of the final output. In fact, chains created with LCEL implement the entire standard Runnable interface.\n",