docs: update to the streaming tutorial notebook in the lcel documentation (#18378)

**Description:** Update to the streaming tutorial notebook in the LCEL
documentation
**Issue:** Fixed an import and made (minor) changes to the documentation language
**Dependencies:** None
aditya thomas 4 months ago committed by GitHub
parent 32db9e74e4
commit 97de498d39

@@ -12,7 +12,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "bb7d49db-04d3-4399-bfe1-09f82bbe6015",
"metadata": {},
@@ -28,7 +27,7 @@
"1. sync `stream` and async `astream`: a **default implementation** of streaming that streams the **final output** from the chain.\n",
"2. async `astream_events` and async `astream_log`: these provide a way to stream both **intermediate steps** and **final output** from the chain.\n",
"\n",
"Let's take a look at both approaches, and try to understand a how to use them. 🥷\n",
"Let's take a look at both approaches, and try to understand how to use them. 🥷\n",
"\n",
"## Using Stream\n",
"\n",
@@ -48,7 +47,25 @@
"\n",
"Large language models can take **several seconds** to generate a complete response to a query. This is far slower than the **~200-300 ms** threshold at which an application feels responsive to an end user.\n",
"\n",
"The key strategy to make the application feel more responsive is to show intermediate progress; e.g., to stream the output from the model **token by token**."
"The key strategy to make the application feel more responsive is to show intermediate progress; viz., to stream the output from the model **token by token**."
]
},
+{
+"cell_type": "markdown",
+"id": "9eb73e8b",
+"metadata": {},
+"source": [
+"We will show examples of streaming using the chat model from [Anthropic](https://python.langchain.com/docs/integrations/platforms/anthropic). To use the model, you will need to install the `langchain-anthropic` package. You can do this with the following command:"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"id": "cd351cf4",
+"metadata": {},
+"outputs": [],
+"source": [
+"pip install -qU langchain-anthropic"
+]
+},
{
@@ -68,7 +85,7 @@
"source": [
"# Showing the example using anthropic, but you can use\n",
"# your favorite chat model!\n",
"from langchain_community.chat_models import ChatAnthropic\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"model = ChatAnthropic()\n",
"\n",
@@ -152,7 +169,7 @@
"We will use `StrOutputParser` to parse the output from the model. This is a simple parser that extracts the `content` field from an `AIMessageChunk`, giving us the `token` returned by the model.\n",
"\n",
":::{.callout-tip}\n",
"LCEL is a *declarative* way to specify a \"program\" by chainining together different LangChain primitives. Chains created using LCEL benefit from an automatic implementation of `stream`, and `astream` allowing streaming of the final output. In fact, chains created with LCEL implement the entire standard Runnable interface.\n",
"LCEL is a *declarative* way to specify a \"program\" by chainining together different LangChain primitives. Chains created using LCEL benefit from an automatic implementation of `stream` and `astream` allowing streaming of the final output. In fact, chains created with LCEL implement the entire standard Runnable interface.\n",
":::"
]
},
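A minimal sketch of the chain the callout describes, reusing the `model` defined above (the prompt template and topic are illustrative); `StrOutputParser` turns each `AIMessageChunk` into its `content` string, so the chain's automatically implemented `stream` yields plain string tokens:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
chain = prompt | model | StrOutputParser()

# `stream` comes for free on LCEL chains, as the callout notes.
for token in chain.stream({"topic": "parrots"}):
    print(token, end="", flush=True)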
@@ -330,7 +347,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "cab6dca2-2027-414d-a196-2db6e3ebb8a5",
"metadata": {},
@@ -1397,7 +1413,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
"version": "3.9.6"
}
},
"nbformat": 4,
