"In an effort to make it as easy as possible to create custom chains, we've implemented a [\"Runnable\"](https://api.python.langchain.com/en/latest/schema/langchain.schema.runnable.Runnable.html#langchain.schema.runnable.Runnable) protocol that most components implement. This is a standard interface with a few different methods, which makes it easy to define custom chains as well as making it possible to invoke them in a standard way. The standard interface exposed includes:\n",
"In an effort to make it as easy as possible to create custom chains, we've implemented a [\"Runnable\"](https://api.python.langchain.com/en/latest/schema/langchain.schema.runnable.Runnable.html#langchain.schema.runnable.Runnable) protocol that most components implement. This is a standard interface with a few different methods, which makes it easy to define custom chains as well as making it possible to invoke them in a standard way. The standard interface exposed includes:\n",
"\n",
"\n",
"- `stream`: stream back chunks of the response\n",
"- [`stream`](#stream): stream back chunks of the response\n",
"- `invoke`: call the chain on an input\n",
"- [`invoke`](#invoke): call the chain on an input\n",
"- `batch`: call the chain on a list of inputs\n",
"- [`batch`](#batch): call the chain on a list of inputs\n",
"\n",
"\n",
"These also have corresponding async methods:\n",
"These also have corresponding async methods:\n",
"\n",
"\n",
"- `astream`: stream back chunks of the response async\n",
"- [`astream`](#async-stream): stream back chunks of the response async\n",
"- `ainvoke`: call the chain on an input async\n",
"- [`ainvoke`](#async-invoke): call the chain on an input async\n",
"- `abatch`: call the chain on a list of inputs async\n",
"- [`abatch`](#async-batch): call the chain on a list of inputs async\n",
"- [`astream_log`](#async-stream-intermediate-steps): stream back intermediate steps as they happen, in addition to the final response\n",
"\n",
"\n",
"The type of the input varies by component:\n",
"The type of the input varies by component:\n",
"\n",
"\n",
@@ -49,6 +50,10 @@
"| Tool | Depends on the tool |\n",
"| Tool | Depends on the tool |\n",
"| OutputParser | Depends on the parser |\n",
"| OutputParser | Depends on the parser |\n",
"\n",
"\n",
"All runnables expose properties to inspect the input and output types:\n",
"- [`input_schema`](#input-schema): an input Pydantic model auto-generated from the structure of the Runnable\n",
"- [`output_schema`](#output-schema): an output Pydantic model auto-generated from the structure of the Runnable\n",
"\n",
"Let's take a look at these methods! To do so, we'll create a super simple PromptTemplate + ChatModel chain."
"Let's take a look at these methods! To do so, we'll create a super simple PromptTemplate + ChatModel chain."
"async for s in chain.astream({\"topic\": \"bears\"}):\n",
"async for s in chain.astream({\"topic\": \"bears\"}):\n",
" print(s.content, end=\"\", flush=True)"
" print(s.content, end=\"\", flush=True)\n"
]
]
},
},
{
{
@@ -252,23 +406,23 @@
},
{
"cell_type": "code",
"execution_count": 12,
"id": "ef8c9b20",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Why don't bears wear shoes?\\n\\nBecause they have bear feet!\")"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await chain.ainvoke({\"topic\": \"bears\"})\n"
]
},
{
@@ -281,28 +435,360 @@
},
{
"cell_type": "code",
"execution_count": 13,
"id": "eba2a103",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[AIMessage(content=\"Why don't bears wear shoes?\\n\\nBecause they have bear feet!\")]"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await chain.abatch([{\"topic\": \"bears\"}])\n"
]
},
{
"cell_type": "markdown",
"id": "f9cef104",
"metadata": {},
"source": [
"## Async Stream Intermediate Steps\n",
"\n",
"All runnables also have a method `.astream_log()` which can be used to stream (as they happen) all or part of the intermediate steps of your chain/sequence. \n",
"\n",
"This is useful eg. to show progress to the user, to use intermediate results, or even just to debug your chain.\n",
"\n",
"You can choose to stream all steps (default), or include/exclude steps by name, tags or metadata.\n",
"\n",
"This method yields [JSONPatch](https://jsonpatch.com) ops that when applied in the same order as received build up the RunState.\n",
"\n",
"```python\n",
"class LogEntry(TypedDict):\n",
" id: str\n",
" \"\"\"ID of the sub-run.\"\"\"\n",
" name: str\n",
" \"\"\"Name of the object being run.\"\"\"\n",
" type: str\n",
" \"\"\"Type of the object being run, eg. prompt, chain, llm, etc.\"\"\"\n",
" tags: List[str]\n",
" \"\"\"List of tags for the run.\"\"\"\n",
" metadata: Dict[str, Any]\n",
" \"\"\"Key-value pairs of metadata for the run.\"\"\"\n",
" start_time: str\n",
" \"\"\"ISO-8601 timestamp of when the run started.\"\"\"\n",
"\n",
" streamed_output_str: List[str]\n",
" \"\"\"List of LLM tokens streamed by this run, if applicable.\"\"\"\n",
" final_output: Optional[Any]\n",
" \"\"\"Final output of this run.\n",
" Only available after the run has finished successfully.\"\"\"\n",
" end_time: Optional[str]\n",
" \"\"\"ISO-8601 timestamp of when the run ended.\n",
" Only available after the run has finished.\"\"\"\n",
"\n",
"\n",
"class RunState(TypedDict):\n",
" id: str\n",
" \"\"\"ID of the run.\"\"\"\n",
" streamed_output: List[Any]\n",
" \"\"\"List of output chunks streamed by Runnable.stream()\"\"\"\n",
" final_output: Optional[Any]\n",
" \"\"\"Final output of the run, usually the result of aggregating (`+`) streamed_output.\n",
" Only available after the run has finished successfully.\"\"\"\n",
"\n",
" logs: Dict[str, LogEntry]\n",
" \"\"\"Map of run names to sub-runs. If filters were supplied, this list will\n",
" contain only the runs that matched the filters.\"\"\"\n",
"```"
]
},
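{
"cell_type": "markdown",
"id": "astream-log-usage-sketch",
"metadata": {},
"source": [
"As a minimal usage sketch (assuming the `chain` built above, and that each yielded patch object exposes its JSONPatch operations as an `ops` attribute, as `RunLogPatch` does):\n",
"\n",
"```python\n",
"async for patch in chain.astream_log({\"topic\": \"bears\"}):\n",
"    # Each patch carries a list of JSONPatch ops; print their op and path\n",
"    for op in patch.ops:\n",
"        print(op[\"op\"], op[\"path\"])\n",
"```"
]
},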
{
"cell_type": "markdown",
"id": "a146a5df-25be-4fa2-a7e4-df8ebe55a35e",
"metadata": {},
"source": [
"### Streaming JSONPatch chunks\n",
"\n",
"This is useful eg. to stream the JSONPatch in an HTTP server, and then apply the ops on the client to rebuild the run state there. See [LangServe](https://github.com/langchain-ai/langserve) for tooling to make it easier to build a webserver from any Runnable."