Docs: combine LCEL index and why (#13142)

pull/13148/head
Bagatur 11 months ago committed by GitHub
parent 1311450646
commit 84e65533e9

@@ -4,33 +4,27 @@ sidebar_class_name: hidden
# LangChain Expression Language (LCEL)
LangChain Expression Language or LCEL is a declarative way to easily compose chains together.
There are several benefits to writing chains in this manner (as opposed to writing normal code):
LangChain Expression Language, or LCEL, is a declarative way to easily compose chains together.
LCEL was designed from day 1 to **support putting prototypes in production, with no code changes**, from the simplest “prompt + LLM” chain to the most complex chains (we've seen folks successfully run LCEL chains with 100s of steps in production). To highlight a few of the reasons you might want to use LCEL:
**Async, Batch, and Streaming Support**
Any chain constructed this way will automatically have full sync, async, batch, and streaming support.
This makes it easy to prototype a chain in a Jupyter notebook using the sync interface, and then expose it as an async streaming interface.
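As a minimal sketch of that shared interface (the prompt, model, and OpenAI setup are illustrative assumptions, not part of this diff; import paths follow the langchain version current around this commit), the same chain object exposes `invoke`, `batch`, and `ainvoke`:

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

# Compose a chain declaratively with the | operator.
chain = (
    ChatPromptTemplate.from_template("Tell me a joke about {topic}")
    | ChatOpenAI()
    | StrOutputParser()
)

chain.invoke({"topic": "bears"})                      # sync, single input
chain.batch([{"topic": "bears"}, {"topic": "cats"}])  # sync, many inputs at once
# await chain.ainvoke({"topic": "bears"})             # async, e.g. inside a server
```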
**Streaming support**
When you build your chains with LCEL you get the best possible time-to-first-token (time elapsed until the first chunk of output comes out). For some chains this means, e.g., that we stream tokens straight from an LLM to a streaming output parser, and you get back parsed, incremental chunks of output at the same rate as the LLM provider outputs the raw tokens.
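A rough streaming sketch under the same assumptions (prompt text and model are placeholders): each chunk has already passed through `StrOutputParser` and arrives as the provider emits tokens.

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

chain = (
    ChatPromptTemplate.from_template("Write a haiku about {topic}")
    | ChatOpenAI()
    | StrOutputParser()
)

# Chunks print as soon as they are produced, minimizing time-to-first-token.
for chunk in chain.stream({"topic": "autumn"}):
    print(chunk, end="", flush=True)
```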
**Fallbacks**
The non-determinism of LLMs makes it important to be able to handle errors gracefully.
With LCEL you can easily attach fallbacks to any chain.
**Async support**
Any chain built with LCEL can be called both with the synchronous API (e.g. in your Jupyter notebook while prototyping) and with the asynchronous API (e.g. in a [LangServe](/docs/langserve) server). This enables using the same code for prototypes and in production, with great performance, and the ability to handle many concurrent requests in the same server.
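A sketch of that sync/async symmetry (same illustrative chain as above; the `asyncio` wrapper is only needed outside a notebook or server):

```python
import asyncio

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

chain = ChatPromptTemplate.from_template("Tell me a joke about {topic}") | ChatOpenAI()

# Synchronous call, e.g. while prototyping in a notebook.
chain.invoke({"topic": "bears"})

async def handle_request() -> str:
    # Asynchronous call, e.g. inside a LangServe/FastAPI request handler.
    msg = await chain.ainvoke({"topic": "bears"})
    return msg.content

asyncio.run(handle_request())
```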
**Parallelism**
Since LLM applications involve (sometimes long) API calls, it often becomes important to run things in parallel.
With LCEL syntax, any components that can be run in parallel automatically are.
**Optimized parallel execution**
Whenever your LCEL chains have steps that can be executed in parallel (e.g. if you fetch documents from multiple retrievers) we automatically do it, both in the sync and the async interfaces, for the smallest possible latency.
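A hypothetical sketch of parallel branches with `RunnableParallel` (the two prompt branches stand in for, say, two retrievers):

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.runnable import RunnableParallel

model = ChatOpenAI()
joke = ChatPromptTemplate.from_template("Tell me a joke about {topic}") | model
poem = ChatPromptTemplate.from_template("Write a 2-line poem about {topic}") | model

# Both branches run concurrently, so total latency is roughly the slower branch,
# not the sum of the two.
both = RunnableParallel(joke=joke, poem=poem)
both.invoke({"topic": "bears"})
```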
**Seamless LangSmith Tracing Integration**
As your chains get more and more complex, it becomes increasingly important to understand what exactly is happening at every step.
With LCEL, **all** steps are automatically logged to [LangSmith](https://smith.langchain.com) for maximal observability and debuggability.
#### [Interface](/docs/expression_language/interface)
The base interface shared by all LCEL objects
**Retries and fallbacks**
Configure retries and fallbacks for any part of your LCEL chain. This is a great way to make your chains more reliable at scale. We're currently working on adding streaming support for retries/fallbacks, so you can get the added reliability without any latency cost.
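A sketch of what attaching retries and a fallback might look like (the model choices are illustrative):

```python
from langchain.chat_models import ChatAnthropic, ChatOpenAI
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")

# Retry transient errors a couple of times, then hand off to a different model.
primary = ChatOpenAI().with_retry(stop_after_attempt=2)
chain = prompt | primary.with_fallbacks([ChatAnthropic()])

chain.invoke({"topic": "bears"})
```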
#### [How to](/docs/expression_language/how_to)
How to use core features of LCEL
**Access intermediate results**
For more complex chains it's often very useful to access the results of intermediate steps even before the final output is produced. This can be used to let end-users know something is happening, or simply to debug your chain. You can stream intermediate results, and they're available on every [LangServe](/docs/langserve) server.
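A rough sketch of streaming intermediate results with `astream_log` (chain contents are placeholders; requires an async context):

```python
import asyncio

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

chain = (
    ChatPromptTemplate.from_template("Tell me a fact about {topic}")
    | ChatOpenAI()
    | StrOutputParser()
)

async def show_progress():
    # Each patch describes the evolving run state, including outputs of
    # intermediate steps, before the final output is ready.
    async for patch in chain.astream_log({"topic": "bears"}):
        print(patch)

asyncio.run(show_progress())
```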
#### [Cookbook](/docs/expression_language/cookbook)
Examples of common LCEL usage patterns
**Input and output schemas**
Input and output schemas give every LCEL chain Pydantic and JSONSchema schemas inferred from the structure of your chain. This can be used for validation of inputs and outputs, and is an integral part of LangServe.
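A small sketch of inspecting the inferred schemas (chain contents are placeholders):

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

chain = ChatPromptTemplate.from_template("Tell me a joke about {topic}") | ChatOpenAI()

# Pydantic models inferred from the chain structure, rendered as JSONSchema dicts.
print(chain.input_schema.schema())   # e.g. requires a "topic" field
print(chain.output_schema.schema())  # describes the chat model's message output
```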
#### [Why use LCEL](/docs/expression_language/why)
A deeper dive into the benefits of LCEL
**Seamless LangSmith tracing integration**
As your chains get more and more complex, it becomes increasingly important to understand what exactly is happening at every step.
With LCEL, **all** steps are automatically logged to [LangSmith](/docs/langsmith/) for maximum observability and debuggability.
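A minimal sketch of turning tracing on, assuming a LangSmith account and API key; no chain code changes are needed:

```python
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"
os.environ["LANGCHAIN_PROJECT"] = "lcel-demo"  # optional: group runs by project

# Any LCEL chain invoked after this point is traced to LangSmith automatically.
```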

@@ -1,11 +0,0 @@
# Why use LCEL?
The LangChain Expression Language was designed from day 1 to **support putting prototypes in production, with no code changes**, from the simplest “prompt + LLM” chain to the most complex chains (we've seen folks successfully run LCEL chains with 100s of steps in production). To highlight a few of the reasons you might want to use LCEL:
- first-class support for streaming: when you build your chains with LCEL you get the best possible time-to-first-token (time elapsed until the first chunk of output comes out). For some chains this means, e.g., that we stream tokens straight from an LLM to a streaming output parser, and you get back parsed, incremental chunks of output at the same rate as the LLM provider outputs the raw tokens. We're constantly improving streaming support; recently we added a [streaming JSON parser](https://twitter.com/LangChainAI/status/1709690468030914584), and more is in the works.
- first-class async support: any chain built with LCEL can be called both with the synchronous API (e.g. in your Jupyter notebook while prototyping) and with the asynchronous API (e.g. in a [LangServe](https://github.com/langchain-ai/langserve) server). This enables using the same code for prototypes and in production, with great performance, and the ability to handle many concurrent requests in the same server (see the serving sketch after this list).
- optimised parallel execution: whenever your LCEL chains have steps that can be executed in parallel (e.g. if you fetch documents from multiple retrievers) we automatically do it, both in the sync and the async interfaces, for the smallest possible latency.
- support for retries and fallbacks: more recently we've added support for configuring retries and fallbacks for any part of your LCEL chain. This is a great way to make your chains more reliable at scale. We're currently working on adding streaming support for retries/fallbacks, so you can get the added reliability without any latency cost.
- accessing intermediate results: for more complex chains it's often very useful to access the results of intermediate steps even before the final output is produced. This can be used to let end-users know something is happening, or simply to debug your chain. We've added support for [streaming intermediate results](https://x.com/LangChainAI/status/1711806009097044193?s=20), and it's available on every LangServe server.
- [input and output schemas](https://x.com/LangChainAI/status/1711805322195861934?s=20): input and output schemas give every LCEL chain Pydantic and JSONSchema schemas inferred from the structure of your chain. This can be used for validation of inputs and outputs, and is an integral part of LangServe.
- tracing with LangSmith: all chains built with LCEL have first-class tracing support, which can be used to debug your chains, or to understand what's happening in production. To enable this, all you have to do is add your [LangSmith](https://www.langchain.com/langsmith) API key as an environment variable.
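Since this page is being folded into the index above, a hypothetical end-to-end sketch of the prototype-to-production point, serving the same kind of chain over HTTP with LangServe (`server.py`, the route path, and the model are illustrative):

```python
# server.py
from fastapi import FastAPI
from langserve import add_routes
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

app = FastAPI()
chain = ChatPromptTemplate.from_template("Tell me a joke about {topic}") | ChatOpenAI()

# Exposes /joke/invoke, /joke/batch, /joke/stream, and /joke/stream_log endpoints.
add_routes(app, chain, path="/joke")

# Run with: uvicorn server:app
```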

@@ -1,5 +1,9 @@
{
  "redirects": [
    {
      "source": "/docs/expression_language/why",
      "destination": "/docs/expression_language/"
    },
    {
      "source": "/docs/modules/model_io/chat/llm_chain",
      "destination": "/docs/modules/chains/foundational/llm_chain"
