7dd6b32991
This PR introduces the following Runnables:

1. `BaseRateLimiter`: an abstraction for specifying a time-based rate limiter as a Runnable
2. `InMemoryRateLimiter`: provides an in-memory implementation of a rate limiter

## Example

```python
from datetime import datetime

from langchain_core.runnables import InMemoryRateLimiter, RunnableLambda

foo = InMemoryRateLimiter(requests_per_second=0.5)

def meow(x):
    print(datetime.now().strftime("%H:%M:%S.%f"))
    return x

chain = foo | meow

for _ in range(10):
    print(chain.invoke('hello'))
```

Produces:

```
17:12:07.530151
hello
17:12:09.537932
hello
17:12:11.548375
hello
17:12:13.558383
hello
17:12:15.568348
hello
17:12:17.578171
hello
17:12:19.587508
hello
17:12:21.597877
hello
17:12:23.607707
hello
17:12:25.617978
hello
```

![image](https://github.com/user-attachments/assets/283af59f-e1e1-408b-8e75-d3910c3c44cc)

## Interface

The rate limiter uses the following interface for acquiring a token:

```python
class BaseRateLimiter(Runnable[Input, Output], abc.ABC):
    @abc.abstractmethod
    def acquire(self, *, blocking: bool = True) -> bool:
        """Attempt to acquire the necessary tokens for the rate limiter."""
```

The flag `blocking` has been added to the abstraction to allow supporting streaming (which is easier if `blocking=False`). A sketch of one possible implementation appears at the end of this description.

## Limitations

- The rate limiter is not designed to work across different processes. It is an in-memory rate limiter, but it is thread safe.
- The rate limiter only supports time-based rate limiting. It does not take into account the size of the request or any other factors.
- The current implementation does not handle streaming inputs well and will consume all inputs even if the rate limit has been reached. Better support for streaming inputs will be added in the future.
- When the rate limiter is combined with another runnable via a RunnableSequence, usage of `.batch()` or `.abatch()` will only respect the average rate limit. There will be bursty behavior as `.batch()` and `.abatch()` wait for each step to complete before starting the next step. One way to mitigate this is to use `batch_as_completed()` or `abatch_as_completed()`.

## Bursty behavior in `batch` and `abatch`

When the rate limiter is combined with another runnable via a RunnableSequence, usage of `.batch()` or `.abatch()` will only respect the average rate limit. There will be bursty behavior as `.batch()` and `.abatch()` wait for each step to complete before starting the next step. This becomes a problem if users are calling `batch` or `abatch` with many inputs (e.g., 100): in this case, there will be a burst of 100 inputs into the batch of the rate-limited runnable. Two options for addressing this:

1. Using a RunnableBinding. The API would look like:

   ```python
   from langchain_core.runnables import InMemoryRateLimiter, RunnableLambda

   rate_limiter = InMemoryRateLimiter(requests_per_second=0.5)

   def meow(x):
       return x

   rate_limited_meow = RunnableLambda(meow).with_rate_limiter(rate_limiter)
   ```

2. Adding an init option to RunnableSequence that changes `.batch()` to be depth-first (e.g., by delegating to `batch_as_completed`):

   ```python
   RunnableSequence(first=rate_limiter, last=model, how='batch-depth-first')
   ```

   Pros: does not require a RunnableBinding.
   Cons: feels over-complicated.
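For reference, here is a minimal sketch of how a thread-safe, token-bucket style `acquire` (as defined in the Interface section above) could be implemented. This is an illustration only, not the actual `InMemoryRateLimiter` code; the class name `SimpleTokenBucket`, the `max_bucket_size` parameter, and the polling interval are assumptions.

```python
import threading
import time


class SimpleTokenBucket:
    """Illustrative token bucket; not the actual InMemoryRateLimiter implementation."""

    def __init__(self, requests_per_second: float, max_bucket_size: float = 1.0) -> None:
        self.requests_per_second = requests_per_second
        self.max_bucket_size = max_bucket_size
        self._tokens = 0.0
        self._last_refill = time.monotonic()
        self._lock = threading.Lock()

    def _refill(self) -> None:
        # Add tokens proportional to elapsed time, capped at the bucket size.
        now = time.monotonic()
        elapsed = now - self._last_refill
        self._tokens = min(
            self.max_bucket_size,
            self._tokens + elapsed * self.requests_per_second,
        )
        self._last_refill = now

    def acquire(self, *, blocking: bool = True) -> bool:
        """Take one token; if blocking, wait until a token is available."""
        while True:
            with self._lock:
                self._refill()
                if self._tokens >= 1:
                    self._tokens -= 1
                    return True
            if not blocking:
                return False
            # Poll at a fraction of the refill period before retrying.
            time.sleep(1 / (self.requests_per_second * 10))
```

With `requests_per_second=0.5`, successive blocking `acquire()` calls complete roughly two seconds apart, which matches the timestamps in the example output above.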
# 🦜🍎️ LangChain Core

## Quick Install

```bash
pip install langchain-core
```

## What is it?
LangChain Core contains the base abstractions that power the rest of the LangChain ecosystem.
These abstractions are designed to be as modular and simple as possible. Examples of these abstractions include those for language models, document loaders, embedding models, vectorstores, retrievers, and more.
The benefit of having these abstractions is that any provider can implement the required interface and then easily be used in the rest of the LangChain ecosystem.
For full documentation see the API reference.
## 1️⃣ Core Interface: Runnables
The concept of a Runnable is central to LangChain Core – it is the interface that most LangChain Core components implement, giving them
- a common invocation interface (invoke, batch, stream, etc.)
- built-in utilities for retries, fallbacks, schemas and runtime configurability
- easy deployment with LangServe
For more check out the runnable docs. Examples of components that implement the interface include: LLMs, Chat Models, Prompts, Retrievers, Tools, Output Parsers.
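As a tiny illustration of that shared interface, here is a sketch using `RunnableLambda` (the `add_one` function is just an example):

```python
from langchain_core.runnables import RunnableLambda

# Wrap a plain Python function to give it the Runnable interface.
add_one = RunnableLambda(lambda x: x + 1)

add_one.invoke(1)          # 2           - single input
add_one.batch([1, 2, 3])   # [2, 3, 4]   - many inputs, run in parallel
for chunk in add_one.stream(1):
    print(chunk)           # 2           - output streamed in chunks
```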
You can use LangChain Core objects in two ways:
- imperative, i.e. call them directly, e.g. `model.invoke(...)`
- declarative, with LangChain Expression Language (LCEL)
- or a mix of both! e.g. one of the steps in your LCEL sequence can be a custom function
| Feature | Imperative | Declarative |
| --- | --- | --- |
| Syntax | All of Python | LCEL |
| Tracing | ✅ – Automatic | ✅ – Automatic |
| Parallel | ✅ – with threads or coroutines | ✅ – Automatic |
| Streaming | ✅ – by yielding | ✅ – Automatic |
| Async | ✅ – by writing async functions | ✅ – Automatic |
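A small sketch of what mixing the two styles can look like (the functions are illustrative placeholders):

```python
from langchain_core.runnables import RunnableLambda

# Imperative: call a Runnable directly.
double = RunnableLambda(lambda x: x * 2)
double.invoke(3)  # 6

# Declarative (LCEL): compose with the | operator; a plain function in the
# sequence is coerced into a Runnable automatically.
chain = double | (lambda x: x + 1)
chain.invoke(3)  # 7
```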
## ⚡️ What is LangChain Expression Language?
LangChain Expression Language (LCEL) is a declarative language for composing LangChain Core runnables into sequences (or DAGs), covering the most common patterns when building with LLMs.
LangChain Core compiles LCEL sequences to an optimized execution plan, with automatic parallelization, streaming, tracing, and async support.
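For example, a chain composed with LCEL picks up async and streaming variants without any extra code in the individual steps (a sketch; the lambdas are placeholders):

```python
import asyncio

from langchain_core.runnables import RunnableLambda

chain = RunnableLambda(lambda x: x.upper()) | (lambda x: f"{x}!")

async def main() -> None:
    print(await chain.ainvoke("hello"))      # HELLO!
    async for chunk in chain.astream("hi"):  # streams output chunks
        print(chunk)

asyncio.run(main())
```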
For more check out the LCEL docs.
For more advanced use cases, also check out LangGraph, which is a graph-based runner for cyclic and recursive LLM workflows.
## 📕 Releases & Versioning
`langchain-core` is currently on version 0.1.x.

As `langchain-core` contains the base abstractions and runtime for the whole LangChain ecosystem, we will communicate any breaking changes with advance notice and version bumps. The exception to this is anything in `langchain_core.beta`. The reason for `langchain_core.beta` is that, given the rate of change of the field, being able to move quickly is still a priority, and this module is our attempt to do so.
Minor version increases will occur for:

- Breaking changes for any public interfaces NOT in `langchain_core.beta`

Patch version increases will occur for:

- Bug fixes
- New features
- Any changes to private interfaces
- Any changes to `langchain_core.beta`
## 💁 Contributing
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
For detailed information on how to contribute, see the Contributing Guide.
## ⛰️ Why build on top of LangChain Core?
The whole LangChain ecosystem is built on top of LangChain Core, so you're in good company when building on top of it. Some of the benefits:
- Modularity: LangChain Core is designed around abstractions that are independent of each other, and not tied to any specific model provider.
- Stability: We are committed to a stable versioning scheme, and will communicate any breaking changes with advance notice and version bumps.
- Battle-tested: LangChain Core components have the largest install base in the LLM ecosystem, and are used in production by many companies.
- Community: LangChain Core is developed in the open, and we welcome contributions from the community.