# LangServe 🦜🔗

## Overview

LangServe is a library that allows developers to host their LangChain runnables as an API server and to call into them remotely through the standard Runnable interface.

## Examples

For more examples, see the [examples](./examples) directory.

### Server

```python
#!/usr/bin/env python
from typing import List, Union

from fastapi import FastAPI
from langchain.chat_models import ChatAnthropic, ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema import HumanMessage, SystemMessage
from langserve import add_routes
from typing_extensions import TypedDict


app = FastAPI(
    title="LangChain Server",
    version="1.0",
    description="A simple API server using LangChain's Runnable interfaces",
)


# Serve OpenAI and Anthropic models.
# Each route accepts either a plain string or a list of chat messages / strings.
LLMInput = Union[List[Union[SystemMessage, HumanMessage, str]], str]

add_routes(
    app,
    ChatOpenAI(),
    path="/openai",
    input_type=LLMInput,
    config_keys=[],
)
add_routes(
    app,
    ChatAnthropic(),
    path="/anthropic",
    input_type=LLMInput,
    config_keys=[],
)

# Serve a joke chain
class ChainInput(TypedDict):
    """The input to the chain."""

    topic: str
    """The topic of the joke."""

model = ChatAnthropic()
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
add_routes(app, prompt | model, path="/chain", input_type=ChainInput)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="localhost", port=8000)
```
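
Once the server is running, each route exposes HTTP endpoints such as `/invoke`, `/batch`, and `/stream` that any HTTP client can call directly, not just the `RemoteRunnable` client shown below. A minimal sketch, assuming the default endpoint layout and the `/chain` route defined above (the exact request/response shapes here are an assumption):

```python
import requests

# Hypothetical direct HTTP call to the /chain route's invoke endpoint;
# assumes inputs are wrapped in an {"input": ...} envelope.
response = requests.post(
    "http://localhost:8000/chain/invoke",
    json={"input": {"topic": "parrots"}},
)
print(response.json())
```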

### Client

```python
from langchain.schema import SystemMessage, HumanMessage
from langchain.prompts import ChatPromptTemplate
from langchain.schema.runnable import RunnableMap
from langserve import RemoteRunnable

openai = RemoteRunnable("http://localhost:8000/openai/")
anthropic = RemoteRunnable("http://localhost:8000/anthropic/")
joke_chain = RemoteRunnable("http://localhost:8000/chain/")

joke_chain.invoke({"topic": "parrots"})

# or async
await joke_chain.ainvoke({"topic": "parrots"})

prompt = [
    SystemMessage(content="Act like either a cat or a parrot."),
    HumanMessage(content="Hello!"),
]

# Supports astream
async for msg in anthropic.astream(prompt):
    print(msg, end="", flush=True)
    
prompt = ChatPromptTemplate.from_messages(
    [("system", "Tell me a long story about {topic}")]
)
    
# Can define custom chains
chain = prompt | RunnableMap({
    "openai": openai,
    "anthropic": anthropic,
})

chain.batch([{"topic": "parrots"}, {"topic": "cats"}])
```
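
The remote runnable also supports the rest of the Runnable interface, including `astream_log` for streaming intermediate steps as they are produced. A minimal sketch, assuming the `/chain` route from the server above (the shape of the streamed chunks may vary):

```python
# Stream a log of intermediate run steps from the remote chain.
async for chunk in joke_chain.astream_log({"topic": "parrots"}):
    print(chunk)
```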

## Installation

```sh
# Not published to PyPI yet
pip install langserve[all]
```

Or use the `client` extra for client code, and the `server` extra for server code.
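
For example (a sketch, since the package is not yet on PyPI; assumes the extras are named as described above):

```sh
pip install "langserve[client]"  # client code only
pip install "langserve[server]"  # server code only
```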

## Features

- Deploy runnables with FastAPI
- Client can use remote runnables almost as if they were local
  - Supports async
  - Supports batch
  - Supports streaming

## Limitations

- Chain callbacks cannot be passed from the client to the server