LangServe 🦜🔗

Overview

LangServe is a library that allows developers to host their LangChain runnables as an API server and call into them remotely through the standard runnable interface.

Examples

For more examples, see the examples directory.

Server

#!/usr/bin/env python
from typing import List, Union

from fastapi import FastAPI
from langchain.chat_models import ChatAnthropic, ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema import HumanMessage, SystemMessage
from langserve import add_routes
from typing_extensions import TypedDict


app = FastAPI(
  title="LangChain Server",
  version="1.0",
  description="A simple api server using Langchain's Runnable interfaces",
)


# Serve OpenAI and Anthropic models.
# Both accept either a single string or a list of messages / strings.
LLMInput = Union[List[Union[SystemMessage, HumanMessage, str]], str]

add_routes(
    app,
    ChatOpenAI(),
    path="/openai",
    input_type=LLMInput,
    config_keys=[],
)
add_routes(
    app,
    ChatAnthropic(),
    path="/anthropic",
    input_type=LLMInput,
    config_keys=[],
)

# Serve a joke chain
class ChainInput(TypedDict):
    """The input to the chain."""

    topic: str
    """The topic of the joke."""

model = ChatAnthropic()
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
add_routes(app, prompt | model, path="/chain", input_type=ChainInput)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="localhost", port=8000)
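
Because add_routes registers plain HTTP endpoints (invoke, batch, and stream) under each path, the server can also be called without the LangServe client, and FastAPI serves interactive API docs at /docs. A minimal sketch that POSTs to the joke chain's invoke endpoint with requests; the {"input": ...} payload shape is assumed from LangServe's wire format, so consult /docs if in doubt:

import requests

# Call the invoke endpoint that add_routes created for path="/chain".
response = requests.post(
    "http://localhost:8000/chain/invoke",
    json={"input": {"topic": "parrots"}},
)
print(response.json())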

Client


from langchain.schema import SystemMessage, HumanMessage
from langchain.prompts import ChatPromptTemplate
from langchain.schema.runnable import RunnableMap
from langserve import RemoteRunnable

openai = RemoteRunnable("http://localhost:8000/openai/")
anthropic = RemoteRunnable("http://localhost:8000/anthropic/")
joke_chain = RemoteRunnable("http://localhost:8000/chain/")

joke_chain.invoke({"topic": "parrots"})

# or invoke it asynchronously (from within a coroutine or an async REPL)
await joke_chain.ainvoke({"topic": "parrots"})

prompt = [
    SystemMessage(content="Act like either a cat or a parrot."),
    HumanMessage(content="Hello!"),
]

# Supports astream
async for msg in anthropic.astream(prompt):
    print(msg, end="", flush=True)
    
prompt = ChatPromptTemplate.from_messages(
    [("system", "Tell me a long story about {topic}")]
)
    
# Remote runnables compose like local ones: fan the prompt out to
# both models in parallel with a RunnableMap
chain = prompt | RunnableMap({
    "openai": openai,
    "anthropic": anthropic,
})

chain.batch([{"topic": "parrots"}, {"topic": "cats"}])
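
Composition preserves streaming as well. A minimal sketch that streams the parallel chain, printing partial chunks keyed by model name (this assumes both remote endpoints support streaming, per the Features below):

# Chunks arrive incrementally as dicts like {"openai": ...} or {"anthropic": ...}
async for chunk in chain.astream({"topic": "parrots"}):
    print(chunk)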

Installation

pip install "langserve[all]"  # not yet published to PyPI

Alternatively, install only the client extra for client code (pip install "langserve[client]"), or the server extra for server code (pip install "langserve[server]").

Features

  • Deploy runnables with FastAPI
  • Client can use remote runnables almost as if they were local (see the sketch after this list)
    • Supports async
    • Supports batch
    • Supports stream
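
A minimal sketch of these methods, reusing joke_chain from the Client example above (availability of each method is assumed from the feature list):

# The standard runnable methods map straight onto remote calls.
joke_chain.invoke({"topic": "parrots"})
joke_chain.batch([{"topic": "cats"}, {"topic": "dogs"}])

for chunk in joke_chain.stream({"topic": "parrots"}):
    print(chunk, end="", flush=True)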

Limitations

  • Chain callbacks cannot be passed from the client to the server