langchain/libs/langserve/examples/llm/server.py
LangServe (#11046)
Eugene Yurtsev, commit b05bb9e136, 2023-09-28

Adds the LangServe package:

* Integrates Runnables with FastAPI, creating a server and a RemoteRunnable client
* Supports multiple runnables on a given server
* Supports sync/async/batch/abatch/stream/astream/astream_log on the client side (using the async implementations on the server)
* Adds validation using annotations (relying on pydantic under the hood) -- this still has some rough edges; e.g., OpenAPI docs do NOT generate correctly at the moment
* Uses the pydantic v1 namespace

Known issue: the type translation code doesn't yet handle many types (e.g., TypedDicts).

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
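For reference, a minimal client-side sketch of the methods listed above, assuming the example server defined below is running at http://localhost:8000. RemoteRunnable is the client this package adds; the standard Runnable methods (invoke/stream and their async counterparts) are proxied over HTTP:

from langserve import RemoteRunnable

# Point the client at one of the routes registered by the server below.
openai = RemoteRunnable("http://localhost:8000/openai/")

# Synchronous invocation; a bare string is valid per the server's input_type.
print(openai.invoke("Tell me a one-line joke"))

# Streaming; chat models yield message chunks with a .content attribute.
for chunk in openai.stream("Tell me a one-line joke"):
    print(chunk.content, end="", flush=True)

# Async variants (ainvoke/abatch/astream/astream_log) work the same way,
# e.g. `await openai.ainvoke("Tell me a one-line joke")` inside a coroutine.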


#!/usr/bin/env python
"""Example LangChain server that exposes multiple runnables (LLMs in this case)."""
from typing import List, Union

from fastapi import FastAPI
from langchain.chat_models import ChatAnthropic, ChatOpenAI
from langchain.prompts.chat import ChatPromptValue
from langchain.schema.messages import HumanMessage, SystemMessage

from langserve import add_routes

app = FastAPI(
    title="LangChain Server",
    version="1.0",
    description="Spin up a simple API server using LangChain's Runnable interfaces",
)

# Accept a chat prompt value, a bare string, or a list of messages/strings as input.
LLMInput = Union[List[Union[SystemMessage, HumanMessage, str]], str, ChatPromptValue]

# Serve an OpenAI chat model at /openai.
add_routes(
    app,
    ChatOpenAI(),
    path="/openai",
    input_type=LLMInput,
    config_keys=[],
)

# Serve an Anthropic chat model at /anthropic.
add_routes(
    app,
    ChatAnthropic(),
    path="/anthropic",
    input_type=LLMInput,
    config_keys=[],
)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="localhost", port=8000)
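
Each route registered with add_routes can also be called over plain HTTP. A minimal sketch, assuming the invoke endpoint at this version accepts a JSON body of the form {"input": ...} (the endpoint path and payload shape are assumptions to verify against the generated docs):

import requests

# POST to the invoke endpoint behind the /anthropic route registered above.
response = requests.post(
    "http://localhost:8000/anthropic/invoke",
    json={"input": "Tell me a one-line joke"},  # assumed LangServe request shape
)
print(response.json())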