docs: Added Deploying LLMs into production + a new ecosystem (#4047)

Signed-off-by: Kourosh Hakhamaneshi <kourosh@anyscale.com>
Co-authored-by: Kamil Kaczmarek <kaczmarek.poczta@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>

@ -0,0 +1,137 @@
============================
Deploying LLMs in Production
============================
In today's fast-paced technological landscape, the use of Large Language Models (LLMs) is rapidly expanding. As a result, it's crucial for developers to understand how to effectively deploy these models in production environments. LLM interfaces typically fall into two categories:
- **Case 1: Utilizing External LLM Providers (OpenAI, Anthropic, etc.)**
In this scenario, most of the computational burden is handled by the LLM providers, while LangChain simplifies the implementation of business logic around these services. This approach includes features such as prompt templating, chat message generation, caching, vector embedding database creation, preprocessing, etc.
- **Case 2: Self-hosted Open-Source Models**
Alternatively, developers can opt to use smaller, yet comparably capable, self-hosted open-source LLMs. This approach can significantly decrease costs and latency, and it sidesteps the privacy concerns associated with transferring data to external LLM providers.
Regardless of the framework that forms the backbone of your product, deploying LLM applications comes with its own set of challenges. It's vital to understand the trade-offs and key considerations when evaluating serving frameworks.
Outline
=======
This guide aims to provide a comprehensive overview of the requirements for deploying LLMs in a production setting, focusing on:
- `Designing a Robust LLM Application Service <#robust>`_
- `Maintaining Cost-Efficiency <#cost>`_
- `Ensuring Rapid Iteration <#iteration>`_
Understanding these components is crucial when assessing serving systems. LangChain integrates with several open-source projects designed to tackle these issues, providing a robust framework for productionizing your LLM applications. Some notable frameworks include:
- `Ray Serve <../../../ecosystem/ray_serve.html>`_
- `BentoML <https://github.com/ssheng/BentoChain>`_
- `Modal <../../../ecosystem/modal.html>`_
These links will provide further information on each ecosystem, assisting you in finding the best fit for your LLM deployment needs.
Designing a Robust LLM Application Service
===========================================
.. _robust:
When deploying an LLM service in production, it's imperative to provide a seamless user experience free from outages. Achieving 24/7 service availability involves creating and maintaining several sub-systems surrounding your application.
Monitoring
----------
Monitoring forms an integral part of any system running in a production environment. In the context of LLMs, it is essential to monitor both performance and quality metrics.
**Performance Metrics:** These metrics provide insights into the efficiency and capacity of your model. Here are some key examples:
- Queries Per Second (QPS): This measures the number of queries your model processes in a second, offering insights into its utilization.
- Latency: This metric quantifies the delay from when your client sends a request to when they receive a response.
- Tokens Per Second (TPS): This represents the number of tokens your model can generate in a second.
**Quality Metrics:** These metrics are typically customized according to the business use-case. For instance, how does the output of your system compare to a baseline, such as a previous version? Although these metrics can be calculated offline, you need to log the necessary data to use them later.
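As a rough illustration, the sketch below shows one way to record latency and tokens per second around a single model call; ``call_llm`` is a stand-in for your actual model or provider call, and a real service would ship these numbers to a metrics backend (Prometheus, CloudWatch, etc.) rather than printing them.

.. code-block:: python

    import time

    def call_llm(prompt: str) -> str:
        # Placeholder for a real model or provider call.
        return "This is a stubbed completion."

    def timed_generate(prompt: str) -> str:
        start = time.perf_counter()
        completion = call_llm(prompt)
        latency = time.perf_counter() - start

        # Rough token count; a real service would use the model's tokenizer.
        num_tokens = len(completion.split())
        tokens_per_second = num_tokens / latency if latency > 0 else float("inf")

        # In production, emit these to your metrics backend instead of stdout.
        print(f"latency_s={latency:.3f} tokens_per_s={tokens_per_second:.1f}")
        return completion

    timed_generate("What is LangChain?")
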
Fault tolerance
---------------
Your application may encounter errors such as exceptions in your model inference or business logic code, causing failures and disrupting traffic. Other potential issues could arise from the machine running your application, such as unexpected hardware breakdowns or loss of spot-instances during high-demand periods. One way to mitigate these risks is by increasing redundancy through replica scaling and implementing recovery mechanisms for failed replicas. However, model replicas aren't the only potential points of failure. It's essential to build resilience against various failures that could occur at any point in your stack.
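As one simple building block, the sketch below retries a flaky model call with exponential backoff; ``flaky_llm_call`` and the retry parameters are illustrative placeholders rather than part of any specific framework.

.. code-block:: python

    import random
    import time

    def flaky_llm_call(prompt: str) -> str:
        # Stand-in for a call that can fail transiently (network error, lost replica, etc.).
        if random.random() < 0.3:
            raise RuntimeError("transient failure")
        return "ok"

    def call_with_retries(prompt: str, max_attempts: int = 3, base_delay: float = 0.5) -> str:
        for attempt in range(1, max_attempts + 1):
            try:
                return flaky_llm_call(prompt)
            except RuntimeError:
                if attempt == max_attempts:
                    raise
                # Exponential backoff before retrying (ideally on another replica).
                time.sleep(base_delay * 2 ** (attempt - 1))

    print(call_with_retries("hello"))
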
Zero-downtime upgrades
----------------------
System upgrades are often necessary but can result in service disruptions if not handled correctly. One way to prevent downtime during upgrades is by implementing a smooth transition process from the old version to the new one. Ideally, the new version of your LLM service is deployed, and traffic gradually shifts from the old to the new version, maintaining a constant QPS throughout the process.
Load balancing
--------------
Load balancing, in simple terms, is a technique to distribute work evenly across multiple computers, servers, or other resources to optimize the utilization of the system, maximize throughput, minimize response time, and avoid overload of any single resource. Think of it as a traffic officer directing cars (requests) to different roads (servers) so that no single road becomes too congested.
There are several strategies for load balancing. For example, one common method is the *Round Robin* strategy, where each request is sent to the next server in line, cycling back to the first when all servers have received a request. This works well when all servers are equally capable. However, if some servers are more powerful than others, you might use a *Weighted Round Robin* or *Least Connections* strategy, where more requests are sent to the more powerful servers, or to those currently handling the fewest active requests.

Let's imagine you're running an LLM chain. If your application becomes popular, you could have hundreds or even thousands of users asking questions at the same time. If one server gets too busy (high load), the load balancer would direct new requests to another server that is less busy. This way, all your users get a timely response and the system remains stable.
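For illustration only, here is a minimal client-side Round Robin sketch; the replica URLs are hypothetical, and in practice this job is usually handled by a dedicated load balancer or by your serving framework.

.. code-block:: python

    from itertools import cycle

    # Hypothetical replica endpoints; a real deployment would discover these dynamically.
    REPLICAS = cycle([
        "http://llm-replica-1:8000",
        "http://llm-replica-2:8000",
        "http://llm-replica-3:8000",
    ])

    def route_request(prompt: str) -> str:
        # Pick the next replica in round-robin order.
        target = next(REPLICAS)
        # A real client would POST the prompt to `target` here.
        return target

    for i in range(5):
        print(route_request(f"question {i}"))
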
Maintaining Cost-Efficiency and Scalability
============================================
.. _cost:
Deploying LLM services can be costly, especially when you're handling a large volume of user interactions. LLM providers usually charge per token, so running a chat system on top of these models can quickly become expensive. However, several strategies can help manage these costs without compromising the quality of the service.
Self-hosting models
-------------------
Several smaller and open-source LLMs are emerging to tackle the issue of reliance on LLM providers. Self-hosting allows you to maintain similar quality to LLM provider models while managing costs. The challenge lies in building a reliable, high-performing LLM serving system on your own machines.
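As one possible starting point, the sketch below wires a small open-source model into a LangChain chain via the ``HuggingFacePipeline`` wrapper; the model id and generation settings are only examples, and it assumes ``transformers`` is installed.

.. code-block:: python

    from langchain.llms import HuggingFacePipeline
    from langchain import PromptTemplate, LLMChain

    # Load a small open-source model locally (weights are downloaded on first run).
    llm = HuggingFacePipeline.from_model_id(
        model_id="gpt2",
        task="text-generation",
        model_kwargs={"max_length": 64},
    )

    prompt = PromptTemplate(
        template="Question: {question}\nAnswer:", input_variables=["question"]
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    print(chain.run("What is a load balancer?"))
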
Resource Management and Auto-Scaling
------------------------------------
Computational logic within your application requires precise resource allocation. For instance, if part of your traffic is served by an OpenAI endpoint and another part by a self-hosted model, it's crucial to allocate suitable resources for each. Auto-scaling—adjusting resource allocation based on traffic—can significantly impact the cost of running your application. This strategy requires a balance between cost and responsiveness, ensuring neither resource over-provisioning nor compromised application responsiveness.
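As a hedged example of what this can look like in practice, the sketch below configures autoscaling for a Ray Serve deployment; the field names and values are illustrative and may differ across Ray Serve versions.

.. code-block:: python

    from ray import serve
    from starlette.requests import Request

    # Illustrative autoscaling settings: scale between 1 and 4 replicas
    # based on the number of in-flight requests per replica.
    @serve.deployment(
        autoscaling_config={
            "min_replicas": 1,
            "max_replicas": 4,
            "target_num_ongoing_requests_per_replica": 5,
        },
    )
    class SelfHostedLLM:
        async def __call__(self, request: Request) -> str:
            # Replace with real model inference.
            return "response from a self-hosted replica"

    # Bind and run as usual, e.g. serve.run(SelfHostedLLM.bind()).
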
Utilizing Spot Instances
------------------------
On platforms like AWS, spot instances offer substantial cost savings, typically priced at about a third of on-demand instances. The trade-off is a higher crash rate, necessitating a robust fault-tolerance mechanism for effective use.
Independent Scaling
-------------------
When self-hosting your models, you should consider independent scaling. For example, if you have two translation models, one fine-tuned for French and another for Spanish, incoming requests might necessitate different scaling requirements for each.
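A minimal Ray Serve sketch of this idea follows; the translator classes and replica counts are purely illustrative.

.. code-block:: python

    from ray import serve
    from starlette.requests import Request

    # Separate deployments let each model scale independently of the other.
    @serve.deployment(num_replicas=4)
    class FrenchTranslator:
        async def __call__(self, request: Request) -> str:
            return "bonjour"

    @serve.deployment(num_replicas=1)
    class SpanishTranslator:
        async def __call__(self, request: Request) -> str:
            return "hola"
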
Batching requests
-----------------
In the context of Large Language Models, batching requests can enhance efficiency by better utilizing your GPU resources. GPUs are inherently parallel processors, designed to handle multiple tasks simultaneously. If you send individual requests to the model, the GPU might not be fully utilized as it's only working on a single task at a time. On the other hand, by batching requests together, you're allowing the GPU to work on multiple tasks at once, maximizing its utilization and improving inference speed. This not only leads to cost savings but can also improve the overall latency of your LLM service.
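As one concrete (and hedged) example, Ray Serve can transparently group individual requests into batches; the sketch below uses its batching decorator with illustrative parameters.

.. code-block:: python

    from ray import serve
    from starlette.requests import Request

    @serve.deployment
    class BatchedLLM:
        # Group up to 8 requests that arrive within 100 ms into one forward pass.
        @serve.batch(max_batch_size=8, batch_wait_timeout_s=0.1)
        async def generate(self, prompts):
            # `prompts` is a list assembled by Ray Serve; a real model would
            # run a single batched call here instead of this placeholder.
            return [f"completion for: {p}" for p in prompts]

        async def __call__(self, request: Request) -> str:
            # Individual requests are transparently batched together.
            return await self.generate(request.query_params["text"])
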
In summary, managing costs while scaling your LLM services requires a strategic approach. Self-hosting models, managing resources effectively, employing auto-scaling, using spot instances, scaling models independently, and batching requests are key strategies to consider. Open-source libraries such as Ray Serve and BentoML are designed to handle these complexities.
Ensuring Rapid Iteration
========================
.. _iteration:
The LLM landscape is evolving at an unprecedented pace, with new libraries and model architectures being introduced constantly. Consequently, it's crucial to avoid tying yourself to a solution specific to one particular framework. This is especially relevant in serving, where changes to your infrastructure can be time-consuming, expensive, and risky. Strive for infrastructure that is not locked into any specific machine learning library or framework, but instead offers a general-purpose, scalable serving layer. Here are some aspects where flexibility plays a key role:
Model composition
-----------------
Deploying systems like LangChain demands the ability to piece together different models and connect them via logic. Take the example of building a natural-language SQL query engine. Querying an LLM and obtaining the SQL command is only part of the system. You need to extract metadata from the connected database, construct a prompt for the LLM, run the generated SQL query on an engine, collect the results, feed them back to the LLM, and present the final answer to the user. This demonstrates the need to seamlessly integrate various complex components built in Python into a dynamic chain of logical blocks that can be served together.
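To make the shape of such a system concrete, here is a stripped-down, stubbed sketch of the flow described above; ``fake_llm`` stands in for the real LLM calls and the SQLite table is a toy example.

.. code-block:: python

    import sqlite3

    def fake_llm(prompt: str) -> str:
        # Stand-in for two real LLM calls: one that writes SQL, one that phrases the answer.
        if prompt.rstrip().endswith("SQL:"):
            return "SELECT name FROM users LIMIT 3;"
        return "The first three users are Ada, Grace, and Alan."

    def answer_question(question: str, conn: sqlite3.Connection) -> str:
        # 1. Extract schema metadata from the connected database.
        schema = "\n".join(
            row[0] for row in conn.execute("SELECT sql FROM sqlite_master WHERE type='table'")
        )
        # 2. Construct a prompt and ask the LLM for a SQL command.
        sql = fake_llm(f"Schema:\n{schema}\n\nQuestion: {question}\nSQL:")
        # 3. Run the query, then feed the rows back to the LLM for the final answer.
        rows = conn.execute(sql).fetchall()
        return fake_llm(f"Question: {question}\nRows: {rows}\nAnswer:")

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?)", [("Ada",), ("Grace",), ("Alan",)])
    print(answer_question("Who are the first three users?", conn))
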
Cloud providers
---------------
Many hosted solutions are restricted to a single cloud provider, which can limit your options in today's multi-cloud world. Depending on where your other infrastructure components are built, you might prefer to stick with your chosen cloud provider.
Infrastructure as Code (IaC)
----------------------------
Rapid iteration also involves the ability to recreate your infrastructure quickly and reliably. This is where Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or Kubernetes YAML files come into play. They allow you to define your infrastructure in code files, which can be version controlled and quickly deployed, enabling faster and more reliable iterations.
CI/CD
-----
In a fast-paced environment, implementing CI/CD pipelines can significantly speed up the iteration process. They help automate the testing and deployment of your LLM applications, reducing the risk of errors and enabling faster feedback and iteration.

@ -6,6 +6,11 @@ This section covers several options for that. Note that these options are meant
What follows is a list of template GitHub repositories designed to be easily forked and modified to use your chain. This list is far from exhaustive, and we are EXTREMELY open to contributions here.
## [Anyscale](https://www.anyscale.com/model-serving)
Anyscale is a unified compute platform that makes it easy to develop, deploy, and manage scalable LLM applications in production using Ray.
With Anyscale you can scale the most challenging LLM-based workloads, and develop and deploy your LLM apps on a single compute platform.
## [Streamlit](https://github.com/hwchase17/langchain-streamlit-template)
This repo serves as a template for how to deploy a LangChain app with Streamlit.

@ -0,0 +1,233 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Ray Serve\n",
"\n",
"[Ray Serve](https://docs.ray.io/en/latest/serve/index.html) is a scalable model serving library for building online inference APIs. Serve is particularly well suited for system composition, enabling you to build a complex inference service consisting of multiple chains and business logic all in Python code. "
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Goal of this notebook\n",
"This notebook shows a simple example of how to deploy an OpenAI chain into production. You can extend it to deploy your own self-hosted models where you can easily define amount of hardware resources (GPUs and CPUs) needed to run your model in production efficiently. Read more about available options including autoscaling in the Ray Serve [documentation](https://docs.ray.io/en/latest/serve/getting_started.html).\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup Ray Serve\n",
"Install ray with `pip install ray[serve]`. "
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## General Skeleton"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The general skeleton for deploying a service is the following:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# 0: Import ray serve and request from starlette\n",
"from ray import serve\n",
"from starlette.requests import Request\n",
"\n",
"# 1: Define a Ray Serve deployment.\n",
"@serve.deployment\n",
"class LLMServe:\n",
"\n",
" def __init__(self) -> None:\n",
" # All the initialization code goes here\n",
" pass\n",
"\n",
" async def __call__(self, request: Request) -> str:\n",
" # You can parse the request here\n",
" # and return a response\n",
" return \"Hello World\"\n",
"\n",
"# 2: Bind the model to deployment\n",
"deployment = LLMServe.bind()\n",
"\n",
"# 3: Run the deployment\n",
"serve.api.run(deployment)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Shutdown the deployment\n",
"serve.api.shutdown()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example of deploying and OpenAI chain with custom prompts"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Get an OpenAI API key from [here](https://platform.openai.com/account/api-keys). By running the following code, you will be asked to provide your API key."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain import PromptTemplate, LLMChain"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from getpass import getpass\n",
"OPENAI_API_KEY = getpass()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"@serve.deployment\n",
"class DeployLLM:\n",
"\n",
" def __init__(self):\n",
" # We initialize the LLM, template and the chain here\n",
" llm = OpenAI(openai_api_key=OPENAI_API_KEY)\n",
" template = \"Question: {question}\\n\\nAnswer: Let's think step by step.\"\n",
" prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
" self.chain = LLMChain(llm=llm, prompt=prompt)\n",
"\n",
" def _run_chain(self, text: str):\n",
" return self.chain(text)\n",
"\n",
" async def __call__(self, request: Request):\n",
" # 1. Parse the request\n",
" text = request.query_params[\"text\"]\n",
" # 2. Run the chain\n",
" resp = self._run_chain(text)\n",
" # 3. Return the response\n",
" return resp[\"text\"]"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can bind the deployment."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Bind the model to deployment\n",
"deployment = DeployLLM.bind()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"We can assign the port number and host when we want to run the deployment. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Example port number\n",
"PORT_NUMBER = 8282\n",
"# Run the deployment\n",
"serve.api.run(deployment, port=PORT_NUMBER)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that service is deployed on port `localhost:8282` we can send a post request to get the results back."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
"\n",
"text = \"What NFL team won the Super Bowl in the year Justin Beiber was born?\"\n",
"response = requests.post(f'http://localhost:{PORT_NUMBER}/?text={text}')\n",
"print(response.content.decode())"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "ray",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}

@ -176,6 +176,8 @@ Additional Resources
- `Gallery <https://github.com/kyrolabs/awesome-langchain>`_: A collection of great projects that use Langchain, compiled by the folks at `Kyrolabs <https://kyrolabs.com>`_. Useful for finding inspiration and example implementations.
- `Deploying LLMs in Production <./additional_resources/deploy_llms.html>`_: A collection of best practices and tutorials for deploying LLMs in production.
- `Tracing <./additional_resources/tracing.html>`_: A guide on using tracing in LangChain to visualize the execution of chains and agents.
- `Model Laboratory <./additional_resources/model_laboratory.html>`_: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.
@ -194,6 +196,8 @@ Additional Resources
:hidden:
LangChainHub <https://github.com/hwchase17/langchain-hub>
./additional_resources/deployments.md
./additional_resources/deploy_llms.rst
Gallery <https://github.com/kyrolabs/awesome-langchain>
./additional_resources/tracing.md
./additional_resources/model_laboratory.ipynb
