- Get set up with LangChain, LangSmith and LangServe
- Use the most basic and common components of LangChain: prompt templates, models, and output parsers
- Use LangChain Expression Language, the protocol that LangChain is built on and which facilitates component chaining
- Build a simple application with LangChain
- Trace your application with LangSmith
- Serve your application with LangServe
That's a fair amount to cover! Let's dive in.
## Setup
### Installation
To install LangChain run:
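```bash
pip install langchain
```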
For more details, see our [Installation guide](/docs/get_started/installation).
For more details, see our [Installation guide](/docs/get_started/installation).
### Environment
Using LangChain will usually require integrations with one or more model providers, data stores, APIs, etc. For this example, we'll use OpenAI's model APIs.
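First we'll need to install the OpenAI Python package and make our API key available; a common pattern is to set it as an environment variable:

```bash
pip install openai
export OPENAI_API_KEY="..."
```

If you'd prefer not to set an environment variable, you can pass the key in directly via the `openai_api_key` named parameter when initiating the `ChatOpenAI` class: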
```python
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(openai_api_key="...")
```
### LangSmith
Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls.
As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent.
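The best way to do this is with [LangSmith](https://smith.langchain.com). LangSmith is not required to use LangChain, but if you want trace logging, sign up and then set the standard LangSmith environment variables:

```bash
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="..."
```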
### LangServe

LangServe helps developers deploy LangChain chains as a REST API. You do not need to use LangServe to use LangChain, but in this guide we'll show how you can deploy your app with LangServe.
Install with:
```bash
pip install "langserve[all]"
```
## Building with LangChain
LangChain provides many modules that can be used to build language model applications.
Modules can be used as standalones in simple applications and they can be composed for more complex use cases.
The most common and most important chain that LangChain helps create contains three things: a language model (an LLM or chat model), a prompt template, and an output parser. In this guide we'll cover those three components individually, and then go over how to combine them.
Understanding these concepts will set you up well for being able to use and customize LangChain applications.
Most LangChain applications allow you to configure the model and/or the prompt, so knowing how to take advantage of this will be a big enabler.
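For instance (a minimal sketch; `temperature` is a standard `ChatOpenAI` parameter), swapping in a differently configured model is usually a one-line change:

```python
from langchain.chat_models import ChatOpenAI

# Lower temperature makes the model's output more deterministic
llm = ChatOpenAI(temperature=0)
```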
### LLM / Chat Model
There are two types of language models:

- LLMs: take a string as input and return a string
- Chat models: take a list of messages as input and return a message
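As a quick illustration (a minimal sketch; it assumes the OpenAI setup above and uses the `predict`/`predict_messages` convenience methods that both model types expose):

```python
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

llm = OpenAI()
chat_model = ChatOpenAI()

text = "What would be a good company name for a company that makes colorful socks?"
messages = [HumanMessage(content=text)]

llm.predict(text)                      # string in -> string out
chat_model.predict_messages(messages)  # messages in -> message out
```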
### Prompt templates
Most LLM applications do not pass user input directly into an LLM. Usually they will add the user input to a larger piece of text, called a prompt template, that provides additional context on the specific task at hand.
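For example (a minimal sketch; `PromptTemplate.from_template` builds a template from an f-string-style string, and the socks example is illustrative):

```python
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template("What is a good name for a company that makes {product}?")
prompt.format(product="colorful socks")
# -> 'What is a good name for a company that makes colorful socks?'
```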
### Composing with LCEL

We can now combine these components into a single chain. This chain will take input variables, pass those to a prompt template to create a prompt, pass the prompt to a language model, and then pass the output through an (optional) output parser. This is a convenient way to bundle up a modular piece of logic.
Let's see it in action!
```python
from typing import List

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema import BaseOutputParser


class CommaSeparatedListOutputParser(BaseOutputParser[List[str]]):
    """Parse the output of an LLM call to a comma-separated list."""

    def parse(self, text: str) -> List[str]:
        return text.strip().split(", ")


template = "You are a helpful assistant who generates comma separated lists. A user will pass in a category, and you should generate 5 objects in that category in a comma separated list, and nothing more."
human_template = "{text}"

chat_prompt = ChatPromptTemplate.from_messages([("system", template), ("human", human_template)])
chain = chat_prompt | ChatOpenAI() | CommaSeparatedListOutputParser()
chain.invoke({"text": "colors"})
# >> ['red', 'blue', 'green', 'yellow', 'orange']
```
Note that we are using the `|` syntax to join these components together.
This `|` syntax is powered by the LangChain Expression Language (LCEL) and relies on the universal `Runnable` interface that all of these objects implement.
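Because every component implements `Runnable`, the composed chain supports the same standard methods as its parts. A quick sketch, reusing `chain` from above:

```python
chain.invoke({"text": "animals"})                       # one input -> one parsed output
chain.batch([{"text": "colors"}, {"text": "animals"}])  # a list of inputs, run in parallel
```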
To learn more about LCEL, read the documentation [here](/docs/expression_language).
## Tracing with LangSmith
Assuming we've set our environment variables as shown in the beginning, all of the model and chain calls we've been making will have been automatically logged to LangSmith.
Once there, we can use LangSmith to debug and annotate our application traces, then turn them into datasets for evaluating future iterations of the application.
Check out what the trace for the above chain would look like:
description="A simple api server using Langchain's Runnable interfaces",
)
# 3. Adding chain route
add_routes(
app,
category_chain,
path="/category_chain",
)
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="localhost", port=8000)
```
And that's it! If we execute this file:
```bash
python serve.py
```
we should see our chain being served at localhost:8000.
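LangServe also exposes standard HTTP endpoints for each route (`/invoke`, `/batch`, `/stream`). As a sketch (assuming our chain's input schema with its `text` key), we can call the chain with `curl`:

```bash
curl -X POST http://localhost:8000/category_chain/invoke \
  -H "Content-Type: application/json" \
  -d '{"input": {"text": "colors"}}'
```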
### Playground
Every LangServe service comes with a simple built-in UI for configuring and invoking the application with streaming output and visibility into intermediate steps.
Head to http://localhost:8000/category_chain/playground/ to try it out!
### Client
Now let's set up a client for programmatically interacting with our service. We can easily do this with the `langserve.RemoteRunnable`.
Using this, we can interact with the served chain as if it were running client-side.
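For example (a minimal sketch; the URL matches the `path` we registered above):

```python
from langserve import RemoteRunnable

remote_chain = RemoteRunnable("http://localhost:8000/category_chain/")
remote_chain.invoke({"text": "colors"})
# >> ['red', 'blue', 'green', 'yellow', 'orange']
```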