beef up getting started (#8139)

Co-authored-by: Bagatur <baskaryan@gmail.com>
Harrison Chase 2023-07-23 19:57:43 -07:00 committed by GitHub
parent fa8906a9b7
commit 33fd6184ba
14 changed files with 179 additions and 354 deletions

View File

@@ -22,28 +22,74 @@ import OpenAISetup from "@snippets/get_started/quickstart/openai_setup.mdx"

## Building an application

Now we can start building our language model application. LangChain provides many modules that can be used to build language model applications.
Modules can be used as stand-alone components in simple applications, and they can be combined for more complex use cases.
The core building block of LangChain applications is the LLMChain.
This combines three things:
- LLM: The language model is the core reasoning engine here. In order to work with LangChain, you need to understand the different types of language models and how to work with them.
- Prompt Templates: This provides instructions to the language model. This controls what the language model outputs, so understanding how to construct prompts and different prompting strategies is crucial.
- Output Parsers: These translate the raw response from the LLM to a more workable format, making it easy to use the output downstream.
In this getting started guide we will cover those three components by themselves, and then cover the LLMChain which combines all of them.
Understanding these concepts will set you up well for being able to use and customize LangChain applications.
Most LangChain applications allow you to configure the LLM and/or the prompt used, so knowing how to take advantage of this will be a big enabler.
## LLMs

There are two types of language models, which in LangChain are called:

- LLMs: this is a language model which takes a string as input and returns a string
- ChatModels: this is a language model which takes a list of messages as input and returns a message

The input/output for LLMs is simple and easy to understand - a string.
But what about ChatModels? The input there is a list of `ChatMessage`s, and the output is a single `ChatMessage`.
A `ChatMessage` has two required components:

- `content`: This is the content of the message.
- `role`: This is the role of the entity the `ChatMessage` is coming from.
LangChain provides several objects to easily distinguish between different roles:

- `HumanMessage`: A `ChatMessage` coming from a human/user.
- `AIMessage`: A `ChatMessage` coming from an AI/assistant.
- `SystemMessage`: A `ChatMessage` coming from the system.
- `FunctionMessage`: A `ChatMessage` coming from a function call.

If none of those roles sound right, there is also a `ChatMessage` class where you can specify the role manually.
For more information on how to use these different messages most effectively, see our prompting guide.
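For example, here's a quick sketch of constructing these messages by hand (the message contents are just made up for illustration):

```python
from langchain.schema import AIMessage, ChatMessage, HumanMessage, SystemMessage

# Each message pairs a role (implied by the class) with some content.
messages = [
    SystemMessage(content="You are a helpful naming consultant."),
    HumanMessage(content="What would be a good company name for a company that makes colorful socks?"),
    AIMessage(content="Socks O'Color"),
]

# ChatMessage lets you set an arbitrary role when none of the built-in ones fit.
custom_message = ChatMessage(role="reviewer", content="That name works well.")
```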
LangChain exposes a standard interface for both, but it's useful to understand this difference in order to construct prompts for a given language model.
The standard interface that LangChain exposes has two methods:
- `predict`: Takes in a string, returns a string
- `predict_messages`: Takes in a list of messages, returns a message.
Let's see how to work with these different types of models and these different types of inputs.
First, let's import an LLM and a ChatModel.
import ImportLLMs from "@snippets/get_started/quickstart/import_llms.mdx"
<ImportLLMs/>
The `OpenAI` and `ChatOpenAI` objects are basically just configuration objects.
You can initialize them with parameters like `temperature` and others, and pass them around.
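For example, a minimal sketch of configuring them at construction time (the temperature values here are just illustrative) looks like:

```python
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI

# A higher temperature makes outputs more random; a lower one makes them more deterministic.
llm = OpenAI(temperature=0.9)
chat_model = ChatOpenAI(temperature=0)
```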
Next, let's use the `predict` method to run over a string input.
import InputString from "@snippets/get_started/quickstart/input_string.mdx"
<InputString/>
Finally, let's use the `predict_messages` method to run over a list of messages.
import InputMessages from "@snippets/get_started/quickstart/input_messages.mdx"
<InputMessages/>
For both of these methods, you can also pass in parameters as keyword arguments.
For example, you could pass in `temperature=0` to override the temperature the object was configured with.
Whatever values are passed in at runtime will always override what the object was configured with.
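As a rough sketch (assuming the `llm` object from the import snippet above, and that `predict` forwards keyword arguments to the underlying model call):

```python
# temperature=0 here overrides whatever temperature llm was constructed with, for this call only.
llm.predict("What would be a good company name for a company that makes colorful socks?", temperature=0)
```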
## Prompt templates

@@ -51,108 +97,66 @@ Most LLM applications do not pass user input directly into an LLM. Usually they

In the previous example, the text we passed to the model contained instructions to generate a company name. For our application, it'd be great if the user only had to provide the description of a company/product, without having to worry about giving the model instructions.
PromptTemplates help with exactly this!
They bundle up all the logic for going from user input into a fully formatted prompt.
This can start off very simple - for example, a prompt to produce the above string would just be:
import PromptTemplateLLM from "@snippets/get_started/quickstart/prompt_templates_llms.mdx"
import PromptTemplateChatModel from "@snippets/get_started/quickstart/prompt_templates_chat_models.mdx"
<PromptTemplateLLM/>
However, the advantages of using these over raw string formatting are several.
You can "partial" out variables - e.g. you can format only some of the variables at a time.
You can compose them together, easily combining different templates into a single prompt.
For explanations of these functionalities, see the [section on prompts](/docs/modules/model_io/prompts) for more detail.
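For example, here is a small sketch of partial formatting (the `adjective` and `product` variables are made up for illustration):

```python
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template(
    "What is a good name for a company that makes {adjective} {product}?"
)

# Fill in only some of the variables now ("partial"), and the rest later.
partial_prompt = prompt.partial(adjective="colorful")
partial_prompt.format(product="socks")
# >> "What is a good name for a company that makes colorful socks?"
```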
PromptTemplates can also be used to produce a list of messages.
In this case, the prompt not only contains information about the content, but also about each message (its role, its position in the list, etc.).
Here, what happens most often is that a ChatPromptTemplate is a list of ChatMessageTemplates.
Each ChatMessageTemplate contains instructions for how to format that ChatMessage - its role, and then also its content.
Let's take a look at this below:
<PromptTemplateChatModel/>
ChatPromptTemplates can also include other things besides ChatMessageTemplates - see the [section on prompts](/docs/modules/model_io/prompts) for more detail.
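One sketch of that, using the `MessagesPlaceholder` class from `langchain.prompts` (the same class that appears in the memory snippets further down this diff), is reserving a slot that gets filled with a list of prior messages at format time:

```python
from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
)

# Besides message templates, a ChatPromptTemplate can hold a placeholder
# that is filled with a list of messages (e.g. chat history) when formatted.
chat_prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template("You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    HumanMessagePromptTemplate.from_template("{input}"),
])
```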
## Output Parsers

OutputParsers convert the raw output of an LLM into a format that can be used downstream.
There are a few main types of OutputParsers, including:

- Convert text from an LLM -> structured information (e.g. JSON)
- Convert a ChatMessage into just a string
- Convert the extra information returned from a call besides the message (like OpenAI function invocation) into a string.

For full information on this, see the [section on output parsers](/docs/modules/model_io/output_parsers).

In this getting started guide, we will write our own output parser - one that converts a comma-separated string into a list.

import OutputParser from "@snippets/get_started/quickstart/output_parser.mdx"

<OutputParser/>
## LLMChain

We can now combine all of these into one chain.
This chain will take input variables, pass those to a prompt template to create a prompt, pass the prompt to an LLM, and then pass the output through an (optional) output parser.
This is a convenient way to bundle up a modular piece of logic.
Let's see it in action!

import LLMChain from "@snippets/get_started/quickstart/llm_chain.mdx"

<LLMChain/>
## Next Steps

This is it!
We've now gone over how to create the core building block of LangChain applications - the LLMChain.
There is a lot more nuance in all of these components (LLMs, prompts, output parsers), and many more components to learn about as well.
To continue on your journey:

- [Dive deeper](/docs/modules/model_io) into LLMs, prompts, and output parsers
- Learn the other [key components](/docs/modules)
- Check out our [helpful guides](/docs/guides) for detailed walkthroughs on particular topics
- Explore [end-to-end use cases](/docs/use_cases)

View File

@@ -1,55 +0,0 @@
```python
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
# First, let's load the language model we're going to use to control the agent.
chat = ChatOpenAI(temperature=0)
# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
# Now let's test it out!
agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
```
```pycon
> Entering new AgentExecutor chain...
Thought: I need to use a search engine to find Olivia Wilde's boyfriend and a calculator to raise his age to the 0.23 power.
Action:
{
"action": "Search",
"action_input": "Olivia Wilde boyfriend"
}
Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Thought:I need to use a search engine to find Harry Styles' current age.
Action:
{
"action": "Search",
"action_input": "Harry Styles age"
}
Observation: 29 years
Thought:Now I need to calculate 29 raised to the 0.23 power.
Action:
{
"action": "Calculator",
"action_input": "29^0.23"
}
Observation: Answer: 2.169459462491557
Thought:I now know the final answer.
Final Answer: 2.169459462491557
> Finished chain.
'2.169459462491557'
```

View File

@@ -1,37 +0,0 @@
```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI
# The language model we're going to use to control the agent.
llm = OpenAI(temperature=0)
# The tools we'll give the Agent access to. Note that the 'llm-math' tool uses an LLM, so we need to pass that in.
tools = load_tools(["serpapi", "llm-math"], llm=llm)
# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
# Let's test it out!
agent.run("What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power?")
```
```pycon
> Entering new AgentExecutor chain...
Thought: I need to find the temperature first, then use the calculator to raise it to the .023 power.
Action: Search
Action Input: "High temperature in SF yesterday"
Observation: San Francisco Temperature Yesterday. Maximum temperature yesterday: 57 °F (at 1:56 pm) Minimum temperature yesterday: 49 °F (at 1:56 am) Average temperature ...
Thought: I now have the temperature, so I can use the calculator to raise it to the .023 power.
Action: Calculator
Action Input: 57^.023
Observation: Answer: 1.0974509573251117
Thought: I now know the final answer
Final Answer: The high temperature in SF yesterday in Fahrenheit raised to the .023 power is 1.0974509573251117.
> Finished chain.
```
```pycon
The high temperature in SF yesterday in Fahrenheit raised to the .023 power is 1.0974509573251117.
```

View File

@@ -1,23 +0,0 @@
```python
from langchain import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
)
chat = ChatOpenAI(temperature=0)
template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
chain = LLMChain(llm=chat, prompt=chat_prompt)
chain.run(input_language="English", output_language="French", text="I love programming.")
```
```pycon
J'aime programmer.
```

View File

@@ -1,17 +0,0 @@
Using this we can replace
```python
llm.predict("What would be a good company name for a company that makes colorful socks?")
```
with
```python
from langchain.chains import LLMChain
chain = LLMChain(llm=llm, prompt=prompt)
chain.run("colorful socks")
```
```pycon
Feetful of Fun
```

View File

@@ -1,21 +0,0 @@
```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage
)
chat = ChatOpenAI(temperature=0)
chat.predict_messages([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
# >> AIMessage(content="J'aime programmer.", additional_kwargs={})
```
It is useful to understand how chat models are different from a normal LLM, but it can often be handy to just be able to treat them the same.
LangChain makes that easy by also exposing an interface through which you can interact with a chat model as you would a normal LLM.
You can access this through the `predict` interface.
```python
chat.predict("Translate this sentence from English to French. I love programming.")
# >> J'aime programmer
```

View File

@@ -0,0 +1,13 @@
```python
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
llm = OpenAI()
chat_model = ChatOpenAI()
llm.predict("hi!")
>>> "Hi"
chat_model.predict("hi!")
>>> "Hi"
```

View File

@@ -0,0 +1,12 @@
```python
from langchain.schema import HumanMessage
text = "What would be a good company name for a company that makes colorful socks?"
messages = [HumanMessage(content=text)]
llm.predict_messages(messages)
# >> Feetful of Fun
chat_model.predict_messages(messages)
# >> Socks O'Color
```

View File

@@ -0,0 +1,9 @@
```python
text = "What would be a good company name for a company that makes colorful socks?"
llm.predict(text)
# >> Feetful of Fun
chat_model.predict(text)
# >> Socks O'Color
```

View File

@@ -1,13 +0,0 @@
```python
from langchain.llms import OpenAI
llm = OpenAI(temperature=0.9)
```
And now we can pass in text and get predictions!
```python
llm.predict("What would be a good company name for a company that makes colorful socks?")
# >> Feetful of Fun
```

View File

@@ -0,0 +1,34 @@
```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.chains import LLMChain
from langchain.schema import BaseOutputParser

class CommaSeparatedListOutputParser(BaseOutputParser):
    """Parse the output of an LLM call to a comma-separated list."""

    def parse(self, text: str):
        """Parse the output of an LLM call."""
        return text.strip().split(", ")

template = """You are a helpful assistant who generates comma separated lists.
A user will pass in a category, and you should generate 5 objects in that category in a comma separated list.
ONLY return a comma separated list, and nothing more."""
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

chain = LLMChain(
    llm=ChatOpenAI(),
    prompt=chat_prompt,
    output_parser=CommaSeparatedListOutputParser()
)
chain.run("colors")
# >> ['red', 'blue', 'green', 'yellow', 'orange']
```

View File

@@ -1,44 +0,0 @@
```python
from langchain.prompts import (
ChatPromptTemplate,
MessagesPlaceholder,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate
)
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
prompt = ChatPromptTemplate.from_messages([
SystemMessagePromptTemplate.from_template(
"The following is a friendly conversation between a human and an AI. The AI is talkative and "
"provides lots of specific details from its context. If the AI does not know the answer to a "
"question, it truthfully says it does not know."
),
MessagesPlaceholder(variable_name="history"),
HumanMessagePromptTemplate.from_template("{input}")
])
llm = ChatOpenAI(temperature=0)
memory = ConversationBufferMemory(return_messages=True)
conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm)
conversation.predict(input="Hi there!")
```
```pycon
Hello! How can I assist you today?
```
```python
conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
```
```pycon
That sounds like fun! I'm happy to chat with you. Is there anything specific you'd like to talk about?
```
```python
conversation.predict(input="Tell me about yourself.")
```
```pycon
Sure! I am an AI language model created by OpenAI. I was trained on a large dataset of text from the internet, which allows me to understand and generate human-like language. I can answer questions, provide information, and even have conversations like this one. Is there anything else you'd like to know about me?
```

View File

@@ -1,51 +0,0 @@
```python
from langchain import OpenAI, ConversationChain
llm = OpenAI(temperature=0)
conversation = ConversationChain(llm=llm, verbose=True)
conversation.run("Hi there!")
```
Here's what's going on under the hood:
```pycon
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi there!
AI:
> Finished chain.
>> 'Hello! How are you today?'
```
Now if we run the chain again
```python
conversation.run("I'm doing well! Just having a conversation with an AI.")
```
we'll see that the full prompt that's passed to the model contains the input and output of our first interaction, along with our latest input
```pycon
> Entering new chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: Hi there!
AI: Hello! How are you today?
Human: I'm doing well! Just having a conversation with an AI.
AI:
> Finished chain.
>> "That's great! What would you like to talk about?"
```

View File

@@ -0,0 +1,14 @@
```python
from langchain.schema import BaseOutputParser
class CommaSeparatedListOutputParser(BaseOutputParser):
    """Parse the output of an LLM call to a comma-separated list."""

    def parse(self, text: str):
        """Parse the output of an LLM call."""
        return text.strip().split(", ")
CommaSeparatedListOutputParser().parse("hi, bye")
# >> ['hi', 'bye']
```