big docs refactor (#1978)

Co-authored-by: Ankush Gola <ankush.gola@gmail.com>
Harrison Chase 2023-03-26 19:49:46 -07:00 committed by GitHub
parent b83e826510
commit 705431aecc
306 changed files with 5696 additions and 7036 deletions

View File

@ -24,4 +24,4 @@ To import this vectorstore:
from langchain.vectorstores import AtlasDB
```
-For a more detailed walkthrough of the AtlasDB wrapper, see [this notebook](../modules/indexes/vectorstore_examples/atlas.ipynb)
+For a more detailed walkthrough of the AtlasDB wrapper, see [this notebook](../modules/indexes/vectorstores/examples/atlas.ipynb)

View File

@ -17,4 +17,4 @@ To import this vectorstore:
from langchain.vectorstores import Chroma
```
-For a more detailed walkthrough of the Chroma wrapper, see [this notebook](../modules/indexes/examples/vectorstores.ipynb)
+For a more detailed walkthrough of the Chroma wrapper, see [this notebook](../modules/indexes/vectorstores/getting_started.ipynb)
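For orientation, the typical Chroma flow looks like this sketch (not part of this diff; assumes OPENAI_API_KEY is set, with `state_of_the_union.txt` as a stand-in document):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma

# Split a raw document into chunks, embed them, and index them in Chroma.
with open("state_of_the_union.txt") as f:
    raw_text = f.read()
texts = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_text(raw_text)

docsearch = Chroma.from_texts(texts, OpenAIEmbeddings())
docs = docsearch.similarity_search("What did the president say about the economy?")
```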

View File

@ -22,4 +22,4 @@ There exists a Cohere Embeddings wrapper, which you can access with
```python
from langchain.embeddings import CohereEmbeddings
```
-For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/embeddings.ipynb)
+For a more detailed walkthrough of this, see [this notebook](../modules/models/text_embedding/examples/embeddings.ipynb)
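For orientation, a minimal sketch of the wrapper in use (not part of this diff; assumes the `cohere` package is installed and COHERE_API_KEY is set):

```python
from langchain.embeddings import CohereEmbeddings

embeddings = CohereEmbeddings()
query_vector = embeddings.embed_query("Hello world")          # one string -> one vector
doc_vectors = embeddings.embed_documents(["Hello", "world"])  # list of strings -> list of vectors
```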

View File

@ -22,4 +22,4 @@ from langchain.vectorstores import DeepLake
```
-For a more detailed walkthrough of the Deep Lake wrapper, see [this notebook](../modules/indexes/vectorstore_examples/deeplake.ipynb)
+For a more detailed walkthrough of the Deep Lake wrapper, see [this notebook](../modules/indexes/vectorstores/examples/deeplake.ipynb)

View File

@ -18,7 +18,7 @@ There exists a GoogleSearchAPIWrapper utility which wraps this API. To import th
from langchain.utilities import GoogleSearchAPIWrapper
```
-For a more detailed walkthrough of this wrapper, see [this notebook](../modules/utils/examples/google_search.ipynb).
+For a more detailed walkthrough of this wrapper, see [this notebook](../modules/agents/tools/examples/google_search.ipynb).
### Tool
@ -29,4 +29,4 @@ from langchain.agents import load_tools
tools = load_tools(["google-search"])
```
-For more information on this, see [this page](../modules/agents/tools.md)
+For more information on this, see [this page](../modules/agents/tools/getting_started.md)
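For orientation, a minimal usage sketch (not part of this diff; assumes GOOGLE_API_KEY and GOOGLE_CSE_ID are set in the environment):

```python
from langchain.utilities import GoogleSearchAPIWrapper

search = GoogleSearchAPIWrapper()
# run() returns the top result snippets concatenated into a single string.
print(search.run("What is LangChain?"))
```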

View File

@ -58,7 +58,7 @@ So the final answer is: El Palmar, Spain
'El Palmar, Spain'
```
-For a more detailed walkthrough of this wrapper, see [this notebook](../modules/utils/examples/google_serper.ipynb).
+For a more detailed walkthrough of this wrapper, see [this notebook](../modules/agents/tools/examples/google_serper.ipynb).
### Tool
@ -69,4 +69,4 @@ from langchain.agents import load_tools
tools = load_tools(["google-serper"])
```
-For more information on this, see [this page](../modules/agents/tools.md)
+For more information on this, see [this page](../modules/agents/tools/getting_started.md)

View File

@ -30,7 +30,7 @@ To use the wrapper for a model hosted on Hugging Face Hub:
```python
from langchain.llms import HuggingFaceHub
```
-For a more detailed walkthrough of the Hugging Face Hub wrapper, see [this notebook](../modules/llms/integrations/huggingface_hub.ipynb)
+For a more detailed walkthrough of the Hugging Face Hub wrapper, see [this notebook](../modules/models/llms/integrations/huggingface_hub.ipynb)
### Embeddings
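A usage sketch (not part of this diff): `repo_id` selects the hosted model and HUGGINGFACEHUB_API_TOKEN must be set; `google/flan-t5-xl` is an illustrative choice:

```python
from langchain.llms import HuggingFaceHub

llm = HuggingFaceHub(repo_id="google/flan-t5-xl", model_kwargs={"temperature": 1e-10})
print(llm("What is the capital of France?"))
```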
@ -47,7 +47,7 @@ To use the wrapper for a model hosted on Hugging Face Hub:
```python
from langchain.embeddings import HuggingFaceHubEmbeddings
```
-For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/embeddings.ipynb)
+For a more detailed walkthrough of this, see [this notebook](../modules/models/text_embedding/examples/embeddings.ipynb)
### Tokenizer
@ -59,7 +59,7 @@ You can also use it to count tokens when splitting documents with
from langchain.text_splitter import CharacterTextSplitter
CharacterTextSplitter.from_huggingface_tokenizer(...)
```
-For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/textsplitter.ipynb)
+For a more detailed walkthrough of this, see [this notebook](../modules/indexes/text_splitters/examples/textsplitter.ipynb)
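Filled in, the elided call looks roughly like this (a sketch, not part of this diff; assumes `transformers` is installed, using GPT-2's tokenizer for illustration):

```python
from transformers import GPT2TokenizerFast
from langchain.text_splitter import CharacterTextSplitter

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
# Chunk sizes are now measured in GPT-2 tokens rather than characters.
text_splitter = CharacterTextSplitter.from_huggingface_tokenizer(
    tokenizer, chunk_size=100, chunk_overlap=0
)
chunks = text_splitter.split_text("Some long document text ...")
```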
### Datasets

View File

@ -17,4 +17,4 @@ To import this vectorstore:
from langchain.vectorstores import Milvus
```
-For a more detailed walkthrough of the Milvus wrapper, see [this notebook](../modules/indexes/vectorstore_examples/milvus.ipynb)
+For a more detailed walkthrough of the Milvus wrapper, see [this notebook](../modules/indexes/vectorstores/examples/milvus.ipynb)

View File

@ -21,7 +21,7 @@ If you are using a model hosted on Azure, you should use a different wrapper for t
```python
from langchain.llms import AzureOpenAI
```
-For a more detailed walkthrough of the Azure wrapper, see [this notebook](../modules/llms/integrations/azure_openai_example.ipynb)
+For a more detailed walkthrough of the Azure wrapper, see [this notebook](../modules/models/llms/integrations/azure_openai_example.ipynb)
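A usage sketch (not part of this diff): the Azure wrapper is invoked like the regular OpenAI LLM but takes the name of your Azure deployment; `my-deployment` below is a hypothetical placeholder, and the OPENAI_API_* environment variables must point at your Azure endpoint:

```python
from langchain.llms import AzureOpenAI

# "my-deployment" is a placeholder for the deployment name you chose in Azure.
llm = AzureOpenAI(deployment_name="my-deployment")
print(llm("Tell me a joke"))
```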
@ -31,7 +31,7 @@ There exists an OpenAI Embeddings wrapper, which you can access with
```python
from langchain.embeddings import OpenAIEmbeddings
```
-For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/embeddings.ipynb)
+For a more detailed walkthrough of this, see [this notebook](../modules/models/text_embedding/examples/embeddings.ipynb)
### Tokenizer
@ -44,7 +44,7 @@ You can also use it to count tokens when splitting documents with
from langchain.text_splitter import CharacterTextSplitter
CharacterTextSplitter.from_tiktoken_encoder(...)
```
-For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/textsplitter.ipynb)
+For a more detailed walkthrough of this, see [this notebook](../modules/indexes/text_splitters/examples/textsplitter.ipynb)
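Filled in, the elided call looks roughly like this (a sketch, not part of this diff; assumes the `tiktoken` package is installed):

```python
from langchain.text_splitter import CharacterTextSplitter

# Chunk sizes are measured in tiktoken tokens, matching how OpenAI models count them.
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=100, chunk_overlap=0
)
chunks = text_splitter.split_text("Some long document text ...")
```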
### Moderation
You can also access the OpenAI content moderation endpoint with
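The hunk cuts off mid-sentence here; as a hedged sketch of what calling the moderation endpoint through LangChain looks like (assumes the `OpenAIModerationChain` helper and OPENAI_API_KEY being set):

```python
from langchain.chains import OpenAIModerationChain

moderation_chain = OpenAIModerationChain()
# Returns the input text when it passes, and a canned violation message otherwise.
moderation_chain.run("This is fine.")
```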

View File

@ -18,4 +18,4 @@ To import this vectorstore:
from langchain.vectorstores import OpenSearchVectorSearch
```
-For a more detailed walkthrough of the OpenSearch wrapper, see [this notebook](../modules/indexes/vectorstore_examples/opensearch.ipynb)
+For a more detailed walkthrough of the OpenSearch wrapper, see [this notebook](../modules/indexes/vectorstores/examples/opensearch.ipynb)

View File

@ -26,4 +26,4 @@ from langchain.vectorstores.pgvector import PGVector
### Usage
-For a more detailed walkthrough of the PGVector Wrapper, see [this notebook](../modules/indexes/vectorstore_examples/pgvector.ipynb)
+For a more detailed walkthrough of the PGVector Wrapper, see [this notebook](../modules/indexes/vectorstores/examples/pgvector.ipynb)

View File

@ -17,4 +17,4 @@ To import this vectorstore:
from langchain.vectorstores import Pinecone
```
-For a more detailed walkthrough of the Pinecone wrapper, see [this notebook](../modules/indexes/vectorstore_examples/pinecone.ipynb)
+For a more detailed walkthrough of the Pinecone wrapper, see [this notebook](../modules/indexes/vectorstores/examples/pinecone.ipynb)
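For orientation, a usage sketch (not part of this diff; assumes the `pinecone-client` package, your Pinecone API key and environment, and a hypothetical index named `langchain-demo`):

```python
import pinecone
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

pinecone.init(api_key="...", environment="...")  # credentials from the Pinecone console
docsearch = Pinecone.from_texts(
    ["harrison worked at kensho"], OpenAIEmbeddings(), index_name="langchain-demo"
)
docs = docsearch.similarity_search("Where did harrison work?")
```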

View File

@ -46,4 +46,4 @@ This LLM is identical to the [OpenAI LLM](./openai), except that
- you can add `return_pl_id` when instantiating to return a PromptLayer request id to use [while tracking requests](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).
-PromptLayer also provides native wrappers for [`PromptLayerChatOpenAI`](../modules/chat/examples/promptlayer_chat_openai.ipynb) and `PromptLayerOpenAIChat`
+PromptLayer also provides native wrappers for [`PromptLayerChatOpenAI`](../modules/models/chat/examples/promptlayer_chat_openai.ipynb) and `PromptLayerOpenAIChat`

View File

@ -17,4 +17,4 @@ To import this vectorstore:
from langchain.vectorstores import Qdrant
```
-For a more detailed walkthrough of the Qdrant wrapper, see [this notebook](../modules/indexes/vectorstore_examples/qdrant.ipynb)
+For a more detailed walkthrough of the Qdrant wrapper, see [this notebook](../modules/indexes/vectorstores/examples/qdrant.ipynb)

View File

@ -15,7 +15,7 @@ custom LLMs, you can use the `SelfHostedPipeline` parent class.
from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
```
-For a more detailed walkthrough of the Self-hosted LLMs, see [this notebook](../modules/llms/integrations/self_hosted_examples.ipynb)
+For a more detailed walkthrough of the Self-hosted LLMs, see [this notebook](../modules/models/llms/integrations/self_hosted_examples.ipynb)
## Self-hosted Embeddings
There are several ways to use self-hosted embeddings with LangChain via Runhouse.
@ -26,6 +26,6 @@ the `SelfHostedEmbedding` class.
from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
```
-For a more detailed walkthrough of the Self-hosted Embeddings, see [this notebook](../modules/indexes/examples/embeddings.ipynb)
+For a more detailed walkthrough of the Self-hosted Embeddings, see [this notebook](../modules/models/text_embedding/examples/embeddings.ipynb)
##

View File

@ -55,4 +55,4 @@ from langchain.agents import load_tools
tools = load_tools(["searx-search"], searx_host="http://localhost:8888")
```
-For more information on tools, see [this page](../modules/agents/tools.md)
+For more information on tools, see [this page](../modules/agents/tools/getting_started.md)
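Beyond the tool form, the underlying wrapper can also be called directly — a sketch (not part of this diff) pointed at a self-hosted SearxNG instance:

```python
from langchain.utilities import SearxSearchWrapper

search = SearxSearchWrapper(searx_host="http://localhost:8888")
print(search.run("What is a large language model?"))
```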

View File

@ -17,7 +17,7 @@ There exists a SerpAPI utility which wraps this API. To import this utility:
from langchain.utilities import SerpAPIWrapper
```
-For a more detailed walkthrough of this wrapper, see [this notebook](../modules/utils/examples/serpapi.ipynb).
+For a more detailed walkthrough of this wrapper, see [this notebook](../modules/agents/tools/examples/serpapi.ipynb).
### Tool
@ -28,4 +28,4 @@ from langchain.agents import load_tools
tools = load_tools(["serpapi"])
```
-For more information on this, see [this page](../modules/agents/tools.md)
+For more information on this, see [this page](../modules/agents/tools/getting_started.md)

View File

@ -30,4 +30,4 @@ To import this vectorstore:
from langchain.vectorstores import Weaviate
```
-For a more detailed walkthrough of the Weaviate wrapper, see [this notebook](../modules/indexes/examples/vectorstores.ipynb)
+For a more detailed walkthrough of the Weaviate wrapper, see [this notebook](../modules/indexes/vectorstores/getting_started.ipynb)

View File

@ -20,7 +20,7 @@ There exists a WolframAlphaAPIWrapper utility which wraps this API. To import th
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper
```
-For a more detailed walkthrough of this wrapper, see [this notebook](../modules/utils/examples/wolfram_alpha.ipynb).
+For a more detailed walkthrough of this wrapper, see [this notebook](../modules/agents/tools/examples/wolfram_alpha.ipynb).
### Tool
@ -31,4 +31,4 @@ from langchain.agents import load_tools
tools = load_tools(["wolfram-alpha"])
```
-For more information on this, see [this page](../modules/agents/tools.md)
+For more information on this, see [this page](../modules/agents/tools/getting_started.md)
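For orientation, a usage sketch of the wrapper itself (not part of this diff; assumes WOLFRAM_ALPHA_APPID is set and the `wolframalpha` package is installed):

```python
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper

wolfram = WolframAlphaAPIWrapper()
print(wolfram.run("What is 2x+5 = -3x + 7?"))
```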

View File

@ -36,7 +36,7 @@ os.environ["OPENAI_API_KEY"] = "..."
```
-## Building a Language Model Application
+## Building a Language Model Application: LLMs
Now that we have installed LangChain and set up our environment, we can start building our language model application.
@ -160,7 +160,7 @@ This is one of the simpler types of chains, but understanding how it works will
`````
-`````{dropdown} Agents: Dynamically call chains based on user input
+`````{dropdown} Agents: Dynamically Call Chains Based on User Input
So far the chains we've looked at run in a predetermined order.
@ -238,7 +238,7 @@ Final Answer: Jason Sudeikis, Olivia Wilde's boyfriend, is 47 years old and his
`````
-`````{dropdown} Memory: Add state to chains and agents
+`````{dropdown} Memory: Add State to Chains and Agents
So far, all the chains and agents we've gone through have been stateless. But often, you may want a chain or agent to have some concept of "memory" so that it may remember information about its previous interactions. The clearest and simplest example of this is when designing a chatbot - you want it to remember previous messages so it can use context from that to have a better conversation. This would be a type of "short-term memory". On the more complex side, you could imagine a chain/agent remembering key pieces of information over time - this would be a form of "long-term memory". For more concrete ideas on the latter, see this [awesome paper](https://memprompt.com/).
@ -287,4 +287,217 @@ AI:
> Finished chain.
" That's great! What would you like to talk about?"
```
`````
## Building a Language Model Application: Chat Models
Similarly, you can use chat models instead of LLMs. Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than expose a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.
Chat model APIs are fairly new, so we are still figuring out the correct abstractions.
`````{dropdown} Get Message Completions from a Chat Model
You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are `AIMessage`, `HumanMessage`, `SystemMessage`, and `ChatMessage` -- `ChatMessage` takes in an arbitrary role parameter. Most of the time, you'll just be dealing with `HumanMessage`, `AIMessage`, and `SystemMessage`.
```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)
chat = ChatOpenAI(temperature=0)
```
You can get completions by passing in a single message.
```python
chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
```
You can also pass in multiple messages for OpenAI's gpt-3.5-turbo and gpt-4 models.
```python
messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="Translate this sentence from English to French. I love programming.")
]
chat(messages)
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
```
You can go one step further and generate completions for multiple sets of messages using `generate`. This returns an `LLMResult` with an additional `message` parameter:
```python
batch_messages = [
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="Translate this sentence from English to French. I love programming.")
    ],
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="Translate this sentence from English to French. I love artificial intelligence.")
    ],
]
result = chat.generate(batch_messages)
result
# -> LLMResult(generations=[[ChatGeneration(text="J'aime programmer.", generation_info=None, message=AIMessage(content="J'aime programmer.", additional_kwargs={}))], [ChatGeneration(text="J'aime l'intelligence artificielle.", generation_info=None, message=AIMessage(content="J'aime l'intelligence artificielle.", additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 71, 'completion_tokens': 18, 'total_tokens': 89}})
```
You can recover things like token usage from this LLMResult:
```python
result.llm_output['token_usage']
# -> {'prompt_tokens': 71, 'completion_tokens': 18, 'total_tokens': 89}
```
`````
`````{dropdown} Chat Prompt Templates
Similar to LLMs, you can make use of templating by using a `MessagePromptTemplate`. You can build a `ChatPromptTemplate` from one or more `MessagePromptTemplate`s. You can use `ChatPromptTemplate`'s `format_prompt` -- this returns a `PromptValue`, which you can convert to a string or `Message` object, depending on whether you want to use the formatted value as input to an LLM or chat model.
For convenience, there is a `from_template` method exposed on the template. If you were to use this template, this is what it would look like:
```python
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
template="You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
# get a chat completion from the formatted messages
chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
```
`````
`````{dropdown} Chains with Chat Models
The `LLMChain` discussed in the above section can be used with chat models as well:
```python
from langchain.chat_models import ChatOpenAI
from langchain import PromptTemplate, LLMChain
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
chat = ChatOpenAI(temperature=0)
template="You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
chain = LLMChain(llm=chat, prompt=chat_prompt)
chain.run(input_language="English", output_language="French", text="I love programming.")
# -> "J'aime programmer."
```
`````
`````{dropdown} Agents with Chat Models
Agents can also be used with chat models; you can initialize one using `"chat-zero-shot-react-description"` as the agent type.
```python
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
# First, let's load the language model we're going to use to control the agent.
chat = ChatOpenAI(temperature=0)
# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, chat, agent="chat-zero-shot-react-description", verbose=True)
# Now let's test it out!
agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
```
```pycon
> Entering new AgentExecutor chain...
Thought: I need to use a search engine to find Olivia Wilde's boyfriend and a calculator to raise his age to the 0.23 power.
Action:
{
"action": "Search",
"action_input": "Olivia Wilde boyfriend"
}
Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Thought:I need to use a search engine to find Harry Styles' current age.
Action:
{
"action": "Search",
"action_input": "Harry Styles age"
}
Observation: 29 years
Thought:Now I need to calculate 29 raised to the 0.23 power.
Action:
{
"action": "Calculator",
"action_input": "29^0.23"
}
Observation: Answer: 2.169459462491557
Thought:I now know the final answer.
Final Answer: 2.169459462491557
> Finished chain.
'2.169459462491557'
```
`````
`````{dropdown} Memory: Add State to Chains and Agents
You can use Memory with chains and agents initialized with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory object.
```python
from langchain.prompts import (
    ChatPromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate
)
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template("The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know."),
    MessagesPlaceholder(variable_name="history"),
    HumanMessagePromptTemplate.from_template("{input}")
])
llm = ChatOpenAI(temperature=0)
memory = ConversationBufferMemory(return_messages=True)
conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm)
conversation.predict(input="Hi there!")
# -> 'Hello! How can I assist you today?'
conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
# -> "That sounds like fun! I'm happy to chat with you. Is there anything specific you'd like to talk about?"
conversation.predict(input="Tell me about yourself.")
# -> "Sure! I am an AI language model created by OpenAI. I was trained on a large dataset of text from the internet, which allows me to understand and generate human-like language. I can answer questions, provide information, and even have conversations like this one. Is there anything else you'd like to know about me?"
```
`````

View File

@ -32,7 +32,7 @@ This induces the model to think about what action to take, then take it.
Resources:
- [Paper](https://arxiv.org/pdf/2210.03629.pdf)
-- [LangChain Example](./modules/agents/implementations/react.ipynb)
+- [LangChain Example](modules/agents/agents/examples/react.ipynb)
## Self-ask
@ -42,7 +42,7 @@ In this method, the model explicitly asks itself follow-up questions, which are
Resources:
- [Paper](https://ofir.io/self-ask.pdf)
-- [LangChain Example](./modules/agents/implementations/self_ask_with_search.ipynb)
+- [LangChain Example](modules/agents/agents/examples/self_ask_with_search.ipynb)
## Prompt Chaining

View File

@ -1,28 +1,14 @@
Welcome to LangChain
==========================
-Large language models (LLMs) are emerging as a transformative technology, enabling
-developers to build applications that they previously could not.
-But using these LLMs in isolation is often not enough to
-create a truly powerful app - the real power comes when you are able to
-combine them with other sources of computation or knowledge.
-This library is aimed at assisting in the development of those types of applications. Common examples of these types of applications include:
-**❓ Question Answering over specific documents**
-- `Documentation <./use_cases/question_answering.html>`_
-- End-to-end Example: `Question Answering over Notion Database <https://github.com/hwchase17/notion-qa>`_
-**💬 Chatbots**
-- `Documentation <./use_cases/chatbots.html>`_
-- End-to-end Example: `Chat-LangChain <https://github.com/hwchase17/chat-langchain>`_
-**🤖 Agents**
-- `Documentation <./use_cases/agents.html>`_
-- End-to-end Example: `GPT+WolframAlpha <https://huggingface.co/spaces/JavaFXpert/Chat-GPT-LangChain>`_
+LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model via an API, but will also:
+- *Be data-aware*: connect a language model to other sources of data
+- *Be agentic*: allow a language model to interact with its environment
+The LangChain framework is designed with the above objectives in mind.
+This is the Python-specific portion of the documentation. For a purely conceptual guide to LangChain, see `here <https://docs.langchain.com/docs/>`_. For the JavaScript documentation, see `here <https://js.langchain.com/docs/>`_.
Getting Started
----------------
@ -46,25 +32,18 @@ There are several main modules that LangChain provides support for.
For each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides.
These modules are, in increasing order of complexity:
+- `Models <./modules/models.html>`_: The various model types and model integrations LangChain supports.
- `Prompts <./modules/prompts.html>`_: This includes prompt management, prompt optimization, and prompt serialization.
-- `LLMs <./modules/llms.html>`_: This includes a generic interface for all LLMs, and common utilities for working with LLMs.
+- `Memory <./modules/memory.html>`_: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.
-- `Document Loaders <./modules/document_loaders.html>`_: This includes a standard interface for loading documents, as well as specific integrations to all types of text data sources.
-- `Utils <./modules/utils.html>`_: Language models are often more powerful when interacting with other sources of knowledge or computation. This can include Python REPLs, embeddings, search engines, and more. LangChain provides a large collection of common utils to use in your application.
-- `Chains <./modules/chains.html>`_: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.
- `Indexes <./modules/indexes.html>`_: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.
+- `Chains <./modules/chains.html>`_: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.
- `Agents <./modules/agents.html>`_: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.
-- `Memory <./modules/memory.html>`_: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.
-- `Chat <./modules/chat.html>`_: Chat models are a variation on Language Models that expose a different API - rather than working with raw text, they work with messages. LangChain provides a standard interface for working with them and doing all the same things as above.
.. toctree::
   :maxdepth: 1
@ -72,40 +51,34 @@ These modules are, in increasing order of complexity:
   :name: modules
   :hidden:
-   ./modules/prompts.md
-   ./modules/llms.md
-   ./modules/document_loaders.md
-   ./modules/utils.md
+   ./modules/models.rst
+   ./modules/prompts.rst
    ./modules/indexes.md
+   ./modules/memory.md
    ./modules/chains.md
    ./modules/agents.md
-   ./modules/memory.md
-   ./modules/chat.md
Use Cases
----------
The above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.
-- `Agents <./use_cases/agents.html>`_: Agents are systems that use a language model to interact with other tools. These can be used to do more grounded question/answering, interact with APIs, or even take actions.
+- `Personal Assistants <./use_cases/personal_assistants.html>`_: The main LangChain use case. Personal assistants need to take actions, remember interactions, and have knowledge about your data.
+- `Question Answering <./use_cases/question_answering.html>`_: The second big LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.
- `Chatbots <./use_cases/chatbots.html>`_: Since language models are good at producing text, that makes them ideal for creating chatbots.
-- `Data Augmented Generation <./use_cases/combine_docs.html>`_: Data Augmented Generation involves specific types of chains that first interact with an external datasource to fetch data to use in the generation step. Examples of this include summarization of long pieces of text and question/answering over specific data sources.
-- `Question Answering <./use_cases/question_answering.html>`_: Answering questions over specific documents, only utilizing the information in those documents to construct an answer. A type of Data Augmented Generation.
+- `Querying Tabular Data <./use_cases/tabular.html>`_: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.
+- `Interacting with APIs <./use_cases/apis.html>`_: Enabling LLMs to interact with APIs is extremely powerful in order to give them more up-to-date information and allow them to take actions.
+- `Extraction <./use_cases/extraction.html>`_: Extract structured information from text.
- `Summarization <./use_cases/summarization.html>`_: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.
-- `Querying Tabular Data <./use_cases/tabular.html>`_: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.
- `Evaluation <./use_cases/evaluation.html>`_: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.
-- `Generate similar examples <./use_cases/generate_examples.html>`_: Generating similar examples to a given input. This is a common use case for many applications, and LangChain provides some prompts/chains for assisting in this.
-- `Compare models <./use_cases/model_laboratory.html>`_: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.
.. toctree::
   :maxdepth: 1
@ -114,15 +87,12 @@ The above modules can be used in a variety of ways. LangChain also provides guid
   :hidden:
    ./use_cases/agents.md
-   ./use_cases/chatbots.md
-   ./use_cases/generate_examples.ipynb
-   ./use_cases/combine_docs.md
    ./use_cases/question_answering.md
-   ./use_cases/summarization.md
+   ./use_cases/chatbots.md
    ./use_cases/tabular.rst
+   ./use_cases/summarization.md
    ./use_cases/extraction.md
    ./use_cases/evaluation.rst
-   ./use_cases/model_laboratory.ipynb
Reference Docs Reference Docs
@ -173,10 +143,12 @@ Additional collection of resources we think may be useful as you develop your ap
- `Deployments <./deployments.html>`_: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.
-- `Discord <https://discord.gg/6adMQxSpJS>`_: Join us on our Discord to discuss all things LangChain!
- `Tracing <./tracing.html>`_: A guide on using tracing in LangChain to visualize the execution of chains and agents.
+- `Model Laboratory <./model_laboratory.html>`_: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.
+- `Discord <https://discord.gg/6adMQxSpJS>`_: Join us on our Discord to discuss all things LangChain!
- `Production Support <https://forms.gle/57d8AmXBYp8PP8tZA>`_: As you move your LangChains into production, we'd love to offer more comprehensive support. Please fill out this form and we'll set up a dedicated support Slack channel.
@ -191,5 +163,6 @@ Additional collection of resources we think may be useful as you develop your ap
    ./gallery.rst
    ./deployments.md
    ./tracing.md
+   ./use_cases/model_laboratory.ipynb
    Discord <https://discord.gg/6adMQxSpJS>
    Production Support <https://forms.gle/57d8AmXBYp8PP8tZA>

View File

@ -1,30 +1,52 @@
Agents
==========================
.. note::
   `Conceptual Guide <https://docs.langchain.com/docs/components/agents>`_
Some applications will require not just a predetermined chain of calls to LLMs/other tools,
but potentially an unknown chain that depends on the user's input.
In these types of chains, there is an “agent” which has access to a suite of tools.
Depending on the user input, the agent can then decide which, if any, of these tools to call.
-The following sections of documentation are provided:
+In this section of documentation, we first start with a Getting Started notebook to go over how to use all things related to agents in an end-to-end manner.
-- `Getting Started <./agents/getting_started.html>`_: A notebook to help you get started working with agents as quickly as possible.
-- `Key Concepts <./agents/key_concepts.html>`_: A conceptual guide going over the various concepts related to agents.
-- `How-To Guides <./agents/how_to_guides.html>`_: A collection of how-to guides. These highlight how to integrate various types of tools, how to work with different types of agents, and how to customize agents.
-- `Reference <../reference/modules/agents.html>`_: API reference documentation for all Agent classes.
.. toctree::
   :maxdepth: 1
+   :caption: Agents
+   :name: Agents
   :hidden:

   ./agents/getting_started.ipynb
-   ./agents/key_concepts.md
-   ./agents/how_to_guides.rst
-   Reference<../reference/modules/agents.rst>
+We then split the documentation into the following sections:
**Tools**
An overview of the various tools LangChain supports.
**Agents**
An overview of the different agent types.
**Toolkits**
An overview of toolkits, and examples of the different ones LangChain supports.
**Agent Executor**
An overview of the Agent Executor class and examples of how to use it.
Go Deeper
---------
.. toctree::
   :maxdepth: 1

   ./agents/tools.rst
   ./agents/agents.rst
   ./agents/agent_toolkits.rst
   ./agents/agent_executors.rst

View File

@ -0,0 +1,17 @@
Agent Executors
===============
.. note::
   `Conceptual Guide <https://docs.langchain.com/docs/components/agents/agent-executor>`_
Agent executors take an agent and tools, and use the agent to decide which tools to call and in what order.
In this part of the documentation we cover other functionality related to agent executors.
.. toctree::
   :maxdepth: 1
   :glob:

   ./agent_executors/examples/*
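For orientation, a sketch of what this looks like in code (not part of this commit's diff): `initialize_agent` wires an agent and tools into an `AgentExecutor`, mirroring the agent example from the Getting Started guide; assumes OPENAI_API_KEY and SERPAPI_API_KEY are set:

```python
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# initialize_agent returns an AgentExecutor, which loops:
# agent picks a tool -> tool runs -> observation is fed back to the agent.
agent_executor = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent_executor.run("What is 2 raised to the 0.5 power?")
```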

View File

@ -5,7 +5,7 @@
"id": "68b24990", "id": "68b24990",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Agents and Vectorstores\n", "# How to combine agents and vectorstores\n",
"\n", "\n",
"This notebook covers how to combine agents and vectorstores. The use case for this is that you've ingested your data into a vectorstore and want to interact with it in an agentic manner.\n", "This notebook covers how to combine agents and vectorstores. The use case for this is that you've ingested your data into a vectorstore and want to interact with it in an agentic manner.\n",
"\n", "\n",
@ -22,7 +22,7 @@
},
{
"cell_type": "code",
-"execution_count": 1,
+"execution_count": 16,
"id": "2e87c10a",
"metadata": {},
"outputs": [],
@ -37,7 +37,23 @@
},
{
"cell_type": "code",
-"execution_count": 2,
+"execution_count": 17,
"id": "0b7b772b",
"metadata": {},
"outputs": [],
"source": [
"from pathlib import Path\n",
"relevant_parts = []\n",
"for p in Path(\".\").absolute().parts:\n",
" relevant_parts.append(p)\n",
" if relevant_parts[-3:] == [\"langchain\", \"docs\", \"modules\"]:\n",
" break\n",
"doc_path = str(Path(*relevant_parts) / \"state_of_the_union.txt\")"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "f2675861", "id": "f2675861",
"metadata": {}, "metadata": {},
"outputs": [ "outputs": [
@ -52,7 +68,7 @@
],
"source": [
"from langchain.document_loaders import TextLoader\n",
-"loader = TextLoader('../../state_of_the_union.txt')\n",
+"loader = TextLoader(doc_path)\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_documents(documents)\n",

View File

@ -5,7 +5,7 @@
"id": "6fb92deb-d89e-439b-855d-c7f2607d794b", "id": "6fb92deb-d89e-439b-855d-c7f2607d794b",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Async API for Agent\n", "# How to use the async API for Agents\n",
"\n", "\n",
"LangChain provides async support for Agents by leveraging the [asyncio](https://docs.python.org/3/library/asyncio.html) library.\n", "LangChain provides async support for Agents by leveraging the [asyncio](https://docs.python.org/3/library/asyncio.html) library.\n",
"\n", "\n",
@ -403,7 +403,7 @@
"name": "python", "name": "python",
"nbconvert_exporter": "python", "nbconvert_exporter": "python",
"pygments_lexer": "ipython3", "pygments_lexer": "ipython3",
"version": "3.10.9" "version": "3.9.1"
} }
}, },
"nbformat": 4, "nbformat": 4,

View File

@ -5,7 +5,7 @@
"id": "b253f4d5", "id": "b253f4d5",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# ChatGPT Clone\n", "# How to create ChatGPT Clone\n",
"\n", "\n",
"This chain replicates ChatGPT by combining (1) a specific prompt, and (2) the concept of memory.\n", "This chain replicates ChatGPT by combining (1) a specific prompt, and (2) the concept of memory.\n",
"\n", "\n",

View File

@ -5,7 +5,7 @@
"id": "5436020b", "id": "5436020b",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Intermediate Steps\n", "# How to access intermediate steps\n",
"\n", "\n",
"In order to get more visibility into what an agent is doing, we can also return intermediate steps. This comes in the form of an extra key in the return value, which is a list of (action, observation) tuples." "In order to get more visibility into what an agent is doing, we can also return intermediate steps. This comes in the form of an extra key in the return value, which is a list of (action, observation) tuples."
] ]

View File

@ -5,7 +5,7 @@
"id": "75c041b7", "id": "75c041b7",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Max Iterations\n", "# How to cap the max number of iterations\n",
"\n", "\n",
"This notebook walks through how to cap an agent at taking a certain number of steps. This can be useful to ensure that they do not go haywire and take too many steps." "This notebook walks through how to cap an agent at taking a certain number of steps. This can be useful to ensure that they do not go haywire and take too many steps."
] ]

View File

@ -1,12 +1,11 @@
{
"cells": [
{
-"attachments": {},
"cell_type": "markdown",
"id": "fa6802ac",
"metadata": {},
"source": [
-"# Adding SharedMemory to an Agent and its Tools\n",
+"# How to add SharedMemory to an Agent and its Tools\n",
"\n",
"This notebook goes over adding memory to **both** an Agent and its tools. Before going through this notebook, please walk through the following notebooks, as this will build on top of both of them:\n",
"\n",
@ -260,7 +259,6 @@
]
},
{
-"attachments": {},
"cell_type": "markdown",
"id": "4ebd8326",
"metadata": {},
@ -292,7 +290,6 @@
]
},
{
-"attachments": {},
"cell_type": "markdown",
"id": "cc3d0aa4",
"metadata": {},
@ -493,7 +490,6 @@
]
},
{
-"attachments": {},
"cell_type": "markdown",
"id": "d07415da",
"metadata": {},
@ -544,7 +540,7 @@
"name": "python", "name": "python",
"nbconvert_exporter": "python", "nbconvert_exporter": "python",
"pygments_lexer": "ipython3", "pygments_lexer": "ipython3",
"version": "3.10.9" "version": "3.9.1"
} }
}, },
"nbformat": 4, "nbformat": 4,

View File

@ -0,0 +1,35 @@
Agents
=============
.. note::
   `Conceptual Guide <https://docs.langchain.com/docs/components/agents/agent>`_
In this part of the documentation we cover the different types of agents, disregarding which specific tools they are used with.
For a high level overview of the different types of agents, see the below documentation.
.. toctree::
   :maxdepth: 1
   :glob:

   ./agents/agent_types.md
For documentation on how to create a custom agent, see below.
.. toctree::
   :maxdepth: 1
   :glob:

   ./agents/custom_agent.ipynb
We also have documentation for an in-depth dive into each agent type.
.. toctree::
   :maxdepth: 1
   :glob:

   ./agents/examples/*

View File

@ -1,12 +1,9 @@
-# Agents
+# Agent Types
Agents use an LLM to determine which actions to take and in what order.
An action can either be using a tool and observing its output, or returning a response to the user.
-For a list of easily loadable tools, see [here](tools.md).
Here are the agents available in LangChain.
-For a tutorial on how to load agents, see [here](getting_started.ipynb).
## `zero-shot-react-description`
This agent uses the ReAct framework to determine which tool to use

View File

@ -1,131 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "991b1cc1",
"metadata": {},
"source": [
"# Loading from LangChainHub\n",
"\n",
"This notebook covers how to load agents from [LangChainHub](https://github.com/hwchase17/langchain-hub)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "bd4450a2",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"No `_type` key found, defaulting to `prompt`.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
"\u001B[32;1m\u001B[1;3m Yes.\n",
"Follow up: Who is the reigning men's U.S. Open champion?\u001B[0m\n",
"Intermediate answer: \u001B[36;1m\u001B[1;3m2016 · SUI · Stan Wawrinka ; 2017 · ESP · Rafael Nadal ; 2018 · SRB · Novak Djokovic ; 2019 · ESP · Rafael Nadal.\u001B[0m\n",
"\u001B[32;1m\u001B[1;3mSo the reigning men's U.S. Open champion is Rafael Nadal.\n",
"Follow up: What is Rafael Nadal's hometown?\u001B[0m\n",
"Intermediate answer: \u001B[36;1m\u001B[1;3mIn 2016, he once again showed his deep ties to Mallorca and opened the Rafa Nadal Academy in his hometown of Manacor.\u001B[0m\n",
"\u001B[32;1m\u001B[1;3mSo the final answer is: Manacor, Mallorca, Spain.\u001B[0m\n",
"\n",
"\u001B[1m> Finished chain.\u001B[0m\n"
]
},
{
"data": {
"text/plain": [
"'Manacor, Mallorca, Spain.'"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain import OpenAI, SerpAPIWrapper\n",
"from langchain.agents import initialize_agent, Tool\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"search = SerpAPIWrapper()\n",
"tools = [\n",
" Tool(\n",
" name=\"Intermediate Answer\",\n",
" func=search.run,\n",
" description=\"useful for when you need to ask with search\"\n",
" )\n",
"]\n",
"\n",
"self_ask_with_search = initialize_agent(tools, llm, agent_path=\"lc://agents/self-ask-with-search/agent.json\", verbose=True)\n",
"self_ask_with_search.run(\"What is the hometown of the reigning men's U.S. Open champion?\")"
]
},
{
"cell_type": "markdown",
"id": "3aede965",
"metadata": {},
"source": [
"# Pinning Dependencies\n",
"\n",
"Specific versions of LangChainHub agents can be pinned with the `lc@<ref>://` syntax."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "e679f7b6",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"No `_type` key found, defaulting to `prompt`.\n"
]
}
],
"source": [
"self_ask_with_search = initialize_agent(tools, llm, agent_path=\"lc@2826ef9e8acdf88465e1e5fc8a7bf59e0f9d0a85://agents/self-ask-with-search/agent.json\", verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9d3d6697",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@ -1,154 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "bfe18e28",
"metadata": {},
"source": [
"# Serialization\n",
"\n",
"This notebook goes over how to serialize agents. For this notebook, it is important to understand the distinction we draw between `agents` and `tools`. An agent is the LLM powered decision maker that decides which actions to take and in which order. Tools are various instruments (functions) an agent has access to, through which an agent can interact with the outside world. When people generally use agents, they primarily talk about using an agent WITH tools. However, when we talk about serialization of agents, we are talking about the agent by itself. We plan to add support for serializing an agent WITH tools sometime in the future.\n",
"\n",
"Let's start by creating an agent with tools as we normally do:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "eb729f16",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import load_tools\n",
"from langchain.agents import initialize_agent\n",
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)\n",
"agent = initialize_agent(tools, llm, agent=\"zero-shot-react-description\", verbose=True)"
]
},
{
"cell_type": "markdown",
"id": "0578f566",
"metadata": {},
"source": [
"Let's now serialize the agent. To be explicit that we are serializing ONLY the agent, we will call the `save_agent` method."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "dc544de6",
"metadata": {},
"outputs": [],
"source": [
"agent.save_agent('agent.json')"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "62dd45bf",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\r\n",
" \"llm_chain\": {\r\n",
" \"memory\": null,\r\n",
" \"verbose\": false,\r\n",
" \"prompt\": {\r\n",
" \"input_variables\": [\r\n",
" \"input\",\r\n",
" \"agent_scratchpad\"\r\n",
" ],\r\n",
" \"output_parser\": null,\r\n",
" \"template\": \"Answer the following questions as best you can. You have access to the following tools:\\n\\nSearch: A search engine. Useful for when you need to answer questions about current events. Input should be a search query.\\nCalculator: Useful for when you need to answer questions about math.\\n\\nUse the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [Search, Calculator]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question\\n\\nBegin!\\n\\nQuestion: {input}\\nThought:{agent_scratchpad}\",\r\n",
" \"template_format\": \"f-string\",\r\n",
" \"validate_template\": true,\r\n",
" \"_type\": \"prompt\"\r\n",
" },\r\n",
" \"llm\": {\r\n",
" \"model_name\": \"text-davinci-003\",\r\n",
" \"temperature\": 0.0,\r\n",
" \"max_tokens\": 256,\r\n",
" \"top_p\": 1,\r\n",
" \"frequency_penalty\": 0,\r\n",
" \"presence_penalty\": 0,\r\n",
" \"n\": 1,\r\n",
" \"best_of\": 1,\r\n",
" \"request_timeout\": null,\r\n",
" \"logit_bias\": {},\r\n",
" \"_type\": \"openai\"\r\n",
" },\r\n",
" \"output_key\": \"text\",\r\n",
" \"_type\": \"llm_chain\"\r\n",
" },\r\n",
" \"allowed_tools\": [\r\n",
" \"Search\",\r\n",
" \"Calculator\"\r\n",
" ],\r\n",
" \"return_values\": [\r\n",
" \"output\"\r\n",
" ],\r\n",
" \"_type\": \"zero-shot-react-description\"\r\n",
"}"
]
}
],
"source": [
"!cat agent.json"
]
},
{
"cell_type": "markdown",
"id": "0eb72510",
"metadata": {},
"source": [
"We can now load the agent back in"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "eb660b76",
"metadata": {},
"outputs": [],
"source": [
"agent = initialize_agent(tools, llm, agent_path=\"agent.json\", verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "aa624ea5",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@ -1,87 +0,0 @@
"""Run NatBot."""
import time
from langchain.chains.natbot.base import NatBotChain
from langchain.chains.natbot.crawler import Crawler
def run_cmd(cmd: str, _crawler: Crawler) -> None:
"""Run command."""
cmd = cmd.split("\n")[0]
if cmd.startswith("SCROLL UP"):
_crawler.scroll("up")
elif cmd.startswith("SCROLL DOWN"):
_crawler.scroll("down")
elif cmd.startswith("CLICK"):
commasplit = cmd.split(",")
id = commasplit[0].split(" ")[1]
_crawler.click(id)
elif cmd.startswith("TYPE"):
spacesplit = cmd.split(" ")
id = spacesplit[1]
text_pieces = spacesplit[2:]
text = " ".join(text_pieces)
# Strip leading and trailing double quotes
text = text[1:-1]
if cmd.startswith("TYPESUBMIT"):
text += "\n"
_crawler.type(id, text)
time.sleep(2)
if __name__ == "__main__":
objective = "Make a reservation for 2 at 7pm at bistro vida in menlo park"
print("\nWelcome to natbot! What is your objective?")
i = input()
if len(i) > 0:
objective = i
quiet = False
nat_bot_chain = NatBotChain.from_default(objective)
_crawler = Crawler()
_crawler.go_to_page("google.com")
try:
while True:
browser_content = "\n".join(_crawler.crawl())
llm_command = nat_bot_chain.execute(_crawler.page.url, browser_content)
if not quiet:
print("URL: " + _crawler.page.url)
print("Objective: " + objective)
print("----------------\n" + browser_content + "\n----------------\n")
if len(llm_command) > 0:
print("Suggested command: " + llm_command)
command = input()
if command == "r" or command == "":
run_cmd(llm_command, _crawler)
elif command == "g":
url = input("URL:")
_crawler.go_to_page(url)
elif command == "u":
_crawler.scroll("up")
time.sleep(1)
elif command == "d":
_crawler.scroll("down")
time.sleep(1)
elif command == "c":
id = input("id:")
_crawler.click(id)
time.sleep(1)
elif command == "t":
id = input("id:")
text = input("text:")
_crawler.type(id, text)
time.sleep(1)
elif command == "o":
objective = input("Objective:")
else:
print(
"(g) to visit url\n(u) scroll up\n(d) scroll down\n(c) to click"
"\n(t) to type\n(h) to view commands again"
"\n(r/enter) to run suggested command\n(o) change objective"
)
except KeyboardInterrupt:
print("\n[!] Ctrl+C detected, exiting gracefully.")
exit(0)

View File

@ -1,16 +0,0 @@
# Key Concepts
## Agents
Agents use an LLM to determine which actions to take and in what order.
For more detailed information on agents, and different types of agents in LangChain, see [this documentation](agents.md).
## Tools
Tools are functions that agents can use to interact with the world.
These tools can be generic utilities (e.g. search), other chains, or even other agents.
For more detailed information on tools, and different types of tools in LangChain, see [this documentation](tools.md).
## Toolkits
Toolkits are groups of tools that are best used together.
They allow you to logically group and initialize a set of tools that share a particular resource (such as a database connection or json object).
They can be used to construct an agent for a specific use-case.
For more detailed information on toolkits and their use cases, see [this documentation](how_to_guides.rst#agent-toolkits) (the "Agent Toolkits" section).
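To make these three concepts concrete, a minimal sketch of wiring tools into an agent (the tool names assume the SerpAPI and llm-math integrations are installed and configured):

```python
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

# Tools: functions the agent can call to interact with the world.
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# Agent: an LLM that decides which tool to use, and in what order.
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("Who is the current president of France, and what is their age raised to the 0.43 power?")
```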

View File

@ -0,0 +1,18 @@
Toolkits
==============
.. note::
`Conceptual Guide <https://docs.langchain.com/docs/components/agents/toolkit>`_
This section of the documentation covers agents with toolkits - e.g. an agent applied to a particular use case.
See below for a full list of agent toolkits
.. toctree::
:maxdepth: 1
:glob:
./toolkits/examples/*
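As one concrete example of a toolkit-style agent, a sketch based on the Python agent this section links to (`create_python_agent` and `PythonREPLTool` are the names the linked notebook uses; treat the exact import paths as assumptions for this version):

```python
from langchain.agents.agent_toolkits import create_python_agent
from langchain.llms import OpenAI
from langchain.tools.python.tool import PythonREPLTool

# A toolkit bundles the tool(s) and prompt for a specific use case:
# here, an agent that writes and executes Python to answer a question.
agent_executor = create_python_agent(
    llm=OpenAI(temperature=0, max_tokens=1000),
    tool=PythonREPLTool(),
    verbose=True,
)
agent_executor.run("What is the 10th fibonacci number?")
```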

View File

@ -5,7 +5,7 @@
"id": "82a4c2cc-20ea-4b20-a565-63e905dee8ff", "id": "82a4c2cc-20ea-4b20-a565-63e905dee8ff",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Python Agent\n", "# Python Agent\n",
"\n", "\n",
"This notebook showcases an agent designed to write and execute python code to answer a question." "This notebook showcases an agent designed to write and execute python code to answer a question."
] ]

View File

@ -36,7 +36,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 2, "execution_count": 3,
"id": "345bb078-4ec1-4e3a-827b-cd238c49054d", "id": "345bb078-4ec1-4e3a-827b-cd238c49054d",
"metadata": { "metadata": {
"tags": [] "tags": []
@ -53,7 +53,7 @@
], ],
"source": [ "source": [
"from langchain.document_loaders import TextLoader\n", "from langchain.document_loaders import TextLoader\n",
"loader = TextLoader('../../state_of_the_union.txt')\n", "loader = TextLoader('../../../state_of_the_union.txt')\n",
"documents = loader.load()\n", "documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n", "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_documents(documents)\n", "texts = text_splitter.split_documents(documents)\n",
@ -409,7 +409,7 @@
"name": "python", "name": "python",
"nbconvert_exporter": "python", "nbconvert_exporter": "python",
"pygments_lexer": "ipython3", "pygments_lexer": "ipython3",
"version": "3.10.9" "version": "3.9.1"
} }
}, },
"nbformat": 4, "nbformat": 4,

View File

@ -0,0 +1,38 @@
Tools
=============
.. note::
`Conceptual Guide <https://docs.langchain.com/docs/components/agents/tool>`_
Tools are interfaces that an agent can use to interact with the outside world.
For an overview of what tools are, how to use them, and a full list of examples, please see the getting started documentation
.. toctree::
:maxdepth: 1
:glob:
./tools/getting_started.md
Next, we have some examples of customizing and generically working with tools
.. toctree::
:maxdepth: 1
:glob:
./tools/custom_tools.ipynb
./tools/multi_input_tool.ipynb
In this documentation we cover generic tooling functionality (e.g. how to create your own)
as well as examples of tools and how to use them.
.. toctree::
:maxdepth: 1
:glob:
./tools/examples/*
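For the custom-tools entry above, a minimal sketch of defining a tool by hand with the `Tool` class (the `description` is what the agent's LLM reads when deciding whether to call the tool; the word-length function is just a hypothetical stand-in for any Python callable):

```python
from langchain.agents import Tool, initialize_agent
from langchain.llms import OpenAI


def get_word_length(word: str) -> str:
    """Toy function standing in for any Python callable."""
    return str(len(word))


tools = [
    Tool(
        name="WordLength",
        func=get_word_length,
        description="useful for when you need to count the letters in a word",
    )
]
llm = OpenAI(temperature=0)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
```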

View File

@ -16,7 +16,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 2, "execution_count": 1,
"id": "d41405b5", "id": "d41405b5",
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
@ -28,7 +28,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 4, "execution_count": 2,
"id": "d9e61df5", "id": "d9e61df5",
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
@ -38,9 +38,11 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 5, "execution_count": 4,
"id": "edc0ea0e", "id": "edc0ea0e",
"metadata": {}, "metadata": {
"scrolled": false
},
"outputs": [ "outputs": [
{ {
"name": "stdout", "name": "stdout",
@ -58,8 +60,8 @@
"Thought:\u001b[32;1m\u001b[1;3mI need to use the Klarna Shopping API to search for t shirts.\n", "Thought:\u001b[32;1m\u001b[1;3mI need to use the Klarna Shopping API to search for t shirts.\n",
"Action: requests_get\n", "Action: requests_get\n",
"Action Input: https://www.klarna.com/us/shopping/public/openai/v0/products?q=t%20shirts\u001b[0m\n", "Action Input: https://www.klarna.com/us/shopping/public/openai/v0/products?q=t%20shirts\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m{\"products\":[{\"name\":\"Lacoste Men's Pack of Plain T-Shirts\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3202043025/Clothing/Lacoste-Men-s-Pack-of-Plain-T-Shirts/?source=openai\",\"price\":\"$28.99\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:White,Black\"]},{\"name\":\"Hanes Men's Ultimate 6pk. Crewneck T-Shirts\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201808270/Clothing/Hanes-Men-s-Ultimate-6pk.-Crewneck-T-Shirts/?source=openai\",\"price\":\"$13.40\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:White\"]},{\"name\":\"Nike Boy's Jordan Stretch T-shirts\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl359/3201863202/Children-s-Clothing/Nike-Boy-s-Jordan-Stretch-T-shirts/?source=openai\",\"price\":\"$14.99\",\"attributes\":[\"Color:White,Green\",\"Model:Boy\",\"Pattern:Solid Color\",\"Size (Small-Large):S,XL,L,M\"]},{\"name\":\"Polo Classic Fit Cotton V-Neck T-Shirts 3-Pack\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3203028500/Clothing/Polo-Classic-Fit-Cotton-V-Neck-T-Shirts-3-Pack/?source=openai\",\"price\":\"$29.95\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:White,Blue,Black\"]},{\"name\":\"adidas Comfort T-shirts Men's 3-pack\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3202640533/Clothing/adidas-Comfort-T-shirts-Men-s-3-pack/?source=openai\",\"price\":\"$14.99\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:White,Black\",\"Pattern:Solid Color\"]}]}\u001b[0m\n", "Observation: \u001b[36;1m\u001b[1;3m{\"products\":[{\"name\":\"Lacoste Men's Pack of Plain T-Shirts\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3202043025/Clothing/Lacoste-Men-s-Pack-of-Plain-T-Shirts/?source=openai\",\"price\":\"$28.02\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:White,Black\"]},{\"name\":\"Hanes Men's Ultimate 6pk. Crewneck T-Shirts\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201808270/Clothing/Hanes-Men-s-Ultimate-6pk.-Crewneck-T-Shirts/?source=openai\",\"price\":\"$13.82\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:White\"]},{\"name\":\"Nike Boy's Jordan Stretch T-shirts\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl359/3201863202/Children-s-Clothing/Nike-Boy-s-Jordan-Stretch-T-shirts/?source=openai\",\"price\":\"$14.99\",\"attributes\":[\"Color:White,Green\",\"Model:Boy\",\"Pattern:Solid Color\",\"Size (Small-Large):S,XL,L,M\"]},{\"name\":\"Polo Classic Fit Cotton V-Neck T-Shirts 3-Pack\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3203028500/Clothing/Polo-Classic-Fit-Cotton-V-Neck-T-Shirts-3-Pack/?source=openai\",\"price\":\"$29.95\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:White,Blue,Black\"]},{\"name\":\"adidas Comfort T-shirts Men's 3-pack\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3202640533/Clothing/adidas-Comfort-T-shirts-Men-s-3-pack/?source=openai\",\"price\":\"$14.99\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:White,Black\",\"Pattern:Solid Color\",\"Neckline:Round\"]}]}\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mThe available t shirts on Klarna are Lacoste Men's Pack of Plain T-Shirts, Hanes Men's Ultimate 6pk. Crewneck T-Shirts, Nike Boy's Jordan Stretch T-shirts, Polo Classic Fit Cotton V-Neck T-Shirts 3-Pack, and adidas Comfort T-shirts Men's 3-pack.\n", "Thought:\u001b[32;1m\u001b[1;3mThese are the available t shirts on Klarna: Lacoste Men's Pack of Plain T-Shirts, Hanes Men's Ultimate 6pk. Crewneck T-Shirts, Nike Boy's Jordan Stretch T-shirts, Polo Classic Fit Cotton V-Neck T-Shirts 3-Pack, and adidas Comfort T-shirts Men's 3-pack.\n",
"Final Answer: The available t shirts on Klarna are Lacoste Men's Pack of Plain T-Shirts, Hanes Men's Ultimate 6pk. Crewneck T-Shirts, Nike Boy's Jordan Stretch T-shirts, Polo Classic Fit Cotton V-Neck T-Shirts 3-Pack, and adidas Comfort T-shirts Men's 3-pack.\u001b[0m\n", "Final Answer: The available t shirts on Klarna are Lacoste Men's Pack of Plain T-Shirts, Hanes Men's Ultimate 6pk. Crewneck T-Shirts, Nike Boy's Jordan Stretch T-shirts, Polo Classic Fit Cotton V-Neck T-Shirts 3-Pack, and adidas Comfort T-shirts Men's 3-pack.\u001b[0m\n",
"\n", "\n",
"\u001b[1m> Finished chain.\u001b[0m\n" "\u001b[1m> Finished chain.\u001b[0m\n"
@ -71,18 +73,17 @@
"\"The available t shirts on Klarna are Lacoste Men's Pack of Plain T-Shirts, Hanes Men's Ultimate 6pk. Crewneck T-Shirts, Nike Boy's Jordan Stretch T-shirts, Polo Classic Fit Cotton V-Neck T-Shirts 3-Pack, and adidas Comfort T-shirts Men's 3-pack.\"" "\"The available t shirts on Klarna are Lacoste Men's Pack of Plain T-Shirts, Hanes Men's Ultimate 6pk. Crewneck T-Shirts, Nike Boy's Jordan Stretch T-shirts, Polo Classic Fit Cotton V-Neck T-Shirts 3-Pack, and adidas Comfort T-shirts Men's 3-pack.\""
] ]
}, },
"execution_count": 5, "execution_count": 4,
"metadata": {}, "metadata": {},
"output_type": "execute_result" "output_type": "execute_result"
} }
], ],
"source": [ "source": [
"llm = ChatOpenAI(temperature=0)\n", "llm = ChatOpenAI(temperature=0,)\n",
"tools = load_tools([\"requests\"] )\n", "tools = load_tools([\"requests\"] )\n",
"tools += [tool]\n", "tools += [tool]\n",
"\n", "\n",
"agent_chain = initialize_agent(tools, llm, agent=\"zero-shot-react-description\", verbose=True)\n", "agent_chain = initialize_agent(tools, llm, agent=\"zero-shot-react-description\", verbose=True)\n",
"\n",
"agent_chain.run(\"what t shirts are available in klarna?\")" "agent_chain.run(\"what t shirts are available in klarna?\")"
] ]
}, },
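The `tool` variable in the diff above comes from a plugin-loading step earlier in that notebook; a sketch of the whole flow, assuming the `AIPluginTool.from_plugin_url` constructor that notebook introduces:

```python
from langchain.agents import initialize_agent, load_tools
from langchain.chat_models import ChatOpenAI
from langchain.tools import AIPluginTool

# Wrap a ChatGPT plugin manifest as a LangChain tool.
tool = AIPluginTool.from_plugin_url("https://www.klarna.com/.well-known/ai-plugin.json")

llm = ChatOpenAI(temperature=0)
# The plugin tool only surfaces the OpenAPI spec; the requests tool
# is what actually performs the HTTP calls the agent decides on.
tools = load_tools(["requests"]) + [tool]
agent_chain = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent_chain.run("what t shirts are available in klarna?")
```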

View File

@ -11,10 +11,10 @@
"\n", "\n",
"From https://github.com/SidU/teams-langchain-js/wiki/Connecting-IFTTT-Services.\n", "From https://github.com/SidU/teams-langchain-js/wiki/Connecting-IFTTT-Services.\n",
"\n", "\n",
"# Creating a webhook\n", "## Creating a webhook\n",
"- Go to https://ifttt.com/create\n", "- Go to https://ifttt.com/create\n",
"\n", "\n",
"# Configuring the \"If This\"\n", "## Configuring the \"If This\"\n",
"- Click on the \"If This\" button in the IFTTT interface.\n", "- Click on the \"If This\" button in the IFTTT interface.\n",
"- Search for \"Webhooks\" in the search bar.\n", "- Search for \"Webhooks\" in the search bar.\n",
"- Choose the first option for \"Receive a web request with a JSON payload.\"\n", "- Choose the first option for \"Receive a web request with a JSON payload.\"\n",
@ -24,7 +24,7 @@
"Event Name.\n", "Event Name.\n",
"- Click the \"Create Trigger\" button to save your settings and create your webhook.\n", "- Click the \"Create Trigger\" button to save your settings and create your webhook.\n",
"\n", "\n",
"# Configuring the \"Then That\"\n", "## Configuring the \"Then That\"\n",
"- Tap on the \"Then That\" button in the IFTTT interface.\n", "- Tap on the \"Then That\" button in the IFTTT interface.\n",
"- Search for the service you want to connect, such as Spotify.\n", "- Search for the service you want to connect, such as Spotify.\n",
"- Choose an action from the service, such as \"Add track to a playlist\".\n", "- Choose an action from the service, such as \"Add track to a playlist\".\n",
@ -38,7 +38,7 @@
"- Congratulations! You have successfully connected the Webhook to the desired\n", "- Congratulations! You have successfully connected the Webhook to the desired\n",
"service, and you're ready to start receiving data and triggering actions 🎉\n", "service, and you're ready to start receiving data and triggering actions 🎉\n",
"\n", "\n",
"# Finishing up\n", "## Finishing up\n",
"- To get your webhook URL go to https://ifttt.com/maker_webhooks/settings\n", "- To get your webhook URL go to https://ifttt.com/maker_webhooks/settings\n",
"- Copy the IFTTT key value from there. The URL is of the form\n", "- Copy the IFTTT key value from there. The URL is of the form\n",
"https://maker.ifttt.com/use/YOUR_IFTTT_KEY. Grab the YOUR_IFTTT_KEY value.\n" "https://maker.ifttt.com/use/YOUR_IFTTT_KEY. Grab the YOUR_IFTTT_KEY value.\n"

View File

@ -73,7 +73,7 @@
"jukit_cell_id": "OHyurqUPbS" "jukit_cell_id": "OHyurqUPbS"
}, },
"source": [ "source": [
"# Custom Parameters\n", "## Custom Parameters\n",
"\n", "\n",
"SearxNG supports up to [139 search engines](https://docs.searxng.org/admin/engines/configured_engines.html#configured-engines). You can also customize the Searx wrapper with arbitrary named parameters that will be passed to the Searx search API . In the below example we will making a more interesting use of custom search parameters from searx search api." "SearxNG supports up to [139 search engines](https://docs.searxng.org/admin/engines/configured_engines.html#configured-engines). You can also customize the Searx wrapper with arbitrary named parameters that will be passed to the Searx search API . In the below example we will making a more interesting use of custom search parameters from searx search api."
] ]
@ -104,7 +104,7 @@
"metadata": { "metadata": {
"jukit_cell_id": "3FyQ6yHI8K", "jukit_cell_id": "3FyQ6yHI8K",
"tags": [ "tags": [
"scroll-output" "scroll-output"
] ]
}, },
"outputs": [ "outputs": [
@ -161,7 +161,7 @@
"jukit_cell_id": "d0x164ssV1" "jukit_cell_id": "d0x164ssV1"
}, },
"source": [ "source": [
"# Obtaining results with metadata" "## Obtaining results with metadata"
] ]
}, },
{ {
@ -192,7 +192,7 @@
"metadata": { "metadata": {
"jukit_cell_id": "r7qUtvKNOh", "jukit_cell_id": "r7qUtvKNOh",
"tags": [ "tags": [
"scroll-output" "scroll-output"
] ]
}, },
"outputs": [ "outputs": [
@ -263,7 +263,7 @@
"metadata": { "metadata": {
"jukit_cell_id": "JyNgoFm0vo", "jukit_cell_id": "JyNgoFm0vo",
"tags": [ "tags": [
"scroll-output" "scroll-output"
] ]
}, },
"outputs": [ "outputs": [
@ -444,7 +444,7 @@
"metadata": { "metadata": {
"jukit_cell_id": "5NrlredKxM", "jukit_cell_id": "5NrlredKxM",
"tags": [ "tags": [
"scroll-output" "scroll-output"
] ]
}, },
"outputs": [ "outputs": [
@ -600,7 +600,7 @@
"name": "python", "name": "python",
"nbconvert_exporter": "python", "nbconvert_exporter": "python",
"pygments_lexer": "ipython3", "pygments_lexer": "ipython3",
"version": "3.9.11" "version": "3.9.1"
} }
}, },
"nbformat": 4, "nbformat": 4,

View File

@ -5,7 +5,7 @@
"id": "16763ed3", "id": "16763ed3",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Zapier Natural Language Actions API\n", "# Zapier Natural Language Actions API\n",
"\\\n", "\\\n",
"Full docs here: https://nla.zapier.com/api/v1/docs\n", "Full docs here: https://nla.zapier.com/api/v1/docs\n",
"\n", "\n",
@ -318,7 +318,7 @@
"name": "python", "name": "python",
"nbconvert_exporter": "python", "nbconvert_exporter": "python",
"pygments_lexer": "ipython3", "pygments_lexer": "ipython3",
"version": "3.10.9" "version": "3.9.1"
} }
}, },
"nbformat": 4, "nbformat": 4,

View File

@ -1,4 +1,4 @@
# Tools # Getting Started
Tools are functions that agents can use to interact with the world. Tools are functions that agents can use to interact with the world.
These tools can be generic utilities (e.g. search), other chains, or even other agents. These tools can be generic utilities (e.g. search), other chains, or even other agents.
@ -118,7 +118,7 @@ Below is a list of all supported tools and relevant information:
- Notes: Uses the Google Custom Search API - Notes: Uses the Google Custom Search API
- Requires LLM: No - Requires LLM: No
- Extra Parameters: `google_api_key`, `google_cse_id` - Extra Parameters: `google_api_key`, `google_cse_id`
- For more information on this, see [this page](../../ecosystem/google_search.md) - For more information on this, see [this page](../../../ecosystem/google_search.md)
**searx-search** **searx-search**
@ -135,7 +135,7 @@ Below is a list of all supported tools and relevant information:
- Notes: Calls the [serper.dev](https://serper.dev) Google Search API and then parses results. - Notes: Calls the [serper.dev](https://serper.dev) Google Search API and then parses results.
- Requires LLM: No - Requires LLM: No
- Extra Parameters: `serper_api_key` - Extra Parameters: `serper_api_key`
- For more information on this, see [this page](../../ecosystem/google_serper.md) - For more information on this, see [this page](../../../ecosystem/google_serper.md)
**wikipedia** **wikipedia**

View File

@ -1,6 +1,10 @@
Chains Chains
========================== ==========================
.. note::
`Conceptual Guide <https://docs.langchain.com/docs/components/chains>`_
Using an LLM in isolation is fine for some simple applications, Using an LLM in isolation is fine for some simple applications,
but many more complex ones require chaining LLMs - either with each other or with other experts. but many more complex ones require chaining LLMs - either with each other or with other experts.
LangChain provides a standard interface for Chains, as well as some common implementations of chains for ease of use. LangChain provides a standard interface for Chains, as well as some common implementations of chains for ease of use.
@ -9,8 +13,6 @@ The following sections of documentation are provided:
- `Getting Started <./chains/getting_started.html>`_: A getting started guide for chains, to get you up and running quickly. - `Getting Started <./chains/getting_started.html>`_: A getting started guide for chains, to get you up and running quickly.
- `Key Concepts <./chains/key_concepts.html>`_: A conceptual guide going over the various concepts related to chains.
- `How-To Guides <./chains/how_to_guides.html>`_: A collection of how-to guides. These highlight how to use various types of chains. - `How-To Guides <./chains/how_to_guides.html>`_: A collection of how-to guides. These highlight how to use various types of chains.
- `Reference <../reference/modules/chains.html>`_: API reference documentation for all Chain classes. - `Reference <../reference/modules/chains.html>`_: API reference documentation for all Chain classes.
@ -25,5 +27,4 @@ The following sections of documentation are provided:
./chains/getting_started.ipynb ./chains/getting_started.ipynb
./chains/how_to_guides.rst ./chains/how_to_guides.rst
./chains/key_concepts.rst
Reference<../reference/modules/chains.rst> Reference<../reference/modules/chains.rst>

View File

@ -34,10 +34,10 @@
"text": [ "text": [
"\n", "\n",
"\n", "\n",
"\u001B[1m> Entering new LLMMathChain chain...\u001B[0m\n", "\u001b[1m> Entering new LLMMathChain chain...\u001b[0m\n",
"whats 2 raised to .12\u001B[32;1m\u001B[1;3m\n", "whats 2 raised to .12\u001b[32;1m\u001b[1;3m\n",
"Answer: 1.0791812460476249\u001B[0m\n", "Answer: 1.0791812460476249\u001b[0m\n",
"\u001B[1m> Finished chain.\u001B[0m\n" "\u001b[1m> Finished chain.\u001b[0m\n"
] ]
}, },
{ {
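The transcript in the diff above (the `\u001b` sequences are just ANSI color codes from verbose mode) comes from a chain like the following; a minimal sketch using the `LLMMathChain(llm=...)` constructor form in use at the time:

```python
from langchain import OpenAI
from langchain.chains import LLMMathChain

llm = OpenAI(temperature=0)
llm_math = LLMMathChain(llm=llm, verbose=True)

# The chain has the LLM translate the question into an expression,
# evaluates it, and returns the numeric answer.
llm_math.run("whats 2 raised to .12")
```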

View File

@ -31,7 +31,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"with open('../../state_of_the_union.txt') as f:\n", "with open(\"../../state_of_the_union.txt\") as f:\n",
" state_of_the_union = f.read()" " state_of_the_union = f.read()"
] ]
}, },
@ -122,7 +122,7 @@
"name": "python", "name": "python",
"nbconvert_exporter": "python", "nbconvert_exporter": "python",
"pygments_lexer": "ipython3", "pygments_lexer": "ipython3",
"version": "3.10.9" "version": "3.9.1"
} }
}, },
"nbformat": 4, "nbformat": 4,

View File

@ -1,33 +0,0 @@
Generic Chains
--------------
A chain is made up of links, which can be either primitives or other chains.
Primitives can be either `prompts <../prompts.html>`_, `llms <../llms.html>`_, `utils <../utils.html>`_, or other chains.
The examples here are all generic end-to-end chains that are meant to be used to construct other chains rather than serving a specific purpose.
**LLMChain**
- **Links Used**: PromptTemplate, LLM
- **Notes**: This chain is the simplest chain, and is widely used by almost every other chain. This chain takes arbitrary user input, creates a prompt with it from the PromptTemplate, passes that to the LLM, and then returns the output of the LLM as the final output.
- `Example Notebook <./generic/llm_chain.html>`_
**Transformation Chain**
- **Links Used**: TransformationChain
- **Notes**: This notebook shows how to use the Transformation Chain, which takes an arbitrary python function and applies it to inputs/outputs of other chains.
- `Example Notebook <./generic/transformation.html>`_
**Sequential Chain**
- **Links Used**: Sequential
- **Notes**: This notebook shows how to combine calling multiple other chains in sequence.
- `Example Notebook <./generic/sequential_chains.html>`_
.. toctree::
:maxdepth: 1
:glob:
:caption: Generic Chains
:name: generic
:hidden:
./generic/*
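A sketch of the LLMChain pattern described in the first entry, which most of the other chains build on (arbitrary user input in, formatted prompt to the LLM, string out):

```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=OpenAI(temperature=0.9), prompt=prompt)

# User input -> PromptTemplate -> LLM -> final output.
print(chain.run("colorful socks"))
```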

View File

@ -2,23 +2,37 @@ How-To Guides
============= =============
A chain is made up of links, which can be either primitives or other chains. A chain is made up of links, which can be either primitives or other chains.
Primitives can be either `prompts <../prompts.html>`_, `llms <../llms.html>`_, `utils <../utils.html>`_, or other chains. Primitives can be either `prompts <../prompts.html>`_, `models <../models.html>`_, arbitrary functions, or other chains.
The examples here are all end-to-end chains for specific applications. The examples here are broken up into three sections:
They are broken up into three categories:
1. `Generic Chains <./generic_how_to.html>`_: Generic chains, that are meant to help build other chains rather than serve a particular purpose. **Generic Functionality**
2. `Utility Chains <./utility_how_to.html>`_: Chains consisting of an LLMChain interacting with a specific util.
3. `Asynchronous <./async_chain.html>`_: Covering asynchronous functionality. Covers both generic chains (that are useful in a wide variety of applications) and generic functionality related to those chains.
.. toctree:: .. toctree::
:maxdepth: 1 :maxdepth: 1
:glob: :glob:
:hidden:
./generic_how_to.rst ./generic/*
./utility_how_to.rst
./async_chain.ipynb
In addition to different types of chains, we also have the following how-to guides for working with chains in general: **Index-related Chains**
Chains related to working with indexes.
.. toctree::
:maxdepth: 1
:glob:
./index_examples/*
**All other chains**
All other types of chains!
.. toctree::
:maxdepth: 1
:glob:
./examples/*
`Load From Hub <./generic/from_hub.html>`_: This notebook covers how to load chains from `LangChainHub <https://github.com/hwchase17/langchain-hub>`_.

View File

@ -17,7 +17,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"with open('../../state_of_the_union.txt') as f:\n", "with open(\"../../state_of_the_union.txt\") as f:\n",
" state_of_the_union = f.read()" " state_of_the_union = f.read()"
] ]
}, },
@ -170,7 +170,7 @@
"name": "python", "name": "python",
"nbconvert_exporter": "python", "nbconvert_exporter": "python",
"pygments_lexer": "ipython3", "pygments_lexer": "ipython3",
"version": "3.10.9" "version": "3.9.1"
} }
}, },
"nbformat": 4, "nbformat": 4,

View File

@ -44,7 +44,7 @@
"outputs": [], "outputs": [],
"source": [ "source": [
"from langchain.document_loaders import TextLoader\n", "from langchain.document_loaders import TextLoader\n",
"loader = TextLoader('../../state_of_the_union.txt')\n", "loader = TextLoader(\"../../state_of_the_union.txt\")\n",
"documents = loader.load()" "documents = loader.load()"
] ]
}, },

View File

@ -105,7 +105,6 @@
] ]
}, },
{ {
"attachments": {},
"cell_type": "markdown", "cell_type": "markdown",
"id": "1da90437", "id": "1da90437",
"metadata": {}, "metadata": {},
@ -169,7 +168,7 @@
"from langchain.text_splitter import CharacterTextSplitter\n", "from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.vectorstores import Chroma\n", "from langchain.vectorstores import Chroma\n",
"\n", "\n",
"with open('../../state_of_the_union.txt') as f:\n", "with open(\"../../state_of_the_union.txt\") as f:\n",
" state_of_the_union = f.read()\n", " state_of_the_union = f.read()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n", "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_text(state_of_the_union)" "texts = text_splitter.split_text(state_of_the_union)"
@ -236,7 +235,7 @@
], ],
"metadata": { "metadata": {
"kernelspec": { "kernelspec": {
"display_name": "Python 3", "display_name": "Python 3 (ipykernel)",
"language": "python", "language": "python",
"name": "python3" "name": "python3"
}, },
@ -250,7 +249,7 @@
"name": "python", "name": "python",
"nbconvert_exporter": "python", "nbconvert_exporter": "python",
"pygments_lexer": "ipython3", "pygments_lexer": "ipython3",
"version": "3.9.12 (main, Mar 26 2022, 15:51:15) \n[Clang 13.1.6 (clang-1316.0.21.2)]" "version": "3.9.1"
}, },
"vscode": { "vscode": {
"interpreter": { "interpreter": {

View File

@ -42,7 +42,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"with open('../../state_of_the_union.txt') as f:\n", "with open(\"../../state_of_the_union.txt\") as f:\n",
" state_of_the_union = f.read()\n", " state_of_the_union = f.read()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n", "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_text(state_of_the_union)\n", "texts = text_splitter.split_text(state_of_the_union)\n",

View File

@ -61,7 +61,7 @@
], ],
"source": [ "source": [
"from langchain.document_loaders import TextLoader\n", "from langchain.document_loaders import TextLoader\n",
"loader = TextLoader('../../state_of_the_union.txt')\n", "loader = TextLoader(\"../../state_of_the_union.txt\")\n",
"docsearch = index_creator.from_loaders([loader])" "docsearch = index_creator.from_loaders([loader])"
] ]
}, },

View File

@ -43,7 +43,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"with open('../../state_of_the_union.txt') as f:\n", "with open(\"../../state_of_the_union.txt\") as f:\n",
" state_of_the_union = f.read()\n", " state_of_the_union = f.read()\n",
"texts = text_splitter.split_text(state_of_the_union)" "texts = text_splitter.split_text(state_of_the_union)"
] ]

View File

@ -41,7 +41,7 @@
], ],
"source": [ "source": [
"from langchain.document_loaders import TextLoader\n", "from langchain.document_loaders import TextLoader\n",
"loader = TextLoader('../../state_of_the_union.txt')\n", "loader = TextLoader(\"../../state_of_the_union.txt\")\n",
"documents = loader.load()\n", "documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n", "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_documents(documents)\n", "texts = text_splitter.split_documents(documents)\n",

View File

@ -31,7 +31,7 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"with open('../../state_of_the_union.txt') as f:\n", "with open(\"../../state_of_the_union.txt\") as f:\n",
" state_of_the_union = f.read()\n", " state_of_the_union = f.read()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n", "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_text(state_of_the_union)\n", "texts = text_splitter.split_text(state_of_the_union)\n",

View File

@ -1,20 +0,0 @@
# Key Concepts
## Chains
A chain is made up of links, which can be either primitives or other chains.
They vary greatly in complexity and are a combination of generic, highly configurable pipelines and more narrow (but usually more complex) pipelines.
## Sequential Chain
This is a specific type of chain where multiple other chains are run in sequence, with the outputs being added as inputs
to the next. A subtype of this type of chain is the [`SimpleSequentialChain`](./generic/sequential_chains.html#simplesequentialchain), where all subchains have only one input and one output,
and the output of one is therefore used as sole input to the next chain.
## Prompt Selectors
One thing that we've noticed is that the best prompt to use really depends on the model you use.
Some prompts work really well with some models, but not so well with others.
One of our goals is to provide good chains that "just work" out of the box.
A big part of chains like that is having prompts that "just work".
So rather than having a default prompt for chains, we are moving towards a paradigm where, if a prompt is not explicitly
provided, we select one with a PromptSelector. This class takes in the model passed in and returns a default prompt.
The inner workings of the PromptSelector can look at any aspect of the model - LLM vs ChatModel, OpenAI vs Cohere, GPT3 vs GPT4, etc.
Due to this being a newer feature, it may not yet be implemented for all chains, but this is the direction we are moving in.
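A sketch of what that looks like in code, using the `ConditionalPromptSelector` helper and `is_chat_model` predicate the library exposed around this time (treat the exact import path as an assumption; the two prompts are hypothetical placeholders):

```python
from langchain.chains.prompt_selector import ConditionalPromptSelector, is_chat_model
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate

DEFAULT_PROMPT = PromptTemplate(input_variables=["question"], template="{question}")
CHAT_PROMPT = ChatPromptTemplate.from_messages(
    [HumanMessagePromptTemplate.from_template("{question}")]
)

# The selector inspects the model it is given and returns a matching prompt.
PROMPT_SELECTOR = ConditionalPromptSelector(
    default_prompt=DEFAULT_PROMPT,
    conditionals=[(is_chat_model, CHAT_PROMPT)],
)

llm = OpenAI(temperature=0)
prompt = PROMPT_SELECTOR.get_prompt(llm)  # an OpenAI LLM gets DEFAULT_PROMPT here
```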

View File

@ -1,65 +0,0 @@
Utility Chains
--------------
A chain is made up of links, which can be either primitives or other chains.
Primitives can be either `prompts <../prompts.html>`_, `llms <../llms.html>`_, `utils <../utils.html>`_, or other chains.
The examples here are all end-to-end chains for specific applications, each focused on an LLMChain interacting with a specific utility.
**LLMMath**
- **Links Used**: Python REPL, LLMChain
- **Notes**: This chain takes user input (a math question), uses an LLMChain to convert it to python code snippet to run in the Python REPL, and then returns that as the result.
- `Example Notebook <./examples/llm_math.html>`_
**PAL**
- **Links Used**: Python REPL, LLMChain
- **Notes**: This chain takes user input (a reasoning question), uses an LLMChain to convert it to python code snippet to run in the Python REPL, and then returns that as the result.
- `Paper <https://arxiv.org/abs/2211.10435>`_
- `Example Notebook <./examples/pal.html>`_
**SQLDatabase Chain**
- **Links Used**: SQLDatabase, LLMChain
- **Notes**: This chain takes user input (a question), first uses an LLMChain to construct a SQL query to run against the SQL database, and then uses another LLMChain to take the results of that query and use them to answer the original question.
- `Example Notebook <./examples/sqlite.html>`_
**API Chain**
- **Links Used**: LLMChain, Requests
- **Notes**: This chain first uses an LLM to construct the URL to hit, then makes that request with the Requests wrapper, and finally runs that result through the language model again in order to produce a natural language response.
- `Example Notebook <./examples/api.html>`_
**LLMBash Chain**
- **Links Used**: BashProcess, LLMChain
- **Notes**: This chain takes user input (a question), uses an LLM chain to convert it to a bash command to run in the terminal, and then returns that as the result.
- `Example Notebook <./examples/llm_bash.html>`_
**LLMChecker Chain**
- **Links Used**: LLMChain
- **Notes**: This chain takes user input (a question), uses an LLM chain to answer that question, and then uses other LLMChains to self-check that answer.
- `Example Notebook <./examples/llm_checker.html>`_
**LLMRequests Chain**
- **Links Used**: Requests, LLMChain
- **Notes**: This chain takes a URL and other inputs, uses Requests to get the data at that URL, and then passes that along with the other inputs into an LLMChain to generate a response. The example included shows how to ask a question to Google - it firsts constructs a Google url, then fetches the data there, then passes that data + the original question into an LLMChain to get an answer.
- `Example Notebook <./examples/llm_requests.html>`_
**Moderation Chain**
- **Links Used**: LLMChain, ModerationChain
- **Notes**: This chain shows how to use OpenAI's content moderation endpoint to screen output, and shows how to connect this to an LLMChain.
- `Example Notebook <./examples/moderation.html>`_
.. toctree::
:maxdepth: 1
:glob:
:caption: Utility Chains
:name: generic
:hidden:
./examples/*
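As a concrete instance of the pattern these entries describe (an LLMChain paired with a utility), a sketch of the SQLDatabase chain, assuming a local SQLite file at a placeholder path:

```python
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain

db = SQLDatabase.from_uri("sqlite:///./Chinook.db")  # placeholder database path
llm = OpenAI(temperature=0)

# First LLM call writes the SQL query; the query runs against the database;
# a second LLM call turns the result into a natural-language answer.
db_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True)
db_chain.run("How many employees are there?")
```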

View File

@ -1,208 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e58f4d5a",
"metadata": {},
"source": [
"# Agent\n",
"This notebook covers how to create a custom agent for a chat model. It will utilize chat specific prompts."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "5268c7fa",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import ZeroShotAgent, Tool, AgentExecutor\n",
"from langchain.chains import LLMChain\n",
"from langchain.utilities import SerpAPIWrapper"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "fbaa4dbe",
"metadata": {},
"outputs": [],
"source": [
"search = SerpAPIWrapper()\n",
"tools = [\n",
" Tool(\n",
" name = \"Search\",\n",
" func=search.run,\n",
" description=\"useful for when you need to answer questions about current events\"\n",
" )\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "f3ba6f08",
"metadata": {},
"outputs": [],
"source": [
"prefix = \"\"\"Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:\"\"\"\n",
"suffix = \"\"\"Begin! Remember to speak as a pirate when giving your final answer. Use lots of \"Args\"\"\"\n",
"\n",
"prompt = ZeroShotAgent.create_prompt(\n",
" tools, \n",
" prefix=prefix, \n",
" suffix=suffix, \n",
" input_variables=[]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "3547a37d",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts.chat import (\n",
" ChatPromptTemplate,\n",
" SystemMessagePromptTemplate,\n",
" AIMessagePromptTemplate,\n",
" HumanMessagePromptTemplate,\n",
")\n",
"from langchain.schema import (\n",
" AIMessage,\n",
" HumanMessage,\n",
" SystemMessage\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "a78f886f",
"metadata": {},
"outputs": [],
"source": [
"messages = [\n",
" SystemMessagePromptTemplate(prompt=prompt),\n",
" HumanMessagePromptTemplate.from_template(\"{input}\\n\\nThis was your previous work \"\n",
" f\"(but I haven't seen any of it! I only see what \"\n",
" \"you return as final answer):\\n{agent_scratchpad}\")\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "dadadd70",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(messages)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "b7180182",
"metadata": {},
"outputs": [],
"source": [
"llm_chain = LLMChain(llm=ChatOpenAI(temperature=0), prompt=prompt)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "ddddb07b",
"metadata": {},
"outputs": [],
"source": [
"tool_names = [tool.name for tool in tools]\n",
"agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "36aef054",
"metadata": {},
"outputs": [],
"source": [
"agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "33a4d6cc",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mArrr, ye be in luck, matey! I'll find ye the answer to yer question.\n",
"\n",
"Thought: I need to search for the current population of Canada.\n",
"Action: Search\n",
"Action Input: \"current population of Canada 2023\"\n",
"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mThe current population of Canada is 38,623,091 as of Saturday, March 4, 2023, based on Worldometer elaboration of the latest United Nations data.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mAhoy, me hearties! I've found the answer to yer question.\n",
"\n",
"Final Answer: As of March 4, 2023, the population of Canada be 38,623,091. Arrr!\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'As of March 4, 2023, the population of Canada be 38,623,091. Arrr!'"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.run(\"How many people live in canada as of 2023?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6aefe978",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
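The notebook above builds the chat-model agent prompt by hand; around this time the same behavior could also be obtained as a one-liner via `initialize_agent` with a chat-specific agent type (treat the exact agent string as an assumption for this version):

```python
from langchain.agents import Tool, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.utilities import SerpAPIWrapper

# The same single Search tool defined in the notebook above.
search = SerpAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events",
    )
]

agent_chain = initialize_agent(
    tools,
    ChatOpenAI(temperature=0),
    agent="chat-zero-shot-react-description",
    verbose=True,
)
agent_chain.run("How many people live in canada as of 2023?")
```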

View File

@ -1,376 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "134a0785",
"metadata": {},
"source": [
"# Chat Vector DB\n",
"\n",
"This notebook goes over how to set up a chat model to chat with a vector database.\n",
"\n",
"This notebook is very similar to the example of using an LLM in the ConversationalRetrievalChain. The only differences here are (1) using a ChatModel, and (2) passing in a ChatPromptTemplate (optimized for chat models)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "70c4e529",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.vectorstores import Chroma\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.chains import ConversationalRetrievalChain"
]
},
{
"cell_type": "markdown",
"id": "cdff94be",
"metadata": {},
"source": [
"Load in documents. You can replace this with a loader for whatever type of data you want"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "01c46e92",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.document_loaders import TextLoader\n",
"loader = TextLoader('../../state_of_the_union.txt')\n",
"documents = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "e9be4779",
"metadata": {},
"source": [
"If you had multiple loaders that you wanted to combine, you do something like:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "433363a5",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# loaders = [....]\n",
"# docs = []\n",
"# for loader in loaders:\n",
"# docs.extend(loader.load())"
]
},
{
"cell_type": "markdown",
"id": "239475d2",
"metadata": {},
"source": [
"We now split the documents, create embeddings for them, and put them in a vectorstore. This allows us to do semantic search over them."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a8930cf7",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running Chroma using direct local API.\n",
"Using DuckDB in-memory for database. Data will be transient.\n"
]
}
],
"source": [
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"documents = text_splitter.split_documents(documents)\n",
"\n",
"embeddings = OpenAIEmbeddings()\n",
"vectorstore = Chroma.from_documents(documents, embeddings)"
]
},
{
"cell_type": "markdown",
"id": "18415aca",
"metadata": {},
"source": [
"We are now going to construct a prompt specifically designed for chat models."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "c8805230",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts.chat import (\n",
" ChatPromptTemplate,\n",
" SystemMessagePromptTemplate,\n",
" AIMessagePromptTemplate,\n",
" HumanMessagePromptTemplate,\n",
")\n",
"from langchain.schema import (\n",
" AIMessage,\n",
" HumanMessage,\n",
" SystemMessage\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "cc86c30e",
"metadata": {},
"outputs": [],
"source": [
"system_template=\"\"\"Use the following pieces of context to answer the users question. \n",
"If you don't know the answer, just say that you don't know, don't try to make up an answer.\n",
"----------------\n",
"{context}\"\"\"\n",
"messages = [\n",
" SystemMessagePromptTemplate.from_template(system_template),\n",
" HumanMessagePromptTemplate.from_template(\"{question}\")\n",
"]\n",
"prompt = ChatPromptTemplate.from_messages(messages)"
]
},
{
"cell_type": "markdown",
"id": "3c96b118",
"metadata": {},
"source": [
"We now initialize the ConversationalRetrievalChain"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "7b4110f3",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"qa = ConversationalRetrievalChain.from_llm(ChatOpenAI(temperature=0), vectorstore,qa_prompt=prompt)"
]
},
{
"cell_type": "markdown",
"id": "3872432d",
"metadata": {},
"source": [
"Here's an example of asking a question with no chat history"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "7fe3e730",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "bfff9cc8",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"\"The President nominated Circuit Court of Appeals Judge Ketanji Brown Jackson to serve on the United States Supreme Court. He described her as one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and a consensus builder. She has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\""
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result[\"answer\"]"
]
},
{
"cell_type": "markdown",
"id": "9e46edf7",
"metadata": {},
"source": [
"Here's an example of asking a question with some chat history"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "00b4cf00",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat_history = [(query, result[\"answer\"])]\n",
"query = \"Did he mention who came before her\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "f01828d1",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"\"The President mentioned Circuit Court of Appeals Judge Ketanji Brown Jackson as the nominee for the United States Supreme Court. He described her as one of the nation's top legal minds who will continue Justice Breyer's legacy of excellence. The President did not mention any specific sources of support for Judge Jackson, but he did note that advancing immigration reform is supported by everyone from labor unions to religious leaders to the U.S. Chamber of Commerce.\""
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result['answer']"
]
},
{
"cell_type": "markdown",
"id": "2324cdc6-98bf-4708-b8cd-02a98b1e5b67",
"metadata": {},
"source": [
"## ConversationalRetrievalChain with streaming to `stdout`\n",
"\n",
"Output from the chain will be streamed to `stdout` token by token in this example."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "2efacec3-2690-4b05-8de3-a32fd2ac3911",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chains.llm import LLMChain\n",
"from langchain.llms import OpenAI\n",
"from langchain.callbacks.base import CallbackManager\n",
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
"from langchain.chains.chat_index.prompts import CONDENSE_QUESTION_PROMPT\n",
"from langchain.chains.question_answering import load_qa_chain\n",
"\n",
"# Construct a ChatVectorDBChain with a streaming llm for combine docs\n",
"# and a separate, non-streaming llm for question generation\n",
"llm = OpenAI(temperature=0)\n",
"streaming_llm = ChatOpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)\n",
"\n",
"question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)\n",
"doc_chain = load_qa_chain(streaming_llm, chain_type=\"stuff\", prompt=prompt)\n",
"\n",
"qa = ConversationalRetrievalChain(retriever=vectorstore.as_retriever(), combine_docs_chain=doc_chain, question_generator=question_generator)\n"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "fd6d43f4-7428-44a4-81bc-26fe88a98762",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The President nominated Circuit Court of Appeals Judge Ketanji Brown Jackson to serve on the United States Supreme Court. He described her as one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and a consensus builder. He also mentioned that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
]
}
],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "5ab38978-f3e8-4fa7-808c-c79dec48379a",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The context does not provide information on who Ketanji Brown Jackson succeeded on the United States Supreme Court."
]
}
],
"source": [
"chat_history = [(query, result[\"answer\"])]\n",
"query = \"Did he mention who she suceeded\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8e8d0055",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@ -1,192 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "9a9350a6",
"metadata": {},
"source": [
"# Memory\n",
"This notebook goes over how to use Memory with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory object."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "110935ae",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import (\n",
" ChatPromptTemplate, \n",
" MessagesPlaceholder, \n",
" SystemMessagePromptTemplate, \n",
" HumanMessagePromptTemplate\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "161b6629",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages([\n",
" SystemMessagePromptTemplate.from_template(\"The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\"),\n",
" MessagesPlaceholder(variable_name=\"history\"),\n",
" HumanMessagePromptTemplate.from_template(\"{input}\")\n",
"])"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "4976fbda",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import ConversationChain\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.memory import ConversationBufferMemory"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "12a0bea6",
"metadata": {},
"outputs": [],
"source": [
"llm = ChatOpenAI(temperature=0)"
]
},
{
"cell_type": "markdown",
"id": "f6edcd6a",
"metadata": {},
"source": [
"We can now initialize the memory. Note that we set `return_messages=True` To denote that this should return a list of messages when appropriate"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "f55bea38",
"metadata": {},
"outputs": [],
"source": [
"memory = ConversationBufferMemory(return_messages=True)"
]
},
{
"cell_type": "markdown",
"id": "737e8c78",
"metadata": {},
"source": [
"We can now use this in the rest of the chain."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "80152db7",
"metadata": {},
"outputs": [],
"source": [
"conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "ac68e766",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Hello! How can I assist you today?'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation.predict(input=\"Hi there!\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "babb33d0",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"That sounds like fun! I'm happy to chat with you. Is there anything specific you'd like to talk about?\""
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation.predict(input=\"I'm doing well! Just having a conversation with an AI.\")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "36f8a1dc",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Sure! I am an AI language model created by OpenAI. I was trained on a large dataset of text from the internet, which allows me to understand and generate human-like language. I can answer questions, provide information, and even have conversations like this one. Is there anything else you'd like to know about me?\""
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation.predict(input=\"Tell me about yourself.\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "79fb460b",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
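One more detail worth knowing about the memory object used above: its contents can be inspected directly, which makes the effect of `return_messages=True` visible. A short sketch:

```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(return_messages=True)
memory.save_context({"input": "Hi there!"}, {"output": "Hello! How can I assist you today?"})

# With return_messages=True the history comes back as message objects,
# ready to slot into a MessagesPlaceholder, not as one long string.
print(memory.load_memory_variables({}))
# {'history': [HumanMessage(content='Hi there!'), AIMessage(content='Hello! ...')]}
```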

View File

@ -1,169 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "07c1e3b9",
"metadata": {},
"source": [
"# Retrieval Question/Answering\n",
"\n",
"This example showcases using a chat model to do question answering over a vector database.\n",
"\n",
"This notebook is very similar to the example of using an LLM in the RetrievalQA. The only differences here are (1) using a ChatModel, and (2) passing in a ChatPromptTemplate (optimized for chat models)."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "82525493",
"metadata": {},
"outputs": [],
"source": [
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.vectorstores import Chroma\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.chains import RetrievalQA"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "5c7049db",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running Chroma using direct local API.\n",
"Using DuckDB in-memory for database. Data will be transient.\n"
]
}
],
"source": [
"from langchain.document_loaders import TextLoader\n",
"loader = TextLoader('../../state_of_the_union.txt')\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_documents(documents)\n",
"\n",
"embeddings = OpenAIEmbeddings()\n",
"docsearch = Chroma.from_documents(texts, embeddings)"
]
},
{
"cell_type": "markdown",
"id": "35f99145",
"metadata": {},
"source": [
"We can now set up the chat model and chat model specific prompt"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "32a49412",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts.chat import (\n",
" ChatPromptTemplate,\n",
" SystemMessagePromptTemplate,\n",
" AIMessagePromptTemplate,\n",
" HumanMessagePromptTemplate,\n",
")\n",
"from langchain.schema import (\n",
" AIMessage,\n",
" HumanMessage,\n",
" SystemMessage\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "f231fb9b",
"metadata": {},
"outputs": [],
"source": [
"system_template=\"\"\"Use the following pieces of context to answer the users question. \n",
"If you don't know the answer, just say that you don't know, don't try to make up an answer.\n",
"----------------\n",
"{context}\"\"\"\n",
"messages = [\n",
" SystemMessagePromptTemplate.from_template(system_template),\n",
" HumanMessagePromptTemplate.from_template(\"{question}\")\n",
"]\n",
"prompt = ChatPromptTemplate.from_messages(messages)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "3018f865",
"metadata": {},
"outputs": [],
"source": [
"chain_type_kwargs = {\"prompt\": prompt}\n",
"qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(), chain_type=\"stuff\", retriever=docsearch.as_retriever(), chain_type_kwargs=chain_type_kwargs)\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "032a47f8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"The president nominated Ketanji Brown Jackson to serve on the United States Supreme Court. He referred to her as one of our nation's top legal minds, a former federal public defender, a consensus builder, and from a family of public school educators and police officers. Since she's been nominated, she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\""
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"qa.run(query)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8b403637",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
},
"vscode": {
"interpreter": {
"hash": "b1677b440931f40d89ef8be7bf03acb108ce003de0ac9b18e8d43753ea2e7103"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}
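A follow-up worth noting on the `RetrievalQA` chain built in the notebook above: it can also hand back the retrieved documents alongside the answer. A sketch reusing the `docsearch` and `prompt` objects as constructed there:

```python
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

# `docsearch` and `prompt` as built in the notebook above.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(),
    chain_type="stuff",
    retriever=docsearch.as_retriever(),
    chain_type_kwargs={"prompt": prompt},
    return_source_documents=True,  # also return the chunks the answer was based on
)
result = qa({"query": "What did the president say about Ketanji Brown Jackson"})
print(result["result"])
print(result["source_documents"])
```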

View File

@ -1,206 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "efc5be67",
"metadata": {},
"source": [
"# Retrieval Question Answering with Sources\n",
"\n",
"This notebook goes over how to do question-answering with sources with a chat model over a vector database. It does this by using the `RetrievalQAWithSourcesChain`, which does the lookup of the documents from a vector database. \n",
"\n",
"This notebook is very similar to the example of using an LLM in the RetrievalQAWithSources. The only differences here are (1) using a ChatModel, and (2) passing in a ChatPromptTemplate (optimized for chat models)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "1c613960",
"metadata": {},
"outputs": [],
"source": [
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.embeddings.cohere import CohereEmbeddings\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch\n",
"from langchain.vectorstores import Chroma"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "17d1306e",
"metadata": {},
"outputs": [],
"source": [
"with open('../../state_of_the_union.txt') as f:\n",
" state_of_the_union = f.read()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_text(state_of_the_union)\n",
"\n",
"embeddings = OpenAIEmbeddings()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "0e745d99",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running Chroma using direct local API.\n",
"Using DuckDB in-memory for database. Data will be transient.\n",
"Running Chroma using direct local API.\n",
"Using DuckDB in-memory for database. Data will be transient.\n"
]
}
],
"source": [
"docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{\"source\": f\"{i}-pl\"} for i in range(len(texts))])"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "8aa571ae",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import RetrievalQAWithSourcesChain"
]
},
{
"cell_type": "markdown",
"id": "1f73b14a",
"metadata": {},
"source": [
"We can now set up the chat model and chat model specific prompt"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "9643c775",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts.chat import (\n",
" ChatPromptTemplate,\n",
" SystemMessagePromptTemplate,\n",
" AIMessagePromptTemplate,\n",
" HumanMessagePromptTemplate,\n",
")\n",
"from langchain.schema import (\n",
" AIMessage,\n",
" HumanMessage,\n",
" SystemMessage\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "ed00e906",
"metadata": {},
"outputs": [],
"source": [
"system_template=\"\"\"Use the following pieces of context to answer the users question. \n",
"If you don't know the answer, just say that you don't know, don't try to make up an answer.\n",
"ALWAYS return a \"SOURCES\" part in your answer.\n",
"The \"SOURCES\" part should be a reference to the source of the document from which you got your answer.\n",
"\n",
"Example of your response should be:\n",
"\n",
"```\n",
"The answer is foo\n",
"SOURCES: xyz\n",
"```\n",
"\n",
"Begin!\n",
"----------------\n",
"{summaries}\"\"\"\n",
"messages = [\n",
" SystemMessagePromptTemplate.from_template(system_template),\n",
" HumanMessagePromptTemplate.from_template(\"{question}\")\n",
"]\n",
"prompt = ChatPromptTemplate.from_messages(messages)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "aa859d4c",
"metadata": {},
"outputs": [],
"source": [
"chain_type_kwargs = {\"prompt\": prompt}\n",
"chain = RetrievalQAWithSourcesChain.from_chain_type(\n",
" ChatOpenAI(temperature=0), \n",
" chain_type=\"stuff\", \n",
" retriever=docsearch.as_retriever(),\n",
" chain_type_kwargs=chain_type_kwargs\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "8ba36fa7",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'answer': 'The President honored Justice Stephen Breyer, an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court, for his dedicated service to the country. \\n',\n",
" 'sources': '31-pl'}"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain({\"question\": \"What did the president say about Justice Breyer\"}, return_only_outputs=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8308fbf7",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
},
"vscode": {
"interpreter": {
"hash": "b1677b440931f40d89ef8be7bf03acb108ce003de0ac9b18e8d43753ea2e7103"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@ -1,29 +0,0 @@
# Key Concepts
## ChatMessage
A chat message is the modular unit of information for chat models.
At the moment, this consists of "content", the content of the message.
Most chat models are trained to predict sequences of Human <> AI messages, since the primary interaction mode so far has been between a human user and a single AI system.
There are currently four different classes of chat messages:
### HumanMessage
A HumanMessage is a ChatMessage that is sent as if from a Human's point of view.
### AIMessage
An AIMessage is a ChatMessage that is sent from the point of view of the AI system with which the human is conversing.
### SystemMessage
A SystemMessage is still a bit ambiguous, and so far it seems to be a concept unique to OpenAI.
### ChatMessage
A ChatMessage is a generic message, with not only a "content" field but also a "role" field.
With this field, arbitrary roles may be assigned to a message.
## ChatGeneration
The output of a single prediction of a chat message.
Currently, this is just a chat message itself (e.g. content and a role).
## Chat Model
A model that takes in a list of chat messages and predicts a chat message in response.
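As a minimal sketch (assuming an OpenAI API key is set; the exact response text will vary):

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI(temperature=0)

# A chat model takes a list of chat messages and returns a single AIMessage
messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="I love programming."),
]
response = chat(messages)  # e.g. AIMessage(content="J'aime programmer.")
```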

View File

@ -1,87 +0,0 @@
How To Guides
====================================
LangChain supports many different document loaders, and they all share the same basic interface. Below are how-to guides for working with each of them.
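A minimal sketch of that interface (the example file path here is hypothetical):

.. code-block:: python

    from langchain.document_loaders import UnstructuredFileLoader

    # Unstructured can load files of arbitrary types (pdf, txt, html, etc.)
    loader = UnstructuredFileLoader("example_data/state_of_the_union.txt")
    docs = loader.load()  # returns a list of Document objects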
`File Loader <./examples/unstructured_file.html>`_: A walkthrough of how to use Unstructured to load files of arbitrary types (pdf, txt, html, etc.).
`Directory Loader <./examples/directory_loader.html>`_: A walkthrough of how to use Unstructured to load files from a given directory.
`Notion <./examples/notion.html>`_: A walkthrough of how to load data from an arbitrary Notion DB.
`ReadTheDocs <./examples/readthedocs_documentation.html>`_: A walkthrough of how to load data from documentation generated by ReadTheDocs.
`HTML <./examples/html.html>`_: A walkthrough of how to load data from an HTML file.
`PDF <./examples/pdf.html>`_: A walkthrough of how to load data from a PDF file.
`PowerPoint <./examples/powerpoint.html>`_: A walkthrough of how to load data from a PowerPoint file.
`Email <./examples/email.html>`_: A walkthrough of how to load data from an email (`.eml`) file.
`GoogleDrive <./examples/googledrive.html>`_: A walkthrough of how to load data from Google Drive.
`Obsidian <./examples/obsidian.html>`_: A walkthrough of how to load data from an Obsidian file dump.
`Roam <./examples/roam.html>`_: A walkthrough of how to load data from a Roam file export.
`EverNote <./examples/evernote.html>`_: A walkthrough of how to load data from an EverNote (`.enex`) file.
`YouTube <./examples/youtube.html>`_: A walkthrough of how to load the transcript from a YouTube video.
`Hacker News <./examples/hn.html>`_: A walkthrough of how to load a Hacker News page.
`GitBook <./examples/gitbook.html>`_: A walkthrough of how to load a GitBook page.
`s3 File <./examples/s3_file.html>`_: A walkthrough of how to load a file from S3.
`s3 Directory <./examples/s3_directory.html>`_: A walkthrough of how to load all files in a directory from S3.
`GCS File <./examples/gcs_file.html>`_: A walkthrough of how to load a file from Google Cloud Storage (GCS).
`GCS Directory <./examples/gcs_directory.html>`_: A walkthrough of how to load all files in a directory from Google Cloud Storage (GCS).
`Web Base <./examples/web_base.html>`_: A walkthrough of how to load all text data from webpages.
`IMSDb <./examples/imsdb.html>`_: A walkthrough of how to load all text data from the IMSDb webpage.
`AZLyrics <./examples/azlyrics.html>`_: A walkthrough of how to load all text data from the AZLyrics webpage.
`College Confidential <./examples/college_confidential.html>`_: A walkthrough of how to load all text data from the College Confidential webpage.
`Gutenberg <./examples/gutenberg.html>`_: A walkthrough of how to load data from a Gutenberg ebook text.
`Airbyte Json <./examples/airbyte_json.html>`_: A walkthrough of how to load data from a local Airbyte JSON file.
`CoNLL-U <./examples/CoNLL-U.html>`_: A walkthrough of how to load data from a CoNLL-U file.
`iFixit <./examples/ifixit.html>`_: A walkthrough of how to search and load data like guides, technical Q&As, and device wikis from iFixit.com.
`Notebook <./examples/notebook.html>`_: A walkthrough of how to load data from an .ipynb notebook.
`Copypaste <./examples/copypaste.html>`_: A walkthrough of how to load a document object from something you just want to copy and paste.
`CSV <./examples/csv.html>`_: A walkthrough of how to load data from a .csv file.
`Facebook Chat <./examples/facebook_chat.html>`_: A walkthrough of how to load data from a Facebook Chat JSON file.
`Image <./examples/image.html>`_: A walkthrough of how to load images such as JPGs and PNGs into a document format that can be used downstream.
`Markdown <./examples/markdown.html>`_: A walkthrough of how to load data from a Markdown file.
`SRT <./examples/srt.html>`_: A walkthrough of how to load data from a subtitle (`.srt`) file.
`Telegram <./examples/telegram.html>`_: A walkthrough of how to load data from a Telegram Chat JSON file.
`URL <./examples/url.html>`_: A walkthrough of how to load HTML documents from a list of URLs into a document format that we can use downstream.
`Word Document <./examples/word_document.html>`_: A walkthrough of how to load data from Microsoft Word files.
`Blackboard <./examples/blackboard.html>`_: A walkthrough of how to load data from a Blackboard course.
.. toctree::
:maxdepth: 1
:glob:
:hidden:
examples/*

View File

@ -1,12 +0,0 @@
# Key Concepts
## Document
This class is a container for document information. This contains two parts:
- `page_content`: The content of the actual page itself.
- `metadata`: The metadata associated with the document. This can be things like the file path, the URL, etc.
## Loader
This base class is a way to load documents. It exposes a `load` method that returns `Document` objects.
## [Unstructured](https://github.com/Unstructured-IO/unstructured)
Unstructured is a Python package specifically focused on transformations from raw documents to text.
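A minimal sketch of the Loader interface (assuming a local text file at the given path):

```python
from langchain.document_loaders import TextLoader

# Every loader exposes a `load` method that returns Document objects
loader = TextLoader("state_of_the_union.txt")
docs = loader.load()

print(docs[0].page_content[:100])
print(docs[0].metadata)  # e.g. {'source': 'state_of_the_union.txt'}
```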

View File

@ -1,6 +1,10 @@
Indexes Indexes
========================== ==========================
.. note::
`Conceptual Guide <https://docs.langchain.com/docs/components/indexing>`_
Indexes refer to ways to structure documents so that LLMs can best interact with them. Indexes refer to ways to structure documents so that LLMs can best interact with them.
This module contains utility functions for working with documents, different types of indexes, and then examples for using those indexes in chains. This module contains utility functions for working with documents, different types of indexes, and then examples for using those indexes in chains.
@ -14,20 +18,42 @@ For interacting with structured data (SQL tables, etc) or APIs, please see the c
The primary index and retrieval types supported by LangChain are currently centered around vector databases, and therefore The primary index and retrieval types supported by LangChain are currently centered around vector databases, and therefore
much of the functionality dives deep on those topics. much of the functionality dives deep on those topics.
The following sections of documentation are provided: For an overview of everything related to this, please see the below notebook for getting started:
- `Getting Started <./indexes/getting_started.html>`_: An overview of the base "Retriever" interface, and then all the functionality LangChain provides for working with indexes. .. toctree::
:maxdepth: 1
- `Key Concepts <./indexes/key_concepts.html>`_: A conceptual guide going over the various concepts related to indexes and the tools needed to create them. ./indexes/getting_started.ipynb
- `How-To Guides <./indexes/how_to_guides.html>`_: A collection of how-to guides. These highlight how to use all the relevant tools, the different types of vector databases, different types of retrievers, and how to use retrievers and indexes in chains. We then provide a deep dive on the four main components.
**Document Loaders**
How to load documents from a variety of sources.
**Text Splitters**
An overview of the abstractions and implementations around splitting text.
**VectorStores**
An overview of VectorStores and the many integrations LangChain provides.
**Retrievers**
An overview of Retrievers and the implementations LangChain provides.
Go Deeper
---------
.. toctree:: .. toctree::
:maxdepth: 1 :maxdepth: 1
:name: LLMs
:hidden:
./indexes/getting_started.ipynb ./indexes/document_loaders.rst
./indexes/key_concepts.md ./indexes/text_splitters.rst
./indexes/how_to_guides.rst ./indexes/vectorstores.rst
./indexes/retrievers.rst

View File

@ -1,51 +0,0 @@
# CombineDocuments Chains
CombineDocuments chains are useful when you need to run a language model over multiple documents.
Common use cases for this include question answering, question answering with sources, summarization, and more.
For more information on specific use cases as well as different methods for **fetching** these documents, please see
[this overview](/use_cases/combine_docs.md).
This documentation picks up after you've fetched your documents - now what?
How do you pass them to the language model in a format it can understand?
There are a few different methods, or chains, for doing so. LangChain supports four of the more common ones - and
we are actively looking to include more, so if you have any ideas please reach out! Note that there is not
one best method - the decision of which one to use is often very context specific. In order from simplest to
most complex:
## Stuffing
Stuffing is the simplest method, whereby you simply stuff all the related data into the prompt as context
to pass to the language model. This is implemented in LangChain as the `StuffDocumentsChain`.
**Pros:** Only makes a single call to the LLM. When generating text, the LLM has access to all the data at once.
**Cons:** Most LLMs have a context length, and for large documents (or many documents) this will not work as it will result in a prompt larger than the context length.
The main downside of this method is that it only works on smaller pieces of data. Once you are working
with many pieces of data, this approach is no longer feasible. The next two approaches are designed to help deal with that.
## Map Reduce
This method involves running an initial prompt on each chunk of data (for summarization tasks, this
could be a summary of that chunk; for question-answering tasks, it could be an answer based solely on that chunk).
Then a different prompt is run to combine all the initial outputs. This is implemented in LangChain as the `MapReduceDocumentsChain`.
**Pros:** Can scale to larger documents (and more documents) than `StuffDocumentsChain`. The calls to the LLM on individual documents are independent and can therefore be parallelized.
**Cons:** Requires many more calls to the LLM than `StuffDocumentsChain`. Loses some information during the final combined call.
## Refine
This method involves running an initial prompt on the first chunk of data, generating some output.
For the remaining documents, that output is passed in, along with the next document,
asking the LLM to refine the output based on the new document.
**Pros:** Can pull in more relevant context, and may be less lossy than `MapReduceDocumentsChain`.
**Cons:** Requires many more calls to the LLM than `StuffDocumentsChain`. The calls are also NOT independent, meaning they cannot be parallelized like `MapReduceDocumentsChain`. There are also some potential dependencies on the ordering of the documents.
## Map-Rerank
This method involves running an initial prompt on each chunk of data that not only tries to complete a
task but also gives a score for how certain it is in its answer. The responses are then
ranked according to this score, and the highest-scoring response is returned.
**Pros:** Similar pros to `MapReduceDocumentsChain`. Requires fewer calls compared to `MapReduceDocumentsChain`.
**Cons:** Cannot combine information between documents. This means it is most useful when you expect there to be a single simple answer in a single document.
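As a rough sketch, all four methods can be selected with the `chain_type` argument when loading a question-answering chain (assuming `docs` is a list of `Document` objects you have already fetched):

```python
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

# chain_type is one of "stuff", "map_reduce", "refine", or "map_rerank"
chain = load_qa_chain(llm, chain_type="map_reduce")
answer = chain.run(input_documents=docs, question="What did the president say about Justice Breyer?")
```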

Some files were not shown because too many files have changed in this diff.