docs: `modules` pages simplified (#5116)

# docs: modules pages simplified

Fixes #5627.

Merged several repetitive sections in the `modules` pages. Texts that were
hard to understand were also simplified.


## Who can review?

@hwchase17
@dev2049
Leonid Ganeline 12 months ago committed by GitHub
parent bc875a9df1
commit 95c6ed0568

@@ -5,108 +5,101 @@ Agents
`Conceptual Guide <https://docs.langchain.com/docs/components/agents>`_
Some applications will require not just a predetermined chain of calls to LLMs/other tools,
Some applications require not just a predetermined chain of calls to LLMs/other tools,
but potentially an unknown chain that depends on the user's input.
In these types of chains, there is a “agent” which has access to a suite of tools.
In these types of chains, there is an **agent** which has access to a suite of **tools**.
Depending on the user input, the agent can then decide which, if any, of these tools to call.
At the moment, there are two main types of agents:
1. "Action Agents": these agents decide an action to take and take that action one step at a time
2. "Plan-and-Execute Agents": these agents first decide a plan of actions to take, and then execute those actions one at a time.
1. **Action Agents**: these agents decide the actions to take and execute those actions one at a time.
2. **Plan-and-Execute Agents**: these agents first decide a plan of actions to take, and then execute those actions one at a time.
When should you use each one? Action Agents are more conventional, and good for small tasks.
For more complex or long running tasks, the initial planning step helps to maintain long term objectives and focus. However, that comes at the expense of generally more calls and higher latency.
These two agents are also not mutually exclusive - in fact, it is often best to have an Action Agent be in charge of the execution for the Plan and Execute agent.
For more complex or long running tasks, the initial planning step helps to maintain long term objectives and focus.
However, that comes at the expense of generally more calls and higher latency.
These two agents are also not mutually exclusive - in fact, it is often best to have an Action Agent be in charge
of the execution for the Plan and Execute agent.
Action Agents
-------------
High level pseudocode of agents looks something like:
High-level pseudocode of the Action Agents:
- Some user input is received
- The `agent` decides which `tool` - if any - to use, and what the input to that tool should be
- That `tool` is then called with that `tool input`, and an `observation` is recorded (this is just the output of calling that tool with that tool input)
- That history of `tool`, `tool input`, and `observation` is passed back into the `agent`, and it decides what step to take next
- This is repeated until the `agent` decides it no longer needs to use a `tool`, and then it responds directly to the user.
- The **user input** is received
- The **agent** decides which **tool** - if any - to use, and what the **tool input** should be
- That **tool** is then called with the **tool input**, and an **observation** is recorded (the output of that tool call)
- That history of **tool**, **tool input**, and **observation** is passed back into the **agent**, and it decides the next step
- This is repeated until the **agent** decides it no longer needs to use a **tool**, and then it responds directly to the user.
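The loop above can be sketched in plain Python. This is an illustrative stand-in, not LangChain code: `fake_agent` and the `search` tool are hypothetical stubs, while `AgentAction`/`AgentFinish` mirror the abstraction names described below.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    tool: str
    tool_input: str

@dataclass
class AgentFinish:
    output: str

# Hypothetical stand-in for an LLM-backed agent: it calls the
# search tool once, then finishes with the recorded observation.
def fake_agent(user_input, steps):
    if not steps:
        return AgentAction(tool="search", tool_input=user_input)
    _, observation = steps[-1]
    return AgentFinish(output=f"Answer based on: {observation}")

TOOLS = {"search": lambda q: f"results for '{q}'"}

def run_agent(user_input):
    steps = []  # history of (AgentAction, observation) pairs
    while True:
        decision = fake_agent(user_input, steps)
        if isinstance(decision, AgentFinish):
            return decision.output
        observation = TOOLS[decision.tool](decision.tool_input)
        steps.append((decision, observation))
```

The loop only terminates when the agent returns an `AgentFinish`; a real executor would also enforce a stopping criterion such as a maximum number of iterations.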
The different abstractions involved in agents are as follows:
- Agent: this is where the logic of the application lives. Agents expose an interface that takes in user input along with a list of previous steps the agent has taken, and returns either an `AgentAction` or `AgentFinish`
- `AgentAction` corresponds to the tool to use and the input to that tool
- `AgentFinish` means the agent is done, and has information around what to return to the user
- Tools: these are the actions an agent can take. What tools you give an agent highly depend on what you want the agent to do
- Toolkits: these are groups of tools designed for a specific use case. For example, in order for an agent to interact with a SQL database in the best way it may need access to one tool to execute queries and another tool to inspect tables.
- Agent Executor: this wraps an agent and a list of tools. This is responsible for the loop of running the agent iteratively until the stopping criteria is met.
The different abstractions involved in agents are:
The most important abstraction of the four above to understand is that of the agent.
Although an agent can be defined in whatever way one chooses, the typical way to construct an agent is with:
- **Agent**: this is where the logic of the application lives. Agents expose an interface that takes in user input
along with a list of previous steps the agent has taken, and returns either an **AgentAction** or **AgentFinish**
- PromptTemplate: this is responsible for taking the user input and previous steps and constructing a prompt to send to the language model
- Language Model: this takes the prompt constructed by the PromptTemplate and returns some output
- Output Parser: this takes the output of the Language Model and parses it into an `AgentAction` or `AgentFinish` object.
- **AgentAction** corresponds to the tool to use and the input to that tool
- **AgentFinish** means the agent is done, and has information around what to return to the user
- **Tools**: these are the actions an agent can take. What tools you give an agent highly depend on what you want the agent to do
- **Toolkits**: these are groups of tools designed for a specific use case. For example, in order for an agent to
interact with a SQL database in the best way it may need access to one tool to execute queries and another tool to inspect tables.
- **Agent Executor**: this wraps an agent and a list of tools. This is responsible for the loop of running the agent
iteratively until the stopping criteria is met.
In this section of documentation, we first start with a Getting Started notebook to cover how to use all things related to agents in an end-to-end manner.
.. toctree::
:maxdepth: 1
:hidden:
./agents/getting_started.ipynb
|
- `Getting Started <./agents/getting_started.html>`_: An overview of agents. It covers how to use all things related to agents in an end-to-end manner.
We then split the documentation into the following sections:
**Tools**
|
**Agent Construction:**
In this section we cover the different types of tools LangChain supports natively.
We then cover how to add your own tools.
Although an agent can be constructed in many ways, the typical way to construct an agent is with:
- **PromptTemplate**: this is responsible for taking the user input and previous steps and constructing a prompt
to send to the language model
- **Language Model**: this takes the prompt constructed by the PromptTemplate and returns some output
- **Output Parser**: this takes the output of the Language Model and parses it into an **AgentAction** or **AgentFinish** object.
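The three components above can be wired together as a small sketch. Everything here is a hypothetical stub; a real agent would call an actual language model:

```python
def prompt_template(user_input, steps):
    # Turn the user input and prior steps into a single prompt string.
    history = "\n".join(f"{action} -> {obs}" for action, obs in steps)
    return f"Question: {user_input}\nSteps so far:\n{history}\nNext:"

def language_model(prompt):
    # Stand-in for an LLM call; a real model would generate this text.
    return "Final Answer: 42"

def output_parser(text):
    # Map raw model text to ("finish", answer) or ("action", tool, input).
    if text.startswith("Final Answer:"):
        return ("finish", text[len("Final Answer:"):].strip())
    tool, _, tool_input = text.partition(":")
    return ("action", tool.strip(), tool_input.strip())

decision = output_parser(language_model(prompt_template("meaning of life?", [])))
```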
**Agents**
In this section we cover the different types of agents LangChain supports natively.
We then cover how to modify and create your own agents.
|
**Additional Documentation:**
**Toolkits**
- `Tools <./agents/tools.html>`_: Different types of **tools** LangChain supports natively. We also cover how to add your own tools.
In this section we go over the various toolkits that LangChain supports out of the box,
and how to create an agent from them.
- `Agents <./agents/agents.html>`_: Different types of **agents** LangChain supports natively. We also cover how to
modify and create your own agents.
- `Toolkits <./agents/toolkits.html>`_: Various **toolkits** that LangChain supports out of the box, and how to
create an agent from them.
**Agent Executor**
- `Agent Executor <./agents/agent_executors.html>`_: The **Agent Executor** class, which is responsible for calling
the agent and tools in a loop. We go over different ways to customize this, and options you can use for more control.
In this section we go over the Agent Executor class, which is responsible for calling
the agent and tools in a loop. We go over different ways to customize this, and options you
can use for more control.
**Go Deeper**
.. toctree::
:maxdepth: 1
./agents/tools.rst
./agents/agents.rst
./agents/toolkits.rst
./agents/agent_executors.rst
Plan-and-Execute Agents
-----------------------
High-level pseudocode of the **Plan-and-Execute Agents**:
High level pseudocode of agents looks something like:
- The **user input** is received
- The **planner** lists out the steps to take
- The **executor** goes through the list of steps, executing them
- Some user input is received
- The planner lists out the steps to take
- The executor goes through the list of steps, executing them
The most typical implementation is to have the planner be a language model, and the executor be an action agent.
The most typical implementation is to have the planner be a language model,
and the executor be an action agent.
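The planner/executor split can be sketched as two functions. Both are illustrative stubs; in LangChain the planner would be a language model and the executor an Action Agent:

```python
def planner(user_input):
    # Stand-in for an LLM planner that writes out the steps to take.
    return [f"research: {user_input}", f"summarize: {user_input}"]

def executor(step):
    # Stand-in for an Action Agent carrying out one planned step.
    return f"done ({step})"

def plan_and_execute(user_input):
    plan = planner(user_input)
    return [executor(step) for step in plan]
```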
|
- `Plan-and-Execute Agents <./agents/plan_and_execute.html>`_
**Go Deeper**
.. toctree::
:maxdepth: 1
:hidden:
./agents/getting_started.ipynb
./agents/tools.rst
./agents/agents.rst
./agents/toolkits.rst
./agents/agent_executors.rst
./agents/plan_and_execute.ipynb

@@ -6,14 +6,13 @@ Chains
Using an LLM in isolation is fine for some simple applications,
but many more complex ones require chaining LLMs - either with each other or with other experts.
LangChain provides a standard interface for Chains, as well as some common implementations of chains for ease of use.
but more complex applications require chaining LLMs - either with each other or with other experts.
LangChain provides a standard interface for **Chains**, as well as several common implementations of chains.
The following sections of documentation are provided:
|
- `Getting Started <./chains/getting_started.html>`_: An overview of chains.
- `Getting Started <./chains/getting_started.html>`_: A getting started guide for chains, to get you up and running quickly.
- `How-To Guides <./chains/how_to_guides.html>`_: A collection of how-to guides. These highlight how to use various types of chains.
- `How-To Guides <./chains/how_to_guides.html>`_: How-to guides about various types of chains.
- `Reference <../reference/modules/chains.html>`_: API reference documentation for all Chain classes.
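The core idea of chaining, feeding each component's output into the next, can be sketched in a few lines. The "experts" here are hypothetical lambdas; real chains wrap prompt templates and LLM calls:

```python
def chain(*components):
    # Run each component on the previous component's output.
    def run(value):
        for component in components:
            value = component(value)
        return value
    return run

# Hypothetical stand-ins for chained steps.
summarize = lambda text: text.split(".")[0] + "."
shout = lambda text: text.upper()

pipeline = chain(summarize, shout)
```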

@@ -5,53 +5,41 @@ Indexes
`Conceptual Guide <https://docs.langchain.com/docs/components/indexing>`_
Indexes refer to ways to structure documents so that LLMs can best interact with them.
This module contains utility functions for working with documents, different types of indexes, and then examples for using those indexes in chains.
**Indexes** refer to ways to structure documents so that LLMs can best interact with them.
The most common way that indexes are used in chains is in a "retrieval" step.
This step refers to taking a user's query and returning the most relevant documents.
We draw this distinction because (1) an index can be used for other things besides retrieval, and (2) retrieval can use other logic besides an index to find relevant documents.
We therefore have a concept of a "Retriever" interface - this is the interface that most chains work with.
We draw this distinction because (1) an index can be used for other things besides retrieval, and
(2) retrieval can use other logic besides an index to find relevant documents.
We therefore have a concept of a **Retriever** interface - this is the interface that most chains work with.
Most of the time when we talk about indexes and retrieval we are talking about indexing and retrieving unstructured data (like text documents).
For interacting with structured data (SQL tables, etc) or APIs, please see the corresponding use case sections for links to relevant functionality.
The primary index and retrieval types supported by LangChain are currently centered around vector databases, and therefore
a lot of the functionality we dive deep on those topics.
Most of the time when we talk about indexes and retrieval we are talking about indexing and retrieving
unstructured data (like text documents).
For interacting with structured data (SQL tables, etc) or APIs, please see the corresponding use case
sections for links to relevant functionality.
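The Retriever interface mentioned above is small: given a query, return relevant documents. A toy sketch (a keyword match stands in for the vector-database lookup a real retriever would typically perform):

```python
class KeywordRetriever:
    """Toy retriever: a real one would typically query a vector store."""

    def __init__(self, documents):
        self.documents = documents

    def get_relevant_documents(self, query):
        # Return documents containing the query string (case-insensitive).
        return [d for d in self.documents if query.lower() in d.lower()]

docs = ["LangChain ships many retrievers.", "Indexes structure documents."]
retriever = KeywordRetriever(docs)
```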
For an overview of everything related to this, please see the below notebook for getting started:
|
- `Getting Started <./indexes/getting_started.html>`_: An overview of indexes.
.. toctree::
:maxdepth: 1
./indexes/getting_started.ipynb
We then provide a deep dive on the four main components.
**Document Loaders**
How to load documents from a variety of sources.
Index Types
---------------------
**Text Splitters**
- `Document Loaders <./indexes/document_loaders.html>`_: How to load documents from a variety of sources.
An overview of the abstractions and implementions around splitting text.
- `Text Splitters <./indexes/text_splitters.html>`_: An overview of **Text Splitters** and the different types available.
- `VectorStores <./indexes/vectorstores.html>`_: An overview of **Vector Stores** and the different types available.
**VectorStores**
- `Retrievers <./indexes/retrievers.html>`_: An overview of **Retrievers** and the different types available.
An overview of VectorStores and the many integrations LangChain provides.
**Retrievers**
An overview of Retrievers and the implementations LangChain provides.
Go Deeper
---------
.. toctree::
:maxdepth: 1
:hidden:
./indexes/getting_started.ipynb
./indexes/document_loaders.rst
./indexes/text_splitters.rst
./indexes/vectorstores.rst

@@ -9,16 +9,15 @@ By default, Chains and Agents are stateless,
meaning that they treat each incoming query independently (as are the underlying LLMs and chat models).
In some applications (chatbots being a GREAT example) it is highly important
to remember previous interactions, both at a short term but also at a long term level.
The concept of “Memory” exists to do exactly that.
**Memory** does exactly that.
LangChain provides memory components in two forms.
First, LangChain provides helper utilities for managing and manipulating previous chat messages.
These are designed to be modular and useful regardless of how they are used.
Secondly, LangChain provides easy ways to incorporate these utilities into chains.
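The message-management utilities can be pictured as a small buffer of role-tagged messages that a chain prepends to its prompt. This is an illustrative sketch, not the library's actual class:

```python
class ChatMessageHistory:
    """Minimal sketch of a chat-message buffer (illustrative only)."""

    def __init__(self):
        self.messages = []  # list of (role, content) pairs

    def add_user_message(self, text):
        self.messages.append(("human", text))

    def add_ai_message(self, text):
        self.messages.append(("ai", text))

    def buffer(self):
        # Render the history into a string a chain could prepend to a prompt.
        return "\n".join(f"{role}: {text}" for role, text in self.messages)

history = ChatMessageHistory()
history.add_user_message("hi")
history.add_ai_message("hello!")
```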
The following sections of documentation are provided:
- `Getting Started <./memory/getting_started.html>`_: An overview of how to get started with different types of memory.
|
- `Getting Started <./memory/getting_started.html>`_: An overview of different types of memory.
- `How-To Guides <./memory/how_to_guides.html>`_: A collection of how-to guides. These highlight different types of memory, as well as how to use memory in chains.
@@ -28,6 +27,7 @@ The following sections of documentation are provided:
:maxdepth: 1
:caption: Memory
:name: Memory
:hidden:
./memory/getting_started.ipynb
./memory/getting_started.html
./memory/how_to_guides.rst

@@ -11,38 +11,28 @@ but we have individual pages for each model type.
The pages contain more detailed "how-to" guides for working with that model,
as well as a list of different model providers.
**LLMs**
|
- `Getting Started <./models/getting_started.html>`_: An overview of models.
Large Language Models (LLMs) are the first type of models we cover.
These models take a text string as input, and return a text string as output.
Model Types
-----------
**Chat Models**
- `LLMs <./models/llms.html>`_: **Large Language Models (LLMs)** take a text string as input and return a text string as output.
Chat Models are the second type of models we cover.
These models are usually backed by a language model, but their APIs are more structured.
Specifically, these models take a list of Chat Messages as input, and return a Chat Message.
- `Chat Models <./models/chat.html>`_: **Chat Models** are usually backed by a language model, but their APIs are more structured.
Specifically, these models take a list of Chat Messages as input, and return a Chat Message.
**Text Embedding Models**
- `Text Embedding Models <./models/text_embedding.html>`_: **Text embedding models** take text as input and return a list of floats.
The third type of models we cover are text embedding models.
These models take text as input and return a list of floats.
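The three model types differ mainly in their input/output signatures, which a few illustrative stubs make concrete (these are not real model calls):

```python
def llm(text: str) -> str:
    # LLMs: text string in, text string out.
    return f"completion of: {text}"

def chat_model(messages: list) -> tuple:
    # Chat Models: a list of (role, content) messages in, one message out.
    last_role, last_content = messages[-1]
    return ("ai", f"reply to: {last_content}")

def embedding_model(text: str) -> list:
    # Text embedding models: text in, list of floats out (trivial stub here).
    return [float(len(text)), 0.0, 1.0]
```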
Getting Started
---------------
.. toctree::
:maxdepth: 1
./models/getting_started.ipynb
Go Deeper
---------
.. toctree::
:maxdepth: 1
:caption: Models
:name: models
:hidden:
./models/getting_started.html
./models/llms.rst
./models/chat.rst
./models/text_embedding.rst

@@ -6,53 +6,42 @@ Prompts
The new way of programming models is through prompts.
A "prompt" refers to the input to the model.
This input is rarely hard coded, but rather is often constructed from multiple components.
A PromptTemplate is responsible for the construction of this input.
A **prompt** refers to the input to the model.
This input is often constructed from multiple components.
A **PromptTemplate** is responsible for the construction of this input.
LangChain provides several classes and functions to make constructing and working with prompts easy.
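At its core, a prompt template is a string with named slots that get filled in at call time. A minimal sketch (the real `PromptTemplate` adds validation, partials, and more):

```python
class SimplePromptTemplate:
    """Sketch of a prompt template (illustrative, not the library class)."""

    def __init__(self, template):
        self.template = template

    def format(self, **kwargs):
        # Fill the named slots in the template string.
        return self.template.format(**kwargs)

prompt = SimplePromptTemplate("Tell me a {adjective} joke about {topic}.")
text = prompt.format(adjective="funny", topic="chickens")
```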
This section of documentation is split into four sections:
|
- `Getting Started <./prompts/getting_started.html>`_: An overview of prompts.
**LLM Prompt Templates**
How to use PromptTemplates to prompt Language Models.
- `LLM Prompt Templates <./prompts/prompt_templates.html>`_: How to use PromptTemplates to prompt Language Models.
**Chat Prompt Templates**
How to use PromptTemplates to prompt Chat Models.
- `Chat Prompt Templates <./prompts/chat_prompt_template.html>`_: How to use PromptTemplates to prompt Chat Models.
**Example Selectors**
Often times it is useful to include examples in prompts.
These examples can be hardcoded, but it is often more powerful if they are dynamically selected.
This section goes over example selection.
- `Example Selectors <./prompts/example_selectors.html>`_: It is often useful to include examples in prompts.
These examples can be dynamically selected. This section goes over example selection.
**Output Parsers**
- `Output Parsers <./prompts/output_parsers.html>`_: Language models (and Chat Models) output text.
But many times you may want to get more structured information. This is where output parsers come in.
Output Parsers:
Language models (and Chat Models) output text.
But many times you may want to get more structured information than just text back.
This is where output parsers come in.
Output Parsers are responsible for (1) instructing the model how output should be formatted,
(2) parsing output into the desired formatting (including retrying if necessary).
- instruct the model how output should be formatted,
- parse output into the desired formatting (including retrying if necessary).
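Both responsibilities can be sketched together: format instructions that would be appended to the prompt, and a parser that retries once if the model's output is malformed. The `retry` callback is a hypothetical stand-in for re-asking the model:

```python
import json

# (1) Instructions the parser would have the prompt include.
FORMAT_INSTRUCTIONS = 'Reply with a JSON object like {"answer": "..."}.'

def parse(model_output, retry=None):
    # (2) Parse the output into the desired format, retrying if necessary.
    try:
        return json.loads(model_output)
    except json.JSONDecodeError:
        if retry is not None:
            return json.loads(retry(model_output))
        raise
```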
Getting Started
---------------
.. toctree::
:maxdepth: 1
./prompts/getting_started.ipynb
Go Deeper
---------
.. toctree::
:maxdepth: 1
:caption: Prompts
:name: prompts
:hidden:
./prompts/getting_started.html
./prompts/prompt_templates.rst
./prompts/chat_prompt_template.ipynb
./prompts/chat_prompt_template.html
./prompts/example_selectors.rst
./prompts/output_parsers.rst

@@ -1,19 +1,18 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "6488fdaf",
"metadata": {},
"source": [
"# Chat Prompt Template\n",
"# Chat Prompt Templates\n",
"\n",
"[Chat Models](../models/chat.rst) takes a list of chat messages as input - this list commonly referred to as a prompt.\n",
"These chat messages differ from raw string (which you would pass into a [LLM](../models/llms.rst) model) in that every message is associated with a role.\n",
"[Chat Models](../models/chat.rst) take a list of `chat messages` as input - this list is commonly referred to as a `prompt`.\n",
"These chat messages differ from a raw string (which you would pass into a [LLM](../models/llms.rst) model) in that every message is associated with a `role`.\n",
"\n",
"For example, in the OpenAI [Chat Completion API](https://platform.openai.com/docs/guides/chat/introduction), a chat message can be associated with the AI, human or system role. The model is supposed to follow instructions from the system chat message more closely.\n",
"\n",
"Therefore, LangChain provides several related prompt templates to make constructing and working with prompts easily. You are encouraged to use these chat related prompt templates instead of `PromptTemplate` when querying chat models to fully exploit the potential of underlying chat model.\n"
"LangChain provides several prompt templates to make constructing and working with prompts easy. You are encouraged to use these chat-related prompt templates instead of `PromptTemplate` when querying chat models, to fully exploit the potential of the underlying chat model.\n"
]
},
{
@@ -126,7 +125,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "0899f681-012e-4687-a754-199a9a396738",
"metadata": {
@@ -364,7 +362,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.10"
"version": "3.10.6"
}
},
"nbformat": 4,
