core[patch]: Document agent schema (#23194)

* Document agent schema
* Refer folks to langgraph for more information on how to create agents.
Eugene Yurtsev 2 months ago committed by GitHub
parent 255ad39ae3
commit 1fcf875fe3

@@ -1,32 +1,25 @@
-"""
-**Agent** is a class that uses an LLM to choose a sequence of actions to take.
+"""Schema definitions for representing agent actions, observations, and return values.
-In Chains, a sequence of actions is hardcoded. In Agents,
-a language model is used as a reasoning engine to determine which actions
-to take and in which order.
+**ATTENTION** The schema definitions are provided for backwards compatibility.
-Agents select and use **Tools** and **Toolkits** for actions.
+    New agents should be built using the langgraph library
+    (https://github.com/langchain-ai/langgraph), which provides a simpler
+    and more flexible way to define agents.
+    Please see the migration guide for information on how to migrate existing
+    agents to modern langgraph agents:
+    https://python.langchain.com/v0.2/docs/how_to/migrate_agent/
-**Class hierarchy:**
+Agents use language models to choose a sequence of actions to take.
-.. code-block::
+A basic agent works in the following manner:
-    BaseSingleActionAgent --> LLMSingleActionAgent
-                              OpenAIFunctionsAgent
-                              XMLAgent
-                              Agent --> <name>Agent  # Examples: ZeroShotAgent, ChatAgent
-    BaseMultiActionAgent  --> OpenAIMultiFunctionsAgent
-**Main helpers:**
-.. code-block::
-    AgentType, AgentExecutor, AgentOutputParser, AgentExecutorIterator,
-    AgentAction, AgentFinish, AgentStep
+1. Given a prompt an agent uses an LLM to request an action to take (e.g., a tool to run).
+2. The agent executes the action (e.g., runs the tool), and receives an observation.
+3. The agent returns the observation to the LLM, which can then be used to generate the next action.
+4. When the agent reaches a stopping condition, it returns a final return value.
+The schemas for the agents themselves are defined in langchain.agents.agent.
 """  # noqa: E501
 from __future__ import annotations
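
The four numbered steps in the new module docstring map onto the AgentAction and AgentFinish schemas defined in this module. Below is a minimal sketch of that loop, with a hypothetical choose_next_step function standing in for the LLM call and a toy tools dict standing in for real tools:

# A minimal sketch of the four-step loop from the new module docstring.
# choose_next_step is a hypothetical stand-in for the LLM reasoning call;
# only AgentAction and AgentFinish come from this module.
from typing import List, Union

from langchain_core.agents import AgentAction, AgentFinish

tools = {"search": lambda q: f"results for {q!r}"}  # toy tool registry


def choose_next_step(
    prompt: str, observations: List[str]
) -> Union[AgentAction, AgentFinish]:
    # Step 1: ask the "LLM" for the next action (hard-coded here).
    if not observations:
        return AgentAction(tool="search", tool_input=prompt, log="calling search")
    # Step 4: stopping condition reached, return a final value.
    return AgentFinish(return_values={"output": observations[-1]}, log="done")


def run_agent(prompt: str) -> dict:
    observations: List[str] = []
    while True:
        step = choose_next_step(prompt, observations)
        if isinstance(step, AgentFinish):
            return step.return_values
        # Step 2: execute the requested tool and collect an observation.
        observation = tools[step.tool](step.tool_input)
        # Step 3: the observation is what would be fed back to the LLM.
        observations.append(observation)


print(run_agent("weather in SF"))  # {'output': "results for 'weather in SF'"}

In practice the reasoning step is a real LLM call and the loop is driven by an AgentExecutor or a langgraph graph; the sketch only shows how the schema objects flow.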
@@ -43,7 +36,11 @@ from langchain_core.messages import (
 class AgentAction(Serializable):
-    """A full description of an action for an ActionAgent to execute."""
+    """Represents a request to execute an action by an agent.
+    The action consists of the name of the tool to execute and the input to pass
+    to the tool. The log is used to pass along extra information about the action.
+    """
     tool: str
     """The name of the Tool to execute."""
@@ -59,10 +56,10 @@ class AgentAction(Serializable):
     before the tool/tool_input)."""
     type: Literal["AgentAction"] = "AgentAction"
-    # Override init to support instantiation by position for backward compat.
     def __init__(
         self, tool: str, tool_input: Union[str, dict], log: str, **kwargs: Any
     ):
+        """Override init to support instantiation by position for backward compat."""
         super().__init__(tool=tool, tool_input=tool_input, log=log, **kwargs)
     @classmethod
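
The __init__ override shown above keeps positional construction working for older callers. A small sketch under that assumption (values illustrative):

from langchain_core.agents import AgentAction

# Positional and keyword construction produce equivalent actions.
positional = AgentAction("search", "weather in SF", "Action: search")
keyword = AgentAction(tool="search", tool_input="weather in SF", log="Action: search")
assert positional == keyword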
@@ -82,6 +79,13 @@
 class AgentActionMessageLog(AgentAction):
+    """A representation of an action to be executed by an agent.
+    This is similar to AgentAction, but includes a message log consisting of
+    chat messages. This is useful when working with ChatModels, and is used
+    to reconstruct conversation history from the agent's perspective.
+    """
     message_log: Sequence[BaseMessage]
     """Similar to log, this can be used to pass along extra
     information about what exact messages were predicted by the LLM
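
To illustrate the message_log field described in the new docstring, the raw chat message that produced a tool call can be stored next to the parsed action; the message content below is made up:

from langchain_core.agents import AgentActionMessageLog
from langchain_core.messages import AIMessage

# Keep the original AIMessage so the conversation can be reconstructed
# from the agent's perspective later on.
raw = AIMessage(content="I should call the search tool on 'weather in SF'.")
action = AgentActionMessageLog(
    tool="search",
    tool_input="weather in SF",
    log=raw.content,
    message_log=[raw],
)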
@@ -111,7 +115,10 @@ class AgentStep(Serializable):
 class AgentFinish(Serializable):
-    """The final return value of an ActionAgent."""
+    """The final return value of an ActionAgent.
+    Agents return an AgentFinish when they have reached a stopping condition.
+    """
     return_values: dict
     """Dictionary of return values."""
