@@ -5216,7 +5216,15 @@
 ]),
 'definitions': dict({
 'AIMessage': dict({
-'description': 'Message from an AI.',
+'description': '''
+Message from an AI.
+
+AIMessage is returned from a chat model as a response to a prompt.
+
+This message represents the output of the model and consists of both
+the raw output as returned by the model together standardized fields
+(e.g., tool calls, usage metadata) added by the LangChain framework.
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -5404,7 +5412,16 @@
 'type': 'object',
 }),
 'FunctionMessage': dict({
-'description': 'Message for passing the result of executing a function back to a model.',
+'description': '''
+Message for passing the result of executing a tool back to a model.
+
+FunctionMessage are an older version of the ToolMessage schema, and
+do not contain the tool_call_id field.
+
+The tool_call_id field is used to associate the tool call request with the
+tool call response. This is useful in situations where a chat model is able
+to request multiple tool calls in parallel.
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -5460,7 +5477,30 @@
 'type': 'object',
 }),
 'HumanMessage': dict({
-'description': 'Message from a human.',
+'description': '''
+Message from a human.
+
+HumanMessages are messages that are passed in from a human to the model.
+
+Example:
+
+.. code-block:: python
+
+from langchain_core.messages import HumanMessage, SystemMessage
+
+messages = [
+SystemMessage(
+content="You are a helpful assistant! Your name is Bob."
+),
+HumanMessage(
+content="What is your name?"
+)
+]
+
+# Instantiate a chat model and invoke it with the messages
+model = ...
+print(model.invoke(messages))
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -5571,8 +5611,28 @@
 }),
 'SystemMessage': dict({
 'description': '''
-Message for priming AI behavior, usually passed in as the first of a sequence
+Message for priming AI behavior.
+
+The system message is usually passed in as the first of a sequence
 of input messages.
+
+Example:
+
+.. code-block:: python
+
+from langchain_core.messages import HumanMessage, SystemMessage
+
+messages = [
+SystemMessage(
+content="You are a helpful assistant! Your name is Bob."
+),
+HumanMessage(
+content="What is your name?"
+)
+]
+
+# Define a chat model and invoke it with the messages
+print(model.invoke(messages))
 ''',
 'properties': dict({
 'additional_kwargs': dict({
@@ -5651,7 +5711,24 @@
 'type': 'object',
 }),
 'ToolMessage': dict({
-'description': 'Message for passing the result of executing a tool back to a model.',
+'description': '''
+Message for passing the result of executing a tool back to a model.
+
+ToolMessages contain the result of a tool invocation. Typically, the result
+is encoded inside the `content` field.
+
+Example: A TooMessage representing a result of 42 from a tool call with id
+
+.. code-block:: python
+
+from langchain_core.messages import ToolMessage
+
+ToolMessage(content='42', tool_call_id='call_Jja7J89XsjrOLA5r!MEOW!SL')
+
+The tool_call_id field is used to associate the tool call request with the
+tool call response. This is useful in situations where a chat model is able
+to request multiple tool calls in parallel.
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -5777,7 +5854,15 @@
 ]),
 'definitions': dict({
 'AIMessage': dict({
-'description': 'Message from an AI.',
+'description': '''
+Message from an AI.
+
+AIMessage is returned from a chat model as a response to a prompt.
+
+This message represents the output of the model and consists of both
+the raw output as returned by the model together standardized fields
+(e.g., tool calls, usage metadata) added by the LangChain framework.
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -5965,7 +6050,16 @@
 'type': 'object',
 }),
 'FunctionMessage': dict({
-'description': 'Message for passing the result of executing a function back to a model.',
+'description': '''
+Message for passing the result of executing a tool back to a model.
+
+FunctionMessage are an older version of the ToolMessage schema, and
+do not contain the tool_call_id field.
+
+The tool_call_id field is used to associate the tool call request with the
+tool call response. This is useful in situations where a chat model is able
+to request multiple tool calls in parallel.
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -6021,7 +6115,30 @@
 'type': 'object',
 }),
 'HumanMessage': dict({
-'description': 'Message from a human.',
+'description': '''
+Message from a human.
+
+HumanMessages are messages that are passed in from a human to the model.
+
+Example:
+
+.. code-block:: python
+
+from langchain_core.messages import HumanMessage, SystemMessage
+
+messages = [
+SystemMessage(
+content="You are a helpful assistant! Your name is Bob."
+),
+HumanMessage(
+content="What is your name?"
+)
+]
+
+# Instantiate a chat model and invoke it with the messages
+model = ...
+print(model.invoke(messages))
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -6132,8 +6249,28 @@
 }),
 'SystemMessage': dict({
 'description': '''
-Message for priming AI behavior, usually passed in as the first of a sequence
+Message for priming AI behavior.
+
+The system message is usually passed in as the first of a sequence
 of input messages.
+
+Example:
+
+.. code-block:: python
+
+from langchain_core.messages import HumanMessage, SystemMessage
+
+messages = [
+SystemMessage(
+content="You are a helpful assistant! Your name is Bob."
+),
+HumanMessage(
+content="What is your name?"
+)
+]
+
+# Define a chat model and invoke it with the messages
+print(model.invoke(messages))
 ''',
 'properties': dict({
 'additional_kwargs': dict({
@@ -6212,7 +6349,24 @@
 'type': 'object',
 }),
 'ToolMessage': dict({
-'description': 'Message for passing the result of executing a tool back to a model.',
+'description': '''
+Message for passing the result of executing a tool back to a model.
+
+ToolMessages contain the result of a tool invocation. Typically, the result
+is encoded inside the `content` field.
+
+Example: A TooMessage representing a result of 42 from a tool call with id
+
+.. code-block:: python
+
+from langchain_core.messages import ToolMessage
+
+ToolMessage(content='42', tool_call_id='call_Jja7J89XsjrOLA5r!MEOW!SL')
+
+The tool_call_id field is used to associate the tool call request with the
+tool call response. This is useful in situations where a chat model is able
+to request multiple tool calls in parallel.
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -6322,7 +6476,15 @@
 ]),
 'definitions': dict({
 'AIMessage': dict({
-'description': 'Message from an AI.',
+'description': '''
+Message from an AI.
+
+AIMessage is returned from a chat model as a response to a prompt.
+
+This message represents the output of the model and consists of both
+the raw output as returned by the model together standardized fields
+(e.g., tool calls, usage metadata) added by the LangChain framework.
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -6463,7 +6625,16 @@
 'type': 'object',
 }),
 'FunctionMessage': dict({
-'description': 'Message for passing the result of executing a function back to a model.',
+'description': '''
+Message for passing the result of executing a tool back to a model.
+
+FunctionMessage are an older version of the ToolMessage schema, and
+do not contain the tool_call_id field.
+
+The tool_call_id field is used to associate the tool call request with the
+tool call response. This is useful in situations where a chat model is able
+to request multiple tool calls in parallel.
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -6519,7 +6690,30 @@
 'type': 'object',
 }),
 'HumanMessage': dict({
-'description': 'Message from a human.',
+'description': '''
+Message from a human.
+
+HumanMessages are messages that are passed in from a human to the model.
+
+Example:
+
+.. code-block:: python
+
+from langchain_core.messages import HumanMessage, SystemMessage
+
+messages = [
+SystemMessage(
+content="You are a helpful assistant! Your name is Bob."
+),
+HumanMessage(
+content="What is your name?"
+)
+]
+
+# Instantiate a chat model and invoke it with the messages
+model = ...
+print(model.invoke(messages))
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -6608,8 +6802,28 @@
 }),
 'SystemMessage': dict({
 'description': '''
-Message for priming AI behavior, usually passed in as the first of a sequence
+Message for priming AI behavior.
+
+The system message is usually passed in as the first of a sequence
 of input messages.
+
+Example:
+
+.. code-block:: python
+
+from langchain_core.messages import HumanMessage, SystemMessage
+
+messages = [
+SystemMessage(
+content="You are a helpful assistant! Your name is Bob."
+),
+HumanMessage(
+content="What is your name?"
+)
+]
+
+# Define a chat model and invoke it with the messages
+print(model.invoke(messages))
 ''',
 'properties': dict({
 'additional_kwargs': dict({
@@ -6688,7 +6902,24 @@
 'type': 'object',
 }),
 'ToolMessage': dict({
-'description': 'Message for passing the result of executing a tool back to a model.',
+'description': '''
+Message for passing the result of executing a tool back to a model.
+
+ToolMessages contain the result of a tool invocation. Typically, the result
+is encoded inside the `content` field.
+
+Example: A TooMessage representing a result of 42 from a tool call with id
+
+.. code-block:: python
+
+from langchain_core.messages import ToolMessage
+
+ToolMessage(content='42', tool_call_id='call_Jja7J89XsjrOLA5r!MEOW!SL')
+
+The tool_call_id field is used to associate the tool call request with the
+tool call response. This is useful in situations where a chat model is able
+to request multiple tool calls in parallel.
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -6786,7 +7017,15 @@
 ]),
 'definitions': dict({
 'AIMessage': dict({
-'description': 'Message from an AI.',
+'description': '''
+Message from an AI.
+
+AIMessage is returned from a chat model as a response to a prompt.
+
+This message represents the output of the model and consists of both
+the raw output as returned by the model together standardized fields
+(e.g., tool calls, usage metadata) added by the LangChain framework.
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -6974,7 +7213,16 @@
 'type': 'object',
 }),
 'FunctionMessage': dict({
-'description': 'Message for passing the result of executing a function back to a model.',
+'description': '''
+Message for passing the result of executing a tool back to a model.
+
+FunctionMessage are an older version of the ToolMessage schema, and
+do not contain the tool_call_id field.
+
+The tool_call_id field is used to associate the tool call request with the
+tool call response. This is useful in situations where a chat model is able
+to request multiple tool calls in parallel.
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -7030,7 +7278,30 @@
 'type': 'object',
 }),
 'HumanMessage': dict({
-'description': 'Message from a human.',
+'description': '''
+Message from a human.
+
+HumanMessages are messages that are passed in from a human to the model.
+
+Example:
+
+.. code-block:: python
+
+from langchain_core.messages import HumanMessage, SystemMessage
+
+messages = [
+SystemMessage(
+content="You are a helpful assistant! Your name is Bob."
+),
+HumanMessage(
+content="What is your name?"
+)
+]
+
+# Instantiate a chat model and invoke it with the messages
+model = ...
+print(model.invoke(messages))
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -7141,8 +7412,28 @@
 }),
 'SystemMessage': dict({
 'description': '''
-Message for priming AI behavior, usually passed in as the first of a sequence
+Message for priming AI behavior.
+
+The system message is usually passed in as the first of a sequence
 of input messages.
+
+Example:
+
+.. code-block:: python
+
+from langchain_core.messages import HumanMessage, SystemMessage
+
+messages = [
+SystemMessage(
+content="You are a helpful assistant! Your name is Bob."
+),
+HumanMessage(
+content="What is your name?"
+)
+]
+
+# Define a chat model and invoke it with the messages
+print(model.invoke(messages))
 ''',
 'properties': dict({
 'additional_kwargs': dict({
@@ -7221,7 +7512,24 @@
 'type': 'object',
 }),
 'ToolMessage': dict({
-'description': 'Message for passing the result of executing a tool back to a model.',
+'description': '''
+Message for passing the result of executing a tool back to a model.
+
+ToolMessages contain the result of a tool invocation. Typically, the result
+is encoded inside the `content` field.
+
+Example: A TooMessage representing a result of 42 from a tool call with id
+
+.. code-block:: python
+
+from langchain_core.messages import ToolMessage
+
+ToolMessage(content='42', tool_call_id='call_Jja7J89XsjrOLA5r!MEOW!SL')
+
+The tool_call_id field is used to associate the tool call request with the
+tool call response. This is useful in situations where a chat model is able
+to request multiple tool calls in parallel.
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -7319,7 +7627,15 @@
 ]),
 'definitions': dict({
 'AIMessage': dict({
-'description': 'Message from an AI.',
+'description': '''
+Message from an AI.
+
+AIMessage is returned from a chat model as a response to a prompt.
+
+This message represents the output of the model and consists of both
+the raw output as returned by the model together standardized fields
+(e.g., tool calls, usage metadata) added by the LangChain framework.
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -7507,7 +7823,16 @@
 'type': 'object',
 }),
 'FunctionMessage': dict({
-'description': 'Message for passing the result of executing a function back to a model.',
+'description': '''
+Message for passing the result of executing a tool back to a model.
+
+FunctionMessage are an older version of the ToolMessage schema, and
+do not contain the tool_call_id field.
+
+The tool_call_id field is used to associate the tool call request with the
+tool call response. This is useful in situations where a chat model is able
+to request multiple tool calls in parallel.
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -7563,7 +7888,30 @@
 'type': 'object',
 }),
 'HumanMessage': dict({
-'description': 'Message from a human.',
+'description': '''
+Message from a human.
+
+HumanMessages are messages that are passed in from a human to the model.
+
+Example:
+
+.. code-block:: python
+
+from langchain_core.messages import HumanMessage, SystemMessage
+
+messages = [
+SystemMessage(
+content="You are a helpful assistant! Your name is Bob."
+),
+HumanMessage(
+content="What is your name?"
+)
+]
+
+# Instantiate a chat model and invoke it with the messages
+model = ...
+print(model.invoke(messages))
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -7674,8 +8022,28 @@
 }),
 'SystemMessage': dict({
 'description': '''
-Message for priming AI behavior, usually passed in as the first of a sequence
+Message for priming AI behavior.
+
+The system message is usually passed in as the first of a sequence
 of input messages.
+
+Example:
+
+.. code-block:: python
+
+from langchain_core.messages import HumanMessage, SystemMessage
+
+messages = [
+SystemMessage(
+content="You are a helpful assistant! Your name is Bob."
+),
+HumanMessage(
+content="What is your name?"
+)
+]
+
+# Define a chat model and invoke it with the messages
+print(model.invoke(messages))
 ''',
 'properties': dict({
 'additional_kwargs': dict({
@@ -7754,7 +8122,24 @@
 'type': 'object',
 }),
 'ToolMessage': dict({
-'description': 'Message for passing the result of executing a tool back to a model.',
+'description': '''
+Message for passing the result of executing a tool back to a model.
+
+ToolMessages contain the result of a tool invocation. Typically, the result
+is encoded inside the `content` field.
+
+Example: A TooMessage representing a result of 42 from a tool call with id
+
+.. code-block:: python
+
+from langchain_core.messages import ToolMessage
+
+ToolMessage(content='42', tool_call_id='call_Jja7J89XsjrOLA5r!MEOW!SL')
+
+The tool_call_id field is used to associate the tool call request with the
+tool call response. This is useful in situations where a chat model is able
+to request multiple tool calls in parallel.
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -7844,7 +8229,15 @@
 dict({
 'definitions': dict({
 'AIMessage': dict({
-'description': 'Message from an AI.',
+'description': '''
+Message from an AI.
+
+AIMessage is returned from a chat model as a response to a prompt.
+
+This message represents the output of the model and consists of both
+the raw output as returned by the model together standardized fields
+(e.g., tool calls, usage metadata) added by the LangChain framework.
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -8032,7 +8425,16 @@
 'type': 'object',
 }),
 'FunctionMessage': dict({
-'description': 'Message for passing the result of executing a function back to a model.',
+'description': '''
+Message for passing the result of executing a tool back to a model.
+
+FunctionMessage are an older version of the ToolMessage schema, and
+do not contain the tool_call_id field.
+
+The tool_call_id field is used to associate the tool call request with the
+tool call response. This is useful in situations where a chat model is able
+to request multiple tool calls in parallel.
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -8088,7 +8490,30 @@
 'type': 'object',
 }),
 'HumanMessage': dict({
-'description': 'Message from a human.',
+'description': '''
+Message from a human.
+
+HumanMessages are messages that are passed in from a human to the model.
+
+Example:
+
+.. code-block:: python
+
+from langchain_core.messages import HumanMessage, SystemMessage
+
+messages = [
+SystemMessage(
+content="You are a helpful assistant! Your name is Bob."
+),
+HumanMessage(
+content="What is your name?"
+)
+]
+
+# Instantiate a chat model and invoke it with the messages
+model = ...
+print(model.invoke(messages))
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -8210,8 +8635,28 @@
 }),
 'SystemMessage': dict({
 'description': '''
-Message for priming AI behavior, usually passed in as the first of a sequence
+Message for priming AI behavior.
+
+The system message is usually passed in as the first of a sequence
 of input messages.
+
+Example:
+
+.. code-block:: python
+
+from langchain_core.messages import HumanMessage, SystemMessage
+
+messages = [
+SystemMessage(
+content="You are a helpful assistant! Your name is Bob."
+),
+HumanMessage(
+content="What is your name?"
+)
+]
+
+# Define a chat model and invoke it with the messages
+print(model.invoke(messages))
 ''',
 'properties': dict({
 'additional_kwargs': dict({
@@ -8290,7 +8735,24 @@
 'type': 'object',
 }),
 'ToolMessage': dict({
-'description': 'Message for passing the result of executing a tool back to a model.',
+'description': '''
+Message for passing the result of executing a tool back to a model.
+
+ToolMessages contain the result of a tool invocation. Typically, the result
+is encoded inside the `content` field.
+
+Example: A TooMessage representing a result of 42 from a tool call with id
+
+.. code-block:: python
+
+from langchain_core.messages import ToolMessage
+
+ToolMessage(content='42', tool_call_id='call_Jja7J89XsjrOLA5r!MEOW!SL')
+
+The tool_call_id field is used to associate the tool call request with the
+tool call response. This is useful in situations where a chat model is able
+to request multiple tool calls in parallel.
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -8407,7 +8869,15 @@
 ]),
 'definitions': dict({
 'AIMessage': dict({
-'description': 'Message from an AI.',
+'description': '''
+Message from an AI.
+
+AIMessage is returned from a chat model as a response to a prompt.
+
+This message represents the output of the model and consists of both
+the raw output as returned by the model together standardized fields
+(e.g., tool calls, usage metadata) added by the LangChain framework.
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -8548,7 +9018,16 @@
 'type': 'object',
 }),
 'FunctionMessage': dict({
-'description': 'Message for passing the result of executing a function back to a model.',
+'description': '''
+Message for passing the result of executing a tool back to a model.
+
+FunctionMessage are an older version of the ToolMessage schema, and
+do not contain the tool_call_id field.
+
+The tool_call_id field is used to associate the tool call request with the
+tool call response. This is useful in situations where a chat model is able
+to request multiple tool calls in parallel.
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -8604,7 +9083,30 @@
 'type': 'object',
 }),
 'HumanMessage': dict({
-'description': 'Message from a human.',
+'description': '''
+Message from a human.
+
+HumanMessages are messages that are passed in from a human to the model.
+
+Example:
+
+.. code-block:: python
+
+from langchain_core.messages import HumanMessage, SystemMessage
+
+messages = [
+SystemMessage(
+content="You are a helpful assistant! Your name is Bob."
+),
+HumanMessage(
+content="What is your name?"
+)
+]
+
+# Instantiate a chat model and invoke it with the messages
+model = ...
+print(model.invoke(messages))
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',
@@ -8693,8 +9195,28 @@
 }),
 'SystemMessage': dict({
 'description': '''
-Message for priming AI behavior, usually passed in as the first of a sequence
+Message for priming AI behavior.
+
+The system message is usually passed in as the first of a sequence
 of input messages.
+
+Example:
+
+.. code-block:: python
+
+from langchain_core.messages import HumanMessage, SystemMessage
+
+messages = [
+SystemMessage(
+content="You are a helpful assistant! Your name is Bob."
+),
+HumanMessage(
+content="What is your name?"
+)
+]
+
+# Define a chat model and invoke it with the messages
+print(model.invoke(messages))
 ''',
 'properties': dict({
 'additional_kwargs': dict({
@@ -8773,7 +9295,24 @@
 'type': 'object',
 }),
 'ToolMessage': dict({
-'description': 'Message for passing the result of executing a tool back to a model.',
+'description': '''
+Message for passing the result of executing a tool back to a model.
+
+ToolMessages contain the result of a tool invocation. Typically, the result
+is encoded inside the `content` field.
+
+Example: A TooMessage representing a result of 42 from a tool call with id
+
+.. code-block:: python
+
+from langchain_core.messages import ToolMessage
+
+ToolMessage(content='42', tool_call_id='call_Jja7J89XsjrOLA5r!MEOW!SL')
+
+The tool_call_id field is used to associate the tool call request with the
+tool call response. This is useful in situations where a chat model is able
+to request multiple tool calls in parallel.
+''',
 'properties': dict({
 'additional_kwargs': dict({
 'title': 'Additional Kwargs',