docs: fix trim_messages code blocks (#23271)

Bagatur 2 weeks ago committed by GitHub
parent 86326269a1
commit 9eda8f2fe8

@@ -585,163 +585,137 @@ def trim_messages(
    return count
First 30 tokens, not allowing partial messages:

.. code-block:: python

    trim_messages(messages, max_tokens=30, token_counter=dummy_token_counter, strategy="first")

.. code-block:: python

    [
        SystemMessage("This is a 4 token text. The full message is 10 tokens."),
        HumanMessage("This is a 4 token text. The full message is 10 tokens.", id="first"),
    ]
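The arithmetic behind this result can be checked with a stand-in for ``dummy_token_counter``. This is only a sketch: the 3-token message prefix/suffix and 4-tokens-per-block figures are assumptions read off the message texts above, and ``count_tokens`` is a hypothetical helper, not part of the library.

```python
def count_tokens(contents):
    # Assumed accounting: 3 prefix + 3 suffix tokens per message, plus
    # 4 tokens per string content (or per 4-token block in a list).
    total = 0
    for content in contents:
        blocks = 1 if isinstance(content, str) else len(content)
        total += 3 + 4 * blocks + 3
    return total

# SystemMessage + first HumanMessage: 10 + 10 = 20 tokens, inside the
# 30-token budget; the two-block AIMessage would add 3 + 2*4 + 3 = 14
# tokens and exceed it, so it is dropped when partials are not allowed.
```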
First 30 tokens, allowing partial messages:

.. code-block:: python

    trim_messages(
        messages,
        max_tokens=30,
        token_counter=dummy_token_counter,
        strategy="first",
        allow_partial=True,
    )

.. code-block:: python

    [
        SystemMessage("This is a 4 token text. The full message is 10 tokens."),
        HumanMessage("This is a 4 token text. The full message is 10 tokens.", id="first"),
        AIMessage([{"type": "text", "text": "This is the FIRST 4 token block."}], id="second"),
    ]
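With partial messages allowed, 30 - 20 = 10 tokens remain for the AIMessage, enough for its overhead plus exactly one 4-token block. A sketch of that step, where ``take_partial_blocks`` is a hypothetical helper using the same assumed per-message overhead as above:

```python
def take_partial_blocks(blocks, remaining_tokens):
    # Keep the longest prefix of 4-token content blocks that fits once
    # the assumed 3 + 3 per-message prefix/suffix overhead is paid.
    overhead = 3 + 3
    budget = remaining_tokens - overhead
    return blocks[: max(budget // 4, 0)]

blocks = [
    "This is the FIRST 4 token block.",
    "This is the SECOND 4 token block.",
]
```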
First 30 tokens, allowing partial messages, required to end on a HumanMessage:

.. code-block:: python

    trim_messages(
        messages,
        max_tokens=30,
        token_counter=dummy_token_counter,
        strategy="first",
        allow_partial=True,
        end_on="human",
    )

.. code-block:: python

    [
        SystemMessage("This is a 4 token text. The full message is 10 tokens."),
        HumanMessage("This is a 4 token text. The full message is 10 tokens.", id="first"),
    ]
Last 30 tokens, including system message, not allowing partial messages:

.. code-block:: python

    trim_messages(messages, max_tokens=30, include_system=True, token_counter=dummy_token_counter, strategy="last")

.. code-block:: python

    [
        SystemMessage("This is a 4 token text. The full message is 10 tokens."),
        HumanMessage("This is a 4 token text. The full message is 10 tokens.", id="third"),
        AIMessage("This is a 4 token text. The full message is 10 tokens.", id="fourth"),
    ]
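The ``strategy="last"`` examples can be pictured as a greedy backwards scan. The sketch below is an illustration under a flat 10-tokens-per-message cost, not the library's actual implementation:

```python
def trim_last(messages, max_tokens, count_one, include_system=False):
    # Simplified sketch of strategy="last": optionally reserve budget
    # for the system message, then keep whole messages from the end of
    # the list while they still fit.
    system = None
    if include_system and messages and messages[0][0] == "system":
        system, messages = messages[0], messages[1:]
        max_tokens -= count_one(system)
    kept, used = [], 0
    for msg in reversed(messages):
        cost = count_one(msg)
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    kept.reverse()
    return ([system] if system else []) + kept

msgs = [("system", "s"), ("human", "first"), ("ai", "second"),
        ("human", "third"), ("ai", "fourth")]
# With a 10-token system message, 20 tokens remain: the last two
# messages fit, the third-from-last does not.
result = trim_last(msgs, 30, lambda m: 10, include_system=True)
```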
Last 40 tokens, including system message, allowing partial messages:
.. code-block:: python

    trim_messages(
        messages,
        max_tokens=40,
        token_counter=dummy_token_counter,
        strategy="last",
        allow_partial=True,
        include_system=True,
    )

.. code-block:: python

    [
        SystemMessage("This is a 4 token text. The full message is 10 tokens."),
        AIMessage(
            [{"type": "text", "text": "This is the FIRST 4 token block."}],
            id="second",
        ),
        HumanMessage("This is a 4 token text. The full message is 10 tokens.", id="third"),
        AIMessage("This is a 4 token text. The full message is 10 tokens.", id="fourth"),
    ]
Last 30 tokens, including system message, allowing partial messages, end on HumanMessage:

.. code-block:: python

    trim_messages(
        messages,
        max_tokens=30,
        token_counter=dummy_token_counter,
        strategy="last",
        end_on="human",
        include_system=True,
        allow_partial=True,
    )

.. code-block:: python

    [
        SystemMessage("This is a 4 token text. The full message is 10 tokens."),
        AIMessage(
            [{"type": "text", "text": "This is the FIRST 4 token block."}],
            id="second",
        ),
        HumanMessage("This is a 4 token text. The full message is 10 tokens.", id="third"),
    ]
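``end_on="human"`` can be pictured as a final pass that trims the tail of the already-budgeted list. An illustrative helper, not the library's code:

```python
def apply_end_on(messages, end_type):
    # Drop messages from the end until the last remaining message has
    # the requested type.
    while messages and messages[-1][0] != end_type:
        messages = messages[:-1]
    return messages

msgs = [("system", "s"), ("ai", "second"), ("human", "third"), ("ai", "fourth")]
```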
Last 40 tokens, including system message, allowing partial messages, start on HumanMessage:

.. code-block:: python

    trim_messages(
        messages,
        max_tokens=40,
        token_counter=dummy_token_counter,
        strategy="last",
        include_system=True,
        allow_partial=True,
        start_on="human",
    )

.. code-block:: python

    [
        SystemMessage("This is a 4 token text. The full message is 10 tokens."),
        HumanMessage("This is a 4 token text. The full message is 10 tokens.", id="third"),
        AIMessage("This is a 4 token text. The full message is 10 tokens.", id="fourth"),
    ]
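Symmetrically, ``start_on="human"`` drops leading messages (after the system message, if it is kept) until the first HumanMessage, which is why the partial AIMessage with id="second" disappears from this result. A hypothetical sketch:

```python
def apply_start_on(messages, start_type, keep_system=True):
    # Keep the system message (if requested), then drop messages from
    # the front until one of the requested type is reached.
    head, rest = [], list(messages)
    if keep_system and rest and rest[0][0] == "system":
        head, rest = [rest[0]], rest[1:]
    while rest and rest[0][0] != start_type:
        rest = rest[1:]
    return head + rest

msgs = [("system", "s"), ("ai", "second"), ("human", "third"), ("ai", "fourth")]
```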
"""  # noqa: E501
from langchain_core.language_models import BaseLanguageModel
