forked from Archives/langchain
count tokens instead of chars in autogpt prompt (#3841)
This looks like a bug. By using `len` instead of `token_counter`, the prompt counts characters rather than tokens, so it believes it has less context window remaining than it actually does and therefore includes fewer previous messages. The reduced message history makes the agent repetitive when selecting tasks.
This commit is contained in:
parent c4d3d74148
commit 47a685adcf
@@ -63,7 +63,7 @@ class AutoGPTPrompt(BaseChatPromptTemplate, BaseModel):
             f"from your past:\n{relevant_memory}\n\n"
         )
         memory_message = SystemMessage(content=content_format)
-        used_tokens += len(memory_message.content)
+        used_tokens += self.token_counter(memory_message.content)
         historical_messages: List[BaseMessage] = []
         for message in previous_messages[-10:][::-1]:
             message_tokens = self.token_counter(message.content)
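Why counting characters shrinks the usable history can be sketched in isolation. The snippet below is illustrative, not the library code: `token_counter` here is a hypothetical whitespace tokenizer standing in for the model's real tokenizer, and the messages and budget are made up.

```python
def token_counter(text: str) -> int:
    # Stand-in tokenizer (assumption): one token per whitespace-separated word.
    return len(text.split())

def count_fitting_messages(messages, budget, measure):
    """Walk the history newest-first and count how many messages fit the budget."""
    used, kept = 0, 0
    for msg in reversed(messages):
        cost = measure(msg)
        if used + cost > budget:
            break
        used += cost
        kept += 1
    return kept

# Hypothetical previous messages and token budget.
messages = ["analyse the repo structure", "write unit tests", "refactor the parser"] * 4
budget = 40

# Measuring with len() (characters) burns the budget several times faster
# than measuring with token_counter(), so far fewer messages are kept.
by_chars = count_fitting_messages(messages, budget, len)
by_tokens = count_fitting_messages(messages, budget, token_counter)
```

With the character measure only the last two messages fit this budget, while the token measure keeps the whole history, which is exactly the repetition-inducing truncation the commit fixes.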