count tokens instead of chars in autogpt prompt (#3841)

This looks like a bug.

By using `len` (a character count) instead of `token_counter`, the prompt overestimates how many tokens it has already used, so it thinks it has less context window remaining than it actually does. Because of this it adds fewer previous messages, and the reduced message context makes the agent repetitive when selecting tasks.
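For intuition, a character count overstates token usage several-fold on English text (roughly 4 characters per token on average), so the budget check becomes far too conservative. A minimal sketch of the difference, assuming the tiktoken package (not part of this change; the example text is illustrative):

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "This reminds you of these events from your past: ..."

used_chars = len(text)               # what the old code added to used_tokens
used_tokens = len(enc.encode(text))  # what a real token counter reports

# used_chars is roughly 4-5x used_tokens for English text, so the old
# code exhausted the token budget far faster than necessary.
print(used_chars, used_tokens)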
Authored by Jef Packer; commit 47a685adcf (parent c4d3d74148), committed via GitHub.

@@ -63,7 +63,7 @@ class AutoGPTPrompt(BaseChatPromptTemplate, BaseModel):
             f"from your past:\n{relevant_memory}\n\n"
         )
         memory_message = SystemMessage(content=content_format)
-        used_tokens += len(memory_message.content)
+        used_tokens += self.token_counter(memory_message.content)
         historical_messages: List[BaseMessage] = []
         for message in previous_messages[-10:][::-1]:
             message_tokens = self.token_counter(message.content)
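For context, token_counter is a callable on AutoGPTPrompt that maps a string to its token count. A hedged sketch of how it might be supplied (the class and field names appear in the diff above; the exact wiring and import paths below are assumptions for illustration, reflecting the langchain.experimental layout around the time of this commit, not code from this change):

from langchain.chat_models import ChatOpenAI
from langchain.experimental.autonomous_agents.autogpt.prompt import AutoGPTPrompt

llm = ChatOpenAI()
prompt = AutoGPTPrompt(
    ai_name="AutoGPT",
    ai_role="Assistant",
    tools=[],
    input_variables=["memory", "messages", "goals", "user_input"],
    token_counter=llm.get_num_tokens,  # counts tokens, not characters
)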
