mirror of https://github.com/hwchase17/langchain
count tokens instead of chars in autogpt prompt (#3841)
This looks like a bug. By using len (character count) instead of token_counter, the prompt believes it has less context window than it actually does, so it includes fewer previous messages. That reduced message history makes the agent repetitive when selecting tasks.
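A minimal sketch of the issue the commit describes: when filling the prompt's history, the budget check should use a token counter rather than character length, since character counts overestimate cost and evict messages too early. The `token_counter` and `fit_messages` names here are hypothetical stand-ins, not LangChain's actual API; a whitespace split approximates a real tokenizer.

```python
def token_counter(text: str) -> int:
    # Hypothetical stand-in: a real implementation would use the
    # model's tokenizer (e.g. tiktoken), not a whitespace split.
    return len(text.split())


def fit_messages(messages, budget: int, counter=token_counter):
    """Keep the newest messages whose total cost fits in `budget`."""
    kept = []
    used = 0
    for msg in reversed(messages):  # newest first
        cost = counter(msg)
        if used + cost > budget:
            break
        kept.insert(0, msg)  # restore chronological order
        used += cost
    return kept


history = ["first task result", "second task result", "third task result"]

# With counter=len, each message "costs" its character count (17-18 here),
# so almost nothing fits; counting tokens keeps far more context.
print(fit_messages(history, budget=7))        # two messages fit
print(fit_messages(history, budget=7, counter=len))  # none fit
```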
parent: c4d3d74148
commit: 47a685adcf