mirror of https://github.com/hwchase17/langchain
Increase `request_timeout` on ChatOpenAI (#3910)
With longer contexts and completions, gpt-3.5-turbo and, especially, gpt-4 will more often than not take > 60 seconds to respond. Based on other discussions, this seems to be an increasingly common problem, especially with summarization tasks:

- https://github.com/hwchase17/langchain/issues/3512
- https://github.com/hwchase17/langchain/issues/3005

OpenAI's maximum timeout of 600 seconds seems excessive, so I settled on 120, but I do run into generations that take > 240 seconds when using large prompts and completions with GPT-4, so maybe 240 would be a better compromise?
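The change itself is just a larger `request_timeout` default on `ChatOpenAI` (callers can still override it, e.g. `ChatOpenAI(request_timeout=240)`). As a minimal sketch of what a client-side request timeout does, here is a self-contained illustration; `call_with_timeout` and `slow_completion` are hypothetical stand-ins, not langchain or OpenAI APIs:

```python
import concurrent.futures
import time

# Hypothetical helper illustrating client-side timeout behavior:
# give up on a call that runs longer than timeout_s.
def call_with_timeout(fn, timeout_s):
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(fn).result(timeout=timeout_s)

# Stand-in for a long GPT-4 generation.
def slow_completion():
    time.sleep(0.2)
    return "done"

# Like the old 60 s default cutting off a still-running completion:
try:
    call_with_timeout(slow_completion, timeout_s=0.05)
except concurrent.futures.TimeoutError:
    print("timed out")

# Like the raised default: the same call now has room to finish.
print(call_with_timeout(slow_completion, timeout_s=1.0))
```

The trade-off mirrors the one discussed above: too low a timeout aborts slow-but-successful generations, while too high a timeout makes genuinely hung requests block for a long time.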
parent 2451310975
commit 9c89ff8bd9