langchain/libs
joshy-deshaw bf5385592e
core, community: propagate context between threads (#15171)
While using `chain.batch`, the default implementation uses a
`ThreadPoolExecutor` and runs the chains in separate threads. An issue
with this approach is that [the token counting
callback](https://python.langchain.com/docs/modules/callbacks/token_counting)
fails to work because the context is not propagated between threads.
This PR adds context propagation to the new threads and adds some
thread synchronization in the OpenAI callback. With this change, the
token counting callback works as intended.

The context propagation change is also beneficial for those
implementing custom callbacks with similar functionality.
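The fix described above can be sketched with standard-library primitives: copy the submitting thread's `contextvars` context at submit time and run the work inside it, and guard the shared counter with a lock. This is a minimal illustration of the technique, not the actual LangChain implementation; `current_handler`, `TokenCountingHandler`, and `run_chain` are hypothetical names.

```python
import contextvars
import threading
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for LangChain's callback context and the
# OpenAI token-counting callback; names are illustrative only.
current_handler = contextvars.ContextVar("current_handler", default=None)

class TokenCountingHandler:
    def __init__(self):
        self.total_tokens = 0
        self._lock = threading.Lock()  # synchronizes updates across worker threads

    def on_llm_end(self, tokens):
        with self._lock:
            self.total_tokens += tokens

def submit_with_context(executor, fn, *args):
    # Copy the *caller's* context (not the worker's) so ContextVars set in
    # the parent thread are visible inside the pool thread.
    ctx = contextvars.copy_context()
    return executor.submit(ctx.run, fn, *args)

def run_chain(n_tokens):
    handler = current_handler.get()  # reachable thanks to context propagation
    handler.on_llm_end(n_tokens)

handler = TokenCountingHandler()
current_handler.set(handler)
with ThreadPoolExecutor() as executor:
    futures = [submit_with_context(executor, run_chain, 10) for _ in range(5)]
    for f in futures:
        f.result()
print(handler.total_tokens)  # 50
```

Without `submit_with_context`, `current_handler.get()` in the worker thread would return the default `None`, which is the failure mode the PR addresses.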

---------

Co-authored-by: Nuno Campos <nuno@langchain.dev>
2023-12-28 14:51:22 -08:00
cli cli: test_integration group (#14924) 2023-12-19 12:09:04 -08:00
community core, community: propagate context between threads (#15171) 2023-12-28 14:51:22 -08:00
core core, community: propagate context between threads (#15171) 2023-12-28 14:51:22 -08:00
experimental infra: Fix test filesystem paths incompatible with windows (#14388) 2023-12-21 13:45:42 -08:00
langchain Make all json parsing less strict by default (#15287) 2023-12-28 14:48:53 -08:00
partners Fix: fix partners name typo in tests (#15066) 2023-12-22 11:48:39 -08:00