mirror of
https://github.com/hwchase17/langchain
synced 2024-11-06 03:20:49 +00:00
14ff1438e6
- **Description:** This change 1. propagates `InferenceServerException` to the caller, and 2. stops the gRPC receiver thread on exception. Before the change, the server exception was concatenated into the result string, producing:
  ```
  for token in result_queue:
  >     result_str += token
  E   TypeError: can only concatenate str (not "InferenceServerException") to str
  ../../langchain_nvidia_trt/llms.py:207: TypeError
  ```
  and the stream thread kept running. After the change, the request thread stops correctly and the caller receives the root-cause exception:
  ```
  E   tritonclient.utils.InferenceServerException: [request id: 4529729] expected number of inputs between 2 and 3 but got 10 inputs for model 'vllm_model'
  ../../langchain_nvidia_trt/llms.py:205: InferenceServerException
  ```
- **Issue:** the issue # it fixes if applicable,
- **Dependencies:** any dependencies required for this change,
- **Twitter handle:** [t.me/mkhl_spb](https://t.me/mkhl_spb)

I'm not sure about test coverage. Should I set up deep mocks, or is there some kind of Triton stub available via testcontainers or similar?