Fix output final text for HuggingFaceTextGenInference when streaming (#6211)
The LLM integration [HuggingFaceTextGenInference](https://github.com/hwchase17/langchain/blob/master/langchain/llms/huggingface_text_gen_inference.py) already supports streaming. However, when streaming is enabled, it always returns an empty string as the final output text once the LLM finishes, because `text` is initialized to an empty string and never updated afterward. This PR fixes the collection of the final output text by concatenating each new token onto `text` as it arrives.
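The bug and fix can be sketched as follows. This is a minimal illustration, not the actual langchain code: the generator and function names here are hypothetical stand-ins for the streaming loop inside `HuggingFaceTextGenInference._call`.

```python
from typing import Iterator


def stream_tokens() -> Iterator[str]:
    # Hypothetical stand-in for the token stream returned by the
    # text-generation-inference client when streaming is enabled.
    yield from ["Hello", ",", " world"]


def call_buggy() -> str:
    text = ""  # instantiated as an empty string...
    for token in stream_tokens():
        pass  # each token is emitted to a callback, but `text` is never updated
    return text  # ...so the final output is always ""


def call_fixed() -> str:
    text = ""
    for token in stream_tokens():
        text += token  # the fix: concatenate each new token into the final text
    return text
```

With the fix, the streamed tokens are accumulated, so the final return value matches what was streamed (`"Hello, world"` in this toy example) instead of an empty string.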
parent b3bccabc66 · commit ea6a5b03e0