Fix output final text for HuggingFaceTextGenInference when streaming (#6211)

The LLM integration
[HuggingFaceTextGenInference](https://github.com/hwchase17/langchain/blob/master/langchain/llms/huggingface_text_gen_inference.py)
already has streaming support.

However, when streaming is enabled, it always returns an empty string as
the final output text once the LLM has finished. This is because `text`
is initialized to an empty string and never updated afterwards.

This PR fixes the collection of the final output text by concatenating
new tokens.
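
The accumulation pattern can be sketched as follows. This is a minimal, self-contained illustration, not the actual `text-generation-inference` client: the `Token` class and `collect_stream` helper are hypothetical stand-ins for the real streaming objects.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Optional


@dataclass
class Token:
    """Hypothetical stand-in for a streamed token."""
    text: str
    special: bool = False  # special tokens (e.g. EOS) carry no output text


def collect_stream(
    tokens: Iterable[Token],
    text_callback: Optional[Callable[[str], None]] = None,
) -> str:
    text = ""  # before the fix, this was returned unchanged (always empty)
    for token in tokens:
        if not token.special:
            if text_callback:
                text_callback(token.text)
            text += token.text  # the fix: concatenate each new token
    return text
```

With the `text +=` line in place, the function both streams tokens through the callback and returns the full concatenated text at the end.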
Author: Jan Pawellek, 2023-06-19 02:01:15 +02:00 (committed by GitHub)
Parent: b3bccabc66
Commit: ea6a5b03e0


@@ -169,4 +169,5 @@ class HuggingFaceTextGenInference(LLM):
                 if not token.special:
                     if text_callback:
                         text_callback(token.text)
+                    text += token.text
         return text