mirror of https://github.com/hwchase17/langchain
community[patch]: huggingface hub character removal bug fix (#16233)
- **Description:** Some text-generation models on Hugging Face repeat the prompt in their generated response, but not all do. The tests use "gpt2", which DOES repeat the prompt, and as such the HuggingFaceHub class is hardcoded to remove the first `len(prompt)` characters of the response. However, if you are using a model (such as the very popular "meta-llama/Llama-2-7b-chat-hf") that does NOT repeat the prompt in its generated text, then the beginning of the generated text gets cut off. This change fixes that bug by first checking whether the prompt is repeated at the start of the generated response and removing it only in that case.
- **Issue:** #16232
- **Dependencies:** N/A
- **Twitter handle:** N/A
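The conditional removal described above can be sketched as follows. This is a minimal illustration, not the actual `HuggingFaceHub` implementation; the function name `strip_echoed_prompt` and its signature are hypothetical.

```python
def strip_echoed_prompt(prompt: str, generated_text: str) -> str:
    """Remove the prompt prefix only when the model echoed it back."""
    if generated_text.startswith(prompt):
        # Models like "gpt2" repeat the prompt, so trim it off.
        return generated_text[len(prompt):]
    # Models like "meta-llama/Llama-2-7b-chat-hf" do not repeat the
    # prompt; returning the text unchanged avoids cutting off its start.
    return generated_text
```

The unconditional `generated_text[len(prompt):]` slice that this replaces would silently truncate the output of non-echoing models.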
parent
3613d8a2ad
commit
9d32af72ce