Mirror of https://github.com/hwchase17/langchain (synced 2024-11-10 01:10:59 +00:00, commit 8f158b72fc)
Stop sequences are useful when you are doing long-running completions and need to stop early rather than running for the full `max_length`. Not only does this save inference cost on Replicate, it is also much faster if you are going to truncate the output later anyway.

Other LLMs support stop sequences natively (e.g. OpenAI), but I didn't see this for Replicate, so I'm adding it via their prediction cancel method.

Housekeeping: I ran `make format` and `make lint`; no issues were reported in the files I touched. I updated the Replicate integration test and ran `poetry run pytest tests/integration_tests/llms/test_replicate.py` successfully.

Finally, I am @tjaffri https://twitter.com/tjaffri for feature announcement tweets... or if you could please tag @docugami https://twitter.com/docugami we would really appreciate that :-)

Co-authored-by: Taqi Jaffri <tjaffri@docugami.com>
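To illustrate the idea, here is a minimal sketch of client-side stop-sequence handling for a streaming completion. This is a hypothetical helper, not the actual LangChain or Replicate code: it accumulates streamed tokens, truncates at the first stop sequence found, and tells the caller to cancel the remote prediction instead of letting it run to `max_length`.

```python
from typing import Iterable, List, Optional, Tuple


def apply_stop_sequences(
    tokens: Iterable[str], stop: Optional[List[str]]
) -> Tuple[str, bool]:
    """Accumulate streamed tokens into text, honoring stop sequences.

    Returns (text, cancelled). `cancelled` is True when a stop sequence
    was hit, meaning the caller should cancel the remote prediction
    (e.g. via Replicate's prediction cancel method) rather than keep
    paying for tokens it will discard anyway.
    """
    output = ""
    for token in tokens:
        output += token
        if stop:
            # Find the earliest occurrence of any stop sequence in the
            # accumulated text so far.
            cut = min((output.find(s) for s in stop if s in output), default=-1)
            if cut != -1:
                # Truncate before the stop sequence and signal early-out.
                return output[:cut], True
    return output, False


# Example: the stream is cut off at the "\n\n" stop sequence.
text, cancelled = apply_stop_sequences(
    ["Hello", " world", "\n\nQ:", " next question"], stop=["\n\n"]
)
# text == "Hello world", cancelled == True
```

With no stop sequences supplied, the helper simply concatenates the full stream, which mirrors the pre-existing behavior of running to `max_length`.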