langchain/tests/integration_tests/llms/utils.py
mrbean fe6695b9e7
Add HuggingFacePipeline LLM (#353)
https://github.com/hwchase17/langchain/issues/354

Add support for running your own HF pipeline locally. This makes it possible
to use a much wider range of HF features and models, since you are no longer
limited to what is hosted on the HF Hub. You could also use HF Optimum to
quantize your models and get reasonably fast inference even on a laptop.
2022-12-17 07:00:04 -08:00


"""Utils for LLM Tests."""
from langchain.llms.base import LLM
def assert_llm_equality(llm: LLM, loaded_llm: LLM) -> None:
"""Assert LLM Equality for tests."""
# Check that they are the same type.
assert type(llm) == type(loaded_llm)
# Client field can be session based, so hash is different despite
# all other values being the same, so just assess all other fields
for field in llm.__fields__.keys():
if field != "client" and field != "pipeline":
val = getattr(llm, field)
new_val = getattr(loaded_llm, field)
assert new_val == val
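
The comparison strategy above — skip session-bound fields (`client`, `pipeline`) and compare everything else from the pydantic `__fields__` mapping — can be illustrated without langchain installed. The sketch below uses a hypothetical `FakeLLM` stand-in (not from the repo) whose `__fields__` dict mimics what pydantic generates for an LLM model:

```python
class FakeLLM:
    """Hypothetical stand-in for an LLM model; mimics pydantic's
    ``__fields__`` mapping with a plain class-level dict."""

    __fields__ = {"client": None, "model_name": None, "temperature": None}

    def __init__(self, client, model_name, temperature):
        self.client = client
        self.model_name = model_name
        self.temperature = temperature


def assert_llm_equality(llm, loaded_llm):
    """Same comparison logic as the utility above: identical type,
    identical fields, except session-bound ones."""
    assert type(llm) == type(loaded_llm)
    for field in llm.__fields__.keys():
        # Session-bound objects differ between runs, so skip them.
        if field != "client" and field != "pipeline":
            assert getattr(loaded_llm, field) == getattr(llm, field)


original = FakeLLM(client=object(), model_name="gpt2", temperature=0.7)
# A "loaded" copy: a different client object, but the same configuration.
loaded = FakeLLM(client=object(), model_name="gpt2", temperature=0.7)
assert_llm_equality(original, loaded)  # passes: client is ignored
```

A copy with a different `temperature` would trip the field assertion, which is exactly what the serialization round-trip tests rely on.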