fix: llm caching for replicate (#6396)

Caching wasn't accounting for which model was used, so a cached result from the
first executed model would be returned for the same prompt sent to a different
model.

This was because `Replicate._identifying_params` did not include the
`model` parameter.
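
For concreteness, a minimal sketch of how the collision could show up with the in-memory cache (the model slugs below are hypothetical, and `REPLICATE_API_TOKEN` is assumed to be set):

```python
import langchain
from langchain.cache import InMemoryCache
from langchain.llms import Replicate

langchain.llm_cache = InMemoryCache()

# Hypothetical model slugs/version hashes, for illustration only.
llama = Replicate(model="replicate/llama-7b:abc123")
vicuna = Replicate(model="replicate/vicuna-13b:def456")

prompt = "What is the capital of France?"
print(llama(prompt))   # runs the model and writes a cache entry
print(vicuna(prompt))  # before this fix: returned llama's cached completion
```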

FYI
- @cbh123
- @hwchase17
- @agola11
Bryce Drennan 2023-06-19 22:47:59 -07:00 committed by GitHub
parent 8a604b93ab
commit 384fa43fc3


@@ -72,6 +72,7 @@ class Replicate(LLM):
     def _identifying_params(self) -> Mapping[str, Any]:
         """Get the identifying parameters."""
         return {
+            "model": self.model,
             **{"model_kwargs": self.model_kwargs},
         }
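
For reference, the cache key LangChain derives for an LLM includes its serialized identifying parameters alongside the prompt, so omitting `model` made every Replicate instance with the same `model_kwargs` map to the same entry. A rough, simplified sketch of that keying logic (not LangChain's exact internals):

```python
from typing import Any, Mapping

def cache_key(prompt: str, identifying_params: Mapping[str, Any]) -> str:
    # Simplified stand-in for the llm_string LangChain builds from an
    # LLM's identifying parameters before looking up llm_cache.
    param_str = str(sorted(identifying_params.items()))
    return f"{param_str}::{prompt}"

# Before the fix: two different models produce identical keys.
assert cache_key("hi", {"model_kwargs": {}}) == cache_key("hi", {"model_kwargs": {}})

# After the fix: the model identifier keeps the keys distinct.
assert cache_key("hi", {"model": "a/model:v1", "model_kwargs": {}}) != \
       cache_key("hi", {"model": "b/model:v2", "model_kwargs": {}})
```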