fix: llm caching for replicate (#6396)

Caching wasn't accounting for which model was used, so a result cached for
the first executed model would be returned for the same prompt on a
different model.

This was because `Replicate._identifying_params` did not include the
`model` parameter.
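
For context, here is a minimal sketch of the failure mode. The `cache_key`
helper is hypothetical (it mirrors the idea of deriving a cache key from the
prompt plus `_identifying_params`, not LangChain's exact internals):

```python
from typing import Any, Mapping

# Hypothetical cache keyed on (prompt, stringified identifying params).
def cache_key(prompt: str, identifying_params: Mapping[str, Any]) -> tuple[str, str]:
    return (prompt, str(sorted(identifying_params.items())))

# Before the fix: two different Replicate models yield identical params,
# so they share a cache entry for the same prompt.
before_a = cache_key("hi", {"model_kwargs": {}})  # model A
before_b = cache_key("hi", {"model_kwargs": {}})  # model B
assert before_a == before_b  # collision: B returns A's cached completion

# After the fix: "model" is part of the key, so entries stay distinct.
after_a = cache_key("hi", {"model": "owner/model-a:ver", "model_kwargs": {}})
after_b = cache_key("hi", {"model": "owner/model-b:ver", "model_kwargs": {}})
assert after_a != after_b
```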

FYI
- @cbh123
- @hwchase17
- @agola11
Committed to master by Bryce Drennan via GitHub
commit 384fa43fc3 (parent 8a604b93ab)

@@ -72,6 +72,7 @@ class Replicate(LLM):
     def _identifying_params(self) -> Mapping[str, Any]:
         """Get the identifying parameters."""
         return {
+            "model": self.model,
             **{"model_kwargs": self.model_kwargs},
         }
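
With the fix applied, separate Replicate models no longer share cache
entries. A hedged usage sketch (the model identifiers are placeholders, and
this assumes `REPLICATE_API_TOKEN` is set in the environment):

```python
import langchain
from langchain.cache import InMemoryCache
from langchain.llms import Replicate

# Enable the in-memory LLM cache; cache keys are derived from the prompt
# plus the LLM's identifying parameters.
langchain.llm_cache = InMemoryCache()

# Placeholder model identifiers; substitute real "owner/name:version" refs.
llm_a = Replicate(model="owner/model-a:version")
llm_b = Replicate(model="owner/model-b:version")

# Before this fix, both instances produced identical identifying params,
# so llm_b could return llm_a's cached completion for the same prompt.
# With "model" in the params, each model now gets its own cache entry.
print(llm_a("What is a llama?"))
print(llm_b("What is a llama?"))
```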
