mirror of https://github.com/hwchase17/langchain
fix: llm caching for replicate (#6396)
Caching wasn't accounting for which model was used, so a cached result from the first executed model would be returned for the same prompt on a different model. The cause was that `Replicate._identifying_params` did not include the `model` parameter. FYI - @cbh123 - @hwchase17 - @agola11
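The failure mode can be sketched as follows. This is a minimal, self-contained illustration, not the actual LangChain source: the class names, the `cache_key` helper, and the model strings are all hypothetical, but it shows why omitting `model` from `_identifying_params` makes two different models collide on one cache entry when the cache key is derived from the prompt plus the LLM's identifying parameters.

```python
import json


class ReplicateBefore:
    """Hypothetical stand-in for the pre-fix Replicate wrapper."""

    def __init__(self, model: str):
        self.model = model

    @property
    def _identifying_params(self) -> dict:
        # Bug: `model` is missing, so every model shares one identity.
        return {}


class ReplicateAfter(ReplicateBefore):
    """Same wrapper with the fix applied."""

    @property
    def _identifying_params(self) -> dict:
        # Fix: include `model` so cache keys differ per model.
        return {"model": self.model}


def cache_key(llm, prompt: str) -> str:
    # Hypothetical helper: a stable key built from the prompt plus
    # the LLM's identifying parameters, mimicking how an LLM cache
    # distinguishes entries.
    return prompt + "|" + json.dumps(llm._identifying_params, sort_keys=True)


if __name__ == "__main__":
    # Before the fix: two different models produce the same key,
    # so the second model gets the first model's cached result.
    a = ReplicateBefore("stability-ai/stablelm")
    b = ReplicateBefore("replicate/vicuna-13b")
    print(cache_key(a, "hi") == cache_key(b, "hi"))  # True: collision

    # After the fix: the keys differ, so each model caches separately.
    c = ReplicateAfter("stability-ai/stablelm")
    d = ReplicateAfter("replicate/vicuna-13b")
    print(cache_key(c, "hi") == cache_key(d, "hi"))  # False: fixed
```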
parent
8a604b93ab
commit
384fa43fc3