mirror of https://github.com/hwchase17/langchain
Check for single prompt in __call__ method of the BaseLLM class (#4892)
# Ensuring that users pass a single prompt when calling a LLM

- This PR adds a check to the `__call__` method of the `BaseLLM` class to ensure that it is called with a single prompt
- Raises a `ValueError` if users try to call a LLM with a list of prompts and instructs them to use the `generate` method instead

## Why this could be useful

I stumbled across this by accident. I accidentally called the OpenAI LLM with a list of prompts instead of a single string and still got a result:

```
>>> from langchain.llms import OpenAI
>>> llm = OpenAI()
>>> llm(["Tell a joke"]*2)
"\n\nQ: Why don't scientists trust atoms?\nA: Because they make up everything!"
```

It might be better to catch such a scenario, preventing unnecessary costs and irritation for the user.

## Proposed behaviour

```
>>> from langchain.llms import OpenAI
>>> llm = OpenAI()
>>> llm(["Tell a joke"]*2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/marcus/Projects/langchain/langchain/llms/base.py", line 291, in __call__
    raise ValueError(
ValueError: Argument `prompt` is expected to be a single string, not a list. If you want to run the LLM on multiple prompts, use `generate` instead.
```
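The guard itself can be sketched roughly as below. This is a minimal, self-contained illustration of the check the PR describes, not the actual langchain source; the `BaseLLM` methods here (`_generate_text`, `generate`) are simplified stand-ins for the real implementation:

```python
# Hypothetical sketch of the type check added to __call__.
# Only the isinstance guard and error message mirror the PR; the rest is stubbed.

class BaseLLM:
    def _generate_text(self, prompt: str) -> str:
        # Stand-in for the real completion call.
        return f"completion for: {prompt}"

    def generate(self, prompts: list) -> list:
        # Batch entry point: this is where lists of prompts belong.
        return [self._generate_text(p) for p in prompts]

    def __call__(self, prompt: str) -> str:
        # Reject non-string prompts early instead of silently using
        # only part of the input (and billing the user for it).
        if not isinstance(prompt, str):
            raise ValueError(
                "Argument `prompt` is expected to be a single string, not a list. "
                "If you want to run the LLM on multiple prompts, use `generate` instead."
            )
        return self._generate_text(prompt)
```

Calling `BaseLLM()("Tell a joke")` still works, while `BaseLLM()(["Tell a joke"]*2)` now raises the `ValueError` shown above.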
parent 6c60251f52 · commit 2aa3754024