From 0fa4516ce42418eb896c91362adbb9d346fafd95 Mon Sep 17 00:00:00 2001
From: Jeremy Suriel
Date: Mon, 21 Aug 2023 18:54:38 -0400
Subject: [PATCH] Fix typo (#9565)

Corrected a minor documentation typo here:
https://python.langchain.com/docs/modules/model_io/models/llms/#generate-batch-calls-richer-outputs
---
 docs/snippets/modules/model_io/models/llms/get_started.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/snippets/modules/model_io/models/llms/get_started.mdx b/docs/snippets/modules/model_io/models/llms/get_started.mdx
index 5553a7faa2..54d6a96b93 100644
--- a/docs/snippets/modules/model_io/models/llms/get_started.mdx
+++ b/docs/snippets/modules/model_io/models/llms/get_started.mdx
@@ -43,7 +43,7 @@ llm("Tell me a joke")
 
 ### `generate`: batch calls, richer outputs
 
-`generate` lets you can call the model with a list of strings, getting back a more complete response than just the text. This complete response can include things like multiple top responses and other LLM provider-specific information:
+`generate` lets you call the model with a list of strings, getting back a more complete response than just the text. This complete response can include things like multiple top responses and other LLM provider-specific information:
 
 ```python
 llm_result = llm.generate(["Tell me a joke", "Tell me a poem"]*15)
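
For context, the corrected documentation line describes the batch-call pattern: passing a list of prompts and receiving a structured result with one list of candidate generations per prompt. Below is a minimal, self-contained sketch of that pattern using a hypothetical `FakeLLM` stand-in; the `FakeLLM`, `Generation`, and `LLMResult` classes here are illustrative placeholders, not LangChain's actual implementation.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Generation:
    """One candidate completion for a prompt (illustrative placeholder)."""
    text: str


@dataclass
class LLMResult:
    """Batch result: an outer list per prompt, an inner list per candidate."""
    generations: List[List[Generation]]


class FakeLLM:
    """Hypothetical stand-in mimicking a callable LLM with a `generate` method."""

    def __call__(self, prompt: str) -> str:
        # A single-prompt call returns just the text.
        return f"response to: {prompt}"

    def generate(self, prompts: List[str]) -> LLMResult:
        # A batch call returns a richer structured result:
        # one list of candidate Generations for each input prompt.
        return LLMResult(generations=[[Generation(self(p))] for p in prompts])


llm = FakeLLM()
# Mirrors the snippet in the patched docs: 2 prompts repeated 15 times = 30 prompts.
llm_result = llm.generate(["Tell me a joke", "Tell me a poem"] * 15)
print(len(llm_result.generations))  # one generation list per input prompt
```

The point the docs make is the shape of the return value: a plain call gives back a string, while `generate` gives back an object whose `generations` field keeps per-prompt (and potentially per-candidate) structure, leaving room for provider-specific metadata alongside the text.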