From a429145420b2428bc9dda767179ca5fddf3e374b Mon Sep 17 00:00:00 2001
From: Aayush Shah <83115948+AayushSameerShah@users.noreply.github.com>
Date: Fri, 11 Aug 2023 13:31:40 +0530
Subject: [PATCH] Minor grammatical error (#9102)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Have corrected a grammatical error in:
https://python.langchain.com/docs/modules/model_io/models/llms/ document 😄
---
 docs/snippets/modules/model_io/models/llms/get_started.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/snippets/modules/model_io/models/llms/get_started.mdx b/docs/snippets/modules/model_io/models/llms/get_started.mdx
index 1ef6c06069..5553a7faa2 100644
--- a/docs/snippets/modules/model_io/models/llms/get_started.mdx
+++ b/docs/snippets/modules/model_io/models/llms/get_started.mdx
@@ -43,7 +43,7 @@ llm("Tell me a joke")
 
 ### `generate`: batch calls, richer outputs
 
-`generate` lets you can call the model with a list of strings, getting back a more complete response than just the text. This complete response can includes things like multiple top responses and other LLM provider-specific information:
+`generate` lets you can call the model with a list of strings, getting back a more complete response than just the text. This complete response can include things like multiple top responses and other LLM provider-specific information:
 
 ```python
 llm_result = llm.generate(["Tell me a joke", "Tell me a poem"]*15)