diff --git a/docs/modules/chains/combine_docs_examples/qa_with_sources.ipynb b/docs/modules/chains/combine_docs_examples/qa_with_sources.ipynb
index 1e458d27..f9b97a5b 100644
--- a/docs/modules/chains/combine_docs_examples/qa_with_sources.ipynb
+++ b/docs/modules/chains/combine_docs_examples/qa_with_sources.ipynb
@@ -365,6 +365,20 @@
     "chain({\"input_documents\": docs, \"question\": query}, return_only_outputs=True)"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "id": "d943c6c1",
+   "metadata": {},
+   "source": [
+    "**Batch Size**\n",
+    "\n",
+    "When using the `map_reduce` chain, keep in mind the batch size you are using during the map step. If it is too high, it could cause rate-limiting errors. You can control this by setting the batch size on the LLM used. Note that this only applies to LLMs with this parameter. Below is an example of doing so:\n",
+    "\n",
+    "```python\n",
+    "llm = OpenAI(batch_size=5, temperature=0)\n",
+    "```"
+   ]
+  },
   {
    "cell_type": "markdown",
    "id": "5bf0e1ab",
diff --git a/docs/modules/chains/combine_docs_examples/question_answering.ipynb b/docs/modules/chains/combine_docs_examples/question_answering.ipynb
index 9364bdad..6e388aa6 100644
--- a/docs/modules/chains/combine_docs_examples/question_answering.ipynb
+++ b/docs/modules/chains/combine_docs_examples/question_answering.ipynb
@@ -356,6 +356,20 @@
     "chain({\"input_documents\": docs, \"question\": query}, return_only_outputs=True)"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "id": "6391b7ab",
+   "metadata": {},
+   "source": [
+    "**Batch Size**\n",
+    "\n",
+    "When using the `map_reduce` chain, keep in mind the batch size you are using during the map step. If it is too high, it could cause rate-limiting errors. You can control this by setting the batch size on the LLM used. Note that this only applies to LLMs with this parameter. Below is an example of doing so:\n",
+    "\n",
+    "```python\n",
+    "llm = OpenAI(batch_size=5, temperature=0)\n",
+    "```"
+   ]
+  },
   {
    "cell_type": "markdown",
    "id": "6ea50ad0",
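For reference, a minimal sketch of how the `batch_size`-capped LLM added above plugs into the `map_reduce` chain these notebooks build. The chain construction mirrors the notebooks' earlier cells; the documents and query here are made-up stand-ins, not values from the diff:

```python
from langchain.llms import OpenAI
from langchain.docstore.document import Document
from langchain.chains.qa_with_sources import load_qa_with_sources_chain

# Stand-in documents; in the notebooks these come from splitting a real
# source text and attaching a "source" key to each chunk's metadata.
docs = [
    Document(page_content="Harrison worked at Kensho.", metadata={"source": "0-pl"}),
    Document(page_content="Kensho is based in Cambridge.", metadata={"source": "1-pl"}),
]
query = "Where did Harrison work?"

# batch_size caps how many prompts are sent to the OpenAI completion
# endpoint per request during the map step; 5 is an illustrative value.
llm = OpenAI(batch_size=5, temperature=0)

# Same chain construction as earlier in the notebook, now with the
# rate-limit-friendly LLM.
chain = load_qa_with_sources_chain(llm, chain_type="map_reduce")
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
```

Lowering `batch_size` trades throughput for fewer prompts per API request, which is the knob the new docs cells recommend when rate limits are hit.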