diff --git a/docs/extras/modules/model_io/models/llms/custom_llm.ipynb b/docs/extras/modules/model_io/models/llms/custom_llm.ipynb
index 2e7e907523..3ff99dc80d 100644
--- a/docs/extras/modules/model_io/models/llms/custom_llm.ipynb
+++ b/docs/extras/modules/model_io/models/llms/custom_llm.ipynb
@@ -52,6 +52,7 @@
     "        prompt: str,\n",
     "        stop: Optional[List[str]] = None,\n",
     "        run_manager: Optional[CallbackManagerForLLMRun] = None,\n",
+    "        **kwargs: Any,\n",
     "    ) -> str:\n",
     "        if stop is not None:\n",
     "            raise ValueError(\"stop kwargs are not permitted.\")\n",
diff --git a/docs/extras/use_cases/question_answering/question_answering.ipynb b/docs/extras/use_cases/question_answering/question_answering.ipynb
index fe61c0ad25..ccadf6debd 100644
--- a/docs/extras/use_cases/question_answering/question_answering.ipynb
+++ b/docs/extras/use_cases/question_answering/question_answering.ipynb
@@ -14,7 +14,7 @@
     "\n",
     "In this walkthrough we'll go over how to build a question-answering over documents application using LLMs. Two very related use cases which we cover elsewhere are:\n",
     "- [QA over structured data](/docs/use_cases/sql) (e.g., SQL)\n",
-    "- [QA over code](/docs/use_cases/code) (e.g., Python)\n",
+    "- [QA over code](/docs/use_cases/code_understanding) (e.g., Python)\n",
     "\n",
     "![intro.png](/img/qa_intro.png)\n",
     "\n",
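The first hunk adds `**kwargs: Any` to the custom LLM's `_call` signature, so that extra runtime parameters forwarded to the method don't raise `TypeError`. A minimal stand-alone sketch of the effect, assuming the same method shape as the docs example (plain Python rather than the real `langchain` base class; the `temperature` keyword is just an illustrative extra argument):

```python
from typing import Any, List, Optional


class CustomLLM:
    """Stand-in for a LangChain-style custom LLM (not the real base class)."""

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[Any] = None,
        **kwargs: Any,  # without this, unexpected keywords raise TypeError
    ) -> str:
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        # Echo the first 5 characters of the prompt, as in the docs example.
        return prompt[:5]


llm = CustomLLM()
# Before the change, passing an unexpected keyword such as `temperature`
# would raise TypeError; with **kwargs the call is simply tolerated.
print(llm._call("Hello, world", temperature=0.7))  # -> Hello
```

Accepting `**kwargs` keeps the custom `_call` forward-compatible with callers that pass additional keyword arguments through the invocation path.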