From 158701ab3c49fe53854ab33d839dfd45c52ccaf1 Mon Sep 17 00:00:00 2001
From: Bagatur <22008038+baskaryan@users.noreply.github.com>
Date: Mon, 17 Jun 2024 12:13:31 -0700
Subject: [PATCH] docs: update universal init title (#22990)

---
 docs/docs/how_to/chat_models_universal_init.ipynb | 2 +-
 docs/docs/how_to/index.mdx                        | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/docs/how_to/chat_models_universal_init.ipynb b/docs/docs/how_to/chat_models_universal_init.ipynb
index ffa0714790..c77083cdfb 100644
--- a/docs/docs/how_to/chat_models_universal_init.ipynb
+++ b/docs/docs/how_to/chat_models_universal_init.ipynb
@@ -5,7 +5,7 @@
    "id": "cfdf4f09-8125-4ed1-8063-6feed57da8a3",
    "metadata": {},
    "source": [
-    "# How to let your end users choose their model\n",
+    "# How to init any model in one line\n",
     "\n",
     "Many LLM applications let end users specify what model provider and model they want the application to be powered by. This requires writing some logic to initialize different ChatModels based on some user configuration. The `init_chat_model()` helper method makes it easy to initialize a number of different model integrations without having to worry about import paths and class names.\n",
     "\n",
diff --git a/docs/docs/how_to/index.mdx b/docs/docs/how_to/index.mdx
index db4989a09c..63d4ab9707 100644
--- a/docs/docs/how_to/index.mdx
+++ b/docs/docs/how_to/index.mdx
@@ -79,7 +79,7 @@ These are the core building blocks you can use when building applications.
 - [How to: stream a response back](/docs/how_to/chat_streaming)
 - [How to: track token usage](/docs/how_to/chat_token_usage_tracking)
 - [How to: track response metadata across providers](/docs/how_to/response_metadata)
-- [How to: let your end users choose their model](/docs/how_to/chat_models_universal_init/)
+- [How to: init any model in one line](/docs/how_to/chat_models_universal_init/)
 
 ### LLMs
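
For context, the page being retitled documents the `init_chat_model()` helper described in the changed notebook text. A minimal sketch of the one-line initialization the new title refers to is below; the specific model names, provider strings, and installed packages are illustrative assumptions, not part of this patch.

```python
# Assumes: pip install langchain langchain-openai langchain-anthropic
# and that OPENAI_API_KEY / ANTHROPIC_API_KEY are set in the environment.
from langchain.chat_models import init_chat_model

# One helper initializes different providers without provider-specific
# imports or class names; model/provider values below are examples.
gpt_4o = init_chat_model("gpt-4o", model_provider="openai", temperature=0)
claude = init_chat_model(
    "claude-3-opus-20240229", model_provider="anthropic", temperature=0
)

print(gpt_4o.invoke("what's your name").content)
print(claude.invoke("what's your name").content)
```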