From 869a49a0ab36b26ee596a8f9ded01f6507d17f8d Mon Sep 17 00:00:00 2001
From: Leonid Ganeline
Date: Wed, 25 Oct 2023 19:13:44 -0700
Subject: [PATCH] removed CardLists for LLMs and ChatModels (#12307)

Problem statement: In the `integrations/llms` and `integrations/chat` pages,
we have a sidebar with a ToC, and we also have a ToC at the end of the page.
The ToC at the end of the page is not necessary, and it is confusing when we
mix the index page styles; moreover, it requires manual work. So, I removed
the ToC at the end of the page (it was discussed with and approved by
@baskaryan).
---
 docs/scripts/model_feat_table.py | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/docs/scripts/model_feat_table.py b/docs/scripts/model_feat_table.py
index d02299f4d8..3ff5be4f80 100644
--- a/docs/scripts/model_feat_table.py
+++ b/docs/scripts/model_feat_table.py
@@ -33,8 +33,6 @@ sidebar_class_name: hidden
 
 # LLMs
 
-import DocCardList from "@theme/DocCardList";
-
 ## Features (natively supported)
 All LLMs implement the Runnable interface, which comes with default implementations of all methods, ie. `ainvoke`, `batch`, `abatch`, `stream`, `astream`. This gives all LLMs basic support for async, streaming and batch, which by default is implemented as below:
 - *Async* support defaults to calling the respective sync method in asyncio's default thread pool executor. This lets other async functions in your application make progress while the LLM is being executed, by moving this call to a background thread.
@@ -45,7 +43,6 @@ Each LLM integration can optionally provide native implementations for async, st
 
 {table}
 
-<DocCardList />
 """
 
 CHAT_MODEL_TEMPLATE = """\
@@ -56,8 +53,6 @@ sidebar_class_name: hidden
 
 # Chat models
 
-import DocCardList from "@theme/DocCardList";
-
 ## Features (natively supported)
 All ChatModels implement the Runnable interface, which comes with default implementations of all methods, ie. `ainvoke`, `batch`, `abatch`, `stream`, `astream`.
 This gives all ChatModels basic support for async, streaming and batch, which by default is implemented as below:
 - *Async* support defaults to calling the respective sync method in asyncio's default thread pool executor. This lets other async functions in your application make progress while the ChatModel is being executed, by moving this call to a background thread.
@@ -69,7 +64,6 @@ The table shows, for each integration, which features have been implemented with
 
 {table}
 
-<DocCardList />
 """
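Both templates describe the same fallback: by default, `ainvoke` runs the sync method in asyncio's default thread pool executor, so the event loop stays free while the model call blocks. A minimal sketch of that pattern — the `invoke` body here is a hypothetical stand-in, not LangChain's actual implementation:

```python
import asyncio


def invoke(prompt: str) -> str:
    # Hypothetical stand-in for a blocking, synchronous model call.
    return prompt.upper()


async def ainvoke(prompt: str) -> str:
    # Default async support: hand the sync call to the default thread
    # pool executor (None) so other coroutines keep making progress.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, invoke, prompt)


print(asyncio.run(ainvoke("hello")))  # -> HELLO
```

The same delegation idea underlies the default `batch`/`abatch` support: the sync method is simply invoked once per input, optionally across threads.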