diff --git a/docs/scripts/model_feat_table.py b/docs/scripts/model_feat_table.py
index d02299f4d8..3ff5be4f80 100644
--- a/docs/scripts/model_feat_table.py
+++ b/docs/scripts/model_feat_table.py
@@ -33,8 +33,6 @@ sidebar_class_name: hidden
 
 # LLMs
 
-import DocCardList from "@theme/DocCardList";
-
 ## Features (natively supported)
 All LLMs implement the Runnable interface, which comes with default implementations of all methods, ie. `ainvoke`, `batch`, `abatch`, `stream`, `astream`. This gives all LLMs basic support for async, streaming and batch, which by default is implemented as below:
 - *Async* support defaults to calling the respective sync method in asyncio's default thread pool executor. This lets other async functions in your application make progress while the LLM is being executed, by moving this call to a background thread.
@@ -45,7 +43,6 @@ Each LLM integration can optionally provide native implementations for async, st
 
 {table}
 
-
 """
 
 CHAT_MODEL_TEMPLATE = """\
@@ -56,8 +53,6 @@ sidebar_class_name: hidden
 
 # Chat models
 
-import DocCardList from "@theme/DocCardList";
-
 ## Features (natively supported)
 All ChatModels implement the Runnable interface, which comes with default implementations of all methods, ie. `ainvoke`, `batch`, `abatch`, `stream`, `astream`. This gives all ChatModels basic support for async, streaming and batch, which by default is implemented as below:
 - *Async* support defaults to calling the respective sync method in asyncio's default thread pool executor. This lets other async functions in your application make progress while the ChatModel is being executed, by moving this call to a background thread.
@@ -69,7 +64,6 @@ The table shows, for each integration, which features have been implemented with
 
 {table}
 
-
 """
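
The doc text preserved in this diff's context lines says that `ainvoke` defaults to running the sync method in asyncio's default thread pool executor. As a minimal sketch of that pattern (not LangChain's actual source; the `Runnable` and `EchoLLM` classes here are toy stand-ins), the fallback can be written like this:

```python
import asyncio

class Runnable:
    """Toy stand-in for the interface described in the docs text above."""

    def invoke(self, input):
        raise NotImplementedError

    async def ainvoke(self, input):
        # Default async support: delegate the blocking sync call to the
        # event loop's default ThreadPoolExecutor (executor=None), so other
        # coroutines keep making progress while invoke() runs on a thread.
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(None, self.invoke, input)

class EchoLLM(Runnable):
    """Hypothetical toy model: only implements the sync method."""

    def invoke(self, input):
        return f"echo: {input}"

# ainvoke works even though EchoLLM never defined an async method itself.
result = asyncio.run(EchoLLM().ainvoke("hi"))
print(result)  # echo: hi
```

A native integration would instead override `ainvoke` with a true non-blocking implementation; the executor fallback only keeps the event loop responsive, it does not make the underlying call concurrent with itself.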