Vlad Kolesnikov 11fda490ca
community[minor]: New model parameters and dynamic batching for VertexAIEmbeddings (#13999)
- **Description:** VertexAIEmbeddings performance improvements
- **Twitter handle:** @vladkol

## Improvements

- Dynamic batch size, starting at 250 and backing off down to 5. Batch size
limits vary across regions: some regions support larger batches, which
significantly improves performance. When running large batches of texts in
`us-central1`, the performance gain can be up to 3.5x. Dynamic batching also
ensures every batch stays below the 20K-token limit. A simplified sketch of
the back-off idea follows this list.
- New model parameter `embeddings_type` that translates to the `task_type`
parameter of the API. Newer model versions support [different embeddings task
types](https://cloud.google.com/vertex-ai/docs/generative-ai/embeddings/get-text-embeddings#api_changes_to_models_released_on_or_after_august_2023).
A usage sketch follows this list.
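
The snippet below is a simplified, hypothetical sketch of the batch-size back-off idea described above, not the code from this PR. The helper name `embed_with_backoff`, the halving strategy, and the generic exception handling are illustrative assumptions.

```python
# Hypothetical sketch of dynamic batch-size back-off (not the PR's implementation).
# Assumes a callable `embed_batch(texts)` that raises when a batch is too large
# for the current region or exceeds the token limit.
from typing import Callable, List

MAX_BATCH_SIZE = 250  # starting batch size
MIN_BATCH_SIZE = 5    # smallest batch size to fall back to


def embed_with_backoff(
    texts: List[str],
    embed_batch: Callable[[List[str]], List[List[float]]],
) -> List[List[float]]:
    """Embed `texts`, shrinking the batch size whenever a batch is rejected."""
    batch_size = MAX_BATCH_SIZE
    results: List[List[float]] = []
    i = 0
    while i < len(texts):
        batch = texts[i : i + batch_size]
        try:
            results.extend(embed_batch(batch))
            i += len(batch)  # advance only after a successful batch
        except Exception:
            if batch_size <= MIN_BATCH_SIZE:
                raise  # nothing smaller to try
            batch_size = max(MIN_BATCH_SIZE, batch_size // 2)  # back off and retry
    return results
```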
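
A minimal usage sketch for the new parameter, assuming `embeddings_type` is passed when constructing `VertexAIEmbeddings`, as the description suggests; the exact parameter placement and the model name shown here may differ from the final API.

```python
# Minimal usage sketch; `embeddings_type` values map to the API's `task_type`,
# e.g. "RETRIEVAL_DOCUMENT" for indexing and "RETRIEVAL_QUERY" for queries.
# The model name is illustrative.
from langchain_community.embeddings import VertexAIEmbeddings

doc_embeddings = VertexAIEmbeddings(
    model_name="textembedding-gecko@003",
    embeddings_type="RETRIEVAL_DOCUMENT",
)
vectors = doc_embeddings.embed_documents(
    ["VertexAIEmbeddings now batches requests dynamically."]
)

query_embeddings = VertexAIEmbeddings(
    model_name="textembedding-gecko@003",
    embeddings_type="RETRIEVAL_QUERY",
)
query_vector = query_embeddings.embed_query("How are requests batched?")
```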