Mirror of https://github.com/hwchase17/langchain, synced 2024-11-06 03:20:49 +00:00

Commit 11fda490ca
- **Description:** VertexAIEmbeddings performance improvements
- **Twitter handle:** @vladkol

## Improvements

- Dynamic batch size, starting at 250 and lowering down to 5. Batch size varies across regions: some regions support larger batches, which significantly improves performance. When running large batches of texts in `us-central1`, the performance gain can be up to 3.5x. The dynamic batching also ensures that every batch stays below the 20K-token limit (see the batching sketch below).
- New model parameter `embeddings_type` that translates to the `task_type` parameter of the API. Newer model versions support [different embeddings task types](https://cloud.google.com/vertex-ai/docs/generative-ai/embeddings/get-text-embeddings#api_changes_to_models_released_on_or_after_august_2023) (see the usage sketch below).
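The dynamic batching lives inside `VertexAIEmbeddings` itself; the sketch below only illustrates the token-budget idea under stated assumptions. The `count_tokens` callback, the demo texts, and the splitting policy are hypothetical; the 20K-token budget and the 250-to-5 batch-size range come from the description above.

```python
from typing import Callable, Iterator, List

# Figures taken from the PR description above; the rest of this sketch is hypothetical.
TOKEN_BUDGET = 20_000  # per-batch token limit
MAX_BATCH = 250        # starting batch size
MIN_BATCH = 5          # smallest batch size the dynamic sizing may fall back to


def make_batches(
    texts: List[str],
    count_tokens: Callable[[str], int],
    batch_size: int = MAX_BATCH,
) -> Iterator[List[str]]:
    """Yield batches of at most `batch_size` texts, each kept under TOKEN_BUDGET tokens."""
    batch_size = max(MIN_BATCH, min(batch_size, MAX_BATCH))
    batch: List[str] = []
    tokens_in_batch = 0
    for text in texts:
        text_tokens = count_tokens(text)
        # Close the current batch when either the size cap or the token budget would be exceeded.
        if batch and (len(batch) >= batch_size or tokens_in_batch + text_tokens > TOKEN_BUDGET):
            yield batch
            batch, tokens_in_batch = [], 0
        batch.append(text)
        tokens_in_batch += text_tokens
    if batch:
        yield batch


# Hypothetical usage: approximate token counts by whitespace-separated words.
batches = list(make_batches(["some text"] * 1_000, count_tokens=lambda t: len(t.split())))
```

In a region that only accepts smaller requests, the same helper would simply be called with a smaller `batch_size`, which is the "lowering down to 5" behavior described above.

A minimal usage sketch of the new parameter follows. The import path, model name, and exact placement of `embeddings_type` in the constructor are assumptions that may differ between LangChain releases; `RETRIEVAL_DOCUMENT` is one of the task types listed in the linked Vertex AI docs.

```python
# Assumed import path; older releases expose the class from `langchain.embeddings`.
from langchain_community.embeddings import VertexAIEmbeddings

embeddings = VertexAIEmbeddings(
    model_name="textembedding-gecko@003",   # illustrative model released after August 2023
    embeddings_type="RETRIEVAL_DOCUMENT",   # translated to the API's `task_type` parameter
)

# Texts are batched internally (dynamic batch size, 20K-token budget) before each request.
vectors = embeddings.embed_documents(
    ["LangChain integrates with Vertex AI text embeddings."]
)
query_vector = embeddings.embed_query("How do Vertex AI text embeddings work?")
```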
Files in this directory:

- __init__.py
- test_deterministic_embedding.py
- test_gradient_ai.py
- test_imports.py
- test_infinity.py
- test_openai.py
- test_vertexai.py