GPT-3.5 sometimes calls functions with an empty string for the arguments
instead of `{}`. I'd assume it's because the TypeScript representation on
their backend makes it a bit ambiguous.
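A minimal sketch of the kind of guard this implies, assuming a hypothetical helper that parses the raw arguments string (the real fix lives in the OpenAI function/tool-call parsing code):
```
import json


def parse_function_arguments(raw: str) -> dict:
    # GPT-3.5 occasionally sends "" instead of "{}" when a call takes no
    # arguments; treat an empty string as an empty argument object.
    if not raw or not raw.strip():
        return {}
    return json.loads(raw)
```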
- **Description:** VertexAIEmbeddings performance improvements
- **Twitter handle:** @vladkol
## Improvements
- Dynamic batch size, starting from 250 and lowering down to 5. Batch size
varies across regions; some regions support larger batches, which
significantly improves performance. When running large batches of texts in
`us-central1`, the performance gain can be up to 3.5x. The dynamic batching
also makes sure every batch stays below the 20K-token limit.
- New model parameter `embeddings_type` that translates to the `task_type`
parameter of the API. Newer model versions support [different embeddings
task
types](https://cloud.google.com/vertex-ai/docs/generative-ai/embeddings/get-text-embeddings#api_changes_to_models_released_on_or_after_august_2023).
See the usage sketch after this list.
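A hedged usage sketch of the new parameter; the import path and model name below are assumptions, not part of this change:
```
# Import path and model name are assumptions; adjust for your installed version.
from langchain.embeddings import VertexAIEmbeddings

# `embeddings_type` is forwarded to the API's `task_type` parameter on model
# versions that support it.
embeddings = VertexAIEmbeddings(
    model_name="textembedding-gecko@003",
    embeddings_type="RETRIEVAL_DOCUMENT",
)

# Batches are sized dynamically (starting at 250 and backing off to 5) and kept
# under the 20K-token limit, so a large list can be passed in directly.
vectors = embeddings.embed_documents(["first document", "second document"])
```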
Now that it's supported again for OAI chat models. Shame this doesn't
include it in the `.invoke()` output though (it's not included in the
message itself); a follow-up would be needed for that to be the case.
Fixed:
- `_agenerate` return value in the YandexGPT Chat Model
- duplicate line in the documentation
Co-authored-by: Dmitry Tyumentsev <dmitry.tyumentsev@raftds.com>
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Builds out a developer documentation section in the docs
- Links it from contributing.md
- Adds an initial guide on how to contribute an integration
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Adds the option for `similarity_score_threshold` when using
`MongoDBAtlasVectorSearch` as a vector store retriever.
Example use:
```
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
from langchain.vectorstores import MongoDBAtlasVectorSearch

vector_search = MongoDBAtlasVectorSearch.from_documents(...)

# Only return documents whose similarity score meets the threshold
qa_retriever = vector_search.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={
        "score_threshold": 0.5,
    },
)

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="stuff",
    retriever=qa_retriever,
)

docs = qa({"query": "..."})
```
I've tested this feature locally, using a MongoDB Atlas Cluster with a
vector search index.
… (#14723)
- **Description:** Minor updates per marketing requests. Namely, name
decisions (AI Foundation Models / AI Playground)
- **Tag maintainer:** @hinthornw
Do want to pass around the PR for a bit and ask a few more marketing
questions before merge, but just want to make sure I'm not working in a
vacuum. No major changes to code functionality intended; the PR should
be for documentation and only minor tweaks.
Note: the QA model is a bit borked across staging/prod right now. Relevant
teams have been informed and are looking into it, and I've placeholdered
the response in the notebook with that of a working version.
Co-authored-by: Vadim Kudlay <32310964+VKudlay@users.noreply.github.com>
- **Description:** added support for new Google GenerativeAI models
- **Twitter handle:** lkuligin
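A hedged usage sketch, assuming the `langchain_google_genai` package, the `ChatGoogleGenerativeAI` class, and the `gemini-pro` model name; these may not match exactly what this change covers:
```
# Package, class, and model names here are assumptions; check the docs for
# what this release actually supports. Requires a GOOGLE_API_KEY env var.
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro")
print(llm.invoke("Summarize what LangChain does in one sentence.").content)
```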
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
Hi! Just a simple typo fix in the local LLM Python docs.
- **Description:** removing a trailing "\`" character in a `!pip install
...` command
- **Issue:** n/a
- **Dependencies:** n/a
- **Tag maintainer:** n/a
- **Twitter handle:** n/a
Description: Initial NVIDIA AI Playground support for a selection of models (Llama models, Mistral, etc.)
Dependencies: These models depend on the AI Playground services in NVIDIA NGC. API keys with a significant amount of trial compute are available (10K queries as of the time of writing).
H/t to @VKudlay
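A hedged usage sketch; the package, class, and model names below are assumptions based on the NVIDIA LangChain integration and may differ from what this PR actually adds:
```
import os

# Assumes an API key from NVIDIA NGC / AI Playground.
os.environ["NVIDIA_API_KEY"] = "nvapi-..."

# Package and class names are assumptions; check the integration docs.
from langchain_nvidia_ai_endpoints import ChatNVIDIA

llm = ChatNVIDIA(model="mixtral_8x7b")
print(llm.invoke("What is NVIDIA AI Playground?").content)
```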