Cohere released a new embedding API (Embed v3:
https://txt.cohere.com/introducing-embed-v3/) that treats document and
query embeddings differently. This PR updates `CohereEmbeddings` to
use them appropriately. It also remains compatible with the old models.
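A minimal sketch of how the updated class can be used with a v3 model, assuming a `COHERE_API_KEY` is set in the environment; the model name and texts are illustrative:
```python
from langchain.embeddings import CohereEmbeddings

# Embed v3 distinguishes document vs. query inputs; the wrapper now sets
# the appropriate input type internally for each call.
embeddings = CohereEmbeddings(model="embed-english-v3.0")

doc_vectors = embeddings.embed_documents(["LangChain integrates with Cohere."])
query_vector = embeddings.embed_query("Which providers does LangChain support?")
```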
Description: This PR masks API key secrets for the Nebula model from
Symbl.ai
Issue: #12165
Maintainer: @eyurtsev
---------
Co-authored-by: Praveen Venkateswaran <praveen.venkateswaran@ibm.com>
* ChatAnyscale was missing coercion to SecretStr for the Anyscale API key.
* Since the model inherits from ChatOpenAI, it should not force the OpenAI
API key to be a SecretStr until the OpenAI model has the same change.
https://github.com/langchain-ai/langchain/issues/12841
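Below is a minimal sketch of the masking pattern, assuming a pydantic v1-style model as LangChain used at the time; the class and field names are illustrative, not the actual implementation:
```python
import os

from pydantic import BaseModel, SecretStr, root_validator


class ExampleChatModel(BaseModel):
    # Typed as SecretStr so the key is masked in repr() and dict() output.
    anyscale_api_key: SecretStr

    @root_validator(pre=True)
    def _resolve_api_key(cls, values):
        # Fall back to the environment variable; pydantic coerces the plain
        # string into a SecretStr because of the field type.
        values.setdefault("anyscale_api_key", os.environ.get("ANYSCALE_API_KEY", ""))
        return values


model = ExampleChatModel(anyscale_api_key="sk-example")
print(model.anyscale_api_key)                     # **********
print(model.anyscale_api_key.get_secret_value())  # actual key for API calls
```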
- **Description:** Remove the text "LangChain currently does not support",
which appears to be a vestigial leftover from a previous change.
- **Issue:** N/A
- **Dependencies:** N/A
- **Tag maintainer:** @baskaryan, @eyurtsev
- **Twitter handle:** thezanke
- **Description:** Noticed that the Hugging Face Pipeline documentation
was a bit out of date.
Updated it with information about passing in a pipeline directly
(consistent with the docstring) and a recent contribution of mine adding
support for multi-GPU specification with Accelerate in
21eeba075c
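A minimal sketch of the documented pattern, assuming the `transformers` and `accelerate` packages are installed; the model name is illustrative:
```python
from transformers import pipeline
from langchain.llms import HuggingFacePipeline

# Build a transformers pipeline directly; device_map="auto" lets Accelerate
# spread the model across the available GPUs.
pipe = pipeline(
    "text-generation",
    model="gpt2",
    device_map="auto",
    max_new_tokens=64,
)

# Pass the pipeline straight into the LangChain wrapper.
llm = HuggingFacePipeline(pipeline=pipe)
print(llm("Hugging Face pipelines in LangChain"))
```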
Qdrant was incorrectly calculating the cosine similarity and returning
`0.0` for the best match instead of `1.0`. Internally Qdrant returns a
cosine score from `-1.0` (worst match) to `1.0` (best match), and the
updated formula now reflects that.
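A minimal sketch of the normalization described above, assuming the relevance score is expected to fall in `[0.0, 1.0]`:
```python
def cosine_relevance_score(score: float) -> float:
    """Map Qdrant's cosine score in [-1.0, 1.0] to a relevance in [0.0, 1.0]."""
    # A perfect match (score == 1.0) now yields 1.0 instead of 0.0.
    return (score + 1.0) / 2.0


assert cosine_relevance_score(1.0) == 1.0   # best match
assert cosine_relevance_score(-1.0) == 0.0  # worst match
```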
Adds the possibility to pass `on_artifact` to a conversation. This can
be achieved as follows:
```python
result = agent.run(
    input=message.text,
    metadata={
        "on_artifact": CALLBACK_FUNCTION,
    },
)
```
The removed line is not required, since no alternative solutions are
presented above it.
This patch fixes a spelling typo in a message
within wikibase_agent.ipynb.
Signed-off-by: Masanari Iida <standby24x7@gmail.com>
This PR adds a self-querying template using Qdrant as a vector store.
The template uses an artificial dataset and was implemented in a way
that simplifies passing different components and choosing LLM and
embedding providers.
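A minimal sketch of how such a template might be wired up, assuming OpenAI for the LLM and embeddings and an in-memory Qdrant collection; the documents and metadata fields are illustrative, not the template's actual dataset:
```python
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import SelfQueryRetriever
from langchain.schema import Document
from langchain.vectorstores import Qdrant

# Artificial documents with filterable metadata (illustrative fields).
docs = [
    Document(page_content="A sci-fi movie about space travel",
             metadata={"year": 2019, "genre": "sci-fi"}),
    Document(page_content="A comedy set in a small town",
             metadata={"year": 2005, "genre": "comedy"}),
]

vectorstore = Qdrant.from_documents(
    docs, OpenAIEmbeddings(), location=":memory:", collection_name="self-query-demo"
)

metadata_field_info = [
    AttributeInfo(name="year", description="Release year", type="integer"),
    AttributeInfo(name="genre", description="Genre of the movie", type="string"),
]

retriever = SelfQueryRetriever.from_llm(
    ChatOpenAI(temperature=0),
    vectorstore,
    "Brief summary of a movie",
    metadata_field_info,
)

retriever.get_relevant_documents("sci-fi movies released after 2010")
```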
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
Calls uvicorn directly from the CLI:
Reload works if you define the app by import string instead of by object.
(Previously a subprocess was used in order to get reloading.)
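A minimal sketch of the difference, assuming a FastAPI app exposed at `app.server:app`; the module path is illustrative:
```python
import uvicorn

# Passing the app as an import string allows reload to work, because
# uvicorn can re-import the module when files change.
uvicorn.run("app.server:app", host="0.0.0.0", port=8000, reload=True)

# Passing the app object directly would disable reloading:
# uvicorn.run(app, host="0.0.0.0", port=8000)
```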
Version bump to 0.0.14.
Removes the need for `[serve]` for simplicity.
READMEs are updated in #12847 to avoid cluttering this PR.
Previously we treated trace_on_chain_group as a command to always start
tracing. This is unintuitive (it makes the function do two things) and
makes it harder to toggle tracing.
**Description**
Removed a confusing sentence.
It was not clear what "both" was referring to: the two required
components mentioned previously, or the two methods listed below?
---------
Co-authored-by: Erick Friis <erick@langchain.dev>