**Description:** Update the pathspec used by `git grep` in the Makefile
lint check.
**Issue:** The pathspec `{docs/docs,templates,cookbook}` is not handled
correctly (the braces reach git literally instead of being expanded), so
`make lint` fails with:
"fatal: ambiguous argument '{docs/docs,templates,cookbook}': unknown
revision or path not in the working tree."
See changes made in https://github.com/langchain-ai/langchain/pull/18058
Co-authored-by: Erick Friis <erick@langchain.dev>
### Description
Fixed a small bug in `chroma.py` `add_images()`: previously, when no
metadata was passed, the documents contained the base64 of the passed
uris, but when metadata was passed, the documents contained the plain
string uris, which should not be the case.
### Issue
In the `add_images()` method, the call to `upsert()` has to use
`b64_texts` instead of the plain string `uris`.
### Twitter handle
https://twitter.com/whitepegasus01
- [X] Gemini Agent Executor imported: `agent.py` contains a Gemini agent
executor that was not utilised in the current gemini-functions-agent
template 🧑💻; openai_function_agent was used instead
@sbusso @jarib please someone review it
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
* **Description:** adds `LlamafileEmbeddings` class implementation for
generating embeddings using
[llamafile](https://github.com/Mozilla-Ocho/llamafile)-based models.
Includes related unit tests and a notebook showing example usage; a brief
usage sketch also follows below.
* **Issue:** N/A
* **Dependencies:** N/A
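A brief usage sketch (assumes a llamafile has already been started locally in server mode with embeddings enabled; the `base_url` value shown is the assumed default):
```python
from langchain_community.embeddings import LlamafileEmbeddings

# Assumes a llamafile server is running with embeddings enabled, e.g.:
#   ./mxbai-embed-large-v1-f16.llamafile --server --nobrowser --embedding
embedder = LlamafileEmbeddings(base_url="http://localhost:8080")

query_vector = embedder.embed_query("What is a llamafile?")
doc_vectors = embedder.embed_documents(
    ["A llamafile bundles model weights and a runtime into a single file."]
)
print(len(query_vector), len(doc_vectors[0]))
```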
- [x] **PR message**: ***Delete this entire checklist*** and replace
with
- **Description:** Remove the assert statement on `count_documents` in
`setup_class`; it should simply delete any documents that are present
(see the sketch after this checklist).
- **Issue:** Crashes on class setup
- **Dependencies:** None
- **Twitter handle:** @mongodb
- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. N/A
- [ ] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.
If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.
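For clarity, a minimal sketch of the intended setup behaviour using plain `pymongo` (the connection string and collection names are placeholders, not the actual test code):
```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
collection = client["test_db"]["test_collection"]  # placeholder names

# Instead of asserting that the collection is empty, simply clear it if needed.
if collection.count_documents({}) > 0:
    collection.delete_many({})
```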
Co-authored-by: Jib <jib@byblack.us>
The current implementation doesn't have an indexed property that would
optimize the import. I have added a `baseEntityLabel` parameter that
allows you to add a secondary node label, which has an indexed `id`
property. By default, the behaviour is identical to the previous version.
Since multi-labeled nodes are terrible for text2cypher, I removed the
secondary label from the schema representation object and string, which
are used in text2cypher.
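A usage sketch (connection details are placeholders and the tiny graph document is hand-built purely for illustration):
```python
from langchain_community.graphs import Neo4jGraph
from langchain_community.graphs.graph_document import GraphDocument, Node, Relationship
from langchain_core.documents import Document

graph = Neo4jGraph(
    url="bolt://localhost:7687",  # placeholder connection details
    username="neo4j",
    password="password",
)

alice = Node(id="Alice", type="Person")
acme = Node(id="Acme", type="Organization")
doc = GraphDocument(
    nodes=[alice, acme],
    relationships=[Relationship(source=alice, target=acme, type="WORKS_AT")],
    source=Document(page_content="Alice works at Acme."),
)

# With baseEntityLabel=True every node also receives a secondary label that
# carries an indexed id property; with the default (False) behaviour is unchanged.
graph.add_graph_documents([doc], baseEntityLabel=True)
```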
**Description:**
(a) Update to the module import path to reflect the splitting up of
langchain into separate packages
(b) Update to the documentation to include the new calling method
(invoke)
This PR makes `cohere_api_key` in `llms/cohere` a SecretStr, so that the
API Key is not leaked when `Cohere.cohere_api_key` is represented as a
string.
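A quick sketch of the effect (assumes the `cohere` package is installed; the key is a placeholder):
```python
from langchain_community.llms import Cohere

llm = Cohere(cohere_api_key="my-placeholder-key")
print(llm.cohere_api_key)                     # masked: **********
print(llm.cohere_api_key.get_secret_value())  # explicit opt-in to the raw key
```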
---------
Signed-off-by: Arun <arun@arun.blog>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
**Description:**
The URL of the data to index, passed to `WebBaseLoader`, is incorrect,
causing the `langsmith_search` retriever to return a `404: NOT_FOUND`.
Incorrect URL: https://docs.smith.langchain.com/overview
Correct URL: https://docs.smith.langchain.com
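For reference, the corrected loader call looks roughly like this:
```python
from langchain_community.document_loaders import WebBaseLoader

# Corrected URL; the previous ".../overview" path returns a 404.
loader = WebBaseLoader("https://docs.smith.langchain.com")
docs = loader.load()
```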
**Issue:**
This commit corrects the URL and prevents the LangServe Playground from
returning an error caused by its inability to use the retriever when
asked, "how can langsmith help with testing?".
**Dependencies:**
None.
**Twitter Handle:**
@ryanmeinzer
**Description:** Fix the `metadata_extractor` type for `RecursiveUrlLoader`;
the default `_metadata_extractor` returns a `dict` instead of a `str`
(see the illustration below).
**Issue:** N/A
**Dependencies:** N/A
**Twitter handle:** N/A
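To illustrate the corrected typing, a sketch of a custom extractor that, like the default, returns a `dict` (the URL and metadata keys are examples only):
```python
from langchain_community.document_loaders import RecursiveUrlLoader


def metadata_extractor(raw_html: str, url: str) -> dict:
    # Matches the corrected type: the extractor returns a dict, not a str.
    return {"source": url, "content_length": len(raw_html)}


loader = RecursiveUrlLoader(
    url="https://docs.python.org/3.9/",  # example URL only
    max_depth=1,
    metadata_extractor=metadata_extractor,
)
docs = loader.load()
```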
Signed-off-by: Hemslo Wang <hemslo.wang@gmail.com>
- **Description:** Changing this line
```python
response = index.query(query, response_mode="no_text", **self.query_kwargs)
```
to
```python
response = index.query(query, **self.query_kwargs)
```
Since the llama index query no longer supports `response_mode`:
`TypeError: BaseQueryEngine.query() got an unexpected keyword argument 'response_mode'`
- **Twitter handle:** @maximeperrin_
---------
Co-authored-by: Maxime Perrin <mperrin@doing.fr>
- [ ] **PR title**: "cookbook: using Gemma on LangChain"
- [ ] **PR message**:
- **Description:** added a tutorial on how to use Gemma with LangChain
(from VertexAI, or locally from Kaggle or HF)
- **Dependencies:** langchain-google-vertexai==0.0.7
- **Twitter handle:** lkuligin
In this commit we update the documentation for Google El Carro for Oracle Workloads. We amend the documentation on the Google Providers page to use the correct name, which is El Carro for Oracle Workloads. We also update the document_loaders and memory pages to reflect changes we made in our repo.
If the document loader receives a Pathlib path instead of a str, it reads
the file correctly, but the problem begins when the document is added to
Deeplake.
The problem arises because the path is not cast to str before being
stored in the metadata.
```python
from pathlib import Path

from langchain.text_splitter import CharacterTextSplitter
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import Chroma, DeepLake

# embeddings, ds_path and activeloop_token are assumed to be defined elsewhere
deeplake = True
fname = Path('./lorem_ipsum.txt')  # a Path, not a str, triggers the problem
loader = TextLoader(fname, encoding="utf-8")
docs = loader.load_and_split()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = text_splitter.split_documents(docs)
if deeplake:
    db = DeepLake(dataset_path=ds_path, embedding=embeddings, token=activeloop_token)
    db.add_documents(chunks)
else:
    db = Chroma.from_documents(docs, embeddings)
```
Using this snippet of code, the error message for Deeplake looks like
this:
```
[part of error message omitted]
Traceback (most recent call last):
File "/home/mwm/repositories/sources/fixing_langchain/main.py", line 53, in <module>
db.add_documents(chunks)
File "/home/mwm/repositories/sources/langchain/libs/core/langchain_core/vectorstores.py", line 139, in add_documents
return self.add_texts(texts, metadatas, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mwm/repositories/sources/langchain/libs/community/langchain_community/vectorstores/deeplake.py", line 258, in add_texts
return self.vectorstore.add(
^^^^^^^^^^^^^^^^^^^^^
File "/home/mwm/anaconda3/envs/langchain/lib/python3.11/site-packages/deeplake/core/vectorstore/deeplake_vectorstore.py", line 226, in add
return self.dataset_handler.add(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mwm/anaconda3/envs/langchain/lib/python3.11/site-packages/deeplake/core/vectorstore/dataset_handlers/client_side_dataset_handler.py", line 139, in add
dataset_utils.extend_or_ingest_dataset(
File "/home/mwm/anaconda3/envs/langchain/lib/python3.11/site-packages/deeplake/core/vectorstore/vector_search/dataset/dataset.py", line 544, in extend_or_ingest_dataset
extend(
File "/home/mwm/anaconda3/envs/langchain/lib/python3.11/site-packages/deeplake/core/vectorstore/vector_search/dataset/dataset.py", line 505, in extend
dataset.extend(batched_processed_tensors, progressbar=False)
File "/home/mwm/anaconda3/envs/langchain/lib/python3.11/site-packages/deeplake/core/dataset/dataset.py", line 3247, in extend
raise SampleExtendError(str(e)) from e.__cause__
deeplake.util.exceptions.SampleExtendError: Failed to append a sample to the tensor 'metadata'. See more details in the traceback. If you wish to skip the samples that cause errors, please specify `ignore_errors=True`.
```
This does not explain the error well enough.
The same error for Chroma looks like this:
```
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/mwm/repositories/sources/fixing_langchain/main.py", line 56, in <module>
db = Chroma.from_documents(docs, embeddings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mwm/repositories/sources/langchain/libs/community/langchain_community/vectorstores/chroma.py", line 778, in from_documents
return cls.from_texts(
^^^^^^^^^^^^^^^
File "/home/mwm/repositories/sources/langchain/libs/community/langchain_community/vectorstores/chroma.py", line 736, in from_texts
chroma_collection.add_texts(
File "/home/mwm/repositories/sources/langchain/libs/community/langchain_community/vectorstores/chroma.py", line 309, in add_texts
raise ValueError(e.args[0] + "\n\n" + msg)
ValueError: Expected metadata value to be a str, int, float or bool, got lorem_ipsum.txt which is a <class 'pathlib.PosixPath'>
Try filtering complex metadata from the document using langchain_community.vectorstores.utils.filter_complex_metadata.
```
This is far more user friendly, so I added information about the possible
type mismatch to the error message, the same way it is handled in Chroma:
https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/chroma.py#L224
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
- **Description**:
[`bigdl-llm`](https://github.com/intel-analytics/BigDL) is a library for
running LLMs on Intel XPU (from laptop to GPU to cloud) using
INT4/FP4/INT8/FP8 with very low latency (for any PyTorch model). This PR
adds bigdl-llm integrations to langchain (a usage sketch follows below).
- **Issue**: NA
- **Dependencies**: `bigdl-llm` library
- **Contribution maintainer**: @shane-huang
Examples added:
- docs/docs/integrations/llms/bigdl.ipynb
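A rough usage sketch based on the example notebook (the class name, `from_model_id` signature, and model id are assumptions drawn from the notebook, so treat `docs/docs/integrations/llms/bigdl.ipynb` as the authoritative reference):
```python
# Assumed API, following the pattern shown in the bigdl integration notebook.
from langchain_community.llms.bigdl import BigdlLLM

llm = BigdlLLM.from_model_id(
    model_id="lmsys/vicuna-7b-v1.5",  # example Hugging Face model id
    model_kwargs={"temperature": 0, "max_length": 64, "trust_remote_code": True},
)
print(llm.invoke("What is BigDL?"))
```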
The Nvidia provider page is missing a Triton Inference Server package
reference.
Changes:
- added the Triton Inference Server reference
- copied the example notebook from the package into the doc files
- added the Triton Inference Server description and links, plus a link to
the above example notebook
- formatted the page to the consistent format
NOTE:
It seems that the [example
notebook](https://github.com/langchain-ai/langchain/blob/master/libs/partners/nvidia-trt/docs/llms.ipynb)
was originally created in the wrong place. It should be in the LangChain
docs
[here](https://github.com/langchain-ai/langchain/tree/master/docs/docs/integrations/llms).
So, I've created a copy of this example. The original example is still
in the nvidia-trt package.
Description:
- Changed the GitHub endpoint, as the existing one was not working and
returned a 404 not found error.
- The existing function also failed when `file_filter` was not passed,
because the tree API returns all paths, including directories; when
`get_file_content` iterated over these paths, it failed for directories
since the API returns the list of files inside the directory. Added a
condition to ignore a path if it is a directory (see the sketch below).
- Fixes this issue:
https://github.com/langchain-ai/langchain/issues/17453
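A standalone sketch of the directory check, using plain `requests` against the GitHub trees API to illustrate the idea (this is not the loader's actual code):
```python
import requests

# The git/trees API returns every path; "tree" entries are directories and must be skipped.
resp = requests.get(
    "https://api.github.com/repos/langchain-ai/langchain/git/trees/master?recursive=1",
    headers={"Accept": "application/vnd.github+json"},
)
for entry in resp.json()["tree"]:
    if entry["type"] != "blob":
        continue  # ignore directories instead of requesting their "content"
    path = entry["path"]
    # ...fetch and process the file content for `path` here...
```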
Co-authored-by: Radhika Bansal <Radhika.Bansal@veritas.com>
## Description
Updates the `langchain_community.embeddings.fastembed` provider as per
the recent updates to the
[`FastEmbed`](https://github.com/qdrant/fastembed) library.