* **Description:** adds a `LlamafileEmbeddings` class implementation for
generating embeddings using
[llamafile](https://github.com/Mozilla-Ocho/llamafile)-based models.
Includes related unit tests and a notebook showing example usage.
* **Issue:** N/A
* **Dependencies:** N/A
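For reference, a minimal usage sketch, assuming a llamafile server is already running locally with embeddings enabled (per the included notebook):

```python
# Minimal sketch: assumes a llamafile server was started beforehand with
# embeddings enabled, e.g. `./model.llamafile --server --nobrowser --embedding`,
# and is reachable at the default http://localhost:8080.
from langchain_community.embeddings import LlamafileEmbeddings

embedder = LlamafileEmbeddings()
doc_vectors = embedder.embed_documents(["Alpha document", "Beta document"])
query_vector = embedder.embed_query("What is in the alpha document?")
```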
**Description:**
(a) Updates the module import path to reflect the splitting of
langchain into separate packages.
(b) Updates the documentation to use the new calling method
(`invoke`).
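A before/after sketch of the pattern this docs update applies (the `OpenAI` class below is purely illustrative, not necessarily the module this PR touches):

```python
# Before: monolithic import and __call__-style invocation
#   from langchain.llms import OpenAI
#   llm = OpenAI()
#   llm("Tell me a joke")

# After: import from the split-out package and call .invoke()
from langchain_openai import OpenAI  # illustrative package/class

llm = OpenAI()
print(llm.invoke("Tell me a joke"))
```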
**Description:**
The URL of the data to index, passed to `WebBaseLoader`, is
incorrect, causing the `langsmith_search` retriever to return a `404:
NOT_FOUND`.
Incorrect URL: https://docs.smith.langchain.com/overview
Correct URL: https://docs.smith.langchain.com
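For illustration, loading from the corrected URL (a sketch; the tutorial's actual loader setup may differ slightly):

```python
from langchain_community.document_loaders import WebBaseLoader

# The old ".../overview" path responds with 404: NOT_FOUND.
loader = WebBaseLoader("https://docs.smith.langchain.com")
docs = loader.load()
```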
**Issue:**
This commit corrects the URL, which prevents the LangServe Playground
from returning an error when the retriever is used to answer "how can
langsmith help with testing?".
**Dependencies:**
None.
**Twitter Handle:**
@ryanmeinzer
In this commit we update the documentation for Google El Carro for Oracle Workloads. We amend the documentation on the Google Providers page to use the correct name, which is El Carro for Oracle Workloads. We also update the document_loaders and memory pages to reflect changes made in our repo.
- **Description**:
[`bigdl-llm`](https://github.com/intel-analytics/BigDL) is a library for
running LLMs on Intel XPU (from laptop to GPU to cloud) using
INT4/FP4/INT8/FP8 with very low latency (for any PyTorch model). This PR
adds `bigdl-llm` integrations to LangChain.
- **Issue**: NA
- **Dependencies**: `bigdl-llm` library
- **Contribution maintainer**: @shane-huang
Examples added:
- docs/docs/integrations/llms/bigdl.ipynb
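A rough usage sketch modeled on the example notebook; the exact class name, export location, and parameters are assumptions here:

```python
from langchain_community.llms import BigdlLLM  # assumed export location

# Load any Hugging Face transformers model in low-bit format on Intel XPU/CPU.
llm = BigdlLLM.from_model_id(
    model_id="lmsys/vicuna-7b-v1.5",  # placeholder model id
    model_kwargs={"temperature": 0.2, "max_length": 64},
)
print(llm.invoke("What is the capital of France?"))
```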
The Nvidia provider page is missing a Triton Inference Server package
reference.
Changes:
- added the Triton Inference Server reference
- copied the example notebook from the package into the docs
- added the Triton Inference Server description and links, including a
link to the example notebook above
- formatted the page to match the consistent provider-page format
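A short usage sketch from the referenced package; the endpoint and model name below are placeholders, not values from this PR:

```python
from langchain_nvidia_trt.llms import TritonTensorRTLLM

llm = TritonTensorRTLLM(
    server_url="localhost:8001",  # placeholder: Triton gRPC endpoint
    model_name="ensemble",        # placeholder: TensorRT-LLM model on the server
)
print(llm.invoke("What is Triton Inference Server?"))
```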
NOTE:
It seems that the [example
notebook](https://github.com/langchain-ai/langchain/blob/master/libs/partners/nvidia-trt/docs/llms.ipynb)
was originally created in the wrong place. It should be in the LangChain
docs
[here](https://github.com/langchain-ai/langchain/tree/master/docs/docs/integrations/llms).
So, I've created a copy of this example. The original example is still
in the nvidia-trt package.
This PR migrates the existing MongoDBAtlasVectorSearch abstraction from
the `langchain_community` section to the partners package section of the
codebase.
- [x] Run the partner package script as advised in the partner-packages
documentation.
- [x] Add Unit Tests
- [x] Migrate Integration Tests
- [x] Refactor `MongoDBAtlasVectorStore` (autogenerated) to
`MongoDBAtlasVectorSearch`
- [x] ~Remove~ deprecate the old `langchain_community` VectorStore
references.
## Additional Callouts
- Implemented the `delete` method
- Included any missing async function implementations
- `amax_marginal_relevance_search_by_vector`
- `adelete`
- Added new Unit Tests that test the functionality of
`MongoDBAtlasVectorSearch` methods
- Removed [`del
res[self._embedding_key]`](e0c81e1cb0/libs/community/langchain_community/vectorstores/mongodb_atlas.py (L218))
in the `_similarity_search_with_score` function, since it caused the
`maximal_marginal_relevance` function to fail; the `Document` needs to
keep the embedding key in its metadata for that function to work.
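A minimal sketch of the migrated import path (the connection string, namespace, and index name below are placeholders):

```python
from langchain_mongodb import MongoDBAtlasVectorSearch
from langchain_openai import OpenAIEmbeddings  # any Embeddings implementation

store = MongoDBAtlasVectorSearch.from_connection_string(
    "mongodb+srv://<user>:<password>@<cluster>/",  # placeholder URI
    namespace="my_db.my_collection",
    embedding=OpenAIEmbeddings(),
    index_name="vector_index",
)
store.add_texts(["LangChain and Atlas"])
docs = store.similarity_search("Atlas", k=1)
# The new `delete` method and async variants (`adelete`,
# `amax_marginal_relevance_search_by_vector`) are also available.
```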
Checklist:
- [x] PR title: Please title your PR "package: description", where
"package" is whichever of langchain, community, core, experimental, etc.
is being modified. Use "docs: ..." for purely docs changes, "templates:
..." for template changes, "infra: ..." for CI changes.
- Example: "community: add foobar LLM"
- [x] PR message
- [x] Pass lint and test: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified to check that you're
passing lint and testing. See contribution guidelines for more
information on how to write/run tests, lint, etc:
https://python.langchain.com/docs/contributing/
- [x] Add tests and docs: If you're adding a new integration, please
include
1. Existing tests supplied in docs/docs do not change; docstrings were
updated for new functions like `delete`.
2. An example notebook showing its use. It lives in the
`docs/docs/integrations` directory (this already exists).
If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.
---------
Co-authored-by: Steven Silvester <steven.silvester@ieee.org>
Co-authored-by: Erick Friis <erick@langchain.dev>
This PR adds links to some more free resources for people to get
acquainted with LangChain without having to configure their system.
Co-authored-by: Filip Schouwenaars <filipsch@users.noreply.github.com>
**Description:**
In this PR, I am adding a `PolygonFinancials` tool, which can be used to
fetch financials data for a given ticker. The financials data is the
fundamental data found in the income statements, balance sheets, and
cash flow statements of public US companies.
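A hedged sketch of invoking the tool directly; it assumes a `POLYGON_API_KEY` environment variable, and the module paths follow the existing Polygon tools in the community package:

```python
from langchain_community.tools.polygon.financials import PolygonFinancials
from langchain_community.utilities.polygon import PolygonAPIWrapper

tool = PolygonFinancials(api_wrapper=PolygonAPIWrapper())
print(tool.run("AAPL"))  # financials for the given ticker
```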
**Twitter**:
[@virattt](https://twitter.com/virattt)
Several URLs were broken (in yesterday's PR), like the
[Integrations/platforms/google/Document
Loaders](https://python.langchain.com/docs/integrations/platforms/google#document-loaders)
page, the example link to "Document Loaders / Cloud SQL for PostgreSQL",
and most of the new example links in the Document Loaders, Vectorstores,
and Memory sections.
- fixed the URLs (manually verified all example links)
- sorted sections on the page to follow the "integrations/components"
menu item order
- fixed several page titles to fix the Navbar item order
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
**Description:** Update to the list of partner packages in the list of
providers
**Issue:** Google & Nvidia had two entries each, both pointing to the
same page
**Dependencies:** None
- **Description:** A generic document loader adapter for SQLAlchemy on
top of LangChain's `SQLDatabaseLoader`.
- **Needed by:** https://github.com/crate-workbench/langchain/pull/1
- **Depends on:** GH-16655
- **Addressed to:** @baskaryan, @cbornet, @eyurtsev
Hi from CrateDB again,
in the same spirit as GH-16243 and GH-16244, this patch breaks out
another commit from https://github.com/crate-workbench/langchain/pull/1,
in order to reduce the size of that patch before submitting it, and to
separate concerns.
To accompany the SQLAlchemy adapter implementation, the patch includes
integration tests for both SQLite and PostgreSQL. Let me know if
corresponding utility resources should be added at different spots.
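For orientation, a minimal sketch of the adapter; the parameter names follow this patch's implementation and may differ, and the table and column names are placeholders:

```python
from langchain_community.document_loaders.sql_database import SQLDatabaseLoader
from langchain_community.utilities.sql_database import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///example.db")  # works with PostgreSQL too
loader = SQLDatabaseLoader(
    query="SELECT id, text FROM documents",  # placeholder query
    db=db,
)
docs = loader.load()  # one Document per result row
```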
With kind regards,
Andreas.
### Software Tests
```console
docker compose --file libs/community/tests/integration_tests/document_loaders/docker-compose/postgresql.yml up
```
```console
cd libs/community
pip install psycopg2-binary
pytest -vvv tests/integration_tests -k sqldatabase
```
```console
14 passed
```
![image](https://github.com/langchain-ai/langchain/assets/453543/42be233c-eb37-4c76-a830-474276e01436)
---------
Co-authored-by: Andreas Motl <andreas.motl@crate.io>
**Description**: This PR adds support for using the [LLMLingua
project](https://github.com/microsoft/LLMLingua), especially LongLLMLingua
(Enhancing Large Language Model Inference via Prompt Compression), as a
document compressor / transformer.
LLMLingua is an interesting project that can greatly improve RAG
systems by compressing prompts and contexts while keeping their
semantic relevance.
**Issue**: https://github.com/microsoft/LLMLingua/issues/31
**Dependencies**: [llmlingua](https://pypi.org/project/llmlingua/)
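A minimal sketch of the compressor in isolation; the export location and constructor argument are assumptions based on this PR and LLMLingua's defaults:

```python
from langchain_core.documents import Document
from langchain_community.document_compressors import LLMLinguaCompressor  # assumed export

compressor = LLMLinguaCompressor(device_map="cpu")  # assumed parameter
compressed = compressor.compress_documents(
    documents=[Document(page_content="LLMLingua compresses long contexts...")],
    query="What does LLMLingua do?",
)
```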
@baskaryan
---------
Co-authored-by: Ayodeji Ayibiowu <ayodeji.ayibiowu@getinge.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
### Description
This PR moves the Elasticsearch classes to a partners package.
Note that we will not move (and will later remove) `ElasticKnnSearch`,
which was previously deprecated.
`ElasticVectorSearch` is going to stay in the community package, since
it is still used quite a lot.
Also note that I left the `ElasticsearchTranslator` for self-query
untouched, because it resides in the main `langchain` package.
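For users, the end state should look roughly like this (a sketch; see the open question below about import aliases, and the endpoint is a placeholder):

```python
# Before: community package (to be deprecated in the follow-up PR)
#   from langchain_community.vectorstores import ElasticsearchStore

# After: dedicated partner package
from langchain_elasticsearch import ElasticsearchStore
from langchain_openai import OpenAIEmbeddings  # any Embeddings implementation

store = ElasticsearchStore(
    index_name="my_index",
    embedding=OpenAIEmbeddings(),
    es_url="http://localhost:9200",  # placeholder endpoint
)
```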
### Dependencies
There will be another PR that updates the notebooks (potentially pulling
them into the partners package) and templates and removes the classes
from the community package, see
https://github.com/langchain-ai/langchain/pull/17468
#### Open question
How do we make the transition smooth for users? Do we move the import
aliases and require people to install `langchain-elasticsearch`? Or do
we remove the import aliases from the `langchain` package altogether?
What has worked well for other partner packages?
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
**Description**
This PR adds different threshold types to the semantic chunker. I've had
much better and more predictable performance when using standard
deviations instead of percentiles.
![image](https://github.com/langchain-ai/langchain/assets/44395485/066e84a8-460e-4da5-9fa1-4ff79a1941c5)
For all the documents I've tried, the distribution of distances looks
similar to the above: a positively skewed normal distribution. All skews
I've seen are less than 1, which explains why standard deviations
perform well, but I've included IQR for anyone who wants something more
robust.
Also, by using the percentile method backwards, you can declare the
number of clusters and use semantic chunking to get an 'optimal'
splitting.
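A sketch of selecting the new threshold types (names follow this PR's additions; the embedding model is a placeholder):

```python
from langchain_experimental.text_splitter import SemanticChunker
from langchain_openai import OpenAIEmbeddings  # placeholder embedding model

splitter = SemanticChunker(
    OpenAIEmbeddings(),
    breakpoint_threshold_type="standard_deviation",  # or "percentile" / "interquartile"
)
chunks = splitter.split_text(
    "First topic sentence. Another one. A new topic begins here."
)
```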
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
**Description:** Update the example Fiddler notebook to use the
community import path instead of `langchain.callbacks`.
**Dependencies:** None
**Twitter handle:** @bhalder
Co-authored-by: Barun Halder <barun@fiddler.ai>
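The import change in the notebook boils down to the following sketch (the exact module path is assumed from the community package layout):

```python
# Before: from langchain.callbacks import FiddlerCallbackHandler
# After:
from langchain_community.callbacks.fiddler_callback import FiddlerCallbackHandler
```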