- Description: updates to Vectara documentation with more details on how
to get started.
- Issue: NA
- Dependencies: NA
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: @vectara, @ofermend
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: added a document loader for a list of RSS feeds or OPML.
It iterates through the list and uses NewsURLLoader to load each
article (a usage sketch follows below).
- Issue: N/A
- Dependencies: feedparser, listparser
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: @ruze
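A minimal usage sketch, assuming the loader is exposed as `RSSFeedLoader` (the class name and parameters may differ from the merged code):
```python
from langchain.document_loaders import RSSFeedLoader

# Each feed entry is fetched and parsed via NewsURLLoader under the hood.
loader = RSSFeedLoader(urls=["https://news.ycombinator.com/rss"])
docs = loader.load()
print(docs[0].metadata)
```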
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: run the poetry dependencies
- Issue: #7329
- Tag maintainer: @rlancemartin
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Description - Integrates Fireworks into LangChain's LLMs so users can
use Fireworks models with LangChain, mainly for summarization (a usage
sketch follows below).
Issue - Not applicable
Dependencies - None
Tag maintainer - @rlancemartin
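A minimal usage sketch, assuming the LLM is exposed as `Fireworks` with a default model and an API key read from the environment (the env-var name is an assumption):
```python
import os
from langchain.llms import Fireworks

os.environ["FIREWORKS_API_KEY"] = "<your-api-key>"  # assumed env-var name

llm = Fireworks()  # uses the integration's default Fireworks model
print(llm("Summarize in one sentence: LangChain is a framework for building LLM apps."))
```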
---------
Co-authored-by: Raj Janardhan <rajjanardhan@Rajs-Laptop.attlocal.net>
Add a StreamlitChatMessageHistory class that stores chat messages in
[Streamlit's Session
State](https://docs.streamlit.io/library/api-reference/session-state).
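A minimal usage sketch (the import path and session-state key are illustrative):
```python
import streamlit as st
from langchain.memory import StreamlitChatMessageHistory

# Messages are kept in st.session_state under the given key,
# so they survive Streamlit script re-runs within a browser session.
history = StreamlitChatMessageHistory(key="chat_messages")
if not history.messages:
    history.add_ai_message("How can I help you?")

for msg in history.messages:
    st.chat_message(msg.type).write(msg.content)

if prompt := st.chat_input():
    history.add_user_message(prompt)
    st.chat_message("human").write(prompt)
```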
Note: The integration test uses a currently-experimental Streamlit
testing framework to simulate the execution of a Streamlit app. Marking
this PR as draft until I confirm with the Streamlit team that we're
comfortable supporting it.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
### Summary
Updates the `unstructured` install instructions. For
`unstructured>=0.9.0`, dependencies are broken out by document type and
the base `unstructured` package includes fewer dependencies. `pip
install "unstructured[local-inference]"` has been replace by `pip
install "unstructured[all-docs]"`, though the `local-inference` extra is
still supported for the time being.
### Reviewers
- @rlancemartin
- @eyurtsev
- @hwchase17
## Description
This PR implements a callback handler for SageMaker Experiments,
similar to the existing MLflow callback handler.
* When creating the callback handler, it takes the experiment's run
object as an argument. All the callback outputs are then logged to the
run object.
* The output of each callback action (e.g., `on_llm_start`) is saved to
an S3 bucket as a JSON file.
* Optionally, you can also log additional information such as the LLM
hyper-parameters to the same run object.
* Once the callback object is no longer needed, call the
`flush_tracker()` method to make sure any intermediate files are
deleted (see the sketch below).
* A separate notebook example is provided to show how the callback is
used.
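A minimal sketch of the intended flow, assuming the handler is exposed as `SageMakerCallbackHandler` (the class name and import path may differ from the merged code):
```python
from sagemaker.experiments.run import Run
from sagemaker.session import Session
from langchain.callbacks import SageMakerCallbackHandler
from langchain.llms import OpenAI

with Run(
    experiment_name="langchain-demo",
    run_name="llm-summarization",
    sagemaker_session=Session(),
) as run:
    handler = SageMakerCallbackHandler(run)  # callback outputs are logged to this run
    llm = OpenAI(temperature=0, callbacks=[handler])
    llm("Explain experiment tracking in one sentence.")
    handler.flush_tracker()  # clean up intermediate files once the handler is done
```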
@3coins @agola11
---------
Co-authored-by: Tesfagabir Meharizghi <mehariz@amazon.com>
Description:
This PR adds support for loading documents from Huawei OBS (Object
Storage Service) in Langchain. OBS is a cloud-based object storage
service provided by Huawei Cloud. With this enhancement, Langchain users
can now easily access and load documents stored in Huawei OBS directly
into the system.
Key Changes:
- Added a new document loader module specifically for Huawei OBS
integration.
- Implemented the necessary logic to authenticate and connect to Huawei
OBS using access credentials.
- Enabled the loading of individual documents from a specified bucket
and object key in Huawei OBS.
- Provided the option to specify custom authentication information or
obtain security tokens from Huawei Cloud ECS for easy access.
How to Test:
1. Ensure the required package "esdk-obs-python" is installed.
2. Configure the endpoint, access key, secret key, and bucket details
for Huawei OBS in the Langchain settings.
3. Load documents from Huawei OBS using the updated document loader
module (a usage sketch follows the steps below).
4. Verify that documents are successfully retrieved and loaded into
Langchain for further processing.
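For illustration, a hypothetical usage sketch (the loader name `OBSFileLoader` and its parameter names are assumptions and may differ from the merged code):
```python
from langchain.document_loaders import OBSFileLoader

loader = OBSFileLoader(
    bucket="my-obs-bucket",
    key="docs/report.txt",
    endpoint="https://obs.cn-north-4.myhuaweicloud.com",
    config={"ak": "<access-key>", "sk": "<secret-key>"},  # or fetch a token from ECS
)
docs = loader.load()
```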
Please review this PR and let us know if any further improvements are
needed. Your feedback is highly appreciated!
@rlancemartin, @eyurtsev
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Works just like the GenericLoader but concurrently for those who choose
to optimize their workflow.
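A minimal sketch, assuming the class is exposed as `ConcurrentLoader` with the same `from_filesystem` constructor as `GenericLoader`:
```python
from langchain.document_loaders import ConcurrentLoader

# Same interface as GenericLoader, but files are loaded and parsed concurrently.
loader = ConcurrentLoader.from_filesystem("example_data/", glob="**/*.txt")
docs = loader.load()
```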
@rlancemartin @eyurtsev
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
This PR makes minor improvements to our python notebook, and adds
support for `Rockset` workspaces in our vectorstore client.
@rlancemartin, @eyurtsev
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
## Description
This PR updates the Chroma DB integration's documentation, fixing the
instructions in the **Docker container** example.
In the current documentation, the below `client.reset()` line causes a
runtime error:
```py
...
client = chromadb.HttpClient(settings=Settings(allow_reset=True))
client.reset() # resets the database
collection = client.create_collection("my_collection")
...
```
`Exception: {"error":"ValueError('Resetting is not allowed by this
configuration')"}`
This happens because the Chroma DB server also needs the `allow_reset`
flag set to `true`. The fix is to add `ALLOW_RESET=TRUE` as an
environment variable in the `docker-compose` file before spinning up
the container.
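For reference, a minimal sketch of the corresponding `docker-compose` change (the service name `server` is just an example; match it to Chroma's compose file):
```
services:
  server:
    environment:
      - ALLOW_RESET=TRUE
```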
## Issue
This fixes the runtime error that occurs when running the docker
container example code
## Tag Maintainer
@rlancemartin, @eyurtsev
Docs for from_documents() were outdated, as seen in
https://github.com/langchain-ai/langchain/issues/8457.
fixes #8457
<!-- Thank you for contributing to LangChain!
Replace this comment with:
- Description: a description of the change,
- Issue: the issue # it fixes (if applicable),
- Dependencies: any dependencies required for this change,
- Tag maintainer: for a quicker response, tag the relevant maintainer
(see below),
- Twitter handle: we announce bigger features on Twitter. If your PR
gets announced and you'd like a mention, we'll gladly shout you out!
Please make sure you're PR is passing linting and testing before
submitting. Run `make format`, `make lint` and `make test` to check this
locally.
If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use.
Maintainer responsibilities:
- General / Misc / if you don't know who to tag: @baskaryan
- DataLoaders / VectorStores / Retrievers: @rlancemartin, @eyurtsev
- Models / Prompts: @hwchase17, @baskaryan
- Memory: @hwchase17
- Agents / Tools / Toolkits: @hinthornw
- Tracing / Callbacks: @agola11
- Async: @agola11
If no one reviews your PR within a few days, feel free to @-mention the
same people again.
See contribution guidelines for more information on how to write/run
tests, lint, etc:
https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md
-->
### Description
In the LangChain documentation and comments, I noticed that `pip
install faiss` was mentioned instead of `pip install faiss-gpu`, and
running `pip install faiss` results in an error. I've gone ahead and
updated the documentation and `faiss.ipynb`. This change will ensure
ease of use for end users trying to install `faiss-gpu`.
### Issue:
Documentation / Comments Related.
### Dependencies:
No dependencies were changed; only the files with the wrong reference
were updated.
### Tag maintainer:
@rlancemartin, @eyurtsev (Thank You for your contributions 😄 )
- Install langchain
- Set Pinecone API key and environment as env vars
- Create Pinecone index if it doesn't already exist
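For reference, a minimal sketch of the index-creation step (index name, dimension, and env-var names are placeholders):
```python
import os
import pinecone

pinecone.init(
    api_key=os.environ["PINECONE_API_KEY"],
    environment=os.environ["PINECONE_ENVIRONMENT"],
)

index_name = "langchain-demo"
if index_name not in pinecone.list_indexes():
    # 1536 matches OpenAI's text-embedding-ada-002 output size
    pinecone.create_index(index_name, dimension=1536)
```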
---
- Description: Fix a couple minor issues I came across when running this
notebook,
- Dependencies: none,
- Tag maintainer: @rlancemartin @eyurtsev,
- Twitter handle: @zackproser (certainly not necessary!)
**Description:**
Add support for Meilisearch vector store (a usage sketch follows below).
Resolves #7603
- No external dependencies added
- A notebook has been added
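A minimal usage sketch, assuming the store is exposed as `Meilisearch` and accepts a `meilisearch` client (parameter names are assumptions):
```python
import meilisearch
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Meilisearch

client = meilisearch.Client(url="http://127.0.0.1:7700", api_key="<master-key>")
embeddings = OpenAIEmbeddings()

vector_store = Meilisearch.from_texts(
    texts=["LangChain integrates with Meilisearch"],
    embedding=embeddings,
    client=client,
)
docs = vector_store.similarity_search("Meilisearch integration", k=1)
```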
@rlancemartin
https://twitter.com/meilisearch
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Minimax is an AI startup from China; they recently
released their latest model and chat API, which is widely used in
China. As a result, I'd like to add the Minimax LLM to LangChain (a
usage sketch follows below).
- Tag maintainer: @hwchase17, @baskaryan
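A minimal usage sketch, assuming the LLM is exposed as `Minimax` and credentials come from environment variables (the env-var names are assumptions):
```python
import os
from langchain.llms import Minimax

os.environ["MINIMAX_API_KEY"] = "<your-api-key>"
os.environ["MINIMAX_GROUP_ID"] = "<your-group-id>"

llm = Minimax()
print(llm("What is the difference between a panda and a bear?"))
```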
---------
Co-authored-by: the <tao.he@hulu.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
- [Xorbits
Inference(Xinference)](https://github.com/xorbitsai/inference) is a
powerful and versatile library designed to serve language, speech
recognition, and multimodal models. Xinference supports a variety of
GGML-compatible models including chatglm, whisper, and vicuna, and
utilizes heterogeneous hardware and a distributed architecture for
seamless cross-device and cross-server model deployment.
- This PR integrates Xinference models and Xinference embeddings into
LangChain.
- Dependencies: To install the dependencies for this integration, run
`pip install "xinference[all]"`
- Example Usage:
To start a local instance of Xinference, run `xinference`.
To deploy Xinference in a distributed cluster, first start an Xinference
supervisor using `xinference-supervisor`:
`xinference-supervisor -H "${supervisor_host}"`
Then, start the Xinference workers using `xinference-worker` on each
server you want to run them on.
`xinference-worker -e "http://${supervisor_host}:9997"`
To use Xinference with LangChain, you also need to launch a model. You
can use the command line interface (CLI) to do so. For example:
`xinference launch -n vicuna-v1.3 -f ggmlv3 -q q4_0`. This launches a
model named vicuna-v1.3 with `model_format="ggmlv3"` and
`quantization="q4_0"`. A model UID is returned for you to use.
Now you can use Xinference with LangChain:
```python
from langchain.llms import Xinference

llm = Xinference(
    server_url="http://0.0.0.0:9997",  # suppose the supervisor_host is "0.0.0.0"
    model_uid=model_uid,  # the model UID returned from launching a model
)
llm(
    prompt="Q: where can we visit in the capital of France? A:",
    generate_config={"max_tokens": 1024},
)
```
You can also use the RESTful client to launch a model:
```python
from xinference.client import RESTfulClient
client = RESTfulClient("http://0.0.0.0:9997")
model_uid = client.launch_model(model_name="vicuna-v1.3", model_size_in_billions=7, quantization="q4_0")
```
The following code block demonstrates how to use Xinference embeddings
with LangChain:
```python
from langchain.embeddings import XinferenceEmbeddings

xinference = XinferenceEmbeddings(
    server_url="http://0.0.0.0:9997",
    model_uid=model_uid,
)
```
```python
query_result = xinference.embed_query("This is a test query")
```
```python
doc_result = xinference.embed_documents(["text A", "text B"])
```
Xinference is still under rapid development. Feel free to [join our
Slack
community](https://xorbitsio.slack.com/join/shared_invite/zt-1z3zsm9ep-87yI9YZ_B79HLB2ccTq4WA)
to get the latest updates!
- Request for review: @hwchase17, @baskaryan
- Twitter handle: https://twitter.com/Xorbitsio
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Added a new tool to the Github toolkit called **Create Pull Request.**
Now we can make our own langchain contributor in langchain 😁
In order to have somewhere to pull from, I also added a new env var,
"GITHUB_BASE_BRANCH." This will allow the existing env var,
"GITHUB_BRANCH," to be a working branch for the bot (so that it doesn't
have to always commit on the main/master). For example, if you want the
bot to work in a branch called `bot_dev` and your repo base is `main`,
you would set up the vars like:
```
GITHUB_BASE_BRANCH = "main"
GITHUB_BRANCH = "bot_dev"
```
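A minimal sketch of wiring the toolkit up with these variables (the import paths and the extra env-var names reflect my reading of the toolkit and may differ):
```python
import os
from langchain.agents.agent_toolkits.github.toolkit import GitHubToolkit
from langchain.utilities.github import GitHubAPIWrapper

os.environ["GITHUB_APP_ID"] = "<app-id>"
os.environ["GITHUB_APP_PRIVATE_KEY"] = "<path/to/private-key.pem>"
os.environ["GITHUB_REPOSITORY"] = "my-org/my-repo"
os.environ["GITHUB_BRANCH"] = "bot_dev"    # working branch for the bot
os.environ["GITHUB_BASE_BRANCH"] = "main"  # branch pull requests target

github = GitHubAPIWrapper()
toolkit = GitHubToolkit.from_github_api_wrapper(github)
tools = toolkit.get_tools()  # now includes the Create Pull Request tool
```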
Maintainer responsibilities:
- Agents / Tools / Toolkits: @hinthornw
In this PR:
- Removed restricted model loading logic for Petals-Bloom
- Removed petals imports (DistributedBloomForCausalLM,
BloomTokenizerFast)
- Instead imported more generalized versions of loader
(AutoDistributedModelForCausalLM, AutoTokenizer)
- Updated the Petals example notebook to allow for a successful
installation of Petals on Apple Silicon Macs
- Tag maintainer: @hwchase17, @baskaryan
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Adds the AwaEmbeddings class for embeddings, which
provides users with a convenient way to do fine-tuning, as well as
support for potential multimodality needs (a usage sketch follows below)
- Tag maintainer: @baskaryan
Create `Awa.ipynb`: an example notebook for AwaEmbeddings class
Modify `embeddings/__init__.py`: Import the class
Create `embeddings/awa.py`: The embedding class
Create `embeddings/test_awa.py`: The test file.
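A minimal usage sketch (assumes the `awadb` package is installed; the default model is whatever the class ships with):
```python
from langchain.embeddings import AwaEmbeddings

embedding = AwaEmbeddings()
query_vector = embedding.embed_query("our text we want to embed")
doc_vectors = embedding.embed_documents(["text A", "text B"])
```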
---------
Co-authored-by: taozhiwang <taozhiwa@gmail.com>
Description: Adds support for custom indexes and scoring profiles in
Azure Cognitive Search
@hwchase17
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Description:
I wanted to use the DuckDuckGoSearch tool in an agent to let it fetch
the latest news for a topic. DuckDuckGoSearch already has an
implemented function for retrieving news articles, but there wasn't a
tool to use it. I simply adapted the SearchResult class with an extra
argument "backend". You can set it to "news" to only get news articles.
Furthermore, I added an example to the DuckDuckGo notebook on how to
further customize the results by using the DuckDuckGoSearchAPIWrapper.
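A minimal sketch of the intended usage (assuming the results tool is `DuckDuckGoSearchResults` with the new `backend` argument):
```python
from langchain.tools import DuckDuckGoSearchResults
from langchain.utilities import DuckDuckGoSearchAPIWrapper

# Restrict results to news articles from the last day.
wrapper = DuckDuckGoSearchAPIWrapper(time="d", max_results=5)
news_search = DuckDuckGoSearchResults(api_wrapper=wrapper, backend="news")
print(news_search.run("LangChain"))
```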
Dependencies: no new dependencies
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
### Description
This PR includes the following changes:
- Adds AOSS (Amazon OpenSearch Service Serverless) support to
OpenSearch. Please refer to the documentation on how to use it.
- While creating an index, AOSS only supports Approximate Search with
`nmslib` and `faiss` engines. During Search, only Approximate Search and
Script Scoring (on doc values) are supported.
- This PR also adds support to `efficient_filter` which can be used with
`faiss` and `lucene` engines.
- The `lucene_filter` is deprecated. Instead, please use the
`efficient_filter` for the lucene engine (see the sketch below).
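For illustration, a sketch of how an `efficient_filter` might be passed on an existing `OpenSearchVectorSearch` store (the filter body is standard OpenSearch query DSL; availability depends on the engine):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import OpenSearchVectorSearch

docsearch = OpenSearchVectorSearch(
    opensearch_url="https://localhost:9200",
    index_name="my-index",
    embedding_function=OpenAIEmbeddings(),
)

# Pre-filter candidates by metadata before the approximate k-NN search (faiss/lucene).
efficient_filter = {"bool": {"filter": {"term": {"metadata.category": "news"}}}}
docs = docsearch.similarity_search(
    "What is new in OpenSearch?",
    efficient_filter=efficient_filter,
    k=4,
)
```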
Signed-off-by: Naveen Tatikonda <navtat@amazon.com>
Stop sequences are useful if you are doing long-running completions and
need to early-out rather than running for the full max_length... not
only does this save inference cost on Replicate, it is also much faster
if you are going to truncate the output later anyway.
Other LLMs support stop sequences natively (e.g. OpenAI), but I didn't
see this for Replicate, so I am adding it via their prediction cancel
method (a usage sketch follows below).
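A minimal sketch of the intended usage (the model slug is a placeholder):
```python
from langchain.llms import Replicate

# Any text-generation model hosted on Replicate works here.
llm = Replicate(model="<owner>/<model>:<version>")

# The prediction is cancelled on Replicate's side once a stop sequence is produced.
output = llm("Q: Name three French cities.\nA:", stop=["\n\n"])
```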
Housekeeping: I ran `make format` and `make lint`, no issues reported in
the files I touched.
I did update the replicate integration test and ran `poetry run pytest
tests/integration_tests/llms/test_replicate.py` successfully.
Finally, I am @tjaffri https://twitter.com/tjaffri for feature
announcement tweets... or if you could please tag @docugami
https://twitter.com/docugami we would really appreciate that :-)
Co-authored-by: Taqi Jaffri <tjaffri@docugami.com>
@rlancemartin
The modification includes:
* etherscanLoader
* test_etherscan
* document ipynb
I have run the test, lint, format, and spell checks. I did encounter a
linting error on the ipynb, and I am not sure how to address it:
```
docs/extras/modules/data_connection/document_loaders/integrations/Etherscan.ipynb:55: error: Name "null" is not defined [name-defined]
docs/extras/modules/data_connection/document_loaders/integrations/Etherscan.ipynb:76: error: Name "null" is not defined [name-defined]
Found 2 errors in 1 file (checked 1 source file)
```
- Description: The Etherscan loader uses the Etherscan API to load
transaction histories under specific accounts on Ethereum Mainnet (a
hypothetical usage sketch follows below).
- No dependency is introduced by this PR.
- Twitter handle: glazecl
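A hypothetical usage sketch (the class name `EtherscanLoader`, its parameters, and the env-var name are assumptions):
```python
import os
from langchain.document_loaders import EtherscanLoader

os.environ["ETHERSCAN_API_KEY"] = "<your-etherscan-api-key>"

account_address = "0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b"
loader = EtherscanLoader(account_address, filter="normal_transaction")
docs = loader.load()
```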
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
The ChatGLM LLM integration will by default accumulate conversation
history (with_history=True) and send it to the ChatGLM backend API,
which is not expected in most cases. This PR sets with_history=False by
default; users should explicitly set llm.with_history=True to turn this
feature on (see the sketch below). Related PRs: #8048, #7774
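A minimal sketch of opting back in (the endpoint URL is a placeholder):
```python
from langchain.llms import ChatGLM

llm = ChatGLM(
    endpoint_url="http://127.0.0.1:8000",
    with_history=True,  # history is now off by default; opt in explicitly
)
print(llm("你好"))
```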
---------
Co-authored-by: mlot <limpo2000@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
- **Description:** Simple change of the class that ContentHandler
inherits from. To create an object of type SagemakerEndpointEmbeddings,
the content_handler property must now be of type
EmbeddingsContentHandler instead of ContentHandlerBase (an illustrative
sketch follows below).
- **Twitter handle:** @Juanjo_Torres11
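For illustration, a sketch of a content handler written against the new base class (the request/response JSON shapes are examples and depend on your endpoint):
```python
import json
from typing import List

from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler


class ContentHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, inputs: List[str], model_kwargs: dict) -> bytes:
        # Serialize the batch of texts into the payload the endpoint expects.
        return json.dumps({"inputs": inputs, **model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> List[List[float]]:
        # Parse the embedding vectors out of the endpoint response.
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json["vectors"]


embeddings = SagemakerEndpointEmbeddings(
    endpoint_name="my-embeddings-endpoint",
    region_name="us-east-1",
    content_handler=ContentHandler(),
)
```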
Co-authored-by: Bagatur <baskaryan@gmail.com>
Description:
This PR adds embeddings for LocalAI (
https://github.com/go-skynet/LocalAI ), a self-hosted OpenAI drop-in
replacement. As LocalAI can re-use OpenAI clients, it mostly follows
the lines of the OpenAI embeddings; however, when embedding documents,
it just sends strings instead of tokens, since sending tokens is
best-effort depending on the model being used in LocalAI. Sending
tokens is also tricky, as token IDs can mismatch with the model, so
it's safer to just send strings in this case.
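A minimal usage sketch, assuming the class is exposed as `LocalAIEmbeddings` (URL and model name are placeholders):
```python
from langchain.embeddings import LocalAIEmbeddings

embeddings = LocalAIEmbeddings(
    openai_api_base="http://localhost:8080",  # your LocalAI instance
    model="text-embedding-ada-002",           # whichever embedding model LocalAI serves
)
query_vector = embeddings.embed_query("This is a test query")
doc_vectors = embeddings.embed_documents(["text A", "text B"])
```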
Partly related to: https://github.com/hwchase17/langchain/issues/5256
Dependencies: No new dependencies
Twitter: @mudler_it
---------
Signed-off-by: mudler <mudler@localai.io>
Co-authored-by: Bagatur <baskaryan@gmail.com>
**PR Description:**
This pull request introduces several enhancements and new features to
the `CubeSemanticLoader`. The changes include the following:
1. Added imports for the `json` and `time` modules.
2. Added new constructor parameters: `load_dimension_values`,
`dimension_values_limit`, `dimension_values_max_retries`, and
`dimension_values_retry_delay`.
3. Updated the class documentation with descriptions for the new
constructor parameters.
4. Added a new private method `_get_dimension_values()` to retrieve
dimension values from Cube's REST API.
5. Modified the `load()` method to load dimension values for string
dimensions if `load_dimension_values` is set to `True`.
6. Updated the API endpoint in the `load()` method from the base URL to
the metadata endpoint.
7. Refactored the code to retrieve metadata from the response JSON.
8. Added the `column_member_type` field to the metadata dictionary to
indicate if a column is a measure or a dimension.
9. Added the `column_values` field to the metadata dictionary to store
the dimension values retrieved from Cube's API.
10. Modified the `page_content` construction to include the column title
and description instead of the table name, column name, data type,
title, and description.
These changes improve the functionality and flexibility of the
`CubeSemanticLoader` class by allowing the loading of dimension values
and providing more detailed metadata for each document (a constructor
sketch follows below).
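For illustration, a sketch of constructing the loader with the new parameters (the URL and token are placeholders, and the values shown are examples rather than documented defaults):
```python
from langchain.document_loaders import CubeSemanticLoader

loader = CubeSemanticLoader(
    cube_api_url="https://example.cubecloud.dev/cubejs-api/v1/meta",
    cube_api_token="<jwt-token>",
    load_dimension_values=True,      # fetch values for string dimensions
    dimension_values_limit=10_000,   # cap on values fetched per dimension
    dimension_values_max_retries=10,
    dimension_values_retry_delay=3,  # seconds between retries
)
documents = loader.load()
```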
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>