Commit Graph

58 Commits

Author SHA1 Message Date
Lance Martin
d1b95db874
Retriever that can re-phrase user inputs (#8026)
Simple retriever that applies an LLM between the user input and the
query passed to the retriever.

It can be used to pre-process the user input in any way.

The default prompt:

```
DEFAULT_QUERY_PROMPT = PromptTemplate(
    input_variables=["question"],
    template="""You are an assistant tasked with taking a natural languge query from a user
    and converting it into a query for a vectorstore. In this process, you strip out
    information that is not relevant for the retrieval task. Here is the user query: {question} """
)
```
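
For illustration, a minimal sketch of how this prompt could sit between the raw user input and an existing retriever using a plain `LLMChain`; the wiring below is an assumption for demonstration, not necessarily the exact API added in this PR:

```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI

# Re-use DEFAULT_QUERY_PROMPT from above to strip chit-chat from the raw input.
llm = OpenAI(temperature=0)
rephrase_chain = LLMChain(llm=llm, prompt=DEFAULT_QUERY_PROMPT)

user_input = "Hey! I was wondering, what do the docs say about streaming callbacks?"
rephrased_query = rephrase_chain.run(question=user_input)

# The cleaned-up query can then be passed to any retriever, e.g.:
# docs = retriever.get_relevant_documents(rephrased_query)
```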

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-08-03 21:23:59 -07:00
Harrison Chase
6c3573e7f6
Harrison/aleph alpha (#8735)
Co-authored-by: PiotrMazurek <piotr.mazurek@aleph-alpha.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-03 21:21:15 -07:00
Ofer Mendelevitch
29f51055e8
Updates to Vectara documentation (#8699)
- Description: updates to Vectara documentation with more details on how
to get started.
- Issue: NA
- Dependencies: NA
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: @vectara, @ofermend

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-03 20:21:17 -07:00
ruze
8ef7e14a85
RSS Feed / OPML loader (#8694)
- Description: added a document loader for a list of RSS feeds or OPML.
It iterates through the list and uses NewsURLLoader to load each
article.
  - Issue: N/A
  - Dependencies: feedparser, listparser
  - Tag maintainer: @rlancemartin, @eyurtsev
  - Twitter handle: @ruze

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-03 14:58:06 -07:00
Bagatur
b2b71b0d35
Bagatur/eden llm (#8670)
Co-authored-by: RedhaWassim <rwasssim@gmail.com>
Co-authored-by: KyrianC <ckyrian@protonmail.com>
Co-authored-by: sam <melaine.samy@gmail.com>
2023-08-03 10:24:51 -07:00
ruze
71f98db2fe
Newspaper (#8647)
- Description: Added newspaper3k based news article loader. Provide a
list of urls.
  - Issue: N/A
  - Dependencies: newspaper3k,
  - Tag maintainer: @rlancemartin , @eyurtsev 
  - Twitter handle: @ruze

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-02 17:56:08 -07:00
Lance Martin
59194c2214
Add summarization use-case (#8376)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-02 14:25:11 -07:00
Leonid Ganeline
1335f2b9f8
MLflow examples (#8642)
Updated `MLflow` examples with links to the examples from MLflow

 @baskaryan
2023-08-02 13:30:28 -07:00
Comendeiro
5c516945d0
Add local support for audio models (PR #7329) (#7591)
- Description: run the poetry dependencies
  - Issue: #7329 
  - Tag maintainer: @rlancemartin

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-02 01:24:53 -07:00
rjanardhan3
68113348cc
Fireworks integration (#8322)
Description - Integrates Fireworks within Langchain LLMs to allow users
to use Fireworks models with Langchain, mainly for summarization.

Issue - Not applicable
Dependencies - None
Tag maintainer - @rlancemartin
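
A hedged usage sketch (the export name, default model, and API-key environment variable are assumptions for illustration):

```python
import os
from langchain.llms import Fireworks  # assumed export name

os.environ["FIREWORKS_API_KEY"] = "YOUR_API_KEY"  # assumed env var name

llm = Fireworks()  # assumes a sensible default model is configured
print(llm("Summarize the plot of Hamlet in two sentences."))
```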

---------

Co-authored-by: Raj Janardhan <rajjanardhan@Rajs-Laptop.attlocal.net>
2023-08-01 21:17:26 -07:00
Joshua Carroll
6705928b9d
Add StreamlitChatMessageHistory (#8497)
Add a StreamlitChatMessageHistory class that stores chat messages in
[Streamlit's Session
State](https://docs.streamlit.io/library/api-reference/session-state).

Note: The integration test uses a currently-experimental Streamlit
testing framework to simulate the execution of a Streamlit app. Marking
this PR as draft until I confirm with the Streamlit team that we're
comfortable supporting it.
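
A rough usage sketch inside a Streamlit app (assuming the class is importable from `langchain.memory`; the session-state key is illustrative):

```python
import streamlit as st
from langchain.memory import StreamlitChatMessageHistory  # assumed import path

# Messages are stored in st.session_state under the given key,
# so they survive Streamlit script re-runs within a session.
history = StreamlitChatMessageHistory(key="chat_messages")

if prompt := st.chat_input("Say something"):
    history.add_user_message(prompt)
    history.add_ai_message(f"Echo: {prompt}")  # placeholder response

for msg in history.messages:
    role = "user" if msg.type == "human" else "assistant"
    st.chat_message(role).write(msg.content)
```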

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-01 14:28:15 -07:00
Matt Robinson
8961c720b8
docs: update unstructured install instructions (#8596)
### Summary

Updates the `unstructured` install instructions. For
`unstructured>=0.9.0`, dependencies are broken out by document type and
the base `unstructured` package includes fewer dependencies. `pip
install "unstructured[local-inference]"` has been replace by `pip
install "unstructured[all-docs]"`, though the `local-inference` extra is
still supported for the time being.

### Reviewers

- @rlancemartin
- @eyurtsev
- @hwchase17
2023-08-01 14:17:49 -07:00
Bagatur
73072d3db8
mv (#8595) 2023-08-01 14:17:04 -07:00
Tesfagabir Meharizghi
a7000ee89e
Callback handler for Amazon SageMaker Experiments (#8587)
## Description

This PR implements a callback handler for SageMaker Experiments which is
similar to that of mlflow.
* When creating the callback handler, it takes the experiment's run
object as an argument. All the callback outputs are then logged to the
run object.
* The output of each callback action (e.g., `on_llm_start`) is saved to
S3 bucket as json file.
* Optionally, you can also log additional information such as the LLM
hyper-parameters to the same run object.
* Once the callback object is no longer needed, call the
`flush_tracker()` method. This makes sure that any intermediate files
are deleted (a usage sketch follows below).
* A separate notebook example is provided to show how the callback is
used.

@3coins  @agola11
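
A hedged sketch of the flow described above (the handler class name and constructor argument are assumptions for illustration):

```python
from sagemaker.experiments.run import Run
from langchain.callbacks import SageMakerCallbackHandler  # assumed export name
from langchain.llms import OpenAI

with Run(experiment_name="langchain-demo", run_name="llm-run-1") as run:
    # Callback outputs (on_llm_start, etc.) are logged to this run and saved to S3 as JSON.
    handler = SageMakerCallbackHandler(run)
    llm = OpenAI(temperature=0, callbacks=[handler])
    llm("Tell me a joke about experiment tracking.")
    handler.flush_tracker()  # delete any intermediate files once done
```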

---------

Co-authored-by: Tesfagabir Meharizghi <mehariz@amazon.com>
2023-08-01 13:47:08 -07:00
mpb159753
7df2dfc4c2
Add Support for Loading Documents from Huawei OBS (#8573)
Description:
This PR adds support for loading documents from Huawei OBS (Object
Storage Service) in Langchain. OBS is a cloud-based object storage
service provided by Huawei Cloud. With this enhancement, Langchain users
can now easily access and load documents stored in Huawei OBS directly
into the system.

Key Changes:
- Added a new document loader module specifically for Huawei OBS
integration.
- Implemented the necessary logic to authenticate and connect to Huawei
OBS using access credentials.
- Enabled the loading of individual documents from a specified bucket
and object key in Huawei OBS.
- Provided the option to specify custom authentication information or
obtain security tokens from Huawei Cloud ECS for easy access.

How to Test:
1. Ensure the required package "esdk-obs-python" is installed.
2. Configure the endpoint, access key, secret key, and bucket details
for Huawei OBS in the Langchain settings.
3. Load documents from Huawei OBS using the updated document loader
module.
4. Verify that documents are successfully retrieved and loaded into
Langchain for further processing.

Please review this PR and let us know if any further improvements are
needed. Your feedback is highly appreciated!

@rlancemartin, @eyurtsev
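
A rough sketch of the loading step (the loader name and parameters below are assumptions, not necessarily the exact interface added in this PR):

```python
# pip install esdk-obs-python
from langchain.document_loaders import OBSFileLoader  # assumed loader name

loader = OBSFileLoader(
    bucket="my-bucket",
    key="reports/2023/summary.txt",
    endpoint="https://obs.cn-north-4.myhuaweicloud.com",
    config={"ak": "YOUR_ACCESS_KEY", "sk": "YOUR_SECRET_KEY"},  # or a security token from ECS
)
docs = loader.load()  # returns a list of Document objects
```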

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-01 09:30:30 -07:00
Kenny
1e8fca5518
Add ConcurrentLoader (#7512)
Works just like the GenericLoader but concurrently for those who choose
to optimize their workflow.

@rlancemartin @eyurtsev
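
A minimal sketch, assuming the loader mirrors `GenericLoader.from_filesystem`:

```python
from langchain.document_loaders import ConcurrentLoader

# Load all text files under ./docs, parsing files in parallel workers.
loader = ConcurrentLoader.from_filesystem("./docs", glob="**/*.txt")
docs = loader.load()
```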

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-07-31 17:56:31 -07:00
Leonid Kuligin
b4a126ae71
Updated docs on Vertex AI going GA (#8531)
#8074

Co-authored-by: Leonid Kuligin <kuligin@google.com>
2023-07-31 17:15:04 -07:00
Jeff Huber
07d6d1ca38
fix error in chroma docker instructions (#8533)
This makes the Chroma instructions for Docker work! 


https://python.langchain.com/docs/integrations/vectorstores/chroma#basic-example-using-the-docker-container
2023-07-31 16:32:53 -07:00
Matthew DeGuzman
844eca98d5
Add LLaMa Formatter and AzureML Chat Endpoint (#8382)
## Description

Microsoft and Meta recently [announced their
collaboration](https://blogs.microsoft.com/blog/2023/07/18/microsoft-and-meta-expand-their-ai-partnership-with-llama-2-on-azure-and-windows/)
on LLaMa2. This PR extends the current LLM wrapper and introduces a new
Chat Model wrapper for AzureML to support LLaMa2.

## Dependencies

No dependencies added :)

## Twitter Handles

[@matthew_d13](https://twitter.com/matthew_d13)
[@prakhar_in](https://twitter.com/prakhar_in)

maintainers - @hwchase17, @baskaryan
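
A hedged sketch of calling a LLaMa-2 chat deployment on AzureML (the class and formatter names are assumptions; endpoint values are placeholders):

```python
from langchain.chat_models.azureml_endpoint import (  # assumed module and names
    AzureMLChatOnlineEndpoint,
    LlamaChatContentFormatter,
)
from langchain.schema import HumanMessage

chat = AzureMLChatOnlineEndpoint(
    endpoint_url="https://<your-endpoint>.<region>.inference.ml.azure.com/score",
    endpoint_api_key="<your-api-key>",
    content_formatter=LlamaChatContentFormatter(),
)
response = chat([HumanMessage(content="Summarize LLaMa 2 in one sentence.")])
print(response.content)
```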
2023-07-31 16:26:25 -07:00
Anubhav Bindlish
913a156cff
Minor improvements to rockset vectorstore (#8416)
This PR makes minor improvements to our python notebook, and adds
support for `Rockset` workspaces in our vectorstore client.

@rlancemartin, @eyurtsev

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-31 09:54:59 -07:00
Muhammed Al-Dulaimi
9975ba4124
Fix ChromaDB integration -> docker container instructions (#8447)
## Description
This PR handles modifying the Chroma DB integration's documentation.
It modifies the **Docker container** example to fix the instructions
mentioned in the documentation.
In the current documentation, the below `client.reset()` line causes a
runtime error:

```py
...
client = chromadb.HttpClient(settings=Settings(allow_reset=True))
client.reset()  # resets the database
collection = client.create_collection("my_collection")
...
```

`Exception: {"error":"ValueError('Resetting is not allowed by this
configuration')"}`

This is due to the Chroma DB server needing to have the `allow_reset`
flag set to `true` there as well.
This is fixed by adding `ALLOW_RESET=TRUE` to the environment variables in
the `docker-compose` file, so the flag is set on the Chroma server container
before spinning it up.

## Issue
This fixes the runtime error that occurs when running the docker
container example code

## Tag Maintainer
@rlancemartin, @eyurtsev
2023-07-30 21:11:56 -07:00
Ludwig Hubert
08f5e6b801
Fix documentation for from_documents signature (#8482)
Docs for from_documents() were outdated as seen in
https://github.com/langchain-ai/langchain/issues/8457 .

fixes #8457 

2023-07-30 13:24:44 -07:00
Muneeb Ahmad
4923cf029a
Added Proper Documentation for faiss-gpu Installation (#8492)
### Description
In the LangChain documentation and comments, I noticed that `pip
install faiss` was mentioned instead of `pip install faiss-gpu`. Since
running `pip install faiss` results in an error, I've gone ahead and
updated the documentation and `faiss.ipynb`. This change will ensure
ease of use for end users trying to install `faiss-gpu`.

### Issue: 
Documentation / Comments Related.

### Dependencies:
No dependencies were changed; only the files with the wrong
reference were updated.

### Tag maintainer:
 @rlancemartin, @eyurtsev (Thank You for your contributions 😄 )
2023-07-30 13:24:30 -07:00
Harrison Chase
8f14ddefdf
add anthropic functions wrapper (#8475)
A cheeky wrapper around Claude that adds in function calling support
(kind of, hence it's going in experimental).
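
A rough sketch of what using the wrapper might look like (the import path, class name, and call style are assumptions based on the description):

```python
from langchain_experimental.llms.anthropic_functions import AnthropicFunctions  # assumed path
from langchain.schema import HumanMessage

model = AnthropicFunctions(model="claude-2")  # assumed constructor argument
functions = [{
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
}]
response = model.predict_messages(
    [HumanMessage(content="What's the weather in Boston?")], functions=functions
)
print(response.additional_kwargs)  # expect a function_call-style payload
```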
2023-07-30 07:23:46 -07:00
William FH
b7c0eb9ecb
Wfh/ref links (#8454) 2023-07-29 08:44:32 -07:00
Zack Proser
3892cefac6
Minor fixes to enhance notebook usability: (#8389)
- Install langchain
- Set Pinecone API key and environment as env vars
- Create Pinecone index if it doesn't already exist
---
- Description: Fix a couple minor issues I came across when running this
notebook,
  - Dependencies: none,
  - Tag maintainer: @rlancemartin @eyurtsev,
  - Twitter handle: @zackproser (certainly not necessary!)
2023-07-28 17:10:03 -07:00
Amélie
8ee56b9a5b
Feature: Add support for meilisearch vectorstore (#7649)
**Description:**

Add support for Meilisearch vector store.
Resolve #7603 

- No external dependencies added
- A notebook has been added

@rlancemartin

https://twitter.com/meilisearch
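
A short sketch of creating the store (assuming the vectorstore is exposed as `Meilisearch` and accepts a pre-configured client; URL and key are placeholders):

```python
import meilisearch
from langchain.vectorstores import Meilisearch  # assumed export name
from langchain.embeddings import OpenAIEmbeddings

client = meilisearch.Client(url="http://127.0.0.1:7700", api_key="***")
vectorstore = Meilisearch.from_texts(
    texts=["LangChain supports many vector stores."],
    embedding=OpenAIEmbeddings(),
    client=client,
)
docs = vectorstore.similarity_search("Which vector stores are supported?", k=1)
```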

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-28 17:06:54 -07:00
Bagatur
2311d57df4
mv dropbox (#8438) 2023-07-28 16:07:56 -07:00
HeTaoPKU
d5884017a9
Add Minimax llm model to langchain (#7645)
- Description: Minimax is a great AI startup from China; they recently
released their latest model and chat API, and the API is widely used in
China. As a result, I'd like to add the Minimax llm model to Langchain
(a brief usage sketch follows below).
- Tag maintainer: @hwchase17, @baskaryan
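
A brief usage sketch (the export name and environment variable names are assumptions):

```python
import os
from langchain.llms import Minimax  # assumed export name

os.environ["MINIMAX_API_KEY"] = "YOUR_API_KEY"    # assumed env var names
os.environ["MINIMAX_GROUP_ID"] = "YOUR_GROUP_ID"

llm = Minimax()
print(llm("Give a one-line introduction to LangChain."))
```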

---------

Co-authored-by: the <tao.he@hulu.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-27 22:53:23 -07:00
Jiayi Ni
1efb9bae5f
FEAT: Integrate Xinference LLMs and Embeddings (#8171)
- [Xorbits
Inference(Xinference)](https://github.com/xorbitsai/inference) is a
powerful and versatile library designed to serve language, speech
recognition, and multimodal models. Xinference supports a variety of
GGML-compatible models including chatglm, whisper, and vicuna, and
utilizes heterogeneous hardware and a distributed architecture for
seamless cross-device and cross-server model deployment.
- This PR integrates Xinference models and Xinference embeddings into
LangChain.
- Dependencies: To install the dependencies for this integration, run
    
    `pip install "xinference[all]"`
    
- Example Usage:

To start a local instance of Xinference, run `xinference`.

To deploy Xinference in a distributed cluster, first start an Xinference
supervisor using `xinference-supervisor`:

`xinference-supervisor -H "${supervisor_host}"`

Then, start the Xinference workers using `xinference-worker` on each
server you want to run them on.

`xinference-worker -e "http://${supervisor_host}:9997"`

To use Xinference with LangChain, you also need to launch a model. You
can use the command line interface (CLI) to do so. For example: `xinference
launch -n vicuna-v1.3 -f ggmlv3 -q q4_0`. This launches a model named
vicuna-v1.3 with `model_format="ggmlv3"` and `quantization="q4_0"`. A
model UID is returned for you to use.

Now you can use Xinference with LangChain:

```python
from langchain.llms import Xinference

llm = Xinference(
    server_url="http://0.0.0.0:9997", # suppose the supervisor_host is "0.0.0.0"
    model_uid = {model_uid} # model UID returned from launching a model
)

llm(
    prompt="Q: where can we visit in the capital of France? A:",
    generate_config={"max_tokens": 1024},
)
```

You can also use the RESTful client to launch a model:
```python
from xinference.client import RESTfulClient

client = RESTfulClient("http://0.0.0.0:9997")

model_uid = client.launch_model(model_name="vicuna-v1.3", model_size_in_billions=7, quantization="q4_0")
```

The following code block demonstrates how to use Xinference embeddings
with LangChain:
```python
from langchain.embeddings import XinferenceEmbeddings

xinference = XinferenceEmbeddings(
    server_url="http://0.0.0.0:9997",
    model_uid = model_uid
)
```

```python
query_result = xinference.embed_query("This is a test query")
```

```python
doc_result = xinference.embed_documents(["text A", "text B"])
```

Xinference is still under rapid development. Feel free to [join our
Slack
community](https://xorbitsio.slack.com/join/shared_invite/zt-1z3zsm9ep-87yI9YZ_B79HLB2ccTq4WA)
to get the latest updates!

- Request for review: @hwchase17, @baskaryan
- Twitter handle: https://twitter.com/Xorbitsio

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-27 21:23:19 -07:00
Gordon Clark
e66759cc9d
Github add "Create PR" tool + Docs update (#8235)
Added a new tool to the Github toolkit called **Create Pull Request.**
Now we can make our own langchain contributor in langchain 😁

In order to have somewhere to pull from, I also added a new env var,
"GITHUB_BASE_BRANCH." This will allow the existing env var,
"GITHUB_BRANCH," to be a working branch for the bot (so that it doesn't
have to always commit on the main/master). For example, if you want the
bot to work in a branch called `bot_dev` and your repo base is `main`,
you would set up the vars like:
```
GITHUB_BASE_BRANCH = "main"
GITHUB_BRANCH = "bot_dev"
``` 
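
A hedged sketch of wiring the toolkit up with the branch variables above (import paths and the authentication variables are assumptions based on the existing GitHub integration):

```python
import os
from langchain.agents.agent_toolkits.github.toolkit import GitHubToolkit  # assumed path
from langchain.utilities.github import GitHubAPIWrapper                   # assumed path

os.environ["GITHUB_REPOSITORY"] = "your-org/your-repo"
os.environ["GITHUB_APP_ID"] = "123456"                            # GitHub App credentials
os.environ["GITHUB_APP_PRIVATE_KEY"] = "path/to/private-key.pem"
os.environ["GITHUB_BASE_BRANCH"] = "main"    # branch PRs are opened against
os.environ["GITHUB_BRANCH"] = "bot_dev"      # working branch for the bot

github = GitHubAPIWrapper()
toolkit = GitHubToolkit.from_github_api_wrapper(github)
tools = toolkit.get_tools()  # now includes a "Create Pull Request" tool
```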

Maintainer responsibilities:
  - Agents / Tools / Toolkits: @hinthornw
2023-07-27 19:19:44 -07:00
Karan V
a003a0baf6
fix(petals) allows running models that aren't Bloom (support for LLaMA and newer models) (#8356)
In this PR:

- Removed restricted model loading logic for Petals-Bloom
- Removed petals imports (DistributedBloomForCausalLM,
BloomTokenizerFast)
- Instead imported more generalized versions of loader
(AutoDistributedModelForCausalLM, AutoTokenizer)
- Updated the Petals example notebook to allow for a successful
installation of Petals in Apple Silicon Macs

- Tag maintainer: @hwchase17, @baskaryan
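
A short sketch of pointing the wrapper at a non-Bloom model (the model id is only an example):

```python
import os
from langchain.llms import Petals

os.environ["HUGGINGFACE_API_KEY"] = "YOUR_HF_TOKEN"

# With the generalized AutoDistributedModelForCausalLM loader, any Petals-served
# model id can be used, not only Bloom variants.
llm = Petals(model_name="meta-llama/Llama-2-70b-chat-hf")  # example model id
print(llm("What is distributed inference?"))
```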

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-27 18:01:04 -07:00
Taozhi Wang
594f195e54
Add embeddings for AwaEmbedding (#8353)
- Description: Adds the AwaEmbeddings class for embeddings, which provides
users with a convenient way to do fine-tuning, as well as potential
support for multimodality.

  - Tag maintainer: @baskaryan

Create `Awa.ipynb`: an example notebook for AwaEmbeddings class
Modify `embeddings/__init__.py`: Import the class
Create `embeddings/awa.py`: The embedding class
Create `embeddings/test_awa.py`: The test file.
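
A minimal sketch of the class described above (assuming it can be constructed without extra configuration):

```python
from langchain.embeddings import AwaEmbeddings

embeddings = AwaEmbeddings()
query_vector = embeddings.embed_query("What is AwaDB?")
doc_vectors = embeddings.embed_documents(["document one", "document two"])
```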

---------

Co-authored-by: taozhiwang <taozhiwa@gmail.com>
2023-07-27 17:08:00 -07:00
Sachin Varghese
01217b2247
Update sql database agent example (#8354)
This PR fixes a minor documentation issue on the SQL database toolkit
example notebook.
2023-07-27 13:44:02 -07:00
Bagatur
55beab326c
cleanup warnings (#8379) 2023-07-27 13:43:05 -07:00
Bagatur
68763bd25f
mv popular and additional chains to use cases (#8242) 2023-07-27 12:55:13 -07:00
Ikko Eltociear Ashimine
934ea80780
Fix typo in Etherscan.ipynb (#8340)
specifc  -> specific
2023-07-27 01:57:19 -07:00
William FH
412e29d436
Fix notebook that 'cannot convert' via nbdoc_build (#8333) 2023-07-26 18:54:23 -07:00
Fabrizio Ruocco
ddc353a768
Azure Cognitive Search: Custom index and scoring profile support (#6843)
Description: Adds support for custom indexes and scoring profiles in
Azure Cognitive Search
@hwchase17

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-26 17:58:01 -07:00
William FH
01a9b06400
Add api cross ref linking (#8275)
Example of how it would show up in our python docs:


![image](https://github.com/langchain-ai/langchain/assets/13333726/0f0a88cc-ba4a-4778-bc47-118c66807f15)


Examples added to the reference docs:

https://api.python.langchain.com/en/wfh-api_crosslink/vectorstores/langchain.vectorstores.chroma.Chroma.html#langchain.vectorstores.chroma.Chroma


![image](https://github.com/langchain-ai/langchain/assets/13333726/dcd150de-cb56-4d42-b49a-a76a002a5a52)
2023-07-26 12:38:58 -07:00
Bagatur
f27176930a
fix geopandas link (#8305) 2023-07-26 11:30:17 -07:00
Timon Palm
70604e590f
DuckDuckGoSearch News Tool (#8292)
Description: 
I wanted to use the DuckDuckGoSearch tool in an agent to let it get the
latest news for a topic. DuckDuckGoSearch already has an implemented
function for retrieving news articles, but there wasn't a tool to use
it. I simply adapted the SearchResult class with an extra argument,
"backend". You can set it to "news" to only get news articles.

Furthermore, I added an example to the DuckDuckGo Notebook on how to
further customize the results by using the DuckDuckGoSearchAPIWrapper.

Dependencies: no new dependencies
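
A hedged sketch of the news backend (assuming the results tool is `DuckDuckGoSearchResults` and takes a `backend` argument as described):

```python
from langchain.tools import DuckDuckGoSearchResults
from langchain.utilities import DuckDuckGoSearchAPIWrapper

# Tighten region/recency via the wrapper, then restrict results to news articles.
wrapper = DuckDuckGoSearchAPIWrapper(region="us-en", time="d", max_results=5)
news_tool = DuckDuckGoSearchResults(api_wrapper=wrapper, backend="news")
print(news_tool.run("large language models"))
```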
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-26 11:30:01 -07:00
Aarav Borthakur
8ce661d5a1
Docs: Fix Rockset links (#8214)
Fix broken Rockset links.

Right now links at
https://python.langchain.com/docs/integrations/providers/rockset are
broken.
2023-07-26 10:38:37 -07:00
Jon Bennion
ad38eb2d50
correction to reference to code (#8301)
- Description: fixes typo referencing code

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-26 10:33:18 -07:00
Naveen Tatikonda
9cbefcc56c
[ OpenSearch ] : Add AOSS Support to OpenSearch (#8256)
### Description

This PR includes the following changes:

- Adds AOSS (Amazon OpenSearch Service Serverless) support to
OpenSearch. Please refer to the documentation on how to use it.
- While creating an index, AOSS only supports Approximate Search with
`nmslib` and `faiss` engines. During Search, only Approximate Search and
Script Scoring (on doc values) are supported.
- This PR also adds support to `efficient_filter` which can be used with
`faiss` and `lucene` engines.
- The `lucene_filter` is deprecated. Instead please use the
`efficient_filter` for the lucene engine.
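
A hedged sketch of the new filter usage (the `efficient_filter` keyword follows the description above; other parameter values are placeholders):

```python
from langchain.vectorstores import OpenSearchVectorSearch
from langchain.embeddings import OpenAIEmbeddings

docsearch = OpenSearchVectorSearch(
    opensearch_url="https://localhost:9200",
    index_name="my-index",  # assumed to have been created with the faiss or lucene engine
    embedding_function=OpenAIEmbeddings(),
)
filter_ = {"bool": {"filter": {"term": {"metadata.category": "news"}}}}
docs = docsearch.similarity_search("recent model releases", efficient_filter=filter_, k=4)
```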


Signed-off-by: Naveen Tatikonda <navtat@amazon.com>
2023-07-25 23:59:36 -07:00
Byron Saltysiak
68a906bb31
added lxml to the pip install example since it is required (#8260)
- Description: The trello dataloader example didn't work without an
additional dependency installed - lxml
  - Issue: na
2023-07-25 18:16:07 -07:00
Emory Petermann
7734a2b5ab
update golden-query notebook and fix typo in golden docs (#8253)
Updates the documentation to be consistent for the Golden query tool and
adds a better introduction to the tool.
2023-07-25 18:15:48 -07:00
William FH
0a16b3d84b
Update Integrations links (#8206) 2023-07-24 21:20:32 -07:00
Taqi Jaffri
8f158b72fc
Added stop sequence support to replicate (#8107)
Stop sequences are useful if you are doing long-running completions and
need to early-out rather than running for the full max_length... not
only does this save inference cost on Replicate, it is also much faster
if you are going to truncate the output later anyway.

Other LLMs support stop sequences natively (e.g. OpenAI) but I didn't
see this for Replicate so adding this via their prediction cancel
method.
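
A brief sketch of passing a stop sequence through the wrapper (the model string is a placeholder; generation is cancelled once the sequence appears):

```python
from langchain.llms import Replicate  # requires REPLICATE_API_TOKEN to be set

llm = Replicate(model="<model-owner>/<model-name>:<version>")  # placeholder model id
# Stop early instead of running to the full max_length, saving inference cost.
text = llm("Q: What is a stop sequence?\nA:", stop=["\n\n"])
```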

Housekeeping: I ran `make format` and `make lint`, no issues reported in
the files I touched.

I did update the replicate integration test and ran `poetry run pytest
tests/integration_tests/llms/test_replicate.py` successfully.

Finally, I am @tjaffri https://twitter.com/tjaffri for feature
announcement tweets... or if you could please tag @docugami
https://twitter.com/docugami we would really appreciate that :-)

Co-authored-by: Taqi Jaffri <tjaffri@docugami.com>
2023-07-24 17:34:13 -07:00
glaze
f7ad14acfa
Add etherscan document loader (#7943)
@rlancemartin 
The modification includes:
* etherscanLoader
* test_etherscan
* document ipynb

I have run the test, lint, format, and spell check. I did encounter a
linting error on the ipynb and am not sure how to address it.
```
docs/extras/modules/data_connection/document_loaders/integrations/Etherscan.ipynb:55: error: Name "null" is not defined  [name-defined]
docs/extras/modules/data_connection/document_loaders/integrations/Etherscan.ipynb:76: error: Name "null" is not defined  [name-defined]
Found 2 errors in 1 file (checked 1 source file)
```
- Description: The Etherscan loader uses the Etherscan API to load
transaction histories for specific accounts on Ethereum Mainnet (a usage
sketch follows below).
- No dependency is introduced by this PR.
- Twitter handle: glazecl
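
A hedged usage sketch (the loader name and parameters are assumptions based on the description; the address is a placeholder):

```python
import os
from langchain.document_loaders import EtherscanLoader  # assumed export name

os.environ["ETHERSCAN_API_KEY"] = "YOUR_API_KEY"

loader = EtherscanLoader(
    account_address="0x0000000000000000000000000000000000000000",  # placeholder account
    filter="normal_transaction",  # assumed filter name for normal tx history
)
docs = loader.load()
```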

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-24 17:09:16 -07:00