* More clarity around how geometry is handled: it is not returned by default, and when it is returned, it is stored in metadata. Returning geometry is usually a waste of tokens, but it should stay accessible when needed (see the sketch below).
* Users can supply a layer description to avoid errors when layer properties are inaccessible due to passthrough access.
* Enhanced testing
* Updated notebook
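A minimal sketch of the intended behavior, assuming the loader exposes a `return_geometry` flag and a `lyr_desc` argument for the user-supplied layer description (the names and the layer URL here are illustrative):

```python
from langchain.document_loaders import ArcGISLoader

# hypothetical public feature layer URL
url = "https://services.arcgis.com/example/arcgis/rest/services/Trees/FeatureServer/0"

# geometry is skipped by default; opt in when you actually need it, and
# supply a layer description when the layer's properties can't be fetched
loader = ArcGISLoader(url, return_geometry=True, lyr_desc="City tree inventory")

docs = loader.load()
print(docs[0].metadata["geometry"])  # geometry lives in metadata, not page_content
```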
---------
Co-authored-by: Connor Sutton <connor.sutton@swca.com>
Co-authored-by: connorsutton <135151649+connorsutton@users.noreply.github.com>
- Description: this PR makes it possible to configure boto3 in the S3
loaders. Any named argument you pass is used to create the boto3
session. This is useful when the AWS credentials can't be passed as env
variables or can't be read from the credentials file. See the sketch below.
- Issue: N/A
- Dependencies: N/A
- Tag maintainer: ?
- Twitter handle: cbornet_
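A minimal sketch of the intended usage, assuming the loader forwards its named arguments to `boto3` when building the session (bucket, key, and credential values are placeholders):

```python
from langchain.document_loaders import S3FileLoader

# credentials passed explicitly instead of via env vars or ~/.aws/credentials;
# any named argument here is used to create the boto3 session
loader = S3FileLoader(
    "my-bucket",
    "my-document.txt",
    aws_access_key_id="xxxx",
    aws_secret_access_key="yyyy",
    region_name="eu-west-1",
)
docs = loader.load()
```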
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Hi,
this PR contains a loader / parser for Azure Document Intelligence, which
is an ML-based service to ingest arbitrary PDFs / images, even if
scanned. The loader generates Documents page by page from the original
document. This is my first contribution to LangChain.
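A minimal sketch of the intended usage; the endpoint and key are placeholders, and the class name and signature reflect my understanding of how the loader wraps an `azure-ai-formrecognizer` client:

```python
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential
from langchain.document_loaders.pdf import DocumentIntelligenceLoader

# placeholders for your Azure resource
client = DocumentAnalysisClient(
    endpoint="https://<resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<api-key>"),
)

loader = DocumentIntelligenceLoader("example.pdf", client=client, model="prebuilt-document")
docs = loader.load()  # one Document per page of the original file
```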
Unfortunately I could not find the correct place for test cases. Happy
to add one if you can point me to the location, but as this is a
cloud-based service, a test would require network access and credentials
- so might be of limited help.
Dependencies: The needed dependency was already part of pyproject.toml,
no change.
Twitter: feel free to mention @LarsAC on the announcement
- Description: this PR adds `s3_object_key` and `s3_bucket` to the doc
metadata when loading an S3 file. This is particularly useful when using
`S3DirectoryLoader` to remove the files from the directory once they have
been processed, since getting the object keys from the metadata `source`
field seems brittle. See the sketch below.
- Dependencies: N/A
- Tag maintainer: ?
- Twitter handle: _cbornet
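A minimal sketch of the cleanup pattern this enables, assuming the new metadata keys are populated on every loaded document:

```python
import boto3
from langchain.document_loaders import S3DirectoryLoader

loader = S3DirectoryLoader("my-bucket", prefix="incoming/")
docs = loader.load()

# delete processed objects using the new metadata keys instead of
# parsing the `source` string
s3 = boto3.client("s3")
for doc in docs:
    s3.delete_object(
        Bucket=doc.metadata["s3_bucket"],
        Key=doc.metadata["s3_object_key"],
    )
```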
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Update the Google Drive doc loader and retriever notebooks. Show how to use them with the langchain-googledrive package.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Uses the shorter import path
`from langchain.document_loaders import` instead of the full path
`from langchain.document_loaders.assemblyai`.
Applies those changes to the docs and the unit test.
See #9667 that adds this new loader.
This PR adds a new document loader, `AssemblyAIAudioTranscriptLoader`,
that transcribes audio files with the [AssemblyAI
API](https://www.assemblyai.com) and loads the transcribed text into
documents.
- Add new document_loader with class `AssemblyAIAudioTranscriptLoader`
- Add optional dependency `assemblyai`
- Add unit tests (using a Mock client)
- Add docs notebook
This is the equivalent of the JS integration already available in
LangChain.js. See the [LangChain JS docs AssemblyAI
page](https://js.langchain.com/docs/modules/data_connection/document_loaders/integrations/web_loaders/assemblyai_audio_transcription).
At its simplest, you can use the loader to get a transcript back from an
audio file like this:
```python
from langchain.document_loaders.assemblyai import AssemblyAIAudioTranscriptLoader

# transcribe the audio file and load the transcript into Documents
loader = AssemblyAIAudioTranscriptLoader(file_path="./testfile.mp3")
docs = loader.load()
```
To use it, the `assemblyai` Python package needs to be installed, and the
environment variable `ASSEMBLYAI_API_KEY` set with your API key.
Alternatively, the API key can also be passed as an argument.
Twitter handles to shout out, if you would be so kind 🙇:
[@AssemblyAI](https://twitter.com/AssemblyAI) and
[@patloeber](https://twitter.com/patloeber)
---------
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Improve internal consistency in LangChain documentation
- Change occurrences of eg and eg. to e.g.
- Fix headers containing unnecessary capital letters.
- Change instances of "few shot" to "few-shot".
- Add periods to end of sentences where missing.
- Minor spelling and grammar fixes.
The Docugami loader was not returning the `source` metadata key. This was
triggering an exception when used with retrievers, per
https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/schema/prompt_template.py#L193C1-L195C41
The fix is simple: it updates the metadata key naming the document each
chunk is sourced from, from "name" to "source", as expected.
I tested by running the Python notebook that has an end-to-end scenario
in it.
Tagging DataLoader maintainers @rlancemartin @eyurtsev
### Description
Polars is a DataFrame interface on top of an OLAP Query Engine
implemented in Rust.
Polars reads data faster than pandas, so I'm looking forward to seeing it
added as a document loader; a sketch of the expected usage follows the
dependencies below.
### Dependencies
polars (https://pola-rs.github.io/polars-book/user-guide/)
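A minimal sketch of the expected usage, assuming the new loader mirrors the existing pandas `DataFrameLoader` API (the `PolarsDataFrameLoader` name and `page_content_column` argument are assumptions):

```python
import polars as pl
from langchain.document_loaders import PolarsDataFrameLoader

df = pl.DataFrame({"text": ["LangChain meets Polars"], "topic": ["integrations"]})

# one column becomes page_content; the remaining columns land in metadata
loader = PolarsDataFrameLoader(df, page_content_column="text")
docs = loader.load()
```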
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Description: Updates documentation to add AmazonTextractPDFLoader, per this
[comment](https://github.com/langchain-ai/langchain/pull/8661#issuecomment-1666572992)
from [baskaryan](https://github.com/baskaryan).
Adds one notebook and instructions to
`modules/data_connection/document_loaders/pdf.mdx`.
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
### Summary
Fixes a bug from #7850 where post-processing functions in Unstructured
loaders were not applied. Adds an assertion to the test to verify that
the post-processing function was applied, and also updates the
explanation in the example notebook.
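For context, a short sketch of the feature being fixed, assuming the `post_processors` argument in elements mode and the `clean_extra_whitespace` cleaner from `unstructured`:

```python
from langchain.document_loaders import UnstructuredFileLoader
from unstructured.cleaners.core import clean_extra_whitespace

# each post-processing function is applied to the text of every element
loader = UnstructuredFileLoader(
    "example.pdf",
    mode="elements",
    post_processors=[clean_extra_whitespace],
)
docs = loader.load()
```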
- Description: Adds the `ArcGISLoader` class to
`langchain.document_loaders` (see the sketch below)
- Allows users to load data from ArcGIS Online, Portal, and similar sources
- Users can authenticate with `arcgis.gis.GIS` or retrieve public data
anonymously
- Uses the `arcgis.features.FeatureLayer` class to retrieve the data
- Defines the most relevant keyword arguments and accepts `**kwargs`
- Dependencies: Using this class requires `arcgis` and, optionally,
`bs4.BeautifulSoup`.
Tagging maintainers:
- DataLoaders / VectorStores / Retrievers: @rlancemartin, @eyurtsev
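A minimal sketch of the basic usage; the feature layer URL is hypothetical, and authenticated access would pass an `arcgis.gis.GIS` instance instead of loading anonymously:

```python
from langchain.document_loaders import ArcGISLoader

# hypothetical public FeatureLayer URL
url = "https://services.arcgis.com/example/arcgis/rest/services/Parks/FeatureServer/0"

loader = ArcGISLoader(url)  # public data, anonymous access
docs = loader.load()
```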
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: Added a new use case category called "Web Scraping", and
a tutorial that uses the OpenAI Functions extraction chain to scrape
websites, to the docs.
- Tag maintainer: @baskaryan @hwchase17
- Twitter handle: https://www.linkedin.com/in/haiphunghiem/ (I'm mostly
on LinkedIn)
---------
Co-authored-by: Lance Martin <lance@langchain.dev>
- Description: Improvements to the Grobid loader documentation: fixes
typos and suggests using the Docker image instead of installing Grobid
locally (the documentation was also limited to Mac, while Docker allows
running on any platform)
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: @whitenoise
This PR adds 8 new loaders:
* `AirbyteCDKLoader`: this loader can wrap and run all Python-based
Airbyte source connectors.
* Separate loaders for the most commonly used APIs:
* `AirbyteGongLoader`
* `AirbyteHubspotLoader`
* `AirbyteSalesforceLoader`
* `AirbyteShopifyLoader`
* `AirbyteStripeLoader`
* `AirbyteTypeformLoader`
* `AirbyteZendeskSupportLoader`
## Documentation and getting started
I added the basic shape of the config to the notebooks. This increases
the maintenance effort a bit, but I think it's worth it to make sure
people can get started quickly with these important connectors. This is
also why I linked the spec and the documentation page in the readme, as
these two contain all the information needed to configure a source
correctly (e.g. the spec won't suggest using OAuth if that's avoidable,
even if the connector supports it).
## Document generation
The "documents" produced by these loaders won't have a text part
(instead, all the record fields are put into the metadata). If a text is
required by the use case, the caller needs to do custom transformation
suitable for their use case.
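A hedged sketch of such a transformation, assuming the loaders accept a `record_handler` callback and expose record fields on `record.data` (the field names and the `config` value are hypothetical):

```python
from langchain.docstore.document import Document
from langchain.document_loaders.airbyte import AirbyteStripeLoader

def handle_record(record, id):
    # promote one field to page_content; keep the full record as metadata
    return Document(page_content=record.data["description"], metadata=record.data)

loader = AirbyteStripeLoader(
    config=config,  # the stream config, shaped as described in the notebooks
    stream_name="products",
    record_handler=handle_record,
)
docs = loader.load()
```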
## Incremental sync
All loaders support incremental syncs if the underlying streams support
them. By storing the reader instance's `last_state` and passing it back
in when loading again, only updated records will be loaded.
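A sketch of the incremental pattern, assuming `last_state` is readable after a load and can be passed back via a `state` argument (the `config` value is hypothetical):

```python
from langchain.document_loaders.airbyte import AirbyteStripeLoader

loader = AirbyteStripeLoader(config=config, stream_name="invoices")
docs = loader.load()
state = loader.last_state  # persist this somewhere durable

# later: pass the saved state back in to only load new/updated records
incremental_loader = AirbyteStripeLoader(
    config=config, stream_name="invoices", state=state
)
new_docs = incremental_loader.load()
```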
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Description: This PR improves `recursive_url_loader` with features such
as limiting crawl depth and customizable extractors (mapping the raw
webpage to the text of the Document object), so that users can use other
tools to extract the webpage content; see the sketch below. This PR also
includes the documentation and a test for the new loader.
Old PR closed due to project structure change. #7756
Because socket requests are not allowed, the old unit test was removed.
Issue: N/A
Dependencies: asyncio, aiohttp
Tag maintainer: @rlancemartin
Twitter handle: @ Zend_Nihility
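A minimal sketch of the new options, assuming a `max_depth` limit and an `extractor` callable that maps raw HTML to Document text (the URL is just an example):

```python
from bs4 import BeautifulSoup
from langchain.document_loaders import RecursiveUrlLoader

loader = RecursiveUrlLoader(
    url="https://docs.python.org/3.9/",
    max_depth=2,  # don't crawl deeper than two levels from the root
    extractor=lambda html: BeautifulSoup(html, "html.parser").text,
)
docs = loader.load()
```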
---------
Co-authored-by: Lance Martin <lance@langchain.dev>
- Description: added a document loader for a list of RSS feeds or an
OPML file. It iterates through the list and uses NewsURLLoader to load
each article. See the sketch below.
- Issue: N/A
- Dependencies: feedparser, listparser
- Tag maintainer: @rlancemartin, @eyurtsev
- Twitter handle: @ruze
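A minimal sketch, assuming the loader is named `RSSFeedLoader` and accepts a list of feed URLs (an `opml` argument for OPML content would be the alternative input):

```python
from langchain.document_loaders import RSSFeedLoader

loader = RSSFeedLoader(urls=["https://news.ycombinator.com/rss"])
docs = loader.load()  # one Document per article, fetched via NewsURLLoader
```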
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: run the poetry dependencies
- Issue: #7329
- Dependencies: N/A
- Tag maintainer: @rlancemartin
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
### Summary
Updates the `unstructured` install instructions. For
`unstructured>=0.9.0`, dependencies are broken out by document type and
the base `unstructured` package includes fewer dependencies. `pip
install "unstructured[local-inference]"` has been replaced by `pip
install "unstructured[all-docs]"`, though the `local-inference` extra is
still supported for the time being.
### Reviewers
- @rlancemartin
- @eyurtsev
- @hwchase17
Description:
This PR adds support for loading documents from Huawei OBS (Object
Storage Service) in LangChain. OBS is a cloud-based object storage
service provided by Huawei Cloud. With this enhancement, LangChain users
can now easily access and load documents stored in Huawei OBS directly
into the system; see the sketch after the key changes below.
Key Changes:
- Added a new document loader module specifically for Huawei OBS
integration.
- Implemented the necessary logic to authenticate and connect to Huawei
OBS using access credentials.
- Enabled the loading of individual documents from a specified bucket
and object key in Huawei OBS.
- Provided the option to specify custom authentication information or
obtain security tokens from Huawei Cloud ECS for easy access.
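A minimal sketch of loading a single object, assuming the loader is named `OBSFileLoader` and accepts an `obs.ObsClient` from `esdk-obs-python` (the endpoint, credentials, bucket, and key are placeholders):

```python
from obs import ObsClient
from langchain.document_loaders.obs_file import OBSFileLoader

# placeholders for your Huawei Cloud OBS endpoint and credentials
client = ObsClient(
    access_key_id="<ak>",
    secret_access_key="<sk>",
    server="https://obs.eu-west-101.myhuaweicloud.eu",
)

loader = OBSFileLoader("my-bucket", "docs/report.txt", client=client)
docs = loader.load()
```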
How to Test:
1. Ensure the required package `esdk-obs-python` is installed.
2. Configure the endpoint, access key, secret key, and bucket details
for Huawei OBS in the LangChain settings.
3. Load documents from Huawei OBS using the updated document loader
module.
4. Verify that documents are successfully retrieved and loaded into
LangChain for further processing.
Please review this PR and let us know if any further improvements are
needed. Your feedback is highly appreciated!
@rlancemartin, @eyurtsev
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Works just like the GenericLoader, but loads files concurrently, for
those who choose to optimize their workflow.
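A minimal sketch, assuming the new loader keeps `GenericLoader`'s `from_filesystem` constructor and adds a concurrency knob (the `num_workers` name is an assumption):

```python
from langchain.document_loaders.concurrent import ConcurrentLoader

# parse a directory of text files, loading and parsing in parallel
loader = ConcurrentLoader.from_filesystem(
    "docs/",
    glob="**/*.txt",
    num_workers=4,  # assumed knob controlling the level of concurrency
)
docs = loader.load()
```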
@rlancemartin @eyurtsev
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Added a new tool to the GitHub toolkit called **Create Pull Request**.
Now we can make our own LangChain contributor in LangChain 😁
In order to have somewhere to pull from, I also added a new env var,
"GITHUB_BASE_BRANCH". This allows the existing env var,
"GITHUB_BRANCH", to be a working branch for the bot (so that it doesn't
always have to commit on main/master). For example, if you want the
bot to work in a branch called `bot_dev` and your repo base is `main`,
you would set up the vars like:
```
GITHUB_BASE_BRANCH = "main"
GITHUB_BRANCH = "bot_dev"
```
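With the env vars above set (along with the credentials the toolkit already requires, such as `GITHUB_APP_ID`, `GITHUB_APP_PRIVATE_KEY`, and `GITHUB_REPOSITORY`), a minimal sketch of handing the toolkit, including the new tool, to an agent:

```python
from langchain.agents import AgentType, initialize_agent
from langchain.agents.agent_toolkits.github.toolkit import GitHubToolkit
from langchain.llms import OpenAI
from langchain.utilities.github import GitHubAPIWrapper

# the wrapper reads the GITHUB_* env vars, including the new GITHUB_BASE_BRANCH
github = GitHubAPIWrapper()
toolkit = GitHubToolkit.from_github_api_wrapper(github)

agent = initialize_agent(
    toolkit.get_tools(),
    OpenAI(temperature=0),
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
agent.run("Open a pull request from the working branch into the base branch.")
```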
Maintainer responsibilities:
- Agents / Tools / Toolkits: @hinthornw