Commit Graph

3924 Commits

Author SHA1 Message Date
Luis Valencia
7124377524
Devcontainer README -> Clarification. (#8414)
- Description: The contribution guidelines for using the devcontainer
refer to the main repo rather than the forked repo. We should make our
changes in our own fork, not on langchain/main
  - Issue: Just documentation
  - Dependencies: N/A,
  - Tag maintainer: @baskaryan
  - Twitter handle: @levalencia
2023-07-28 15:09:42 -07:00
lvisdd
abe4c361f9
update get_num_tokens_from_messages model (#8431)
(#8430)

Co-authored-by: Kano Kunihiko <kkano@heroz.co.jp>
2023-07-28 15:07:03 -07:00
Jeffrey Wang
e0de62f6da
Add RoPE Scaling params from llamacpp (#8422)
Description:
Just adding the parameters from `llama-cpp-python` that support RoPE
scaling.
@hwchase17, @baskaryan

sources:
papers and explanation:
https://kaiokendev.github.io/context
llamacpp conversation:
https://github.com/ggerganov/llama.cpp/discussions/1965 
Supports models like:
https://huggingface.co/conceptofmind/LLongMA-2-13b
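A rough sketch of how the new parameters might be used through LangChain's
`LlamaCpp` wrapper (the field names are assumed to mirror llama-cpp-python;
check the `LlamaCpp` docstring for the exact names):
```python
from langchain.llms import LlamaCpp

# Sketch only: rope_freq_scale is assumed to mirror the llama-cpp-python
# parameter; the model path below is a placeholder for a local GGML file.
llm = LlamaCpp(
    model_path="/path/to/llongma-2-13b.ggmlv3.q4_0.bin",
    n_ctx=8192,             # extended context window
    rope_freq_scale=0.25,   # linear RoPE scaling factor
)
print(llm("Explain RoPE scaling in one sentence."))
```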
2023-07-28 14:42:41 -07:00
Bagatur
2db2987b1b
add experimental ref (#8435) 2023-07-28 14:26:47 -07:00
Harrison Chase
fab24457bc
remove code (#8425) 2023-07-28 13:19:44 -07:00
Harrison Chase
3a78450883
update experimental (#8402)
some changes were made to experimental, porting them over
2023-07-28 13:01:36 -07:00
Harrison Chase
af7e70d4af
expose function for converting messages to messages (#8426) 2023-07-28 13:00:54 -07:00
Eugene Yurtsev
06bdbe06fe
PromptTemplate update documentation and expand kwarg (#8423)
# PromptTemplate

* Update documentation to highlight the classmethod for instantiating a
prompt template.
* Expand kwargs in the classmethod to make parameters easier to discover

This PR got reverted here:
https://github.com/langchain-ai/langchain/pull/8395/files
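For reference, the classmethod highlighted by the updated docs looks roughly
like this (a minimal sketch):
```python
from langchain.prompts import PromptTemplate

# from_template infers the input variables from the template string
prompt = PromptTemplate.from_template("Tell me a {adjective} joke about {content}.")
print(prompt.format(adjective="funny", content="chickens"))
# -> Tell me a funny joke about chickens.
```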
2023-07-28 14:11:49 -04:00
Eugene Yurtsev
e62a1686e2
ChatPromptTemplate: minor fix in doc string (#8424)
Minor fix in doc-string to use `ai` rather than `assistant`
2023-07-28 13:01:13 -04:00
Eugene Yurtsev
760c278fe0
ChatPromptTemplate: Expand support for message formats and documentation (#8244)
* Expands support for a variety of message formats in the
`from_messages` classmethod. Ideally, we could deprecate the other
on-ramps to reduce the number of classmethods users need to know about.
* Expand documentation with code examples.
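A minimal sketch of the expanded `from_messages` input, assuming the
(role, template) 2-tuple format described above:
```python
from langchain.prompts import ChatPromptTemplate

# (role, template) tuples are accepted alongside Message and
# MessagePromptTemplate objects
template = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant named {name}."),
    ("human", "{question}"),
])
messages = template.format_messages(name="Bob", question="What is a prompt template?")
```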
2023-07-28 12:48:08 -04:00
Bagatur
61dd92f821
bump 246 (#8410) 2023-07-28 01:18:37 -07:00
Harrison Chase
394b67ab92
add kwargs to llm runnables (#8388) 2023-07-28 09:13:11 +01:00
HeTaoPKU
d5884017a9
Add Minimax llm model to langchain (#7645)
- Description: Minimax is a great AI startup from China; they recently
released their latest model and chat API, which is widely used in China.
As a result, I'd like to add the Minimax LLM model to LangChain (a usage
sketch follows below).
- Tag maintainer: @hwchase17, @baskaryan
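A minimal usage sketch (the credential parameter names are assumptions; see
the Minimax integration notebook for the exact API):
```python
from langchain.llms import Minimax

# Assumed parameter names; credentials can typically also be set via env vars.
llm = Minimax(
    minimax_api_key="YOUR_API_KEY",
    minimax_group_id="YOUR_GROUP_ID",
)
print(llm("What is the difference between a panda and a bear?"))
```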

---------

Co-authored-by: the <tao.he@hulu.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-27 22:53:23 -07:00
James Campbell
0ad2d5f27a
[nit] Add default value for ChatOpenAI client (#7939)
Micro convenience PR to avoid the warning about the missing `client`
parameter, since it is always set during initialization.

@baskaryan

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-27 22:38:32 -07:00
Harrison Chase
82df923f37 Merge branch 'master' of github.com:hwchase17/langchain 2023-07-27 22:01:20 -07:00
Harrison Chase
1b0bfa54cf cr 2023-07-27 22:00:52 -07:00
Jeff Vestal
c7ff5f19a8
ElasticKnnSearch rewrite - bug fix - return Document (#8180)
Fixes: 
https://github.com/hwchase17/langchain/issues/7117
https://github.com/hwchase17/langchain/issues/5760

Adds back `create_index`, `add_texts`, and `from_texts` to
ElasticKnnSearch.

`from_texts` matches the standard `from_texts` methods as a quick-start
method.

`knn_search` and `knn_hybrid_search` return a list of (`Document()`,
`score`) tuples.

# Test `from_texts` for quick start
```
# create new index using from_text

from langchain.vectorstores.elastic_vector_search import ElasticKnnSearch
from langchain.embeddings import ElasticsearchEmbeddings

model_id = "sentence-transformers__all-distilroberta-v1" 
dims = 768
es_cloud_id = ""
es_user = ""
es_password = ""
test_index = "knn_test_index_305"

embeddings = ElasticsearchEmbeddings.from_credentials(
    model_id,
    #input_field=input_field,
    es_cloud_id=es_cloud_id,
    es_user=es_user,
    es_password=es_password,
)

# add texts and create class instance
texts = ["This is a test document", "This is another test document"]
knnvectorsearch = ElasticKnnSearch.from_texts(
    texts=texts,
    embedding=embeddings,
    index_name=test_index,
    vector_query_field='vector',
    query_field='text',
    model_id=model_id,
    dims=dims,
    es_cloud_id=es_cloud_id,
    es_user=es_user,
    es_password=es_password
)

# Test `add_texts` method
texts2 = ["Hello, world!", "Machine learning is fun.", "I love Python."]
knnvectorsearch.add_texts(texts2)

query = "Hello"
knn_result = knnvectorsearch.knn_search(query = query, model_id= model_id, k=2)

hybrid_result = knnvectorsearch.knn_hybrid_search(query = query, model_id= model_id, k=2)

```

The mapping is as follows:
```
{
  "knn_test_index_012": {
    "mappings": {
      "properties": {
        "text": {
          "type": "text"
        },
        "vector": {
          "type": "dense_vector",
          "dims": 768,
          "index": true,
          "similarity": "dot_product"
        }
      }
    }
  }
}
```

# Check response type
```
>>> hybrid_result
[(Document(page_content='Hello, world!', metadata={}), 0.94232327), (Document(page_content='I love Python.', metadata={}), 0.5321523)]

>>> hybrid_result[0]
(Document(page_content='Hello, world!', metadata={}), 0.94232327)

>>> hybrid_result[0][0]
Document(page_content='Hello, world!', metadata={})

>>> type(hybrid_result[0][0])
<class 'langchain.schema.document.Document'>
```

# Test with existing Index
```
from langchain.vectorstores.elastic_vector_search import ElasticKnnSearch
from langchain.embeddings import ElasticsearchEmbeddings

## Initialize ElasticsearchEmbeddings
model_id = "sentence-transformers__all-distilroberta-v1" 
dims = 768
es_cloud_id = ""
es_user = ""
es_password = ""
test_index = "knn_test_index_012"

embeddings = ElasticsearchEmbeddings.from_credentials(
    model_id,
    es_cloud_id=es_cloud_id,
    es_user=es_user,
    es_password=es_password,
)

## Initialize ElasticKnnSearch
knn_search = ElasticKnnSearch(
    es_cloud_id=es_cloud_id,
    es_user=es_user,
    es_password=es_password,
    index_name=test_index,
    embedding=embeddings
)


## Test adding vectors

### Test `add_texts` method when index created
texts = ["Hello, world!", "Machine learning is fun.", "I love Python."]
knn_search.add_texts(texts)

```

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-27 22:00:18 -07:00
Harrison Chase
a221a9ced0
Harrison/sql query (#8370)
Co-authored-by: Nuno Campos <nuno@boringbits.io>
2023-07-27 21:55:17 -07:00
Bagatur
a1a650c743
Bagatur/from texts bug fix (#8394)
---------

Co-authored-by: Davit Buniatyan <davit@loqsh.com>
Co-authored-by: Davit Buniatyan <d@activeloop.ai>
Co-authored-by: adilkhan <adilkhan.sarsen@nu.edu.kz>
Co-authored-by: Ivo Stranic <istranic@gmail.com>
2023-07-27 21:52:38 -07:00
Jiayi Ni
1efb9bae5f
FEAT: Integrate Xinference LLMs and Embeddings (#8171)
- [Xorbits
Inference(Xinference)](https://github.com/xorbitsai/inference) is a
powerful and versatile library designed to serve language, speech
recognition, and multimodal models. Xinference supports a variety of
GGML-compatible models including chatglm, whisper, and vicuna, and
utilizes heterogeneous hardware and a distributed architecture for
seamless cross-device and cross-server model deployment.
- This PR integrates Xinference models and Xinference embeddings into
LangChain.
- Dependencies: To install the dependencies for this integration, run
    
    `pip install "xinference[all]"`
    
- Example Usage:

To start a local instance of Xinference, run `xinference`.

To deploy Xinference in a distributed cluster, first start an Xinference
supervisor using `xinference-supervisor`:

`xinference-supervisor -H "${supervisor_host}"`

Then, start the Xinference workers using `xinference-worker` on each
server you want to run them on.

`xinference-worker -e "http://${supervisor_host}:9997"`

To use Xinference with LangChain, you also need to launch a model. You
can use the command line interface (CLI) to do so. For example: `xinference
launch -n vicuna-v1.3 -f ggmlv3 -q q4_0`. This launches a model named
vicuna-v1.3 with `model_format="ggmlv3"` and `quantization="q4_0"`. A
model UID is returned for you to use.

Now you can use Xinference with LangChain:

```python
from langchain.llms import Xinference

llm = Xinference(
    server_url="http://0.0.0.0:9997",  # suppose the supervisor_host is "0.0.0.0"
    model_uid=model_uid,  # the model UID returned from launching the model
)

llm(
    prompt="Q: where can we visit in the capital of France? A:",
    generate_config={"max_tokens": 1024},
)
```

You can also use the RESTful client to launch a model:
```python
from xinference.client import RESTfulClient

client = RESTfulClient("http://0.0.0.0:9997")

model_uid = client.launch_model(model_name="vicuna-v1.3", model_size_in_billions=7, quantization="q4_0")
```

The following code block demonstrates how to use Xinference embeddings
with LangChain:
```python
from langchain.embeddings import XinferenceEmbeddings

xinference = XinferenceEmbeddings(
    server_url="http://0.0.0.0:9997",
    model_uid = model_uid
)
```

```python
query_result = xinference.embed_query("This is a test query")
```

```python
doc_result = xinference.embed_documents(["text A", "text B"])
```

Xinference is still under rapid development. Feel free to [join our
Slack
community](https://xorbitsio.slack.com/join/shared_invite/zt-1z3zsm9ep-87yI9YZ_B79HLB2ccTq4WA)
to get the latest updates!

- Request for review: @hwchase17, @baskaryan
- Twitter handle: https://twitter.com/Xorbitsio

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-27 21:23:19 -07:00
Bagatur
877d384bc9
Revert "PromptTemplate update documentation and expand kwargs (#8234)" (#8395)
FYI @eyurtsev: this was failing a unit test
2023-07-27 21:11:10 -07:00
Gordon Clark
e66759cc9d
Github add "Create PR" tool + Docs update (#8235)
Added a new tool to the Github toolkit called **Create Pull Request.**
Now we can make our own langchain contributor in langchain 😁

In order to have somewhere to pull from, I also added a new env var,
"GITHUB_BASE_BRANCH." This will allow the existing env var,
"GITHUB_BRANCH," to be a working branch for the bot (so that it doesn't
have to always commit on the main/master). For example, if you want the
bot to work in a branch called `bot_dev` and your repo base is `main`,
you would set up the vars like:
```
GITHUB_BASE_BRANCH = "main"
GITHUB_BRANCH = "bot_dev"
``` 
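A rough sketch of wiring the toolkit up with these variables (the class
names and the remaining env vars are assumptions based on the existing
GitHub toolkit; check the toolkit docs for the exact setup):
```python
import os
from langchain.agents.agent_toolkits.github.toolkit import GitHubToolkit
from langchain.utilities.github import GitHubAPIWrapper

# Assumed env vars for the target repo and working branches.
os.environ["GITHUB_REPOSITORY"] = "your-org/your-repo"
os.environ["GITHUB_BASE_BRANCH"] = "main"      # branch PRs are opened against
os.environ["GITHUB_BRANCH"] = "bot_dev"        # working branch for the bot

github = GitHubAPIWrapper()
toolkit = GitHubToolkit.from_github_api_wrapper(github)
tools = toolkit.get_tools()  # now includes the "Create Pull Request" tool
```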

Maintainer responsibilities:
  - Agents / Tools / Toolkits: @hinthornw
2023-07-27 19:19:44 -07:00
William FH
ecd4aae818
Few Shot Chat Prompt (#8038)
Proposal for a few shot chat message example selector
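A minimal sketch of what the proposed few-shot chat prompt could look like
(the class name and import path are assumptions based on the PR title):
```python
from langchain.prompts import ChatPromptTemplate
from langchain.prompts.few_shot import FewShotChatMessagePromptTemplate

# Each example is rendered through example_prompt as a human/ai message pair.
example_prompt = ChatPromptTemplate.from_messages([
    ("human", "{input}"),
    ("ai", "{output}"),
])
few_shot_prompt = FewShotChatMessagePromptTemplate(
    example_prompt=example_prompt,
    examples=[{"input": "2+2", "output": "4"}, {"input": "2+3", "output": "5"}],
)
final_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a wondrous wizard of math."),
    few_shot_prompt,
    ("human", "{input}"),
])
```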

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2023-07-27 18:46:10 -07:00
Eugene Yurtsev
6dd18eee26
PromptTemplate update documentation and expand kwargs (#8234)
# PromptTemplate

* Update documentation to highlight the classmethod for instantiating a
prompt template.
* Expand kwargs in the classmethod to make parameters easier to discover
2023-07-27 18:11:39 -07:00
Karan V
a003a0baf6
fix(petals) allows to run models that aren't Bloom (Support for LLama and newer models) (#8356)
In this PR:

- Removed restricted model loading logic for Petals-Bloom
- Removed petals imports (DistributedBloomForCausalLM,
BloomTokenizerFast)
- Imported the more generalized loaders instead
(AutoDistributedModelForCausalLM, AutoTokenizer); see the sketch below
- Updated the Petals example notebook to allow for a successful
installation of Petals on Apple Silicon Macs

- Tag maintainer: @hwchase17, @baskaryan
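A minimal sketch of loading a non-Bloom checkpoint through the LangChain
wrapper (the model name is only an illustration):
```python
from langchain.llms import Petals

# With the generalized Auto* loaders, non-Bloom checkpoints should load too.
llm = Petals(model_name="enoch/llama-65b-hf")  # illustrative model id
print(llm("Once upon a time, "))
```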

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-27 18:01:04 -07:00
lars.gersmann
e758e9e7f5
fix(openapi): openapi chain will work without/empty description/summa… (#8351)
Description:

This PR enables the OpenAPI chain to work with valid OpenAPI
specifications that are missing `description` and `summary` properties
on path and operation nodes.

Since both the `description` and `summary` properties are declared
optional, we cannot be sure they are defined. This PR resolves the
problem by providing an empty (`''`) description as a fallback.

Previously, the underlying LLM (OpenAI) threw an exception because
`None` is not of type string:

```
openai.error.InvalidRequestError: None is not of type 'string' - 'functions.0.description'
```

With this PR, the OpenAPI chain also succeeds on OpenAPI specs lacking
`description` and `summary` properties for path and operation nodes.
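
The essence of the fallback can be sketched as a hypothetical helper (not
the exact code in the chain):
```python
from typing import Optional

def resolve_description(summary: Optional[str], description: Optional[str]) -> str:
    """Hypothetical helper: never hand None to the functions payload."""
    return description or summary or ""

assert resolve_description(None, None) == ""
assert resolve_description("List pets", None) == "List pets"
```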

Thanks for your amazing work!

Tag maintainer: @baskaryan

---------

Co-authored-by: Lars Gersmann <lars.gersmann@cm4all.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-27 17:58:43 -07:00
ljeagle
caa6caeb8a
Upgrade the AwaDB from v0.3.7 to v0.3.9 and change the default embeddings (#8281)
1. Upgrade the AwaDB from v0.3.7 to v0.3.9
2. Change the default embedding to AwaEmbedding

---------

Co-authored-by: ljeagle <awadb.vincent@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-07-27 17:20:50 -07:00
Harrison Chase
25b8cc7e3d
Harrison/update memory docs (#8384)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-27 17:18:19 -07:00
Holt Skinner
d7e6770de8
refactor: Code refactoring & simplification for Google Cloud Enterprise Search retriever (#8369)
Followup to https://github.com/langchain-ai/langchain/pull/7857

- Changes `_convert_search_response()` to use object attributes instead
of converting to dictionary
- Simplifies logic for readability
2023-07-27 17:13:49 -07:00
Taozhi Wang
594f195e54
Add embeddings for AwaEmbedding (#8353)
- Description: Adds the AwaEmbeddings class for embeddings, which provides
users with a convenient way to do fine-tuning and supports potential
multimodal needs (a usage sketch follows below)

  - Tag maintainer: @baskaryan

Create `Awa.ipynb`: an example notebook for AwaEmbeddings class
Modify `embeddings/__init__.py`: Import the class
Create `embeddings/awa.py`: The embedding class
Create `embeddings/test_awa.py`: The test file.
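
A minimal usage sketch (constructor arguments are assumptions; see
`Awa.ipynb` for the exact usage):
```python
from langchain.embeddings import AwaEmbeddings

embedding = AwaEmbeddings()
query_vector = embedding.embed_query("This is a test query")
doc_vectors = embedding.embed_documents(["text A", "text B"])
```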

---------

Co-authored-by: taozhiwang <taozhiwa@gmail.com>
2023-07-27 17:08:00 -07:00
thehunmonkgroup
ba4e82bb47
fix missing _identifying_params() in _VertexAICommon (#8303)
The full set of params is missing from Vertex* LLMs when the `dict()`
method is called.

```
>>> from langchain.chat_models.vertexai import ChatVertexAI
>>> from langchain.llms.vertexai import VertexAI
>>> chat_llm = ChatVertexAI()
>>> llm = VertexAI()
>>> chat_llm.dict()
{'_type': 'vertexai'}
>>> llm.dict()
{'_type': 'vertexai'}
```

This PR just uses the same mechanism used elsewhere to expose the full
params.

Since `_identifying_params()` is on the `_VertexAICommon` class, it
should cover the chat and non-chat cases.
2023-07-27 16:59:10 -07:00
bheroder
dc3ca44e05
Add an example for azure ml managed feature store (#8324)
We are adding an example of how one can connect to the Azure ML managed
feature store and use such a prompt template in an LLM chain. @baskaryan
2023-07-27 16:56:06 -07:00
Caitlin2694
b2e4b9dca4
Fix exception caused by restrictions in OWL (#8341)
Description: Fix exception caused by restrictions in OWL
Issue: #8331
Dependencies: none
Maintainer: @baskaryan
2023-07-27 16:51:32 -07:00
Harrison Chase
cddd8ae83d
update release yml (#8364)
only do the step that tags and adds release notes if it's langchain
2023-07-27 16:49:04 -07:00
Nikita Pokidyshev
f499e6ea6a
Add FunctionMessage to _message_from_dict (#8374)
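A minimal round-trip sketch of what this enables (assuming the standard
`messages_to_dict` / `messages_from_dict` helpers):
```python
from langchain.schema import FunctionMessage, messages_from_dict, messages_to_dict

# Previously the "function" message type could not be reconstructed from a dict.
msg = FunctionMessage(name="get_weather", content='{"temp_c": 21}')
restored = messages_from_dict(messages_to_dict([msg]))
assert restored[0] == msg
```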
2023-07-27 16:45:27 -07:00
evelynmitchell
539574670c
Update tot.ipynb (#8387)
Spelling error fix

2023-07-27 16:44:41 -07:00
emarco177
2ab13ab743
added unit tests for mrkl output_parser.py (#8321)
- Description: added unit tests for mrkl output_parser.py (see the sketch below)
  - Tag maintainer: @hinthornw
  - Twitter handle: EdenEmarco177
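A sketch of the kind of case such tests cover (import path assumed):
```python
from langchain.agents.mrkl.output_parser import MRKLOutputParser
from langchain.schema import AgentAction

parser = MRKLOutputParser()
result = parser.parse(
    "Thought: I need to search.\nAction: Search\nAction Input: langchain"
)
assert isinstance(result, AgentAction)
assert result.tool == "Search"
```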
2023-07-27 13:46:06 -07:00
Sachin Varghese
01217b2247
Update sql database agent example (#8354)
This PR fixes a minor documentation issue on the SQL database toolkit
example notebook.
2023-07-27 13:44:02 -07:00
Bagatur
55beab326c
cleanup warnings (#8379) 2023-07-27 13:43:05 -07:00
William FH
41524304bf
Update local script for docs build (#8377) 2023-07-27 13:13:59 -07:00
Harrison Chase
f5bf893035
rename to str output parser (#8373) 2023-07-27 12:57:34 -07:00
William FH
0e9e5b5202
Retry events on any run type (#8375) 2023-07-27 12:56:46 -07:00
Bagatur
68763bd25f
mv popular and additional chains to use cases (#8242) 2023-07-27 12:55:13 -07:00
William FH
ff98fad2d9
Add Retry Events (#8053)
![image](https://github.com/hwchase17/langchain/assets/13333726/59a5c3b4-4367-47e6-9f58-5b6557576a8a)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-07-27 12:39:39 -07:00
William FH
94a693e2ee
Link to use cases from tutorials (#8371) 2023-07-27 11:54:04 -07:00
Nuno Campos
0eca3e7d90
Add Runnable.bind method to attach kwargs to a Runnable that will be passed to all invoke/stream/batch calls when it is run (#8368)
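A minimal sketch of the new method (the chat model and message are only
placeholders to show the call shape):
```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

model = ChatOpenAI()

# bind() returns a new Runnable that forwards stop=["\n"] to every
# invoke/stream/batch call made on it, without mutating the model itself.
bound = model.bind(stop=["\n"])
bound.invoke([HumanMessage(content="Tell me a joke about bears")])
```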
2023-07-27 11:16:30 -07:00
Harrison Chase
cf608f876b update link 2023-07-27 09:47:57 -07:00
Nuno Campos
1bbadde77b
Support using RunnableMap directly (#8317)
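A minimal sketch of using a `RunnableMap` directly (assuming the lambdas
are wrapped in `RunnableLambda`):
```python
from langchain.schema.runnable import RunnableLambda, RunnableMap

# Runs every entry on the same input and returns a dict of the results.
mapper = RunnableMap({
    "doubled": RunnableLambda(lambda x: x * 2),
    "squared": RunnableLambda(lambda x: x * x),
})
mapper.invoke(3)  # -> {"doubled": 6, "squared": 9}
```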
2023-07-27 17:24:29 +01:00
Bagatur
944321c6ab
bump 245 (#8359) 2023-07-27 06:53:24 -07:00
Rubén Barragán
ef6332ead6
Support loading files from Dropbox (#8271)
## Description
This commit introduces the `DropboxLoader` class, a new document loader
that allows loading files from Dropbox into the application. The loader
relies on a Dropbox app: you need to create an app on Dropbox, obtain
the necessary scope permissions, and generate an access token. The
`dropbox` Python package is also required.

The `DropboxLoader` class is designed to be used as a document loader
for processing various file types, including text files, PDFs, and
Dropbox Paper files.

## Dependencies
`pip install dropbox` and `pip install unstructured` for PDF reading.
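
A minimal usage sketch (the parameter names are assumptions; see the
loader's notebook for the exact arguments):
```python
from langchain.document_loaders import DropboxLoader

# Assumed parameter names for the access token and the folder to load from.
loader = DropboxLoader(
    dropbox_access_token="YOUR_DROPBOX_ACCESS_TOKEN",
    dropbox_folder_path="/my-folder",
)
docs = loader.load()
```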

## Tag maintainer
@rlancemartin, @eyurtsev (from Data Loaders). I'd appreciate some
feedback here 🙏 .

## Social Networks
https://github.com/rubenbarragan
https://www.linkedin.com/in/rgbarragan/
https://twitter.com/RubenBarraganP

---------

Co-authored-by: Ruben Barragan <rbarragan@Rubens-MacBook-Air.local>
2023-07-27 06:36:08 -07:00