Commit Graph

1003 Commits

Author SHA1 Message Date
Ravindra Marella
b3988621c5
Add C Transformers for GGML Models (#5218)
# Add C Transformers for GGML Models
I created Python bindings for the GGML models:
https://github.com/marella/ctransformers

Currently it supports GPT-2, GPT-J, GPT-NeoX, LLaMA, MPT, etc. See
[Supported
Models](https://github.com/marella/ctransformers#supported-models).


It provides a unified interface for all models:

```python
from langchain.llms import CTransformers

llm = CTransformers(model='/path/to/ggml-gpt-2.bin', model_type='gpt2')

print(llm('AI is going to'))
```

It can be used with models hosted on the Hugging Face Hub:

```py
llm = CTransformers(model='marella/gpt-2-ggml')
```

It supports streaming:

```py
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

llm = CTransformers(model='marella/gpt-2-ggml', callbacks=[StreamingStdOutCallbackHandler()])
```

Please see [README](https://github.com/marella/ctransformers#readme) for
more details.
---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-25 13:42:44 -07:00
Davis Chase
ca88b25da6
Zep sdk version (#5267)
zep-python's sync methods no longer need an asyncio wrapper. This was
causing issues with FastAPI deployment.
Zep also now supports putting and getting of arbitrary message metadata.

Bump zep-python version to v0.30

Remove nest-asyncio from Zep example notebooks.

Modify tests to include metadata.

---------

Co-authored-by: Daniel Chalef <daniel.chalef@private.org>
Co-authored-by: Daniel Chalef <131175+danielchalef@users.noreply.github.com>
2023-05-25 13:42:10 -07:00
Janil Wörst
5525602df0
Docs link custom agent page in getting started (#5250)
# Docs: link custom agent page in getting started
2023-05-25 13:11:30 -07:00
Davis Chase
3be9ba14f3
OpenSearch top k parameter fix (#5216)
For most queries it's the `size` parameter that determines the final number
of documents to return. Since our abstractions refer to this as `k`, set
this to be `k` everywhere instead of expecting a separate param. Would
be great to have someone more familiar with OpenSearch validate that
this is reasonable (e.g. that having `size` and what OpenSearch calls
`k` be the same won't lead to any strange behavior). cc @naveentatikonda
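
For reference, a minimal sketch of how `k` is used from the vectorstore side (the endpoint, index name, and embedding model below are placeholders):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import OpenSearchVectorSearch

docsearch = OpenSearchVectorSearch(
    opensearch_url="http://localhost:9200",  # placeholder endpoint
    index_name="test-index",
    embedding_function=OpenAIEmbeddings(),
)
# `k` now controls the final number of documents returned (mapped to OpenSearch's `size`).
docs = docsearch.similarity_search("What did the president say about Ketanji Brown Jackson?", k=5)
```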

Closes #5212
2023-05-25 09:51:23 -07:00
Yves Maurer
88ed8e1cd6
Added the option of specifying a proxy for the OpenAI API (#5246)
# Added the option of specifying a proxy for the OpenAI API

Fixes #5243
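
A minimal usage sketch, assuming the new option is exposed as an `openai_proxy` parameter (the proxy URL is a placeholder):

```python
from langchain.llms import OpenAI

llm = OpenAI(
    model_name="text-davinci-003",
    openai_proxy="http://proxy.example.com:8080",  # assumed parameter name; placeholder URL
)
print(llm("Tell me a joke"))
```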

Co-authored-by: Yves Maurer <>
2023-05-25 09:50:25 -07:00
mwinterde
9c0cb90997
Resolve error in StructuredOutputParser docs (#5240)
# Resolve error in StructuredOutputParser docs

The documentation for `StructuredOutputParser` is currently not reproducible:
`output_parser.parse(output)` raises an error because the LLM
returns a response in an invalid format.

```python
_input = prompt.format_prompt(question="what's the capital of france")
output = model(_input.to_string())

output

# ?
#
# ```json
# {
# 	"answer": "Paris",
# 	"source": "https://www.worldatlas.com/articles/what-is-the-capital-of-france.html"
# }
# ```
```

This was fixed by adding a question mark to the prompt.
2023-05-25 07:47:25 -07:00
Shukri
09e246f306
Weaviate: Add QnA with sources example (#5247)
# Add QnA with sources example 


Fixes: see
https://stackoverflow.com/questions/76207160/langchain-doesnt-work-with-weaviate-vector-database-getting-valueerror/76210017#76210017
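
For illustration, a minimal QnA-with-sources sketch against a Weaviate index (the class name, text key, and `source` attribute are placeholders; assumes documents were indexed with a `source` metadata field):

```python
import weaviate
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.llms import OpenAI
from langchain.vectorstores import Weaviate

client = weaviate.Client("http://localhost:8080")  # placeholder Weaviate instance
vectorstore = Weaviate(client, "Paragraph", "content", attributes=["source"])

chain = RetrievalQAWithSourcesChain.from_chain_type(
    OpenAI(temperature=0),
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
)
chain({"question": "What did the president say about Justice Breyer?"}, return_only_outputs=True)
```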

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

@dev2049
2023-05-25 09:58:33 -04:00
Archon
5cdd9ab7e1
Add MiniMax embeddings (#5174)
- Add support for MiniMax embeddings

Doc: [MiniMax
embeddings](https://api.minimax.chat/document/guides/embeddings?id=6464722084cdc277dfaa966a)
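
A minimal usage sketch, assuming credentials are picked up from the `MINIMAX_GROUP_ID` and `MINIMAX_API_KEY` environment variables:

```python
from langchain.embeddings import MiniMaxEmbeddings

embeddings = MiniMaxEmbeddings()

query_vector = embeddings.embed_query("What is MiniMax?")
doc_vectors = embeddings.embed_documents(["MiniMax provides embedding models."])
```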

---------

Co-authored-by: Archon <archongum@outlook.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-25 06:57:49 -07:00
Eugene Yurtsev
5cfa72a130
Bibtex integration for document loader and retriever (#5137)
# Bibtex integration

Wrap bibtexparser to retrieve a list of docs from a bibtex file.
* Get the metadata from the bibtex entries
* `page_content` is read from the local pdf referenced in the `file` field
of the bibtex entry using `pymupdf`
* If there is no valid pdf file, `page_content` is set to the `abstract` field of
the bibtex entry
* Support Zotero flavour using regex to get the file path
* Added usage example in
`docs/modules/indexes/document_loaders/examples/bibtex.ipynb`
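
A minimal usage sketch (the bibtex path is a placeholder):

```python
from langchain.document_loaders import BibtexLoader

loader = BibtexLoader("./references.bib")  # placeholder path
docs = loader.load()

print(docs[0].metadata)            # bibtex fields such as title, authors, year
print(docs[0].page_content[:200])  # pdf text if a valid `file` entry exists, otherwise the abstract
```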
---------

Co-authored-by: Sébastien M. Popoff <sebastien.popoff@espci.fr>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-25 00:21:31 -07:00
Keno
eff31a3361
Remove API key from docs (#5223)
I found an API key for `serpapi_api_key` while reading the docs. It
seems to have been modified very recently. Removed it in this PR.
@hwchase17 - project lead
2023-05-24 22:25:39 -07:00
Leonid Ganeline
2ad29f410d
fix a mistake in concepts.md (#5222)
# fix a mistake in concepts.md


## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:
2023-05-24 21:47:22 -07:00
Harrison Chase
a775aa6389
Harrison/vertex (#5049)
Co-authored-by: Leonid Kuligin <kuligin@google.com>
Co-authored-by: Leonid Kuligin <lkuligin@yandex.ru>
Co-authored-by: sasha-gitg <44654632+sasha-gitg@users.noreply.github.com>
Co-authored-by: Justin Flick <Justinjayflick@gmail.com>
Co-authored-by: Justin Flick <jflick@homesite.com>
2023-05-24 15:51:12 -07:00
Davis Chase
dcee8936c1
nit (#5208) 2023-05-24 12:52:20 -07:00
Alon Diament
44abe925df
Add Joplin document loader (#5153)
# Add Joplin document loader

[Joplin](https://joplinapp.org/) is an open source note-taking app.

Joplin has a [REST API](https://joplinapp.org/api/references/rest_api/)
for accessing its local database. The proposed `JoplinLoader` uses the
API to retrieve all notes in the database and their metadata. Joplin
needs to be installed and running locally, and an access token is
required.

- The PR includes an integration test.
- The PR includes an example notebook.
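
A minimal usage sketch (assumes Joplin is running locally with the Web Clipper service enabled; the token is a placeholder):

```python
from langchain.document_loaders import JoplinLoader

loader = JoplinLoader(access_token="<joplin-access-token>")  # placeholder token
docs = loader.load()
```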

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-24 12:31:55 -07:00
Rodrigo Siqueira
f10be072ff
Add Iugu document loader (#5162)
Create IUGU loader
---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-24 11:47:01 -07:00
Davis Chase
2b2176a3c1
tfidf retriever (#5114)
Co-authored-by: vempaliakhil96 <vempaliakhil96@gmail.com>
2023-05-24 10:02:09 -07:00
Shukri
b00c77dc62
Improve weaviate vectorstore docs (#5201)
# Improve weaviate vectorstore docs
2023-05-24 09:31:48 -07:00
Harrison Chase
11c26ebb55
Harrison/modelscope (#5156)
Co-authored-by: thomas-yanxin <yx20001210@163.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-24 08:06:45 -07:00
Jeff Vestal
cf19a2a59f
example usage (#5182)
Adding example usage for Elasticsearch kNN embeddings
[per](https://github.com/hwchase17/langchain/pull/3401#issuecomment-1548518389)


https://github.com/hwchase17/langchain/blob/master/langchain/embeddings/elasticsearch.py
2023-05-24 07:47:15 -07:00
Ikko Eltociear Ashimine
fff21a0b35
Update rellm_experimental.ipynb (#5189)
HuggingFace -> Hugging Face
2023-05-24 11:41:00 +00:00
Nolan Tremelling
faa26650c9
Beam (#4996)
# Beam

Calls the Beam API wrapper to deploy and make subsequent calls to an
instance of the gpt2 LLM in a cloud deployment. Requires installation of
the Beam library and registration of Beam Client ID and Client Secret.
Additional calls can then be made through the instance of the large
language model in your code or by calling the Beam API.
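
For illustration, a rough sketch of deploying and calling the gpt2 model (the parameter names and the deploy/call helpers are illustrative; consult the Beam docs and the accompanying notebook for the exact interface, and note that the Beam Client ID and Client Secret are assumed to be configured):

```python
from langchain.llms.beam import Beam

llm = Beam(
    model_name="gpt2",
    name="langchain-gpt2",  # illustrative app name
    cpu=8,
    memory="32Gi",
    gpu="A10G",
    python_version="python3.8",
    python_packages=["transformers", "torch"],
    max_length="50",
)

llm._deploy()  # deploy the app to Beam
response = llm._call("Running machine learning on a remote GPU")
print(response)
```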

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-24 01:25:18 -07:00
Ofer Mendelevitch
c81fb88035
Vectara (#5069)
# Vectara Integration

This PR provides integration with Vectara. Implemented here are:
* langchain/vectorstore/vectara.py
* tests/integration_tests/vectorstores/test_vectara.py
* langchain/retrievers/vectara_retriever.py
And two IPYNB notebooks to do more testing:
* docs/modules/chains/index_examples/vectara_text_generation.ipynb
* docs/modules/indexes/vectorstores/examples/vectara.ipynb
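
A minimal usage sketch, assuming `VECTARA_CUSTOMER_ID`, `VECTARA_CORPUS_ID`, and `VECTARA_API_KEY` are set in the environment:

```python
from langchain.vectorstores import Vectara

vectara = Vectara.from_texts(
    ["Vectara is a managed retrieval platform."],
    embedding=None,  # Vectara handles embedding server-side
)
docs = vectara.similarity_search("What is Vectara?")
retriever = vectara.as_retriever()
```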

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-24 01:24:58 -07:00
Jason Bosco
9c4b43b494
Add Typesense vector store (#1674)
Closes #931.

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-23 23:20:45 -07:00
Leonid Ganeline
33929489b9
docs: added missed document_loaders examples (#5150)
# Docs: added missing document_loaders examples

Added missing examples: `JSON`, `Open Document Format (ODT)`,
`Wikipedia`, `tomarkdown`.
Updated them to a consistent format.

## Who can review?

@hwchase17 
@dev2049
2023-05-23 21:56:41 -07:00
Daniel Quinteros
c111134a55
Clarification of the reference to the "get_text_legth" function in ge… (#5154)
# Clarification of the reference to the "get_text_legth" function in
getting_started.md

The reference to the function "get_text_legth" in the documentation did not
make sense. A comment was added for clarification.

@hwchase17
2023-05-23 20:43:38 -07:00
Daniel Quinteros
de4ef24f75
Docs: updated getting_started.md (#5151)
# Docs: updated getting_started.md

Just cleaning up some unnecessary spaces in the example of "pass few
shot examples to a prompt template".

@vowelparrot
2023-05-23 20:43:26 -07:00
Daniel King
de6e6c764e
Add MosaicML inference endpoints (#4607)
# Add MosaicML inference endpoints
This PR adds support in langchain for MosaicML inference endpoints. We
both serve a select few open source models, and allow customers to
deploy their own models using our inference service. Docs are here
(https://docs.mosaicml.com/en/latest/inference.html), and sign up form
is here (https://forms.mosaicml.com/demo?utm_source=langchain). I'm not
intimately familiar with the details of langchain, or the contribution
process, so please let me know if there is anything that needs fixing or
if this is the wrong way to submit a new integration, thanks!

I'm also not sure what the procedure is for integration tests. I have
tested locally with my api key.
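
A minimal usage sketch, assuming the LLM wrapper is exposed as `MosaicML` and an API token is set via the `MOSAICML_API_TOKEN` environment variable:

```python
from langchain.llms import MosaicML

llm = MosaicML(model_kwargs={"do_sample": False})
print(llm("What is one good reason to train an LLM on domain-specific data?"))
```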

## Who can review?
@hwchase17

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-05-23 15:59:08 -07:00
Adheeban Manoharan
68f0d45485
Adding Weather Loader (#5056)
Co-authored-by: Tyler Hutcherson <tyler.hutcherson@redis.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-23 15:57:33 -07:00
Jeff Vestal
0b542a9706
Add ElasticsearchEmbeddings class for generating embeddings using Elasticsearch models (#3401)
This PR introduces a new module, `elasticsearch_embeddings.py`, which
provides a wrapper around Elasticsearch embedding models. The new
ElasticsearchEmbeddings class allows users to generate embeddings for
documents and query texts using a [model deployed in an Elasticsearch
cluster](https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-model-ref.html#ml-nlp-model-ref-text-embedding).

### Main features:

1. The ElasticsearchEmbeddings class initializes with an Elasticsearch
connection object and a model_id, providing an interface to interact
with the Elasticsearch ML client through
[infer_trained_model](https://elasticsearch-py.readthedocs.io/en/v8.7.0/api.html?highlight=trained%20model%20infer#elasticsearch.client.MlClient.infer_trained_model).
2. The `embed_documents()` method generates embeddings for a list of
documents, and the `embed_query()` method generates an embedding for a
single query text.
3. The class supports custom input text field names in case the deployed
model expects a different field name than the default `text_field`.
4. The implementation is compatible with any model deployed in
Elasticsearch that generates embeddings as output.

### Benefits:

1. Simplifies the process of generating embeddings using Elasticsearch
models.
2. Provides a clean and intuitive interface to interact with the
Elasticsearch ML client.
3. Allows users to easily integrate Elasticsearch-generated embeddings.

Related issue https://github.com/hwchase17/langchain/issues/3400
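
A rough usage sketch based on the description above (the connection details, model id, and constructor shape are assumptions; see the module for the exact API):

```python
from elasticsearch import Elasticsearch
from langchain.embeddings import ElasticsearchEmbeddings

es = Elasticsearch("https://localhost:9200", api_key="<api-key>")  # placeholder connection
embeddings = ElasticsearchEmbeddings(es, model_id="sentence-transformers__all-minilm-l6-v2")

doc_vectors = embeddings.embed_documents(["first document", "second document"])
query_vector = embeddings.embed_query("a question about the documents")
```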

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-23 14:50:33 -07:00
Myeongseop Kim
7a75bb2121
docs: fix minor typo + add wikipedia package installation part in human_input_llm.ipynb (#5118)
# Fix typo + add wikipedia package installation part in
human_input_llm.ipynb
This PR
1. Fixes a typo ("the the human input LLM"),
2. Adds the wikipedia package installation part (in accordance with the
`WikipediaQueryRun`
[documentation](https://python.langchain.com/en/latest/modules/agents/tools/examples/wikipedia.html))

in `human_input_llm.ipynb`
(`docs/modules/models/llms/examples/human_input_llm.ipynb`)
2023-05-23 10:59:30 -07:00
Ayan Bandyopadhyay
5c87dbf5a8
Add link to Psychic from document loaders documentation page (#5115)
# Add link to Psychic from document loaders documentation page

In my previous PR I forgot to update `document_loaders.rst` to link to
`psychic.ipynb` to make it discoverable from the main documentation.
2023-05-23 06:47:23 -07:00
Tian Wei
d7f807b71f
Add AzureCognitiveServicesToolkit to call Azure Cognitive Services API (#5012)
# Add AzureCognitiveServicesToolkit to call Azure Cognitive Services
API: achieve some multimodal capabilities

This PR adds a toolkit named AzureCognitiveServicesToolkit which bundles
the following tools:
- AzureCogsImageAnalysisTool: calls Azure Cognitive Services image
analysis API to extract caption, objects, tags, and text from images.
- AzureCogsFormRecognizerTool: calls Azure Cognitive Services form
recognizer API to extract text, tables, and key-value pairs from
documents.
- AzureCogsSpeech2TextTool: calls Azure Cognitive Services speech to
text API to transcribe speech to text.
- AzureCogsText2SpeechTool: calls Azure Cognitive Services text to
speech API to synthesize text to speech.

This toolkit can be used to process image, document, and audio inputs.
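
A minimal usage sketch wiring the toolkit into an agent (assumes `AZURE_COGS_KEY`, `AZURE_COGS_ENDPOINT`, and `AZURE_COGS_REGION` are set in the environment; the image URL is a placeholder):

```python
from langchain.agents import AgentType, initialize_agent
from langchain.agents.agent_toolkits import AzureCognitiveServicesToolkit
from langchain.llms import OpenAI

toolkit = AzureCognitiveServicesToolkit()
agent = initialize_agent(
    tools=toolkit.get_tools(),
    llm=OpenAI(temperature=0),
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
)
agent.run("What can you tell me about this image? https://example.com/photo.jpg")
```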
---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-23 06:45:48 -07:00
Jamie Broomall
d4fd589638
WhyLabs callback (#4906)
# Add a WhyLabs callback handler

* Adds a simple WhyLabsCallbackHandler
* Adds the required dependencies as optional
* Protects against missing modules on import
* Adds a basic docs/ecosystem example

based on initial prototype from @andrewelizondo

> this integration gathers privacy-preserving telemetry on text with
whylogs and sends statistical profiles to the WhyLabs platform to monitor
these metrics over time. For more information on what WhyLabs is, see:
https://whylabs.ai

After you run the notebook (if you have env variables set for the API
Keys, org_id and dataset_id) you get something like this in WhyLabs:
![Screenshot
(443)](https://github.com/hwchase17/langchain/assets/88007022/6bdb3e1c-4243-4ae8-b974-23a8bb12edac)
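
A minimal usage sketch, assuming credentials are picked up from `WHYLABS_API_KEY`, `WHYLABS_DEFAULT_ORG_ID`, and `WHYLABS_DEFAULT_DATASET_ID` (the `from_params` constructor and `close()` call follow the example notebook and may differ in the merged version):

```python
from langchain.callbacks import WhyLabsCallbackHandler
from langchain.llms import OpenAI

whylabs = WhyLabsCallbackHandler.from_params()
llm = OpenAI(temperature=0, callbacks=[whylabs])

llm("Tell me a joke about statistics.")
whylabs.close()  # flush the whylogs profile to the WhyLabs platform
```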

Co-authored-by: Andre Elizondo <andre@whylabs.ai>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-22 20:29:47 -07:00
Matt Rickard
de6a401a22
Add OpenLM LLM multi-provider (#4993)
OpenLM is a zero-dependency OpenAI-compatible LLM provider that can call
different inference endpoints directly via HTTP. It implements the
OpenAI Completion class so that it can be used as a drop-in replacement
for the OpenAI API. This changeset utilizes BaseOpenAI for minimal added
code.
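
A minimal usage sketch (assumes `OPENAI_API_KEY` is set, since the OpenAI-style model name below routes to the OpenAI endpoint):

```python
from langchain.llms import OpenLM

llm = OpenLM(model="text-davinci-003")
print(llm("The first man on the moon was "))
```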

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-22 18:09:53 -07:00
Gergely Imreh
69de33e024
Add Mastodon toots loader (#5036)
# Add Mastodon toots loader.

The loader works either with public toots or with Mastodon app credentials. Toot
text and user info are loaded.

I've also added an integration test for this new loader, since it works with
public data, and a notebook with example output from a run.
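
A minimal usage sketch for public toots (the account name and toot count are placeholders; app credentials would be passed via `access_token` / `api_base_url` if needed):

```python
from langchain.document_loaders import MastodonTootsLoader

loader = MastodonTootsLoader(
    mastodon_accounts=["@Gargron@mastodon.social"],  # placeholder account
    number_toots=25,
)
docs = loader.load()
```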

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-22 16:43:07 -07:00
Andreas Liebschner
44dc959584
Improve pinecone hybrid search retriever adding metadata support (#5098)
# Improve pinecone hybrid search retriever adding metadata support

I simply remove the hardwiring of metadata in the existing
implementation, allowing one to pass a `metadatas` attribute to the
constructor and to `get_relevant_documents`. I also add one missing pip
install to the accompanying notebook (I am not adding dependencies, they
were pre-existing).
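
For illustration, a rough sketch of passing metadata through (the index name and keys are placeholders; assumes an existing dotproduct Pinecone index and the `pinecone-text` package):

```python
import pinecone
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import PineconeHybridSearchRetriever
from pinecone_text.sparse import BM25Encoder

pinecone.init(api_key="<pinecone-api-key>", environment="us-east-1-aws")  # placeholders
index = pinecone.Index("hybrid-search-demo")

retriever = PineconeHybridSearchRetriever(
    embeddings=OpenAIEmbeddings(),
    sparse_encoder=BM25Encoder().default(),
    index=index,
)
# New: metadata can be attached at indexing time and comes back on the retrieved documents.
retriever.add_texts(
    ["foo is a fruit", "bar is a tool"],
    metadatas=[{"category": "fruit"}, {"category": "tool"}],
)
docs = retriever.get_relevant_documents("which one is a fruit?")
```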

First contribution, just hoping to help, feel free to critique :) 
my twitter username is `@andreliebschner`

While looking at hybrid search I noticed #3043 and #1743. I think the
former can be closed, since following the example right now (even prior to
my improvements) works just fine; the latter can also be closed
safely, perhaps while pointing out the relevant classes and example. Should I
reply to those issues mentioning someone?

@dev2049, @hwchase17

---------

Co-authored-by: Andreas Liebschner <a.liebschner@shopfully.com>
2023-05-22 11:42:54 -07:00
Harrison Chase
10ba201d05
Harrison/neo4j (#5078)
Co-authored-by: Tomaz Bratanic <bratanic.tomaz@gmail.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-22 07:31:48 -07:00
Leonid Ganeline
443ebe22f4
docs: Deployments page moved into Ecosystem/ (#4949)
# docs: `deployments` page moved into `ecosystem/`

The `Deployments` page moved into the `Ecosystem/` group

Small fixes:
- `index` page: fixed the order of items in the `Modules` and `Use
Cases` lists
- the `References/Installation` item was missing from the `index` page (not from
the Navbar!). Restored it.
- added the `|` marker in several places.

NOTE: I also thought about moving the `Additional Resources/Gallery`
page into the `Ecosystem` group but decided to leave it unchanged.
Please advise on this.

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:
@dev2049
2023-05-21 21:18:22 -07:00
Matt Robinson
bf3f554357
feat: batch multiple files in a single Unstructured API request (#4525)
### Submit Multiple Files to the Unstructured API

Enables batching multiple files into a single Unstructured API request.
Support for requests with multiple files was added to both
`UnstructuredAPIFileLoader` and `UnstructuredAPIFileIOLoader`. Note that
if you submit multiple files in "single" mode, the result will be
concatenated into a single document. We recommend using this feature in
"elements" mode.

### Testing

The following should load both documents, using two of the example docs
from the integration tests folder.

```python
from langchain.document_loaders import UnstructuredAPIFileLoader

file_paths = ["examples/layout-parser-paper.pdf", "examples/whatsapp_chat.txt"]

loader = UnstructuredAPIFileLoader(
    file_paths=file_paths,
    api_key="FAKE_API_KEY",
    strategy="fast",
    mode="elements",
)
docs = loader.load()
```
2023-05-21 20:48:20 -07:00
Harrison Chase
224f73e978 move docs 2023-05-21 09:22:35 -07:00
Harrison Chase
b0431c672b
Harrison/psychic (#5063)
Co-authored-by: Ayan Bandyopadhyay <ayanb9440@gmail.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-21 09:13:20 -07:00
Jeffrey Zheng
424a573266
DOC: Misspelling in agents.rst documentation (#5038)
# Corrected Misspelling in agents.rst Documentation


In the
[documentation](https://python.langchain.com/en/latest/modules/agents.html)
it says "in fact, it is often best to have an Action Agent be in
**change** of the execution for the Plan and Execute agent."

**Suggested Change:** I propose correcting "change" to "charge".

Fix for issue: #5039
2023-05-20 22:24:08 -07:00
Gengliang Wang
f9f08c4b69
Add documentation for Databricks integration (#5013)
# Add documentation for Databricks integration

This is a follow-up of https://github.com/hwchase17/langchain/pull/4702
It documents the details of how to integrate Databricks using langchain.
It also provides examples in a notebook.


## Who can review?
@dev2049 @hwchase17 since you are aware of the context. We will promote
the integration after this doc is ready. Thanks in advance!
2023-05-20 22:06:24 -07:00
tornikeo
a6ef20d7fe
Fix annoying typo in docs (#5029)
# Fixes an annoying typo in docs


Fixes an annoying typo in the docs: "Therefor" -> "Therefore". It's so
annoying to read that I just had to make this PR.
2023-05-20 22:02:21 -07:00
UmerHA
7388248b3e
Streaming only final output of agent (#2483) (#4630)
# Streaming only final output of agent (#2483)
As requested in issue #2483, this callback allows streaming only the
final output of an agent (i.e. not the intermediate steps).

Fixes #2483
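
A minimal usage sketch, assuming the handler is exposed as `FinalStreamingStdOutCallbackHandler` (the module path follows the existing streaming_stdout convention and may differ in the merged version):

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks.streaming_stdout_final_only import FinalStreamingStdOutCallbackHandler
from langchain.llms import OpenAI

llm = OpenAI(streaming=True, callbacks=[FinalStreamingStdOutCallbackHandler()], temperature=0)
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
agent.run("What is 2 raised to the 0.235 power?")
```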

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-20 09:20:17 -07:00
Davis Chase
3bc0bf0079
fix prompt saving (#4987)
will add unit tests
2023-05-20 08:21:52 -07:00
domchan
6c60251f52
Add self query translator for weaviate vectorstore (#4804)
# Add self query translator for weaviate vectorstore

Adds support for the EQ comparator and the AND/OR operators. 
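
For illustration, a rough self-query sketch over a Weaviate store (the class name, text key, and metadata fields are placeholders; assumes an existing local Weaviate instance):

```python
import weaviate
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.vectorstores import Weaviate

client = weaviate.Client("http://localhost:8080")  # placeholder instance
vectorstore = Weaviate(client, "Movie", "summary", attributes=["genre", "year"])

metadata_field_info = [
    AttributeInfo(name="genre", description="The genre of the movie", type="string"),
    AttributeInfo(name="year", description="The year the movie was released", type="integer"),
]
retriever = SelfQueryRetriever.from_llm(
    OpenAI(temperature=0),
    vectorstore,
    "Brief summary of a movie",
    metadata_field_info,
)
# The translator turns EQ comparisons combined with AND/OR into Weaviate `where` filters.
retriever.get_relevant_documents("movies about dinosaurs from 1993")
```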

Co-authored-by: Dominic Chan <dchan@cppib.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-19 16:41:12 -07:00
SimFG
f07b9fde74
Update the GPTCache example (#4985)
# Update the GPTCache example

Fixes #4757
2023-05-19 16:35:36 -07:00
Nicolas
02632d52b3
docs: Big Mendable Improvements (#4964)
- Higher accuracy on the responses
- New redesigned UI
- Pretty Sources: display the sources by title / sub-section instead of
long URLs.
- Fixed Reset Button bugs and some other UI issues
- Other tweaks
2023-05-19 15:31:48 -07:00
Mike McGarry
ddd595fe81
feature/4493 Improve Evernote Document Loader (#4577)
# Improve Evernote Document Loader

When exporting from Evernote you may export more than one note.
Currently the Evernote loader concatenates the content of all notes in
the export into a single document and only attaches the name of the
export file as metadata on the document.

This change ensures that each note is loaded as an independent document
and that all available metadata on the note (e.g. author, title, created,
updated) is added as metadata on each document.

It also uses the existing optional dependency `html2text` instead of
`pypandoc`, removing the need to download the pandoc application via
`download_pandoc()` in order to use the `pypandoc` Python bindings.

Fixes #4493 
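
A minimal usage sketch (the export path is a placeholder, and `load_single_document=False` is assumed to be the flag that yields one document per note):

```python
from langchain.document_loaders import EverNoteLoader

loader = EverNoteLoader("my_notebook_export.enex", load_single_document=False)  # placeholder path
docs = loader.load()

for doc in docs:
    print(doc.metadata.get("title"), doc.metadata.get("created"))
```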

Co-authored-by: Mike McGarry <mike.mcgarry@finbourne.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-05-19 14:28:17 -07:00