This pull request adds a ChatGPT document loader to the document loaders
module in `langchain/document_loaders/chatgpt.py`. Additionally, it
includes an example Jupyter notebook in
`docs/modules/indexes/document_loaders/examples/chatgpt_loader.ipynb`
which uses fake sample data based on the original structure of the
`conversations.json` file.
The following files were added/modified:
- `langchain/document_loaders/__init__.py`
- `langchain/document_loaders/chatgpt.py`
- `docs/modules/indexes/document_loaders/examples/chatgpt_loader.ipynb`
- `docs/modules/indexes/document_loaders/examples/example_data/fake_conversations.json`
This pull request was made in response to the recent release of ChatGPT
data exports by email:
https://help.openai.com/en/articles/7260999-how-do-i-export-my-chatgpt-history
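For reference, here is a minimal usage sketch of the new loader against the fake sample data; the `num_logs` option is an assumption about how the loader limits how many conversations are read:

```python
from langchain.document_loaders.chatgpt import ChatGPTLoader

# Point log_file at the conversations.json from a ChatGPT data export;
# here we use the fake sample data that ships with the example notebook.
loader = ChatGPTLoader(
    log_file="./example_data/fake_conversations.json",
    num_logs=1,  # assumed option: load only the first conversation
)
docs = loader.load()
```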
Hi there!
I'm excited to open this PR to add support for 'AnalyticDB', a fully Postgres-syntax-compatible database, as a vector store.
AnalyticDB has already been shown to work with AutoGPT, ChatGPT-Retrieval-Plugin, and LlamaIndex, so I think it will be a good fit here as well.
AnalyticDB is a distributed, Alibaba Cloud-native vector database. It performs particularly well when data grows to a large scale. The PR includes:
- [x] A new vector store: AnalyticDBVector
- [x] A suite of integration tests verifying the AnalyticDB integration
I have read your [contributing
guidelines](72b7d76d79/.github/CONTRIBUTING.md), and the following checks pass:
- [x] make format
- [x] make lint
- [x] make coverage
- [x] make test
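For context, here is a minimal usage sketch of the new vector store. The connection parameters are placeholders, and the `connection_string_from_db_params` helper and driver name are assumptions patterned on the existing PGVector integration:

```python
from langchain.docstore.document import Document
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import AnalyticDB

# Build a Postgres-style connection string for the AnalyticDB instance.
# All parameter values below are placeholders.
connection_string = AnalyticDB.connection_string_from_db_params(
    driver="psycopg2cffi",  # assumed driver; adjust for your environment
    host="<instance-host>",
    port=5432,
    database="<database>",
    user="<user>",
    password="<password>",
)

docs = [Document(page_content="AnalyticDB is a distributed vector database.")]
db = AnalyticDB.from_documents(
    docs,
    OpenAIEmbeddings(),
    connection_string=connection_string,
)
print(db.similarity_search("What is AnalyticDB?", k=1))
```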
First cut of a Supabase vector store, loosely patterned on the langchainjs
equivalent. It doesn't support async operations, which is a limitation of
the Supabase Python client.
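A minimal synchronous usage sketch, assuming a `documents` table and its matching SQL function have already been provisioned in the Supabase project (the URL, key, and table name are placeholders):

```python
from langchain.docstore.document import Document
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import SupabaseVectorStore
from supabase.client import create_client

# Credentials are placeholders for your Supabase project; the table and
# its match function must be set up ahead of time.
supabase = create_client("https://<project>.supabase.co", "<service-role-key>")

docs = [Document(page_content="Supabase is a hosted Postgres platform.")]
vector_store = SupabaseVectorStore.from_documents(
    docs,
    OpenAIEmbeddings(),
    client=supabase,
    table_name="documents",  # assumed table name
)
print(vector_store.similarity_search("What is Supabase?", k=1))
```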
---------
Co-authored-by: Daniel Chalef <daniel.chalef@private.org>
I noticed a typo in the `custom_mrkl_agents.ipynb` document
while trying the example from the documentation page. As a result, I
have opened a pull request (PR) to address this minor issue, even though
it may seem insignificant 😂.
The following calls were throwing an exception:
- 575b717d10/docs/use_cases/evaluation/agent_vectordb_sota_pg.ipynb (L192)
- 575b717d10/docs/use_cases/evaluation/agent_vectordb_sota_pg.ipynb (L239)
Exception:
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[14], line 1
----> 1 chain_sota = RetrievalQA.from_chain_type(llm=OpenAI(temperature=0), chain_type="stuff", retriever=vectorstore_sota, input_key="question")
File ~/github/langchain/venv/lib/python3.9/site-packages/langchain/chains/retrieval_qa/base.py:89, in BaseRetrievalQA.from_chain_type(cls, llm, chain_type, chain_type_kwargs, **kwargs)
85 _chain_type_kwargs = chain_type_kwargs or {}
86 combine_documents_chain = load_qa_chain(
87 llm, chain_type=chain_type, **_chain_type_kwargs
88 )
---> 89 return cls(combine_documents_chain=combine_documents_chain, **kwargs)
File ~/github/langchain/venv/lib/python3.9/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for RetrievalQA
retriever
instance of BaseRetriever expected (type=type_error.arbitrary_type; expected_arbitrary_type=BaseRetriever)
```
The vectorstores had to be converted to retrievers:
`vectorstore_sota.as_retriever()` and `vectorstore_pg.as_retriever()`.
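For reference, the corrected call (with `vectorstore_sota` built earlier in the notebook) looks like this:

```python
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# vectorstore_sota is the vector store created earlier in the notebook;
# it must be wrapped as a retriever before being passed to RetrievalQA.
chain_sota = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=vectorstore_sota.as_retriever(),
    input_key="question",
)
```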
The PR also:
- adds the file `paul_graham_essay.txt` referenced by this notebook
- adds `*.pkl` and `*.bin` files generated by this notebook to `.gitignore`
Interestingly enough, the performance of the prediction greatly
increased (a new version of langchain or a new version of the OpenAI
models since the last run of the notebook): from 19/33 correct to 28/33 correct!
- Remove dynamic model creation in the `args()` property. _Only infer
for the decorator (and add an argument to NOT infer if someone wishes to
only pass as a string)_
- Update the validation example to make it less likely to be
misinterpreted as a "safe" way to run a REPL
There is one example of "Multi-argument tools" in `custom_tools.ipynb`
from yesterday, but we could add more (a minimal sketch follows below).
The output parsing for the base MRKL agent hasn't been adapted to handle
structured args at this point in time.
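A minimal sketch of the decorator behavior described above, assuming `infer_schema` is the argument that disables inference:

```python
from langchain.agents import tool

# The schema is inferred from the function signature by default,
# which is what enables multi-argument tools.
@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

# Pass infer_schema=False to keep the plain single-string interface.
@tool(infer_schema=False)
def echo(text: str) -> str:
    """Echo the input back unchanged."""
    return text
```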
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
## Use `index_id` over `app_id`
We made a major update to index + retrieve based on Metal Indexes
(instead of apps). With this change, we accept an index instead of an
app in each of our respective core APIs. [More details
here](https://docs.getmetal.io/api-reference/core/indexing).
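For reference, a minimal sketch of constructing the client against an index and using it through the langchain retriever; the credential values are placeholders, and the constructor argument order is an assumption based on this change:

```python
from langchain.retrievers import MetalRetriever
from metal_sdk.metal import Metal

# The client is now constructed with an index_id rather than an app_id.
# All three values below are placeholders.
metal = Metal("API_KEY", "CLIENT_ID", "INDEX_ID")

retriever = MetalRetriever(client=metal, params={"limit": 2})
docs = retriever.get_relevant_documents("What is Metal?")
```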
## What is this PR for:
* This PR adds a commented line of code in the documentation that shows
how someone can use the Pinecone client with an already existing
Pinecone index (see the sketch below)
* The documentation currently only shows how to create a Pinecone index
from langchain documents, but not how to load one that already exists
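For illustration, a minimal sketch of the pattern the new comment points at, loading an existing index instead of creating one (the index name, API key, and environment are placeholders):

```python
import pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

# Credentials are placeholders for your Pinecone project.
pinecone.init(api_key="<api-key>", environment="<environment>")

# Load an index that already exists instead of creating it from documents.
docsearch = Pinecone.from_existing_index("langchain-demo", OpenAIEmbeddings())
print(docsearch.similarity_search("What did the author work on?", k=2))
```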