# Description
This change allows you to customize the prompt used in
`create_extraction_chain` as well as `create_extraction_chain_pydantic`.
It also adds a `verbose` argument to `create_extraction_chain_pydantic`,
which `create_extraction_chain` already had.
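A minimal usage sketch (assuming the custom prompt is passed as a `prompt`
keyword argument):
```python
from langchain.chains import create_extraction_chain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

schema = {"properties": {"name": {"type": "string"}, "height": {"type": "integer"}}}
llm = ChatOpenAI(temperature=0)

# Assumed keyword: pass a custom extraction prompt instead of the built-in one.
custom_prompt = ChatPromptTemplate.from_template(
    "Extract only the people mentioned in the following passage:\n\n{input}"
)
chain = create_extraction_chain(schema, llm, prompt=custom_prompt, verbose=True)
```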
# Issue
N/A
# Dependencies
N/A
# Twitter
https://twitter.com/CamAHutchison
Hi,
- Description:
- Solves the issue #6478.
- Includes some additional rework on the `JSONLoader` class (a usage sketch
follows below):
  - Getting metadata is decoupled from `_get_text`
  - Validating `metadata_func` is now performed by `_validate_metadata_func`
    instead of `_validate_content_key`
- Issue: #6478
- Dependencies: NA
- Tag maintainer: @hwchase17
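A hedged usage sketch of `metadata_func` with `JSONLoader` (keyword names taken
from the existing loader API; the exact behaviour after this rework is an
assumption):
```python
from langchain.document_loaders import JSONLoader


def metadata_func(record: dict, metadata: dict) -> dict:
    # Copy selected fields of each JSON record into the document metadata.
    metadata["sender_name"] = record.get("sender_name")
    metadata["timestamp_ms"] = record.get("timestamp_ms")
    return metadata


loader = JSONLoader(
    file_path="chat.json",
    jq_schema=".messages[]",
    content_key="content",
    metadata_func=metadata_func,
)
docs = loader.load()
```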
Description: Adds tags and dataview fields to ObsidianLoader doc
metadata.
- Issue: #9800, #4991
- Dependencies: none
- Tag maintainer: My best guess is @hwchase17 looking through the git
logs
- Twitter handle: I don't use twitter, sorry!
### Description
There is a really nice class for saving chat messages into a database -
SQLChatMessageHistory.
It leverages SQLAlchemy to be compatible with any supported database (in
contrast with PostgresChatMessageHistory, which is basically the same
but is limited to Postgres).
However, the class is not really customizable in terms of what you can
store. I can imagine many use cases in which one needs to save a message
date along with some additional metadata.
To solve this, I propose extracting the logic for converting a
BaseMessage to a SQLAlchemy model (and vice versa) into a separate
message converter class. Instead of rewriting the whole
SQLChatMessageHistory class, a user then only needs to write a custom
model and a simple mapping class, and pass an instance of it as a
parameter.
I also noticed that there is no documentation on this class, so I added
that too, with an example of a custom message converter (a minimal
sketch follows below).
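A minimal sketch of what that could look like (the `BaseMessageConverter`
interface and the `custom_message_converter` parameter name are assumptions
based on this description, not confirmed API):
```python
from datetime import datetime

from langchain.memory.chat_message_histories.sql import (
    BaseMessageConverter,  # assumed name of the converter interface
    SQLChatMessageHistory,
)
from langchain.schema import AIMessage, BaseMessage, HumanMessage
from sqlalchemy import Column, DateTime, Integer, Text
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class CustomMessage(Base):
    """Custom model that also stores the message creation date."""

    __tablename__ = "custom_message_store"

    id = Column(Integer, primary_key=True)
    session_id = Column(Text)
    type = Column(Text)
    content = Column(Text)
    created_at = Column(DateTime)


class CustomMessageConverter(BaseMessageConverter):
    """Maps between BaseMessage and the custom SQLAlchemy model."""

    def from_sql_model(self, sql_message) -> BaseMessage:
        cls = HumanMessage if sql_message.type == "human" else AIMessage
        return cls(content=sql_message.content)

    def to_sql_model(self, message: BaseMessage, session_id: str):
        return CustomMessage(
            session_id=session_id,
            type=message.type,
            content=message.content,
            created_at=datetime.now(),
        )

    def get_sql_model_class(self):
        return CustomMessage


history = SQLChatMessageHistory(
    session_id="test",
    connection_string="sqlite:///chat_history.db",
    custom_message_converter=CustomMessageConverter(),  # assumed parameter name
)
history.add_user_message("Hello!")
```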
### Issue
N/A
### Dependencies
N/A
### Tag maintainer
Not yet
### Twitter handle
N/A
Description: new chain for logical fallacy removal from model output, plus docs
Issue: n/a, see above
Dependencies: none
Tag maintainer: @hinthornw has reviewed from my end in the past, but I'm not
sure who maintains chains
Twitter handle: no Twitter; feel free to call out my Git user j-space-b for a
shout-out
Note: created documentation in docs/extras
---------
Co-authored-by: Jon Bennion <jb@Jons-MacBook-Pro.local>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Issue: closes #9855
* consolidates `from_texts` and `add_texts` functions for pinecone
upsert
* adds two types of batching (one for embeddings and one for index
upsert)
* adds thread pool size when instantiating pinecone index
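A hedged sketch of how the batching knobs might be used (the parameter names
`batch_size`, `embeddings_chunk_size` and `pool_threads` are assumptions based
on this description):
```python
import pinecone
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

pinecone.init(api_key="...", environment="...")
texts = ["first document", "second document"]

# Assumed parameter names; adjust to the actual signature.
vectorstore = Pinecone.from_texts(
    texts=texts,
    embedding=OpenAIEmbeddings(),
    index_name="my-index",
    batch_size=32,               # batch size for index upserts
    embeddings_chunk_size=1000,  # batch size for embedding calls
    pool_threads=4,              # thread pool size for the pinecone index
)
```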
## Description
When the `MultiQueryRetriever` is used to get the list of documents
relevant to a query inside a vector store, and at least one of these
contains metadata with nested dictionaries, a `TypeError: unhashable
type: 'dict'` exception is thrown.
This is caused by the `unique_union` function which, to guarantee the
uniqueness of the returned documents, tries, unsuccessfully, to hash the
nested dictionaries and use them as part of the key.
```python
unique_documents_dict = {
    (doc.page_content, tuple(sorted(doc.metadata.items()))): doc
    for doc in documents
}
```
## Issue
#9872 (MultiQueryRetriever (get_relevant_documents) raises TypeError:
unhashable type: 'dict' with dict metadata)
## Solution
A possible solution is to dump the metadata dict to a string and use it
as part of the hashed key.
```python
unique_documents_dict = {
    (doc.page_content, json.dumps(doc.metadata, sort_keys=True)): doc
    for doc in documents
}
```
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Hi, this PR enables configuring the html2text package instead of being
bound to the hardcoded values. While simply passing `ignore_links`
and `ignore_images` to the `transform_documents` method was possible, I
preferred passing them to the `__init__` method for two reasons:
1. It is more efficient in the case of subsequent calls to
`transform_documents`.
2. It allows moving the "complexity" to instantiation, keeping the
actual execution simple and general enough. IMO the transformers should
all follow this pattern, allowing something like this:
```python
# Instantiate transformers
transformers = [
    TransformerA(foo='bar'),
    TransformerB(bar='foo'),
    # others
]

# During execution, call them sequentially
documents = ...
for tr in transformers:
    documents = tr.transform_documents(documents)
```
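For the change itself, a hedged usage sketch (assuming the transformer class is
`Html2TextTransformer` and the new options are plain `__init__` keyword
arguments):
```python
from langchain.document_transformers import Html2TextTransformer
from langchain.schema import Document

docs = [Document(page_content="<a href='https://example.com'>Example</a> <img src='logo.png'>")]

# Configure html2text once at instantiation (assumed keyword names).
html2text = Html2TextTransformer(ignore_links=False, ignore_images=True)
docs_transformed = html2text.transform_documents(docs)
```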
Thanks for the reviews!
---------
Co-authored-by: taamedag <Davide.Menini@swisscom.com>
If `last_accessed_at` metadata is a float, use it as a timestamp. This
allows supporting vector stores that do not store datetime objects, such
as ChromaDB.
Fixes: https://github.com/langchain-ai/langchain/issues/3685
- Description: Adds two optional parameters to the
DynamoDBChatMessageHistory class so users can pass in a name for their
primary key, or a Key object itself, to enable the use of composite
keys, a common DynamoDB paradigm (see the sketch after this list).
[AWS DynamoDB Key
docs](https://aws.amazon.com/blogs/database/choosing-the-right-dynamodb-partition-key/)
- Issue: N/A
- Dependencies: N/A
- Twitter handle: N/A
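A hedged sketch of the new options (the `primary_key_name` and `key` keyword
names are assumptions based on this description):
```python
from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory

# Assumed keyword names for the new options.
history = DynamoDBChatMessageHistory(
    table_name="SessionTable",
    session_id="user-123",
    primary_key_name="SessionId",  # custom partition key name
    # key={"SessionId": "user-123", "UserId": "42"},  # or pass a composite key dict
)
history.add_user_message("Hello!")
```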
---------
Co-authored-by: Josh White <josh@ctrlstack.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Add the SQLDatabaseSequentialChain class to `__init__.py` so it can be
accessed and used.
- Description: SQLDatabaseSequentialChain is not found when importing
the langchain_experimental package; in `__init__.py` of
`langchain_experimental.sql`, SQLDatabaseSequentialChain is now
imported and added to the `__all__` list.
- Issue: SQLDatabaseSequentialChain is not found in the
langchain_experimental package.
- Dependencies: None
- Tag maintainer: None
- Twitter handle: None
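A quick check after this change (assuming the class is re-exported from the
package's `sql` module):
```python
# Should now resolve without an ImportError.
from langchain_experimental.sql import SQLDatabaseSequentialChain
```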
The output at times lacks the closing markdown code block. The prompt is
changed to explicitly request the closing backticks.
## Description
This PR introduces a minor change to the TitanTakeoff integration.
Instead of specifying a port on localhost, users can now specify a base
URL. This allows the integration to be used when Titan Takeoff is
deployed externally (not on localhost), and removes the hardcoded
reference to "http://localhost:{port}".
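A hedged usage sketch (assuming the new option is exposed as a `base_url`
keyword on the `TitanTakeoff` LLM):
```python
from langchain.llms import TitanTakeoff

# Point the integration at an externally deployed Takeoff server (assumed kwarg).
llm = TitanTakeoff(base_url="http://my-takeoff-host:8000")
print(llm("What is the capital of France?"))
```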
### Info about Titan Takeoff
Titan Takeoff is an inference server created by
[TitanML](https://www.titanml.co/) that allows you to deploy large
language models locally on your hardware in a single command. Most
generative model architectures are included, such as Falcon, Llama 2,
GPT2, T5 and many more.
Read more about Titan Takeoff here:
-
[Blog](https://medium.com/@TitanML/introducing-titan-takeoff-6c30e55a8e1e)
- [Docs](https://docs.titanml.co/docs/titan-takeoff/getting-started)
### Dependencies
No new dependencies are introduced. However, users will need to install
the titan-iris package in their local environment and start the Titan
Takeoff inference server in order to use the Titan Takeoff
integration.
Thanks for your help and please let me know if you have any questions.
cc: @hwchase17 @baskaryan
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Hi,
this PR contains a loader / parser for Azure Document Intelligence, which
is an ML-based service to ingest arbitrary PDFs / images, even if
scanned. The loader generates one Document per page of the original
document. This is my first contribution to LangChain.
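A heavily hedged usage sketch (the `DocumentIntelligenceLoader` name and its
signature are assumptions based on this description, not confirmed API; the
Azure client classes come from the azure-ai-formrecognizer package):
```python
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential
from langchain.document_loaders import DocumentIntelligenceLoader  # assumed name

client = DocumentAnalysisClient(
    endpoint="https://<resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<api-key>"),
)
# One Document is produced per page of the source file.
loader = DocumentIntelligenceLoader("invoice.pdf", client=client, model="prebuilt-document")
documents = loader.load()
```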
Unfortunately I could not find the correct place for test cases. Happy
to add one if you can point me to the location, but as this is a
cloud-based service, a test would require network access and
credentials, so it might be of limited help.
Dependencies: The needed dependency was already part of pyproject.toml,
no change.
Twitter: feel free to mention @LarsAC on the announcement
This small PR aims at supporting the following missing parameters in the
`HuggingfaceTextGen` LLM:
- `return_full_text` - sometimes useful for completion tasks
- `do_sample` - quite handy to control the randomness of the model.
- `watermark`
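A minimal sketch of passing these options (assuming they are constructor
keyword arguments on the `HuggingFaceTextGenInference` LLM):
```python
from langchain.llms import HuggingFaceTextGenInference

llm = HuggingFaceTextGenInference(
    inference_server_url="http://localhost:8080",
    do_sample=True,          # sample instead of greedy decoding
    return_full_text=False,  # return only the completion, not the prompt
    watermark=True,          # enable TGI watermarking
)
print(llm("Write a haiku about autumn."))
```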
@hwchase17 @baskaryan
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
This PR follows the **Eden AI (LLM + embeddings) integration**. #8633
We added an optional parameter to choose different AI models for
providers (like 'text-bison' for provider 'google', 'text-davinci-003'
for provider 'openai', etc.).
Usage:
```python
llm = EdenAI(
    feature="text",
    provider="google",
    params={
        "model": "text-bison",  # new
        "temperature": 0.2,
        "max_tokens": 250,
    },
)
```
You can also change the provider + model after initialization
```python
llm = EdenAI(
    feature="text",
    provider="google",
    params={
        "temperature": 0.2,
        "max_tokens": 250,
    },
)

prompt = """
hi
"""

llm(prompt, providers='openai', model='text-davinci-003')  # change provider & model
```
The Jupyter notebook has been updated with an example as well.
Ping: @hwchase17, @baskaryan
---------
Co-authored-by: RedhaWassim <rwasssim@gmail.com>
Co-authored-by: sam <melaine.samy@gmail.com>
Adapting Microsoft Presidio to other languages requires a bit more work,
so for now it is a good idea to remove the language option, so as not to
cause errors and confusion.
https://microsoft.github.io/presidio/analyzer/languages/
I will handle different languages after the weekend 😄
This adds sqlite-vss as an option for a vector database. It contains the
code and a few tests. Tests are passing, and the library sqlite-vss is
added as optional, as explained in the contributing guidelines. I
adjusted the code for lint/black/mypy, and it looks like everything is
currently passing.
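A hedged usage sketch (assuming the vector store is exposed as `SQLiteVSS`
with a `from_texts` constructor):
```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import SQLiteVSS

# Assumed class and parameter names.
db = SQLiteVSS.from_texts(
    texts=["Ketanji Brown Jackson was nominated to the Supreme Court."],
    embedding=HuggingFaceEmbeddings(),
    table="state_union",
    db_file="/tmp/vss.db",
)
print(db.similarity_search("Who was nominated to the Supreme Court?", k=1))
```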
Adding sqlite-vss was mentioned in this issue:
https://github.com/langchain-ai/langchain/issues/1019.
Also mentioned here in the sqlite-vss repo for the curious:
https://github.com/asg017/sqlite-vss/issues/66
Maintainer tag: @baskaryan
---------
Co-authored-by: Philippe Oger <philippe.oger@adevinta.com>
This PR fixes an issue I found when upgrading to a more recent version
of LangChain. I was using 0.0.142 before, and this issue popped up
already when the `_custom_parser` was added to `output_parsers/json`.
Anyway, the issue is that the parser tries to escape quotes when they
are already double-escaped (e.g. `\\"`), leading to an
OutputParserException.
This is particularly undesirable in my app, because I have an Agent that
uses a single-input Tool, which expects as input a JSON string with the
structure:
```python
{
"foo": string,
"bar": string
}
```
The LLM (GPT3.5) response is (almost) always something like
`"action_input": "{\\"foo\\": \\"bar\\", \\"bar\\": \\"foo\\"}"` and
since the upgrade this is not correctly parsed.
---------
Co-authored-by: taamedag <Davide.Menini@swisscom.com>
Adds a call to Pydantic's `update_forward_refs` for the `Run` class (in
addition to the `ChainRun` and `ToolRun` classes, for which that method
is already called). Without it, the self-referencing `child_runs` field
(type `List[Run]`) is problematic. For example:
```python
from langchain.callbacks import StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from wandb.integration.langchain import WandbTracer
llm = OpenAI()
prompt = PromptTemplate.from_template("1 + {number} = ")
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[StdOutCallbackHandler(), WandbTracer()])
print(chain.run(number=2))
```
results in the following output before the change
```
WARNING:root:Error in on_chain_start callback: field "child_runs" not yet prepared so type is still a ForwardRef, you might need to call Run.update_forward_refs().
> Entering new LLMChain chain...
Prompt after formatting:
1 + 2 =
WARNING:root:Error in on_chain_end callback: No chain Run found to be traced
> Finished chain.
3
```
but afterwards the callback error messages are gone.
Hi there!
I'm excited to open this PR to add support for using 'Tencent Cloud
VectorDB' as a vector store.
Tencent Cloud VectorDB is a fully-managed, self-developed,
enterprise-level distributed database service designed for storing,
retrieving, and analyzing multi-dimensional vector data. The database
supports multiple index types and similarity calculation methods, with a
single index supporting vector scales up to 1 billion and capable of
handling millions of QPS with millisecond-level query latency. Tencent
Cloud VectorDB not only provides external knowledge bases for large
models to improve their accuracy, but also has wide applications in AI
fields such as recommendation systems, NLP services, computer vision,
and intelligent customer service.
The PR includes an implementation of the vector store.
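A heavily hedged usage sketch (the `TencentVectorDB` class, `ConnectionParams`
and their import paths are assumptions based on this description):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import TencentVectorDB  # assumed name
from langchain.vectorstores.tencentvectordb import ConnectionParams  # assumed path

conn = ConnectionParams(
    url="http://10.0.0.1",
    username="root",
    key="your-api-key",
)
db = TencentVectorDB.from_texts(
    texts=["Tencent Cloud VectorDB supports billion-scale vectors."],
    embedding=OpenAIEmbeddings(),
    connection_params=conn,
)
print(db.similarity_search("What scale does the database support?", k=1))
```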
I have read your [contributing
guidelines](72b7d76d79/.github/CONTRIBUTING.md).
And I have passed the tests below:
- `make format`
- `make lint`
- `make coverage`
- `make test`
This PR brings structural updates to `PlaywrightURLLoader`, aiming at
making the code more readable and extensible through the abstraction of
page evaluation logic. These changes also align this implementation with
a similar structure used in LangChain.js.
The key enhancements include:
1. Introduction of 'PlaywrightEvaluator', an abstract base class for all
evaluators.
2. Creation of 'UnstructuredHtmlEvaluator', a concrete class
implementing 'PlaywrightEvaluator', which uses `unstructured` library
for processing page's HTML content.
3. Extension of 'PlaywrightURLLoader' constructor to optionally accept
an evaluator of the type 'PlaywrightEvaluator'. It defaults to
'UnstructuredHtmlEvaluator' if no evaluator is provided.
4. Refactoring of 'load' and 'aload' methods to use the 'evaluate' and
'evaluate_async' methods of the provided 'PlaywrightEvaluator' for page
content handling.
This update brings flexibility to 'PlaywrightURLLoader' as it can now
utilize different evaluators for page processing depending on the
requirement. The abstraction also improves code maintainability and
readability.
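A hedged sketch of plugging in a custom evaluator (the `PlaywrightEvaluator`
interface, its import path and the `evaluator` keyword are assumptions based on
this description):
```python
from langchain.document_loaders import PlaywrightURLLoader
from langchain.document_loaders.url_playwright import PlaywrightEvaluator  # assumed path


class TitleEvaluator(PlaywrightEvaluator):
    """Custom evaluator that extracts only the page title."""

    def evaluate(self, page, browser, response):
        return page.title()

    async def evaluate_async(self, page, browser, response):
        return await page.title()


loader = PlaywrightURLLoader(
    urls=["https://python.langchain.com"],
    evaluator=TitleEvaluator(),  # defaults to UnstructuredHtmlEvaluator if omitted
)
docs = loader.load()
```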
Twitter: @ywkim
- Description: Add bloomz_7b, llama-2-7b, llama-2-13b, llama-2-70b to
ErnieBotChat, which previously supported only ERNIE-Bot-turbo and
ERNIE-Bot (see the sketch after this list).
- Issue: #10022
- Dependencies: no extra dependencies
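A hedged usage sketch (assuming the model is selected via a `model_name`
keyword on `ErnieBotChat`):
```python
from langchain.chat_models import ErnieBotChat
from langchain.schema import HumanMessage

# Assumed keyword for model selection.
chat = ErnieBotChat(model_name="Llama-2-7b")
print(chat([HumanMessage(content="Hello!")]))
```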
---------
Co-authored-by: hetianfeng <hetianfeng@meituan.com>
### Description
The feature for anonymizing data has been implemented. In order to
protect private data, such as when querying external APIs (OpenAI), it
is worth pseudonymizing sensitive data to maintain full privacy.
Anonymization consists of two steps:
1. **Identification:** Identify all data fields that contain personally
identifiable information (PII).
2. **Replacement**: Replace all PIIs with pseudo values or codes that do
not reveal any personal information about the individual but can be used
for reference. We're not using regular encryption, because the language
model won't be able to understand the meaning or context of the
encrypted data.
We use *Microsoft Presidio* together with the *Faker* framework for
anonymization purposes because of the wide range of functionality they
provide. The full implementation is available in `PresidioAnonymizer`.
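A minimal sketch of the expected usage (assuming `PresidioAnonymizer` lives in
`langchain_experimental.data_anonymizer` and exposes an `anonymize` method):
```python
from langchain_experimental.data_anonymizer import PresidioAnonymizer

anonymizer = PresidioAnonymizer()
# PII such as the name and phone number is replaced with fake values.
print(anonymizer.anonymize("My name is John Doe, call me at 313-666-7440."))
```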
### Future works
- **deanonymization** - add the ability to reverse anonymization. For
example, the workflow could look like this: `anonymize -> LLMChain ->
deanonymize`. By doing this, we will retain anonymity in requests to,
for example, OpenAI, and then be able to restore the original data.
- **instance anonymization** - at this point, each occurrence of PII is
treated as a separate entity and separately anonymized. Therefore, two
occurrences of the name John Doe in the text will be changed to two
different names. It is therefore worth introducing support for full
instance detection, so that repeated occurrences are treated as a single
object.
### Twitter handle
@deepsense_ai / @MaksOpp
---------
Co-authored-by: MaksOpp <maks.operlejn@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: this PR adds `s3_object_key` and `s3_bucket` to the doc
metadata when loading an S3 file. This is particularly useful when using
`S3DirectoryLoader` to remove the files from the directory once they
have been processed (getting the object keys from the metadata `source`
field seems brittle); see the sketch after this list.
- Dependencies: N/A
- Tag maintainer: ?
- Twitter handle: _cbornet
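A hedged sketch of that cleanup use case (assuming the metadata keys are
exactly `s3_bucket` and `s3_object_key`):
```python
import boto3
from langchain.document_loaders import S3DirectoryLoader

s3 = boto3.client("s3")
docs = S3DirectoryLoader("my-bucket", prefix="inbox/").load()

for doc in docs:
    # ... process the document, then delete the source object
    # using the new metadata fields.
    s3.delete_object(
        Bucket=doc.metadata["s3_bucket"],
        Key=doc.metadata["s3_object_key"],
    )
```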
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
This PR makes the following changes:
1. Documents become serializable using langchain serialization.
2. Adds a utility to create a docstore kv store.
Will help to address the issue here:
https://github.com/langchain-ai/langchain/issues/9345
- Description: In the function `_load_run_evaluators`, the function
`_get_keys` was not called if only the `custom_evaluators` parameter is used,
- Issue: no issue created for this yet,
- Dependencies: None,
- Tag maintainer: @vowelparrot,
- Twitter handle: Buckler89
---------
Co-authored-by: ddroghini <d.droghini@mflgroup.com>
Description: This commit uses the new Service object in Selenium
WebDriver, as executable_path has been [deprecated and removed in
Selenium version
4.11.2](9f5801c82f).
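For context, a hedged sketch of the Selenium 4 pattern this refers to (generic
Selenium usage, not the exact code in this PR):
```python
from selenium import webdriver
from selenium.webdriver.chrome.service import Service

# executable_path now goes through a Service object instead of a driver kwarg.
service = Service(executable_path="/path/to/chromedriver")
driver = webdriver.Chrome(service=service)
driver.get("https://example.com")
```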
Issue: https://github.com/langchain-ai/langchain/issues/9808
Tag Maintainer: @eyurtsev