Instead of accessing `langchain.debug`, `langchain.verbose`, or
`langchain.llm_cache`, please use the new getter/setter functions in
`langchain.globals`:
- `langchain.globals.set_debug()` and `langchain.globals.get_debug()`
- `langchain.globals.set_verbose()` and
`langchain.globals.get_verbose()`
- `langchain.globals.set_llm_cache()` and
`langchain.globals.get_llm_cache()`
Using the old globals directly will now raise a warning.
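For reference, a minimal sketch of the new API (assuming `InMemoryCache`
from `langchain.cache` as the cache backend; any `BaseCache`
implementation works):

```python
from langchain.globals import (
    set_debug, get_debug,
    set_verbose, get_verbose,
    set_llm_cache, get_llm_cache,
)
from langchain.cache import InMemoryCache

# Toggle the global debug/verbose flags through the setters
# instead of assigning langchain.debug / langchain.verbose.
set_debug(True)
set_verbose(False)
print(get_debug(), get_verbose())  # True False

# Install an LLM cache globally instead of assigning langchain.llm_cache.
set_llm_cache(InMemoryCache())
assert get_llm_cache() is not None
```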
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
**Description**
This addresses #10423: it would be a useful feature to extract images
from PDFs and recognize the text in them. I have implemented it for
`PyPDFLoader`, `PyPDFium2Loader`, `PyPDFDirectoryLoader`,
`PyMuPDFLoader`, `PDFMinerLoader`, and `PDFPlumberLoader`.
[RapidOCR](https://github.com/RapidAI/RapidOCR.git) is used to recognize
text in the extracted images. OCR is time-consuming, so a boolean
parameter `extract_images` controls whether images are extracted and
recognized (see the sketch after the table). I measured the time each
parser takes in the unit tests on my laptop (ThinkBook 14+, AMD
R7-6800H):
| extract_images | PyPDFParser | PDFMinerParser | PyMuPDFParser | PyPDFium2Parser | PDFPlumberParser |
| -------------- | ----------- | -------------- | ------------- | --------------- | ---------------- |
| False | 0.27s | 0.39s | 0.06s | 0.08s | 1.01s |
| True | 17.01s | 20.67s | 20.32s | 19.75s | 20.55s |
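A minimal usage sketch of the new flag (the file path is a placeholder;
`rapidocr-onnxruntime` must be installed for `extract_images=True`):

```python
from langchain.document_loaders import PyPDFLoader

# OCR on embedded images is skipped by default (extract_images=False).
loader = PyPDFLoader("example.pdf", extract_images=True)
docs = loader.load()
print(docs[0].page_content)  # includes text recognized from images on page 1
```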
**Issue**
#10423
**Dependencies**
rapidocr_onnxruntime in
[RapidOCR](https://github.com/RapidAI/RapidOCR/tree/main)
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- **Description:** Documentation only, related to the C# code splitter
- **Issue:** It's related to a request made by @baskaryan in a comment
on my previous PR #10350
- **Dependencies:** None
- **Twitter handle:** @ather19
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
**Description:**
Examples in the "Select by similarity" section did not really
highlight the capabilities of similarity search.
E.g. "# Input is a measurement, so should select the tall/short example"
was still outputting the "mood" example.
I tweaked the inputs a bit and fixed the examples, checking that the
documented outputs are indeed what the search returns.
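For context, a sketch of the pattern that section demonstrates (the
example data and inputs here are illustrative, not the exact ones from
the docs; OpenAI embeddings and Chroma are stand-ins for whatever the
page uses):

```python
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
]

selector = SemanticSimilarityExampleSelector.from_examples(
    examples, OpenAIEmbeddings(), Chroma, k=1
)

# An input about size should surface the tall/short example, not the mood one.
print(selector.select_examples({"input": "large"}))
```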
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
- **Description:** Fix typo about `RetrievalQAWithSourceChain` ->
`RetrievalQAWithSourcesChain`
- **Description:** Adds Kotlin language to `TextSplitter`
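A brief usage sketch (assuming the `Language` enum gains a `KOTLIN`
member, consistent with the other supported languages):

```python
from langchain.text_splitter import Language, RecursiveCharacterTextSplitter

kotlin_code = """
fun main() {
    println("Hello, world!")
}

class Greeter(val name: String) {
    fun greet() = println("Hello, $name!")
}
"""

splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.KOTLIN, chunk_size=60, chunk_overlap=0
)
print(splitter.create_documents([kotlin_code]))
```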
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
- **Description:** use the term "keyword" according to the official
Python docs glossary; see https://docs.python.org/3/glossary.html
- **Issue:** not applicable
- **Dependencies:** not applicable
- **Tag maintainer:** @hwchase17
- **Twitter handle:** vreyespue
continuation of PR #8550
@hwchase17 please review and merge, and also close PR #8550.
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
### Description
While reading the documentation, I found that some examples had extra
spaces, violating "Unexpected spaces around keyword / parameter equals
(E251)" from PEP 8. I removed these extra spaces.
### Tag maintainer
@eyurtsev
### Twitter handle
[billvsme](https://twitter.com/billvsme)
### Description
Renamed several repository links from `hwchase17` to `langchain-ai`.
### Why
I discovered that the README file in the devcontainer contained the old
repository name, so I took the opportunity to update the old name in all
files within the repository, excluding those that do not require
changes.
### Dependencies
none
### Tag maintainer
@baskaryan
### Twitter handle
[kzk_maeda](https://twitter.com/kzk_maeda)
Fixed a minor typographical error in the get_started.mdx file.
This improvement enhances the readability and correctness of the
notebook, making it easier for users to understand and follow the
demonstration, and helps maintain the quality and accuracy of the
content within the repository.
Please review the change at your convenience.
@baskaryan , @hwaking
- Description:
Updated the JSONLoader usage documentation, which previously showed
arguments that made the loader unusable (a reference invocation is
sketched below).
- Issue: When used with the documented arguments, JSONLoader failed on
various JSON documents.
- Dependencies:
no dependencies
- Twitter handle: @TheSlnArchitect
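For reference, a working invocation under the loader's current
`jq_schema`/`text_content` parameters (the file path and schema below
are placeholders, not the exact snippet from the docs):

```python
from langchain.document_loaders import JSONLoader

# Requires the `jq` package; jq_schema selects which part of the JSON
# becomes each document's page_content.
loader = JSONLoader(
    file_path="chat.json",            # placeholder path
    jq_schema=".messages[].content",  # one document per message content
    text_content=False,               # tolerate non-string values
)
docs = loader.load()
```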
Various improvements to the Model I/O section of the documentation
- Changed "Chat Model" to "chat model" in a few spots for internal
consistency
- Minor spelling & grammar fixes to improve readability & comprehension
Various miscellaneous fixes to most pages in the 'Retrievers' section of
the documentation:
- "VectorStore" and "vectorstore" changed to "vector store" for
consistency
- Various spelling, grammar, and formatting improvements for readability
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Description: Updated the prompt name in a sequential chain example so
that it is not overwritten by the identical prompt name used for the
next chain in the sequence (see the sketch below).
Issue: n/a
Dependencies: none
Tag maintainer: not known
Twitter handle: not on twitter, feel free to use my git username for
anything
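For context, a minimal sketch of the pattern with distinctly named
prompts so a later definition cannot shadow an earlier one (the names
here are illustrative, not the exact ones from the docs page):

```python
from langchain.chains import LLMChain, SequentialChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI()

# Each prompt has its own variable name and each chain its own output_key.
synopsis_prompt = PromptTemplate.from_template(
    "Write a synopsis for a play titled {title}."
)
review_prompt = PromptTemplate.from_template(
    "Write a review of this synopsis:\n{synopsis}"
)

synopsis_chain = LLMChain(llm=llm, prompt=synopsis_prompt, output_key="synopsis")
review_chain = LLMChain(llm=llm, prompt=review_prompt, output_key="review")

overall_chain = SequentialChain(
    chains=[synopsis_chain, review_chain],
    input_variables=["title"],
    output_variables=["synopsis", "review"],
)
```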
Improve internal consistency in LangChain documentation
- Change occurrences of eg and eg. to e.g.
- Fix headers containing unnecessary capital letters.
- Change instances of "few shot" to "few-shot".
- Add periods to end of sentences where missing.
- Minor spelling and grammar fixes.
Description: Updating documentation to add AmazonTextractPDFLoader
according to
[comment](https://github.com/langchain-ai/langchain/pull/8661#issuecomment-1666572992)
from [baskaryan](https://github.com/baskaryan)
Adding one notebook and instructions to the
modules/data_connection/document_loaders/pdf.mdx
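For reference, a short sketch of what the new docs cover (the S3 URI is
a placeholder; the loader needs the `amazon-textract-caller` package and
configured AWS credentials):

```python
from langchain.document_loaders import AmazonTextractPDFLoader

# A local file path or an s3:// URI can be passed as input.
loader = AmazonTextractPDFLoader("s3://my-bucket/scanned-document.pdf")
docs = loader.load()
```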
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
- Description: Added filter to the query methods in
VectorStoreIndexWrapper for filtering by metadata, i.e. search_kwargs
(sketched below)
- Tag maintainer: @rlancemartin, @eyurtsev
Updated the doc snippet on this topic as well. It took me a long while
to figure out how to filter the vectorstore by filename, so this might
help someone else out.
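A hedged sketch of the pattern (the exact kwarg names below are an
assumption on my part; check the updated doc snippet for the exact
form):

```python
from langchain.document_loaders import PyPDFLoader
from langchain.indexes import VectorstoreIndexCreator

index = VectorstoreIndexCreator().from_loaders(
    [PyPDFLoader("a.pdf"), PyPDFLoader("b.pdf")]
)

# Restrict retrieval to documents whose metadata matches the filter,
# here filtering by filename via the "source" metadata field.
answer = index.query(
    "What does the report conclude?",
    retriever_kwargs={"search_kwargs": {"filter": {"source": "a.pdf"}}},
)
```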
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
- Description: I have added an example showing how to pass a custom
template to ConversationalRetrievalChain. Instead of
CONDENSE_QUESTION_PROMPT, any prompt can be passed via the
condense_question_prompt argument (see the sketch below). Look in Use
cases -> QA over Documents -> How to -> Store and reference chat
history,
- Issue: #8864,
- Dependencies: NA,
- Tag maintainer: @hinthornw,
- Twitter handle:
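A minimal sketch of the pattern (FAISS and the OpenAI components are
stand-ins; any retriever and LLM work, and the prompt text is
illustrative):

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import PromptTemplate
from langchain.vectorstores import FAISS

vectorstore = FAISS.from_texts(
    ["LangChain supports custom condense-question prompts."], OpenAIEmbeddings()
)

# The custom prompt must expose {chat_history} and {question}.
custom_condense_prompt = PromptTemplate.from_template(
    "Given the conversation below, rephrase the follow-up question as a "
    "standalone question.\n\nChat history:\n{chat_history}\n"
    "Follow-up question: {question}\nStandalone question:"
)

chain = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(),
    retriever=vectorstore.as_retriever(),
    condense_question_prompt=custom_condense_prompt,
)
```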
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
* Documentation to favor creation without declaring input_variables (see
the sketch after this list)
* Cut out obvious examples, but add more description in a few places
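For reference, the favored pattern, sketched minimally (the template
text is illustrative):

```python
from langchain.prompts import PromptTemplate

# input_variables are inferred from the template string,
# so they do not need to be declared explicitly.
prompt = PromptTemplate.from_template("Tell me a {adjective} joke about {content}.")
print(prompt.input_variables)  # ['adjective', 'content']
print(prompt.format(adjective="funny", content="chickens"))
```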
---------
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
- Description: Added a missing word and rearranged a sentence in the
documentation for Self Query Retrievers,
- Issue: NA,
- Dependencies: NA,
- Tag maintainer: @baskaryan,
- Twitter handle: NA
Thanks for your time.
#7854
Added the ability to use the `separator` as a regex or a simple
character.
Fixed a bug where `start_index` was incorrectly counting from -1.
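A short sketch of both changes (the `is_separator_regex` parameter name
below is an assumption based on the current splitter API, since this
description does not name it):

```python
from langchain.text_splitter import CharacterTextSplitter

text = "1. First clause 2. Second clause 3. Third clause"

# Treat the separator as a regular expression rather than a literal string.
regex_splitter = CharacterTextSplitter(
    separator=r"\d+\.", is_separator_regex=True, chunk_size=20, chunk_overlap=0
)
print(regex_splitter.create_documents([text]))

# With add_start_index=True, start_index metadata should begin at 0, not -1.
indexed_splitter = CharacterTextSplitter(separator=" ", add_start_index=True)
print(indexed_splitter.create_documents([text])[0].metadata)
```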
Who can review?
@eyurtsev
@hwchase17
@mmz-001
## Description:
1) The map reduce example in the docs is missing an important import
statement. Figured other people would benefit from being able to copy
🍝 the code.
2) The RefineDocumentsChain example was also broken.
## Issue:
None
## Dependencies:
None. One liner.
## Tag maintainer:
@baskaryan
## Twitter handle:
I mean, it's a one line fix lol. But @will_thompson_k is my twitter
handle.
- Description: Adds an optional buffer arg to the memory's
from_messages() method. If provided, the existing buffer is used
instead of regenerating a summary from the loaded messages.
Why? If we have past messages to load from, it is likely we also have an
existing summary. This is particularly helpful in cases where the chat
is ephemeral and/or backed by a serverless setup where the chat history
is not stored server-side but is instead passed back and forth between
the backend and frontend.
E.g. take a stateless QA backend implementation that loads messages on
every request and generates a response. Without this addition, each time
the messages are loaded via from_messages, the summary is recomputed
even though it may have just been computed during the previous response.
With this change, the previously computed summary can be passed in (see
the sketch below), avoiding:
1) spending extra $$$ on tokens, and
2) increased response time from regenerating a previously generated
summary.
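A minimal sketch of the intended flow (the description does not name the
memory class; `ConversationSummaryMemory` is assumed here, and the
messages/summary are illustrative):

```python
from langchain.llms import OpenAI
from langchain.memory import ChatMessageHistory, ConversationSummaryMemory

history = ChatMessageHistory()
history.add_user_message("Hi, I'm looking for a gravel bike.")
history.add_ai_message("Great! What budget do you have in mind?")

# Summary computed on a previous request, e.g. returned to the frontend
# and sent back with the next request.
previous_summary = "The user is shopping for a gravel bike."

# With the optional buffer arg, the existing summary is reused instead of
# being regenerated from the loaded messages.
memory = ConversationSummaryMemory.from_messages(
    llm=OpenAI(temperature=0),
    chat_memory=history,
    buffer=previous_summary,
)
```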
Tag maintainer: @hwchase17
Twitter handle: https://twitter.com/ShantanuNair
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>