Google Scholar outputs a nice list of scientific and research articles
that use LangChain.
I added a link to the Google Scholar page to the `gallery` doc page.
The method `confluence.get_all_pages_by_label` returns only metadata about
documents with a certain label (such as pageId, title, ...). To return
all documents with a certain label, we need to extract all page ids for
that label and then fetch the page content by those ids.
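A minimal sketch of the two-step lookup, assuming the `atlassian-python-api` `Confluence` client that the loader wraps; the URL, credentials, and label are placeholders:
```python
from atlassian import Confluence

confluence = Confluence(
    url="https://example.atlassian.net/wiki",
    username="user@example.com",
    password="api-token",
)

# Step 1: the label endpoint returns only page metadata (ids, titles, ...).
pages = confluence.get_all_pages_by_label(label="my-label", start=0, limit=50)
page_ids = [page["id"] for page in pages]

# Step 2: fetch the actual body for each page id.
docs = [
    confluence.get_page_by_id(page_id, expand="body.storage")
    for page_id in page_ids
]
```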
---------
Co-authored-by: Andrea Biondo <a.biondo@reply.it>
An incorrect data type error occurred when executing `_construct_path` in
`chain.py`, as follows:
```python
Error with message replace() argument 2 must be str, not int
```
The path is always a string, but the value returned by `args.pop(param, "")`
is not guaranteed to be one (it can be an int, for example).
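A minimal sketch of the kind of fix this implies; the function body below is paraphrased for illustration, not copied from `chain.py`:
```python
def _construct_path(path: str, args: dict) -> str:
    """Substitute path parameters such as {id}, stringifying non-str values."""
    for param in list(args):
        placeholder = f"{{{param}}}"
        if placeholder in path:
            # str(...) avoids "replace() argument 2 must be str, not int"
            path = path.replace(placeholder, str(args.pop(param, "")))
    return path
```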
This PR includes two main changes:
- Refactor the `TelegramChatLoader` and `FacebookChatLoader` classes by
removing the dependency on pandas and simplifying the message filtering
process (see the sketch after this list).
- Add test cases for the `TelegramChatLoader` and `FacebookChatLoader`
classes. These tests ensure that the classes correctly load and process
the example chat data, providing better test coverage for this
functionality.
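A minimal sketch of the kind of pandas-free filtering this refers to; the field names are illustrative, not the loaders' actual schema:
```python
from typing import Dict, List

def filter_text_messages(messages: List[Dict], sender: str) -> List[str]:
    """Keep only non-empty text messages from one sender, no DataFrame needed."""
    return [
        m["text"]
        for m in messages
        if m.get("from") == sender and isinstance(m.get("text"), str) and m["text"]
    ]
```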
The Blockchain Document Loader's default behavior is to return 100
tokens at a time, which is the Alchemy API limit. The Document Loader
exposes a startToken that can be used for pagination against the API.
This enhancement adds an optional get_all_tokens param (default:
False) which will:
- Iterate over the Alchemy API until it receives all the tokens, and
return the tokens in a single call to the loader.
- Handle all/most tokenId formats (these can be int, or base-16 hex with
none or all of the leading zeros). There are no constraints on how smart
contracts can represent this value, but these three are the most common.
Note that a contract with 10,000 tokens will issue 100 calls to the
Alchemy API and could take about a minute, which is why this param
defaults to False. But I've been using the doc loader with these
utilities on the side, so I figured it might make sense to build them in
for others to use.
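A rough sketch of the pagination loop this adds, assuming one helper call per Alchemy getNFTsForCollection request; the `fetch_page` helper and the `nfts`/`nextToken` field names are illustrative, not the loader's actual attributes:
```python
from typing import Any, Callable, Dict, List, Optional

def fetch_all_tokens(
    fetch_page: Callable[[Optional[str]], Dict[str, Any]]
) -> List[Dict[str, Any]]:
    """Keep calling the API, advancing startToken, until there is no next page."""
    tokens: List[Dict[str, Any]] = []
    start_token: Optional[str] = None
    while True:
        page = fetch_page(start_token)       # one API call, at most 100 tokens
        tokens.extend(page["nfts"])
        start_token = page.get("nextToken")  # absent/empty means we are done
        if not start_token:
            return tokens
```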
Single edit to: models/text_embedding/examples/openai.ipynb - Line 88:
changed from `embeddings = OpenAIEmbeddings(model_name="ada")` to
`embeddings = OpenAIEmbeddings()`, as `model_name` is no longer part of the
`OpenAIEmbeddings` class.
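For context, the updated cell amounts to something like this; the `embed_query` call is illustrative and not part of the edit:
```python
from langchain.embeddings import OpenAIEmbeddings

# No model_name argument anymore; the class uses its default embedding model.
embeddings = OpenAIEmbeddings()
vector = embeddings.embed_query("Hello world")
```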
@vowelparrot @hwchase17 Here is a new implementation of
`acompress_documents` for `LLMChainExtractor`, without changes to the
sync version, as you suggested in #3587 / [Async Support for
LLMChainExtractor](https://github.com/hwchase17/langchain/pull/3587).
I created a new PR to avoid cluttering the history with reverted commits;
I hope that is the right way.
Happy for any improvements/suggestions.
(PS:
I also tried an alternative implementation with a nested helper function
like
```python
async def acompress_documents_old(
    self, documents: Sequence[Document], query: str
) -> Sequence[Document]:
    """Compress page content of raw documents."""

    async def _compress_concurrently(doc):
        _input = self.get_input(query, doc)
        output = await self.llm_chain.apredict_and_parse(**_input)
        return Document(page_content=output, metadata=doc.metadata)

    outputs = await asyncio.gather(
        *[_compress_concurrently(doc) for doc in documents]
    )
    compressed_docs = list(filter(lambda x: len(x.page_content) > 0, outputs))
    return compressed_docs
```
But in the end I found the committed version to be more readable and
more "canonical"; hope you agree.)
Related to [this
issue](https://github.com/hwchase17/langchain/issues/3655#issuecomment-1529415363).
The `Mapped` SQLAlchemy class was introduced in SQLAlchemy 1.4, but the
migration from 1.3 to 1.4 is quite challenging, so, IMO, it's better to
keep backwards compatibility and not change the SQLAlchemy requirements
just because of type annotations.
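For context, a minimal sketch of the annotation styles in question; the table below is illustrative, not the actual model used by the message history code:
```python
from sqlalchemy import Column, Integer, Text
from sqlalchemy.ext.declarative import declarative_base  # 1.3-compatible import

Base = declarative_base()

class Message(Base):
    """Classic declarative style; works on SQLAlchemy 1.3 as well as 1.4+."""
    __tablename__ = "message_store"

    id = Column(Integer, primary_key=True)
    session_id = Column(Text)
    message = Column(Text)

# The annotated style, e.g. `session_id: Mapped[str]`, depends on `Mapped`,
# which is exactly the requirement bump this PR avoids.
```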
This PR fixes the "SyntaxError: invalid escape sequence" error in the
pydantic.py file. The issue was caused by the backslashes in the regular
expression pattern being treated as escape characters. By using a raw
string literal for the regex pattern (e.g., `r"\{.*\}"`), this fix ensures
that backslashes are treated as literal characters, thus preventing the
error.
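A small before/after sketch of the fix; the surrounding text and parsing logic are illustrative, only the pattern is the one quoted above:
```python
import re

text = 'Thought: the answer is {"name": "value"}'

# Before: "\{" and "\}" are invalid escape sequences in a plain string literal.
# pattern = "\{.*\}"

# After: the raw string keeps the backslashes as literal characters for re.
pattern = r"\{.*\}"
match = re.search(pattern, text)
if match:
    print(match.group())  # {"name": "value"}
```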
Co-authored-by: Tomer Levy <tomer.levy@tipalti.com>
It seems the pyllamacpp package is no longer the supported bindings from
the gpt4all project. Tested that this works locally.
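For reference, a minimal usage sketch with LangChain's `GPT4All` LLM wrapper on top of the official `gpt4all` bindings; the model path is a placeholder:
```python
# pip install gpt4all   (replaces the old pyllamacpp bindings)
from langchain.llms import GPT4All

# Point this at a locally downloaded GPT4All model file.
llm = GPT4All(model="./models/gpt4all-model.bin")
print(llm("Name three uses of a document loader."))
```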
Given that the older models weren't very performant, I think it's better
to migrate now without trying to include a lot of try / except blocks.
---------
Co-authored-by: Nissan Pow <npow@users.noreply.github.com>
Co-authored-by: Nissan Pow <pownissa@amazon.com>