- Update the RunCreate object to work with recent changes
- Add optional Example ID to the tracer
- Adjust default persist_session behavior to attempt to load the session
if it exists
- Raise more useful HTTP errors for logging
- Add unit testing
- Fix the default ID to be a UUID for v2 tracer sessions
Broken out from the big draft here:
https://github.com/hwchase17/langchain/pull/4061
- confirm creation
- confirm functionality with a simple dimension check.
The test currently calls the OpenAI API directly, but I learned from
@vowelparrot that we cache the requests, so it's not that expensive. I
also found that we call the OpenAI API in other integration tests.
Please let me know if there is any concern about real external API
calls; I can alternatively make a fake LLM for this test. Thanks
This implements a loader for text passages in JSON format. The `jq`
syntax is used to define a schema for accessing the relevant contents
within the JSON file. This adds a dependency on the `jq` package:
https://pypi.org/project/jq/.
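For reference, a minimal usage sketch (the file name and jq expression
are illustrative; check the loader's signature before relying on it):
```python
from langchain.document_loaders import JSONLoader

# Select the "text" field of every record under ".messages" via jq syntax.
loader = JSONLoader(
    file_path="chat.json",          # illustrative input file
    jq_schema=".messages[].text",   # jq expression picking out the passages
)
docs = loader.load()
```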
---------
Signed-off-by: Aivin V. Solatorio <avsolatorio@gmail.com>
This commit adds support for passing additional arguments to the
`SeleniumURLLoader` when creating Chrome or Firefox web drivers.
Previously, only a few arguments such as `headless` could be passed in.
With this change, users can pass any additional arguments they need as a
list of strings using the `arguments` parameter.
The `arguments` parameter allows users to configure the driver with any
options that are available for that particular browser. For example,
users can now pass custom `user_agent` strings or `proxy` settings using
this parameter.
This change also includes updated documentation and type hints to
reflect the new `arguments` parameter and its usage.
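A hedged usage sketch (the specific flags below are illustrative; the
`arguments` list is passed through to the browser's options object):
```python
from langchain.document_loaders import SeleniumURLLoader

loader = SeleniumURLLoader(
    urls=["https://example.com"],
    browser="chrome",
    arguments=[
        "--user-agent=MyCustomAgent/1.0",  # custom user agent string
        "--proxy-server=127.0.0.1:8080",   # proxy settings
    ],
)
docs = loader.load()
```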
Fixes #4120
This PR updates the `message_line_regex` used by `WhatsAppChatLoader` to
support different date-time formats used in WhatsApp chat exports;
resolves #4153.
The new regex handles the following input formats:
```terminal
[05.05.23, 15:48:11] James: Hi here
[11/8/21, 9:41:32 AM] User name: Message 123
1/23/23, 3:19 AM - User 2: Bye!
1/23/23, 3:22 AM - User 1: And let me know if anything changes
```
Tests have been added to verify that the loader works correctly with all
formats.
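For illustration, a hedged sketch of one combined pattern in this
spirit (not the exact regex from the PR) that matches all four sample
lines above:
```python
import re

# Matches both "[date, time] sender: text" and "date, time - sender: text".
message_line_regex = re.compile(
    r"\[?"                               # optional opening bracket
    r"(\d{1,2}[./]\d{1,2}[./]\d{2,4})"   # date: 05.05.23, 11/8/21, 1/23/23
    r",\s"
    r"(\d{1,2}:\d{2}(?::\d{2})?"         # time: 15:48:11, 9:41:32, 3:19
    r"(?:\s[AP]M)?)"                     # optional AM/PM suffix
    r"\]?"                               # optional closing bracket
    r"(?:\s-)?\s"                        # "] " or " - " separator
    r"([^:]+):\s"                        # sender name
    r"(.+)"                              # message text
)

for line in [
    "[05.05.23, 15:48:11] James: Hi here",
    "[11/8/21, 9:41:32 AM] User name: Message 123",
    "1/23/23, 3:19 AM - User 2: Bye!",
]:
    print(message_line_regex.match(line).groups())
```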
`expand` is not an allowed parameter for the method
`confluence.get_all_pages_by_label`, since that method returns only
document metadata, not the body text.
Co-authored-by: Andrea Biondo <a.biondo@reply.it>
The forward-ref annotations don't get updated if we only import under
type checking.
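A minimal sketch of the failure mode, assuming a pydantic model with a
forward reference (the module and class names are hypothetical):
```python
from typing import TYPE_CHECKING

from pydantic import BaseModel

if TYPE_CHECKING:
    # Visible to static type checkers only; the name is absent at runtime.
    from mypkg.callbacks import Callback  # hypothetical import

class Handler(BaseModel):
    callback: "Callback"

# Resolving the forward ref at runtime fails with a NameError, because
# Callback was never actually imported:
# Handler.update_forward_refs()
# The fix is a real (runtime) import, or passing the name explicitly:
# Handler.update_forward_refs(Callback=Callback)
```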
---------
Co-authored-by: Abhinav Verma <abhinav_win12@yahoo.co.in>
`run_manager` was not being passed downstream. Not sure if this was a
deliberate choice, but it seems to have broken many agent callbacks like
`agent_action` and `agent_finish`. This fix needs a proper review.
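A hedged sketch of the kind of fix involved (names are illustrative,
not the exact diff); the point is that the executor has to hand a child
callback manager to the planner:
```python
# Inside a hypothetical agent-executor step; previously run_manager was
# dropped here instead of being forwarded.
def _take_next_step(self, intermediate_steps, inputs, run_manager=None):
    return self.agent.plan(
        intermediate_steps,
        # Forward a child callback manager so downstream callbacks such as
        # agent_action and agent_finish actually fire.
        callbacks=run_manager.get_child() if run_manager else None,
        **inputs,
    )
```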
Co-authored-by: blob42 <spike@w530>
Bump threshold to 1.4 from 1.3. Change the import to be compatible.
Resolves #4142 and #4129
---------
Co-authored-by: ndaugreal <ndaugreal@gmail.com>
Co-authored-by: Jeremy Lopez <lopez86@users.noreply.github.com>
Having dev containers makes it easier, faster, and more secure to set
up the dev environment for the repository.
The pull request consists of:
- .devcontainer folder with:
  - **devcontainer.json:** minimal necessary VS Code extensions and
settings
  - **docker-compose.yaml:** can be modified to run necessary services
as needed (e.g. vector DBs, databases)
  - **Dockerfile:** non-root user with dev tools
- Changes to README: added the Open in GitHub Codespaces badge and the
Open in Dev Container badge
Co-authored-by: Jinto Jose <129657162+jj701@users.noreply.github.com>
As of right now, when using functions like
`max_marginal_relevance_search()` or
`max_marginal_relevance_search_by_vector()`, the rest of the kwargs are
not propagated to `self._search_helper()`. For example, a user cannot
explicitly state the `distance_metric` they want to use when calling
`max_marginal_relevance_search`.
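A hedged sketch of the fix (the surrounding class is elided and the
default values are illustrative; the method names follow the
description above):
```python
def max_marginal_relevance_search(self, query, k=4, fetch_k=20, **kwargs):
    # Previously **kwargs was dropped here; forwarding it lets callers pass
    # options such as distance_metric through to the shared helper.
    return self._search_helper(query=query, k=k, fetch_k=fetch_k, **kwargs)
```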
If a library user has to decrease the `max_token_limit`, they will
probably want to prune the summary buffer even though they haven't
added any new messages.
Personally, I need this because I want to serialise the memory buffer
object and save it to a database; when I load it, I may have
re-configured my code to have a shorter memory to save on tokens.
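A hedged usage sketch (assuming `ConversationSummaryBufferMemory`
exposes `prune()` for direct calls; the token limits are illustrative):
```python
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationSummaryBufferMemory

memory = ConversationSummaryBufferMemory(llm=ChatOpenAI(), max_token_limit=2000)
# ... restore the buffer contents from the database, then tighten the limit:
memory.max_token_limit = 500
memory.prune()  # trim the buffer now, without waiting for a new message
```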
In the example for creating a Python REPL tool under the Agent module,
the `.run` was omitted from the example. I believe this is required
when defining a Tool.
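A hedged sketch of the corrected snippet (import paths may vary by
version):
```python
from langchain.agents import Tool
from langchain.utilities import PythonREPL

python_repl = PythonREPL()
repl_tool = Tool(
    name="python_repl",
    description="A Python shell. Use this to execute Python commands.",
    func=python_repl.run,  # the ".run" is what the docs example omitted
)
```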
In the section `Get Message Completions from a Chat Model` of the quick
start guide, the HumanMessage doesn't need to include `Translate this
sentence from English to French.` when there is a system message.
Simplifying the HumanMessages in these examples further demonstrates
the power of the LLM.
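A hedged sketch of the simplified example (the sentence and model
settings are illustrative):
```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI(temperature=0)
messages = [
    SystemMessage(
        content="You are a helpful assistant that translates English to French."
    ),
    # The instruction already lives in the system message, so the human
    # message only needs the sentence itself:
    HumanMessage(content="I love programming."),
]
chat(messages)
```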
* implemented arun, results, and aresults. Reuses aiosession if
available.
* helper tools GoogleSerperRun and GoogleSerperResults
* support for Google Images, Places and News (examples given) and
filtering based on time (e.g. past hour)
* updated docs
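A hedged usage sketch (the `type` and `tbs` parameters follow the
Serper.dev API; verify against the wrapper's signature, and note a
SERPER_API_KEY is required):
```python
from langchain.utilities import GoogleSerperAPIWrapper

# Search Google News, filtered to the past hour (tbs="qdr:h").
search = GoogleSerperAPIWrapper(type="news", tbs="qdr:h")
results = search.results("langchain")
```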
The deeplake integration was/is very verbose (see e.g. [the
documentation
example](https://python.langchain.com/en/latest/use_cases/code/code-analysis-deeplake.html))
when loading or creating a deeplake dataset, with only limited options
to dial down verbosity.
Additionally, the warning that a "Deep Lake Dataset already exists" was
confusing, as there is, as far as I can tell, no other way to load a
dataset.
This small PR changes that and introduces an explicit `verbose` argument
which is also passed to the deeplake library.
There should be minimal changes to the default output (the loading
line is printed instead of warned, to make it consistent with
`ds.summary()`, which also prints).
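A hedged sketch of the new flag (assuming it is forwarded to the
underlying deeplake library; the path is illustrative):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import DeepLake

db = DeepLake(
    dataset_path="./my_deeplake",           # illustrative local path
    embedding_function=OpenAIEmbeddings(),
    verbose=False,  # suppress dataset loading/summary output
)
```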
Google Scholar outputs a nice list of scientific and research articles
that use LangChain.
I added a link to the Google Scholar page to the `gallery` doc page.
The method `confluence.get_all_pages_by_label` returns only metadata
about documents with a certain label (such as page ID, title, ...). To
return all documents with a certain label, we need to extract all page
IDs for that label and then fetch each page's content by ID.
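A hedged sketch of the two-step fetch (method names are from
atlassian-python-api; the URL, credentials, and label are illustrative,
and exact signatures should be verified):
```python
from atlassian import Confluence

confluence = Confluence(
    url="https://example.atlassian.net/wiki",  # illustrative instance
    username="user@example.com",
    password="api-token",
)

# First collect the page IDs carrying the label, then fetch each body.
pages = confluence.get_all_pages_by_label(label="my-label", start=0, limit=50)
for page in pages:
    full_page = confluence.get_page_by_id(page["id"], expand="body.storage")
    body = full_page["body"]["storage"]["value"]
```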
---------
Co-authored-by: Andrea Biondo <a.biondo@reply.it>
An incorrect data type error happened when executing `_construct_path`
in `chain.py`:
```python
Error with message replace() argument 2 must be str, not int
```
The path is always a string, but the type of the value returned by
`args.pop(param, "")` is not guaranteed.
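A hedged sketch of the fix (the surrounding code is an assumption):
coerce the popped value to `str` before substituting it into the path.
```python
# Before (crashes when the parameter value is an int):
# path = path.replace(f"{{{param}}}", args.pop(param, ""))
# After: coerce to str so replace() always receives strings.
path = path.replace(f"{{{param}}}", str(args.pop(param, "")))
```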
This PR includes two main changes:
- Refactor the `TelegramChatLoader` and `FacebookChatLoader` classes by
removing the dependency on pandas and simplifying the message filtering
process.
- Add test cases for the `TelegramChatLoader` and `FacebookChatLoader`
classes. These tests ensure that the classes correctly load and process
the example chat data, providing better test coverage for this
functionality.
The Blockchain Document Loader's default behavior is to return 100
tokens at a time, which is the Alchemy API limit. The Document Loader
exposes a `startToken` that can be used for pagination against the API.
This enhancement includes an optional `get_all_tokens` param (default:
False) which will:
- Iterate over the Alchemy API until it receives all the tokens, and
return the tokens in a single call to the loader.
- Handle all/most tokenId formats (these can be an int, or hex with
either none or all of the leading zeros). There are no constraints on
how smart contracts can represent this value, but these three are the
most common.
Note that a contract with 10,000 tokens will issue 100 calls to the
Alchemy API and could take about a minute, which is why this param
defaults to False. But I've been using the doc loader with these
utilities on the side, so I figured it might make sense to build them
in for others to use.
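A hedged usage sketch (the contract address and API key are
placeholders; the loader and parameter names follow the description
above):
```python
from langchain.document_loaders.blockchain import BlockchainDocumentLoader

loader = BlockchainDocumentLoader(
    contract_address="0x...",        # placeholder contract address
    api_key="YOUR_ALCHEMY_API_KEY",  # placeholder key
    get_all_tokens=True,  # paginate the API until every token is returned
)
docs = loader.load()
```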