- [x] PR title:
community: Add OCI Generative AI new model support
- [x] PR message:
- Description: adding support for new models offered by OCI Generative
AI services. This is a moderate update of our initial integration PR
16548 and includes a new integration for our chat models under
/langchain_community/chat_models/oci_generative_ai.py
- Issue: NA
- Dependencies: No new Dependencies, just latest version of our OCI sdk
- Twitter handle: NA
- [x] Add tests and docs:
1. we have updated our unit tests
2. we have updated our documentation including a new ipynb for our new
chat integration
- [x] Lint and test:
`make format`, `make lint`, and `make test` run successfully
---------
Co-authored-by: RHARPAZ <RHARPAZ@RHARPAZ-5750.us.oracle.com>
Co-authored-by: Arthur Cheng <arthur.cheng@oracle.com>
** Description**
This is the community integration of ZenGuard AI - the fastest
guardrails for GenAI applications. ZenGuard AI protects against:
- Prompts Attacks
- Veering of the pre-defined topics
- PII, sensitive info, and keywords leakage.
- Toxicity
- Etc.
**Twitter Handle** : @zenguardai
- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. Added an integration test
2. Added colab
- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified.
---------
Co-authored-by: Nuradil <nuradil.maksut@icloud.com>
Co-authored-by: Nuradil <133880216+yaksh0nti@users.noreply.github.com>
They are now rejecting with code 401 calls from users with expired or
invalid tokens (while before they were being considered anonymous).
Thus, the authorization header has to be removed when there is no token.
Related to: #23178
---------
Signed-off-by: Joffref <mariusjoffre@gmail.com>
Description: 2 feature flags added to SharePointLoader in this PR:
1. load_auth: if set to True, adds authorised identities to metadata
2. load_extended_metadata, adds source, owner and full_path to metadata
Unit tests:N/A
Documentation: To be done.
---------
Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
This fixes processing issue for nodes with numbers in their labels (e.g.
`"node_1"`, which would previously be relabeled as `"node__"`, and now
are correctly processed as `"node_1"`)
**Description:**
Fix "`TypeError: 'NoneType' object is not iterable`" when the
auth_context is absent in PebbloRetrievalQA. The auth_context is
optional; hence, PebbloRetrievalQA should work without it, but it throws
an error at the moment. This PR fixes that issue.
**Issue:** NA
**Dependencies:** None
**Unit tests:** NA
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
Description: file_metadata_ was not getting propagated to returned
documents. Changed the lookup key to the name of the blob's path.
Changed blob.path key to blob.path.name for metadata_dict key lookup.
Documentation: N/A
Unit tests: N/A
Co-authored-by: ccurme <chester.curme@gmail.com>
**Description:**
Currently, the `langchain_pinecone` library forces the `async_req`
(asynchronous required) argument to Pinecone to `True`. This design
choice causes problems when deploying to environments that do not
support multiprocessing, such as AWS Lambda. In such environments, this
restriction can prevent users from successfully using
`langchain_pinecone`.
This PR introduces a change that allows users to specify whether they
want to use asynchronous requests by passing the `async_req` parameter
through `**kwargs`. By doing so, users can set `async_req=False` to
utilize synchronous processing, making the library compatible with AWS
Lambda and other environments that do not support multithreading.
**Issue:**
This PR does not address a specific issue number but aims to resolve
compatibility issues with AWS Lambda by allowing synchronous processing.
**Dependencies:**
None, that I'm aware of.
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
- **Description:** When use
RunnableWithMessageHistory/SQLChatMessageHistory in async mode, we'll
get the following error:
```
Error in RootListenersTracer.on_chain_end callback: RuntimeError("There is no current event loop in thread 'asyncio_3'.")
```
which throwed by
ddfbca38df/libs/community/langchain_community/chat_message_histories/sql.py (L259).
and no message history will be add to database.
In this patch, a new _aexit_history function which will'be called in
async mode is added, and in turn aadd_messages will be called.
In this patch, we use `afunc` attribute of a Runnable to check if the
end listener should be run in async mode or not.
- **Issue:** #22021, #22022
- **Dependencies:** N/A
The SelfQuery PGVectorTranslator is not correct. The operator is "eq"
and not "$eq".
This patch use a new version of PGVectorTranslator from
langchain_postgres.
It's necessary to release a new version of langchain_postgres (see
[here](https://github.com/langchain-ai/langchain-postgres/pull/75)
before accepting this PR in langchain.
fix systax warning in `create_json_chat_agent`
```
.../langchain/agents/json_chat/base.py:22: SyntaxWarning: invalid escape sequence '\ '
"""Create an agent that uses JSON to format its logic, build for Chat Models.
```
- **Description:** AsyncRootListenersTracer support on_chat_model_start,
it's schema_format should be "original+chat".
- **Issue:** N/A
- **Dependencies:**
minor changes to module import error handling and minor issues in
tutorial documents.
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
**Desscription**: When the ``sql_database.from_databricks`` is executed
from a Workflow Job, the ``context`` object does not have a
"browserHostName" property, resulting in an error. This change manages
the error so the "DATABRICKS_HOST" env variable value is used instead of
stoping the flow
Co-authored-by: lmorosdb <lmorosdb>
This change updates the requirements in
`libs/partners/pinecone/pyproject.toml` to allow all versions of
`pinecone-client` greater than or equal to 3.2.2.
This change resolves issue
[21955](https://github.com/langchain-ai/langchain/issues/21955).
---------
Co-authored-by: Erick Friis <erickfriis@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
The return type of `json.loads` is `Any`.
In fact, the return type of `dumpd` must be based on `json.loads`, so
the correction here is understandable.
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
- Fix bug with TypedDicts rendering inherited methods if inherting from
typing_extensions.TypedDict rather than typing.TypedDict
- Do not surface inherited pydantic methods for subclasses of BaseModel
- Subclasses of RunnableSerializable will not how methods inherited from
Runnable or from BaseModel
- Subclasses of Runnable that not pydantic models will include a link to
RunnableInterface (they still show inherited methods, we can fix this
later)
Currently, calling `with_structured_output()` with an invalid method
argument raises `Unrecognized method argument. Expected one of
'function_calling' or 'json_format'`, but the JSON mode option [is now
referred
to](https://python.langchain.com/v0.2/docs/how_to/structured_output/#the-with_structured_output-method)
by `'json_mode'`. This fixes that.
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>