langchain/libs/experimental
Oleksandr Yaremchuk 9428923bab
experimental[minor]: upgrade the prompt injection model (#20783)
- **Description:** In January, Laiyer.ai became part of ProtectAI, so the model is now owned by ProtectAI. In addition, we released a new version of the model yesterday that addresses the false positives the LangChain community and others reported to us. The new model is more accurate than the previous version, and we thought the LangChain community would benefit from using the [latest version of the model](https://huggingface.co/protectai/deberta-v3-base-prompt-injection-v2).
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** @alex_yaremchuk
2024-04-23 10:23:39 -04:00

# 🦜🧪 LangChain Experimental

This package holds experimental LangChain code, intended for research and experimental uses.

## Warning

Portions of the code in this package may be dangerous if not properly deployed in a sandboxed environment. Be wary of deploying experimental code to production unless you've taken appropriate precautions and have discussed it with your security team.

Some of the code here may be marked with security notices. However, given the exploratory and experimental nature of this package, the absence of a security notice on a piece of code does not mean that the code is safe to use without additional security considerations.