Add a new ChatGoogleGenerativeAI class in a `langchain-google-genai`
package.
Still to do: add a deprecation warning in PaLM.
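A minimal usage sketch, assuming the package is installed and a `GOOGLE_API_KEY` is set in the environment (the model name shown is an assumption):
```python
# Illustrative usage; the model name is an assumption.
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro")
print(llm.invoke("Sing a ballad of LangChain.").content)
```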
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Leonid Kuligin <lkuligin@yandex.ru>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Fix some circular deps:
- move PromptValue into the top-level module, because both PromptTemplates
and OutputParsers import it
- move tracer context vars to `tracers.context` and import them inside
functions in `callbacks.manager`
- add core import tests (see the sketch below)
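A minimal sketch of what a core import test can look like; the module list here is illustrative, not the actual one:
```python
# Illustrative core import test: importing each module at the top level must
# not raise, which catches circular-import regressions like the ones above.
import importlib

import pytest

MODULES = [
    "langchain_core.prompts",
    "langchain_core.output_parsers",
    "langchain_core.tracers.context",
]


@pytest.mark.parametrize("module_name", MODULES)
def test_importable(module_name: str) -> None:
    importlib.import_module(module_name)
```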
My thought is that the `==version` pin would prevent pip from finding the
package on regular [pypi.org](http://pypi.org/), so it would look at
[test.pypi.org](http://test.pypi.org/) for that one package. Otherwise it'll
pull packages from [pypi.org](http://pypi.org/) (e.g. sub-deps).
Right now, the CLI release is failing because it goes to test.pypi.org by
default, so it finds this incorrect FASTAPI package instead of the real one:
https://test.pypi.org/project/FASTAPI/
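As a sketch of the intended resolution behavior (the package name and `VERSION` variable are illustrative): pypi.org stays the primary index, test.pypi.org is only an extra index, and the exact pin can only be satisfied from test.pypi.org.
```yaml
# Illustrative install step: the release candidate resolves from
# test.pypi.org (the only index holding that exact version), while all
# sub-dependencies still resolve from regular pypi.org.
- name: Install release candidate
  run: |
    pip install \
      --extra-index-url https://test.pypi.org/simple/ \
      "langchain-cli==${VERSION}"
```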
The new ruff version fixed the blocking bugs, and I was able to fairly
easily get us to a passing state: ruff fixed some issues on its own, I fixed
a handful by hand, and I added a list of narrowly-targeted exclusions for
files that currently fail ruff rules we should probably look into
eventually.
I went pretty lenient on the docs / cookbooks rules, allowing dead code
and such things. Perhaps in the future we may want to tighten the rules
further, but this is already a good set of checks that found real issues
and will prevent them going forward.
PyPI trusted publishing wants to know which workflow is expected to do
the publish. We always want to publish from the same workflow, so we're
making `_test_release.yml` the only workflow that publishes to Test
PyPI.
This follows the principle of least privilege. Our `poetry build` step
doesn't need, and shouldn't get, access to our GitHub OIDC capability.
This is the same structure as I used in the already-merged PR for
refactoring the regular PyPI release workflow: #12578.
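A rough sketch of the job layout (job names and action versions are illustrative):
```yaml
# Illustrative structure: only the publish job carries the OIDC permission
# that trusted publishing needs; the build job has no elevated permissions.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: poetry build
      - uses: actions/upload-artifact@v4
        with:
          name: dist
          path: dist/
  test-pypi-publish:
    needs: build
    runs-on: ubuntu-latest
    permissions:
      id-token: write  # required for PyPI trusted publishing (OIDC)
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: dist
          path: dist/
      - uses: pypa/gh-action-pypi-publish@release/v1
        with:
          repository-url: https://test.pypi.org/legacy/
```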
Before making a new `langchain` release, we want to test that everything
works as expected. This PR lets us publish `langchain` to test PyPI,
then install it from there and run checks to ensure everything works
normally before publishing it "for real".
It also takes the opportunity to refactor the build process, splitting
up the build, release-creation, and PyPI upload steps into separate jobs
that do not share their elevated permissions with each other.
Adds a `langchain-location` param to lint, so the workflow can properly
locate the `langchain` library.
The regular langchain and experimental lint steps are passing, so the
default value seems to be working.
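A hypothetical sketch of the new input; the exact default path is an assumption:
```yaml
# Hypothetical input declaration on the lint workflow; the default keeps
# existing callers working unchanged.
on:
  workflow_call:
    inputs:
      langchain-location:
        description: "Relative path to the langchain library"
        required: false
        type: string
        default: "."
```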
Will run all CI because of the `_test` change, but future PRs against the
CLI will only trigger the new CLI workflow.
Has a bunch of file changes related to formatting/linting.
No mypy yet - coming soon
We don't use any of the new functionality at the moment. Just making sure
we don't fall behind on versions and fail to benefit from new patches. This
is an easy upgrade, and it's always harder to upgrade across multiple major
versions at once.
Adds LangServe package
* Integrate Runnables with FastAPI, creating a server and a `RemoteRunnable`
client (see the sketch below)
* Support multiple runnables for a given server
* Support sync/async/batch/abatch/stream/astream/astream_log on the
client side (using async implementations on the server)
* Adds validation using annotations (relying on pydantic under the hood) --
this still has some rough edges -- e.g., OpenAPI docs do NOT generate
correctly at the moment
* Uses pydantic v1 namespace
Known issues: the type translation code doesn't handle a lot of types (e.g.,
TypedDicts)
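A rough usage sketch of the server/client pair; paths, models, and ports are illustrative:
```python
# --- server.py: illustrative LangServe server exposing one runnable ---
from fastapi import FastAPI
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langserve import add_routes

app = FastAPI()
chain = ChatPromptTemplate.from_template("Tell me a joke about {topic}") | ChatOpenAI()
add_routes(app, chain, path="/joke")  # serves invoke/batch/stream endpoints

# --- client.py: illustrative RemoteRunnable client ---
from langserve import RemoteRunnable

joke_chain = RemoteRunnable("http://localhost:8000/joke/")
print(joke_chain.invoke({"topic": "parrots"}))
```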
---------
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
`mypy` cannot type-check code that relies on dependencies that aren't
installed.
Eventually we'll probably want to install as many optional dependencies
as possible. However, the full "extended deps" setup for langchain
creates a 3GB cache file and takes a while to unpack and install. We'll
probably want something a bit more targeted.
This is a first step toward something better.
A test file was accidentally dropping a `results.json` file in the
current working directory as a result of running `make test`.
This is undesirable, since we don't want to risk accidentally adding
stray files into the repo if we run tests locally and then do `git add
.` without inspecting the file list very closely.
Make sure that changes to CI infrastructure get tested on CI before
being merged.
Without this PR, changes to the poetry setup action don't trigger a CI
run and in principle could break `master` when merged.
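A minimal sketch of the idea, using the standard `paths` filter (the exact path list is illustrative):
```yaml
# Illustrative trigger: include the CI configuration itself in the paths
# filter, so PRs that touch workflows or composite actions run the CI they
# are changing.
on:
  pull_request:
    paths:
      - "libs/langchain/**"
      - ".github/workflows/**"
      - ".github/actions/**"
```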
### Description
The feature for anonymizing data has been implemented. In order to
protect private data, such as when querying external APIs (OpenAI), it
is worth pseudonymizing sensitive data to maintain full privacy.
Anonymization consists of two steps:
1. **Identification:** Identify all data fields that contain personally
identifiable information (PII).
2. **Replacement**: Replace all PIIs with pseudo values or codes that do
not reveal any personal information about the individual but can be used
for reference. We're not using regular encryption, because the language
model won't be able to understand the meaning or context of the
encrypted data.
We use *Microsoft Presidio* together with the *Faker* framework for
anonymization purposes because of the wide range of functionalities they
provide. The full implementation is available in `PresidioAnonymizer`.
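A minimal usage sketch, assuming the class lands in `langchain_experimental.data_anonymizer` (the output shown is illustrative, since replacement values are generated):
```python
# Illustrative usage; requires the Presidio and Faker extras to be installed.
from langchain_experimental.data_anonymizer import PresidioAnonymizer

anonymizer = PresidioAnonymizer()
print(anonymizer.anonymize("My name is John Doe, call me at 313-666-7440"))
# -> e.g. "My name is Marie Santos, call me at 734-413-1647"
```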
### Future works
- **deanonymization** - add the ability to reverse anonymization. For
example, the workflow could look like this: `anonymize -> LLMChain ->
deanonymize`. By doing this, we will retain anonymity in requests to, for
example, OpenAI, and then be able to restore the original data.
- **instance anonymization** - at this point, each occurrence of PII is
treated as a separate entity and separately anonymized. Therefore, two
occurrences of the name John Doe in the text will be changed to two
different names. It is therefore worth introducing support for full
instance detection, so that repeated occurrences are treated as a single
object.
### Twitter handle
@deepsense_ai / @MaksOpp
---------
Co-authored-by: MaksOpp <maks.operlejn@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
With this PR:
- All lint and test jobs use the exact same Python + Poetry installation
approach, instead of lints doing it one way and tests doing it another
way.
- The Poetry installation itself is cached, which saves ~15s per run.
- We no longer pass shell commands as workflow arguments to a workflow
that just runs them in a shell. This makes our actions more resilient to
shell code injection.
If y'all like this approach, I can modify the scheduled tests workflow
and the release workflow to use this too.
If another push to the same PR or branch happens while its CI is still
running, cancel the earlier run in favor of the next run.
There's no point in testing an outdated version of the code. GitHub only
allows a limited number of job runners to be active at the same time, so
it's better to cancel pointless jobs early so that more useful jobs can
run sooner.
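This uses GitHub's built-in concurrency groups; a minimal sketch:
```yaml
# One concurrency group per workflow + ref; a new push to the same ref
# cancels the still-running workflow for the previous push.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
```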
It's possible that langchain-experimental works fine with the latest
*published* langchain, but is broken with the langchain on `master`.
Unfortunately, you can see this is currently the case — this is why this
PR also includes a minor fix for the `langchain` package itself.
We want to catch situations like that *before* releasing a new
langchain, hence this test.
The current timeouts are too long, and mean that if the GitHub cache
decides to act up, jobs get bogged down for 15min at a time. This has
happened 2-3 times already this week -- a tiny fraction of our total
workflows but really annoying when it happens to you. We can do better.
Installing deps on a cache miss takes about 4 min, so it's not worth
waiting more than 4 min for the deps cache. The black and mypy caches save
1 and 2 min, respectively, so wait only up to that long to download them.
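The cache action supports capping download time via an environment variable; a sketch (the path and key are illustrative):
```yaml
# Illustrative cache step: cap the cache download at 4 minutes, since a
# reinstall on cache miss only costs about 4 minutes anyway.
- uses: actions/cache@v3
  env:
    SEGMENT_DOWNLOAD_TIMEOUT_MINS: "4"
  with:
    path: .venv
    key: deps-${{ hashFiles('poetry.lock') }}
```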
The previous approach was relying on `_test.yml` taking an input
parameter, and then doing almost completely orthogonal things for each
parameter value. I've separated out each of those test situations as its
own job or workflow file, which eliminated all the special-casing and,
in my opinion, improved maintainability by making it much more obvious
what code runs when.
Trusted Publishing is the current best practice for publishing Python
packages. Rather than long-lived secret keys, it uses OpenID Connect
(OIDC) to allow our GitHub runner to directly authenticate itself to
PyPI and get a short-lived publishing token. This locks down publishing
quite a bit:
- There's no long-lived publish key to steal anymore.
- Publishing is *only* allowed via the *specifically designated* GitHub
workflow in the designated repo.
It's also operationally easier: no keys means there's nothing that needs to
be periodically rotated, nothing to worry about leaking, and nobody can
accidentally publish a release from their laptop just because they happened
to have PyPI keys set up.
After this gets merged, we'll need to configure PyPI to start expecting
trusted publishing. It's only a few clicks and should only take a
minute; instructions are here:
https://docs.pypi.org/trusted-publishers/adding-a-publisher/
More info:
- https://blog.pypi.org/posts/2023-04-20-introducing-trusted-publishers/
- https://github.com/pypa/gh-action-pypi-publish
This is safer than the prior approach, since it's safe by default: the
release workflows never get triggered for non-merged PRs, so there's no
possibility of a buggy conditional accidentally letting a workflow
proceed when it shouldn't have.
The only loss is that publishing no longer requires a `release` label on
the merged PR that bumps the version. We can add a separate CI step that
enforces that part as a condition for merging into `master`, if
desirable.
I have discovered a bug located within `.github/workflows/_release.yml`
which is the primary cause of continuous integration (CI) errors. The
problem can be solved; therefore, I have constructed a PR to address the
issue.
## The Issue
Access the following link to view the exact errors: [LangChain Release
Workflow](https://github.com/langchain-ai/langchain/actions/workflows/langchain_release.yml)
These errors occur for **each PR** that updates `pyproject.toml`, excluding
the version-bump PRs themselves.
See below for the specific error message:
```
Error: Error 422: Validation Failed: {"resource":"Release","code":"already_exists","field":"tag_name"}
```
An image of the error can be viewed here:
![Image](https://github.com/langchain-ai/langchain/assets/13769670/13125f73-9b53-49b7-a83e-653bb01a1da1)
The `_release.yml` document contains the following if-condition:
```yaml
if: |
${{ github.event.pull_request.merged == true }}
&& ${{ contains(github.event.pull_request.labels.*.name, 'release') }}
```
## The Root Cause
The above job always runs because the `if` condition is always evaluated as
`true`.
## The Logic
The `if-condition` can be defined as `if: ${{ b1 }} && ${{ b2 }}`, where
`b1` and `b2` are boolean values. However, in terms of condition
evaluation with GitHub Actions, `${{ false }}` is identified as a string
value, thereby rendering it as truthy as per the [official
documentation](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idif).
I have run some tests regarding this behavior within my forked
repository. You can consult my [debug
PR](https://github.com/zawakin/langchain/pull/1) for reference.
Here is the result of the tests:
|If-Condition|Outcome|
|:--:|:--:|
|`if: true && ${{ false }}`|Execution|
|`if: ${{ false }}` |Skipped|
|`if: true && false` |Skipped|
|`if: false`|Skipped|
|`if: ${{ true && false }}` |Skipped|
In view of the first and second results, we can infer that `${{ false }}`
is interpreted as truthy only when it appears inside a condition composed of
multiple expressions.
This is consistent with the fact that a condition like
`if: ${{ inputs.working-directory == 'libs/langchain' }}` works, since there
the whole condition is a single expression.
It is surprising that the second case is skipped, but that seems to be the
GitHub Actions spec 😓
Anyway, I believe this PR will fix these errors 👍
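For reference, a sketch of the corrected condition, written as one expression so GitHub evaluates it as a boolean:
```yaml
# The whole condition is a single expression with no interleaved ${{ }}
# blocks, so it evaluates as a boolean instead of a non-empty (truthy) string.
if: |
  github.event.pull_request.merged == true
  && contains(github.event.pull_request.labels.*.name, 'release')
```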
Could you review this? @hwchase17 or @shoelsch, who is the author of
[PR #360](https://github.com/langchain-ai/langchain/pull/360).
Ternary operators in GitHub Actions syntax are pretty ugly and hard to
read: `inputs.working-directory == '' && '.' ||
inputs.working-directory` means "if the condition is true, use `'.'` and
otherwise use the expression after the `||`".
This PR performs the ternary as few times as possible, assigning its
outcome to an env var we can then reuse as needed.
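A minimal sketch of the pattern (the variable and job names are illustrative):
```yaml
# Evaluate the ternary once at the workflow level, then reuse the result.
env:
  WORKDIR: ${{ inputs.working-directory == '' && '.' || inputs.working-directory }}

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make lint
        working-directory: ${{ env.WORKDIR }}
```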
Only lint on the min and max supported Python versions.
It's extremely unlikely that there's a lint issue on any version in
between that doesn't show up on the min or max versions.
GitHub rate-limits how many jobs can be running at any one time.
Starting new jobs is also relatively slow, so linting on fewer versions
makes CI faster.
Using `poetry add` to install `pydantic@2.1` was also causing poetry to
change its lockfile. This prevented dependency caching from working:
- When attempting to restore a cache, it would hash the lockfile in git
and use it as part of the cache key. Say this is a cache miss.
- Then, it would attempt to save the cache -- but the lockfile will have
changed, so the cache key would be *different* than the key in the
lookup. So the cache save would succeed, but to a key that cannot be
looked up in the next run -- meaning we never get a cache hit.
In addition to busting the cache, the lockfile update itself is also
non-trivially long, over 30s:
![image](https://github.com/langchain-ai/langchain/assets/2348618/d84d3b56-484d-45eb-818d-54126a094a40)
This PR fixes the problems by using `pip` to perform the installation,
avoiding the lockfile change.
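A sketch of the replacement step (the pin matches the `pydantic@2.1` mentioned above):
```yaml
# Install the override with pip inside Poetry's virtualenv; unlike
# `poetry add`, this leaves poetry.lock -- and thus the cache key -- intact.
- name: Overwrite pydantic version
  run: poetry run pip install "pydantic==2.1"
```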
* PR updates test.yml to test with both pydantic versions (see the matrix
sketch below)
* Code should be refactored to make it easier to do testing in matrix
format w/ packages
* Added steps to assert that the pydantic version in the environment is as
expected
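A hypothetical sketch of the matrix shape (the exact version pins are illustrative):
```yaml
# Illustrative matrix: one test run per pydantic version, plus an assertion
# that the environment really resolved the version we asked for.
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        pydantic-version: ["1.10.12", "2.1.1"]
    steps:
      - run: poetry run pip install "pydantic==${{ matrix.pydantic-version }}"
      - name: Assert pydantic version
        run: |
          ACTUAL="$(poetry run python -c 'import pydantic; print(pydantic.VERSION)')"
          test "$ACTUAL" = "${{ matrix.pydantic-version }}"
      - run: make test
```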