langchain-google-genai

This package contains the LangChain integrations for Google's Gemini models, accessed through the google-generativeai SDK.

Installation

pip install -U langchain-google-genai

Image utilities

To use image utility methods, such as loading images from GCS URLs, install with the 'images' extras group:

pip install -U "langchain-google-genai[images]"

Chat Models

This package contains the ChatGoogleGenerativeAI class, which is the recommended way to interface with the Google Gemini series of models.

To use, install the package and set your API key in your environment:

export GOOGLE_API_KEY=your-api-key
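If you prefer to set the key from Python rather than the shell, a minimal sketch ("your-api-key" is a placeholder; in practice load the real key from a secrets store):

```python
import os

# Supply the API key from Python if it isn't already set in the shell.
# setdefault leaves any value exported by the shell untouched.
os.environ.setdefault("GOOGLE_API_KEY", "your-api-key")
```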

Then initialize the chat model:

from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro")
llm.invoke("Sing a ballad of LangChain.")

Multimodal inputs

The Gemini vision model (gemini-pro-vision) supports image inputs within a single chat message. Example:

from langchain_core.messages import HumanMessage
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro-vision")
# example
message = HumanMessage(
    content=[
        {
            "type": "text",
            "text": "What's in this image?",
        },  # You can optionally provide text parts
        {"type": "image_url", "image_url": "https://picsum.photos/seed/picsum/200/300"},
    ]
)
llm.invoke([message])

The value of image_url can be any of the following:

  • A public image URL
  • An accessible Google Cloud Storage file (e.g., "gcs://path/to/file.png")
  • A local file path
  • A base64-encoded image (e.g., "data:image/png;base64,abcd124")
  • A PIL image
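As one example of the base64 form, a local image can be turned into a data URL with only the standard library (the helper name below is ours, not part of this package):

```python
import base64

def to_data_url(image_bytes: bytes, mime: str = "image/png") -> str:
    # Encode raw image bytes as a data URL suitable for the image_url field.
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# Example with in-memory bytes; in practice read them from a file, e.g.
#   with open("photo.png", "rb") as f: image_bytes = f.read()
url = to_data_url(b"\x89PNG\r\n\x1a\n")
print(url[:22])  # data:image/png;base64,
```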

Embeddings

This package also adds support for Google's embedding models.

from langchain_google_genai import GoogleGenerativeAIEmbeddings

embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
embeddings.embed_query("hello, world!")
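Query and document embeddings are typically compared with cosine similarity. A stdlib-only sketch (the short vectors below are stand-ins for real embed_query outputs, which are much longer):

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Stand-in vectors; in practice these come from embeddings.embed_query(...).
v1 = [0.1, 0.2, 0.3]
v2 = [0.2, 0.1, 0.3]
print(round(cosine_similarity(v1, v2), 3))  # 0.929
```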