langchain-google-genai

This package contains the LangChain integrations for Google's Gemini models through the google-generativeai SDK.

Installation

pip install -U langchain-google-genai

Chat Models

This package contains the ChatGoogleGenerativeAI class, which is the recommended way to interface with the Google Gemini series of models.

To use, install the package and configure your environment with your Google API key.

export GOOGLE_API_KEY=your-api-key
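
If you prefer to set the key from Python instead of the shell, one minimal sketch uses a getpass prompt so the key never appears in source code (this is just one convenient pattern, not the only option):

import getpass
import os

# Prompt for the key at runtime and expose it via the environment variable
# that ChatGoogleGenerativeAI reads by default.
if "GOOGLE_API_KEY" not in os.environ:
    os.environ["GOOGLE_API_KEY"] = getpass.getpass("Enter your Google API key: ")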

Then initialize the chat model:

from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro")
llm.invoke("Sing a ballad of LangChain.")

Multimodal inputs

The Gemini vision model supports image inputs when they are provided in a single chat message. For example:

from langchain_core.messages import HumanMessage
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro-vision")
# Build a message containing both a text part and an image part
message = HumanMessage(
    content=[
        {
            "type": "text",
            "text": "What's in this image?",
        },  # You can optionally provide text parts
        {"type": "image_url", "image_url": "https://picsum.photos/seed/picsum/200/300"},
    ]
)
llm.invoke([message])

The value of image_url can be any of the following (see the sketch after this list):

  • A public image URL
  • An accessible gcs file (e.g., "gcs://path/to/file.png")
  • A local file path
  • A base64 encoded image (e.g., data:image/png;base64,abcd124)
  • A PIL image
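
As a sketch of two of these options, the image part can point at a local file or carry the image inline as a base64 data URL. The file path below is hypothetical; replace it with an image on your machine:

import base64

from langchain_core.messages import HumanMessage
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro-vision")

# Option 1: reference a local file by its path (hypothetical path).
local_message = HumanMessage(
    content=[
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": "/path/to/local/image.png"},
    ]
)

# Option 2: inline the same image as a base64 data URL.
with open("/path/to/local/image.png", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

b64_message = HumanMessage(
    content=[
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": f"data:image/png;base64,{encoded}"},
    ]
)

llm.invoke([local_message])
llm.invoke([b64_message])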