langchain-google-vertexai

This package contains the LangChain integrations for Google Cloud generative models.

Installation

pip install -U langchain-google-vertexai

Chat Models

The ChatVertexAI class exposes models such as gemini-pro and chat-bison.

To use, you should have a Google Cloud project with the required APIs enabled and credentials configured. Initialize the model as:

from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="gemini-pro")
llm.invoke("Sing a ballad of LangChain.")

You can use other models, e.g. chat-bison:

from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="chat-bison", temperature=0.3)
llm.invoke("Sing a ballad of LangChain.")
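
Chat models also accept a list of messages, so prior turns of a conversation can be passed as context. A minimal sketch (the example conversation is illustrative):

from langchain_core.messages import AIMessage, HumanMessage
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="gemini-pro")

# Prior turns give the model conversational context; the final message is the new request.
messages = [
    HumanMessage(content="My favorite framework is LangChain."),
    AIMessage(content="Noted! LangChain is your favorite framework."),
    HumanMessage(content="Sing a short ballad about it."),
]
llm.invoke(messages)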

Multimodal inputs

The Gemini vision model supports image inputs when they are provided in a single chat message. Example:

from langchain_core.messages import HumanMessage
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="gemini-pro-vision")
# example
message = HumanMessage(
    content=[
        {
            "type": "text",
            "text": "What's in this image?",
        },  # You can optionally provide text parts
        {"type": "image_url", "image_url": {"url": "https://picsum.photos/seed/picsum/200/300"}},
    ]
)
llm.invoke([message])

The value of image_url can be any of the following (local-file and base64 usage is sketched after this list):

  • A public image URL
  • An accessible Google Cloud Storage file (e.g., "gs://path/to/file.png")
  • A local file path
  • A base64 encoded image (e.g., data:image/png;base64,abcd124)
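
For instance, a local image can be read, base64-encoded, and passed as a data URL in the same image_url slot. A minimal sketch (the file path below is a placeholder):

import base64

from langchain_core.messages import HumanMessage
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="gemini-pro-vision")

# Read a local image and encode it as a data URL (the path is a placeholder).
with open("/path/to/image.png", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

message = HumanMessage(
    content=[
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{encoded}"}},
    ]
)
llm.invoke([message])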

Embeddings

You can use Google Cloud's embedding models as follows:

from langchain_google_vertexai import VertexAIEmbeddings

embeddings = VertexAIEmbeddings()
embeddings.embed_query("hello, world!")
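
To embed several texts at once, embed_documents returns one vector per input string:

from langchain_google_vertexai import VertexAIEmbeddings

embeddings = VertexAIEmbeddings()
vectors = embeddings.embed_documents(["hello, world!", "goodbye, world!"])
print(len(vectors), len(vectors[0]))  # number of inputs, embedding dimension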

LLMs

You can use Google Cloud's generative AI models as LangChain LLMs:

from langchain_core.prompts import PromptTemplate
from langchain_google_vertexai import VertexAI

# Define the LLM used in the chain below (the model name here is just an example).
llm = VertexAI(model_name="gemini-pro")

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate.from_template(template)

chain = prompt | llm

question = "Who was the president in the year Justin Bieber was born?"
print(chain.invoke({"question": question}))

You can use Gemini and PaLM models, including code-generation ones:

from langchain_google_vertexai import VertexAI

llm = VertexAI(model_name="code-bison", max_output_tokens=1000, temperature=0.3)

question = "Write a python function that checks if a string is a valid email address"

output = llm.invoke(question)
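
VertexAI and ChatVertexAI are standard LangChain runnables, so output can also be streamed chunk by chunk with the stream method. A minimal sketch:

from langchain_google_vertexai import VertexAI

llm = VertexAI(model_name="gemini-pro")

# Chunks are yielded incrementally as the model generates them.
for chunk in llm.stream("Write a haiku about LangChain."):
    print(chunk, end="", flush=True)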