langchain-google-vertexai

This package contains the LangChain integrations for Google Cloud generative models.

Installation

pip install -U langchain-google-vertexai
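For local development, application default credentials are commonly configured via the gcloud CLI (one common approach; other authentication methods work as well):

gcloud auth application-default login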

Chat Models

The ChatVertexAI class exposes chat models such as gemini-pro and chat-bison.

To use, you should have a Google Cloud project with the required APIs enabled and credentials configured. Initialize the model as follows:

from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="gemini-pro")
llm.invoke("Sing a ballad of LangChain.")

You can use other models, e.g. chat-bison:

from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="chat-bison", temperature=0.3)
llm.invoke("Sing a ballad of LangChain.")

Multimodal inputs

The Gemini vision model supports image inputs when they are provided in a single chat message. Example:

from langchain_core.messages import HumanMessage
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="gemini-pro-vision")
# Build a multimodal message containing both text and image parts
message = HumanMessage(
    content=[
        {
            "type": "text",
            "text": "What's in this image?",
        },  # You can optionally provide text parts
        {"type": "image_url", "image_url": {"url": "https://picsum.photos/seed/picsum/200/300"}},
    ]
)
llm.invoke([message])

The value of image_url can be any of the following:

  • A public image URL
  • An accessible Google Cloud Storage file (e.g., "gs://path/to/file.png")
  • A local file path
  • A base64 encoded image (e.g., data:image/png;base64,abcd124)
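For example, a local image can be sent as a base64-encoded data URL (a minimal sketch; the file path is a placeholder, and llm is the gemini-pro-vision model from above):

import base64

from langchain_core.messages import HumanMessage

# Placeholder path; replace with a real image on disk.
with open("/path/to/image.png", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

message = HumanMessage(
    content=[
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{encoded}"}},
    ]
)
llm.invoke([message])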

Embeddings

You can use Google Cloud's embedding models as follows:

from langchain_google_vertexai import VertexAIEmbeddings

embeddings = VertexAIEmbeddings()
embeddings.embed_query("hello, world!")
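embed_query returns a single embedding vector. To embed several texts in one call, you can use embed_documents from the standard LangChain embeddings interface:

docs = [
    "LangChain integrates many model providers.",
    "Vertex AI hosts Google's generative models.",
]
vectors = embeddings.embed_documents(docs)
print(len(vectors), len(vectors[0]))  # number of texts, embedding dimensionality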

LLMs

You can use Google Cloud's generative AI models as LangChain LLMs:

from langchain.prompts import PromptTemplate
from langchain_google_vertexai import VertexAI

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate.from_template(template)

# Any Vertex AI text model works here; gemini-pro is used as an example.
llm = VertexAI(model_name="gemini-pro")
chain = prompt | llm

question = "Who was the president in the year Justin Beiber was born?"
print(chain.invoke({"question": question}))
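Like other LangChain runnables, VertexAI also supports streaming, which yields text chunks as they are generated; a minimal sketch:

for chunk in llm.stream("Sing a ballad of LangChain."):
    print(chunk, end="", flush=True)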

You can use both Gemini and PaLM models, including code-generation ones:

from langchain_google_vertexai import VertexAI

llm = VertexAI(model_name="code-bison", max_output_tokens=1000, temperature=0.3)

question = "Write a python function that checks if a string is a valid email address"

output = llm.invoke(question)
print(output)