langchain-google-vertexai

This package contains the LangChain integrations for Google Cloud generative models.

Installation

pip install -U langchain-google-vertexai

Chat Models

The ChatVertexAI class exposes chat models such as gemini-pro and chat-bison.

To use, you should have a Google Cloud project with the Vertex AI API enabled and credentials configured. Initialize the model as:

from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="gemini-pro")
llm.invoke("Sing a ballad of LangChain.")

You can use other models, e.g. chat-bison:

from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="chat-bison", temperature=0.3)
llm.invoke("Sing a ballad of LangChain.")
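
ChatVertexAI also accepts a list of chat messages, which is useful for multi-turn conversations. A minimal sketch using the standard LangChain message types (the conversation content is illustrative):

from langchain_core.messages import AIMessage, HumanMessage
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="gemini-pro")

# Alternate human and model turns; the model answers the last human message
# using the earlier turns as context.
messages = [
    HumanMessage(content="My favorite color is blue."),
    AIMessage(content="Noted! Blue is a great choice."),
    HumanMessage(content="What is my favorite color?"),
]
llm.invoke(messages)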

Multimodal inputs

The Gemini vision model supports image inputs when they are provided in a single chat message. Example:

from langchain_core.messages import HumanMessage
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="gemini-pro-vision")
# example
message = HumanMessage(
    content=[
        {
            "type": "text",
            "text": "What's in this image?",
        },  # You can optionally provide text parts
        {"type": "image_url", "image_url": {"url": "https://picsum.photos/seed/picsum/200/300"}},
    ]
)
llm.invoke([message])

The value of image_url can be any of the following:

  • A public image URL
  • An accessible Google Cloud Storage file (e.g., "gs://path/to/file.png")
  • A local file path
  • A base64 encoded image (e.g., data:image/png;base64,abcd124)
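
For a local file, one common pattern is to base64-encode the image yourself and pass it as a data URL. A minimal sketch (the file path my_image.png is a placeholder):

import base64

from langchain_core.messages import HumanMessage
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="gemini-pro-vision")

# Read a local image and encode it as a base64 data URL (placeholder path).
with open("my_image.png", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

message = HumanMessage(
    content=[
        {"type": "text", "text": "What's in this image?"},
        {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{encoded}"}},
    ]
)
llm.invoke([message])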

Embeddings

You can use Google Cloud's embedding models as follows:

from langchain_google_vertexai import VertexAIEmbeddings

embeddings = VertexAIEmbeddings()
embeddings.embed_query("hello, world!")
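
VertexAIEmbeddings follows the standard LangChain embeddings interface, so you can also embed several documents in one call; a minimal sketch:

from langchain_google_vertexai import VertexAIEmbeddings

embeddings = VertexAIEmbeddings()

# Embed a batch of texts; one vector is returned per input string.
vectors = embeddings.embed_documents(["hello, world!", "goodbye, world!"])
print(len(vectors), len(vectors[0]))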

LLMs

You can use Google Cloud's generative AI models as LangChain LLMs:

from langchain_core.prompts import PromptTemplate
from langchain_google_vertexai import VertexAI

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate.from_template(template)

# Define the model to use in the chain.
llm = VertexAI(model_name="gemini-pro")

chain = prompt | llm

question = "Who was the president in the year Justin Bieber was born?"
print(chain.invoke({"question": question}))

You can use Gemini and PaLM models, including code-generation ones:

from langchain_google_vertexai import VertexAI

llm = VertexAI(model_name="code-bison", max_output_tokens=1000, temperature=0.3)

question = "Write a python function that checks if a string is a valid email address"

output = llm.invoke(question)
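
Both VertexAI and ChatVertexAI implement the standard LangChain Runnable interface, so you can also stream output as it is generated; a minimal sketch:

from langchain_google_vertexai import VertexAI

llm = VertexAI(model_name="gemini-pro")

# Stream the completion chunk by chunk instead of waiting for the full response.
for chunk in llm.stream("Tell me a short story about a robot learning to paint."):
    print(chunk, end="", flush=True)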