
# langchain-anthropic

This package contains the LangChain integration for Anthropic's generative models.

## Installation

```bash
pip install -U langchain-anthropic
```

## Chat Models

Anthropic recommends using their chat models over text completions.

You can see their recommended models in Anthropic's model documentation.

To use, you should have an Anthropic API key configured. By default, the integration reads it from the `ANTHROPIC_API_KEY` environment variable.
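For example, a minimal way to set it in code (illustrative only; in practice you would typically export the variable in your shell or use a secrets manager):

```python
import os

# ANTHROPIC_API_KEY is the environment variable the integration reads by
# default. The key value below is a placeholder.
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."
```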

Initialize the model and send it a message:

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import AIMessage, HumanMessage

model = ChatAnthropic(model="claude-3-opus-20240229", temperature=0, max_tokens=1024)

# Define the input message
message = HumanMessage(content="What is the capital of France?")

# Generate a response using the model; the result is an AIMessage
response = model.invoke([message])
```
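Because `ChatAnthropic` implements LangChain's standard Runnable interface, you can also stream the response token by token (a minimal sketch using the generic `stream` method, which is not specific to this package):

```python
# Stream the reply as it is generated instead of waiting for the full message.
for chunk in model.stream([message]):
    print(chunk.content, end="", flush=True)
```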

For a more detailed walkthrough, see the LangChain documentation.

## LLMs (Legacy)

You can use the Claude 2 models for text completions.

```python
from langchain_anthropic import AnthropicLLM

model = AnthropicLLM(model="claude-2.1", temperature=0, max_tokens=1024)

# Completion-style call: the model continues the prompt string
response = model.invoke("The best restaurant in San Francisco is: ")
```
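Like the chat models, `AnthropicLLM` is a Runnable, so it composes with prompt templates via LCEL piping (a sketch; the prompt text and input are illustrative):

```python
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("List three facts about {topic}.")

# Piping a prompt into the model builds a simple chain; invoking it formats
# the prompt and passes the resulting string to the LLM.
chain = prompt | model
print(chain.invoke({"topic": "San Francisco"}))
```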