
# langchain-anthropic

This package contains the LangChain integration for Anthropic's generative models.

## Installation

```bash
pip install -U langchain-anthropic
```

## Chat Models

Anthropic recommends using their chat models over text completions.

You can see their recommended models here.

To use, you should have an Anthropic API key configured.
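By default the integration reads the key from the `ANTHROPIC_API_KEY` environment variable. As a minimal sketch (the key value below is a placeholder):

```python
import os

# Assumes you have an Anthropic API key; the value here is a placeholder.
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."
```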

Initialize the model as:

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import AIMessage, HumanMessage

model = ChatAnthropic(model="claude-3-opus-20240229", temperature=0, max_tokens=1024)

# Define the input message
message = HumanMessage(content="What is the capital of France?")

# Generate a response using the model
response = model.invoke([message])
```
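The call returns an `AIMessage` (hence the import above), whose `content` attribute holds the reply text. As a sketch, the same model can also stream the reply incrementally via LangChain's standard `.stream()` method:

```python
print(response.content)

# Stream the reply chunk-by-chunk instead of waiting for the full message
for chunk in model.stream([message]):
    print(chunk.content, end="", flush=True)
```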

For a more detailed walkthrough, see here.

## LLMs (Legacy)

You can use the Claude 2 models for text completions.

```python
from langchain_anthropic import AnthropicLLM

model = AnthropicLLM(model="claude-2.1", temperature=0, max_tokens=1024)
response = model.invoke("The best restaurant in San Francisco is: ")
```
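Unlike the chat interface, the legacy text-completion interface returns a plain string rather than a message object; for example:

```python
# response is a plain string completion of the prompt
print(response)
```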