Mirror of https://github.com/hwchase17/langchain, synced 2024-11-06 03:20:49 +00:00 (commit 481d3855dc).
# langchain-anthropic

This package contains the LangChain integration for Anthropic's generative models.

## Installation

```bash
pip install -U langchain-anthropic
```
## Chat Models

Anthropic recommends using their chat models over text completions. You can see their recommended models here.

To use, you should have an Anthropic API key configured. Initialize the model as:

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import AIMessage, HumanMessage

model = ChatAnthropic(model="claude-3-opus-20240229", temperature=0, max_tokens=1024)

# Define the input message
message = HumanMessage(content="What is the capital of France?")

# Generate a response using the model
response = model.invoke([message])
```

For a more detailed walkthrough see here.
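The integration looks for the key in the `ANTHROPIC_API_KEY` environment variable when one is not passed to the constructor explicitly. A minimal sketch of that configuration, using a placeholder value rather than a real key:

```python
import os

# ChatAnthropic falls back to the ANTHROPIC_API_KEY environment variable
# when no key is passed explicitly; "sk-ant-..." below is a placeholder,
# not a real credential. setdefault avoids clobbering a key that is
# already configured in the environment.
os.environ.setdefault("ANTHROPIC_API_KEY", "sk-ant-...")
```

Setting the variable in your shell profile (rather than in code) keeps the key out of source control.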
## LLMs (Legacy)

You can use the Claude 2 models for text completions.

```python
from langchain_anthropic import AnthropicLLM

model = AnthropicLLM(model="claude-2.1", temperature=0, max_tokens=1024)
response = model.invoke("The best restaurant in San Francisco is: ")
```