Latest synced commit 181a61982f:

Preserves string content chunks for non-tool-call requests for convenience. One thing: Anthropic events look like this:

```
RawContentBlockStartEvent(content_block=TextBlock(text='', type='text'), index=0, type='content_block_start')
RawContentBlockDeltaEvent(delta=TextDelta(text='<thinking>\nThe', type='text_delta'), index=0, type='content_block_delta')
RawContentBlockDeltaEvent(delta=TextDelta(text=' provide', type='text_delta'), index=0, type='content_block_delta')
...
RawContentBlockStartEvent(content_block=ToolUseBlock(id='toolu_01GJ6x2ddcMG3psDNNe4eDqb', input={}, name='get_weather', type='tool_use'), index=1, type='content_block_start')
RawContentBlockDeltaEvent(delta=InputJsonDelta(partial_json='', type='input_json_delta'), index=1, type='content_block_delta')
```

Note that `delta` has a `type` field. With this implementation, I'm dropping it because the `merge_list` behavior will concatenate strings. We currently treat `index` as a special field when merging lists; would it be worth adding `type` too? If so, what do we set as a content block chunk's type: `text` vs. `text_delta`, and `tool_use` vs. `input_json_delta`? CC @ccurme @efriis @baskaryan
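For context, a minimal sketch of how these streamed deltas surface to a user through the standard LangChain streaming interface (`stream` plus `AIMessageChunk` addition); the exact shape of the merged `content` depends on the merge behavior discussed above:

```python
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-3-opus-20240229")

# AIMessageChunk supports `+`, which merges `content` with merge_list
# semantics: string chunks are concatenated, and list blocks are
# aligned by their `index` field.
full = None
for chunk in model.stream("What is the weather in SF?"):
    full = chunk if full is None else full + chunk

print(full.content)
```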
# langchain-anthropic
This package contains the LangChain integration for Anthropic's generative models.
## Installation

```bash
pip install -U langchain-anthropic
```
## Chat Models
Anthropic recommends using their chat models over text completions.
You can see their recommended models here.
To use, you should have an Anthropic API key configured (for example, via the `ANTHROPIC_API_KEY` environment variable). Initialize the model as follows:
```python
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import AIMessage, HumanMessage

model = ChatAnthropic(model="claude-3-opus-20240229", temperature=0, max_tokens=1024)

# Define the input message
message = HumanMessage(content="What is the capital of France?")

# Generate a response using the model
response = model.invoke([message])
```
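The response is an `AIMessage` (hence the import above); the generated text lives in its `content` attribute:

```python
# The chat model returns an AIMessage, not a plain string.
assert isinstance(response, AIMessage)
print(response.content)  # e.g. "The capital of France is Paris."
```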
For a more detailed walkthrough, see here.
## LLMs (Legacy)
You can use the Claude 2 models for text completions.
```python
from langchain_anthropic import AnthropicLLM

model = AnthropicLLM(model="claude-2.1", temperature=0, max_tokens=1024)
response = model.invoke("The best restaurant in San Francisco is: ")
```
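Unlike the chat model, `AnthropicLLM` returns a plain string. A minimal streaming sketch, assuming the standard Runnable interface shared by all LangChain LLMs:

```python
# LLM streaming yields string chunks rather than message chunks.
for chunk in model.stream("The best restaurant in San Francisco is: "):
    print(chunk, end="", flush=True)
```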