# langchain-ai21

This package contains the LangChain integrations for AI21 models and tools.

## Installation and Setup

- Install the AI21 partner package:

```bash
pip install langchain-ai21
```

- Get an AI21 API key and set it as an environment variable (`AI21_API_KEY`)

## Chat Models

This package contains the ChatAI21 class, which is the recommended way to interface with AI21 chat models, including Jamba-Instruct and any Jurassic chat models.

To use, install the requirements and configure your environment:

```bash
export AI21_API_KEY=your-api-key
```

Then initialize:

```python
from langchain_core.messages import HumanMessage
from langchain_ai21.chat_models import ChatAI21

chat = ChatAI21(model="jamba-instruct")
messages = [HumanMessage(content="Hello from AI21")]
chat.invoke(messages)
```

For a list of the supported models, see this page.

### Streaming in Chat

Streaming is supported by the latest models. To use streaming, set the `streaming` parameter to `True` when initializing the model.

```python
from langchain_core.messages import HumanMessage
from langchain_ai21.chat_models import ChatAI21

chat = ChatAI21(model="jamba-instruct", streaming=True)
messages = [HumanMessage(content="Hello from AI21")]

response = chat.invoke(messages)
```

Or use the `stream` method directly:

```python
from langchain_core.messages import HumanMessage
from langchain_ai21.chat_models import ChatAI21

chat = ChatAI21(model="jamba-instruct")
messages = [HumanMessage(content="Hello from AI21")]

for chunk in chat.stream(messages):
    print(chunk)
```
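Iterating the stream yields the response incrementally; your loop body runs once per chunk as it arrives, and joining the chunks reconstructs the full response. A toy stand-in generator (not the AI21 API) illustrates the consumption pattern:

```python
def fake_stream(text, size=5):
    # Toy generator standing in for chat.stream(): yields the
    # response in small chunks instead of all at once.
    for i in range(0, len(text), size):
        yield text[i : i + size]

pieces = []
for chunk in fake_stream("Hello from AI21"):
    pieces.append(chunk)  # e.g. print each chunk as it arrives

print("".join(pieces))  # Hello from AI21
```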

## LLMs

You can use AI21's Jurassic generative AI models as LangChain LLMs. To use the newer Jamba model, use the ChatAI21 chat model instead, which supports single-turn instruction following and question answering.

```python
from langchain_core.prompts import PromptTemplate
from langchain_ai21 import AI21LLM

llm = AI21LLM(model="j2-ultra")

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate.from_template(template)

chain = prompt | llm

question = "Which scientist discovered relativity?"
print(chain.invoke({"question": question}))
```
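The `prompt | llm` pipe works because LangChain runnables overload the `|` operator to compose steps: the prompt formats the input dict into a string, and that string is fed to the model. A minimal, self-contained sketch of the idea, using hypothetical stand-in classes rather than LangChain's actual implementation:

```python
# Illustrative stand-ins for runnable-style composition via |.
# Not LangChain's API; just the composition pattern.

class Runnable:
    def invoke(self, value):
        raise NotImplementedError

    def __or__(self, other):
        # Chaining: the output of self becomes the input of other.
        return Sequence(self, other)

class Sequence(Runnable):
    def __init__(self, first, second):
        self.first, self.second = first, second

    def invoke(self, value):
        return self.second.invoke(self.first.invoke(value))

class Template(Runnable):
    def __init__(self, template):
        self.template = template

    def invoke(self, variables):
        # Format the input dict into a prompt string.
        return self.template.format(**variables)

class Upper(Runnable):
    # Stand-in for a model call; just uppercases the prompt.
    def invoke(self, text):
        return text.upper()

chain = Template("Question: {question}") | Upper()
print(chain.invoke({"question": "why?"}))  # QUESTION: WHY?
```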

## Embeddings

You can use AI21's embeddings model as shown here:

### Query

```python
from langchain_ai21 import AI21Embeddings

embeddings = AI21Embeddings()
embeddings.embed_query("Hello! This is some query")
```

### Document

```python
from langchain_ai21 import AI21Embeddings

embeddings = AI21Embeddings()
embeddings.embed_documents(["Hello! This is document 1", "And this is document 2!"])
```
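A common next step is ranking documents against a query by cosine similarity of their embedding vectors. A sketch with made-up placeholder vectors (in practice these would come from `embed_query` and `embed_documents` above):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Placeholder vectors; real ones come from the embeddings model.
query_vec = [0.1, 0.3, 0.5]
doc_vecs = [[0.1, 0.3, 0.5], [0.9, -0.2, 0.0]]

# Rank document indices by similarity to the query, best first.
ranked = sorted(
    range(len(doc_vecs)),
    key=lambda i: cosine_similarity(query_vec, doc_vecs[i]),
    reverse=True,
)
print(ranked)  # [0, 1]
```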

## Task-Specific Models

### Contextual Answers

You can use AI21's contextual answers model to parse a given text and answer a question based entirely on the provided information.

This means that if the answer to your question is not in the document, the model will say so instead of providing a false answer.

```python
from langchain_ai21 import AI21ContextualAnswers

tsm = AI21ContextualAnswers()

response = tsm.invoke(
    input={"context": "Lots of information here", "question": "Your question about the context"}
)
```

You can also use it in chains, together with output parsers and vector DBs:

```python
from langchain_ai21 import AI21ContextualAnswers
from langchain_core.output_parsers import StrOutputParser

tsm = AI21ContextualAnswers()
chain = tsm | StrOutputParser()

response = chain.invoke(
    {"context": "Your context", "question": "Your question"},
)
```

## Text Splitters

### Semantic Text Splitter

You can use AI21's semantic text segmentation model to split a text into segments by topic. Text is split at each point where the topic changes.

For a list of examples, see this page.

```python
from langchain_ai21 import AI21SemanticTextSplitter

splitter = AI21SemanticTextSplitter()
response = splitter.split_text("Your text")
```