# langchain-ai21
This package contains the LangChain integrations for [AI21](https://docs.ai21.com/) models and tools.
## Installation and Setup
- Install the AI21 partner package
```bash
pip install langchain-ai21
```
- Get an AI21 API key and set it as an environment variable (`AI21_API_KEY`)
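If you prefer to set the key from Python (for example, in a notebook), a minimal sketch; the key value shown here is a placeholder:

```python
import os

# Set the AI21 API key for the current process only.
# For anything beyond local experimentation, prefer exporting
# AI21_API_KEY in your shell or using a secrets manager.
os.environ["AI21_API_KEY"] = "your-api-key"
```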
## Chat Models
This package contains the `ChatAI21` class, which is the recommended way to interface with AI21 chat models, including Jamba-Instruct
and any Jurassic chat models.
To use, install the requirements and configure your environment.
```bash
export AI21_API_KEY=your-api-key
```
Then initialize the chat model:
```python
from langchain_core.messages import HumanMessage
from langchain_ai21.chat_models import ChatAI21
chat = ChatAI21(model="jamba-instruct")
messages = [HumanMessage(content="Hello from AI21")]
chat.invoke(messages)
```
For a list of the supported models, see [this page](https://docs.ai21.com/reference/python-sdk#chat).
### Streaming in Chat
Streaming is supported by the latest models. To use streaming, set the `streaming` parameter to `True` when initializing the model.
```python
from langchain_core.messages import HumanMessage
from langchain_ai21.chat_models import ChatAI21
chat = ChatAI21(model="jamba-instruct", streaming=True)
messages = [HumanMessage(content="Hello from AI21")]
response = chat.invoke(messages)
```
Or use the `stream` method directly:
```python
from langchain_core.messages import HumanMessage
from langchain_ai21.chat_models import ChatAI21
chat = ChatAI21(model="jamba-instruct")
messages = [HumanMessage(content="Hello from AI21")]
for chunk in chat.stream(messages):
    print(chunk)
```
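If you need the full response text after streaming, you can accumulate the chunks as they arrive. A minimal sketch, assuming each chunk exposes a string `content` attribute as LangChain message chunks do; the `Chunk` class and `fake_chunks` list below are stand-ins for the chunks yielded by `chat.stream(messages)`:

```python
class Chunk:
    """Stand-in for a LangChain message chunk with a .content attribute."""
    def __init__(self, content: str):
        self.content = content

# Placeholder chunks standing in for the output of chat.stream(messages).
fake_chunks = [Chunk("Hello"), Chunk(" from"), Chunk(" AI21")]

# Join the chunk contents into the complete response text.
full_text = "".join(chunk.content for chunk in fake_chunks)
print(full_text)
```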
## LLMs
You can use AI21's Jurassic generative AI models as LangChain LLMs.
To use the newer Jamba model, use the [ChatAI21 chat model](#chat-models), which
supports single-turn instruction/question answering capabilities.
```python
from langchain_core.prompts import PromptTemplate
from langchain_ai21 import AI21LLM
llm = AI21LLM(model="j2-ultra")
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate.from_template(template)
chain = prompt | llm
question = "Which scientist discovered relativity?"
print(chain.invoke({"question": question}))
```
## Embeddings
You can use AI21's [embeddings model](https://docs.ai21.com/reference/embeddings-ref) as shown here:
### Query
```python
from langchain_ai21 import AI21Embeddings
embeddings = AI21Embeddings()
embeddings.embed_query("Hello! This is some query")
```
### Document
```python
from langchain_ai21 import AI21Embeddings
embeddings = AI21Embeddings()
embeddings.embed_documents(["Hello! This is document 1", "And this is document 2!"])
```
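Once you have embedding vectors, a common next step is comparing them with cosine similarity, e.g. to rank documents against a query. A minimal sketch using placeholder vectors in place of real AI21 embeddings:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Placeholder vectors standing in for embed_query / embed_documents output.
query_vec = [0.1, 0.3, 0.5]
doc_vec = [0.2, 0.3, 0.4]
print(cosine_similarity(query_vec, doc_vec))
```

In practice you would pass the output of `embeddings.embed_query(...)` and each vector from `embeddings.embed_documents(...)` to this function.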
## Task-Specific Models
### Contextual Answers
You can use AI21's [contextual answers model](https://docs.ai21.com/reference/contextual-answers-ref) to parse
given text and answer a question based entirely on the provided information.
This means that if the answer to your question is not in the document,
the model will indicate it (instead of providing a false answer).
```python
from langchain_ai21 import AI21ContextualAnswers
tsm = AI21ContextualAnswers()
response = tsm.invoke(input={"context": "Lots of information here", "question": "Your question about the context"})
```
You can also use it with chains, output parsers, and vector DBs:
```python
from langchain_ai21 import AI21ContextualAnswers
from langchain_core.output_parsers import StrOutputParser
tsm = AI21ContextualAnswers()
chain = tsm | StrOutputParser()
response = chain.invoke(
    {"context": "Your context", "question": "Your question"},
)
```
## Text Splitters
### Semantic Text Splitter
You can use AI21's semantic [text segmentation model](https://docs.ai21.com/reference/text-segmentation-ref) to split a text into segments by topic.
Text is split at each point where the topic changes.
For examples, see [this page](https://github.com/langchain-ai/langchain/blob/master/docs/docs/modules/data_connection/document_transformers/semantic_text_splitter.ipynb).
```python
from langchain_ai21 import AI21SemanticTextSplitter
splitter = AI21SemanticTextSplitter()
response = splitter.split_text("Your text")
```