# langchain-ai21
This package contains the LangChain integrations for AI21 models and tools.
## Installation and Setup

- Install the AI21 partner package:

```bash
pip install langchain-ai21
```

- Get an AI21 API key and set it as an environment variable (`AI21_API_KEY`).
## Chat Models

This package contains the `ChatAI21` class, which is the recommended way to interface with AI21 chat models, including Jamba-Instruct and any Jurassic chat models.

To use, install the requirements and configure your environment:

```bash
export AI21_API_KEY=your-api-key
```
Then initialize:

```python
from langchain_core.messages import HumanMessage

from langchain_ai21.chat_models import ChatAI21

chat = ChatAI21(model="jamba-instruct-preview")
messages = [HumanMessage(content="Hello from AI21")]
chat.invoke(messages)
```
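Because `ChatAI21` is a standard LangChain chat model, it also supports the usual Runnable methods, such as streaming and multi-turn message lists. A minimal sketch (the system prompt and model name are illustrative, and assume the chosen model accepts a system message):

```python
from langchain_core.messages import HumanMessage, SystemMessage

from langchain_ai21.chat_models import ChatAI21

chat = ChatAI21(model="jamba-instruct-preview")

# Multi-turn input: a system instruction followed by a user message.
messages = [
    SystemMessage(content="You are a concise assistant."),
    HumanMessage(content="Summarize what AI21 Jamba is in one sentence."),
]

# Stream the response chunk by chunk via the standard Runnable interface.
for chunk in chat.stream(messages):
    print(chunk.content, end="", flush=True)
```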
For a list of the supported models, see this page.
## LLMs

You can use AI21's Jurassic generative AI models as LangChain LLMs. To use the newer Jamba model, use the `ChatAI21` chat model, which supports single-turn instruction/question answering capabilities.

```python
from langchain_core.prompts import PromptTemplate

from langchain_ai21 import AI21LLM

llm = AI21LLM(model="j2-ultra")

template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate.from_template(template)

chain = prompt | llm

question = "Which scientist discovered relativity?"
print(chain.invoke({"question": question}))
```
## Embeddings

You can use AI21's embeddings model as shown here:

### Query

```python
from langchain_ai21 import AI21Embeddings

embeddings = AI21Embeddings()
embeddings.embed_query("Hello! This is some query")
```

### Document

```python
from langchain_ai21 import AI21Embeddings

embeddings = AI21Embeddings()
embeddings.embed_documents(["Hello! This is document 1", "And this is document 2!"])
```
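Both methods return plain lists of floats, so you can compare a query embedding against document embeddings directly. A minimal cosine-similarity sketch (the texts are illustrative):

```python
import math

from langchain_ai21 import AI21Embeddings

embeddings = AI21Embeddings()

docs = ["Hello! This is document 1", "And this is document 2!"]
doc_vectors = embeddings.embed_documents(docs)
query_vector = embeddings.embed_query("Which document says hello?")


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


# Rank the documents by similarity to the query.
ranked = sorted(zip(docs, doc_vectors), key=lambda pair: cosine(query_vector, pair[1]), reverse=True)
for doc, vector in ranked:
    print(f"{cosine(query_vector, vector):.3f}  {doc}")
```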
## Task-Specific Models

### Contextual Answers

You can use AI21's contextual answers model to parse given text and answer a question based entirely on the provided information. This means that if the answer to your question is not in the document, the model will indicate it (instead of providing a false answer).

```python
from langchain_ai21 import AI21ContextualAnswers

tsm = AI21ContextualAnswers()

response = tsm.invoke(input={"context": "Lots of information here", "question": "Your question about the context"})
```
You can also use it in chains, together with output parsers and vector DBs:

```python
from langchain_core.output_parsers import StrOutputParser

from langchain_ai21 import AI21ContextualAnswers

tsm = AI21ContextualAnswers()
chain = tsm | StrOutputParser()

response = chain.invoke(
    {"context": "Your context", "question": "Your question"},
)
```
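For example, a retriever can fill in the `context` field from a vector store. The sketch below assumes a local FAISS store from `langchain-community` (which requires the `faiss-cpu` package); any retriever can be substituted, and the indexed texts are illustrative:

```python
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

from langchain_ai21 import AI21ContextualAnswers, AI21Embeddings

# Index a couple of illustrative texts in a local FAISS store.
vectorstore = FAISS.from_texts(
    ["AI21 is headquartered in Tel Aviv.", "Jamba is an AI21 foundation model."],
    embedding=AI21Embeddings(),
)
retriever = vectorstore.as_retriever()

tsm = AI21ContextualAnswers()

# Retrieve documents, join their text into the context, and pass the question through.
chain = (
    {
        "context": retriever | (lambda docs: " ".join(doc.page_content for doc in docs)),
        "question": RunnablePassthrough(),
    }
    | tsm
    | StrOutputParser()
)

print(chain.invoke("Where is AI21 headquartered?"))
```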
## Text Splitters

### Semantic Text Splitter

You can use AI21's semantic text segmentation model to split a text into segments by topic. Text is split at each point where the topic changes.

For a list of examples, see this page.
```python
from langchain_ai21 import AI21SemanticTextSplitter

splitter = AI21SemanticTextSplitter()
response = splitter.split_text("Your text")
```
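Because the splitter follows the standard LangChain text-splitter interface, you can also wrap the resulting segments as `Document` objects; a short sketch (the text and metadata are illustrative):

```python
from langchain_ai21 import AI21SemanticTextSplitter

text = (
    "Your long text goes here. It can cover several topics, "
    "and the splitter breaks it into topically coherent segments."
)

splitter = AI21SemanticTextSplitter()

# Plain string segments, one per detected topic.
segments = splitter.split_text(text)

# The same splitting, wrapped as Document objects with optional per-text metadata.
documents = splitter.create_documents(texts=[text], metadatas=[{"source": "example"}])

for doc in documents:
    print(doc.metadata, doc.page_content[:60])
```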