For more see the [ChatFireworks](https://api.python.langchain.com/en/latest/chat_models/langchain_fireworks.chat_models.ChatFireworks.html#langchain_fireworks.chat_models.ChatFireworks.bind_tools) reference.
</TabItem>
<TabItem value="mistral" label="Mistral">
Install dependencies and set API keys:
```python
%pip install -qU langchain-mistralai
```
```python
import getpass
import os

os.environ["MISTRAL_API_KEY"] = getpass.getpass()
```
We can use the `ChatMistralAI.bind_tools()` method to handle converting
`Multiply` to a valid function schema and binding it to the model (i.e., passing it in each time the model is invoked).
For more see the [ChatMistralAI API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html#langchain_mistralai.chat_models.ChatMistralAI).
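Under the hood, `bind_tools` converts the `Multiply` tool into an OpenAI-style function schema. As a rough, illustrative sketch (field names follow the OpenAI tool format; the exact generated schema may differ):

```python
# Roughly the shape of the function schema that bind_tools generates
# for a Multiply(a: int, b: int) tool. Illustrative only, not the
# literal output of the library.
multiply_schema = {
    "type": "function",
    "function": {
        "name": "Multiply",
        "description": "Multiply two integers together.",
        "parameters": {
            "type": "object",
            "properties": {
                "a": {"type": "integer", "description": "First integer"},
                "b": {"type": "integer", "description": "Second integer"},
            },
            "required": ["a", "b"],
        },
    },
}

print(multiply_schema["function"]["name"])
```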
</TabItem>
<TabItem value="together" label="Together">
Since TogetherAI's API is a drop-in replacement for OpenAI's, we can just use the OpenAI integration, pointed at the Together endpoint:
"While chat models use language models under the hood, the interface they use is a bit different.\n",
"While chat models use language models under the hood, the interface they use is a bit different.\n",
"Rather than using a \"text in, text out\" API, they use an interface where \"chat messages\" are the inputs and outputs.\n",
"Rather than using a \"text in, text out\" API, they use an interface where \"chat messages\" are the inputs and outputs.\n",
"\n",
"\n",
"## Setup\n",
"## Setup\n"
"\n",
"For this example we'll need to install the OpenAI partner package:\n",
"\n",
"```bash\n",
"pip install langchain-openai\n",
"```\n",
"\n",
"Accessing the API requires an API key, which you can get by creating an account and heading [here](https://platform.openai.com/account/api-keys). Once we have a key we'll want to set it as an environment variable by running:\n",
"\n",
"```bash\n",
"export OPENAI_API_KEY=\"...\"\n",
"```\n",
"If you'd prefer not to set an environment variable you can pass the key in directly via the `openai_api_key` named parameter when initiating the OpenAI LLM class:\n"
]
]
},
},
{
{
"cell_type": "code",
"cell_type": "markdown",
"execution_count": null,
"id": "e230abb2-bc84-438b-b9ff-dd124acb1375",
"id": "e230abb2-bc84-438b-b9ff-dd124acb1375",
"metadata": {},
"metadata": {},
"outputs": [],
"source": [
"source": [
"from langchain_openai import ChatOpenAI\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"\n",
"chat = ChatOpenAI(openai_api_key=\"...\")"
"<ChatModelTabs customVarName=\"chat\" />\n",
"```"
]
]
},
},
{
{
@ -55,19 +42,25 @@
"id": "609bbd5c-e5a1-4166-89e1-d6c52054860d",
"id": "609bbd5c-e5a1-4166-89e1-d6c52054860d",
"metadata": {},
"metadata": {},
"source": [
"source": [
"Otherwise you can initialize without any params:"
"If you'd prefer not to set an environment variable you can pass the key in directly via the api key arg named parameter when initiating the chat model class:"
    return "\n\n".join(doc.page_content for doc in docs)
)
```

```python
rag_chain.invoke("What is Task Decomposition?")
```

```text
'Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. It can be done through prompting techniques like Chain of Thought or Tree of Thoughts, or by using task-specific instructions or human inputs. Task decomposition helps agents plan ahead and manage complicated tasks more effectively.'
```
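The `format_docs` helper whose `return` line appears above simply joins retrieved documents into one context string. A self-contained sketch, with a minimal stand-in for the `Document` class:

```python
from dataclasses import dataclass

@dataclass
class Document:
    # Minimal stand-in for langchain_core.documents.Document
    page_content: str

def format_docs(docs):
    # Join the text of all retrieved documents into one context string,
    # separated by blank lines, ready to drop into the prompt.
    return "\n\n".join(doc.page_content for doc in docs)

docs = [Document("First chunk."), Document("Second chunk.")]
print(format_docs(docs))
```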
docs = loader.load()
```

```python
len(docs[0].page_content)
```

```text
42824
```
print(docs[0].page_content[:500])
```

```text
LLM Powered Autonomous Agents

Date: June 23, 2023 | Estimated Reading Time: 31 min | Author: Lilian Weng
```
### Go deeper

`DocumentLoader`: Object that loads data from a source as a list of `Document`s.

- [Interface](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.transformers.BaseDocumentTransformer.html): API reference for the base interface.

`Embeddings`: Wrapper around a text embedding model, used for converting text to embeddings.

- [Docs](../../../docs/modules/data_connection/text_embedding): Detailed documentation on how to use embeddings.
- [Integrations](../../../docs/integrations/text_embedding/): 30+ integrations to choose from.
- [Interface](https://api.python.langchain.com/en/latest/embeddings/langchain_core.embeddings.Embeddings.html): API reference for the base interface.

`VectorStore`: Wrapper around a vector database, used for storing and querying embeddings.

- [Docs](../../../docs/modules/data_connection/vectorstores/): Detailed documentation on how to use vector stores.
- [Integrations](../../../docs/integrations/vectorstores/): 40+ integrations to choose from.
- [Interface](https://api.python.langchain.com/en/latest/vectorstores/langchain_core.vectorstores.VectorStore.html): API reference for the base interface.

This completes the **Indexing** portion of the pipeline. At this point we have a query-able vector store containing the chunked contents of our blog post.
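As a toy illustration of the store-and-query pattern that a `VectorStore` wraps, here is a tiny in-memory version. The embedding and class here are deliberately naive stand-ins, not how a real store (Chroma, FAISS, etc.) is implemented:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in embedding: a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(u: Counter, v: Counter) -> float:
    # Cosine similarity between two sparse count vectors
    dot = sum(u[k] * v[k] for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

class ToyVectorStore:
    """Stores (vector, text) pairs; returns nearest texts by cosine similarity."""
    def __init__(self):
        self._docs = []

    def add_texts(self, texts):
        for t in texts:
            self._docs.append((embed(t), t))

    def similarity_search(self, query, k=1):
        q = embed(query)
        scored = sorted(self._docs, key=lambda p: cosine(q, p[0]), reverse=True)
        return [t for _, t in scored[:k]]

store = ToyVectorStore()
store.add_texts(["task decomposition breaks tasks down", "paris is in france"])
print(store.similarity_search("how does task decomposition work", k=1))
```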
Any `VectorStore` can easily be turned into a `Retriever`:

```python
retrieved_docs = retriever.invoke("What are the approaches to Task Decomposition?")
```

```python
len(retrieved_docs)
```

```text
6
```
```python
print(retrieved_docs[0].page_content)
```

```text
Tree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.

Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\n1.", "What are the subgoals for achieving XYZ?", (2) by using task-specific instructions; e.g. "Write a story outline." for writing a novel, or (3) with human inputs.
```
`Retriever`: An object that returns `Document`s given a text query.

- [Docs](../../../docs/modules/data_connection/retrievers/): Further documentation on the interface and built-in retrieval techniques, some of which include:
  - `MultiQueryRetriever`, which generates variants of the input question to improve retrieval hit rate.

```text
[HumanMessage(content="You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\nQuestion: filler question \nContext: filler context \nAnswer:")]
```
```python
print(example_messages[0].content)
```

```text
You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.
Question: filler question 
Context: filler context 
Answer:
```
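The prompt above is a plain template with `question` and `context` slots; the filling step can be sketched with ordinary string formatting (a stand-in for the real prompt-template object):

```python
# A stand-in for the RAG prompt: the real chain fills {context} and
# {question} with the retrieved documents and the user's query.
template = (
    "You are an assistant for question-answering tasks. "
    "Use the following pieces of retrieved context to answer the question. "
    "If you don't know the answer, just say that you don't know. "
    "Use three sentences maximum and keep the answer concise.\n"
    "Question: {question} \n"
    "Context: {context} \n"
    "Answer:"
)

message = template.format(question="filler question", context="filler context")
print(message)
```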
)
```

```python
for chunk in rag_chain.stream("What is Task Decomposition?"):
    print(chunk, end="", flush=True)
```

```text
Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. It involves transforming big tasks into multiple manageable tasks, allowing for easier interpretation and execution by autonomous agents or models. Task decomposition can be done through various methods, such as using prompting techniques, task-specific instructions, or human inputs.
```
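The `.stream()` call above yields the answer incrementally. The consumption pattern can be sketched with a stand-in chain whose `stream` method is a generator (`FakeChain` is hypothetical, not a LangChain class):

```python
class FakeChain:
    """Stand-in for a chain: .stream() yields the answer in small chunks."""
    def __init__(self, answer):
        self.answer = answer

    def stream(self, question):
        # Yield fixed-size slices to mimic token-by-token streaming
        for i in range(0, len(self.answer), 8):
            yield self.answer[i:i + 8]

chain = FakeChain("Task decomposition breaks complex tasks into smaller steps.")
pieces = []
for chunk in chain.stream("What is Task Decomposition?"):
    pieces.append(chunk)
print("".join(pieces))
```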
`ChatModel`: An LLM-backed chat model. Takes in a sequence of messages and returns a message.

- [Docs](../../../docs/modules/model_io/chat/)
- [Integrations](../../../docs/integrations/chat/): 25+ integrations to choose from.
- [Interface](https://api.python.langchain.com/en/latest/language_models/langchain_core.language_models.chat_models.BaseChatModel.html): API reference for the base interface.

`LLM`: A text-in-text-out LLM. Takes in a string and returns a string.

- [Docs](../../../docs/modules/model_io/llms)
- [Integrations](../../../docs/integrations/llms): 75+ integrations to choose from.
- [Interface](https://api.python.langchain.com/en/latest/language_models/langchain_core.language_models.llms.BaseLLM.html): API reference for the base interface.

See the separate guide on RAG with locally-running models.
```python
rag_chain.invoke("What is Task Decomposition?")
```

```text
'Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. It involves transforming big tasks into multiple manageable tasks, allowing for a more systematic and organized approach to problem-solving. Thanks for asking!'
```
There are plenty of features, integrations, and extensions to explore in each of the above sections. Aside from the **Go deeper** sources mentioned