"import { ColumnContainer, Column } from \\\"@theme/Columns\\\";"
"```{=mdx}\n",
"import { ColumnContainer, Column } from \"@theme/Columns\";\n",
"```"
]
},
{
@@ -53,10 +55,13 @@
"## Invoke\n",
"In the simplest case, we just want to pass in a topic string and get back a joke string:\n",
"\n",
"```{=mdx}\n",
"<ColumnContainer>\n",
"\n",
"<Column>\n",
"\n",
"```\n",
"\n",
"#### Without LCEL\n"
]
},
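The code cells themselves are elided from this diff. For orientation, a minimal sketch of the LCEL side that the second column of this comparison arrives at (the prompt wording and model name are illustrative, and an `OPENAI_API_KEY` is assumed):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# One declarative pipeline: prompt -> chat model -> string output parser.
prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
chain = prompt | ChatOpenAI(model="gpt-3.5-turbo") | StrOutputParser()

# Pass in a topic string, get back a joke string.
print(chain.invoke({"topic": "ice cream"}))
```

Later sketches in this section reuse the `prompt` and `chain` names defined here.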
@@ -95,9 +100,12 @@
"id": "cdc3b527-c09e-4c77-9711-c3cc4506cd95",
"metadata": {},
"source": [
"\n",
"```{=mdx}\n",
"</Column>\n",
"\n",
"<Column>\n",
"```\n",
"\n",
"#### LCEL\n",
"\n"
@@ -136,14 +144,19 @@
"id": "3c0b0513-77b8-4371-a20e-3e487cec7e7f",
"metadata": {},
"source": [
"\n",
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"```\n",
"## Stream\n",
"If we want to stream results instead, we'll need to change our function:\n",
"\n",
"```{=mdx}\n",
"\n",
"<ColumnContainer>\n",
"<Column>\n",
"```\n",
"\n",
"#### Without LCEL\n",
"\n"
@@ -184,10 +197,11 @@
"id": "f8e36b0e-c7dc-4130-a51b-189d4b756c7f",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"```\n",
"#### LCEL\n",
"\n"
]
@@ -208,15 +222,19 @@
"id": "b9b41e78-ddeb-44d0-a58b-a0ea0c99a761",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>\n",
"```\n",
"\n",
"## Batch\n",
"\n",
"If we want to run on a batch of inputs in parallel, we'll again need a new function:\n",
"\n",
"```{=mdx}\n",
"<ColumnContainer>\n",
"<Column>\n",
"```\n",
"\n",
"#### Without LCEL\n",
"\n"
@@ -244,10 +262,11 @@
"id": "9b3e9d34-6775-43c1-93d8-684b58e341ab",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"```\n",
"#### LCEL\n",
"\n"
]
@@ -267,15 +286,18 @@
"id": "cc5ba36f-eec1-4fc1-8cfe-fa242a7f7809",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"```\n",
"## Async\n",
"\n",
"If we need an asynchronous version:\n",
"\n",
"```{=mdx}\n",
"<ColumnContainer>\n",
"<Column>\n",
"```\n",
"\n",
"#### Without LCEL\n",
"\n"
@@ -311,10 +333,12 @@
"id": "2f209290-498c-4c17-839e-ee9002919846",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"\n",
"<Column>\n",
" \n",
"```\n",
"\n",
"#### LCEL\n",
"\n"
]
@@ -334,13 +358,16 @@
"id": "1f282129-99a3-40f4-b67f-2d0718b1bea9",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"```\n",
"## Async Batch\n",
"\n",
"```{=mdx}\n",
"<ColumnContainer>\n",
"<Column>\n",
"```\n",
"\n",
"#### Without LCEL\n",
"\n"
@@ -370,9 +397,11 @@
"id": "90691048-17ae-479d-83c2-859e33ddf3eb",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"\n",
"<Column>\n",
"```\n",
"\n",
"#### LCEL\n",
"\n"
@@ -393,15 +422,19 @@
"id": "f6888245-1ebe-4768-a53b-e1fef6a8b379",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>\n",
"```\n",
"\n",
"## LLM instead of chat model\n",
"\n",
"If we want to use a completion endpoint instead of a chat endpoint: \n",
"\n",
"```{=mdx}\n",
"<ColumnContainer>\n",
"<Column>\n",
"```\n",
"\n",
"#### Without LCEL\n",
"\n"
@@ -433,9 +466,11 @@
"id": "45342cd6-58c2-4543-9392-773e05ef06e7",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"\n",
"<Column>\n",
"```\n",
"\n",
"#### LCEL\n",
"\n"
@@ -466,15 +501,19 @@
"id": "ca115eaf-59ef-45c1-aac1-e8b0ce7db250",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>\n",
"```\n",
"\n",
"## Different model provider\n",
"\n",
"If we want to use Anthropic instead of OpenAI: \n",
"\n",
"```{=mdx}\n",
"<ColumnContainer>\n",
"<Column>\n",
"```\n",
"\n",
"#### Without LCEL\n",
"\n"
@@ -512,9 +551,11 @@
"id": "52a0c9f8-e316-42e1-af85-cabeba4b7059",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"\n",
"<Column>\n",
"```\n",
"\n",
"#### LCEL\n",
"\n"
@@ -545,15 +586,19 @@
"id": "d7a91eee-d017-420d-b215-f663dcbf8ed2",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>\n",
"```\n",
"\n",
"## Runtime configurability\n",
"\n",
"If we wanted to make the choice of chat model or LLM configurable at runtime:\n",
"\n",
"```{=mdx}\n",
"<ColumnContainer>\n",
"<Column>\n",
"```\n",
"\n",
"#### Without LCEL\n",
"\n"
@@ -634,9 +679,11 @@
"id": "d1530c5c-6635-4599-9483-6df357ca2d64",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"\n",
"<Column>\n",
"```\n",
"\n",
"#### With LCEL\n",
"\n"
@@ -694,15 +741,19 @@
"id": "370dd4d7-b825-40c4-ae3c-2693cba2f22a",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>\n",
"```\n",
"\n",
"## Logging\n",
"\n",
"If we want to log our intermediate results:\n",
"\n",
"```{=mdx}\n",
"<ColumnContainer>\n",
"<Column>\n",
"```\n",
"\n",
"#### Without LCEL\n",
"\n",
@@ -733,9 +784,11 @@
"id": "16bd20fd-43cd-4aaf-866f-a53d1f20312d",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"\n",
"<Column>\n",
"```\n",
"\n",
"#### LCEL\n",
"Every component has built-in integrations with LangSmith. If we set the following two environment variables, all chain traces are logged to LangSmith.\n",
@@ -770,16 +823,19 @@
"id": "e25ce3c5-27a7-4954-9f0e-b94313597135",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>\n",
"```\n",
"\n",
"## Fallbacks\n",
"\n",
"If we wanted to add fallback logic, in case one model API is down:\n",
"\n",
"\n",
"```{=mdx}\n",
"<ColumnContainer>\n",
"<Column>\n",
"```\n",
"\n",
"#### Without LCEL\n",
"\n",
@@ -823,9 +879,11 @@
"id": "f7ef59b5-2ce3-479e-a7ac-79e1e2f30e9c",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"\n",
"<Column>\n",
"```\n",
"\n",
"#### LCEL\n",
"\n"
@@ -850,8 +908,10 @@
"id": "3af52d36-37c6-4d89-b515-95d7270bb96a",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>"
"</ColumnContainer>\n",
"```"
]
},
{
@@ -863,8 +923,10 @@
"\n",
"Even in this simple case, our LCEL chain succinctly packs in a lot of functionality. As chains become more complex, this becomes especially valuable.\n",
For more see the [ChatFireworks](https://api.python.langchain.com/en/latest/chat_models/langchain_fireworks.chat_models.ChatFireworks.html#langchain_fireworks.chat_models.ChatFireworks.bind_tools) reference.
</TabItem>
<TabItem value="mistral" label="Mistral">
Install dependencies and set API keys:
```python
%pip install -qU langchain-mistralai
```
```python
os.environ["MISTRAL_API_KEY"] = getpass.getpass()
```
We can use the `ChatMistralAI.bind_tools()` method to handle converting
`Multiply` to a valid function schema and binding it to the model (i.e.,
passing it in each time the model is invoked).
For more see the [ChatMistralAI API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html#langchain_mistralai.chat_models.ChatMistralAI).
</TabItem>
<TabItem value="together" label="Together">
Since TogetherAI is a drop-in replacement for OpenAI, we can just use the OpenAI integration.
"While chat models use language models under the hood, the interface they use is a bit different.\n",
"Rather than using a \"text in, text out\" API, they use an interface where \"chat messages\" are the inputs and outputs.\n",
"\n",
"## Setup\n",
"\n",
"For this example we'll need to install the OpenAI partner package:\n",
"\n",
"```bash\n",
"pip install langchain-openai\n",
"```\n",
"\n",
"Accessing the API requires an API key, which you can get by creating an account and heading [here](https://platform.openai.com/account/api-keys). Once we have a key we'll want to set it as an environment variable by running:\n",
"\n",
"```bash\n",
"export OPENAI_API_KEY=\"...\"\n",
"```\n",
"If you'd prefer not to set an environment variable you can pass the key in directly via the `openai_api_key` named parameter when initiating the OpenAI LLM class:\n"
"## Setup\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"cell_type": "markdown",
"id": "e230abb2-bc84-438b-b9ff-dd124acb1375",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"chat = ChatOpenAI(openai_api_key=\"...\")"
"<ChatModelTabs customVarName=\"chat\" />\n",
"```"
]
},
{
@@ -55,19 +42,25 @@
"id": "609bbd5c-e5a1-4166-89e1-d6c52054860d",
"metadata": {},
"source": [
"Otherwise you can initialize without any params:"
"If you'd prefer not to set an environment variable you can pass the key in directly via the api key arg named parameter when initiating the chat model class:"
return "\n\n".join(doc.page_content for doc in docs)
@@ -164,12 +165,11 @@ rag_chain = (
)
```
```python
rag_chain.invoke("What is Task Decomposition?")
```
```text
'Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. It can be done through prompting techniques like Chain of Thought or Tree of Thoughts, or by using task-specific instructions or human inputs. Task decomposition helps agents plan ahead and manage complicated tasks more effectively.'
```
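For context, a hedged reconstruction of the chain this hunk edits, assuming `retriever`, `prompt`, and `llm` are defined as elsewhere in the tutorial:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough


def format_docs(docs):
    # Join the retrieved documents into one context string for the prompt.
    return "\n\n".join(doc.page_content for doc in docs)


rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
```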
@@ -219,12 +219,11 @@ loader = WebBaseLoader(
docs = loader.load()
```
```python
len(docs[0].page_content)
```
```text
42824
```
@@ -232,7 +231,7 @@ len(docs[0].page_content)
print(docs[0].page_content[:500])
```
```text
LLM Powered Autonomous Agents
@@ -249,12 +248,13 @@ In
`DocumentLoader`: Object that loads data from a source as a list of `Documents`.
- [Interface](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.transformers.BaseDocumentTransformer.html): API reference for the base interface.
`Embeddings`: Wrapper around a text embedding model, used for converting
text to embeddings.
- [Docs](../../../docs/modules/data_connection/text_embedding): Detailed documentation on how to use embeddings.
- [Integrations](../../../docs/integrations/text_embedding/): 30+ integrations to choose from.
- [Interface](https://api.python.langchain.com/en/latest/embeddings/langchain_core.embeddings.Embeddings.html): API reference for the base interface.
`VectorStore`: Wrapper around a vector database, used for storing and
querying embeddings; a short sketch follows this list.
- [Docs](../../../docs/modules/data_connection/vectorstores/): Detailed documentation on how to use vector stores.
- [Integrations](../../../docs/integrations/vectorstores/): 40+ integrations to choose from.
- [Interface](https://api.python.langchain.com/en/latest/vectorstores/langchain_core.vectorstores.VectorStore.html): API reference for the base interface.
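A short sketch tying the two components together, assuming `splits` is the list of chunks produced by the text-splitting step:

```python
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings

# Embed each chunk and index the vectors for similarity search.
vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())
```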
@@ -399,17 +402,15 @@ facilitate retrieval. Any `VectorStore` can easily be turned into a
retrieved_docs = retriever.invoke("What are the approaches to Task Decomposition?")
```
```python
len(retrieved_docs)
```
```text
6
```
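A sketch of the retriever setup these numbers come from; `k=6` mirrors the six documents returned above:

```python
# Any VectorStore becomes a Retriever with a single call.
retriever = vectorstore.as_retriever(search_type="similarity", search_kwargs={"k": 6})
retrieved_docs = retriever.invoke("What are the approaches to Task Decomposition?")
```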
@@ -417,7 +418,7 @@ len(retrieved_docs)
print(retrieved_docs[0].page_content)
```
```text
Tree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.
Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\n1.", "What are the subgoals for achieving XYZ?", (2) by using task-specific instructions; e.g. "Write a story outline." for writing a novel, or (3) with human inputs.
```
@@ -460,34 +461,13 @@ parses the output.
We’ll use the gpt-3.5-turbo OpenAI chat model, but any LangChain `LLM` or `ChatModel` could be substituted in.
[HumanMessage(content="You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\nQuestion: filler question \nContext: filler context \nAnswer:")]
```
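A sketch of the model and prompt setup being described, assuming the `rlm/rag-prompt` hub prompt this tutorial uses:

```python
from langchain import hub
from langchain_openai import ChatOpenAI

# Pull the tutorial's RAG prompt from the LangChain prompt hub.
prompt = hub.pull("rlm/rag-prompt")
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
```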
@@ -514,7 +493,7 @@ example_messages
print(example_messages[0].content)
```
```text
You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.
Question: filler question
Context: filler context
@@ -543,13 +522,12 @@ rag_chain = (
)
```
```python
for chunk in rag_chain.stream("What is Task Decomposition?"):
print(chunk, end="", flush=True)
```
```text
Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. It involves transforming big tasks into multiple manageable tasks, allowing for easier interpretation and execution by autonomous agents or models. Task decomposition can be done through various methods, such as using prompting techniques, task-specific instructions, or human inputs.
`ChatModel`: An LLM-backed chat model. Takes in a sequence of messages
and returns a message.
- [Docs](../../../docs/modules/model_io/chat/)
- [Integrations](../../../docs/integrations/chat/): 25+ integrations to choose from.
- [Interface](https://api.python.langchain.com/en/latest/language_models/langchain_core.language_models.chat_models.BaseChatModel.html): API reference for the base interface.
`LLM`: A text-in-text-out LLM. Takes in a string and returns a string.
- [Docs](../../../docs/modules/model_io/llms)
- [Integrations](../../../docs/integrations/llms): 75+ integrations to choose from.
- [Interface](https://api.python.langchain.com/en/latest/language_models/langchain_core.language_models.llms.BaseLLM.html): API reference for the base interface.
@@ -605,7 +585,7 @@ rag_chain = (
rag_chain.invoke("What is Task Decomposition?")
```
```text
'Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. It involves transforming big tasks into multiple manageable tasks, allowing for a more systematic and organized approach to problem-solving. Thanks for asking!'