"import { ColumnContainer, Column } from \\\"@theme/Columns\\\";"
"```{=mdx}\n",
"import { ColumnContainer, Column } from \"@theme/Columns\";\n",
"```"
]
},
{
@@ -53,10 +55,13 @@
"## Invoke\n",
"In the simplest case, we just want to pass in a topic string and get back a joke string:\n",
"\n",
"```{=mdx}\n",
"<ColumnContainer>\n",
"\n",
"<Column>\n",
"\n",
"```\n",
"\n",
"#### Without LCEL\n"
]
},
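The implementation cells for this comparison are elided from the hunk above. For orientation, a minimal sketch of the LCEL side, assuming the `langchain-openai` package and the `{topic}` joke prompt this page uses throughout:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
model = ChatOpenAI(model="gpt-3.5-turbo")
output_parser = StrOutputParser()

# Compose the three components into a single runnable chain.
chain = prompt | model | output_parser
chain.invoke({"topic": "ice cream"})
```

The same `prompt`, `model`, `output_parser`, and `chain` objects are reused in the sketches that follow.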
@@ -95,9 +100,12 @@
"id": "cdc3b527-c09e-4c77-9711-c3cc4506cd95",
"metadata": {},
"source": [
"\n",
"```{=mdx}\n",
"</Column>\n",
"\n",
"<Column>\n",
"```\n",
"\n",
"#### LCEL\n",
"\n"
@@ -136,14 +144,19 @@
"id": "3c0b0513-77b8-4371-a20e-3e487cec7e7f",
"metadata": {},
"source": [
"\n",
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"```\n",
"## Stream\n",
"If we want to stream results instead, we'll need to change our function:\n",
"\n",
"```{=mdx}\n",
"\n",
"<ColumnContainer>\n",
"<Column>\n",
"```\n",
"\n",
"#### Without LCEL\n",
"\n"
@@ -184,10 +197,11 @@
"id": "f8e36b0e-c7dc-4130-a51b-189d4b756c7f",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"```\n",
"#### LCEL\n",
"\n"
]
@@ -208,15 +222,19 @@
"id": "b9b41e78-ddeb-44d0-a58b-a0ea0c99a761",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>\n",
"```\n",
"\n",
"## Batch\n",
"\n",
"If we want to run on a batch of inputs in parallel, we'll again need a new function:\n",
"\n",
"```{=mdx}\n",
"<ColumnContainer>\n",
"<Column>\n",
"```\n",
"\n",
"#### Without LCEL\n",
"\n"
@@ -244,10 +262,11 @@
"id": "9b3e9d34-6775-43c1-93d8-684b58e341ab",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"\n",
"<Column>\n",
"\n",
"```\n",
"#### LCEL\n",
"\n"
]
@@ -267,15 +286,18 @@
"id": "cc5ba36f-eec1-4fc1-8cfe-fa242a7f7809",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"```\n",
"## Async\n",
"\n",
"If we need an asynchronous version:\n",
"\n",
"```{=mdx}\n",
"<ColumnContainer>\n",
"<Column>\n",
"```\n",
"\n",
"#### Without LCEL\n",
"\n"
@@ -311,9 +333,11 @@
"id": "2f209290-498c-4c17-839e-ee9002919846",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"\n",
"<Column>\n",
"```\n",
"\n",
"#### LCEL\n",
"\n"
@@ -334,13 +358,16 @@
"id": "1f282129-99a3-40f4-b67f-2d0718b1bea9",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>\n",
"\n",
"```\n",
"## Async Batch\n",
"\n",
"```{=mdx}\n",
"<ColumnContainer>\n",
"<Column>\n",
"```\n",
"\n",
"#### Without LCEL\n",
"\n"
@@ -370,9 +397,11 @@
"id": "90691048-17ae-479d-83c2-859e33ddf3eb",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"\n",
"<Column>\n",
"```\n",
"\n",
"#### LCEL\n",
"\n"
@@ -393,15 +422,19 @@
"id": "f6888245-1ebe-4768-a53b-e1fef6a8b379",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>\n",
"```\n",
"\n",
"## LLM instead of chat model\n",
"\n",
"If we want to use a completion endpoint instead of a chat endpoint: \n",
"\n",
"```{=mdx}\n",
"<ColumnContainer>\n",
"<Column>\n",
"```\n",
"\n",
"#### Without LCEL\n",
"\n"
@@ -433,9 +466,11 @@
"id": "45342cd6-58c2-4543-9392-773e05ef06e7",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"\n",
"<Column>\n",
"```\n",
"\n",
"#### LCEL\n",
"\n"
@@ -466,15 +501,19 @@
"id": "ca115eaf-59ef-45c1-aac1-e8b0ce7db250",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>\n",
"```\n",
"\n",
"## Different model provider\n",
"\n",
"If we want to use Anthropic instead of OpenAI: \n",
"\n",
"```{=mdx}\n",
"<ColumnContainer>\n",
"<Column>\n",
"```\n",
"\n",
"#### Without LCEL\n",
"\n"
@@ -512,9 +551,11 @@
"id": "52a0c9f8-e316-42e1-af85-cabeba4b7059",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"\n",
"<Column>\n",
"```\n",
"\n",
"#### LCEL\n",
"\n"
@@ -545,15 +586,19 @@
"id": "d7a91eee-d017-420d-b215-f663dcbf8ed2",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>\n",
"```\n",
"\n",
"## Runtime configurability\n",
"\n",
"If we wanted to make the choice of chat model or LLM configurable at runtime:\n",
"\n",
"```{=mdx}\n",
"<ColumnContainer>\n",
"<Column>\n",
"```\n",
"\n",
"#### Without LCEL\n",
"\n"
@@ -634,9 +679,11 @@
"id": "d1530c5c-6635-4599-9483-6df357ca2d64",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"\n",
"<Column>\n",
"```\n",
"\n",
"#### With LCEL\n",
"\n"
@@ -694,15 +741,19 @@
"id": "370dd4d7-b825-40c4-ae3c-2693cba2f22a",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>\n",
"```\n",
"\n",
"## Logging\n",
"\n",
"If we want to log our intermediate results:\n",
"\n",
"```{=mdx}\n",
"<ColumnContainer>\n",
"<Column>\n",
"```\n",
"\n",
"#### Without LCEL\n",
"\n",
@@ -733,9 +784,11 @@
"id": "16bd20fd-43cd-4aaf-866f-a53d1f20312d",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"\n",
"<Column>\n",
"```\n",
"\n",
"#### LCEL\n",
"Every component has built-in integrations with LangSmith. If we set the following two environment variables, all chain traces are logged to LangSmith.\n",
@@ -770,16 +823,19 @@
"id": "e25ce3c5-27a7-4954-9f0e-b94313597135",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>\n",
"```\n",
"\n",
"## Fallbacks\n",
"\n",
"If we wanted to add fallback logic, in case one model API is down:\n",
"\n",
"\n",
"```{=mdx}\n",
"<ColumnContainer>\n",
"<Column>\n",
"```\n",
"\n",
"#### Without LCEL\n",
"\n",
@@ -823,9 +879,11 @@
"id": "f7ef59b5-2ce3-479e-a7ac-79e1e2f30e9c",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"\n",
"<Column>\n",
"```\n",
"\n",
"#### LCEL\n",
"\n"
@@ -850,8 +908,10 @@
"id": "3af52d36-37c6-4d89-b515-95d7270bb96a",
"metadata": {},
"source": [
"```{=mdx}\n",
"</Column>\n",
"</ColumnContainer>"
"</ColumnContainer>\n",
"```"
]
},
{
@@ -863,8 +923,10 @@
"\n",
"Even in this simple case, our LCEL chain succinctly packs in a lot of functionality. As chains become more complex, this becomes especially valuable.\n",
For more see the [ChatFireworks](https://api.python.langchain.com/en/latest/chat_models/langchain_fireworks.chat_models.ChatFireworks.html#langchain_fireworks.chat_models.ChatFireworks.bind_tools) reference.
</TabItem>
<TabItem value="mistral" label="Mistral">
Install dependencies and set API keys:
```python
%pip install -qU langchain-mistralai
```
```python
os.environ["MISTRAL_API_KEY"] = getpass.getpass()
```
We can use the `ChatMistralAI.bind_tools()` method to handle converting
`Multiply` to a valid function schema and binding it to the model (i.e., passing it in each time the model is invoked).
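A minimal sketch of that flow (the `Multiply` schema and the `mistral-large-latest` model name here are illustrative):
```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_mistralai import ChatMistralAI

class Multiply(BaseModel):
    """Multiply two integers together."""

    a: int = Field(..., description="First integer")
    b: int = Field(..., description="Second integer")

llm = ChatMistralAI(model="mistral-large-latest")
# bind_tools converts the pydantic class to a function schema and attaches it.
llm_with_tools = llm.bind_tools([Multiply])
llm_with_tools.invoke("What is 3 * 12?")
```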
For more see the [ChatMistralAI API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html#langchain_mistralai.chat_models.ChatMistralAI).
</TabItem>
<TabItem value="together" label="Together">
Since TogetherAI is a drop-in replacement for OpenAI, we can just use the OpenAI integration.
"While chat models use language models under the hood, the interface they use is a bit different.\n",
"Rather than using a \"text in, text out\" API, they use an interface where \"chat messages\" are the inputs and outputs.\n",
"\n",
"## Setup\n",
"\n",
"For this example we'll need to install the OpenAI partner package:\n",
"\n",
"```bash\n",
"pip install langchain-openai\n",
"```\n",
"\n",
"Accessing the API requires an API key, which you can get by creating an account and heading [here](https://platform.openai.com/account/api-keys). Once we have a key we'll want to set it as an environment variable by running:\n",
"\n",
"```bash\n",
"export OPENAI_API_KEY=\"...\"\n",
"```\n",
"If you'd prefer not to set an environment variable you can pass the key in directly via the `openai_api_key` named parameter when initiating the OpenAI LLM class:\n"
"## Setup\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"cell_type": "markdown",
"id": "e230abb2-bc84-438b-b9ff-dd124acb1375",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"chat = ChatOpenAI(openai_api_key=\"...\")"
"<ChatModelTabs customVarName=\"chat\" />\n",
"```"
]
},
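With `chat` initialized (by whichever tab is selected above), the messages-in, message-out interface looks like this; a sketch:

```python
from langchain_core.messages import HumanMessage

# The input is a list of messages; the output is a single AI message.
chat.invoke([HumanMessage(content="Translate this sentence from English to French: I love programming.")])
```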
{
@@ -55,19 +42,25 @@
"id": "609bbd5c-e5a1-4166-89e1-d6c52054860d",
"metadata": {},
"source": [
"Otherwise you can initialize without any params:"
"If you'd prefer not to set an environment variable you can pass the key in directly via the api key arg named parameter when initiating the chat model class:"
- [Interface](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.transformers.BaseDocumentTransformer.html): API reference for the base interface.
`Embeddings`: Wrapper around a text embedding model, used for converting
text to embeddings.
- [Docs](../../../docs/modules/data_connection/text_embedding): Detailed documentation on how to use embeddings.
- [Integrations](../../../docs/integrations/text_embedding/): 30+ integrations to choose from.
- [Interface](https://api.python.langchain.com/en/latest/embeddings/langchain_core.embeddings.Embeddings.html): API reference for the base interface.
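A minimal sketch of the `Embeddings` interface just described, assuming the `langchain-openai` package:

```python
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
vector = embeddings.embed_query("Hello, world!")              # one text -> one vector
vectors = embeddings.embed_documents(["doc one", "doc two"])  # many texts -> many vectors
```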
`VectorStore`: Wrapper around a vector database, used for storing and
querying embeddings.
- [Docs](../../../docs/modules/data_connection/vectorstores/): Detailed documentation on how to use vector stores.
- [Integrations](../../../docs/integrations/vectorstores/): 40+ integrations to choose from.
- [Interface](https://api.python.langchain.com/en/latest/vectorstores/langchain_core.vectorstores.VectorStore.html): API reference for the base interface.
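A minimal sketch of storing and querying, assuming the FAISS integration (`pip install faiss-cpu`) and OpenAI embeddings:

```python
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

# Embed the texts and index them for similarity search.
vectorstore = FAISS.from_texts(
    ["harrison worked at kensho", "bears like to eat honey"],
    embedding=OpenAIEmbeddings(),
)
docs = vectorstore.similarity_search("where did harrison work?", k=1)
retriever = vectorstore.as_retriever()  # any vector store can be wrapped as a retriever
```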
@@ -399,12 +402,10 @@ facilitate retrieval. Any `VectorStore` can easily be turned into a
`ChatModel`: An LLM-backed chat model. Takes in a sequence of messages
and returns a message.
- [Docs](../../../docs/modules/model_io/chat/)
- [Integrations](../../../docs/integrations/chat/): 25+ integrations to choose from.
- [Interface](https://api.python.langchain.com/en/latest/language_models/langchain_core.language_models.chat_models.BaseChatModel.html): API reference for the base interface.
`LLM`: A text-in-text-out LLM. Takes in a string and returns a string.
- [Docs](../../../docs/modules/model_io/llms)
- [Integrations](../../../docs/integrations/llms): 75+ integrations to choose from.
- [Interface](https://api.python.langchain.com/en/latest/language_models/langchain_core.language_models.llms.BaseLLM.html): API reference for the base interface.
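The contrast between the two interfaces in one sketch, assuming the `langchain-openai` package:

```python
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI, OpenAI

# ChatModel: a sequence of messages in, a message out.
chat = ChatOpenAI()
message = chat.invoke([HumanMessage(content="Say hello")])

# LLM: a string in, a string out.
llm = OpenAI()
text = llm.invoke("Say hello")
```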