This should avoid weird broken-link edge cases.
Relative links now trip up the Docusaurus broken link checker, so this
PR also removes them.
Also snuck in a small addition about asyncio.
"- [`invoke`](#invoke): call the chain on an input\n",
"- [`batch`](#batch): call the chain on a list of inputs\n",
"\n",
"These also have corresponding async methods:\n",
"These also have corresponding async methods that should be used with [asyncio](https://docs.python.org/3/library/asyncio.html) `await` syntax for concurrency:\n",
"\n",
"- [`astream`](#async-stream): stream back chunks of the response async\n",
"- [`ainvoke`](#async-invoke): call the chain on an input async\n",
"**Please ensure OpenAI library is less than 1.0.0; otherwise, refer to the newer doc [OpenAI Adapter](./openai).**\n",
"**Please ensure OpenAI library is less than 1.0.0; otherwise, refer to the newer doc [OpenAI Adapter](/docs/integrations/adapters/openai/).**\n",
"\n",
"A lot of people get started with OpenAI but want to explore other models. LangChain's integrations with many model providers make this easy to do so. While LangChain has it's own message and model APIs, we've also made it as easy as possible to explore other models by exposing an adapter to adapt LangChain models to the OpenAI api.\n",
"**Please ensure OpenAI library is version 1.0.0 or higher; otherwise, refer to the older doc [OpenAI Adapter(Old)](./openai-old).**\n",
"**Please ensure OpenAI library is version 1.0.0 or higher; otherwise, refer to the older doc [OpenAI Adapter(Old)](/docs/integrations/adapters/openai-old/).**\n",
"\n",
"A lot of people get started with OpenAI but want to explore other models. LangChain's integrations with many model providers make this easy to do so. While LangChain has it's own message and model APIs, we've also made it as easy as possible to explore other models by exposing an adapter to adapt LangChain models to the OpenAI api.\n",
"This example goes over how to use LangChain to interact with the supported [NVIDIA Retrieval QA Embedding Model](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-foundation/models/nvolve-40k) for [retrieval-augmented generation](https://developer.nvidia.com/blog/build-enterprise-retrieval-augmented-generation-apps-with-nvidia-retrieval-qa-embedding-model/) via the `NVIDIAEmbeddings` class.\n",
"\n",
"For more information on accessing the chat models through this api, check out the [ChatNVIDIA](../chat/nvidia_ai_endpoints) documentation."
"For more information on accessing the chat models through this api, check out the [ChatNVIDIA](/docs/integrations/chat/nvidia_ai_endpoints/) documentation."
"To best understand the agent framework, let's build an agent that has two tools: one to look things up online, and one to look up specific data that we've loaded into a index.\n",
"\n",
"This will assume knowledge of [LLMs](../model_io) and [retrieval](../data_connection) so if you haven't already explored those sections, it is recommended you do so.\n",
"This will assume knowledge of [LLMs](/docs/modules/model_io/) and [retrieval](/docs/modules/data_connection/) so if you haven't already explored those sections, it is recommended you do so.\n",
"\n",
"## Setup: LangSmith\n",
"\n",
@ -107,7 +107,7 @@
"source": [
"### Retriever\n",
"\n",
"We will also create a retriever over some data of our own. For a deeper explanation of each step here, see [this section](/docs/modules/data_connection/)"
"We will also create a retriever over some data of our own. For a deeper explanation of each step here, see [this section](/docs/modules/data_connection/)."
]
},
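For orientation, a hedged sketch of a typical retriever setup; the URL, splitter settings, and the FAISS/OpenAIEmbeddings choices are illustrative stand-ins, not prescribed by this notebook:

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

# Illustrative data source; substitute your own documents.
docs = WebBaseLoader("https://docs.smith.langchain.com/overview").load()
splits = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=200
).split_documents(docs)
retriever = FAISS.from_documents(splits, OpenAIEmbeddings()).as_retriever()
```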
{
@ -211,7 +211,7 @@
"source": [
"## Create the agent\n",
"\n",
"Now that we have defined the tools, we can create the agent. We will be using an OpenAI Functions agent - for more information on this type of agent, as well as other options, see [this guide](./agent_types)\n",
"Now that we have defined the tools, we can create the agent. We will be using an OpenAI Functions agent - for more information on this type of agent, as well as other options, see [this guide](/docs/modules/agents/agent_types/).\n",
"\n",
"First, we choose the LLM we want to be guiding the agent."
]
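A one-line sketch of that choice, assuming OpenAI as the provider:

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
```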
@ -273,7 +273,7 @@
"id": "f8014c9d",
"metadata": {},
"source": [
"Now, we can initalize the agent with the LLM, the prompt, and the tools. The agent is responsible for taking in input and deciding what actions to take. Crucially, the Agent does not execute those actions - that is done by the AgentExecutor (next step). For more information about how to think about these components, see our [conceptual guide](./concepts)"
"Now, we can initalize the agent with the LLM, the prompt, and the tools. The agent is responsible for taking in input and deciding what actions to take. Crucially, the Agent does not execute those actions - that is done by the AgentExecutor (next step). For more information about how to think about these components, see our [conceptual guide](/docs/modules/agents/concepts/)."
]
},
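Sketched out, assuming `llm` and `tools` from the earlier steps; the hub handle is illustrative and pulling it requires the `langchainhub` package:

```python
from langchain import hub
from langchain.agents import create_openai_functions_agent

# Illustrative prompt; any prompt exposing the expected variables works.
prompt = hub.pull("hwchase17/openai-functions-agent")
agent = create_openai_functions_agent(llm, tools, prompt)
```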
{
@ -293,7 +293,7 @@
"id": "1a58c9f8",
"metadata": {},
"source": [
"Finally, we combine the agent (the brains) with the tools inside the AgentExecutor (which will repeatedly call the agent and execute tools). For more information about how to think about these components, see our [conceptual guide](./concepts)"
"Finally, we combine the agent (the brains) with the tools inside the AgentExecutor (which will repeatedly call the agent and execute tools)."
]
},
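A minimal sketch of that combination, reusing `agent` and `tools` from above (the input question is illustrative):

```python
from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "how can LangSmith help with testing?"})
```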
{
@ -544,7 +544,7 @@
"id": "07b3bcf2",
"metadata": {},
"source": [
"If we want to keep track of these messages automatically, we can wrap this in a RunnableWithMessageHistory. For more information on how to use this, see [this guide](/docs/expression_language/how_to/message_history)"
"If we want to keep track of these messages automatically, we can wrap this in a RunnableWithMessageHistory. For more information on how to use this, see [this guide](/docs/expression_language/how_to/message_history/)."
]
},
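A hedged sketch, assuming `agent_executor` from above and a single in-memory history (real applications would key histories by `session_id`):

```python
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

history = ChatMessageHistory()
agent_with_history = RunnableWithMessageHistory(
    agent_executor,
    # Toy factory: returns the same history regardless of session id.
    lambda session_id: history,
    input_messages_key="input",
    history_messages_key="chat_history",
)
agent_with_history.invoke(
    {"input": "hi! I'm bob"},
    config={"configurable": {"session_id": "demo"}},
)
```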
{
@ -673,7 +673,7 @@
"source": [
"## Conclusion\n",
"\n",
"That's a wrap! In this quick start we covered how to create a simple agent. Agents are a complex topic, and there's lot to learn! Head back to the [main agent page](./) to find more resources on conceptual guides, different types of agents, how to create custom tools, and more!"
"That's a wrap! In this quick start we covered how to create a simple agent. Agents are a complex topic, and there's lot to learn! Head back to the [main agent page](/docs/modules/agents/) to find more resources on conceptual guides, different types of agents, how to create custom tools, and more!"
@ -109,8 +109,4 @@ There are a few parsers dedicated to working with OpenAI function calling. They
### Agent Output Parsers
[Agents](../agents) are systems that use language models to determine what steps to take. The output of a language model therefore needs to be parsed into some schema that can represent what actions (if any) are to be taken. AgentOutputParsers are responsible for taking raw LLM or ChatModel output and converting it to that schema. The logic inside these output parsers can differ depending on the model and prompting strategy being used.
[Agents](/docs/modules/agents/) are systems that use language models to determine what steps to take. The output of a language model therefore needs to be parsed into some schema that can represent what actions (if any) are to be taken. AgentOutputParsers are responsible for taking raw LLM or ChatModel output and converting it to that schema. The logic inside these output parsers can differ depending on the model and prompting strategy being used.
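As a toy illustration of that contract (the `search` tool name and the `Final Answer:` trigger are made up for the example), a parser maps raw model text to either an action or a finish:

```python
from langchain.agents import AgentOutputParser
from langchain_core.agents import AgentAction, AgentFinish

class FinalAnswerParser(AgentOutputParser):
    """Toy parser: 'Final Answer:' ends the run; anything else calls a search tool."""

    def parse(self, text: str):
        if "Final Answer:" in text:
            answer = text.split("Final Answer:")[-1].strip()
            return AgentFinish(return_values={"output": answer}, log=text)
        return AgentAction(tool="search", tool_input=text.strip(), log=text)
```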
"The goal of few-shot prompt templates are to dynamically select examples based on an input, and then format the examples in a final prompt to provide for the model.\n",
"\n",
"\n",
"**Note:** The following code examples are for chat models. For similar few-shot prompt examples for completion models (LLMs), see the [few-shot prompt templates](few_shot_examples) guide."
"**Note:** The following code examples are for chat models. For similar few-shot prompt examples for completion models (LLMs), see the [few-shot prompt templates](/docs/modules/model_io/prompts/few_shot_examples/) guide."
"You can create custom prompt templates that format the prompt in any way you want.\n",
"For more information, see [Prompt Template Composition](./composition).\n",
"For more information, see [Prompt Template Composition](/docs/modules/model_io/prompts/composition/).\n",
"\n",
"## `ChatPromptTemplate`\n",
"\n",
"The prompt to [chat models](../chat) is a list of [chat messages](/docs/modules/model_io/chat/message_types).\n",
"The prompt to [chat models](/docs/modules/model_io/chat)/ is a list of [chat messages](/docs/modules/model_io/chat/message_types/).\n",
"\n",
"Each chat message is associated with content, and an additional parameter called `role`.\n",
"For example, in the OpenAI [Chat Completions API](https://platform.openai.com/docs/guides/chat/introduction), a chat message can be associated with an AI assistant, a human or a system role.\n",
The quick start will cover the basics of working with language models. It will introduce the two different types of models - LLMs and ChatModels. It will then cover how to use PromptTemplates to format the inputs to these models, and how to use Output Parsers to work with the outputs. For a deeper conceptual guide into these topics - please see [this documentation](./concepts)
The quick start will cover the basics of working with language models. It will introduce the two different types of models - LLMs and ChatModels. It will then cover how to use PromptTemplates to format the inputs to these models, and how to use Output Parsers to work with the outputs.
## Models
For this getting started guide, we will provide a few options: using an API from a provider like Anthropic or OpenAI, or running a local open-source model via Ollama.
@ -132,7 +132,6 @@ You can initialize them with parameters like `temperature` and others, and pass
The main difference between them is their input and output schemas.
The LLM objects take a string as input and output a string.
The ChatModel objects take a list of messages as input and output a message.
For a deeper conceptual explanation of this difference please see [this documentation](./concepts)
We can see the difference between an LLM and a ChatModel when we invoke them.
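A hedged sketch of that difference, assuming OpenAI-backed models (the outputs noted in comments are illustrative):

```python
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI, OpenAI

llm = OpenAI()
chat_model = ChatOpenAI()

text = "What would be a good company name for a company that makes colorful socks?"

llm.invoke(text)                                 # -> a plain string
chat_model.invoke([HumanMessage(content=text)])  # -> an AIMessage
```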
Note that we are using the `|` syntax to join these components together.
This `|` syntax is powered by the LangChain Expression Language (LCEL) and relies on the universal `Runnable` interface that all of these objects implement.
To learn more about LCEL, read the documentation [here](/docs/expression_language).
To learn more about LCEL, read the documentation [here](/docs/expression_language/).
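A minimal sketch of such a chain, assuming an OpenAI chat model:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
# The | operator pipes each component's output into the next.
chain = prompt | ChatOpenAI() | StrOutputParser()

chain.invoke({"topic": "ice cream"})  # -> a string joke
```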
## Conclusion
That's it for getting started with prompts, models, and output parsers! This only scratched the surface of what there is to learn. For more information, check out:
- The [conceptual guide](./concepts) for information about the concepts presented here
- The [prompt section](./prompts) for information on how to work with prompt templates
- The [LLM section](./llms) for more information on the LLM interface
- The [ChatModel section](./chat) for more information on the ChatModel interface
- The [output parser section](./output_parsers) for information about the different types of output parsers.
- The [prompt section](/docs/modules/model_io/prompts/) for information on how to work with prompt templates
- The [LLM section](/docs/modules/model_io/llms/) for more information on the LLM interface
- The [ChatModel section](/docs/modules/model_io/chat/) for more information on the ChatModel interface
- The [output parser section](/docs/modules/model_io/output_parsers/) for information about the different types of output parsers.
- [Interface](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.transformers.BaseDocumentTransformer.html): API reference for the base interface.
## 3. Indexing: Store {#indexing-store}
@ -344,9 +344,9 @@ similarity —we measure the cosine of the angle between each pair of
embeddings (which are high-dimensional vectors).
We can embed and store all of our document splits in a single command
using the [Chroma](../../../docs/integrations/vectorstores/chroma)
using the [Chroma](/docs/integrations/vectorstores/chroma)
`Embeddings`: Wrapper around a text embedding model, used for converting
text to embeddings.
- [Docs](../../../docs/modules/data_connection/text_embedding): Detailed documentation on how to use embeddings.
- [Integrations](../../../docs/integrations/text_embedding/): 30+ integrations to choose from.
- [Docs](/docs/modules/data_connection/text_embedding): Detailed documentation on how to use embeddings.
- [Integrations](/docs/integrations/text_embedding/): 30+ integrations to choose from.
- [Interface](https://api.python.langchain.com/en/latest/embeddings/langchain_core.embeddings.Embeddings.html): API reference for the base interface.
`VectorStore`: Wrapper around a vector database, used for storing and
querying embeddings.
- [Docs](../../../docs/modules/data_connection/vectorstores/): Detailed documentation on how to use vector stores.
- [Integrations](../../../docs/integrations/vectorstores/): 40+ integrations to choose from.
- [Docs](/docs/modules/data_connection/vectorstores/): Detailed documentation on how to use vector stores.
- [Integrations](/docs/integrations/vectorstores/): 40+ integrations to choose from.
- [Interface](https://api.python.langchain.com/en/latest/vectorstores/langchain_core.vectorstores.VectorStore.html): API reference for the base interface.
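To make the `VectorStore` wrapper concrete, a hedged sketch of the store-then-query flow used in this tutorial, assuming `splits` is the list of Documents produced by the splitting step above:

```python
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings

# Embed and store all document splits in one command.
vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())

# Query by embedding similarity, or expose the store as a retriever.
docs = vectorstore.similarity_search("What is Task Decomposition?", k=4)
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})
```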
This completes the **Indexing** portion of the pipeline. At this point
@ -386,12 +386,12 @@ a model, and returns an answer.
First we need to define our logic for searching over documents.
`ChatModel`: An LLM-backed chat model. Takes in a sequence of messages
and returns a message.
- [Docs](../../../docs/modules/model_io/chat/)
- [Integrations](../../../docs/integrations/chat/): 25+ integrations to choose from.
- [Docs](/docs/modules/model_io/chat/)
- [Integrations](/docs/integrations/chat/): 25+ integrations to choose from.
- [Interface](https://api.python.langchain.com/en/latest/language_models/langchain_core.language_models.chat_models.BaseChatModel.html): API reference for the base interface.
`LLM`: A text-in-text-out LLM. Takes in a string and returns a string.
- [Docs](../../../docs/modules/model_io/llms)
- [Integrations](../../../docs/integrations/llms): 75+ integrations to choose from.
- [Docs](/docs/modules/model_io/llms)
- [Integrations](/docs/integrations/llms): 75+ integrations to choose from.
- [Interface](https://api.python.langchain.com/en/latest/language_models/langchain_core.language_models.llms.BaseLLM.html): API reference for the base interface.