From 9fa17bfabee4ed22491d8424d0f0d72182392847 Mon Sep 17 00:00:00 2001 From: ccurme Date: Thu, 9 May 2024 11:05:17 -0400 Subject: [PATCH] docs; fix links in v0.2.0 (#21483) --- docs/docs/additional_resources/tutorials.mdx | 2 +- docs/docs/concepts.mdx | 2 +- .../documentation/style_guide.mdx | 22 +++++++++---------- .../how_to/chat_token_usage_tracking.ipynb | 2 +- docs/docs/how_to/chatbots_memory.ipynb | 2 +- docs/docs/how_to/chatbots_retrieval.ipynb | 6 ++--- docs/docs/how_to/chatbots_tools.ipynb | 6 ++--- docs/docs/how_to/custom_retriever.ipynb | 2 +- docs/docs/how_to/document_loader_html.ipynb | 2 +- docs/docs/how_to/function_calling.ipynb | 10 ++++----- .../how_to/output_parser_structured.ipynb | 2 +- docs/docs/how_to/query_few_shot.ipynb | 2 +- docs/docs/how_to/sequence.ipynb | 2 +- docs/docs/how_to/streaming.ipynb | 6 ++--- docs/docs/how_to/tool_calling.ipynb | 6 ++--- docs/docs/how_to/tools_chain.ipynb | 4 ++-- docs/docs/how_to/tools_multiple.ipynb | 4 ++-- docs/docs/how_to/tools_parallel.ipynb | 2 +- docs/docs/how_to/tools_prompting.ipynb | 2 +- .../integrations/callbacks/trubrics.ipynb | 2 +- docs/docs/integrations/chat/cohere.ipynb | 4 ++-- docs/docs/integrations/chat/friendli.ipynb | 2 +- .../chat/google_vertex_ai_palm.ipynb | 2 +- docs/docs/integrations/chat/huggingface.ipynb | 4 ++-- docs/docs/integrations/chat/llama2_chat.ipynb | 4 ++-- docs/docs/integrations/chat/mistralai.ipynb | 2 +- .../chat/nvidia_ai_endpoints.ipynb | 2 +- docs/docs/integrations/chat/ollama.ipynb | 2 +- .../document_loaders/google_bigtable.ipynb | 2 +- .../google_cloud_sql_mssql.ipynb | 2 +- .../google_cloud_sql_mysql.ipynb | 2 +- .../document_loaders/google_datastore.ipynb | 2 +- .../document_loaders/google_el_carro.ipynb | 2 +- .../document_loaders/google_firestore.ipynb | 2 +- .../google_memorystore_redis.ipynb | 2 +- .../document_loaders/google_spanner.ipynb | 2 +- .../document_loaders/tomarkdown.ipynb | 14 ++++++------ docs/docs/integrations/graphs/memgraph.ipynb | 2 +- 
docs/docs/integrations/llms/cohere.ipynb | 4 ++-- docs/docs/integrations/llms/friendli.ipynb | 2 +- .../llms/google_vertex_ai_palm.ipynb | 4 ++-- docs/docs/integrations/llms/llamafile.ipynb | 2 +- docs/docs/integrations/llms/ollama.ipynb | 2 +- docs/docs/integrations/platforms/aws.mdx | 2 +- docs/docs/integrations/platforms/index.mdx | 2 +- .../docs/integrations/platforms/microsoft.mdx | 2 +- docs/docs/integrations/platforms/openai.mdx | 4 ++-- .../integrations/providers/motherduck.mdx | 2 +- docs/docs/integrations/providers/ollama.mdx | 2 +- docs/docs/integrations/providers/redis.mdx | 2 +- docs/docs/integrations/providers/spacy.mdx | 2 +- .../integrations/providers/unstructured.mdx | 4 ++-- .../providers/vectara/vectara_summary.ipynb | 2 +- .../retrievers/fleet_context.ipynb | 2 +- .../integrations/retrievers/ragatouille.ipynb | 2 +- .../docs/integrations/retrievers/tavily.ipynb | 2 +- .../tools/passio_nutrition_ai.ipynb | 6 ++--- .../integrations/tools/reddit_search.ipynb | 2 +- .../integrations/vectorstores/faiss.ipynb | 4 ++-- .../vectorstores/timescalevector.ipynb | 4 ++-- .../integrations/vectorstores/vespa.ipynb | 2 +- docs/docs/introduction.mdx | 2 +- docs/docs/tutorials/graph.ipynb | 8 +++---- docs/docs/tutorials/llm_chain.ipynb | 2 +- docs/docs/tutorials/local_rag.ipynb | 4 ++-- docs/docs/tutorials/qa_chat_history.ipynb | 6 ++--- docs/docs/tutorials/rag.ipynb | 2 +- 67 files changed, 114 insertions(+), 114 deletions(-) diff --git a/docs/docs/additional_resources/tutorials.mdx b/docs/docs/additional_resources/tutorials.mdx index 9bc9dc53c7..f1f781f395 100644 --- a/docs/docs/additional_resources/tutorials.mdx +++ b/docs/docs/additional_resources/tutorials.mdx @@ -48,7 +48,7 @@ - [by Rabbitmetrics](https://youtu.be/aywZrzNaKjs) - [by Ivan Reznikov](https://medium.com/@ivanreznikov/langchain-101-course-updated-668f7b41d6cb) -## [Documentation: Use cases](/docs/use_cases) +## [Documentation: Use cases](/docs/how_to#use-cases) --------------------- diff --git 
a/docs/docs/concepts.mdx b/docs/docs/concepts.mdx index 7209a4edcf..4db1999602 100644 --- a/docs/docs/concepts.mdx +++ b/docs/docs/concepts.mdx @@ -185,7 +185,7 @@ Tool calling allows a model to respond to a given prompt by generating output th matches a user-defined schema. While the name implies that the model is performing some action, this is actually not the case! The model is coming up with the arguments to a tool, and actually running the tool (or not) is up to the user - -for example, if you want to [extract output matching some schema](/docs/tutorial/extraction/) +for example, if you want to [extract output matching some schema](/docs/tutorials/extraction) from unstructured text, you could give the model an "extraction" tool that takes parameters matching the desired schema, then treat the generated output as your final result. diff --git a/docs/docs/contributing/documentation/style_guide.mdx b/docs/docs/contributing/documentation/style_guide.mdx index e8da942595..cd1cfe5a5f 100644 --- a/docs/docs/contributing/documentation/style_guide.mdx +++ b/docs/docs/contributing/documentation/style_guide.mdx @@ -16,15 +16,15 @@ LangChain's documentation aspires to follow the [Diataxis framework](https://dia Under this framework, all documentation falls under one of four categories: - **Tutorials**: Lessons that take the reader by the hand through a series of conceptual steps to complete a project. - - An example of this is our [LCEL streaming guide](/docs/expression_language/streaming). - - Our guides on [custom components](/docs/modules/model_io/chat/custom_chat_model) is another one. + - An example of this is our [LCEL streaming guide](/docs/how_to/streaming). + - Our guides on [custom components](/docs/how_to/custom_chat_model) is another one. - **How-to guides**: Guides that take the reader through the steps required to solve a real-world problem. - - The clearest examples of this are our [Use case](/docs/use_cases/) quickstart pages. 
+ - The clearest examples of this are our [Use case](/docs/how_to#use-cases) quickstart pages. - **Reference**: Technical descriptions of the machinery and how to operate it. - - Our [Runnable interface](/docs/expression_language/interface) page is an example of this. + - Our [Runnable interface](/docs/concepts#interface) page is an example of this. - The [API reference pages](https://api.python.langchain.com/) are another. - **Explanation**: Explanations that clarify and illuminate a particular topic. - - The [LCEL primitives pages](/docs/expression_language/primitives/sequence) are an example of this. + - The [LCEL primitives pages](/docs/how_to/sequence) are an example of this. Each category serves a distinct purpose and requires a specific approach to writing and structuring the content. @@ -35,14 +35,14 @@ when contributing new documentation: ### Getting started -The [getting started section](/docs/get_started/introduction) includes a high-level introduction to LangChain, a quickstart that +The [getting started section](/docs/introduction) includes a high-level introduction to LangChain, a quickstart that tours LangChain's various features, and logistical instructions around installation and project setup. It contains elements of **How-to guides** and **Explanations**. ### Use cases -[Use cases](/docs/use_cases/) are guides that are meant to show how to use LangChain to accomplish a specific task (RAG, information extraction, etc.). +[Use cases](/docs/how_to#use-cases) are guides that are meant to show how to use LangChain to accomplish a specific task (RAG, information extraction, etc.). The quickstarts should be good entrypoints for first-time LangChain developers who prefer to learn by getting something practical prototyped, then taking the pieces apart retrospectively. These should mirror what LangChain is good at. 
@@ -55,7 +55,7 @@ The below sections are listed roughly in order of increasing level of abstractio ### Expression Language -[LangChain Expression Language (LCEL)](/docs/expression_language/) is the fundamental way that most LangChain components fit together, and this section is designed to teach +[LangChain Expression Language (LCEL)](/docs/concepts#langchain-expression-language) is the fundamental way that most LangChain components fit together, and this section is designed to teach developers how to use it to build with LangChain's primitives effectively. This section should contains **Tutorials** that teach how to stream and use LCEL primitives for more abstract tasks, **Explanations** of specific behaviors, @@ -63,7 +63,7 @@ and some **References** for how to use different methods in the Runnable interfa ### Components -The [components section](/docs/modules) covers concepts one level of abstraction higher than LCEL. +The [components section](/docs/concepts) covers concepts one level of abstraction higher than LCEL. Abstract base classes like `BaseChatModel` and `BaseRetriever` should be covered here, as well as core implementations of these base classes, such as `ChatPromptTemplate` and `RecursiveCharacterTextSplitter`. Customization guides belong here too. @@ -88,7 +88,7 @@ Concepts covered in `Integrations` should generally exist in `langchain_communit ### Guides and Ecosystem -The [Guides](/docs/guides) and [Ecosystem](/docs/langsmith/) sections should contain guides that address higher-level problems than the sections above. +The [Guides](/docs/tutorials) and [Ecosystem](/docs/langsmith/) sections should contain guides that address higher-level problems than the sections above. This includes, but is not limited to, considerations around productionization and development workflows. These should contain mostly **How-to guides**, **Explanations**, and **Tutorials**. @@ -102,7 +102,7 @@ LangChain's API references. 
Should act as **References** (as the name implies) w We have set up our docs to assist a new developer to LangChain. Let's walk through the intended path: - The developer lands on https://python.langchain.com, and reads through the introduction and the diagram. -- If they are just curious, they may be drawn to the [Quickstart](/docs/get_started/quickstart) to get a high-level tour of what LangChain contains. +- If they are just curious, they may be drawn to the [Quickstart](/docs/tutorials/llm_chain) to get a high-level tour of what LangChain contains. - If they have a specific task in mind that they want to accomplish, they will be drawn to the Use-Case section. The use-case should provide a good, concrete hook that shows the value LangChain can provide them and be a good entrypoint to the framework. - They can then move to learn more about the fundamentals of LangChain through the Expression Language sections. - Next, they can learn about LangChain's various components and integrations. diff --git a/docs/docs/how_to/chat_token_usage_tracking.ipynb b/docs/docs/how_to/chat_token_usage_tracking.ipynb index de4a65236c..95d7c6181d 100644 --- a/docs/docs/how_to/chat_token_usage_tracking.ipynb +++ b/docs/docs/how_to/chat_token_usage_tracking.ipynb @@ -25,7 +25,7 @@ "source": [ "## Using AIMessage.response_metadata\n", "\n", - "A number of model providers return token usage information as part of the chat generation response. When available, this is included in the [`AIMessage.response_metadata`](/docs/modules/model_io/chat/response_metadata/) field. Here's an example with OpenAI:" + "A number of model providers return token usage information as part of the chat generation response. When available, this is included in the [`AIMessage.response_metadata`](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.response_metadata) field. 
Here's an example with OpenAI:" ] }, { diff --git a/docs/docs/how_to/chatbots_memory.ipynb b/docs/docs/how_to/chatbots_memory.ipynb index 958047a278..2d5f35d17c 100644 --- a/docs/docs/how_to/chatbots_memory.ipynb +++ b/docs/docs/how_to/chatbots_memory.ipynb @@ -142,7 +142,7 @@ "\n", "## Chat history\n", "\n", - "It's perfectly fine to store and pass messages directly as an array, but we can use LangChain's built-in [message history class](/docs/modules/memory/chat_messages/) to store and load messages as well. Instances of this class are responsible for storing and loading chat messages from persistent storage. LangChain integrates with many providers - you can see a [list of integrations here](/docs/integrations/memory) - but for this demo we will use an ephemeral demo class.\n", + "It's perfectly fine to store and pass messages directly as an array, but we can use LangChain's built-in [message history class](https://api.python.langchain.com/en/latest/langchain_api_reference.html#module-langchain.memory) to store and load messages as well. Instances of this class are responsible for storing and loading chat messages from persistent storage. LangChain integrates with many providers - you can see a [list of integrations here](/docs/integrations/memory) - but for this demo we will use an ephemeral demo class.\n", "\n", "Here's an example of the API:" ] diff --git a/docs/docs/how_to/chatbots_retrieval.ipynb b/docs/docs/how_to/chatbots_retrieval.ipynb index 3eed67278d..757dc0cfca 100644 --- a/docs/docs/how_to/chatbots_retrieval.ipynb +++ b/docs/docs/how_to/chatbots_retrieval.ipynb @@ -15,7 +15,7 @@ "source": [ "# How to add retrieval to chatbots\n", "\n", - "Retrieval is a common technique chatbots use to augment their responses with data outside a chat model's training data. 
This section will cover how to implement retrieval in the context of chatbots, but it's worth noting that retrieval is a very subtle and deep topic - we encourage you to explore [other parts of the documentation](/docs/use_cases/question_answering/) that go into greater depth!\n", + "Retrieval is a common technique chatbots use to augment their responses with data outside a chat model's training data. This section will cover how to implement retrieval in the context of chatbots, but it's worth noting that retrieval is a very subtle and deep topic - we encourage you to explore [other parts of the documentation](/docs/how_to#qa-with-rag) that go into greater depth!\n", "\n", "## Setup\n", "\n", @@ -80,7 +80,7 @@ "source": [ "## Creating a retriever\n", "\n", - "We'll use [the LangSmith documentation](https://docs.smith.langchain.com/overview) as source material and store the content in a vectorstore for later retrieval. Note that this example will gloss over some of the specifics around parsing and storing a data source - you can see more [in-depth documentation on creating retrieval systems here](/docs/use_cases/question_answering/).\n", + "We'll use [the LangSmith documentation](https://docs.smith.langchain.com/overview) as source material and store the content in a vectorstore for later retrieval. Note that this example will gloss over some of the specifics around parsing and storing a data source - you can see more [in-depth documentation on creating retrieval systems here](/docs/how_to#qa-with-rag).\n", "\n", "Let's use a document loader to pull text from the docs:" ] @@ -737,7 +737,7 @@ "source": [ "## Further reading\n", "\n", - "This guide only scratches the surface of retrieval techniques. For more on different ways of ingesting, preparing, and retrieving the most relevant data, check out [this section](/docs/modules/data_connection/) of the docs." + "This guide only scratches the surface of retrieval techniques. 
For more on different ways of ingesting, preparing, and retrieving the most relevant data, check out the relevant how-to guides [here](/docs/how_to#document-loaders)." ] } ], diff --git a/docs/docs/how_to/chatbots_tools.ipynb b/docs/docs/how_to/chatbots_tools.ipynb index f70e21e0fb..4879debe34 100644 --- a/docs/docs/how_to/chatbots_tools.ipynb +++ b/docs/docs/how_to/chatbots_tools.ipynb @@ -17,11 +17,11 @@ "\n", "This section will cover how to create conversational agents: chatbots that can interact with other systems and APIs using tools.\n", "\n", - "Before reading this guide, we recommend you read both [the chatbot quickstart](/docs/use_cases/chatbots/quickstart) in this section and be familiar with [the documentation on agents](/docs/tutorials/agents).\n", + "Before reading this guide, we recommend you read both [the chatbot quickstart](/docs/tutorials/chatbot) in this section and be familiar with [the documentation on agents](/docs/tutorials/agents).\n", "\n", "## Setup\n", "\n", - "For this guide, we'll be using an [OpenAI tools agent](/docs/modules/agents/agent_types/openai_tools) with a single tool for searching the web. The default will be powered by [Tavily](/docs/integrations/tools/tavily_search), but you can switch it out for any similar tool. The rest of this section will assume you're using Tavily.\n", + "For this guide, we'll be using an [OpenAI tools agent](/docs/how_to/agent_executor) with a single tool for searching the web. The default will be powered by [Tavily](/docs/integrations/tools/tavily_search), but you can switch it out for any similar tool. 
The rest of this section will assume you're using Tavily.\n", "\n", "You'll need to [sign up for an account](https://tavily.com/) on the Tavily website, and install the following packages:" ] @@ -437,7 +437,7 @@ "\n", "Other types agents can also support conversational responses too - for more, check out the [agents section](/docs/tutorials/agents).\n", "\n", - "For more on tool usage, you can also check out [this use case section](/docs/use_cases/tool_use/)." + "For more on tool usage, you can also check out [this use case section](/docs/how_to#tools)." ] } ], diff --git a/docs/docs/how_to/custom_retriever.ipynb b/docs/docs/how_to/custom_retriever.ipynb index c97795a721..0ab0191164 100644 --- a/docs/docs/how_to/custom_retriever.ipynb +++ b/docs/docs/how_to/custom_retriever.ipynb @@ -38,7 +38,7 @@ "The logic inside of `_get_relevant_documents` can involve arbitrary calls to a database or to the web using requests.\n", "\n", ":::{.callout-tip}\n", - "By inherting from `BaseRetriever`, your retriever automatically becomes a LangChain [Runnable](/docs/expression_language/interface) and will gain the standard `Runnable` functionality out of the box!\n", + "By inherting from `BaseRetriever`, your retriever automatically becomes a LangChain [Runnable](/docs/concepts#interface) and will gain the standard `Runnable` functionality out of the box!\n", ":::\n", "\n", "\n", diff --git a/docs/docs/how_to/document_loader_html.ipynb b/docs/docs/how_to/document_loader_html.ipynb index 5fd2686500..d23cc54101 100644 --- a/docs/docs/how_to/document_loader_html.ipynb +++ b/docs/docs/how_to/document_loader_html.ipynb @@ -11,7 +11,7 @@ "\n", "This covers how to load `HTML` documents into a LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) objects that we can use downstream.\n", "\n", - "Parsing HTML files often requires specialized tools. 
Here we demonstrate parsing via [Unstructured](https://unstructured-io.github.io/unstructured/) and [BeautifulSoup4](https://beautiful-soup-4.readthedocs.io/en/latest/), which can be installed via pip. Head over to the integrations page to find integrations with additional services, such as [Azure AI Document Intelligence](/docs/0.2.x/integrations/document_loaders/azure_document_intelligence) or [FireCrawl](/docs/0.2.x/integrations/document_loaders/firecrawl).\n", + "Parsing HTML files often requires specialized tools. Here we demonstrate parsing via [Unstructured](https://unstructured-io.github.io/unstructured/) and [BeautifulSoup4](https://beautiful-soup-4.readthedocs.io/en/latest/), which can be installed via pip. Head over to the integrations page to find integrations with additional services, such as [Azure AI Document Intelligence](/docs/integrations/document_loaders/azure_document_intelligence) or [FireCrawl](/docs/integrations/document_loaders/firecrawl).\n", "\n", "## Loading HTML with Unstructured" ] diff --git a/docs/docs/how_to/function_calling.ipynb b/docs/docs/how_to/function_calling.ipynb index f93942bf99..11d09f37f5 100644 --- a/docs/docs/how_to/function_calling.ipynb +++ b/docs/docs/how_to/function_calling.ipynb @@ -48,7 +48,7 @@ "receive the tool call, execute it, and return the output to the LLM to inform its \n", "response. LangChain includes a suite of [built-in tools](/docs/integrations/tools/) \n", "and supports several methods for defining your own [custom tools](/docs/how_to/custom_tools). \n", - "Tool-calling is extremely useful for building [tool-using chains and agents](/docs/use_cases/tool_use), \n", + "Tool-calling is extremely useful for building [tool-using chains and agents](/docs/how_to#tools), \n", "and for getting structured outputs from models more generally.\n", "\n", "Providers adopt different conventions for formatting tool schemas and tool calls. \n", @@ -262,7 +262,7 @@ "are populated in the `.invalid_tool_calls` attribute. 
An `InvalidToolCall` can have \n", "a name, string arguments, identifier, and error message.\n", "\n", - "If desired, [output parsers](/docs/modules/model_io/output_parsers) can further \n", + "If desired, [output parsers](/docs/how_to#output-parsers) can further \n", "process the output. For example, we can convert back to the original Pydantic class:" ] }, @@ -351,7 +351,7 @@ "id": "55046320-3466-4ec1-a1f8-336234ba9019", "metadata": {}, "source": [ - "Note that adding message chunks will merge their corresponding tool call chunks. This is the principle by which LangChain's various [tool output parsers](/docs/modules/model_io/output_parsers/types/openai_tools/) support streaming.\n", + "Note that adding message chunks will merge their corresponding tool call chunks. This is the principle by which LangChain's various [tool output parsers](/docs/how_to/output_parser_structured) support streaming.\n", "\n", "For example, below we accumulate tool call chunks:" ] @@ -669,7 +669,7 @@ "## Next steps\n", "\n", "- **Output parsing**: See [OpenAI Tools output\n", - " parsers](/docs/modules/model_io/output_parsers/types/openai_tools/)\n", + " parsers](/docs/how_to/output_parser_structured)\n", " and [OpenAI Functions output\n", " parsers](/docs/modules/model_io/output_parsers/types/openai_functions/)\n", " to learn about extracting the function calling API responses into\n", @@ -678,7 +678,7 @@ " handle creating a structured output chain for you.\n", "- **Tool use**: See how to construct chains and agents that\n", " call the invoked tools in [these\n", - " guides](/docs/use_cases/tool_use/)." + " guides](/docs/how_to#tools)." 
] } ], diff --git a/docs/docs/how_to/output_parser_structured.ipynb b/docs/docs/how_to/output_parser_structured.ipynb index f1bf239fb9..b724038a0a 100644 --- a/docs/docs/how_to/output_parser_structured.ipynb +++ b/docs/docs/how_to/output_parser_structured.ipynb @@ -94,7 +94,7 @@ "source": [ "## LCEL\n", "\n", - "Output parsers implement the [Runnable interface](/docs/expression_language/interface), the basic building block of the [LangChain Expression Language (LCEL)](/docs/expression_language/). This means they support `invoke`, `ainvoke`, `stream`, `astream`, `batch`, `abatch`, `astream_log` calls.\n", + "Output parsers implement the [Runnable interface](/docs/concepts#interface), the basic building block of the [LangChain Expression Language (LCEL)](/docs/concepts#langchain-expression-language). This means they support `invoke`, `ainvoke`, `stream`, `astream`, `batch`, `abatch`, `astream_log` calls.\n", "\n", "Output parsers accept a string or `BaseMessage` as input and can return an arbitrary type." ] diff --git a/docs/docs/how_to/query_few_shot.ipynb b/docs/docs/how_to/query_few_shot.ipynb index f9e50ab9b9..70dc9a0163 100644 --- a/docs/docs/how_to/query_few_shot.ipynb +++ b/docs/docs/how_to/query_few_shot.ipynb @@ -19,7 +19,7 @@ "\n", "As our query analysis becomes more complex, the LLM may struggle to understand how exactly it should respond in certain scenarios. In order to improve performance here, we can add examples to the prompt to guide the LLM.\n", "\n", - "Let's take a look at how we can add examples for the LangChain YouTube video query analyzer we built in the [Quickstart](/docs/use_cases/query_analysis/quickstart)." + "Let's take a look at how we can add examples for the LangChain YouTube video query analyzer we built in the [Quickstart](/docs/tutorials/query_analysis)." 
] }, { diff --git a/docs/docs/how_to/sequence.ipynb b/docs/docs/how_to/sequence.ipynb index 37b479ecf5..37f91fb967 100644 --- a/docs/docs/how_to/sequence.ipynb +++ b/docs/docs/how_to/sequence.ipynb @@ -33,7 +33,7 @@ "\n", "## The pipe operator\n", "\n", - "To show off how this works, let's go through an example. We'll walk through a common pattern in LangChain: using a [prompt template](/docs/modules/model_io/prompts/) to format input into a [chat model](/docs/modules/model_io/chat/), and finally converting the chat message output into a string with an [output parser](/docs/modules/model_io/output_parsers/).\n", + "To show off how this works, let's go through an example. We'll walk through a common pattern in LangChain: using a [prompt template](/docs/how_to#prompt-templates) to format input into a [chat model](/docs/how_to#chat-models), and finally converting the chat message output into a string with an [output parser](/docs/how_to#output-parsers).\n", "\n", "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", diff --git a/docs/docs/how_to/streaming.ipynb b/docs/docs/how_to/streaming.ipynb index 8dc73214ea..25e6e6094e 100644 --- a/docs/docs/how_to/streaming.ipynb +++ b/docs/docs/how_to/streaming.ipynb @@ -19,7 +19,7 @@ "\n", "Streaming is critical in making applications based on LLMs feel responsive to end-users.\n", "\n", - "Important LangChain primitives like [chat models](/docs/concepts/#chat-models), [output parsers](/docs/concepts/#output-parsers), [prompts](/docs/concepts/#prompt-templates), [retrievers](/docs/concepts/#retrievers), and [agents](/docs/concepts/#agents) implement the LangChain [Runnable Interface](/docs/expression_language/interface).\n", + "Important LangChain primitives like [chat models](/docs/concepts/#chat-models), [output parsers](/docs/concepts/#output-parsers), [prompts](/docs/concepts/#prompt-templates), [retrievers](/docs/concepts/#retrievers), and [agents](/docs/concepts/#agents) implement the LangChain 
[Runnable Interface](/docs/concepts#interface).\n", "\n", "This interface provides two general approaches to stream content:\n", "\n", @@ -246,9 +246,9 @@ "id": "868bc412", "metadata": {}, "source": [ - "You might notice above that `parser` actually doesn't block the streaming output from the model, and instead processes each chunk individually. Many of the [LCEL primitives](/docs/expression_language/primitives) also support this kind of transform-style passthrough streaming, which can be very convenient when constructing apps.\n", + "You might notice above that `parser` actually doesn't block the streaming output from the model, and instead processes each chunk individually. Many of the [LCEL primitives](/docs/how_to#langchain-expression-language-lcel) also support this kind of transform-style passthrough streaming, which can be very convenient when constructing apps.\n", "\n", - "Certain runnables, like [prompt templates](/docs/modules/model_io/prompts) and [chat models](/docs/modules/model_io/chat), cannot process individual chunks and instead aggregate all previous steps. This will interrupt the streaming process. Custom functions can be [designed to return generators](/docs/expression_language/primitives/functions#streaming), which" + "Certain runnables, like [prompt templates](/docs/how_to#prompt-templates) and [chat models](/docs/how_to#chat-models), cannot process individual chunks and instead aggregate all previous steps. This will interrupt the streaming process. Custom functions can be [designed to return generators](/docs/how_to/functions#streaming), which" ] }, { diff --git a/docs/docs/how_to/tool_calling.ipynb b/docs/docs/how_to/tool_calling.ipynb index 3edef31888..3a293612bb 100644 --- a/docs/docs/how_to/tool_calling.ipynb +++ b/docs/docs/how_to/tool_calling.ipynb @@ -226,7 +226,7 @@ "are populated in the `.invalid_tool_calls` attribute. 
An `InvalidToolCall` can have \n", "a name, string arguments, identifier, and error message.\n", "\n", - "If desired, [output parsers](/docs/modules/model_io/output_parsers) can further \n", + "If desired, [output parsers](/docs/how_to#output-parsers) can further \n", "process the output. For example, we can convert back to the original Pydantic class:" ] }, @@ -309,7 +309,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Note that adding message chunks will merge their corresponding tool call chunks. This is the principle by which LangChain's various [tool output parsers](/docs/modules/model_io/output_parsers/types/openai_tools/) support streaming.\n", + "Note that adding message chunks will merge their corresponding tool call chunks. This is the principle by which LangChain's various [tool output parsers](/docs/how_to/output_parser_structured) support streaming.\n", "\n", "For example, below we accumulate tool call chunks:" ] @@ -685,7 +685,7 @@ "\n", "Now you've learned how to bind tool schemas to a chat model and to call those tools. Next, check out some more specific uses of tool calling:\n", "\n", - "- Building [tool-using chains and agents](/docs/use_cases/tool_use/)\n", + "- Building [tool-using chains and agents](/docs/how_to#tools)\n", "- Getting [structured outputs](/docs/how_to/structured_output/) from models" ] } diff --git a/docs/docs/how_to/tools_chain.ipynb b/docs/docs/how_to/tools_chain.ipynb index 8a9aa4312e..b7b1358e00 100644 --- a/docs/docs/how_to/tools_chain.ipynb +++ b/docs/docs/how_to/tools_chain.ipynb @@ -278,7 +278,7 @@ "\n", "Chains are great when we know the specific sequence of tool usage needed for any user input. But for certain use cases, how many times we use tools depends on the input. In these cases, we want to let the model itself decide how many times to use tools and in what order. 
[Agents](/docs/tutorials/agents) let us do just this.\n", "\n", - "LangChain comes with a number of built-in agents that are optimized for different use cases. Read about all the [agent types here](/docs/modules/agents/agent_types/).\n", + "LangChain comes with a number of built-in agents that are optimized for different use cases. Read about all the [agent types here](/docs/concepts#agents).\n", "\n", "We'll use the [tool calling agent](https://api.python.langchain.com/en/latest/agents/langchain.agents.tool_calling_agent.base.create_tool_calling_agent.html), which is generally the most reliable kind and the recommended one for most use cases.\n", "\n", @@ -335,7 +335,7 @@ "id": "616f9714-5b18-4eed-b88a-d38e4cb1de99", "metadata": {}, "source": [ - "Agents are also great because they make it easy to use multiple tools. To learn how to build Chains that use multiple tools, check out the [Chains with multiple tools](/docs/use_cases/tool_use/multiple_tools) page." + "Agents are also great because they make it easy to use multiple tools. To learn how to build Chains that use multiple tools, check out the [Chains with multiple tools](/docs/how_to/tools_multiple) page." ] }, { diff --git a/docs/docs/how_to/tools_multiple.ipynb b/docs/docs/how_to/tools_multiple.ipynb index 10a93a15e1..8614a61a5d 100644 --- a/docs/docs/how_to/tools_multiple.ipynb +++ b/docs/docs/how_to/tools_multiple.ipynb @@ -17,7 +17,7 @@ "source": [ "# How to use an LLM to choose between multiple tools\n", "\n", - "In our [Quickstart](/docs/use_cases/tool_use/quickstart) we went over how to build a Chain that calls a single `multiply` tool. Now let's take a look at how we might augment this chain so that it can pick from a number of tools to call. We'll focus on Chains since [Agents](/docs/tutorials/agents) can route between multiple tools by default." + "In our [Quickstart](/docs/how_to/tool_calling) we went over how to build a Chain that calls a single `multiply` tool. 
Now let's take a look at how we might augment this chain so that it can pick from a number of tools to call. We'll focus on Chains since [Agents](/docs/tutorials/agents) can route between multiple tools by default." ] }, { @@ -120,7 +120,7 @@ "id": "bbea4555-ed10-4a18-b802-e9a3071f132b", "metadata": {}, "source": [ - "The main difference between using one Tool and many is that we can't be sure which Tool the model will invoke upfront, so we cannot hardcode, like we did in the [Quickstart](/docs/use_cases/tool_use/quickstart), a specific tool into our chain. Instead we'll add `call_tools`, a `RunnableLambda` that takes the output AI message with tools calls and routes to the correct tools.\n", + "The main difference between using one Tool and many is that we can't be sure which Tool the model will invoke upfront, so we cannot hardcode, like we did in the [Quickstart](/docs/how_to/tool_calling), a specific tool into our chain. Instead we'll add `call_tools`, a `RunnableLambda` that takes the output AI message with tool calls and routes to the correct tools.\n", "\n", "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", diff --git a/docs/docs/how_to/tools_parallel.ipynb b/docs/docs/how_to/tools_parallel.ipynb index 034c13bf22..c58fbae82b 100644 --- a/docs/docs/how_to/tools_parallel.ipynb +++ b/docs/docs/how_to/tools_parallel.ipynb @@ -7,7 +7,7 @@ "source": [ "# How to call tools in parallel\n", "\n", - "In the [Chains with multiple tools](/docs/use_cases/tool_use/multiple_tools) guide we saw how to build function-calling chains that select between multiple tools. Some models, like the OpenAI models released in Fall 2023, also support parallel function calling, which allows you to invoke multiple functions (or the same function multiple times) in a single model call. Our previous chain from the multiple tools guides actually already supports this."
+ "In the [Chains with multiple tools](/docs/how_to/tools_multiple) guide we saw how to build function-calling chains that select between multiple tools. Some models, like the OpenAI models released in Fall 2023, also support parallel function calling, which allows you to invoke multiple functions (or the same function multiple times) in a single model call. Our previous chain from the multiple tools guide actually already supports this." ] }, { diff --git a/docs/docs/how_to/tools_prompting.ipynb b/docs/docs/how_to/tools_prompting.ipynb index 5dc4940bcd..f4ea7631b7 100644 --- a/docs/docs/how_to/tools_prompting.ipynb +++ b/docs/docs/how_to/tools_prompting.ipynb @@ -17,7 +17,7 @@ "source": [ "# How to use tools without function calling\n", "\n", - "In this guide we'll build a Chain that does not rely on any special model APIs (like tool calling, which we showed in the [Quickstart](/docs/use_cases/tool_use/quickstart)) and instead just prompts the model directly to invoke tools." + "In this guide we'll build a Chain that does not rely on any special model APIs (like tool calling, which we showed in the [Quickstart](/docs/how_to/tool_calling)) and instead just prompts the model directly to invoke tools." ] }, { diff --git a/docs/docs/integrations/callbacks/trubrics.ipynb b/docs/docs/integrations/callbacks/trubrics.ipynb index 4e3771c0a9..48ba49cc36 100644 --- a/docs/docs/integrations/callbacks/trubrics.ipynb +++ b/docs/docs/integrations/callbacks/trubrics.ipynb @@ -124,7 +124,7 @@ "tags": [] }, "source": [ - "Here are two examples of how to use the `TrubricsCallbackHandler` with Langchain [LLMs](/docs/modules/model_io/llms/) or [Chat Models](/docs/modules/model_io/chat/). We will use OpenAI models, so set your `OPENAI_API_KEY` key here:" + "Here are two examples of how to use the `TrubricsCallbackHandler` with LangChain [LLMs](/docs/how_to#llms) or [Chat Models](/docs/how_to#chat-models).
We will use OpenAI models, so set your `OPENAI_API_KEY` here:" ] }, { diff --git a/docs/docs/integrations/chat/cohere.ipynb b/docs/docs/integrations/chat/cohere.ipynb index 5eed6733ad..acc0bf59ec 100644 --- a/docs/docs/integrations/chat/cohere.ipynb +++ b/docs/docs/integrations/chat/cohere.ipynb @@ -77,7 +77,7 @@ "source": [ "## Usage\n", "\n", - "ChatCohere supports all [ChatModel](/docs/modules/model_io/chat/) functionality:" + "ChatCohere supports all [ChatModel](/docs/how_to#chat-models) functionality:" ] }, { @@ -201,7 +201,7 @@ "source": [ "## Chaining\n", "\n", - "You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/expression_language)" + "You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language)" ] }, { diff --git a/docs/docs/integrations/chat/friendli.ipynb b/docs/docs/integrations/chat/friendli.ipynb index 4af9f0049d..f9fc3878f1 100644 --- a/docs/docs/integrations/chat/friendli.ipynb +++ b/docs/docs/integrations/chat/friendli.ipynb @@ -71,7 +71,7 @@ "source": [ "## Usage\n", "\n", - "`FrienliChat` supports all methods of [`ChatModel`](/docs/modules/model_io/chat/) including async APIs." + "`FriendliChat` supports all methods of [`ChatModel`](/docs/how_to#chat-models) including async APIs." ] }, { diff --git a/docs/docs/integrations/chat/google_vertex_ai_palm.ipynb b/docs/docs/integrations/chat/google_vertex_ai_palm.ipynb index d8a9155d42..aa3156edae 100644 --- a/docs/docs/integrations/chat/google_vertex_ai_palm.ipynb +++ b/docs/docs/integrations/chat/google_vertex_ai_palm.ipynb @@ -509,7 +509,7 @@ "source": [ "## Asynchronous calls\n", "\n", - "We can make asynchronous calls via the Runnables [Async Interface](/docs/expression_language/interface)." + "We can make asynchronous calls via the Runnables [Async Interface](/docs/concepts#interface)."
] }, { diff --git a/docs/docs/integrations/chat/huggingface.ipynb b/docs/docs/integrations/chat/huggingface.ipynb index 5688c93f7d..03b85c667d 100644 --- a/docs/docs/integrations/chat/huggingface.ipynb +++ b/docs/docs/integrations/chat/huggingface.ipynb @@ -10,7 +10,7 @@ "\n", "In particular, we will:\n", "1. Utilize the [HuggingFaceTextGenInference](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/huggingface_text_gen_inference.py), [HuggingFaceEndpoint](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/huggingface_endpoint.py), or [HuggingFaceHub](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/huggingface_hub.py) integrations to instantiate an `LLM`.\n", - "2. Utilize the `ChatHuggingFace` class to enable any of these LLMs to interface with LangChain's [Chat Messages](/docs/modules/model_io/chat/#messages) abstraction.\n", + "2. Utilize the `ChatHuggingFace` class to enable any of these LLMs to interface with LangChain's [Chat Messages](/docs/concepts#chat-models) abstraction.\n", "3. Demonstrate how to use an open-source LLM to power an `ChatAgent` pipeline\n", "\n", "\n", @@ -280,7 +280,7 @@ "source": [ "## 3. Take it for a spin as an agent!\n", "\n", - "Here we'll test out `Zephyr-7B-beta` as a zero-shot `ReAct` Agent.
The example below is taken from [here](https://python.langchain.com/v0.1/docs/modules/agents/agent_types/react/#using-chat-models).\n", "\n", "> Note: To run this section, you'll need to have a [SerpAPI Token](https://serpapi.com/) saved as an environment variable: `SERPAPI_API_KEY`" ] diff --git a/docs/docs/integrations/chat/llama2_chat.ipynb b/docs/docs/integrations/chat/llama2_chat.ipynb index f3e6059fb4..dbcfce36fd 100644 --- a/docs/docs/integrations/chat/llama2_chat.ipynb +++ b/docs/docs/integrations/chat/llama2_chat.ipynb @@ -17,9 +17,9 @@ "source": [ "# Llama2Chat\n", "\n", - "This notebook shows how to augment Llama-2 `LLM`s with the `Llama2Chat` wrapper to support the [Llama-2 chat prompt format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). Several `LLM` implementations in LangChain can be used as interface to Llama-2 chat models. These include [ChatHuggingFace](/docs/integrations/chat/huggingface), [LlamaCpp](/docs/use_cases/question_answering/local_retrieval_qa), [GPT4All](/docs/integrations/llms/gpt4all), ..., to mention a few examples. \n", + "This notebook shows how to augment Llama-2 `LLM`s with the `Llama2Chat` wrapper to support the [Llama-2 chat prompt format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). Several `LLM` implementations in LangChain can be used as an interface to Llama-2 chat models. These include [ChatHuggingFace](/docs/integrations/chat/huggingface), [LlamaCpp](/docs/tutorials/local_rag), [GPT4All](/docs/integrations/llms/gpt4all), ..., to mention a few examples. \n", "\n", - "`Llama2Chat` is a generic wrapper that implements `BaseChatModel` and can therefore be used in applications as [chat model](/docs/modules/model_io/chat/). `Llama2Chat` converts a list of Messages into the [required chat prompt format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2) and forwards the formatted prompt as `str` to the wrapped `LLM`."
+ "`Llama2Chat` is a generic wrapper that implements `BaseChatModel` and can therefore be used in applications as a [chat model](/docs/how_to#chat-models). `Llama2Chat` converts a list of Messages into the [required chat prompt format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2) and forwards the formatted prompt as `str` to the wrapped `LLM`." ] }, { diff --git a/docs/docs/integrations/chat/mistralai.ipynb b/docs/docs/integrations/chat/mistralai.ipynb index 106d51a700..a7e126092f 100644 --- a/docs/docs/integrations/chat/mistralai.ipynb +++ b/docs/docs/integrations/chat/mistralai.ipynb @@ -225,7 +225,7 @@ "source": [ "## Chaining\n", "\n", - "You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/expression_language)" + "You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language)" ] }, { diff --git a/docs/docs/integrations/chat/nvidia_ai_endpoints.ipynb b/docs/docs/integrations/chat/nvidia_ai_endpoints.ipynb index f2d2fb3954..4daf080d23 100644 --- a/docs/docs/integrations/chat/nvidia_ai_endpoints.ipynb +++ b/docs/docs/integrations/chat/nvidia_ai_endpoints.ipynb @@ -1005,7 +1005,7 @@ "id": "79efa62d" }, "source": [ - "Like any other integration, ChatNVIDIA is fine to support chat utilities like conversation buffers by default. Below, we show the [LangChain ConversationBufferMemory](/docs/modules/memory/types/buffer) example applied to the `mixtral_8x7b` model." + "Like any other integration, ChatNVIDIA supports chat utilities like conversation buffers by default. Below, we show the [LangChain ConversationBufferMemory](https://api.python.langchain.com/en/latest/memory/langchain.memory.buffer.ConversationBufferMemory.html) example applied to the `mixtral_8x7b` model."
] }, { diff --git a/docs/docs/integrations/chat/ollama.ipynb b/docs/docs/integrations/chat/ollama.ipynb index bcdc41e83b..22a87ebfb7 100644 --- a/docs/docs/integrations/chat/ollama.ipynb +++ b/docs/docs/integrations/chat/ollama.ipynb @@ -185,7 +185,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Take a look at the [LangChain Expressive Language (LCEL) Interface](/docs/expression_language/interface) for the other available interfaces for use when a chain is created.\n", + "Take a look at the [LangChain Expression Language (LCEL) Interface](/docs/concepts#interface) for the other available interfaces for use when a chain is created.\n", "\n", "## Building from source\n", "\n", diff --git a/docs/docs/integrations/document_loaders/google_bigtable.ipynb b/docs/docs/integrations/document_loaders/google_bigtable.ipynb index 86a1424f1d..2e32e6487d 100644 --- a/docs/docs/integrations/document_loaders/google_bigtable.ipynb +++ b/docs/docs/integrations/document_loaders/google_bigtable.ipynb @@ -8,7 +8,7 @@ "\n", "> [Bigtable](https://cloud.google.com/bigtable) is a key-value and wide-column store, ideal for fast access to structured, semi-structured, or unstructured data.
Extend your database application to build AI-powered experiences leveraging Bigtable's Langchain integrations.\n", "\n", - "This notebook goes over how to use [Bigtable](https://cloud.google.com/bigtable) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `BigtableLoader` and `BigtableSaver`.\n", + "This notebook goes over how to use [Bigtable](https://cloud.google.com/bigtable) to [save, load and delete langchain documents](/docs/how_to#document-loaders) with `BigtableLoader` and `BigtableSaver`.\n", "\n", "Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-bigtable-python/).\n", "\n", diff --git a/docs/docs/integrations/document_loaders/google_cloud_sql_mssql.ipynb b/docs/docs/integrations/document_loaders/google_cloud_sql_mssql.ipynb index ca400b2835..1dd568c85c 100644 --- a/docs/docs/integrations/document_loaders/google_cloud_sql_mssql.ipynb +++ b/docs/docs/integrations/document_loaders/google_cloud_sql_mssql.ipynb @@ -8,7 +8,7 @@ "\n", "> [Cloud SQL](https://cloud.google.com/sql) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. It offers [MySQL](https://cloud.google.com/sql/mysql), [PostgreSQL](https://cloud.google.com/sql/postgres), and [SQL Server](https://cloud.google.com/sql/sqlserver) database engines. 
Extend your database application to build AI-powered experiences leveraging Cloud SQL's Langchain integrations.\n", "\n", - "This notebook goes over how to use [Cloud SQL for SQL server](https://cloud.google.com/sql/sqlserver) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `MSSQLLoader` and `MSSQLDocumentSaver`.\n", + "This notebook goes over how to use [Cloud SQL for SQL server](https://cloud.google.com/sql/sqlserver) to [save, load and delete langchain documents](/docs/how_to#document-loaders) with `MSSQLLoader` and `MSSQLDocumentSaver`.\n", "\n", "Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-cloud-sql-mssql-python/).\n", "\n", diff --git a/docs/docs/integrations/document_loaders/google_cloud_sql_mysql.ipynb b/docs/docs/integrations/document_loaders/google_cloud_sql_mysql.ipynb index 7e19bd78a3..d656b8642f 100644 --- a/docs/docs/integrations/document_loaders/google_cloud_sql_mysql.ipynb +++ b/docs/docs/integrations/document_loaders/google_cloud_sql_mysql.ipynb @@ -8,7 +8,7 @@ "\n", "> [Cloud SQL](https://cloud.google.com/sql) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. It offers [MySQL](https://cloud.google.com/sql/mysql), [PostgreSQL](https://cloud.google.com/sql/postgresql), and [SQL Server](https://cloud.google.com/sql/sqlserver) database engines. 
Extend your database application to build AI-powered experiences leveraging Cloud SQL's Langchain integrations.\n", "\n", - "This notebook goes over how to use [Cloud SQL for MySQL](https://cloud.google.com/sql/mysql) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `MySQLLoader` and `MySQLDocumentSaver`.\n", + "This notebook goes over how to use [Cloud SQL for MySQL](https://cloud.google.com/sql/mysql) to [save, load and delete langchain documents](/docs/how_to#document-loaders) with `MySQLLoader` and `MySQLDocumentSaver`.\n", "\n", "Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-cloud-sql-mysql-python/).\n", "\n", diff --git a/docs/docs/integrations/document_loaders/google_datastore.ipynb b/docs/docs/integrations/document_loaders/google_datastore.ipynb index 187822ffba..7841440464 100644 --- a/docs/docs/integrations/document_loaders/google_datastore.ipynb +++ b/docs/docs/integrations/document_loaders/google_datastore.ipynb @@ -8,7 +8,7 @@ "\n", "> [Firestore in Datastore Mode](https://cloud.google.com/datastore) is a NoSQL document database built for automatic scaling, high performance and ease of application development. 
Extend your database application to build AI-powered experiences leveraging Datastore's Langchain integrations.\n", "\n", - "This notebook goes over how to use [Firestore in Datastore Mode](https://cloud.google.com/datastore) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `DatastoreLoader` and `DatastoreSaver`.\n", + "This notebook goes over how to use [Firestore in Datastore Mode](https://cloud.google.com/datastore) to [save, load and delete langchain documents](/docs/how_to#document-loaders) with `DatastoreLoader` and `DatastoreSaver`.\n", "\n", "Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-datastore-python/).\n", "\n", diff --git a/docs/docs/integrations/document_loaders/google_el_carro.ipynb b/docs/docs/integrations/document_loaders/google_el_carro.ipynb index 8da905a10d..ee201a065d 100644 --- a/docs/docs/integrations/document_loaders/google_el_carro.ipynb +++ b/docs/docs/integrations/document_loaders/google_el_carro.ipynb @@ -18,7 +18,7 @@ "by leveraging the El Carro Langchain integration.\n", "\n", "This guide goes over how to use El Carro Langchain integration to\n", - "[save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/)\n", + "[save, load and delete langchain documents](/docs/how_to#document-loaders)\n", "with `ElCarroLoader` and `ElCarroDocumentSaver`. 
This integration works for any Oracle database, regardless of where it is running.\n", "\n", "Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-el-carro-python/).\n", diff --git a/docs/docs/integrations/document_loaders/google_firestore.ipynb b/docs/docs/integrations/document_loaders/google_firestore.ipynb index 0fc6d0afdb..139a848d81 100644 --- a/docs/docs/integrations/document_loaders/google_firestore.ipynb +++ b/docs/docs/integrations/document_loaders/google_firestore.ipynb @@ -8,7 +8,7 @@ "\n", "> [Firestore](https://cloud.google.com/firestore) is a serverless document-oriented database that scales to meet any demand. Extend your database application to build AI-powered experiences leveraging Firestore's Langchain integrations.\n", "\n", - "This notebook goes over how to use [Firestore](https://cloud.google.com/firestore) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `FirestoreLoader` and `FirestoreSaver`.\n", + "This notebook goes over how to use [Firestore](https://cloud.google.com/firestore) to [save, load and delete langchain documents](/docs/how_to#document-loaders) with `FirestoreLoader` and `FirestoreSaver`.\n", "\n", "Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-firestore-python/).\n", "\n", diff --git a/docs/docs/integrations/document_loaders/google_memorystore_redis.ipynb b/docs/docs/integrations/document_loaders/google_memorystore_redis.ipynb index c797e36672..f9573ee6d5 100644 --- a/docs/docs/integrations/document_loaders/google_memorystore_redis.ipynb +++ b/docs/docs/integrations/document_loaders/google_memorystore_redis.ipynb @@ -10,7 +10,7 @@ "\n", "> [Google Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis/memorystore-for-redis-overview) is a fully-managed service that is powered by the Redis in-memory data store to build application caches that provide sub-millisecond data access. 
Extend your database application to build AI-powered experiences leveraging Memorystore for Redis's Langchain integrations.\n", "\n", - "This notebook goes over how to use [Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis/memorystore-for-redis-overview) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `MemorystoreDocumentLoader` and `MemorystoreDocumentSaver`.\n", + "This notebook goes over how to use [Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis/memorystore-for-redis-overview) to [save, load and delete langchain documents](/docs/how_to#document-loaders) with `MemorystoreDocumentLoader` and `MemorystoreDocumentSaver`.\n", "\n", "Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-memorystore-redis-python/).\n", "\n", diff --git a/docs/docs/integrations/document_loaders/google_spanner.ipynb b/docs/docs/integrations/document_loaders/google_spanner.ipynb index fe0b8b66aa..3ba9440573 100644 --- a/docs/docs/integrations/document_loaders/google_spanner.ipynb +++ b/docs/docs/integrations/document_loaders/google_spanner.ipynb @@ -8,7 +8,7 @@ "\n", "> [Spanner](https://cloud.google.com/spanner) is a highly scalable database that combines unlimited scalability with relational semantics, such as secondary indexes, strong consistency, schemas, and SQL providing 99.999% availability in one easy solution.\n", "\n", - "This notebook goes over how to use [Spanner](https://cloud.google.com/spanner) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `SpannerLoader` and `SpannerDocumentSaver`.\n", + "This notebook goes over how to use [Spanner](https://cloud.google.com/spanner) to [save, load and delete langchain documents](/docs/how_to#document-loaders) with `SpannerLoader` and `SpannerDocumentSaver`.\n", "\n", "Learn more about the package on 
[GitHub](https://github.com/googleapis/langchain-google-spanner-python/).\n", "\n", diff --git a/docs/docs/integrations/document_loaders/tomarkdown.ipynb b/docs/docs/integrations/document_loaders/tomarkdown.ipynb index 32dcdfff74..343dcbd110 100644 --- a/docs/docs/integrations/document_loaders/tomarkdown.ipynb +++ b/docs/docs/integrations/document_loaders/tomarkdown.ipynb @@ -99,9 +99,9 @@ "\n", "## Get started [​](\\#get-started \"Direct link to Get started\")\n", "\n", - "[Here’s](/docs/get_started/installation) how to install LangChain, set up your environment, and start building.\n", + "[Here’s](/docs/installation) how to install LangChain, set up your environment, and start building.\n", "\n", - "We recommend following our [Quickstart](/docs/get_started/quickstart) guide to familiarize yourself with the framework by building your first LangChain application.\n", + "We recommend following our [Quickstart](/docs/tutorials/llm_chain) guide to familiarize yourself with the framework by building your first LangChain application.\n", "\n", "Read up on our [Security](/docs/security) best practices to make sure you're developing safely with LangChain.\n", "\n", @@ -113,8 +113,8 @@ "\n", "LCEL is a declarative way to compose chains. 
LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains.\n", "\n", - "- **[Overview](/docs/expression_language/)**: LCEL and its benefits\n", - "- **[Interface](/docs/expression_language/interface)**: The standard interface for LCEL objects\n", + "- **[Overview](/docs/concepts#langchain-expression-language)**: LCEL and its benefits\n", + "- **[Interface](/docs/concepts#interface)**: The standard interface for LCEL objects\n", "- **[How-to](/docs/expression_language/how_to)**: Key features of LCEL\n", "- **[Cookbook](/docs/expression_language/cookbook)**: Example code for accomplishing common tasks\n", "\n", @@ -136,13 +136,13 @@ "\n", "## Examples, ecosystem, and resources [​](\\#examples-ecosystem-and-resources \"Direct link to Examples, ecosystem, and resources\")\n", "\n", - "### [Use cases](/docs/use_cases/question_answering/) [​](\\#use-cases \"Direct link to use-cases\")\n", + "### [Use cases](/docs/how_to#qa-with-rag) [​](\\#use-cases \"Direct link to use-cases\")\n", "\n", "Walkthroughs and techniques for common end-to-end use cases, like:\n", "\n", - "- [Document question answering](/docs/use_cases/question_answering/)\n", + "- [Document question answering](/docs/how_to#qa-with-rag)\n", "- [Chatbots](/docs/use_cases/chatbots/)\n", - "- [Analyzing structured data](/docs/use_cases/sql/)\n", + "- [Analyzing structured data](/docs/how_to#qa-over-sql--csv)\n", "- and much more...\n", "\n", "### [Integrations](/docs/integrations/providers/) [​](\\#integrations \"Direct link to integrations\")\n", diff --git a/docs/docs/integrations/graphs/memgraph.ipynb b/docs/docs/integrations/graphs/memgraph.ipynb index b46a2fd2b0..85eb7497db 100644 --- a/docs/docs/integrations/graphs/memgraph.ipynb +++ b/docs/docs/integrations/graphs/memgraph.ipynb @@ -584,7 +584,7 @@ "id": "8edb9976", "metadata": {}, "source": [ - "To address this, we can adjust the initial Cypher prompt 
of the QA chain. This involves adding guidance to the LLM on how users can refer to specific platforms, such as PS5 in our case. We achieve this using the LangChain [PromptTemplate](/docs/modules/model_io/prompts/), creating a modified initial prompt. This modified prompt is then supplied as an argument to our refined Memgraph-LangChain instance." + "To address this, we can adjust the initial Cypher prompt of the QA chain. This involves adding guidance to the LLM on how users can refer to specific platforms, such as PS5 in our case. We achieve this using the LangChain [PromptTemplate](/docs/how_to#prompt-templates), creating a modified initial prompt. This modified prompt is then supplied as an argument to our refined Memgraph-LangChain instance." ] }, { diff --git a/docs/docs/integrations/llms/cohere.ipynb b/docs/docs/integrations/llms/cohere.ipynb index bc312c02fc..490a1b3ce2 100644 --- a/docs/docs/integrations/llms/cohere.ipynb +++ b/docs/docs/integrations/llms/cohere.ipynb @@ -79,7 +79,7 @@ "source": [ "## Usage\n", "\n", - "Cohere supports all [LLM](/docs/modules/model_io/llms/) functionality:" + "Cohere supports all [LLM](/docs/how_to#llms) functionality:" ] }, { @@ -193,7 +193,7 @@ "id": "39198f7d-6fc8-4662-954a-37ad38c4bec4", "metadata": {}, "source": [ - "You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/expression_language)" + "You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language)" ] }, { diff --git a/docs/docs/integrations/llms/friendli.ipynb b/docs/docs/integrations/llms/friendli.ipynb index 90d5d491c4..8e0e2d1511 100644 --- a/docs/docs/integrations/llms/friendli.ipynb +++ b/docs/docs/integrations/llms/friendli.ipynb @@ -71,7 +71,7 @@ "source": [ "## Usage\n", "\n", - "`Frienli` supports all methods of [`LLM`](/docs/modules/model_io/llms/) including async APIs." 
+ "`Friendli` supports all methods of [`LLM`](/docs/how_to#llms) including async APIs." ] }, { diff --git a/docs/docs/integrations/llms/google_vertex_ai_palm.ipynb b/docs/docs/integrations/llms/google_vertex_ai_palm.ipynb index 64273011f4..e8cd512d7c 100644 --- a/docs/docs/integrations/llms/google_vertex_ai_palm.ipynb +++ b/docs/docs/integrations/llms/google_vertex_ai_palm.ipynb @@ -72,7 +72,7 @@ "source": [ "## Usage\n", "\n", - "VertexAI supports all [LLM](/docs/modules/model_io/llms/) functionality." + "VertexAI supports all [LLM](/docs/how_to#llms) functionality." ] }, { @@ -326,7 +326,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/expression_language)" + "You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language)" ] }, { diff --git a/docs/docs/integrations/llms/llamafile.ipynb b/docs/docs/integrations/llms/llamafile.ipynb index 89733b5aab..3570c8c3c5 100644 --- a/docs/docs/integrations/llms/llamafile.ipynb +++ b/docs/docs/integrations/llms/llamafile.ipynb @@ -105,7 +105,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "To learn more about the LangChain Expressive Language and the available methods on an LLM, see the [LCEL Interface](/docs/expression_language/interface)" + "To learn more about the LangChain Expression Language and the available methods on an LLM, see the [LCEL Interface](/docs/concepts#interface)" ] } ], diff --git a/docs/docs/integrations/llms/ollama.ipynb b/docs/docs/integrations/llms/ollama.ipynb index cd4d1782e2..7c6be1a28c 100644 --- a/docs/docs/integrations/llms/ollama.ipynb +++ b/docs/docs/integrations/llms/ollama.ipynb @@ -175,7 +175,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "To learn more about the LangChain Expressive Language and the available methods on an LLM, see the
[LCEL Interface](/docs/expression_language/interface)" + "To learn more about the LangChain Expression Language and the available methods on an LLM, see the [LCEL Interface](/docs/concepts#interface)" ] }, { diff --git a/docs/docs/integrations/platforms/aws.mdx b/docs/docs/integrations/platforms/aws.mdx index 54ba79eb20..8f83e388e5 100644 --- a/docs/docs/integrations/platforms/aws.mdx +++ b/docs/docs/integrations/platforms/aws.mdx @@ -305,7 +305,7 @@ We need to install the `boto3` and `nltk` libraries. pip install boto3 nltk ``` -See a [usage example](/docs/guides/productionization/safety/amazon_comprehend_chain). +See a [usage example](https://python.langchain.com/v0.1/docs/guides/productionization/safety/amazon_comprehend_chain/). ```python from langchain_experimental.comprehend_moderation import AmazonComprehendModerationChain diff --git a/docs/docs/integrations/platforms/index.mdx b/docs/docs/integrations/platforms/index.mdx index 5e7040eba4..fdce8a2b8e 100644 --- a/docs/docs/integrations/platforms/index.mdx +++ b/docs/docs/integrations/platforms/index.mdx @@ -7,7 +7,7 @@ sidebar_class_name: hidden :::info -If you'd like to write your own integration, see [Extending LangChain](/docs/guides/development/extending_langchain/). +If you'd like to write your own integration, see [Extending LangChain](/docs/how_to/#custom). If you'd like to contribute an integration, see [Contributing integrations](/docs/contributing/integrations/). ::: diff --git a/docs/docs/integrations/platforms/microsoft.mdx b/docs/docs/integrations/platforms/microsoft.mdx index 22556c8c5c..bec5df4050 100644 --- a/docs/docs/integrations/platforms/microsoft.mdx +++ b/docs/docs/integrations/platforms/microsoft.mdx @@ -346,7 +346,7 @@ pip install langchain-experimental openai presidio-analyzer presidio-anonymizer python -m spacy download en_core_web_lg ``` -See [usage examples](/docs/guides/productionization/safety/presidio_data_anonymization/).
+See [usage examples](https://python.langchain.com/v0.1/docs/guides/productionization/safety/presidio_data_anonymization). ```python from langchain_experimental.data_anonymizer import PresidioAnonymizer, PresidioReversibleAnonymizer diff --git a/docs/docs/integrations/platforms/openai.mdx b/docs/docs/integrations/platforms/openai.mdx index f0b33faabb..bbcc0e46a3 100644 --- a/docs/docs/integrations/platforms/openai.mdx +++ b/docs/docs/integrations/platforms/openai.mdx @@ -107,11 +107,11 @@ You can also use it to count tokens when splitting documents with from langchain.text_splitter import CharacterTextSplitter CharacterTextSplitter.from_tiktoken_encoder(...) ``` -For a more detailed walkthrough of this, see [this notebook](/docs/modules/data_connection/document_transformers/split_by_token#tiktoken) +For a more detailed walkthrough of this, see [this notebook](/docs/how_to/split_by_token/#tiktoken) ## Chain -See a [usage example](/docs/guides/productionization/safety/moderation). +See a [usage example](https://python.langchain.com/v0.1/docs/guides/productionization/safety/moderation). ```python from langchain.chains import OpenAIModerationChain diff --git a/docs/docs/integrations/providers/motherduck.mdx b/docs/docs/integrations/providers/motherduck.mdx index ee39a117bb..827c654f92 100644 --- a/docs/docs/integrations/providers/motherduck.mdx +++ b/docs/docs/integrations/providers/motherduck.mdx @@ -33,7 +33,7 @@ db = SQLDatabase.from_uri(conn_str) db_chain = SQLDatabaseChain.from_llm(OpenAI(temperature=0), db, verbose=True) ``` -From here, see the [SQL Chain](/docs/use_cases/sql/) documentation on how to use. +From here, see the [SQL Chain](/docs/how_to#qa-over-sql--csv) documentation on how to use. 
## LLMCache diff --git a/docs/docs/integrations/providers/ollama.mdx b/docs/docs/integrations/providers/ollama.mdx index c7d5464f29..ad911cc1fd 100644 --- a/docs/docs/integrations/providers/ollama.mdx +++ b/docs/docs/integrations/providers/ollama.mdx @@ -7,7 +7,7 @@ >It optimizes setup and configuration details, including GPU usage. >For a complete list of supported models and model variants, see the [Ollama model library](https://ollama.ai/library). -See [this guide](/docs/guides/development/local_llms#quickstart) for more details +See [this guide](/docs/tutorials/local_rag) for more details on how to use `Ollama` with LangChain. ## Installation and Setup diff --git a/docs/docs/integrations/providers/redis.mdx b/docs/docs/integrations/providers/redis.mdx index 07dc9d32e3..55dc73ee93 100644 --- a/docs/docs/integrations/providers/redis.mdx +++ b/docs/docs/integrations/providers/redis.mdx @@ -132,7 +132,7 @@ Redis can be used to persist LLM conversations. ### Vector Store Retriever Memory -For a more detailed walkthrough of the `VectorStoreRetrieverMemory` wrapper, see [this notebook](/docs/modules/memory/types/vectorstore_retriever_memory). +For a more detailed walkthrough of the `VectorStoreRetrieverMemory` wrapper, see [this notebook](https://api.python.langchain.com/en/latest/memory/langchain.memory.vectorstore.VectorStoreRetrieverMemory.html). ### Chat Message History Memory For a detailed example of Redis to cache conversation message history, see [this notebook](/docs/integrations/memory/redis_chat_message_history). diff --git a/docs/docs/integrations/providers/spacy.mdx b/docs/docs/integrations/providers/spacy.mdx index bd388837e6..d893f12a3d 100644 --- a/docs/docs/integrations/providers/spacy.mdx +++ b/docs/docs/integrations/providers/spacy.mdx @@ -13,7 +13,7 @@ pip install spacy ## Text Splitter -See a [usage example](/docs/modules/data_connection/document_transformers/split_by_token#spacy). +See a [usage example](/docs/how_to/split_by_token/#spacy). 
```python from langchain_text_splitters import SpacyTextSplitter diff --git a/docs/docs/integrations/providers/unstructured.mdx b/docs/docs/integrations/providers/unstructured.mdx index e210151646..1a85b180c0 100644 --- a/docs/docs/integrations/providers/unstructured.mdx +++ b/docs/docs/integrations/providers/unstructured.mdx @@ -126,7 +126,7 @@ from langchain_community.document_loaders import UnstructuredFileLoader ### UnstructuredHTMLLoader -See a [usage example](/docs/modules/data_connection/document_loaders/html). +See a [usage example](/docs/how_to/document_loader_html). ```python from langchain_community.document_loaders import UnstructuredHTMLLoader @@ -173,7 +173,7 @@ from langchain_community.document_loaders import UnstructuredOrgModeLoader ### UnstructuredPDFLoader -See a [usage example](/docs/modules/data_connection/document_loaders/pdf#using-unstructured). +See a [usage example](/docs/how_to/document_loader_pdf#using-unstructured). ```python from langchain_community.document_loaders import UnstructuredPDFLoader diff --git a/docs/docs/integrations/providers/vectara/vectara_summary.ipynb b/docs/docs/integrations/providers/vectara/vectara_summary.ipynb index 5937f12eee..94be639b93 100644 --- a/docs/docs/integrations/providers/vectara/vectara_summary.ipynb +++ b/docs/docs/integrations/providers/vectara/vectara_summary.ipynb @@ -22,7 +22,7 @@ "See the [Vectara API documentation](https://docs.vectara.com/docs/) for more information on how to use the API.\n", "\n", "This notebook shows how to use functionality related to the `Vectara`'s integration with langchain.\n", - "Specificaly we will demonstrate how to use chaining with [LangChain's Expression Language](/docs/expression_language/) and using Vectara's integrated summarization capability." + "Specifically, we will demonstrate how to use chaining with [LangChain's Expression Language](/docs/concepts#langchain-expression-language) and using Vectara's integrated summarization capability."
] }, { diff --git a/docs/docs/integrations/retrievers/fleet_context.ipynb b/docs/docs/integrations/retrievers/fleet_context.ipynb index 4b20645ddb..db3e6e79cb 100644 --- a/docs/docs/integrations/retrievers/fleet_context.ipynb +++ b/docs/docs/integrations/retrievers/fleet_context.ipynb @@ -9,7 +9,7 @@ "\n", ">[Fleet AI Context](https://www.fleet.so/context) is a dataset of high-quality embeddings of the top 1200 most popular & permissive Python Libraries & their documentation.\n", ">\n", - ">The `Fleet AI` team is on a mission to embed the world's most important data. They've started by embedding the top 1200 Python libraries to enable code generation with up-to-date knowledge. They've been kind enough to share their embeddings of the [LangChain docs](/docs/get_started/introduction) and [API reference](https://api.python.langchain.com/en/latest/api_reference.html).\n", + ">The `Fleet AI` team is on a mission to embed the world's most important data. They've started by embedding the top 1200 Python libraries to enable code generation with up-to-date knowledge. They've been kind enough to share their embeddings of the [LangChain docs](/docs/introduction) and [API reference](https://api.python.langchain.com/en/latest/api_reference.html).\n", "\n", "Let's take a look at how we can use these embeddings to power a docs retrieval system and ultimately a simple code-generating chain!" ] diff --git a/docs/docs/integrations/retrievers/ragatouille.ipynb b/docs/docs/integrations/retrievers/ragatouille.ipynb index 868fde5f60..a49b77ac45 100644 --- a/docs/docs/integrations/retrievers/ragatouille.ipynb +++ b/docs/docs/integrations/retrievers/ragatouille.ipynb @@ -12,7 +12,7 @@ ">\n", ">[ColBERT](https://github.com/stanford-futuredata/ColBERT) is a fast and accurate retrieval model, enabling scalable BERT-based search over large text collections in tens of milliseconds.\n", "\n", - "We can use this as a [retriever](/docs/modules/data_connection/retrievers). 
It will show functionality specific to this integration. After going through, it may be useful to explore [relevant use-case pages](/docs/use_cases/question_answering) to learn how to use this vector store as part of a larger chain.\n", + "We can use this as a [retriever](/docs/how_to#retrievers). It will show functionality specific to this integration. After going through, it may be useful to explore [relevant use-case pages](/docs/how_to#qa-with-rag) to learn how to use this vector store as part of a larger chain.\n", "\n", "This page covers how to use [RAGatouille](https://github.com/bclavie/RAGatouille) as a retriever in a LangChain chain. \n", "\n", diff --git a/docs/docs/integrations/retrievers/tavily.ipynb b/docs/docs/integrations/retrievers/tavily.ipynb index 6c5c61fb3d..997610de4d 100644 --- a/docs/docs/integrations/retrievers/tavily.ipynb +++ b/docs/docs/integrations/retrievers/tavily.ipynb @@ -8,7 +8,7 @@ "\n", ">[Tavily's Search API](https://tavily.com) is a search engine built specifically for AI agents (LLMs), delivering real-time, accurate, and factual results at speed.\n", "\n", - "We can use this as a [retriever](/docs/modules/data_connection/retrievers). It will show functionality specific to this integration. After going through, it may be useful to explore [relevant use-case pages](/docs/use_cases/question_answering) to learn how to use this vectorstore as part of a larger chain.\n", + "We can use this as a [retriever](/docs/how_to#retrievers). It will show functionality specific to this integration. 
After going through, it may be useful to explore [relevant use-case pages](/docs/how_to#qa-with-rag) to learn how to use this vectorstore as part of a larger chain.\n", "\n", "## Setup\n", "\n", diff --git a/docs/docs/integrations/tools/passio_nutrition_ai.ipynb b/docs/docs/integrations/tools/passio_nutrition_ai.ipynb index b52c357a3c..451ccc2e37 100644 --- a/docs/docs/integrations/tools/passio_nutrition_ai.ipynb +++ b/docs/docs/integrations/tools/passio_nutrition_ai.ipynb @@ -118,7 +118,7 @@ "source": [ "## Create the agent\n", "\n", - "Now that we have defined the tools, we can create the agent. We will be using an OpenAI Functions agent - for more information on this type of agent, as well as other options, see [this guide](/docs/modules/agents/agent_types/)\n", + "Now that we have defined the tools, we can create the agent. We will be using an OpenAI Functions agent - for more information on this type of agent, as well as other options, see [this guide](/docs/concepts#agents)\n", "\n", "First, we choose the LLM we want to be guiding the agent." ] @@ -176,7 +176,7 @@ "id": "f8014c9d", "metadata": {}, "source": [ - "Now, we can initalize the agent with the LLM, the prompt, and the tools. The agent is responsible for taking in input and deciding what actions to take. Crucially, the Agent does not execute those actions - that is done by the AgentExecutor (next step). For more information about how to think about these components, see our [conceptual guide](/docs/modules/agents/concepts)" + "Now, we can initialize the agent with the LLM, the prompt, and the tools. The agent is responsible for taking in input and deciding what actions to take. Crucially, the Agent does not execute those actions - that is done by the AgentExecutor (next step).
For more information about how to think about these components, see our [conceptual guide](/docs/concepts#agents)" ] }, { @@ -196,7 +196,7 @@ "id": "1a58c9f8", "metadata": {}, "source": [ - "Finally, we combine the agent (the brains) with the tools inside the AgentExecutor (which will repeatedly call the agent and execute tools). For more information about how to think about these components, see our [conceptual guide](/docs/modules/agents/concepts)" + "Finally, we combine the agent (the brains) with the tools inside the AgentExecutor (which will repeatedly call the agent and execute tools). For more information about how to think about these components, see our [conceptual guide](/docs/concepts#agents)" ] }, { diff --git a/docs/docs/integrations/tools/reddit_search.ipynb b/docs/docs/integrations/tools/reddit_search.ipynb index 52ac17a1fc..9c4eca4c4a 100644 --- a/docs/docs/integrations/tools/reddit_search.ipynb +++ b/docs/docs/integrations/tools/reddit_search.ipynb @@ -156,7 +156,7 @@ "source": [ "## Using tool with an agent chain\n", "\n", - "Reddit search functionality is also provided as a multi-input tool. In this example, we adapt [existing code from the docs](/docs/modules/memory/agent_with_memory), and use ChatOpenAI to create an agent chain with memory. This agent chain is able to pull information from Reddit and use these posts to respond to subsequent input. \n", + "Reddit search functionality is also provided as a multi-input tool. In this example, we adapt [existing code from the docs](https://python.langchain.com/v0.1/docs/modules/memory/agent_with_memory/), and use ChatOpenAI to create an agent chain with memory. This agent chain is able to pull information from Reddit and use these posts to respond to subsequent input. \n", "\n", "To run the example, add your reddit API access information and also get an OpenAI key from the [OpenAI API](https://help.openai.com/en/articles/4936850-where-do-i-find-my-api-key)." 
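The passio_nutrition_ai hunks above twice stress the same division of labor: the agent only decides the next action, while the AgentExecutor runs the loop that actually calls tools. That loop can be sketched in a few lines of plain Python — the agent, tool, and stopping rule here are toys invented for illustration, not LangChain's classes:

```python
from typing import Callable

def fake_agent(question: str, observations: list[str]) -> dict:
    """Decide the next step; never executes anything itself."""
    if observations:  # we already have a tool result, so finish
        return {"finish": f"Answer: {observations[-1]}"}
    return {"tool": "lookup", "tool_input": question}

def executor(agent: Callable, tools: dict[str, Callable], question: str) -> str:
    """Repeatedly call the agent and run whichever tool it chooses."""
    observations: list[str] = []
    for _ in range(5):  # safety cap on iterations
        step = agent(question, observations)
        if "finish" in step:
            return step["finish"]
        observations.append(tools[step["tool"]](step["tool_input"]))
    return "gave up"

tools = {"lookup": lambda q: f"result for {q!r}"}
print(executor(fake_agent, tools, "calories in an apple"))
# → Answer: result for 'calories in an apple'
```

Separating "decide" from "execute" is what lets the executor enforce iteration limits and tool sandboxing independently of the model's reasoning.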
] diff --git a/docs/docs/integrations/vectorstores/faiss.ipynb b/docs/docs/integrations/vectorstores/faiss.ipynb index 022d5a5003..27969437df 100644 --- a/docs/docs/integrations/vectorstores/faiss.ipynb +++ b/docs/docs/integrations/vectorstores/faiss.ipynb @@ -11,7 +11,7 @@ "\n", "[Faiss documentation](https://faiss.ai/).\n", "\n", - "This notebook shows how to use functionality related to the `FAISS` vector database. It will show functionality specific to this integration. After going through, it may be useful to explore [relevant use-case pages](/docs/use_cases/question_answering) to learn how to use this vectorstore as part of a larger chain." + "This notebook shows how to use functionality related to the `FAISS` vector database. It will show functionality specific to this integration. After going through, it may be useful to explore [relevant use-case pages](/docs/how_to#qa-with-rag) to learn how to use this vectorstore as part of a larger chain." ] }, { @@ -169,7 +169,7 @@ "source": [ "## As a Retriever\n", "\n", - "We can also convert the vectorstore into a [Retriever](/docs/modules/data_connection/retrievers) class. This allows us to easily use it in other LangChain methods, which largely work with retrievers" + "We can also convert the vectorstore into a [Retriever](/docs/how_to#retrievers) class. This allows us to easily use it in other LangChain methods, which largely work with retrievers" ] }, { diff --git a/docs/docs/integrations/vectorstores/timescalevector.ipynb b/docs/docs/integrations/vectorstores/timescalevector.ipynb index 16982dc7e9..6c48a5e5e6 100644 --- a/docs/docs/integrations/vectorstores/timescalevector.ipynb +++ b/docs/docs/integrations/vectorstores/timescalevector.ipynb @@ -307,7 +307,7 @@ "metadata": {}, "source": [ "### Using a Timescale Vector as a Retriever\n", - "After initializing a TimescaleVector store, you can use it as a [retriever](/docs/modules/data_connection/retrievers/)." 
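The faiss.ipynb hunk above points readers at converting a vector store into a retriever. Conceptually, `as_retriever` wraps similarity search behind a uniform "query in, documents out" interface; a dependency-free sketch of that wrapping (word-overlap scoring is a toy stand-in for real embedding similarity):

```python
class ToyVectorStore:
    def __init__(self, docs: list[str]):
        self.docs = docs

    def similarity_search(self, query: str, k: int = 2) -> list[str]:
        # Score by word overlap with the query -- a stand-in for vector similarity.
        q = set(query.lower().split())
        scored = sorted(self.docs, key=lambda d: -len(q & set(d.lower().split())))
        return scored[:k]

    def as_retriever(self, k: int = 2):
        # A retriever is just "query -> relevant documents".
        return lambda query: self.similarity_search(query, k=k)

store = ToyVectorStore(["cats purr", "dogs bark", "cats chase mice"])
retriever = store.as_retriever(k=1)
print(retriever("do cats purr"))
# → ['cats purr']
```

Because the retriever exposes only the query-to-documents mapping, downstream chains don't need to know which vector store (FAISS, Timescale, Vespa, ...) sits behind it.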
+ "After initializing a TimescaleVector store, you can use it as a [retriever](/docs/how_to#retrievers)." ] }, { @@ -477,7 +477,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Next, we'll define a metadata function to extract the relevant metadata from the JSON record. We'll pass this function to the JSONLoader. See the [JSON document loader docs](/docs/modules/data_connection/document_loaders/json) for more details." + "Next, we'll define a metadata function to extract the relevant metadata from the JSON record. We'll pass this function to the JSONLoader. See the [JSON document loader docs](/docs/how_to/document_loader_json) for more details." ] }, { diff --git a/docs/docs/integrations/vectorstores/vespa.ipynb b/docs/docs/integrations/vectorstores/vespa.ipynb index 50cc60f4e3..b93a8255fc 100644 --- a/docs/docs/integrations/vectorstores/vespa.ipynb +++ b/docs/docs/integrations/vectorstores/vespa.ipynb @@ -388,7 +388,7 @@ "### As retriever\n", "\n", "To use this vector store as a\n", - "[LangChain retriever](/docs/modules/data_connection/retrievers/)\n", + "[LangChain retriever](/docs/how_to#retrievers)\n", "simply call the `as_retriever` function, which is a standard vector store\n", "method:" ] diff --git a/docs/docs/introduction.mdx b/docs/docs/introduction.mdx index 4c4be557d9..f6b29acca8 100644 --- a/docs/docs/introduction.mdx +++ b/docs/docs/introduction.mdx @@ -8,7 +8,7 @@ sidebar_class_name: hidden **LangChain** is a framework for developing applications powered by large language models (LLMs). LangChain simplifies every stage of the LLM application lifecycle: -- **Development**: Build your applications using LangChain's open-source [building blocks](/docs/expression_language/) and [components](/docs/modules/). Hit the ground running using [third-party integrations](/docs/integrations/platforms/) and [Templates](/docs/templates). 
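The timescalevector hunk above mentions passing a metadata function to `JSONLoader` to pull fields out of each JSON record. The shape of such a function is simple — it receives the parsed record plus the metadata assembled so far and returns the updated metadata. The field names below are invented for illustration:

```python
import json

def metadata_func(record: dict, metadata: dict) -> dict:
    """Copy the fields we want to filter on later into the document metadata."""
    metadata["author"] = record.get("author", "unknown")
    metadata["date"] = record.get("date")
    return metadata

record = json.loads('{"author": "alice", "date": "2024-05-09", "body": "..."}')
print(metadata_func(record, {"source": "commits.json"}))
# → {'source': 'commits.json', 'author': 'alice', 'date': '2024-05-09'}
```

Keeping extraction logic in one small function like this makes it easy to adjust which record fields end up queryable as document metadata.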
+- **Development**: Build your applications using LangChain's open-source [building blocks](/docs/concepts#langchain-expression-language) and [components](/docs/concepts). Hit the ground running using [third-party integrations](/docs/integrations/platforms/) and [Templates](/docs/templates). - **Productionization**: Use [LangSmith](/docs/langsmith/) to inspect, monitor and evaluate your chains, so that you can continuously optimize and deploy with confidence. - **Deployment**: Turn any chain into an API with [LangServe](/docs/langserve). diff --git a/docs/docs/tutorials/graph.ipynb b/docs/docs/tutorials/graph.ipynb index 5c44245d79..fadcd3c959 100644 --- a/docs/docs/tutorials/graph.ipynb +++ b/docs/docs/tutorials/graph.ipynb @@ -299,10 +299,10 @@ "\n", "For more complex query-generation, we may want to create few-shot prompts or add query-checking steps. For advanced techniques like this and more check out:\n", "\n", - "* [Prompting strategies](/docs/use_cases/graph/prompting): Advanced prompt engineering techniques.\n", - "* [Mapping values](/docs/use_cases/graph/mapping): Techniques for mapping values from questions to database.\n", - "* [Semantic layer](/docs/use_cases/graph/semantic): Techniques for implementing semantic layers.\n", - "* [Constructing graphs](/docs/use_cases/graph/constructing): Techniques for constructing knowledge graphs." + "* [Prompting strategies](/docs/how_to/graph_prompting): Advanced prompt engineering techniques.\n", + "* [Mapping values](/docs/how_to/graph_mapping): Techniques for mapping values from questions to database.\n", + "* [Semantic layer](/docs/how_to/graph_semantic): Techniques for implementing semantic layers.\n", + "* [Constructing graphs](/docs/how_to/graph_constructing): Techniques for constructing knowledge graphs."
] }, { diff --git a/docs/docs/tutorials/llm_chain.ipynb b/docs/docs/tutorials/llm_chain.ipynb index e8fe6bb4b8..dfdf233551 100644 --- a/docs/docs/tutorials/llm_chain.ipynb +++ b/docs/docs/tutorials/llm_chain.ipynb @@ -541,7 +541,7 @@ "\n", "### Client\n", "\n", - "Now let's set up a client for programmatically interacting with our service. We can easily do this with the `[langserve.RemoteRunnable](/docs/langserve#client)`.\n", + "Now let's set up a client for programmatically interacting with our service. We can easily do this with [`langserve.RemoteRunnable`](/docs/langserve/#client).\n", "Using this, we can interact with the served chain as if it were running client-side." ] }, diff --git a/docs/docs/tutorials/local_rag.ipynb b/docs/docs/tutorials/local_rag.ipynb index a0ad5ae934..d429cac5e9 100644 --- a/docs/docs/tutorials/local_rag.ipynb +++ b/docs/docs/tutorials/local_rag.ipynb @@ -11,7 +11,7 @@ "\n", "LangChain has [integrations](https://integrations.langchain.com/) with many open-source LLMs that can be run locally.\n", "\n", - "See [here](/docs/guides/development/local_llms) for setup instructions for these LLMs. \n", + "See [here](/docs/tutorials/local_rag) for setup instructions for these LLMs.
\n", "\n", "For example, here we show how to run `GPT4All` or `LLaMA2` locally (e.g., on your laptop) using local embeddings and a local LLM.\n", "\n", @@ -145,7 +145,7 @@ " \n", "And / or, you can download a GGUF converted model (e.g., [here](https://huggingface.co/TheBloke)).\n", "\n", - "Finally, as noted in detail [here](/docs/guides/development/local_llms) install `llama-cpp-python`" + "Finally, as noted in detail [here](/docs/tutorials/local_rag) install `llama-cpp-python`" ] }, { diff --git a/docs/docs/tutorials/qa_chat_history.ipynb b/docs/docs/tutorials/qa_chat_history.ipynb index a2ae18e6d9..208ba0dca0 100644 --- a/docs/docs/tutorials/qa_chat_history.ipynb +++ b/docs/docs/tutorials/qa_chat_history.ipynb @@ -409,7 +409,7 @@ "\n", "For this we can use:\n", "\n", - "- [BaseChatMessageHistory](/docs/modules/memory/chat_messages/): Store chat history.\n", + "- [BaseChatMessageHistory](https://api.python.langchain.com/en/latest/langchain_api_reference.html#module-langchain.memory): Store chat history.\n", "- [RunnableWithMessageHistory](/docs/how_to/message_history): Wrapper for an LCEL chain and a `BaseChatMessageHistory` that handles injecting chat history into inputs and updating it after each invocation.\n", "\n", "For a detailed walkthrough of how to use these classes together to create a stateful conversational chain, head to the [How to add message history (memory)](/docs/how_to/message_history) LCEL page.\n", @@ -744,7 +744,7 @@ "id": "07dcb968-ed9a-458a-85e1-528cd28c6965", "metadata": {}, "source": [ - "Tools are LangChain [Runnables](/docs/expression_language/), and implement the usual interface:" + "Tools are LangChain [Runnables](/docs/concepts#langchain-expression-language), and implement the usual interface:" ] }, { @@ -1048,7 +1048,7 @@ "- We used chains to build a predictable application that generates search queries for each user input;\n", "- We used agents to build an application that \"decides\" when and how to generate search queries.\n", 
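The qa_chat_history hunk above pairs a chat-history store with `RunnableWithMessageHistory`, a wrapper that injects prior messages into each call and records the new exchange afterwards. That wrapping pattern can be sketched with the standard library alone — the "chain" and the storage here are toys, not the LangChain classes:

```python
from collections import defaultdict

# Per-session message log: a toy stand-in for a chat message history store.
histories: dict[str, list[tuple[str, str]]] = defaultdict(list)

def chain(messages: list[tuple[str, str]], user_input: str) -> str:
    # Toy "LLM": echo the input plus how much history it saw.
    return f"reply to {user_input!r} (history: {len(messages)} msgs)"

def with_message_history(chain, session_id: str, user_input: str) -> str:
    """Inject stored history into the call, then append the new turn."""
    history = histories[session_id]
    answer = chain(history, user_input)
    history.append(("human", user_input))
    history.append(("ai", answer))
    return answer

print(with_message_history(chain, "s1", "hi"))     # sees 0 prior messages
print(with_message_history(chain, "s1", "again"))  # sees 2 prior messages
```

Keying the store by `session_id` is what makes the chain stateful per conversation while the underlying chain itself stays stateless.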
"\n", - "To explore different types of retrievers and retrieval strategies, visit the [retrievers](/docs/0.2.x/how_to/#retrievers) section of the how-to guides.\n", + "To explore different types of retrievers and retrieval strategies, visit the [retrievers](/docs/how_to/#retrievers) section of the how-to guides.\n", "\n", "For a detailed walkthrough of LangChain's conversation memory abstractions, visit the [How to add message history (memory)](/docs/how_to/message_history) LCEL page.\n", "\n", diff --git a/docs/docs/tutorials/rag.ipynb b/docs/docs/tutorials/rag.ipynb index e2d46b6fa1..e84076ebca 100644 --- a/docs/docs/tutorials/rag.ipynb +++ b/docs/docs/tutorials/rag.ipynb @@ -78,7 +78,7 @@ "```\n", "\n", "\n", - "For more details, see our [Installation guide](/docs/get_started/installation).\n", + "For more details, see our [Installation guide](/docs/installation).\n", "\n", "### LangSmith\n", "\n",