From 4466caadba44e199cbfb2eeb0a939cc46a9fbd2e Mon Sep 17 00:00:00 2001 From: Eugene Yurtsev Date: Tue, 22 Oct 2024 23:03:36 -0400 Subject: [PATCH] concepts: update llm stub page and re-link (#27567) Update text llm stub page and re-link content --- docs/docs/concepts/callbacks.mdx | 2 +- docs/docs/concepts/few_shot_prompting.mdx | 2 +- docs/docs/concepts/lcel.mdx | 2 +- docs/docs/concepts/llms.mdx | 3 --- docs/docs/concepts/output_parsers.mdx | 2 +- docs/docs/concepts/streaming.mdx | 4 +-- docs/docs/concepts/text_llms.mdx | 10 +++++++ docs/docs/concepts/tool_calling.mdx | 2 +- .../documentation/style_guide.mdx | 4 +-- docs/docs/how_to/add_scores_retriever.ipynb | 2 +- docs/docs/how_to/agent_executor.ipynb | 12 ++++----- docs/docs/how_to/assign.ipynb | 4 +-- docs/docs/how_to/binding.ipynb | 2 +- docs/docs/how_to/callbacks_async.ipynb | 2 +- docs/docs/how_to/callbacks_attach.ipynb | 2 +- docs/docs/how_to/callbacks_constructor.ipynb | 2 +- .../docs/how_to/callbacks_custom_events.ipynb | 10 +++---- docs/docs/how_to/callbacks_runtime.ipynb | 2 +- docs/docs/how_to/chat_model_caching.ipynb | 4 +-- .../how_to/chat_model_rate_limiting.ipynb | 4 +-- .../how_to/chat_token_usage_tracking.ipynb | 2 +- docs/docs/how_to/chatbots_tools.ipynb | 4 +-- docs/docs/how_to/configure.ipynb | 2 +- .../how_to/convert_runnable_to_tool.ipynb | 6 ++--- docs/docs/how_to/custom_callbacks.ipynb | 2 +- docs/docs/how_to/custom_chat_model.ipynb | 4 +-- docs/docs/how_to/document_loader_pdf.ipynb | 6 ++--- docs/docs/how_to/document_loader_web.ipynb | 2 +- docs/docs/how_to/dynamic_chain.ipynb | 2 +- .../how_to/example_selectors_langsmith.ipynb | 4 +-- docs/docs/how_to/few_shot_examples.ipynb | 6 ++--- docs/docs/how_to/few_shot_examples_chat.ipynb | 8 +++--- docs/docs/how_to/functions.ipynb | 2 +- docs/docs/how_to/index.mdx | 26 +++++++++---------- docs/docs/how_to/inspect.ipynb | 4 +-- .../how_to/llm_token_usage_tracking.ipynb | 2 +- docs/docs/how_to/local_llms.ipynb | 2 +- docs/docs/how_to/logprobs.ipynb | 2 +- docs/docs/how_to/long_context_reorder.ipynb | 2 +- docs/docs/how_to/message_history.ipynb | 6 ++--- docs/docs/how_to/migrate_agent.ipynb | 4 +-- docs/docs/how_to/multimodal_inputs.ipynb | 2 +- docs/docs/how_to/output_parser_json.ipynb | 6 ++--- docs/docs/how_to/output_parser_xml.ipynb | 6 ++--- docs/docs/how_to/output_parser_yaml.ipynb | 6 ++--- docs/docs/how_to/parallel.ipynb | 2 +- docs/docs/how_to/passthrough.ipynb | 2 +- docs/docs/how_to/prompts_composition.ipynb | 2 +- docs/docs/how_to/prompts_partial.ipynb | 2 +- docs/docs/how_to/qa_chat_history_how_to.ipynb | 2 +- docs/docs/how_to/qa_sources.ipynb | 2 +- docs/docs/how_to/routing.ipynb | 6 ++--- docs/docs/how_to/sequence.ipynb | 10 +++---- docs/docs/how_to/serialization.ipynb | 2 +- docs/docs/how_to/streaming.ipynb | 12 ++++----- docs/docs/how_to/structured_output.ipynb | 6 ++--- docs/docs/how_to/summarize_refine.ipynb | 2 +- docs/docs/how_to/summarize_stuff.ipynb | 2 +- docs/docs/how_to/tool_artifacts.ipynb | 6 ++--- docs/docs/how_to/tool_calling.ipynb | 10 +++---- docs/docs/how_to/tool_choice.ipynb | 4 +-- docs/docs/how_to/tool_configure.ipynb | 4 +-- .../how_to/tool_results_pass_to_model.ipynb | 6 ++--- docs/docs/how_to/tool_runtime.ipynb | 4 +-- docs/docs/how_to/tool_stream_events.ipynb | 2 +- docs/docs/how_to/tools_builtin.ipynb | 4 +-- docs/docs/how_to/tools_error.ipynb | 4 +-- docs/docs/how_to/tools_prompting.ipynb | 8 +++--- docs/docs/how_to/trim_messages.ipynb | 8 +++--- docs/docs/integrations/chat/anthropic.ipynb | 2 +- 
.../integrations/chat/azure_chat_openai.ipynb | 2 +- docs/docs/integrations/chat/bedrock.ipynb | 2 +- docs/docs/integrations/chat/cerebras.ipynb | 2 +- docs/docs/integrations/chat/databricks.ipynb | 2 +- docs/docs/integrations/chat/fireworks.ipynb | 2 +- .../chat/google_generative_ai.ipynb | 2 +- .../chat/google_vertex_ai_palm.ipynb | 2 +- docs/docs/integrations/chat/huggingface.ipynb | 2 +- docs/docs/integrations/chat/index.mdx | 2 +- docs/docs/integrations/chat/mistralai.ipynb | 2 +- .../chat/nvidia_ai_endpoints.ipynb | 2 +- .../integrations/chat/oci_data_science.ipynb | 2 +- .../integrations/chat/oci_generative_ai.ipynb | 2 +- docs/docs/integrations/chat/openai.ipynb | 2 +- docs/docs/integrations/chat/sambanova.ipynb | 2 +- docs/docs/integrations/chat/sambastudio.ipynb | 2 +- docs/docs/integrations/chat/vllm.ipynb | 2 +- docs/docs/integrations/chat/yi.ipynb | 2 +- .../document_loaders/bshtml.ipynb | 2 +- .../integrations/document_loaders/json.ipynb | 2 +- .../document_loaders/langsmith.ipynb | 2 +- .../document_loaders/pypdfium2.ipynb | 2 +- .../document_loaders/pypdfloader.ipynb | 2 +- .../document_loaders/unstructured_file.ipynb | 2 +- .../unstructured_markdown.ipynb | 2 +- .../integrations/document_loaders/xml.ipynb | 2 +- docs/docs/integrations/llms/anthropic.ipynb | 2 +- .../docs/integrations/llms/azure_openai.ipynb | 2 +- docs/docs/integrations/llms/bedrock.ipynb | 2 +- docs/docs/integrations/llms/cohere.ipynb | 2 +- docs/docs/integrations/llms/databricks.ipynb | 2 +- docs/docs/integrations/llms/fireworks.ipynb | 2 +- docs/docs/integrations/llms/google_ai.ipynb | 2 +- .../llms/google_vertex_ai_palm.ipynb | 2 +- docs/docs/integrations/llms/index.mdx | 4 +-- .../llms/nvidia_ai_endpoints.ipynb | 2 +- docs/docs/integrations/llms/ollama.ipynb | 2 +- docs/docs/integrations/llms/openai.ipynb | 2 +- docs/docs/integrations/llms/together.ipynb | 2 +- .../docs/integrations/providers/databricks.md | 2 +- .../providers/mlflow_tracking.ipynb | 2 +- .../retrievers/azure_ai_search.ipynb | 2 +- .../integrations/retrievers/bedrock.ipynb | 2 +- docs/docs/integrations/retrievers/box.ipynb | 2 +- .../retrievers/elasticsearch_retriever.ipynb | 6 ++--- .../retrievers/google_vertex_ai_search.ipynb | 4 +-- docs/docs/integrations/retrievers/index.mdx | 2 +- .../retrievers/milvus_hybrid_search.ipynb | 2 +- docs/docs/integrations/stores/astradb.ipynb | 2 +- docs/docs/integrations/stores/cassandra.ipynb | 2 +- .../integrations/stores/elasticsearch.ipynb | 2 +- .../integrations/stores/file_system.ipynb | 2 +- docs/docs/integrations/stores/in_memory.ipynb | 2 +- docs/docs/integrations/stores/redis.ipynb | 2 +- .../integrations/stores/upstash_redis.ipynb | 2 +- .../text_embedding/databricks.ipynb | 2 +- docs/docs/integrations/tools/gmail.ipynb | 2 +- .../docs/integrations/tools/jina_search.ipynb | 4 +-- docs/docs/integrations/tools/requests.ipynb | 2 +- docs/docs/integrations/tools/slack.ipynb | 2 +- .../integrations/tools/sql_database.ipynb | 2 +- .../integrations/tools/tavily_search.ipynb | 4 +-- .../integrations/vectorstores/astradb.ipynb | 2 +- .../integrations/vectorstores/chroma.ipynb | 2 +- .../vectorstores/clickhouse.ipynb | 2 +- .../integrations/vectorstores/couchbase.ipynb | 2 +- .../databricks_vector_search.ipynb | 2 +- .../vectorstores/elasticsearch.ipynb | 2 +- .../integrations/vectorstores/faiss.ipynb | 2 +- docs/docs/integrations/vectorstores/index.mdx | 2 +- .../integrations/vectorstores/milvus.ipynb | 2 +- .../vectorstores/mongodb_atlas.ipynb | 2 +- .../integrations/vectorstores/pgvector.ipynb | 
2 +- .../integrations/vectorstores/pinecone.ipynb | 2 +- .../integrations/vectorstores/qdrant.ipynb | 2 +- .../integrations/vectorstores/redis.ipynb | 2 +- .../errors/INVALID_PROMPT_INPUT.mdx | 2 +- docs/docs/tutorials/agents.ipynb | 6 ++--- docs/docs/tutorials/chatbot.ipynb | 6 ++--- docs/docs/tutorials/extraction.ipynb | 6 ++--- docs/docs/tutorials/llm_chain.ipynb | 8 +++--- docs/docs/tutorials/local_rag.ipynb | 8 +++--- docs/docs/tutorials/pdf_qa.ipynb | 14 +++++----- docs/docs/tutorials/qa_chat_history.ipynb | 10 +++---- docs/docs/tutorials/query_analysis.ipynb | 8 +++--- docs/docs/tutorials/rag.ipynb | 14 +++++----- docs/docs/tutorials/retrievers.ipynb | 2 +- docs/docs/tutorials/sql_qa.ipynb | 6 ++--- docs/docs/tutorials/summarization.ipynb | 4 +-- .../constitutional_chain.ipynb | 2 +- .../migrating_chains/conversation_chain.ipynb | 2 +- .../conversation_retrieval_chain.ipynb | 2 +- .../versions/migrating_chains/index.ipynb | 4 +-- .../versions/migrating_chains/llm_chain.ipynb | 2 +- .../migrating_chains/llm_math_chain.ipynb | 4 +-- .../migrating_chains/llm_router_chain.ipynb | 4 +-- .../map_rerank_docs_chain.ipynb | 2 +- .../migrating_chains/multi_prompt_chain.ipynb | 2 +- .../migrating_chains/refine_docs_chain.ipynb | 2 +- .../migrating_chains/retrieval_qa.ipynb | 2 +- .../migrating_chains/stuff_docs_chain.ipynb | 4 +-- .../migrating_memory/chat_history.ipynb | 4 +-- docs/docs/versions/migrating_memory/index.mdx | 2 +- 173 files changed, 306 insertions(+), 299 deletions(-) delete mode 100644 docs/docs/concepts/llms.mdx create mode 100644 docs/docs/concepts/text_llms.mdx diff --git a/docs/docs/concepts/callbacks.mdx b/docs/docs/concepts/callbacks.mdx index 6e3975271d..62fd28341b 100644 --- a/docs/docs/concepts/callbacks.mdx +++ b/docs/docs/concepts/callbacks.mdx @@ -1,7 +1,7 @@ # Callbacks :::note Prerequisites -- [Runnable interface](/docs/concepts/#runnable-interface) +- [Runnable interface](/docs/concepts/runnables) ::: LangChain provides a callbacks system that allows you to hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks. diff --git a/docs/docs/concepts/few_shot_prompting.mdx b/docs/docs/concepts/few_shot_prompting.mdx index b7147addea..e48d76905d 100644 --- a/docs/docs/concepts/few_shot_prompting.mdx +++ b/docs/docs/concepts/few_shot_prompting.mdx @@ -70,7 +70,7 @@ Most state-of-the-art models these days are chat models, so we'll focus on forma If we insert our examples into the system prompt as a string, we'll need to make sure it's clear to the model where each example begins and which parts are the input versus output. Different models respond better to different syntaxes, like [ChatML](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/chat-markup-language), XML, TypeScript, etc. -If we insert our examples as messages, where each example is represented as a sequence of Human, AI messages, we might want to also assign [names](/docs/concepts/#messages) to our messages like `"example_user"` and `"example_assistant"` to make it clear that these messages correspond to different actors than the latest input message. +If we insert our examples as messages, where each example is represented as a sequence of Human, AI messages, we might want to also assign [names](/docs/concepts/messages) to our messages like `"example_user"` and `"example_assistant"` to make it clear that these messages correspond to different actors than the latest input message. 
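The named-message pattern described above can be sketched roughly as follows. This is illustrative only: it assumes the `langchain-openai` package, an `OPENAI_API_KEY` in the environment, and made-up arithmetic examples; any chat model that forwards message names would work.

```python
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI  # assumed provider; any name-aware chat model works

# Few-shot examples encoded as named Human/AI message pairs, so the model can
# distinguish the example turns from the real user input that follows them.
messages = [
    SystemMessage("Answer with a single word."),
    HumanMessage("2 + 2", name="example_user"),
    AIMessage("four", name="example_assistant"),
    HumanMessage("3 + 3", name="example_user"),
    AIMessage("six", name="example_assistant"),
    HumanMessage("5 + 4"),  # the actual question
]

model = ChatOpenAI(model="gpt-4o-mini")
print(model.invoke(messages).content)
```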
**Formatting tool call examples** diff --git a/docs/docs/concepts/lcel.mdx b/docs/docs/concepts/lcel.mdx index 9378ec8e92..63cced2dd4 100644 --- a/docs/docs/concepts/lcel.mdx +++ b/docs/docs/concepts/lcel.mdx @@ -22,7 +22,7 @@ LangChain optimizes the run-time execution of chains built with LCEL in a number - **Optimize parallel execution**: Run Runnables in parallel using [RunnableParallel](#RunnableParallel) or run multiple inputs through a given chain in parallel using the [Runnable Batch API](/docs/concepts/runnables#batch). Parallel execution can significantly reduce the latency as processing can be done in parallel instead of sequentially. - **Guarantee Async support**: Any chain built with LCEL can be run asynchronously using the [Runnable Async API](/docs/concepts/runnables#async-api). This can be useful when running chains in a server environment where you want to handle large number of requests concurrently. -- **Simplify streaming**: LCEL chains can be streamed, allowing for incremental output as the chain is executed. LangChain can optimize the streaming of the output to minimize the time-to-first-token(time elapsed until the first chunk of output from a [chat model](/docs/concepts/chat_models) or [llm](/docs/concepts/llms) comes out). +- **Simplify streaming**: LCEL chains can be streamed, allowing for incremental output as the chain is executed. LangChain can optimize the streaming of the output to minimize the time-to-first-token(time elapsed until the first chunk of output from a [chat model](/docs/concepts/chat_models) or [llm](/docs/concepts/text_llms) comes out). Other benefits include: diff --git a/docs/docs/concepts/llms.mdx b/docs/docs/concepts/llms.mdx deleted file mode 100644 index 5e2f7d98c7..0000000000 --- a/docs/docs/concepts/llms.mdx +++ /dev/null @@ -1,3 +0,0 @@ -# Large language models (llms) - -Please see the [Chat Model Concept Guide](/docs/concepts/chat_models) page for more information. \ No newline at end of file diff --git a/docs/docs/concepts/output_parsers.mdx b/docs/docs/concepts/output_parsers.mdx index a03daea873..d15bc6fdb6 100644 --- a/docs/docs/concepts/output_parsers.mdx +++ b/docs/docs/concepts/output_parsers.mdx @@ -7,7 +7,7 @@ The information here refers to parsers that take a text output from a model try to parse it into a more structured representation. More and more models are supporting function (or tool) calling, which handles this automatically. It is recommended to use function/tool calling rather than output parsing. -See documentation for that [here](/docs/concepts/#function-tool-calling). +See documentation for that [here](/docs/concepts/tool_calling). ::: diff --git a/docs/docs/concepts/streaming.mdx b/docs/docs/concepts/streaming.mdx index 7ab681b533..ba2c89d58c 100644 --- a/docs/docs/concepts/streaming.mdx +++ b/docs/docs/concepts/streaming.mdx @@ -106,7 +106,7 @@ While this API is available for use with [LangGraph](/docs/concepts/architecture For chains constructed using **LCEL**, the `.stream()` method only streams the output of the final step from te chain. This might be sufficient for some applications, but as you build more complex chains of several LLM calls together, you may want to use the intermediate values of the chain alongside the final output. For example, you may want to return sources alongside the final generation when building a chat-over-documents app. 
-There are ways to do this [using callbacks](/docs/concepts/#callbacks-1), or by constructing your chain in such a way that it passes intermediate +There are ways to do this [using callbacks](/docs/concepts/callbacks), or by constructing your chain in such a way that it passes intermediate values to the end with something like chained [`.assign()`](/docs/how_to/passthrough/) calls, but LangChain also includes an `.astream_events()` method that combines the flexibility of callbacks with the ergonomics of `.stream()`. When called, it returns an iterator which yields [various types of events](/docs/how_to/streaming/#event-reference) that you can filter and process according @@ -188,4 +188,4 @@ Please see the following how-to guides for specific examples of streaming in Lan For writing custom data to the stream, please see the following resources: * If using LangGraph, see [how to stream custom data](https://langchain-ai.github.io/langgraph/how-tos/streaming-content/). -* If using LCEL, see [how to dispatch custom callback events](https://python.langchain.com/docs/how_to/callbacks_custom_events/#astream-events-api). \ No newline at end of file +* If using LCEL, see [how to dispatch custom callback events](https://python.langchain.com/docs/how_to/callbacks_custom_events/#astream-events-api). diff --git a/docs/docs/concepts/text_llms.mdx new file mode 100644 index 0000000000..d35a72476a --- /dev/null +++ b/docs/docs/concepts/text_llms.mdx @@ -0,0 +1,10 @@ +# String-in, string-out LLMs + +:::tip +You are probably looking for the [Chat Model Concept Guide](/docs/concepts/chat_models) page for more information. +::: + +LangChain has implementations for older language models that take a string as input and return a string as output. These models are typically named without the "Chat" prefix (e.g., `Ollama`, `Anthropic`, `OpenAI`, etc.), and may include the "LLM" suffix (e.g., `OllamaLLM`, `AnthropicLLM`, `OpenAILLM`, etc.). These models implement the [BaseLLM](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.llms.BaseLLM.html#langchain_core.language_models.llms.BaseLLM) interface. + +Users should almost exclusively use the newer [Chat Models](/docs/concepts/chat_models), as most +model providers have adopted a chat-like interface for interacting with language models. \ No newline at end of file diff --git a/docs/docs/concepts/tool_calling.mdx index e377688334..c3c753ee52 100644 --- a/docs/docs/concepts/tool_calling.mdx +++ b/docs/docs/concepts/tool_calling.mdx @@ -141,7 +141,7 @@ For more details on usage, see our [how-to guides](/docs/how_to/#tools)! When designing [tools](/docs/concepts/tools/) to be used by a model, it is important to keep in mind that: -* Models that have explicit [tool-calling APIs](/docs/concepts/#functiontool-calling) will be better at tool calling than non-fine-tuned models. +* Models that have explicit [tool-calling APIs](/docs/concepts/tool_calling) will be better at tool calling than non-fine-tuned models. * Models will perform better if the tools have well-chosen names and descriptions. * Simple, narrowly scoped tools are easier for models to use than complex tools. * Asking the model to select from a large list of tools poses challenges for the model. 
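The `text_llms.mdx` page added above distinguishes string-in, string-out LLMs from chat models; the sketch below illustrates that difference. It assumes the `langchain-openai` package and an `OPENAI_API_KEY`; substitute any provider that ships both interfaces.

```python
from langchain_openai import OpenAI, ChatOpenAI

# Legacy string-in, string-out LLM (implements BaseLLM): prompt string in, plain string out.
legacy_llm = OpenAI(model="gpt-3.5-turbo-instruct")
completion = legacy_llm.invoke("Write a haiku about autumn.")
print(type(completion))  # <class 'str'>

# Newer chat model (implements BaseChatModel): messages in, AIMessage out.
chat_model = ChatOpenAI(model="gpt-4o-mini")
reply = chat_model.invoke("Write a haiku about autumn.")
print(type(reply).__name__, reply.content)  # AIMessage ...
```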
diff --git a/docs/docs/contributing/documentation/style_guide.mdx b/docs/docs/contributing/documentation/style_guide.mdx index 83a3ae3c80..76a6ab837b 100644 --- a/docs/docs/contributing/documentation/style_guide.mdx +++ b/docs/docs/contributing/documentation/style_guide.mdx @@ -92,8 +92,8 @@ To quote the Diataxis website: Some examples include: -- [Retrieval conceptual docs](/docs/concepts/#retrieval) -- [Chat model conceptual docs](/docs/concepts/#chat-models) +- [Retrieval conceptual docs](/docs/concepts/retrieval) +- [Chat model conceptual docs](/docs/concepts/chat_models) Here are some high-level tips on writing a good conceptual guide: diff --git a/docs/docs/how_to/add_scores_retriever.ipynb b/docs/docs/how_to/add_scores_retriever.ipynb index 989b23594e..65d56cbcf8 100644 --- a/docs/docs/how_to/add_scores_retriever.ipynb +++ b/docs/docs/how_to/add_scores_retriever.ipynb @@ -75,7 +75,7 @@ "\n", "To obtain scores from a vector store retriever, we wrap the underlying vector store's `.similarity_search_with_score` method in a short function that packages scores into the associated document's metadata.\n", "\n", - "We add a `@chain` decorator to the function to create a [Runnable](/docs/concepts/#langchain-expression-language) that can be used similarly to a typical retriever." + "We add a `@chain` decorator to the function to create a [Runnable](/docs/concepts/lcel) that can be used similarly to a typical retriever." ] }, { diff --git a/docs/docs/how_to/agent_executor.ipynb b/docs/docs/how_to/agent_executor.ipynb index 657a6e5b37..995b631f17 100644 --- a/docs/docs/how_to/agent_executor.ipynb +++ b/docs/docs/how_to/agent_executor.ipynb @@ -31,10 +31,10 @@ "## Concepts\n", "\n", "Concepts we will cover are:\n", - "- Using [language models](/docs/concepts/#chat-models), in particular their tool calling ability\n", - "- Creating a [Retriever](/docs/concepts/#retrievers) to expose specific information to our agent\n", - "- Using a Search [Tool](/docs/concepts/#tools) to look up things online\n", - "- [`Chat History`](/docs/concepts/#chat-history), which allows a chatbot to \"remember\" past interactions and take them into account when responding to follow-up questions. \n", + "- Using [language models](/docs/concepts/chat_models), in particular their tool calling ability\n", + "- Creating a [Retriever](/docs/concepts/retrievers) to expose specific information to our agent\n", + "- Using a Search [Tool](/docs/concepts/tools) to look up things online\n", + "- [`Chat History`](/docs/concepts/chat_history), which allows a chatbot to \"remember\" past interactions and take them into account when responding to follow-up questions. \n", "- Debugging and tracing your application using [LangSmith](/docs/concepts/#langsmith)\n", "\n", "## Setup\n", @@ -415,7 +415,7 @@ "source": [ "## Create the agent\n", "\n", - "Now that we have defined the tools and the LLM, we can create the agent. We will be using a tool calling agent - for more information on this type of agent, as well as other options, see [this guide](/docs/concepts/#agent_types/).\n", + "Now that we have defined the tools and the LLM, we can create the agent. We will be using a tool calling agent - for more information on this type of agent, as well as other options, see [this guide](/docs/concepts/agents/).\n", "\n", "We can first choose the prompt we want to use to guide the agent.\n", "\n", @@ -457,7 +457,7 @@ "id": "f8014c9d", "metadata": {}, "source": [ - "Now, we can initialize the agent with the LLM, the prompt, and the tools. 
The agent is responsible for taking in input and deciding what actions to take. Crucially, the Agent does not execute those actions - that is done by the AgentExecutor (next step). For more information about how to think about these components, see our [conceptual guide](/docs/concepts/#agents).\n", + "Now, we can initialize the agent with the LLM, the prompt, and the tools. The agent is responsible for taking in input and deciding what actions to take. Crucially, the Agent does not execute those actions - that is done by the AgentExecutor (next step). For more information about how to think about these components, see our [conceptual guide](/docs/concepts/agents).\n", "\n", "Note that we are passing in the `model`, not `model_with_tools`. That is because `create_tool_calling_agent` will call `.bind_tools` for us under the hood." ] diff --git a/docs/docs/how_to/assign.ipynb b/docs/docs/how_to/assign.ipynb index 59e4a0ed3e..b231b0fb78 100644 --- a/docs/docs/how_to/assign.ipynb +++ b/docs/docs/how_to/assign.ipynb @@ -19,7 +19,7 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n", + "- [LangChain Expression Language (LCEL)](/docs/concepts/lcel)\n", "- [Chaining runnables](/docs/how_to/sequence/)\n", "- [Calling runnables in parallel](/docs/how_to/parallel/)\n", "- [Custom functions](/docs/how_to/functions/)\n", @@ -29,7 +29,7 @@ "\n", "An alternate way of [passing data through](/docs/how_to/passthrough) steps of a chain is to leave the current values of the chain state unchanged while assigning a new value under a given key. The [`RunnablePassthrough.assign()`](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html#langchain_core.runnables.passthrough.RunnablePassthrough.assign) static method takes an input value and adds the extra arguments passed to the assign function.\n", "\n", - "This is useful in the common [LangChain Expression Language](/docs/concepts/#langchain-expression-language) pattern of additively creating a dictionary to use as input to a later step.\n", + "This is useful in the common [LangChain Expression Language](/docs/concepts/lcel) pattern of additively creating a dictionary to use as input to a later step.\n", "\n", "Here's an example:" ] diff --git a/docs/docs/how_to/binding.ipynb b/docs/docs/how_to/binding.ipynb index c25f038e2a..0d41fd8940 100644 --- a/docs/docs/how_to/binding.ipynb +++ b/docs/docs/how_to/binding.ipynb @@ -21,7 +21,7 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n", + "- [LangChain Expression Language (LCEL)](/docs/concepts/lcel)\n", "- [Chaining runnables](/docs/how_to/sequence/)\n", "- [Tool calling](/docs/how_to/tool_calling)\n", "\n", diff --git a/docs/docs/how_to/callbacks_async.ipynb b/docs/docs/how_to/callbacks_async.ipynb index 4e977852eb..f42f7f0a04 100644 --- a/docs/docs/how_to/callbacks_async.ipynb +++ b/docs/docs/how_to/callbacks_async.ipynb @@ -10,7 +10,7 @@ "\n", "This guide assumes familiarity with the following concepts:\n", "\n", - "- [Callbacks](/docs/concepts/#callbacks)\n", + "- [Callbacks](/docs/concepts/callbacks)\n", "- [Custom callback handlers](/docs/how_to/custom_callbacks)\n", ":::\n", "\n", diff --git a/docs/docs/how_to/callbacks_attach.ipynb 
b/docs/docs/how_to/callbacks_attach.ipynb index 8115da5a4a..7d06f3b764 100644 --- a/docs/docs/how_to/callbacks_attach.ipynb +++ b/docs/docs/how_to/callbacks_attach.ipynb @@ -10,7 +10,7 @@ "\n", "This guide assumes familiarity with the following concepts:\n", "\n", - "- [Callbacks](/docs/concepts/#callbacks)\n", + "- [Callbacks](/docs/concepts/callbacks)\n", "- [Custom callback handlers](/docs/how_to/custom_callbacks)\n", "- [Chaining runnables](/docs/how_to/sequence)\n", "- [Attach runtime arguments to a Runnable](/docs/how_to/binding)\n", diff --git a/docs/docs/how_to/callbacks_constructor.ipynb b/docs/docs/how_to/callbacks_constructor.ipynb index 20cc043d63..bae51a02f2 100644 --- a/docs/docs/how_to/callbacks_constructor.ipynb +++ b/docs/docs/how_to/callbacks_constructor.ipynb @@ -10,7 +10,7 @@ "\n", "This guide assumes familiarity with the following concepts:\n", "\n", - "- [Callbacks](/docs/concepts/#callbacks)\n", + "- [Callbacks](/docs/concepts/callbacks)\n", "- [Custom callback handlers](/docs/how_to/custom_callbacks)\n", "\n", ":::\n", diff --git a/docs/docs/how_to/callbacks_custom_events.ipynb b/docs/docs/how_to/callbacks_custom_events.ipynb index 429824c34e..7282cd8de7 100644 --- a/docs/docs/how_to/callbacks_custom_events.ipynb +++ b/docs/docs/how_to/callbacks_custom_events.ipynb @@ -10,13 +10,13 @@ "\n", "This guide assumes familiarity with the following concepts:\n", "\n", - "- [Callbacks](/docs/concepts/#callbacks)\n", + "- [Callbacks](/docs/concepts/callbacks)\n", "- [Custom callback handlers](/docs/how_to/custom_callbacks)\n", - "- [Astream Events API](/docs/concepts/#astream_events) the `astream_events` method will surface custom callback events.\n", + "- [Astream Events API](/docs/concepts/streaming/#astream_events) the `astream_events` method will surface custom callback events.\n", ":::\n", "\n", - "In some situations, you may want to dipsatch a custom callback event from within a [Runnable](/docs/concepts/#runnable-interface) so it can be surfaced\n", - "in a custom callback handler or via the [Astream Events API](/docs/concepts/#astream_events).\n", + "In some situations, you may want to dipsatch a custom callback event from within a [Runnable](/docs/concepts/runnables) so it can be surfaced\n", + "in a custom callback handler or via the [Astream Events API](/docs/concepts/streaming/#astream_events).\n", "\n", "For example, if you have a long running tool with multiple steps, you can dispatch custom events between the steps and use these custom events to monitor progress.\n", "You could also surface these custom events to an end user of your application to show them how the current task is progressing.\n", @@ -64,7 +64,7 @@ "source": [ "## Astream Events API\n", "\n", - "The most useful way to consume custom events is via the [Astream Events API](/docs/concepts/#astream_events).\n", + "The most useful way to consume custom events is via the [Astream Events API](/docs/concepts/streaming/#astream_events).\n", "\n", "We can use the `async` `adispatch_custom_event` API to emit custom events in an async setting. 
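As a rough sketch of the `adispatch_custom_event` pattern referenced in that guide (assuming `langchain-core` >= 0.2.15; the `config` argument is threaded through explicitly so the sketch also works on Python < 3.11):

```python
import asyncio

from langchain_core.callbacks.manager import adispatch_custom_event
from langchain_core.runnables import RunnableConfig, RunnableLambda

async def slow_step(query: str, config: RunnableConfig) -> str:
    # Emit progress events that surface via callbacks or astream_events.
    await adispatch_custom_event("progress", {"step": "started", "query": query}, config=config)
    await asyncio.sleep(0.1)  # stand-in for real work
    await adispatch_custom_event("progress", {"step": "finished"}, config=config)
    return query.upper()

runnable = RunnableLambda(slow_step)

async def main() -> None:
    async for event in runnable.astream_events("hello", version="v2"):
        if event["event"] == "on_custom_event":
            print(event["name"], event["data"])

asyncio.run(main())
```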
\n", "\n", diff --git a/docs/docs/how_to/callbacks_runtime.ipynb b/docs/docs/how_to/callbacks_runtime.ipynb index a4b15416df..ff04fcde58 100644 --- a/docs/docs/how_to/callbacks_runtime.ipynb +++ b/docs/docs/how_to/callbacks_runtime.ipynb @@ -10,7 +10,7 @@ "\n", "This guide assumes familiarity with the following concepts:\n", "\n", - "- [Callbacks](/docs/concepts/#callbacks)\n", + "- [Callbacks](/docs/concepts/callbacks)\n", "- [Custom callback handlers](/docs/how_to/custom_callbacks)\n", "\n", ":::\n", diff --git a/docs/docs/how_to/chat_model_caching.ipynb b/docs/docs/how_to/chat_model_caching.ipynb index b305223904..d9a7f38445 100644 --- a/docs/docs/how_to/chat_model_caching.ipynb +++ b/docs/docs/how_to/chat_model_caching.ipynb @@ -10,8 +10,8 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [Chat models](/docs/concepts/#chat-models)\n", - "- [LLMs](/docs/concepts/#llms)\n", + "- [Chat models](/docs/concepts/chat_models)\n", + "- [LLMs](/docs/concepts/text_llms)\n", "\n", ":::\n", "\n", diff --git a/docs/docs/how_to/chat_model_rate_limiting.ipynb b/docs/docs/how_to/chat_model_rate_limiting.ipynb index d34084becb..32bbaf3779 100644 --- a/docs/docs/how_to/chat_model_rate_limiting.ipynb +++ b/docs/docs/how_to/chat_model_rate_limiting.ipynb @@ -10,8 +10,8 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [Chat models](/docs/concepts/#chat-models)\n", - "- [LLMs](/docs/concepts/#llms)\n", + "- [Chat models](/docs/concepts/chat_models)\n", + "- [LLMs](/docs/concepts/text_llms)\n", ":::\n", "\n", "\n", diff --git a/docs/docs/how_to/chat_token_usage_tracking.ipynb b/docs/docs/how_to/chat_token_usage_tracking.ipynb index 84948920e8..29290b2d7d 100644 --- a/docs/docs/how_to/chat_token_usage_tracking.ipynb +++ b/docs/docs/how_to/chat_token_usage_tracking.ipynb @@ -10,7 +10,7 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [Chat models](/docs/concepts/#chat-models)\n", + "- [Chat models](/docs/concepts/chat_models)\n", "\n", ":::\n", "\n", diff --git a/docs/docs/how_to/chatbots_tools.ipynb b/docs/docs/how_to/chatbots_tools.ipynb index 4bbd425579..0c9bbf5259 100644 --- a/docs/docs/how_to/chatbots_tools.ipynb +++ b/docs/docs/how_to/chatbots_tools.ipynb @@ -10,9 +10,9 @@ "\n", "This guide assumes familiarity with the following concepts:\n", "\n", - "- [Chatbots](/docs/concepts/#messages)\n", + "- [Chatbots](/docs/concepts/messages)\n", "- [Agents](/docs/tutorials/agents)\n", - "- [Chat history](/docs/concepts/#chat-history)\n", + "- [Chat history](/docs/concepts/chat_history)\n", "\n", ":::\n", "\n", diff --git a/docs/docs/how_to/configure.ipynb b/docs/docs/how_to/configure.ipynb index 3864a9ea80..8b67a4a3f8 100644 --- a/docs/docs/how_to/configure.ipynb +++ b/docs/docs/how_to/configure.ipynb @@ -21,7 +21,7 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n", + "- [LangChain Expression Language (LCEL)](/docs/concepts/lcel)\n", "- [Chaining runnables](/docs/how_to/sequence/)\n", "- [Binding runtime arguments](/docs/how_to/binding/)\n", "\n", diff --git a/docs/docs/how_to/convert_runnable_to_tool.ipynb b/docs/docs/how_to/convert_runnable_to_tool.ipynb index bee037d6ac..d9967023a6 100644 --- a/docs/docs/how_to/convert_runnable_to_tool.ipynb +++ b/docs/docs/how_to/convert_runnable_to_tool.ipynb @@ 
-42,7 +42,7 @@ "source": [ "LangChain [tools](/docs/concepts#tools) are interfaces that an agent, chain, or chat model can use to interact with the world. See [here](/docs/how_to/#tools) for how-to guides covering tool-calling, built-in tools, custom tools, and more information.\n", "\n", - "LangChain tools-- instances of [BaseTool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.BaseTool.html)-- are [Runnables](/docs/concepts/#runnable-interface) with additional constraints that enable them to be invoked effectively by language models:\n", + "LangChain tools-- instances of [BaseTool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.BaseTool.html)-- are [Runnables](/docs/concepts/runnables) with additional constraints that enable them to be invoked effectively by language models:\n", "\n", "- Their inputs are constrained to be serializable, specifically strings and Python `dict` objects;\n", "- They contain names and descriptions indicating how and when they should be used;\n", @@ -259,9 +259,9 @@ "source": [ "## In agents\n", "\n", - "Below we will incorporate LangChain Runnables as tools in an [agent](/docs/concepts/#agents) application. We will demonstrate with:\n", + "Below we will incorporate LangChain Runnables as tools in an [agent](/docs/concepts/agents) application. We will demonstrate with:\n", "\n", - "- a document [retriever](/docs/concepts/#retrievers);\n", + "- a document [retriever](/docs/concepts/retrievers);\n", "- a simple [RAG](/docs/tutorials/rag/) chain, allowing an agent to delegate relevant queries to it.\n", "\n", "We first instantiate a chat model that supports [tool calling](/docs/how_to/tool_calling/):\n", diff --git a/docs/docs/how_to/custom_callbacks.ipynb b/docs/docs/how_to/custom_callbacks.ipynb index 2579f8b8ae..211f23ae22 100644 --- a/docs/docs/how_to/custom_callbacks.ipynb +++ b/docs/docs/how_to/custom_callbacks.ipynb @@ -10,7 +10,7 @@ "\n", "This guide assumes familiarity with the following concepts:\n", "\n", - "- [Callbacks](/docs/concepts/#callbacks)\n", + "- [Callbacks](/docs/concepts/callbacks)\n", "\n", ":::\n", "\n", diff --git a/docs/docs/how_to/custom_chat_model.ipynb b/docs/docs/how_to/custom_chat_model.ipynb index 09a34b521a..708a0942c9 100644 --- a/docs/docs/how_to/custom_chat_model.ipynb +++ b/docs/docs/how_to/custom_chat_model.ipynb @@ -10,7 +10,7 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [Chat models](/docs/concepts/#chat-models)\n", + "- [Chat models](/docs/concepts/chat_models)\n", "\n", ":::\n", "\n", @@ -28,7 +28,7 @@ "\n", "Chat models take messages as inputs and return a message as output. \n", "\n", - "LangChain has a few [built-in message types](/docs/concepts/#message-types):\n", + "LangChain has a few [built-in message types](/docs/concepts/messages):\n", "\n", "| Message Type | Description |\n", "|-----------------------|-------------------------------------------------------------------------------------------------|\n", diff --git a/docs/docs/how_to/document_loader_pdf.ipynb b/docs/docs/how_to/document_loader_pdf.ipynb index 4daac5a7f5..83ccefdb47 100644 --- a/docs/docs/how_to/document_loader_pdf.ipynb +++ b/docs/docs/how_to/document_loader_pdf.ipynb @@ -50,7 +50,7 @@ "\n", "If you are looking for a simple string representation of text that is embedded in a PDF, the method below is appropriate. 
It will return a list of [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) objects-- one per page-- containing a single string of the page's text in the Document's `page_content` attribute. It will not parse text in images or scanned PDF pages. Under the hood it uses the [pypydf](https://pypdf.readthedocs.io/en/stable/) Python library.\n", "\n", - "LangChain [document loaders](/docs/concepts/#document-loaders) implement `lazy_load` and its async variant, `alazy_load`, which return iterators of `Document` objects. We will use these below." + "LangChain [document loaders](/docs/concepts/document_loaders) implement `lazy_load` and its async variant, `alazy_load`, which return iterators of `Document` objects. We will use these below." ] }, { @@ -147,7 +147,7 @@ "\n", "### Vector search over PDFs\n", "\n", - "Once we have loaded PDFs into LangChain `Document` objects, we can index them (e.g., a RAG application) in the usual way. Below we use OpenAI embeddings, although any LangChain [embeddings](https://python.langchain.com/docs/concepts/#embedding-models) model will suffice." + "Once we have loaded PDFs into LangChain `Document` objects, we can index them (e.g., a RAG application) in the usual way. Below we use OpenAI embeddings, although any LangChain [embeddings](https://python.langchain.com/docs/concepts/embedding_models) model will suffice." ] }, { @@ -804,7 +804,7 @@ "\n", "Many modern LLMs support inference over multimodal inputs (e.g., images). In some applications-- such as question-answering over PDFs with complex layouts, diagrams, or scans-- it may be advantageous to skip the PDF parsing, instead casting a PDF page to an image and passing it to a model directly. This allows a model to reason over the two dimensional content on the page, instead of a \"one-dimensional\" string representation.\n", "\n", - "In principle we can use any LangChain [chat model](/docs/concepts/#chat-models) that supports multimodal inputs. A list of these models is documented [here](/docs/integrations/chat/). Below we use OpenAI's `gpt-4o-mini`.\n", + "In principle we can use any LangChain [chat model](/docs/concepts/chat_models) that supports multimodal inputs. A list of these models is documented [here](/docs/integrations/chat/). Below we use OpenAI's `gpt-4o-mini`.\n", "\n", "First we define a short utility function to convert a PDF page to a base64-encoded image:" ] diff --git a/docs/docs/how_to/document_loader_web.ipynb b/docs/docs/how_to/document_loader_web.ipynb index 9eaa321822..04c4a3b7c6 100644 --- a/docs/docs/how_to/document_loader_web.ipynb +++ b/docs/docs/how_to/document_loader_web.ipynb @@ -389,7 +389,7 @@ "source": [ "### Vector search over page content\n", "\n", - "Once we have loaded the page contents into LangChain `Document` objects, we can index them (e.g., for a RAG application) in the usual way. Below we use OpenAI [embeddings](/docs/concepts/#embedding-models), although any LangChain embeddings model will suffice." + "Once we have loaded the page contents into LangChain `Document` objects, we can index them (e.g., for a RAG application) in the usual way. Below we use OpenAI [embeddings](/docs/concepts/embedding_models), although any LangChain embeddings model will suffice." 
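A minimal sketch of the indexing step these loader guides point to, assuming the `langchain-openai` package for embeddings and two hypothetical in-line documents standing in for loader output:

```python
from langchain_core.documents import Document
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings

# In practice these would come from a document loader's load() / lazy_load().
docs = [
    Document(page_content="LangChain document loaders return Document objects."),
    Document(page_content="Vector stores index embeddings for similarity search."),
]

vector_store = InMemoryVectorStore.from_documents(docs, OpenAIEmbeddings())
hits = vector_store.similarity_search("How are documents indexed?", k=1)
print(hits[0].page_content)
```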
] }, { diff --git a/docs/docs/how_to/dynamic_chain.ipynb b/docs/docs/how_to/dynamic_chain.ipynb index ce2b98ef2c..f292e6a1f4 100644 --- a/docs/docs/how_to/dynamic_chain.ipynb +++ b/docs/docs/how_to/dynamic_chain.ipynb @@ -10,7 +10,7 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following:\n", - "- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n", + "- [LangChain Expression Language (LCEL)](/docs/concepts/lcel)\n", "- [How to turn any function into a runnable](/docs/how_to/functions)\n", "\n", ":::\n", diff --git a/docs/docs/how_to/example_selectors_langsmith.ipynb b/docs/docs/how_to/example_selectors_langsmith.ipynb index 0db835427c..2c59ed313c 100644 --- a/docs/docs/how_to/example_selectors_langsmith.ipynb +++ b/docs/docs/how_to/example_selectors_langsmith.ipynb @@ -11,8 +11,8 @@ "import Compatibility from \"@theme/Compatibility\";\n", "\n", "\n", "\n", diff --git a/docs/docs/how_to/few_shot_examples.ipynb b/docs/docs/how_to/few_shot_examples.ipynb index 69ffd691bc..d583be400a 100644 --- a/docs/docs/how_to/few_shot_examples.ipynb +++ b/docs/docs/how_to/few_shot_examples.ipynb @@ -20,9 +20,9 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [Prompt templates](/docs/concepts/#prompt-templates)\n", - "- [Example selectors](/docs/concepts/#example-selectors)\n", - "- [LLMs](/docs/concepts/#llms)\n", + "- [Prompt templates](/docs/concepts/prompt_templates)\n", + "- [Example selectors](/docs/concepts/example_selectors)\n", + "- [LLMs](/docs/concepts/text_llms)\n", "- [Vectorstores](/docs/concepts/#vector-stores)\n", "\n", ":::\n", diff --git a/docs/docs/how_to/few_shot_examples_chat.ipynb b/docs/docs/how_to/few_shot_examples_chat.ipynb index 5ccc06d9fc..1192a211f1 100644 --- a/docs/docs/how_to/few_shot_examples_chat.ipynb +++ b/docs/docs/how_to/few_shot_examples_chat.ipynb @@ -20,9 +20,9 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [Prompt templates](/docs/concepts/#prompt-templates)\n", - "- [Example selectors](/docs/concepts/#example-selectors)\n", - "- [Chat models](/docs/concepts/#chat-model)\n", + "- [Prompt templates](/docs/concepts/prompt_templates)\n", + "- [Example selectors](/docs/concepts/example_selectors)\n", + "- [Chat models](/docs/concepts/chat_models)\n", "- [Vectorstores](/docs/concepts/#vector-stores)\n", "\n", ":::\n", @@ -33,7 +33,7 @@ "\n", "The goal of few-shot prompt templates are to dynamically select examples based on an input, and then format the examples in a final prompt to provide for the model.\n", "\n", - "**Note:** The following code examples are for chat models only, since `FewShotChatMessagePromptTemplates` are designed to output formatted [chat messages](/docs/concepts/#message-types) rather than pure strings. For similar few-shot prompt examples for pure string templates compatible with completion models (LLMs), see the [few-shot prompt templates](/docs/how_to/few_shot_examples/) guide." + "**Note:** The following code examples are for chat models only, since `FewShotChatMessagePromptTemplates` are designed to output formatted [chat messages](/docs/concepts/messages) rather than pure strings. For similar few-shot prompt examples for pure string templates compatible with completion models (LLMs), see the [few-shot prompt templates](/docs/how_to/few_shot_examples/) guide." 
] }, { diff --git a/docs/docs/how_to/functions.ipynb b/docs/docs/how_to/functions.ipynb index be228013a9..c5b781f2ed 100644 --- a/docs/docs/how_to/functions.ipynb +++ b/docs/docs/how_to/functions.ipynb @@ -21,7 +21,7 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n", + "- [LangChain Expression Language (LCEL)](/docs/concepts/lcel)\n", "- [Chaining runnables](/docs/how_to/sequence/)\n", "\n", ":::\n", diff --git a/docs/docs/how_to/index.mdx b/docs/docs/how_to/index.mdx index f0be664384..3ca54fe42a 100644 --- a/docs/docs/how_to/index.mdx +++ b/docs/docs/how_to/index.mdx @@ -27,7 +27,7 @@ This highlights functionality that is core to using LangChain. ## LangChain Expression Language (LCEL) -[LangChain Expression Language](/docs/concepts/#langchain-expression-language-lcel) is a way to create arbitrary custom chains. It is built on the [Runnable](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html) protocol. +[LangChain Expression Language](/docs/concepts/lcel) is a way to create arbitrary custom chains. It is built on the [Runnable](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html) protocol. [**LCEL cheatsheet**](/docs/how_to/lcel_cheatsheet/): For a quick overview of how to use the main LCEL primitives. @@ -53,7 +53,7 @@ These are the core building blocks you can use when building applications. ### Prompt templates -[Prompt Templates](/docs/concepts/#prompt-templates) are responsible for formatting user input into a format that can be passed to a language model. +[Prompt Templates](/docs/concepts/prompt_templates) are responsible for formatting user input into a format that can be passed to a language model. - [How to: use few shot examples](/docs/how_to/few_shot_examples) - [How to: use few shot examples in chat models](/docs/how_to/few_shot_examples_chat/) @@ -62,7 +62,7 @@ These are the core building blocks you can use when building applications. ### Example selectors -[Example Selectors](/docs/concepts/#example-selectors) are responsible for selecting the correct few shot examples to pass to the prompt. +[Example Selectors](/docs/concepts/example_selectors) are responsible for selecting the correct few shot examples to pass to the prompt. - [How to: use example selectors](/docs/how_to/example_selectors) - [How to: select examples by length](/docs/how_to/example_selectors_length_based) @@ -73,7 +73,7 @@ These are the core building blocks you can use when building applications. ### Chat models -[Chat Models](/docs/concepts/#chat-models) are newer forms of language models that take messages in and output a message. +[Chat Models](/docs/concepts/chat_models) are newer forms of language models that take messages in and output a message. - [How to: do function/tool calling](/docs/how_to/tool_calling) - [How to: get models to return structured output](/docs/how_to/structured_output) @@ -94,7 +94,7 @@ These are the core building blocks you can use when building applications. ### Messages -[Messages](/docs/concepts/#messages) are the input and output of chat models. They have some `content` and a `role`, which describes the source of the message. +[Messages](/docs/concepts/messages) are the input and output of chat models. They have some `content` and a `role`, which describes the source of the message. 
- [How to: trim messages](/docs/how_to/trim_messages/) - [How to: filter messages](/docs/how_to/filter_messages/) @@ -102,7 +102,7 @@ These are the core building blocks you can use when building applications. ### LLMs -What LangChain calls [LLMs](/docs/concepts/#llms) are older forms of language models that take a string in and output a string. +What LangChain calls [LLMs](/docs/concepts/text_llms) are older forms of language models that take a string in and output a string. - [How to: cache model responses](/docs/how_to/llm_caching) - [How to: create a custom LLM class](/docs/how_to/custom_llm) @@ -112,7 +112,7 @@ What LangChain calls [LLMs](/docs/concepts/#llms) are older forms of language mo ### Output parsers -[Output Parsers](/docs/concepts/#output-parsers) are responsible for taking the output of an LLM and parsing into more structured format. +[Output Parsers](/docs/concepts/output_parsers) are responsible for taking the output of an LLM and parsing into more structured format. - [How to: use output parsers to parse an LLM response into structured format](/docs/how_to/output_parser_structured) - [How to: parse JSON output](/docs/how_to/output_parser_json) @@ -124,7 +124,7 @@ What LangChain calls [LLMs](/docs/concepts/#llms) are older forms of language mo ### Document loaders -[Document Loaders](/docs/concepts/#document-loaders) are responsible for loading documents from a variety of sources. +[Document Loaders](/docs/concepts/document_loaders) are responsible for loading documents from a variety of sources. - [How to: load PDF files](/docs/how_to/document_loader_pdf) - [How to: load web pages](/docs/how_to/document_loader_web) @@ -138,7 +138,7 @@ What LangChain calls [LLMs](/docs/concepts/#llms) are older forms of language mo ### Text splitters -[Text Splitters](/docs/concepts/#text-splitters) take a document and split into chunks that can be used for retrieval. +[Text Splitters](/docs/concepts/text_splitters) take a document and split into chunks that can be used for retrieval. - [How to: recursively split text](/docs/how_to/recursive_text_splitter) - [How to: split by HTML headers](/docs/how_to/HTML_header_metadata_splitter) @@ -152,7 +152,7 @@ What LangChain calls [LLMs](/docs/concepts/#llms) are older forms of language mo ### Embedding models -[Embedding Models](/docs/concepts/#embedding-models) take a piece of text and create a numerical representation of it. +[Embedding Models](/docs/concepts/embedding_models) take a piece of text and create a numerical representation of it. - [How to: embed text data](/docs/how_to/embed_text) - [How to: cache embedding results](/docs/how_to/caching_embeddings) @@ -165,7 +165,7 @@ What LangChain calls [LLMs](/docs/concepts/#llms) are older forms of language mo ### Retrievers -[Retrievers](/docs/concepts/#retrievers) are responsible for taking a query and returning relevant documents. +[Retrievers](/docs/concepts/retrievers) are responsible for taking a query and returning relevant documents. - [How to: use a vector store to retrieve data](/docs/how_to/vectorstore_retriever) - [How to: generate multiple queries to retrieve data for](/docs/how_to/MultiQueryRetriever) @@ -188,7 +188,7 @@ Indexing is the process of keeping your vectorstore in-sync with the underlying ### Tools -LangChain [Tools](/docs/concepts/#tools) contain a description of the tool (to pass to the language model) as well as the implementation of the function to call. Refer [here](/docs/integrations/tools/) for a list of pre-buit tools. 
+LangChain [Tools](/docs/concepts/tools) contain a description of the tool (to pass to the language model) as well as the implementation of the function to call. Refer [here](/docs/integrations/tools/) for a list of pre-buit tools. - [How to: create tools](/docs/how_to/custom_tools) - [How to: use built-in tools and toolkits](/docs/how_to/tools_builtin) @@ -225,7 +225,7 @@ For in depth how-to guides for agents, please check out [LangGraph](https://lang ### Callbacks -[Callbacks](/docs/concepts/#callbacks) allow you to hook into the various stages of your LLM application's execution. +[Callbacks](/docs/concepts/callbacks) allow you to hook into the various stages of your LLM application's execution. - [How to: pass in callbacks at runtime](/docs/how_to/callbacks_runtime) - [How to: attach callbacks to a module](/docs/how_to/callbacks_attach) diff --git a/docs/docs/how_to/inspect.ipynb b/docs/docs/how_to/inspect.ipynb index e3f82cbcc9..a77f214a75 100644 --- a/docs/docs/how_to/inspect.ipynb +++ b/docs/docs/how_to/inspect.ipynb @@ -10,12 +10,12 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n", + "- [LangChain Expression Language (LCEL)](/docs/concepts/lcel)\n", "- [Chaining runnables](/docs/how_to/sequence/)\n", "\n", ":::\n", "\n", - "Once you create a runnable with [LangChain Expression Language](/docs/concepts/#langchain-expression-language), you may often want to inspect it to get a better sense for what is going on. This notebook covers some methods for doing so.\n", + "Once you create a runnable with [LangChain Expression Language](/docs/concepts/lcel), you may often want to inspect it to get a better sense for what is going on. This notebook covers some methods for doing so.\n", "\n", "This guide shows some ways you can programmatically introspect the internal steps of chains. If you are instead interested in debugging issues in your chain, see [this section](/docs/how_to/debugging) instead.\n", "\n", diff --git a/docs/docs/how_to/llm_token_usage_tracking.ipynb b/docs/docs/how_to/llm_token_usage_tracking.ipynb index e32c98a32a..2f1a8a92c1 100644 --- a/docs/docs/how_to/llm_token_usage_tracking.ipynb +++ b/docs/docs/how_to/llm_token_usage_tracking.ipynb @@ -13,7 +13,7 @@ "\n", "This guide assumes familiarity with the following concepts:\n", "\n", - "- [LLMs](/docs/concepts/#llms)\n", + "- [LLMs](/docs/concepts/text_llms)\n", ":::\n", "\n", "## Using LangSmith\n", diff --git a/docs/docs/how_to/local_llms.ipynb b/docs/docs/how_to/local_llms.ipynb index 2f8359a991..2bd4c44556 100644 --- a/docs/docs/how_to/local_llms.ipynb +++ b/docs/docs/how_to/local_llms.ipynb @@ -68,7 +68,7 @@ "\n", "### Formatting prompts\n", "\n", - "Some providers have [chat model](/docs/concepts/#chat-models) wrappers that takes care of formatting your input prompt for the specific local model you're using. However, if you are prompting local models with a [text-in/text-out LLM](/docs/concepts/#llms) wrapper, you may need to use a prompt tailed for your specific model.\n", + "Some providers have [chat model](/docs/concepts/chat_models) wrappers that takes care of formatting your input prompt for the specific local model you're using. 
However, if you are prompting local models with a [text-in/text-out LLM](/docs/concepts/text_llms) wrapper, you may need to use a prompt tailed for your specific model.\n", "\n", "This can [require the inclusion of special tokens](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). [Here's an example for LLaMA 2](https://smith.langchain.com/hub/rlm/rag-prompt-llama).\n", "\n", diff --git a/docs/docs/how_to/logprobs.ipynb b/docs/docs/how_to/logprobs.ipynb index c15565e0b9..33e56c83db 100644 --- a/docs/docs/how_to/logprobs.ipynb +++ b/docs/docs/how_to/logprobs.ipynb @@ -10,7 +10,7 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [Chat models](/docs/concepts/#chat-models)\n", + "- [Chat models](/docs/concepts/chat_models)\n", "\n", ":::\n", "\n", diff --git a/docs/docs/how_to/long_context_reorder.ipynb b/docs/docs/how_to/long_context_reorder.ipynb index 29bed05bbe..fe54330e0e 100644 --- a/docs/docs/how_to/long_context_reorder.ipynb +++ b/docs/docs/how_to/long_context_reorder.ipynb @@ -9,7 +9,7 @@ "\n", "Substantial performance degradations in [RAG](/docs/tutorials/rag) applications have been [documented](https://arxiv.org/abs/2307.03172) as the number of retrieved documents grows (e.g., beyond ten). In brief: models are liable to miss relevant information in the middle of long contexts.\n", "\n", - "By contrast, queries against vector stores will typically return documents in descending order of relevance (e.g., as measured by cosine similarity of [embeddings](/docs/concepts/#embedding-models)).\n", + "By contrast, queries against vector stores will typically return documents in descending order of relevance (e.g., as measured by cosine similarity of [embeddings](/docs/concepts/embedding_models)).\n", "\n", "To mitigate the [\"lost in the middle\"](https://arxiv.org/abs/2307.03172) effect, you can re-order documents after retrieval such that the most relevant documents are positioned at extrema (e.g., the first and last pieces of context), and the least relevant documents are positioned in the middle. In some cases this can help surface the most relevant information to LLMs.\n", "\n", diff --git a/docs/docs/how_to/message_history.ipynb b/docs/docs/how_to/message_history.ipynb index 9d0d8f44b6..ec4b981c6e 100644 --- a/docs/docs/how_to/message_history.ipynb +++ b/docs/docs/how_to/message_history.ipynb @@ -25,8 +25,8 @@ "\n", "This guide assumes familiarity with the following concepts:\n", "- [Chaining runnables](/docs/how_to/sequence/)\n", - "- [Prompt templates](/docs/concepts/#prompt-templates)\n", - "- [Chat Messages](/docs/concepts/#message-types)\n", + "- [Prompt templates](/docs/concepts/prompt_templates)\n", + "- [Chat Messages](/docs/concepts/messages)\n", "- [LangGraph persistence](https://langchain-ai.github.io/langgraph/how-tos/persistence/)\n", "\n", ":::\n", @@ -85,7 +85,7 @@ "source": [ "## Example: message inputs\n", "\n", - "Adding memory to a [chat model](/docs/concepts/#chat-models) provides a simple example. Chat models accept a list of messages as input and output a message. LangGraph includes a built-in `MessagesState` that we can use for this purpose.\n", + "Adding memory to a [chat model](/docs/concepts/chat_models) provides a simple example. Chat models accept a list of messages as input and output a message. LangGraph includes a built-in `MessagesState` that we can use for this purpose.\n", "\n", "Below, we:\n", "1. 
Define the graph state to be a list of messages;\n", diff --git a/docs/docs/how_to/migrate_agent.ipynb b/docs/docs/how_to/migrate_agent.ipynb index 49d374266b..975bbd8e76 100644 --- a/docs/docs/how_to/migrate_agent.ipynb +++ b/docs/docs/how_to/migrate_agent.ipynb @@ -24,7 +24,7 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [Agents](/docs/concepts/#agents)\n", + "- [Agents](/docs/concepts/agents)\n", "- [LangGraph](https://langchain-ai.github.io/langgraph/)\n", "- [Tool calling](/docs/how_to/tool_calling/)\n", "\n", @@ -298,7 +298,7 @@ "- A `SystemMessage`, which is added to the beginning of the list of messages.\n", "- A `string`, which is converted to a `SystemMessage` and added to the beginning of the list of messages.\n", "- A `Callable`, which should take in full graph state. The output is then passed to the language model.\n", - "- Or a [`Runnable`](/docs/concepts/#langchain-expression-language-lcel), which should take in full graph state. The output is then passed to the language model.\n", + "- Or a [`Runnable`](/docs/concepts/lcel), which should take in full graph state. The output is then passed to the language model.\n", "\n", "Here's how it looks in action:" ] diff --git a/docs/docs/how_to/multimodal_inputs.ipynb b/docs/docs/how_to/multimodal_inputs.ipynb index 3dd5c9c06c..6d0b0b736a 100644 --- a/docs/docs/how_to/multimodal_inputs.ipynb +++ b/docs/docs/how_to/multimodal_inputs.ipynb @@ -162,7 +162,7 @@ "source": [ "## Tool calls\n", "\n", - "Some multimodal models support [tool calling](/docs/concepts/#functiontool-calling) features as well. To call tools using such models, simply bind tools to them in the [usual way](/docs/how_to/tool_calling), and invoke the model using content blocks of the desired type (e.g., containing image data)." + "Some multimodal models support [tool calling](/docs/concepts/tool_calling) features as well. To call tools using such models, simply bind tools to them in the [usual way](/docs/how_to/tool_calling), and invoke the model using content blocks of the desired type (e.g., containing image data)." 
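A rough sketch of that pattern, binding a tool and passing an image content block in a single invocation. It assumes `langchain-openai` and `httpx`; the image URL and the `describe_weather` tool are placeholders for illustration.

```python
import base64

import httpx
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI  # assumed; any multimodal, tool-calling chat model works

@tool
def describe_weather(weather: str) -> str:
    """Record the weather visible in the supplied image."""
    return weather

# Placeholder image; any publicly reachable JPEG works here.
image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
image_data = base64.b64encode(httpx.get(image_url).content).decode("utf-8")

model = ChatOpenAI(model="gpt-4o-mini").bind_tools([describe_weather])
message = HumanMessage(
    content=[
        {"type": "text", "text": "Describe the weather in this image."},
        {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_data}"}},
    ]
)
print(model.invoke([message]).tool_calls)
```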
] }, { diff --git a/docs/docs/how_to/output_parser_json.ipynb b/docs/docs/how_to/output_parser_json.ipynb index 479a48d11f..0baadb17f6 100644 --- a/docs/docs/how_to/output_parser_json.ipynb +++ b/docs/docs/how_to/output_parser_json.ipynb @@ -10,9 +10,9 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [Chat models](/docs/concepts/#chat-models)\n", - "- [Output parsers](/docs/concepts/#output-parsers)\n", - "- [Prompt templates](/docs/concepts/#prompt-templates)\n", + "- [Chat models](/docs/concepts/chat_models)\n", + "- [Output parsers](/docs/concepts/output_parsers)\n", + "- [Prompt templates](/docs/concepts/prompt_templates)\n", "- [Structured output](/docs/how_to/structured_output)\n", "- [Chaining runnables together](/docs/how_to/sequence/)\n", "\n", diff --git a/docs/docs/how_to/output_parser_xml.ipynb b/docs/docs/how_to/output_parser_xml.ipynb index 4acfc96830..0ae3d1ed0d 100644 --- a/docs/docs/how_to/output_parser_xml.ipynb +++ b/docs/docs/how_to/output_parser_xml.ipynb @@ -10,9 +10,9 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [Chat models](/docs/concepts/#chat-models)\n", - "- [Output parsers](/docs/concepts/#output-parsers)\n", - "- [Prompt templates](/docs/concepts/#prompt-templates)\n", + "- [Chat models](/docs/concepts/chat_models)\n", + "- [Output parsers](/docs/concepts/output_parsers)\n", + "- [Prompt templates](/docs/concepts/prompt_templates)\n", "- [Structured output](/docs/how_to/structured_output)\n", "- [Chaining runnables together](/docs/how_to/sequence/)\n", "\n", diff --git a/docs/docs/how_to/output_parser_yaml.ipynb b/docs/docs/how_to/output_parser_yaml.ipynb index 36a3e095c0..fedc1f88fd 100644 --- a/docs/docs/how_to/output_parser_yaml.ipynb +++ b/docs/docs/how_to/output_parser_yaml.ipynb @@ -10,9 +10,9 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [Chat models](/docs/concepts/#chat-models)\n", - "- [Output parsers](/docs/concepts/#output-parsers)\n", - "- [Prompt templates](/docs/concepts/#prompt-templates)\n", + "- [Chat models](/docs/concepts/chat_models)\n", + "- [Output parsers](/docs/concepts/output_parsers)\n", + "- [Prompt templates](/docs/concepts/prompt_templates)\n", "- [Structured output](/docs/how_to/structured_output)\n", "- [Chaining runnables together](/docs/how_to/sequence/)\n", "\n", diff --git a/docs/docs/how_to/parallel.ipynb b/docs/docs/how_to/parallel.ipynb index 8d58b3b1ce..56c3c88b95 100644 --- a/docs/docs/how_to/parallel.ipynb +++ b/docs/docs/how_to/parallel.ipynb @@ -21,7 +21,7 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n", + "- [LangChain Expression Language (LCEL)](/docs/concepts/lcel)\n", "- [Chaining runnables](/docs/how_to/sequence)\n", "\n", ":::\n", diff --git a/docs/docs/how_to/passthrough.ipynb b/docs/docs/how_to/passthrough.ipynb index efe185a779..6a1aa35918 100644 --- a/docs/docs/how_to/passthrough.ipynb +++ b/docs/docs/how_to/passthrough.ipynb @@ -21,7 +21,7 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n", + "- [LangChain Expression Language (LCEL)](/docs/concepts/lcel)\n", "- [Chaining runnables](/docs/how_to/sequence/)\n", "- [Calling runnables in 
parallel](/docs/how_to/parallel/)\n", "- [Custom functions](/docs/how_to/functions/)\n", diff --git a/docs/docs/how_to/prompts_composition.ipynb b/docs/docs/how_to/prompts_composition.ipynb index 899ea10452..bf8d0f5fb2 100644 --- a/docs/docs/how_to/prompts_composition.ipynb +++ b/docs/docs/how_to/prompts_composition.ipynb @@ -20,7 +20,7 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [Prompt templates](/docs/concepts/#prompt-templates)\n", + "- [Prompt templates](/docs/concepts/prompt_templates)\n", "\n", ":::\n", "\n", diff --git a/docs/docs/how_to/prompts_partial.ipynb b/docs/docs/how_to/prompts_partial.ipynb index 5823b40bc0..b32e2586c1 100644 --- a/docs/docs/how_to/prompts_partial.ipynb +++ b/docs/docs/how_to/prompts_partial.ipynb @@ -20,7 +20,7 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [Prompt templates](/docs/concepts/#prompt-templates)\n", + "- [Prompt templates](/docs/concepts/prompt_templates)\n", "\n", ":::\n", "\n", diff --git a/docs/docs/how_to/qa_chat_history_how_to.ipynb b/docs/docs/how_to/qa_chat_history_how_to.ipynb index 743f60c634..d3dfc8d772 100644 --- a/docs/docs/how_to/qa_chat_history_how_to.ipynb +++ b/docs/docs/how_to/qa_chat_history_how_to.ipynb @@ -161,7 +161,7 @@ "id": "15f8ad59-19de-42e3-85a8-3ba95ee0bd43", "metadata": {}, "source": [ - "For the retriever, we will use [WebBaseLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html) to load the content of a web page. Here we instantiate a `InMemoryVectorStore` vectorstore and then use its [.as_retriever](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.VectorStore.html#langchain_core.vectorstores.VectorStore.as_retriever) method to build a retriever that can be incorporated into [LCEL](/docs/concepts/#langchain-expression-language) chains." + "For the retriever, we will use [WebBaseLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html) to load the content of a web page. Here we instantiate a `InMemoryVectorStore` vectorstore and then use its [.as_retriever](https://python.langchain.com/api_reference/core/vectorstores/langchain_core.vectorstores.VectorStore.html#langchain_core.vectorstores.VectorStore.as_retriever) method to build a retriever that can be incorporated into [LCEL](/docs/concepts/lcel) chains." ] }, { diff --git a/docs/docs/how_to/qa_sources.ipynb b/docs/docs/how_to/qa_sources.ipynb index 756a428def..c6a939821b 100644 --- a/docs/docs/how_to/qa_sources.ipynb +++ b/docs/docs/how_to/qa_sources.ipynb @@ -326,7 +326,7 @@ "\n", "Up to this point, we've simply propagated the documents returned from the retrieval step through to the final response. But this may not illustrate what subset of information the model relied on when generating its answer. Below, we show how to structure sources into the model response, allowing the model to report what specific context it relied on for its answer.\n", "\n", - "Because the above LCEL implementation is composed of [Runnable](/docs/concepts/#runnable-interface) primitives, it is straightforward to extend. Below, we make a simple change:\n", + "Because the above LCEL implementation is composed of [Runnable](/docs/concepts/runnables) primitives, it is straightforward to extend. 
Below, we make a simple change:\n", "\n", "- We use the model's tool-calling features to generate [structured output](/docs/how_to/structured_output/), consisting of an answer and list of sources. The schema for the response is represented in the `AnswerWithSources` TypedDict, below.\n", "- We remove the `StrOutputParser()`, as we expect `dict` output in this scenario." diff --git a/docs/docs/how_to/routing.ipynb b/docs/docs/how_to/routing.ipynb index 50da40f6dd..81ec752e99 100644 --- a/docs/docs/how_to/routing.ipynb +++ b/docs/docs/how_to/routing.ipynb @@ -21,11 +21,11 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n", + "- [LangChain Expression Language (LCEL)](/docs/concepts/lcel)\n", "- [Chaining runnables](/docs/how_to/sequence/)\n", "- [Configuring chain parameters at runtime](/docs/how_to/configure)\n", - "- [Prompt templates](/docs/concepts/#prompt-templates)\n", - "- [Chat Messages](/docs/concepts/#message-types)\n", + "- [Prompt templates](/docs/concepts/prompt_templates)\n", + "- [Chat Messages](/docs/concepts/messages)\n", "\n", ":::\n", "\n", diff --git a/docs/docs/how_to/sequence.ipynb b/docs/docs/how_to/sequence.ipynb index 8fc7be8d8f..4f20f4b1d3 100644 --- a/docs/docs/how_to/sequence.ipynb +++ b/docs/docs/how_to/sequence.ipynb @@ -22,14 +22,14 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n", - "- [Prompt templates](/docs/concepts/#prompt-templates)\n", - "- [Chat models](/docs/concepts/#chat-models)\n", - "- [Output parser](/docs/concepts/#output-parsers)\n", + "- [LangChain Expression Language (LCEL)](/docs/concepts/lcel)\n", + "- [Prompt templates](/docs/concepts/prompt_templates)\n", + "- [Chat models](/docs/concepts/chat_models)\n", + "- [Output parser](/docs/concepts/output_parsers)\n", "\n", ":::\n", "\n", - "One point about [LangChain Expression Language](/docs/concepts/#langchain-expression-language) is that any two runnables can be \"chained\" together into sequences. The output of the previous runnable's `.invoke()` call is passed as input to the next runnable. This can be done using the pipe operator (`|`), or the more explicit `.pipe()` method, which does the same thing.\n", + "One point about [LangChain Expression Language](/docs/concepts/lcel) is that any two runnables can be \"chained\" together into sequences. The output of the previous runnable's `.invoke()` call is passed as input to the next runnable. This can be done using the pipe operator (`|`), or the more explicit `.pipe()` method, which does the same thing.\n", "\n", "The resulting [`RunnableSequence`](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.RunnableSequence.html) is itself a runnable, which means it can be invoked, streamed, or further chained just like any other runnable. 
Advantages of chaining runnables in this way are efficient streaming (the sequence will stream output as soon as it is available), and debugging and tracing with tools like [LangSmith](/docs/how_to/debugging).\n", "\n", diff --git a/docs/docs/how_to/serialization.ipynb b/docs/docs/how_to/serialization.ipynb index e0355baf93..f572f6b369 100644 --- a/docs/docs/how_to/serialization.ipynb +++ b/docs/docs/how_to/serialization.ipynb @@ -14,7 +14,7 @@ "\n", "To save and load LangChain objects using this system, use the `dumpd`, `dumps`, `load`, and `loads` functions in the [load module](https://python.langchain.com/api_reference/core/load.html) of `langchain-core`. These functions support JSON and JSON-serializable objects.\n", "\n", - "All LangChain objects that inherit from [Serializable](https://python.langchain.com/api_reference/core/load/langchain_core.load.serializable.Serializable.html) are JSON-serializable. Examples include [messages](https://python.langchain.com/api_reference//python/core_api_reference.html#module-langchain_core.messages), [document objects](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) (e.g., as returned from [retrievers](/docs/concepts/#retrievers)), and most [Runnables](/docs/concepts/#langchain-expression-language-lcel), such as chat models, retrievers, and [chains](/docs/how_to/sequence) implemented with the LangChain Expression Language.\n", + "All LangChain objects that inherit from [Serializable](https://python.langchain.com/api_reference/core/load/langchain_core.load.serializable.Serializable.html) are JSON-serializable. Examples include [messages](https://python.langchain.com/api_reference//python/core_api_reference.html#module-langchain_core.messages), [document objects](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) (e.g., as returned from [retrievers](/docs/concepts/retrievers)), and most [Runnables](/docs/concepts/lcel), such as chat models, retrievers, and [chains](/docs/how_to/sequence) implemented with the LangChain Expression Language.\n", "\n", "Below we walk through an example with a simple [LLM chain](/docs/tutorials/llm_chain).\n", "\n", diff --git a/docs/docs/how_to/streaming.ipynb b/docs/docs/how_to/streaming.ipynb index 02b6e63466..25118b153a 100644 --- a/docs/docs/how_to/streaming.ipynb +++ b/docs/docs/how_to/streaming.ipynb @@ -24,15 +24,15 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [Chat models](/docs/concepts/#chat-models)\n", - "- [LangChain Expression Language](/docs/concepts/#langchain-expression-language)\n", - "- [Output parsers](/docs/concepts/#output-parsers)\n", + "- [Chat models](/docs/concepts/chat_models)\n", + "- [LangChain Expression Language](/docs/concepts/lcel)\n", + "- [Output parsers](/docs/concepts/output_parsers)\n", "\n", ":::\n", "\n", "Streaming is critical in making applications based on LLMs feel responsive to end-users.\n", "\n", - "Important LangChain primitives like [chat models](/docs/concepts/#chat-models), [output parsers](/docs/concepts/#output-parsers), [prompts](/docs/concepts/#prompt-templates), [retrievers](/docs/concepts/#retrievers), and [agents](/docs/concepts/#agents) implement the LangChain [Runnable Interface](/docs/concepts#interface).\n", + "Important LangChain primitives like [chat models](/docs/concepts/chat_models), [output parsers](/docs/concepts/output_parsers), [prompts](/docs/concepts/prompt_templates), 
[retrievers](/docs/concepts/retrievers), and [agents](/docs/concepts/agents) implement the LangChain [Runnable Interface](/docs/concepts#interface).\n", "\n", "This interface provides two general approaches to stream content:\n", "\n", @@ -42,7 +42,7 @@ "Let's take a look at both approaches, and try to understand how to use them.\n", "\n", ":::info\n", - "For a higher-level overview of streaming techniques in LangChain, see [this section of the conceptual guide](/docs/concepts/#streaming).\n", + "For a higher-level overview of streaming techniques in LangChain, see [this section of the conceptual guide](/docs/concepts/streaming).\n", ":::\n", "\n", "## Using Stream\n", @@ -1510,7 +1510,7 @@ "\n", "Now you've learned some ways to stream both final outputs and internal steps with LangChain.\n", "\n", - "To learn more, check out the other how-to guides in this section, or the [conceptual guide on Langchain Expression Language](/docs/concepts/#langchain-expression-language/)." + "To learn more, check out the other how-to guides in this section, or the [conceptual guide on Langchain Expression Language](/docs/concepts/lcel/)." ] } ], diff --git a/docs/docs/how_to/structured_output.ipynb b/docs/docs/how_to/structured_output.ipynb index 8a691a6e17..3e333e340b 100644 --- a/docs/docs/how_to/structured_output.ipynb +++ b/docs/docs/how_to/structured_output.ipynb @@ -25,8 +25,8 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [Chat models](/docs/concepts/#chat-models)\n", - "- [Function/tool calling](/docs/concepts/#functiontool-calling)\n", + "- [Chat models](/docs/concepts/chat_models)\n", + "- [Function/tool calling](/docs/concepts/tool_calling)\n", ":::\n", "\n", "It is often useful to have a model return output that matches a specific schema. One common use-case is extracting data from text to insert into a database or use with some other downstream system. This guide covers a few strategies for getting structured outputs from a model.\n", @@ -776,7 +776,7 @@ "\n", "### Custom Parsing\n", "\n", - "You can also create a custom prompt and parser with [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language), using a plain function to parse the output from the model:" + "You can also create a custom prompt and parser with [LangChain Expression Language (LCEL)](/docs/concepts/lcel), using a plain function to parse the output from the model:" ] }, { diff --git a/docs/docs/how_to/summarize_refine.ipynb b/docs/docs/how_to/summarize_refine.ipynb index 269be50eb8..10540606cf 100644 --- a/docs/docs/how_to/summarize_refine.ipynb +++ b/docs/docs/how_to/summarize_refine.ipynb @@ -33,7 +33,7 @@ "\n", "- LangGraph allows for individual steps (such as successive summarizations) to be streamed, allowing for greater control of execution;\n", "- LangGraph's [checkpointing](https://langchain-ai.github.io/langgraph/how-tos/persistence/) supports error recovery, extending with human-in-the-loop workflows, and easier incorporation into conversational applications.\n", - "- Because it is assembled from modular components, it is also simple to extend or modify (e.g., to incorporate [tool calling](/docs/concepts/#functiontool-calling) or other behavior).\n", + "- Because it is assembled from modular components, it is also simple to extend or modify (e.g., to incorporate [tool calling](/docs/concepts/tool_calling) or other behavior).\n", "\n", "Below, we demonstrate how to summarize text via iterative refinement." 
] diff --git a/docs/docs/how_to/summarize_stuff.ipynb b/docs/docs/how_to/summarize_stuff.ipynb index 0e3f069b8b..86fbb86e10 100644 --- a/docs/docs/how_to/summarize_stuff.ipynb +++ b/docs/docs/how_to/summarize_stuff.ipynb @@ -117,7 +117,7 @@ "source": [ "## Invoke chain\n", "\n", - "Because the chain is a [Runnable](/docs/concepts/#runnable-interface), it implements the usual methods for invocation:" + "Because the chain is a [Runnable](/docs/concepts/runnables), it implements the usual methods for invocation:" ] }, { diff --git a/docs/docs/how_to/tool_artifacts.ipynb b/docs/docs/how_to/tool_artifacts.ipynb index 9687834cf7..a6fa076c0c 100644 --- a/docs/docs/how_to/tool_artifacts.ipynb +++ b/docs/docs/how_to/tool_artifacts.ipynb @@ -10,9 +10,9 @@ ":::info Prerequisites\n", "This guide assumes familiarity with the following concepts:\n", "\n", - "- [ToolMessage](/docs/concepts/#toolmessage)\n", - "- [Tools](/docs/concepts/#tools)\n", - "- [Function/tool calling](/docs/concepts/#functiontool-calling)\n", + "- [ToolMessage](/docs/concepts/messages/#toolmessage)\n", + "- [Tools](/docs/concepts/tools)\n", + "- [Function/tool calling](/docs/concepts/tool_calling)\n", "\n", ":::\n", "\n", diff --git a/docs/docs/how_to/tool_calling.ipynb b/docs/docs/how_to/tool_calling.ipynb index c812617d19..295e718a67 100644 --- a/docs/docs/how_to/tool_calling.ipynb +++ b/docs/docs/how_to/tool_calling.ipynb @@ -23,13 +23,13 @@ "\n", "This guide assumes familiarity with the following concepts:\n", "\n", - "- [Chat models](/docs/concepts/#chat-models)\n", - "- [Tool calling](/docs/concepts/#functiontool-calling)\n", - "- [Tools](/docs/concepts/#tools)\n", - "- [Output parsers](/docs/concepts/#output-parsers)\n", + "- [Chat models](/docs/concepts/chat_models)\n", + "- [Tool calling](/docs/concepts/tool_calling)\n", + "- [Tools](/docs/concepts/tools)\n", + "- [Output parsers](/docs/concepts/output_parsers)\n", ":::\n", "\n", - "[Tool calling](/docs/concepts/#functiontool-calling) allows a chat model to respond to a given prompt by \"calling a tool\".\n", + "[Tool calling](/docs/concepts/tool_calling) allows a chat model to respond to a given prompt by \"calling a tool\".\n", "\n", "Remember, while the name \"tool calling\" implies that the model is directly performing some action, this is actually not the case! 
The model only generates the arguments to a tool, and actually running the tool (or not) is up to the user.\n", "\n", diff --git a/docs/docs/how_to/tool_choice.ipynb b/docs/docs/how_to/tool_choice.ipynb index 075d9a0a62..1f59546303 100644 --- a/docs/docs/how_to/tool_choice.ipynb +++ b/docs/docs/how_to/tool_choice.ipynb @@ -9,8 +9,8 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [Chat models](/docs/concepts/#chat-models)\n", - "- [LangChain Tools](/docs/concepts/#tools)\n", + "- [Chat models](/docs/concepts/chat_models)\n", + "- [LangChain Tools](/docs/concepts/tools)\n", "- [How to use a model to call tools](/docs/how_to/tool_calling)\n", ":::\n", "\n", diff --git a/docs/docs/how_to/tool_configure.ipynb b/docs/docs/how_to/tool_configure.ipynb index a97446e261..474d2f3c5f 100644 --- a/docs/docs/how_to/tool_configure.ipynb +++ b/docs/docs/how_to/tool_configure.ipynb @@ -10,9 +10,9 @@ "\n", "This guide assumes familiarity with the following concepts:\n", "\n", - "- [LangChain Tools](/docs/concepts/#tools)\n", + "- [LangChain Tools](/docs/concepts/tools)\n", "- [Custom tools](/docs/how_to/custom_tools)\n", - "- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language-lcel)\n", + "- [LangChain Expression Language (LCEL)](/docs/concepts/lcel)\n", "- [Configuring runnable behavior](/docs/how_to/configure/)\n", "\n", ":::\n", diff --git a/docs/docs/how_to/tool_results_pass_to_model.ipynb b/docs/docs/how_to/tool_results_pass_to_model.ipynb index 78e391e435..6ace9a02c5 100644 --- a/docs/docs/how_to/tool_results_pass_to_model.ipynb +++ b/docs/docs/how_to/tool_results_pass_to_model.ipynb @@ -9,14 +9,14 @@ ":::info Prerequisites\n", "This guide assumes familiarity with the following concepts:\n", "\n", - "- [LangChain Tools](/docs/concepts/#tools)\n", - "- [Function/tool calling](/docs/concepts/#functiontool-calling)\n", + "- [LangChain Tools](/docs/concepts/tools)\n", + "- [Function/tool calling](/docs/concepts/tool_calling)\n", "- [Using chat models to call tools](/docs/how_to/tool_calling)\n", "- [Defining custom tools](/docs/how_to/custom_tools/)\n", "\n", ":::\n", "\n", - "Some models are capable of [**tool calling**](/docs/concepts/#functiontool-calling) - generating arguments that conform to a specific user-provided schema. This guide will demonstrate how to use those tool cals to actually call a function and properly pass the results back to the model.\n", + "Some models are capable of [**tool calling**](/docs/concepts/tool_calling) - generating arguments that conform to a specific user-provided schema. 
This guide will demonstrate how to use those tool calls to actually call a function and properly pass the results back to the model.\n", "\n", "![Diagram of a tool call invocation](/img/tool_invocation.png)\n", "\n", diff --git a/docs/docs/how_to/tool_runtime.ipynb b/docs/docs/how_to/tool_runtime.ipynb index fbbd2beba9..d0e9df846e 100644 --- a/docs/docs/how_to/tool_runtime.ipynb +++ b/docs/docs/how_to/tool_runtime.ipynb @@ -10,8 +10,8 @@ "import Compatibility from \"@theme/Compatibility\";\n", "\n", "<Compat\n", diff --git a/docs/docs/how_to/tool_stream_events.ipynb b/docs/docs/how_to/tool_stream_events.ipynb index 6167e158d6..43d53c40d6 100644 --- a/docs/docs/how_to/tool_stream_events.ipynb +++ b/docs/docs/how_to/tool_stream_events.ipynb @@ -9,7 +9,7 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [LangChain Tools](/docs/concepts/#tools)\n", + "- [LangChain Tools](/docs/concepts/tools)\n", "- [Custom tools](/docs/how_to/custom_tools)\n", "- [Using stream events](/docs/how_to/streaming/#using-stream-events)\n", "- [Accessing RunnableConfig within a custom tool](/docs/how_to/tool_configure/)\n", diff --git a/docs/docs/how_to/tools_builtin.ipynb b/docs/docs/how_to/tools_builtin.ipynb index ea137060c4..9e3b0ed56d 100644 --- a/docs/docs/how_to/tools_builtin.ipynb +++ b/docs/docs/how_to/tools_builtin.ipynb @@ -23,8 +23,8 @@ "\n", "This guide assumes familiarity with the following concepts:\n", "\n", - "- [LangChain Tools](/docs/concepts/#tools)\n", - "- [LangChain Toolkits](/docs/concepts/#tools)\n", + "- [LangChain Tools](/docs/concepts/tools)\n", + "- [LangChain Toolkits](/docs/concepts/tools)\n", "\n", ":::\n", "\n", diff --git a/docs/docs/how_to/tools_error.ipynb b/docs/docs/how_to/tools_error.ipynb index 989236b5ef..7a881b89fb 100644 --- a/docs/docs/how_to/tools_error.ipynb +++ b/docs/docs/how_to/tools_error.ipynb @@ -10,8 +10,8 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "- [Chat models](/docs/concepts/#chat-models)\n", - "- [LangChain Tools](/docs/concepts/#tools)\n", + "- [Chat models](/docs/concepts/chat_models)\n", + "- [LangChain Tools](/docs/concepts/tools)\n", "- [How to use a model to call tools](/docs/how_to/tool_calling)\n", "\n", ":::\n", diff --git a/docs/docs/how_to/tools_prompting.ipynb b/docs/docs/how_to/tools_prompting.ipynb index 03cc039e60..29f82a914f 100644 --- a/docs/docs/how_to/tools_prompting.ipynb +++ b/docs/docs/how_to/tools_prompting.ipynb @@ -27,10 +27,10 @@ "\n", "This guide assumes familiarity with the following concepts:\n", "\n", - "- [LangChain Tools](/docs/concepts/#tools)\n", - "- [Function/tool calling](https://python.langchain.com/docs/concepts/#functiontool-calling)\n", - "- [Chat models](/docs/concepts/#chat-models)\n", - "- [LLMs](/docs/concepts/#llms)\n", + "- [LangChain Tools](/docs/concepts/tools)\n", + "- [Function/tool calling](https://python.langchain.com/docs/concepts/tool_calling)\n", + "- [Chat models](/docs/concepts/chat_models)\n", + "- [LLMs](/docs/concepts/text_llms)\n", "\n", ":::\n", "\n", diff --git a/docs/docs/how_to/trim_messages.ipynb b/docs/docs/how_to/trim_messages.ipynb index 6a882345e1..97b725dd72 100644 --- a/docs/docs/how_to/trim_messages.ipynb +++ b/docs/docs/how_to/trim_messages.ipynb @@ -11,10 +11,10 @@ "\n", "This guide assumes familiarity with the following concepts:\n", "\n", - "- [Messages](/docs/concepts/#messages)\n", - "- [Chat models](/docs/concepts/#chat-models)\n", + "- 
[Messages](/docs/concepts/messages)\n", + "- [Chat models](/docs/concepts/chat_models)\n", "- [Chaining](/docs/how_to/sequence/)\n", - "- [Chat history](/docs/concepts/#chat-history)\n", + "- [Chat history](/docs/concepts/chat_history)\n", "\n", "The methods in this guide also require `langchain-core>=0.2.9`.\n", "\n", @@ -28,7 +28,7 @@ "If passing the trimmed chat history back into a chat model directly, the trimmed chat history should satisfy the following properties:\n", "\n", "1. The resulting chat history should be **valid**. Usually this means that the following properties should be satisfied:\n", - " - The chat history **starts** with either (1) a `HumanMessage` or (2) a [SystemMessage](/docs/concepts/#systemmessage) followed by a `HumanMessage`.\n", + " - The chat history **starts** with either (1) a `HumanMessage` or (2) a [SystemMessage](/docs/concepts/messages/#systemmessage) followed by a `HumanMessage`.\n", " - The chat history **ends** with either a `HumanMessage` or a `ToolMessage`.\n", " - A `ToolMessage` can only appear after an `AIMessage` that involved a tool call. \n", " This can be achieved by setting `start_on=\"human\"` and `ends_on=(\"human\", \"tool\")`.\n", diff --git a/docs/docs/integrations/chat/anthropic.ipynb b/docs/docs/integrations/chat/anthropic.ipynb index b813f6a9ee..3b0fb9ee8f 100644 --- a/docs/docs/integrations/chat/anthropic.ipynb +++ b/docs/docs/integrations/chat/anthropic.ipynb @@ -17,7 +17,7 @@ "source": [ "# ChatAnthropic\n", "\n", - "This notebook provides a quick overview for getting started with Anthropic [chat models](/docs/concepts/#chat-models). For detailed documentation of all ChatAnthropic features and configurations head to the [API reference](https://python.langchain.com/api_reference/anthropic/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html).\n", + "This notebook provides a quick overview for getting started with Anthropic [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatAnthropic features and configurations head to the [API reference](https://python.langchain.com/api_reference/anthropic/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html).\n", "\n", "Anthropic has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [Anthropic docs](https://docs.anthropic.com/en/docs/models-overview).\n", "\n", diff --git a/docs/docs/integrations/chat/azure_chat_openai.ipynb b/docs/docs/integrations/chat/azure_chat_openai.ipynb index a0883d34b7..4552a5bb59 100644 --- a/docs/docs/integrations/chat/azure_chat_openai.ipynb +++ b/docs/docs/integrations/chat/azure_chat_openai.ipynb @@ -17,7 +17,7 @@ "source": [ "# AzureChatOpenAI\n", "\n", - "This guide will help you get started with AzureOpenAI [chat models](/docs/concepts/#chat-models). For detailed documentation of all AzureChatOpenAI features and configurations head to the [API reference](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.azure.AzureChatOpenAI.html).\n", + "This guide will help you get started with AzureOpenAI [chat models](/docs/concepts/chat_models). For detailed documentation of all AzureChatOpenAI features and configurations head to the [API reference](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.azure.AzureChatOpenAI.html).\n", "\n", "Azure OpenAI has several chat models. 
You can find information about their latest models and their costs, context windows, and supported input types in the [Azure docs](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).\n", "\n", diff --git a/docs/docs/integrations/chat/bedrock.ipynb b/docs/docs/integrations/chat/bedrock.ipynb index 750f391822..a24d23f7d7 100644 --- a/docs/docs/integrations/chat/bedrock.ipynb +++ b/docs/docs/integrations/chat/bedrock.ipynb @@ -17,7 +17,7 @@ "source": [ "# ChatBedrock\n", "\n", - "This doc will help you get started with AWS Bedrock [chat models](/docs/concepts/#chat-models). Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon via a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. Using Amazon Bedrock, you can easily experiment with and evaluate top FMs for your use case, privately customize them with your data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that execute tasks using your enterprise systems and data sources. Since Amazon Bedrock is serverless, you don't have to manage any infrastructure, and you can securely integrate and deploy generative AI capabilities into your applications using the AWS services you are already familiar with.\n", + "This doc will help you get started with AWS Bedrock [chat models](/docs/concepts/chat_models). Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon via a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. Using Amazon Bedrock, you can easily experiment with and evaluate top FMs for your use case, privately customize them with your data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that execute tasks using your enterprise systems and data sources. Since Amazon Bedrock is serverless, you don't have to manage any infrastructure, and you can securely integrate and deploy generative AI capabilities into your applications using the AWS services you are already familiar with.\n", "\n", "For more information on which models are accessible via Bedrock, head to the [AWS docs](https://docs.aws.amazon.com/bedrock/latest/userguide/models-features.html).\n", "\n", diff --git a/docs/docs/integrations/chat/cerebras.ipynb b/docs/docs/integrations/chat/cerebras.ipynb index 41183d1ec3..76c0fb8e17 100644 --- a/docs/docs/integrations/chat/cerebras.ipynb +++ b/docs/docs/integrations/chat/cerebras.ipynb @@ -17,7 +17,7 @@ "source": [ "# ChatCerebras\n", "\n", - "This notebook provides a quick overview for getting started with Cerebras [chat models](/docs/concepts/#chat-models). For detailed documentation of all ChatCerebras features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_cerebras.chat_models.ChatCerebras.html).\n", + "This notebook provides a quick overview for getting started with Cerebras [chat models](/docs/concepts/chat_models). 
For detailed documentation of all ChatCerebras features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_cerebras.chat_models.ChatCerebras.html).\n", "\n", "At Cerebras, we've developed the world's largest and fastest AI processor, the Wafer-Scale Engine-3 (WSE-3). The Cerebras CS-3 system, powered by the WSE-3, represents a new class of AI supercomputer that sets the standard for generative AI training and inference with unparalleled performance and scalability.\n", "\n", diff --git a/docs/docs/integrations/chat/databricks.ipynb b/docs/docs/integrations/chat/databricks.ipynb index 74c80b86cf..071a9f1688 100644 --- a/docs/docs/integrations/chat/databricks.ipynb +++ b/docs/docs/integrations/chat/databricks.ipynb @@ -21,7 +21,7 @@ "\n", "> [Databricks](https://www.databricks.com/) Lakehouse Platform unifies data, analytics, and AI on one platform. \n", "\n", - "This notebook provides a quick overview for getting started with Databricks [chat models](/docs/concepts/#chat-models). For detailed documentation of all ChatDatabricks features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.databricks.ChatDatabricks.html).\n", + "This notebook provides a quick overview for getting started with Databricks [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatDatabricks features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.databricks.ChatDatabricks.html).\n", "\n", "## Overview\n", "\n", diff --git a/docs/docs/integrations/chat/fireworks.ipynb b/docs/docs/integrations/chat/fireworks.ipynb index b43c5f56f8..0172b0a533 100644 --- a/docs/docs/integrations/chat/fireworks.ipynb +++ b/docs/docs/integrations/chat/fireworks.ipynb @@ -17,7 +17,7 @@ "source": [ "# ChatFireworks\n", "\n", - "This doc help you get started with Fireworks AI [chat models](/docs/concepts/#chat-models). For detailed documentation of all ChatFireworks features and configurations head to the [API reference](https://python.langchain.com/api_reference/fireworks/chat_models/langchain_fireworks.chat_models.ChatFireworks.html).\n", + "This doc will help you get started with Fireworks AI [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatFireworks features and configurations head to the [API reference](https://python.langchain.com/api_reference/fireworks/chat_models/langchain_fireworks.chat_models.ChatFireworks.html).\n", "\n", "Fireworks AI is an AI inference platform to run and customize models. For a list of all models served by Fireworks see the [Fireworks docs](https://fireworks.ai/models).\n", "\n", diff --git a/docs/docs/integrations/chat/google_generative_ai.ipynb b/docs/docs/integrations/chat/google_generative_ai.ipynb index 08aad6fefb..7492e641d4 100644 --- a/docs/docs/integrations/chat/google_generative_ai.ipynb +++ b/docs/docs/integrations/chat/google_generative_ai.ipynb @@ -17,7 +17,7 @@ "source": [ "# ChatGoogleGenerativeAI\n", "\n", - "This docs will help you get started with Google AI [chat models](/docs/concepts/#chat-models). 
For detailed documentation of all ChatGoogleGenerativeAI features and configurations head to the [API reference](https://python.langchain.com/api_reference/google_genai/chat_models/langchain_google_genai.chat_models.ChatGoogleGenerativeAI.html).\n", + "This docs will help you get started with Google AI [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatGoogleGenerativeAI features and configurations head to the [API reference](https://python.langchain.com/api_reference/google_genai/chat_models/langchain_google_genai.chat_models.ChatGoogleGenerativeAI.html).\n", "\n", "Google AI offers a number of different chat models. For information on the latest models, their features, context windows, etc. head to the [Google AI docs](https://ai.google.dev/gemini-api/docs/models/gemini).\n", "\n", diff --git a/docs/docs/integrations/chat/google_vertex_ai_palm.ipynb b/docs/docs/integrations/chat/google_vertex_ai_palm.ipynb index 7261adba16..4709faa856 100644 --- a/docs/docs/integrations/chat/google_vertex_ai_palm.ipynb +++ b/docs/docs/integrations/chat/google_vertex_ai_palm.ipynb @@ -17,7 +17,7 @@ "source": [ "# ChatVertexAI\n", "\n", - "This page provides a quick overview for getting started with VertexAI [chat models](/docs/concepts/#chat-models). For detailed documentation of all ChatVertexAI features and configurations head to the [API reference](https://python.langchain.com/api_reference/google_vertexai/chat_models/langchain_google_vertexai.chat_models.ChatVertexAI.html).\n", + "This page provides a quick overview for getting started with VertexAI [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatVertexAI features and configurations head to the [API reference](https://python.langchain.com/api_reference/google_vertexai/chat_models/langchain_google_vertexai.chat_models.ChatVertexAI.html).\n", "\n", "ChatVertexAI exposes all foundational models available in Google Cloud, like `gemini-1.5-pro`, `gemini-1.5-flash`, etc. For a full and updated list of available models visit [VertexAI documentation](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/overview).\n", "\n", diff --git a/docs/docs/integrations/chat/huggingface.ipynb b/docs/docs/integrations/chat/huggingface.ipynb index b1ab4e16df..3970b5556e 100644 --- a/docs/docs/integrations/chat/huggingface.ipynb +++ b/docs/docs/integrations/chat/huggingface.ipynb @@ -6,7 +6,7 @@ "source": [ "# ChatHuggingFace\n", "\n", - "This will help you getting started with `langchain_huggingface` [chat models](/docs/concepts/#chat-models). For detailed documentation of all `ChatHuggingFace` features and configurations head to the [API reference](https://python.langchain.com/api_reference/huggingface/chat_models/langchain_huggingface.chat_models.huggingface.ChatHuggingFace.html). For a list of models supported by Hugging Face check out [this page](https://huggingface.co/models).\n", + "This will help you getting started with `langchain_huggingface` [chat models](/docs/concepts/chat_models). For detailed documentation of all `ChatHuggingFace` features and configurations head to the [API reference](https://python.langchain.com/api_reference/huggingface/chat_models/langchain_huggingface.chat_models.huggingface.ChatHuggingFace.html). 
For a list of models supported by Hugging Face check out [this page](https://huggingface.co/models).\n", "\n", "## Overview\n", "### Integration details\n", diff --git a/docs/docs/integrations/chat/index.mdx b/docs/docs/integrations/chat/index.mdx index 5ccb0fa13f..557c11c34a 100644 --- a/docs/docs/integrations/chat/index.mdx +++ b/docs/docs/integrations/chat/index.mdx @@ -6,7 +6,7 @@ keywords: [compatibility] # Chat models -[Chat models](/docs/concepts/#chat-models) are language models that use a sequence of [messages](/docs/concepts/#messages) as inputs and return messages as outputs (as opposed to using plain text). These are generally newer models. +[Chat models](/docs/concepts/chat_models) are language models that use a sequence of [messages](/docs/concepts/messages) as inputs and return messages as outputs (as opposed to using plain text). These are generally newer models. :::info diff --git a/docs/docs/integrations/chat/mistralai.ipynb b/docs/docs/integrations/chat/mistralai.ipynb index 94f4717108..d399f416da 100644 --- a/docs/docs/integrations/chat/mistralai.ipynb +++ b/docs/docs/integrations/chat/mistralai.ipynb @@ -17,7 +17,7 @@ "source": [ "# ChatMistralAI\n", "\n", - "This will help you getting started with Mistral [chat models](/docs/concepts/#chat-models). For detailed documentation of all `ChatMistralAI` features and configurations head to the [API reference](https://python.langchain.com/api_reference/mistralai/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html). The `ChatMistralAI` class is built on top of the [Mistral API](https://docs.mistral.ai/api/). For a list of all the models supported by Mistral, check out [this page](https://docs.mistral.ai/getting-started/models/).\n", + "This will help you getting started with Mistral [chat models](/docs/concepts/chat_models). For detailed documentation of all `ChatMistralAI` features and configurations head to the [API reference](https://python.langchain.com/api_reference/mistralai/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html). The `ChatMistralAI` class is built on top of the [Mistral API](https://docs.mistral.ai/api/). For a list of all the models supported by Mistral, check out [this page](https://docs.mistral.ai/getting-started/models/).\n", "\n", "## Overview\n", "### Integration details\n", diff --git a/docs/docs/integrations/chat/nvidia_ai_endpoints.ipynb b/docs/docs/integrations/chat/nvidia_ai_endpoints.ipynb index 58f5a01f91..322c2955f8 100644 --- a/docs/docs/integrations/chat/nvidia_ai_endpoints.ipynb +++ b/docs/docs/integrations/chat/nvidia_ai_endpoints.ipynb @@ -17,7 +17,7 @@ "source": [ "# ChatNVIDIA\n", "\n", - "This will help you getting started with NVIDIA [chat models](/docs/concepts/#chat-models). For detailed documentation of all `ChatNVIDIA` features and configurations head to the [API reference](https://python.langchain.com/api_reference/nvidia_ai_endpoints/chat_models/langchain_nvidia_ai_endpoints.chat_models.ChatNVIDIA.html).\n", + "This will help you getting started with NVIDIA [chat models](/docs/concepts/chat_models). 
For detailed documentation of all `ChatNVIDIA` features and configurations head to the [API reference](https://python.langchain.com/api_reference/nvidia_ai_endpoints/chat_models/langchain_nvidia_ai_endpoints.chat_models.ChatNVIDIA.html).\n", "\n", "## Overview\n", "The `langchain-nvidia-ai-endpoints` package contains LangChain integrations building applications with models on \n", diff --git a/docs/docs/integrations/chat/oci_data_science.ipynb b/docs/docs/integrations/chat/oci_data_science.ipynb index fdc1d8cda4..b5a8f040d9 100644 --- a/docs/docs/integrations/chat/oci_data_science.ipynb +++ b/docs/docs/integrations/chat/oci_data_science.ipynb @@ -19,7 +19,7 @@ "source": [ "# ChatOCIModelDeployment\n", "\n", - "This will help you getting started with OCIModelDeployment [chat models](/docs/concepts/#chat-models). For detailed documentation of all ChatOCIModelDeployment features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.ChatOCIModelDeployment.html).\n", + "This will help you getting started with OCIModelDeployment [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatOCIModelDeployment features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.ChatOCIModelDeployment.html).\n", "\n", "[OCI Data Science](https://docs.oracle.com/en-us/iaas/data-science/using/home.htm) is a fully managed and serverless platform for data science teams to build, train, and manage machine learning models in the Oracle Cloud Infrastructure. You can use [AI Quick Actions](https://blogs.oracle.com/ai-and-datascience/post/ai-quick-actions-in-oci-data-science) to easily deploy LLMs on [OCI Data Science Model Deployment Service](https://docs.oracle.com/en-us/iaas/data-science/using/model-dep-about.htm). You may choose to deploy the model with popular inference frameworks such as vLLM or TGI. By default, the model deployment endpoint mimics the OpenAI API protocol.\n", "\n", diff --git a/docs/docs/integrations/chat/oci_generative_ai.ipynb b/docs/docs/integrations/chat/oci_generative_ai.ipynb index 563a9dc0c6..d5dc26cf25 100644 --- a/docs/docs/integrations/chat/oci_generative_ai.ipynb +++ b/docs/docs/integrations/chat/oci_generative_ai.ipynb @@ -17,7 +17,7 @@ "source": [ "# ChatOCIGenAI\n", "\n", - "This notebook provides a quick overview for getting started with OCIGenAI [chat models](/docs/concepts/#chat-models). For detailed documentation of all ChatOCIGenAI features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.oci_generative_ai.ChatOCIGenAI.html).\n", + "This notebook provides a quick overview for getting started with OCIGenAI [chat models](/docs/concepts/chat_models). 
For detailed documentation of all ChatOCIGenAI features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.oci_generative_ai.ChatOCIGenAI.html).\n", "\n", "Oracle Cloud Infrastructure (OCI) Generative AI is a fully managed service that provides a set of state-of-the-art, customizable large language models (LLMs) that cover a wide range of use cases, and which is available through a single API.\n", "Using the OCI Generative AI service you can access ready-to-use pretrained models, or create and host your own fine-tuned custom models based on your own data on dedicated AI clusters. Detailed documentation of the service and API is available __[here](https://docs.oracle.com/en-us/iaas/Content/generative-ai/home.htm)__ and __[here](https://docs.oracle.com/en-us/iaas/api/#/en/generative-ai/20231130/)__.\n", diff --git a/docs/docs/integrations/chat/openai.ipynb b/docs/docs/integrations/chat/openai.ipynb index 5d7f5c9424..687f6ddc30 100644 --- a/docs/docs/integrations/chat/openai.ipynb +++ b/docs/docs/integrations/chat/openai.ipynb @@ -17,7 +17,7 @@ "source": [ "# ChatOpenAI\n", "\n", - "This notebook provides a quick overview for getting started with OpenAI [chat models](/docs/concepts/#chat-models). For detailed documentation of all ChatOpenAI features and configurations head to the [API reference](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html).\n", + "This notebook provides a quick overview for getting started with OpenAI [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatOpenAI features and configurations head to the [API reference](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html).\n", "\n", "OpenAI has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [OpenAI docs](https://platform.openai.com/docs/models).\n", "\n", diff --git a/docs/docs/integrations/chat/sambanova.ipynb b/docs/docs/integrations/chat/sambanova.ipynb index 2375cd6e11..7f00e09467 100644 --- a/docs/docs/integrations/chat/sambanova.ipynb +++ b/docs/docs/integrations/chat/sambanova.ipynb @@ -19,7 +19,7 @@ "source": [ "# ChatSambaNovaCloud\n", "\n", - "This will help you getting started with SambaNovaCloud [chat models](/docs/concepts/#chat-models). For detailed documentation of all ChatSambaNovaCloud features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.sambanova.ChatSambaNovaCloud.html).\n", + "This will help you getting started with SambaNovaCloud [chat models](/docs/concepts/chat_models). 
For detailed documentation of all ChatSambaNovaCloud features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.sambanova.ChatSambaNovaCloud.html).\n", "\n", "**[SambaNova](https://sambanova.ai/)'s** [SambaNova Cloud](https://cloud.sambanova.ai/) is a platform for performing inference with open-source models\n", "\n", diff --git a/docs/docs/integrations/chat/sambastudio.ipynb b/docs/docs/integrations/chat/sambastudio.ipynb index 3c7d029ab8..64dd05fd96 100644 --- a/docs/docs/integrations/chat/sambastudio.ipynb +++ b/docs/docs/integrations/chat/sambastudio.ipynb @@ -19,7 +19,7 @@ "source": [ "# ChatSambaStudio\n", "\n", - "This will help you getting started with SambaStudio [chat models](/docs/concepts/#chat-models). For detailed documentation of all ChatStudio features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.sambanova.ChatSambaStudio.html).\n", + "This will help you getting started with SambaStudio [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatStudio features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.sambanova.ChatSambaStudio.html).\n", "\n", "**[SambaNova](https://sambanova.ai/)'s** [SambaStudio](https://docs.sambanova.ai/sambastudio/latest/sambastudio-intro.html) SambaStudio is a rich, GUI-based platform that provides the functionality to train, deploy, and manage models in SambaNova [DataScale](https://sambanova.ai/products/datascale) systems.\n", "\n", diff --git a/docs/docs/integrations/chat/vllm.ipynb b/docs/docs/integrations/chat/vllm.ipynb index 3fa481fe4d..9fc97a650d 100644 --- a/docs/docs/integrations/chat/vllm.ipynb +++ b/docs/docs/integrations/chat/vllm.ipynb @@ -20,7 +20,7 @@ "vLLM can be deployed as a server that mimics the OpenAI API protocol. This allows vLLM to be used as a drop-in replacement for applications using OpenAI API. This server can be queried in the same format as OpenAI API.\n", "\n", "## Overview\n", - "This will help you getting started with vLLM [chat models](/docs/concepts/#chat-models), which leverage the `langchain-openai` package. For detailed documentation of all `ChatOpenAI` features and configurations head to the [API reference](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html).\n", + "This will help you getting started with vLLM [chat models](/docs/concepts/chat_models), which leverage the `langchain-openai` package. For detailed documentation of all `ChatOpenAI` features and configurations head to the [API reference](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html).\n", "\n", "### Integration details\n", "\n", diff --git a/docs/docs/integrations/chat/yi.ipynb b/docs/docs/integrations/chat/yi.ipynb index 27f1915e57..11392981b1 100644 --- a/docs/docs/integrations/chat/yi.ipynb +++ b/docs/docs/integrations/chat/yi.ipynb @@ -6,7 +6,7 @@ "source": [ "# ChatYI\n", "\n", - "This will help you getting started with Yi [chat models](/docs/concepts/#chat-models). 
For detailed documentation of all ChatYi features and configurations head to the [API reference](https://python.langchain.com/api_reference/lanchain_community/chat_models/lanchain_community.chat_models.yi.ChatYi.html).\n", + "This will help you getting started with Yi [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatYi features and configurations head to the [API reference](https://python.langchain.com/api_reference/lanchain_community/chat_models/lanchain_community.chat_models.yi.ChatYi.html).\n", "\n", "[01.AI](https://www.lingyiwanwu.com/en), founded by Dr. Kai-Fu Lee, is a global company at the forefront of AI 2.0. They offer cutting-edge large language models, including the Yi series, which range from 6B to hundreds of billions of parameters. 01.AI also provides multimodal models, an open API platform, and open-source options like Yi-34B/9B/6B and Yi-VL.\n", "\n", diff --git a/docs/docs/integrations/document_loaders/bshtml.ipynb b/docs/docs/integrations/document_loaders/bshtml.ipynb index b23ba8dca4..29e9e8b1d5 100644 --- a/docs/docs/integrations/document_loaders/bshtml.ipynb +++ b/docs/docs/integrations/document_loaders/bshtml.ipynb @@ -7,7 +7,7 @@ "# BSHTMLLoader\n", "\n", "\n", - "This notebook provides a quick overview for getting started with BeautifulSoup4 [document loader](https://python.langchain.com/docs/concepts/#document-loaders). For detailed documentation of all __ModuleName__Loader features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.html_bs.BSHTMLLoader.html).\n", + "This notebook provides a quick overview for getting started with BeautifulSoup4 [document loader](https://python.langchain.com/docs/concepts/document_loaders). For detailed documentation of all __ModuleName__Loader features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.html_bs.BSHTMLLoader.html).\n", "\n", "\n", "## Overview\n", diff --git a/docs/docs/integrations/document_loaders/json.ipynb b/docs/docs/integrations/document_loaders/json.ipynb index 299bc1aa75..e3a11d0ace 100644 --- a/docs/docs/integrations/document_loaders/json.ipynb +++ b/docs/docs/integrations/document_loaders/json.ipynb @@ -6,7 +6,7 @@ "source": [ "# JSONLoader\n", "\n", - "This notebook provides a quick overview for getting started with JSON [document loader](https://python.langchain.com/docs/concepts/#document-loaders). For detailed documentation of all JSONLoader features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.json_loader.JSONLoader.html).\n", + "This notebook provides a quick overview for getting started with JSON [document loader](https://python.langchain.com/docs/concepts/document_loaders). 
For detailed documentation of all JSONLoader features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.json_loader.JSONLoader.html).\n", "\n", "- TODO: Add any other relevant links, like information about underlying API, etc.\n", "\n", diff --git a/docs/docs/integrations/document_loaders/langsmith.ipynb b/docs/docs/integrations/document_loaders/langsmith.ipynb index 9a8dc2aee3..54f791602d 100644 --- a/docs/docs/integrations/document_loaders/langsmith.ipynb +++ b/docs/docs/integrations/document_loaders/langsmith.ipynb @@ -15,7 +15,7 @@ "source": [ "# LangSmithLoader\n", "\n", - "This notebook provides a quick overview for getting started with the LangSmith [document loader](https://python.langchain.com/docs/concepts/#document-loaders). For detailed documentation of all LangSmithLoader features and configurations head to the [API reference](https://python.langchain.com/api_reference/core/document_loaders/langchain_core.document_loaders.langsmith.LangSmithLoader.html).\n", + "This notebook provides a quick overview for getting started with the LangSmith [document loader](https://python.langchain.com/docs/concepts/document_loaders). For detailed documentation of all LangSmithLoader features and configurations head to the [API reference](https://python.langchain.com/api_reference/core/document_loaders/langchain_core.document_loaders.langsmith.LangSmithLoader.html).\n", "\n", "## Overview\n", "### Integration details\n", diff --git a/docs/docs/integrations/document_loaders/pypdfium2.ipynb b/docs/docs/integrations/document_loaders/pypdfium2.ipynb index 48ca5ec023..24740de99a 100644 --- a/docs/docs/integrations/document_loaders/pypdfium2.ipynb +++ b/docs/docs/integrations/document_loaders/pypdfium2.ipynb @@ -7,7 +7,7 @@ "# PyPDFium2Loader\n", "\n", "\n", - "This notebook provides a quick overview for getting started with PyPDFium2 [document loader](https://python.langchain.com/docs/concepts/#document-loaders). For detailed documentation of all __ModuleName__Loader features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyPDFium2Loader.html).\n", + "This notebook provides a quick overview for getting started with PyPDFium2 [document loader](https://python.langchain.com/docs/concepts/document_loaders). For detailed documentation of all __ModuleName__Loader features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyPDFium2Loader.html).\n", "\n", "## Overview\n", "### Integration details\n", diff --git a/docs/docs/integrations/document_loaders/pypdfloader.ipynb b/docs/docs/integrations/document_loaders/pypdfloader.ipynb index 5085aa1c6c..b0cc79d92d 100644 --- a/docs/docs/integrations/document_loaders/pypdfloader.ipynb +++ b/docs/docs/integrations/document_loaders/pypdfloader.ipynb @@ -6,7 +6,7 @@ "source": [ "# PyPDFLoader\n", "\n", - "This notebook provides a quick overview for getting started with `PyPDF` [document loader](https://python.langchain.com/docs/concepts/#document-loaders). 
For detailed documentation of all DocumentLoader features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyPDFLoader.html).\n", + "This notebook provides a quick overview for getting started with `PyPDF` [document loader](https://python.langchain.com/docs/concepts/document_loaders). For detailed documentation of all DocumentLoader features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyPDFLoader.html).\n", "\n", "\n", "## Overview\n", diff --git a/docs/docs/integrations/document_loaders/unstructured_file.ipynb b/docs/docs/integrations/document_loaders/unstructured_file.ipynb index 02ff6a7c79..89c79eedbe 100644 --- a/docs/docs/integrations/document_loaders/unstructured_file.ipynb +++ b/docs/docs/integrations/document_loaders/unstructured_file.ipynb @@ -7,7 +7,7 @@ "source": [ "# Unstructured\n", "\n", - "This notebook covers how to use `Unstructured` [document loader](https://python.langchain.com/docs/concepts/#document-loaders) to load files of many types. `Unstructured` currently supports loading of text files, powerpoints, html, pdfs, images, and more.\n", + "This notebook covers how to use `Unstructured` [document loader](https://python.langchain.com/docs/concepts/document_loaders) to load files of many types. `Unstructured` currently supports loading of text files, powerpoints, html, pdfs, images, and more.\n", "\n", "Please see [this guide](../../integrations/providers/unstructured.mdx) for more instructions on setting up Unstructured locally, including setting up required system dependencies.\n", "\n", diff --git a/docs/docs/integrations/document_loaders/unstructured_markdown.ipynb b/docs/docs/integrations/document_loaders/unstructured_markdown.ipynb index 0645950990..3bd98c0bed 100644 --- a/docs/docs/integrations/document_loaders/unstructured_markdown.ipynb +++ b/docs/docs/integrations/document_loaders/unstructured_markdown.ipynb @@ -6,7 +6,7 @@ "source": [ "# UnstructuredMarkdownLoader\n", "\n", - "This notebook provides a quick overview for getting started with UnstructuredMarkdown [document loader](https://python.langchain.com/docs/concepts/#document-loaders). For detailed documentation of all __ModuleName__Loader features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.markdown.UnstructuredMarkdownLoader.html).\n", + "This notebook provides a quick overview for getting started with UnstructuredMarkdown [document loader](https://python.langchain.com/docs/concepts/document_loaders). 
For detailed documentation of all UnstructuredMarkdownLoader features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.markdown.UnstructuredMarkdownLoader.html).\n", "\n", "## Overview\n", "### Integration details\n", diff --git a/docs/docs/integrations/document_loaders/xml.ipynb b/docs/docs/integrations/document_loaders/xml.ipynb index 0e936b07ba..39fb2fb507 100644 --- a/docs/docs/integrations/document_loaders/xml.ipynb +++ b/docs/docs/integrations/document_loaders/xml.ipynb @@ -7,7 +7,7 @@ "source": [ "# UnstructuredXMLLoader\n", "\n", - "This notebook provides a quick overview for getting started with UnstructuredXMLLoader [document loader](https://python.langchain.com/docs/concepts/#document-loaders). The `UnstructuredXMLLoader` is used to load `XML` files. The loader works with `.xml` files. The page content will be the text extracted from the XML tags.\n", + "This notebook provides a quick overview for getting started with UnstructuredXMLLoader [document loader](https://python.langchain.com/docs/concepts/document_loaders). The `UnstructuredXMLLoader` is used to load `XML` files. The loader works with `.xml` files. The page content will be the text extracted from the XML tags.\n", "\n", "\n", "## Overview\n", diff --git a/docs/docs/integrations/llms/anthropic.ipynb b/docs/docs/integrations/llms/anthropic.ipynb index 0f9c74c225..bdab46365c 100644 --- a/docs/docs/integrations/llms/anthropic.ipynb +++ b/docs/docs/integrations/llms/anthropic.ipynb @@ -23,7 +23,7 @@ "# AnthropicLLM\n", "\n", ":::caution\n", - "You are currently on a page documenting the use of Anthropic legacy Claude 2 models as [text completion models](/docs/concepts/#llms). The latest and most popular Anthropic models are [chat completion models](/docs/concepts/#chat-models), and the text completion models have been deprecated.\n", + "You are currently on a page documenting the use of Anthropic legacy Claude 2 models as [text completion models](/docs/concepts/text_llms). The latest and most popular Anthropic models are [chat completion models](/docs/concepts/chat_models), and the text completion models have been deprecated.\n", "\n", "You are probably looking for [this page instead](/docs/integrations/chat/anthropic/).\n", ":::\n", diff --git a/docs/docs/integrations/llms/azure_openai.ipynb b/docs/docs/integrations/llms/azure_openai.ipynb index 44af56a2f2..17a6d0605e 100644 --- a/docs/docs/integrations/llms/azure_openai.ipynb +++ b/docs/docs/integrations/llms/azure_openai.ipynb @@ -8,7 +8,7 @@ "# Azure OpenAI\n", "\n", ":::caution\n", - "You are currently on a page documenting the use of Azure OpenAI [text completion models](/docs/concepts/#llms). The latest and most popular Azure OpenAI models are [chat completion models](/docs/concepts/#chat-models).\n", + "You are currently on a page documenting the use of Azure OpenAI [text completion models](/docs/concepts/text_llms). 
The latest and most popular Azure OpenAI models are [chat completion models](/docs/concepts/chat_models).\n", "\n", "Unless you are specifically using `gpt-3.5-turbo-instruct`, you are probably looking for [this page instead](/docs/integrations/chat/azure_chat_openai/).\n", ":::\n", diff --git a/docs/docs/integrations/llms/bedrock.ipynb b/docs/docs/integrations/llms/bedrock.ipynb index 70ecf569dc..86be8dca20 100644 --- a/docs/docs/integrations/llms/bedrock.ipynb +++ b/docs/docs/integrations/llms/bedrock.ipynb @@ -12,7 +12,7 @@ "metadata": {}, "source": [ ":::caution\n", - "You are currently on a page documenting the use of Amazon Bedrock models as [text completion models](/docs/concepts/#llms). Many popular models available on Bedrock are [chat completion models](/docs/concepts/#chat-models).\n", + "You are currently on a page documenting the use of Amazon Bedrock models as [text completion models](/docs/concepts/text_llms). Many popular models available on Bedrock are [chat completion models](/docs/concepts/chat_models).\n", "\n", "You may be looking for [this page instead](/docs/integrations/chat/bedrock/).\n", ":::\n", diff --git a/docs/docs/integrations/llms/cohere.ipynb b/docs/docs/integrations/llms/cohere.ipynb index 566667cca4..9e294f275c 100644 --- a/docs/docs/integrations/llms/cohere.ipynb +++ b/docs/docs/integrations/llms/cohere.ipynb @@ -8,7 +8,7 @@ "# Cohere\n", "\n", ":::caution\n", - "You are currently on a page documenting the use of Cohere models as [text completion models](/docs/concepts/#llms). Many popular Cohere models are [chat completion models](/docs/concepts/#chat-models).\n", + "You are currently on a page documenting the use of Cohere models as [text completion models](/docs/concepts/text_llms). Many popular Cohere models are [chat completion models](/docs/concepts/chat_models).\n", "\n", "You may be looking for [this page instead](/docs/integrations/chat/cohere/).\n", ":::\n", diff --git a/docs/docs/integrations/llms/databricks.ipynb b/docs/docs/integrations/llms/databricks.ipynb index a08ae1516f..7cc805bde2 100644 --- a/docs/docs/integrations/llms/databricks.ipynb +++ b/docs/docs/integrations/llms/databricks.ipynb @@ -10,7 +10,7 @@ "> [Databricks](https://www.databricks.com/) Lakehouse Platform unifies data, analytics, and AI on one platform.\n", "\n", "\n", - "This notebook provides a quick overview for getting started with Databricks [LLM models](https://python.langchain.com/docs/concepts/#llms). For detailed documentation of all features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/llms/langchain_community.llms.databricks.Databricks.html).\n", + "This notebook provides a quick overview for getting started with Databricks [LLM models](https://python.langchain.com/docs/concepts/text_llms). For detailed documentation of all features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/llms/langchain_community.llms.databricks.Databricks.html).\n", "\n", "## Overview\n", "\n", diff --git a/docs/docs/integrations/llms/fireworks.ipynb b/docs/docs/integrations/llms/fireworks.ipynb index 16e4ca4b37..3dd7fc4483 100644 --- a/docs/docs/integrations/llms/fireworks.ipynb +++ b/docs/docs/integrations/llms/fireworks.ipynb @@ -8,7 +8,7 @@ "# Fireworks\n", "\n", ":::caution\n", - "You are currently on a page documenting the use of Fireworks models as [text completion models](/docs/concepts/#llms). 
Many popular Fireworks models are [chat completion models](/docs/concepts/#chat-models).\n", + "You are currently on a page documenting the use of Fireworks models as [text completion models](/docs/concepts/text_llms). Many popular Fireworks models are [chat completion models](/docs/concepts/chat_models).\n", "\n", "You may be looking for [this page instead](/docs/integrations/chat/fireworks/).\n", ":::\n", diff --git a/docs/docs/integrations/llms/google_ai.ipynb b/docs/docs/integrations/llms/google_ai.ipynb index a3ebe9dfec..0ba3e56e0a 100644 --- a/docs/docs/integrations/llms/google_ai.ipynb +++ b/docs/docs/integrations/llms/google_ai.ipynb @@ -26,7 +26,7 @@ "metadata": {}, "source": [ ":::caution\n", - "You are currently on a page documenting the use of Google models as [text completion models](/docs/concepts/#llms). Many popular Google models are [chat completion models](/docs/concepts/#chat-models).\n", + "You are currently on a page documenting the use of Google models as [text completion models](/docs/concepts/text_llms). Many popular Google models are [chat completion models](/docs/concepts/chat_models).\n", "\n", "You may be looking for [this page instead](/docs/integrations/chat/google_generative_ai/).\n", ":::\n", diff --git a/docs/docs/integrations/llms/google_vertex_ai_palm.ipynb b/docs/docs/integrations/llms/google_vertex_ai_palm.ipynb index 83911014e4..8322e6d780 100644 --- a/docs/docs/integrations/llms/google_vertex_ai_palm.ipynb +++ b/docs/docs/integrations/llms/google_vertex_ai_palm.ipynb @@ -16,7 +16,7 @@ "# Google Cloud Vertex AI\n", "\n", ":::caution\n", - "You are currently on a page documenting the use of Google Vertex [text completion models](/docs/concepts/#llms). Many Google models are [chat completion models](/docs/concepts/#chat-models).\n", + "You are currently on a page documenting the use of Google Vertex [text completion models](/docs/concepts/text_llms). Many Google models are [chat completion models](/docs/concepts/chat_models).\n", "\n", "You may be looking for [this page instead](/docs/integrations/chat/google_vertex_ai_palm/).\n", ":::\n", diff --git a/docs/docs/integrations/llms/index.mdx b/docs/docs/integrations/llms/index.mdx index 8af9679143..b902bf7ea7 100644 --- a/docs/docs/integrations/llms/index.mdx +++ b/docs/docs/integrations/llms/index.mdx @@ -7,12 +7,12 @@ keywords: [compatibility] # LLMs :::caution -You are currently on a page documenting the use of [text completion models](/docs/concepts/#llms). Many of the latest and most popular models are [chat completion models](/docs/concepts/#chat-models). +You are currently on a page documenting the use of [text completion models](/docs/concepts/text_llms). Many of the latest and most popular models are [chat completion models](/docs/concepts/chat_models). Unless you are specifically using more advanced prompting techniques, you are probably looking for [this page instead](/docs/integrations/chat/). ::: -[LLMs](/docs/concepts/#llms) are language models that take a string as input and return a string as output. +[LLMs](/docs/concepts/text_llms) are language models that take a string as input and return a string as output. 
:::info diff --git a/docs/docs/integrations/llms/nvidia_ai_endpoints.ipynb b/docs/docs/integrations/llms/nvidia_ai_endpoints.ipynb index 190cf8db74..7c3df140d3 100644 --- a/docs/docs/integrations/llms/nvidia_ai_endpoints.ipynb +++ b/docs/docs/integrations/llms/nvidia_ai_endpoints.ipynb @@ -6,7 +6,7 @@ "source": [ "# NVIDIA\n", "\n", - "This will help you getting started with NVIDIA [models](/docs/concepts/#llms). For detailed documentation of all `NVIDIA` features and configurations head to the [API reference](https://python.langchain.com/api_reference/nvidia_ai_endpoints/llms/langchain_nvidia_ai_endpoints.chat_models.NVIDIA.html).\n", + "This will help you get started with NVIDIA [models](/docs/concepts/text_llms). For detailed documentation of all `NVIDIA` features and configurations head to the [API reference](https://python.langchain.com/api_reference/nvidia_ai_endpoints/llms/langchain_nvidia_ai_endpoints.chat_models.NVIDIA.html).\n", "\n", "## Overview\n", "The `langchain-nvidia-ai-endpoints` package contains LangChain integrations building applications with models on \n", diff --git a/docs/docs/integrations/llms/ollama.ipynb b/docs/docs/integrations/llms/ollama.ipynb index 702bd912db..f7d18643da 100644 --- a/docs/docs/integrations/llms/ollama.ipynb +++ b/docs/docs/integrations/llms/ollama.ipynb @@ -18,7 +18,7 @@ "# OllamaLLM\n", "\n", ":::caution\n", - "You are currently on a page documenting the use of Ollama models as [text completion models](/docs/concepts/#llms). Many popular Ollama models are [chat completion models](/docs/concepts/#chat-models).\n", + "You are currently on a page documenting the use of Ollama models as [text completion models](/docs/concepts/text_llms). Many popular Ollama models are [chat completion models](/docs/concepts/chat_models).\n", "\n", "You may be looking for [this page instead](/docs/integrations/chat/ollama/).\n", ":::\n", diff --git a/docs/docs/integrations/llms/openai.ipynb b/docs/docs/integrations/llms/openai.ipynb index 58de2dec49..e24d5a5fa8 100644 --- a/docs/docs/integrations/llms/openai.ipynb +++ b/docs/docs/integrations/llms/openai.ipynb @@ -8,7 +8,7 @@ "# OpenAI\n", "\n", ":::caution\n", - "You are currently on a page documenting the use of OpenAI [text completion models](/docs/concepts/#llms). The latest and most popular OpenAI models are [chat completion models](/docs/concepts/#chat-models).\n", + "You are currently on a page documenting the use of OpenAI [text completion models](/docs/concepts/text_llms). The latest and most popular OpenAI models are [chat completion models](/docs/concepts/chat_models).\n", "\n", "Unless you are specifically using `gpt-3.5-turbo-instruct`, you are probably looking for [this page instead](/docs/integrations/chat/openai/).\n", ":::\n", diff --git a/docs/docs/integrations/llms/together.ipynb b/docs/docs/integrations/llms/together.ipynb index 19c306baa8..b98289f70c 100644 --- a/docs/docs/integrations/llms/together.ipynb +++ b/docs/docs/integrations/llms/together.ipynb @@ -8,7 +8,7 @@ "# Together AI\n", "\n", ":::caution\n", - "You are currently on a page documenting the use of Together AI models as [text completion models](/docs/concepts/#llms). Many popular Together AI models are [chat completion models](/docs/concepts/#chat-models).\n", + "You are currently on a page documenting the use of Together AI models as [text completion models](/docs/concepts/text_llms). 
Many popular Together AI models are [chat completion models](/docs/concepts/chat_models).\n", "\n", "You may be looking for [this page instead](/docs/integrations/chat/together/).\n", ":::\n", diff --git a/docs/docs/integrations/providers/databricks.md b/docs/docs/integrations/providers/databricks.md index 7cd8273874..0acb3287b8 100644 --- a/docs/docs/integrations/providers/databricks.md +++ b/docs/docs/integrations/providers/databricks.md @@ -39,7 +39,7 @@ LLM `Databricks` is an LLM class to access completion endpoints hosted on Databricks. :::caution -Text completion models have been deprecated and the latest and most popular models are [chat completion models](/docs/concepts/#chat-models). Use `ChatDatabricks` chat model instead to use those models and advanced features such as tool calling. +Text completion models have been deprecated and the latest and most popular models are [chat completion models](/docs/concepts/chat_models). Use `ChatDatabricks` chat model instead to use those models and advanced features such as tool calling. ::: ``` diff --git a/docs/docs/integrations/providers/mlflow_tracking.ipynb b/docs/docs/integrations/providers/mlflow_tracking.ipynb index b30f285064..c21161aaf6 100644 --- a/docs/docs/integrations/providers/mlflow_tracking.ipynb +++ b/docs/docs/integrations/providers/mlflow_tracking.ipynb @@ -523,7 +523,7 @@ "metadata": {}, "source": [ "#### Where to Pass the Callback\n", - " LangChain supports two ways of passing callback instances: (1) Request time callbacks - pass them to the `invoke` method or bind with `with_config()` (2) Constructor callbacks - set them in the chain constructor. When using the `MlflowLangchainTracer` as a callback, you **must use request time callbacks**. Setting it in the constructor instead will only apply the callback to the top-level object, preventing it from being propagated to child components, resulting in incomplete traces. For more information on this behavior, please refer to [Callbacks Documentation](https://python.langchain.com/docs/concepts/#callbacks) for more details.\n", + " LangChain supports two ways of passing callback instances: (1) Request time callbacks - pass them to the `invoke` method or bind with `with_config()` (2) Constructor callbacks - set them in the chain constructor. When using the `MlflowLangchainTracer` as a callback, you **must use request time callbacks**. Setting it in the constructor instead will only apply the callback to the top-level object, preventing it from being propagated to child components, resulting in incomplete traces. For more information on this behavior, please refer to the [Callbacks Documentation](https://python.langchain.com/docs/concepts/callbacks).\n", "\n", "```python\n", "# OK\n", diff --git a/docs/docs/integrations/retrievers/azure_ai_search.ipynb b/docs/docs/integrations/retrievers/azure_ai_search.ipynb index 6b5ce1f360..e5a9bcec2a 100644 --- a/docs/docs/integrations/retrievers/azure_ai_search.ipynb +++ b/docs/docs/integrations/retrievers/azure_ai_search.ipynb @@ -21,7 +21,7 @@ "\n", "`AzureAISearchRetriever` is an integration module that returns documents from an unstructured query. It's based on the BaseRetriever class and it targets the 2023-11-01 stable REST API version of Azure AI Search, which means it supports vector indexing and queries.\n", "\n", - "This guide will help you getting started with the Azure AI Search [retriever](/docs/concepts/#retrievers). 
For detailed documentation of all `AzureAISearchRetriever` features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/retrievers/langchain_community.retrievers.azure_ai_search.AzureAISearchRetriever.html).\n", + "This guide will help you get started with the Azure AI Search [retriever](/docs/concepts/retrievers). For detailed documentation of all `AzureAISearchRetriever` features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/retrievers/langchain_community.retrievers.azure_ai_search.AzureAISearchRetriever.html).\n", "\n", "`AzureAISearchRetriever` replaces `AzureCognitiveSearchRetriever`, which will soon be deprecated. We recommend switching to the newer version that's based on the most recent stable version of the search APIs.\n", "\n", diff --git a/docs/docs/integrations/retrievers/bedrock.ipynb b/docs/docs/integrations/retrievers/bedrock.ipynb index b674b1175b..42e0702de8 100644 --- a/docs/docs/integrations/retrievers/bedrock.ipynb +++ b/docs/docs/integrations/retrievers/bedrock.ipynb @@ -17,7 +17,7 @@ "source": [ "# Bedrock (Knowledge Bases) Retriever\n", "\n", - "This guide will help you getting started with the AWS Knowledge Bases [retriever](/docs/concepts/#retrievers).\n", + "This guide will help you get started with the AWS Knowledge Bases [retriever](/docs/concepts/retrievers).\n", "\n", "[Knowledge Bases for Amazon Bedrock](https://aws.amazon.com/bedrock/knowledge-bases/) is an Amazon Web Services (AWS) offering which lets you quickly build RAG applications by using your private data to customize FM response.\n", "\n", diff --git a/docs/docs/integrations/retrievers/box.ipynb b/docs/docs/integrations/retrievers/box.ipynb index a4abf132ac..af2dd8bdd1 100644 --- a/docs/docs/integrations/retrievers/box.ipynb +++ b/docs/docs/integrations/retrievers/box.ipynb @@ -17,7 +17,7 @@ "source": [ "# BoxRetriever\n", "\n", - "This will help you getting started with the Box [retriever](/docs/concepts/#retrievers). For detailed documentation of all BoxRetriever features and configurations head to the [API reference](https://python.langchain.com/api_reference/box/retrievers/langchain_box.retrievers.box.BoxRetriever.html).\n", + "This will help you get started with the Box [retriever](/docs/concepts/retrievers). For detailed documentation of all BoxRetriever features and configurations head to the [API reference](https://python.langchain.com/api_reference/box/retrievers/langchain_box.retrievers.box.BoxRetriever.html).\n", "\n", "# Overview\n", "\n", diff --git a/docs/docs/integrations/retrievers/elasticsearch_retriever.ipynb b/docs/docs/integrations/retrievers/elasticsearch_retriever.ipynb index dafc757e23..2bf4dae5ba 100644 --- a/docs/docs/integrations/retrievers/elasticsearch_retriever.ipynb +++ b/docs/docs/integrations/retrievers/elasticsearch_retriever.ipynb @@ -21,7 +21,7 @@ "\n", "The `ElasticsearchRetriever` is a generic wrapper to enable flexible access to all `Elasticsearch` features through the [Query DSL](https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html). For most use cases the other classes (`ElasticsearchStore`, `ElasticsearchEmbeddings`, etc.) should suffice, but if they don't you can use `ElasticsearchRetriever`.\n", "\n", - "This guide will help you getting started with the Elasticsearch [retriever](/docs/concepts/#retrievers). 
For detailed documentation of all `ElasticsearchRetriever` features and configurations head to the [API reference](https://python.langchain.com/api_reference/elasticsearch/retrievers/langchain_elasticsearch.retrievers.ElasticsearchRetriever.html).\n", + "This guide will help you get started with the Elasticsearch [retriever](/docs/concepts/retrievers). For detailed documentation of all `ElasticsearchRetriever` features and configurations head to the [API reference](https://python.langchain.com/api_reference/elasticsearch/retrievers/langchain_elasticsearch.retrievers.ElasticsearchRetriever.html).\n", "\n", "### Integration details\n", "\n", @@ -66,7 +66,7 @@ "source": [ "### Installation\n", "\n", - "This retriever lives in the `langchain-elasticsearch` package. For demonstration purposes, we will also install `langchain-community` to generate text [embeddings](/docs/concepts/#embedding-models)." + "This retriever lives in the `langchain-elasticsearch` package. For demonstration purposes, we will also install `langchain-community` to generate text [embeddings](/docs/concepts/embedding_models)." ] }, { @@ -613,7 +613,7 @@ "source": [ "## Usage\n", "\n", - "Following the above examples, we use `.invoke` to issue a single query. Because retrievers are Runnables, we can use any method in the [Runnable interface](/docs/concepts/#runnable-interface), such as `.batch`, as well." + "Following the above examples, we use `.invoke` to issue a single query. Because retrievers are Runnables, we can use any method in the [Runnable interface](/docs/concepts/runnables), such as `.batch`, as well." ] }, { diff --git a/docs/docs/integrations/retrievers/google_vertex_ai_search.ipynb b/docs/docs/integrations/retrievers/google_vertex_ai_search.ipynb index 60352ca64d..dc19c20540 100644 --- a/docs/docs/integrations/retrievers/google_vertex_ai_search.ipynb +++ b/docs/docs/integrations/retrievers/google_vertex_ai_search.ipynb @@ -21,7 +21,7 @@ "\n", ">`Vertex AI Search` is available in the `Google Cloud Console` and via an API for enterprise workflow integration.\n", "\n", - "This notebook demonstrates how to configure `Vertex AI Search` and use the Vertex AI Search [retriever](/docs/concepts/#retrievers). The Vertex AI Search retriever encapsulates the [Python client library](https://cloud.google.com/generative-ai-app-builder/docs/libraries#client-libraries-install-python) and uses it to access the [Search Service API](https://cloud.google.com/python/docs/reference/discoveryengine/latest/google.cloud.discoveryengine_v1beta.services.search_service).\n", + "This notebook demonstrates how to configure `Vertex AI Search` and use the Vertex AI Search [retriever](/docs/concepts/retrievers). The Vertex AI Search retriever encapsulates the [Python client library](https://cloud.google.com/generative-ai-app-builder/docs/libraries#client-libraries-install-python) and uses it to access the [Search Service API](https://cloud.google.com/python/docs/reference/discoveryengine/latest/google.cloud.discoveryengine_v1beta.services.search_service).\n", "\n", "For detailed documentation of all `VertexAISearchRetriever` features and configurations head to the [API reference](https://python.langchain.com/api_reference/google_community/vertex_ai_search/langchain_google_community.vertex_ai_search.VertexAISearchRetriever.html).\n", "\n", @@ -374,7 +374,7 @@ "source": [ "## Usage\n", "\n", - "Following the above examples, we use `.invoke` to issue a single query. 
Because retrievers are Runnables, we can use any method in the [Runnable interface](/docs/concepts/#runnable-interface), such as `.batch`, as well." + "Following the above examples, we use `.invoke` to issue a single query. Because retrievers are Runnables, we can use any method in the [Runnable interface](/docs/concepts/runnables), such as `.batch`, as well." ] }, { diff --git a/docs/docs/integrations/retrievers/index.mdx b/docs/docs/integrations/retrievers/index.mdx index 6d749441e6..dee97cbb13 100644 --- a/docs/docs/integrations/retrievers/index.mdx +++ b/docs/docs/integrations/retrievers/index.mdx @@ -7,7 +7,7 @@ import {CategoryTable, IndexTable} from '@theme/FeatureTables' # Retrievers -A [retriever](/docs/concepts/#retrievers) is an interface that returns documents given an unstructured query. +A [retriever](/docs/concepts/retrievers) is an interface that returns documents given an unstructured query. It is more general than a vector store. A retriever does not need to be able to store documents, only to return (or retrieve) them. Retrievers can be created from vector stores, but are also broad enough to include [Wikipedia search](/docs/integrations/retrievers/wikipedia/) and [Amazon Kendra](/docs/integrations/retrievers/amazon_kendra_retriever/). diff --git a/docs/docs/integrations/retrievers/milvus_hybrid_search.ipynb b/docs/docs/integrations/retrievers/milvus_hybrid_search.ipynb index fa498e2abe..11e91c78fd 100644 --- a/docs/docs/integrations/retrievers/milvus_hybrid_search.ipynb +++ b/docs/docs/integrations/retrievers/milvus_hybrid_search.ipynb @@ -17,7 +17,7 @@ "\n", "> [Milvus](https://milvus.io/docs) is an open-source vector database built to power embedding similarity search and AI applications. Milvus makes unstructured data search more accessible, and provides a consistent user experience regardless of the deployment environment.\n", "\n", - "This will help you getting started with the Milvus Hybrid Search [retriever](/docs/concepts/#retrievers), which combines the strengths of both dense and sparse vector search. For detailed documentation of all `MilvusCollectionHybridSearchRetriever` features and configurations head to the [API reference](https://python.langchain.com/api_reference/milvus/retrievers/langchain_milvus.retrievers.milvus_hybrid_search.MilvusCollectionHybridSearchRetriever.html).\n", + "This will help you get started with the Milvus Hybrid Search [retriever](/docs/concepts/retrievers), which combines the strengths of both dense and sparse vector search. For detailed documentation of all `MilvusCollectionHybridSearchRetriever` features and configurations head to the [API reference](https://python.langchain.com/api_reference/milvus/retrievers/langchain_milvus.retrievers.milvus_hybrid_search.MilvusCollectionHybridSearchRetriever.html).\n", "\n", "See also the Milvus Multi-Vector Search [docs](https://milvus.io/docs/multi-vector-search.md).\n", "\n", diff --git a/docs/docs/integrations/stores/astradb.ipynb b/docs/docs/integrations/stores/astradb.ipynb index c38cf5b7a0..ce78e19bc9 100644 --- a/docs/docs/integrations/stores/astradb.ipynb +++ b/docs/docs/integrations/stores/astradb.ipynb @@ -19,7 +19,7 @@ "source": [ "# AstraDBByteStore\n", "\n", - "This will help you get started with Astra DB [key-value stores](/docs/concepts/#key-value-stores). 
For detailed documentation of all `AstraDBByteStore` features and configurations head to the [API reference](https://python.langchain.com/api_reference/astradb/storage/langchain_astradb.storage.AstraDBByteStore.html).\n", + "This will help you get started with Astra DB [key-value stores](/docs/concepts/key_value_stores). For detailed documentation of all `AstraDBByteStore` features and configurations head to the [API reference](https://python.langchain.com/api_reference/astradb/storage/langchain_astradb.storage.AstraDBByteStore.html).\n", "\n", "## Overview\n", "\n", diff --git a/docs/docs/integrations/stores/cassandra.ipynb b/docs/docs/integrations/stores/cassandra.ipynb index 9c31d12864..309f6ea61e 100644 --- a/docs/docs/integrations/stores/cassandra.ipynb +++ b/docs/docs/integrations/stores/cassandra.ipynb @@ -19,7 +19,7 @@ "source": [ "# CassandraByteStore\n", "\n", - "This will help you get started with Cassandra [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all `CassandraByteStore` features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/storage/langchain_community.storage.cassandra.CassandraByteStore.html).\n", + "This will help you get started with Cassandra [key-value stores](/docs/concepts/key_value_stores). For detailed documentation of all `CassandraByteStore` features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/storage/langchain_community.storage.cassandra.CassandraByteStore.html).\n", "\n", "## Overview\n", "\n", diff --git a/docs/docs/integrations/stores/elasticsearch.ipynb b/docs/docs/integrations/stores/elasticsearch.ipynb index 370886867e..e7578ad362 100644 --- a/docs/docs/integrations/stores/elasticsearch.ipynb +++ b/docs/docs/integrations/stores/elasticsearch.ipynb @@ -19,7 +19,7 @@ "source": [ "# ElasticsearchEmbeddingsCache\n", "\n", - "This will help you get started with Elasticsearch [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all `ElasticsearchEmbeddingsCache` features and configurations head to the [API reference](https://python.langchain.com/api_reference/elasticsearch/cache/langchain_elasticsearch.cache.ElasticsearchEmbeddingsCache.html).\n", + "This will help you get started with Elasticsearch [key-value stores](/docs/concepts/key_value_stores). For detailed documentation of all `ElasticsearchEmbeddingsCache` features and configurations head to the [API reference](https://python.langchain.com/api_reference/elasticsearch/cache/langchain_elasticsearch.cache.ElasticsearchEmbeddingsCache.html).\n", "\n", "## Overview\n", "\n", diff --git a/docs/docs/integrations/stores/file_system.ipynb b/docs/docs/integrations/stores/file_system.ipynb index 1a0baa82dd..4e9c211ea3 100644 --- a/docs/docs/integrations/stores/file_system.ipynb +++ b/docs/docs/integrations/stores/file_system.ipynb @@ -19,7 +19,7 @@ "source": [ "# LocalFileStore\n", "\n", - "This will help you get started with local filesystem [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all LocalFileStore features and configurations head to the [API reference](https://python.langchain.com/api_reference/langchain/storage/langchain.storage.file_system.LocalFileStore.html).\n", + "This will help you get started with local filesystem [key-value stores](/docs/concepts/key_value_stores). 
For detailed documentation of all LocalFileStore features and configurations head to the [API reference](https://python.langchain.com/api_reference/langchain/storage/langchain.storage.file_system.LocalFileStore.html).\n", "\n", "## Overview\n", "\n", diff --git a/docs/docs/integrations/stores/in_memory.ipynb b/docs/docs/integrations/stores/in_memory.ipynb index 7f81222dab..b253249e97 100644 --- a/docs/docs/integrations/stores/in_memory.ipynb +++ b/docs/docs/integrations/stores/in_memory.ipynb @@ -19,7 +19,7 @@ "source": [ "# InMemoryByteStore\n", "\n", - "This guide will help you get started with in-memory [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all `InMemoryByteStore` features and configurations head to the [API reference](https://python.langchain.com/api_reference/core/stores/langchain_core.stores.InMemoryByteStore.html).\n", + "This guide will help you get started with in-memory [key-value stores](/docs/concepts/key_value_stores). For detailed documentation of all `InMemoryByteStore` features and configurations head to the [API reference](https://python.langchain.com/api_reference/core/stores/langchain_core.stores.InMemoryByteStore.html).\n", "\n", "## Overview\n", "\n", diff --git a/docs/docs/integrations/stores/redis.ipynb b/docs/docs/integrations/stores/redis.ipynb index 1a90576acf..8eaaf5a3bb 100644 --- a/docs/docs/integrations/stores/redis.ipynb +++ b/docs/docs/integrations/stores/redis.ipynb @@ -19,7 +19,7 @@ "source": [ "# RedisStore\n", "\n", - "This will help you get started with Redis [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all `RedisStore` features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/storage/langchain_community.storage.redis.RedisStore.html).\n", + "This will help you get started with Redis [key-value stores](/docs/concepts/key_value_stores). For detailed documentation of all `RedisStore` features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/storage/langchain_community.storage.redis.RedisStore.html).\n", "\n", "## Overview\n", "\n", diff --git a/docs/docs/integrations/stores/upstash_redis.ipynb b/docs/docs/integrations/stores/upstash_redis.ipynb index c512e9b376..54d9c668b0 100644 --- a/docs/docs/integrations/stores/upstash_redis.ipynb +++ b/docs/docs/integrations/stores/upstash_redis.ipynb @@ -19,7 +19,7 @@ "source": [ "# UpstashRedisByteStore\n", "\n", - "This will help you get started with Upstash redis [key-value stores](/docs/concepts/#key-value-stores). For detailed documentation of all `UpstashRedisByteStore` features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/storage/langchain_community.storage.upstash_redis.UpstashRedisByteStore.html).\n", + "This will help you get started with Upstash redis [key-value stores](/docs/concepts/key_value_stores). 
For detailed documentation of all `UpstashRedisByteStore` features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/storage/langchain_community.storage.upstash_redis.UpstashRedisByteStore.html).\n", "\n", "## Overview\n", "\n", diff --git a/docs/docs/integrations/text_embedding/databricks.ipynb b/docs/docs/integrations/text_embedding/databricks.ipynb index ae6b6ab8c0..4de876b37c 100644 --- a/docs/docs/integrations/text_embedding/databricks.ipynb +++ b/docs/docs/integrations/text_embedding/databricks.ipynb @@ -19,7 +19,7 @@ "\n", "> [Databricks](https://www.databricks.com/) Lakehouse Platform unifies data, analytics, and AI on one platform.\n", "\n", - "This notebook provides a quick overview for getting started with Databricks [embedding models](/docs/concepts/#embedding-models). For detailed documentation of all `DatabricksEmbeddings` features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/embeddings/langchain_community.embeddings.databricks.DatabricksEmbeddings.html).\n", + "This notebook provides a quick overview for getting started with Databricks [embedding models](/docs/concepts/embedding_models). For detailed documentation of all `DatabricksEmbeddings` features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/embeddings/langchain_community.embeddings.databricks.DatabricksEmbeddings.html).\n", "\n", "\n", "\n", diff --git a/docs/docs/integrations/tools/gmail.ipynb b/docs/docs/integrations/tools/gmail.ipynb index b105e0ae57..2f91852c05 100644 --- a/docs/docs/integrations/tools/gmail.ipynb +++ b/docs/docs/integrations/tools/gmail.ipynb @@ -6,7 +6,7 @@ "source": [ "# Gmail Toolkit\n", "\n", - "This will help you getting started with the GMail [toolkit](/docs/concepts/#toolkits). This toolkit interacts with the GMail API to read messages, draft and send messages, and more. For detailed documentation of all GmailToolkit features and configurations head to the [API reference](https://python.langchain.com/api_reference/google_community/gmail/langchain_google_community.gmail.toolkit.GmailToolkit.html).\n", + "This will help you get started with the Gmail [toolkit](/docs/concepts/tools/#toolkits). This toolkit interacts with the Gmail API to read messages, draft and send messages, and more. 
For detailed documentation of all GmailToolkit features and configurations head to the [API reference](https://python.langchain.com/api_reference/google_community/gmail/langchain_google_community.gmail.toolkit.GmailToolkit.html).\n", "\n", "## Setup\n", "\n", diff --git a/docs/docs/integrations/tools/jina_search.ipynb b/docs/docs/integrations/tools/jina_search.ipynb index 511085a579..9003064566 100644 --- a/docs/docs/integrations/tools/jina_search.ipynb +++ b/docs/docs/integrations/tools/jina_search.ipynb @@ -117,7 +117,7 @@ "source": [ "## Invocation\n", "\n", - "### [Invoke directly with args](/docs/concepts/#invoke-with-just-the-arguments)" + "### [Invoke directly with args](/docs/concepts/tools)" ] }, { @@ -143,7 +143,7 @@ "id": "d6e73897", "metadata": {}, "source": [ - "### [Invoke with ToolCall](/docs/concepts/#invoke-with-toolcall)\n", + "### [Invoke with ToolCall](/docs/concepts/tools)\n", "\n", "We can also invoke the tool with a model-generated ToolCall, in which case a ToolMessage will be returned:" ] diff --git a/docs/docs/integrations/tools/requests.ipynb b/docs/docs/integrations/tools/requests.ipynb index 0db60b0aa9..08fd4e9003 100644 --- a/docs/docs/integrations/tools/requests.ipynb +++ b/docs/docs/integrations/tools/requests.ipynb @@ -7,7 +7,7 @@ "source": [ "# Requests Toolkit\n", "\n", - "We can use the Requests [toolkit](/docs/concepts/#toolkits) to construct agents that generate HTTP requests.\n", + "We can use the Requests [toolkit](/docs/concepts/tools/#toolkits) to construct agents that generate HTTP requests.\n", "\n", "For detailed documentation of all API toolkit features and configurations head to the API reference for [RequestsToolkit](https://python.langchain.com/api_reference/community/agent_toolkits/langchain_community.agent_toolkits.openapi.toolkit.RequestsToolkit.html).\n", "\n", diff --git a/docs/docs/integrations/tools/slack.ipynb b/docs/docs/integrations/tools/slack.ipynb index 3a516eeb0d..8420188168 100644 --- a/docs/docs/integrations/tools/slack.ipynb +++ b/docs/docs/integrations/tools/slack.ipynb @@ -6,7 +6,7 @@ "source": [ "# Slack Toolkit\n", "\n", - "This will help you getting started with the Slack [toolkit](/docs/concepts/#toolkits). For detailed documentation of all SlackToolkit features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/agent_toolkits/langchain_community.agent_toolkits.slack.toolkit.SlackToolkit.html).\n", + "This will help you get started with the Slack [toolkit](/docs/concepts/tools/#toolkits). For detailed documentation of all SlackToolkit features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/agent_toolkits/langchain_community.agent_toolkits.slack.toolkit.SlackToolkit.html).\n", "\n", "## Setup\n", "\n", diff --git a/docs/docs/integrations/tools/sql_database.ipynb b/docs/docs/integrations/tools/sql_database.ipynb index f5be93f2c5..0582040343 100644 --- a/docs/docs/integrations/tools/sql_database.ipynb +++ b/docs/docs/integrations/tools/sql_database.ipynb @@ -7,7 +7,7 @@ "source": [ "# SQLDatabase Toolkit\n", "\n", - "This will help you getting started with the SQL Database [toolkit](/docs/concepts/#toolkits). 
For detailed documentation of all `SQLDatabaseToolkit` features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/agent_toolkits/langchain_community.agent_toolkits.sql.toolkit.SQLDatabaseToolkit.html).\n", + "This will help you get started with the SQL Database [toolkit](/docs/concepts/tools/#toolkits). For detailed documentation of all `SQLDatabaseToolkit` features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/agent_toolkits/langchain_community.agent_toolkits.sql.toolkit.SQLDatabaseToolkit.html).\n", "\n", "Tools within the `SQLDatabaseToolkit` are designed to interact with a `SQL` database. \n", "\n", diff --git a/docs/docs/integrations/tools/tavily_search.ipynb b/docs/docs/integrations/tools/tavily_search.ipynb index 7a89acb94a..d4f6afdfff 100644 --- a/docs/docs/integrations/tools/tavily_search.ipynb +++ b/docs/docs/integrations/tools/tavily_search.ipynb @@ -126,7 +126,7 @@ "source": [ "## Invocation\n", "\n", - "### [Invoke directly with args](/docs/concepts/#invoke-with-just-the-arguments)\n", + "### [Invoke directly with args](/docs/concepts/tools)\n", "\n", "The `TavilySearchResults` tool takes a single \"query\" argument, which should be a natural language query:" ] @@ -166,7 +166,7 @@ "id": "d6e73897", "metadata": {}, "source": [ - "### [Invoke with ToolCall](/docs/concepts/#invoke-with-toolcall)\n", + "### [Invoke with ToolCall](/docs/concepts/tools)\n", "\n", "We can also invoke the tool with a model-generated ToolCall, in which case a ToolMessage will be returned:" ] diff --git a/docs/docs/integrations/vectorstores/astradb.ipynb b/docs/docs/integrations/vectorstores/astradb.ipynb index 36c9e07274..b2598ac964 100644 --- a/docs/docs/integrations/vectorstores/astradb.ipynb +++ b/docs/docs/integrations/vectorstores/astradb.ipynb @@ -459,7 +459,7 @@ "\n", "- [Tutorials: working with external knowledge](https://python.langchain.com/docs/tutorials/#working-with-external-knowledge)\n", "- [How-to: Question and answer with RAG](https://python.langchain.com/docs/how_to/#qa-with-rag)\n", - "- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/#retrieval)" + "- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/retrieval)" ] }, { diff --git a/docs/docs/integrations/vectorstores/chroma.ipynb b/docs/docs/integrations/vectorstores/chroma.ipynb index d5c2133b7c..90ea438f25 100644 --- a/docs/docs/integrations/vectorstores/chroma.ipynb +++ b/docs/docs/integrations/vectorstores/chroma.ipynb @@ -463,7 +463,7 @@ "\n", "- [Tutorials: working with external knowledge](https://python.langchain.com/docs/tutorials/#working-with-external-knowledge)\n", "- [How-to: Question and answer with RAG](https://python.langchain.com/docs/how_to/#qa-with-rag)\n", - "- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/#retrieval)" + "- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/retrieval)" ] }, { diff --git a/docs/docs/integrations/vectorstores/clickhouse.ipynb b/docs/docs/integrations/vectorstores/clickhouse.ipynb index 2631ab40a9..fecd3998e5 100644 --- a/docs/docs/integrations/vectorstores/clickhouse.ipynb +++ b/docs/docs/integrations/vectorstores/clickhouse.ipynb @@ -360,7 +360,7 @@ "\n", "- [Tutorials: working with external knowledge](https://python.langchain.com/docs/tutorials/#working-with-external-knowledge)\n", "- [How-to: Question and answer with RAG](https://python.langchain.com/docs/how_to/#qa-with-rag)\n", - "- 
[Retrieval conceptual docs](https://python.langchain.com/docs/concepts/#retrieval)" + "- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/retrieval)" ] }, { diff --git a/docs/docs/integrations/vectorstores/couchbase.ipynb b/docs/docs/integrations/vectorstores/couchbase.ipynb index c80fbbd891..c5aa85e218 100644 --- a/docs/docs/integrations/vectorstores/couchbase.ipynb +++ b/docs/docs/integrations/vectorstores/couchbase.ipynb @@ -678,7 +678,7 @@ "\n", "- [Tutorials: working with external knowledge](https://python.langchain.com/docs/tutorials/#working-with-external-knowledge)\n", "- [How-to: Question and answer with RAG](https://python.langchain.com/docs/how_to/#qa-with-rag)\n", - "- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/#retrieval)" + "- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/retrieval)" ] }, { diff --git a/docs/docs/integrations/vectorstores/databricks_vector_search.ipynb b/docs/docs/integrations/vectorstores/databricks_vector_search.ipynb index adfe23173b..db97f9fb4c 100644 --- a/docs/docs/integrations/vectorstores/databricks_vector_search.ipynb +++ b/docs/docs/integrations/vectorstores/databricks_vector_search.ipynb @@ -496,7 +496,7 @@ "\n", "- [Tutorials: working with external knowledge](https://python.langchain.com/docs/tutorials/#working-with-external-knowledge)\n", "- [How-to: Question and answer with RAG](https://python.langchain.com/docs/how_to/#qa-with-rag)\n", - "- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/#retrieval)" + "- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/retrieval)" ] }, { diff --git a/docs/docs/integrations/vectorstores/elasticsearch.ipynb b/docs/docs/integrations/vectorstores/elasticsearch.ipynb index f2be67d8d2..339197a0fd 100644 --- a/docs/docs/integrations/vectorstores/elasticsearch.ipynb +++ b/docs/docs/integrations/vectorstores/elasticsearch.ipynb @@ -473,7 +473,7 @@ "\n", "- [Tutorials: working with external knowledge](https://python.langchain.com/docs/tutorials/#working-with-external-knowledge)\n", "- [How-to: Question and answer with RAG](https://python.langchain.com/docs/how_to/#qa-with-rag)\n", - "- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/#retrieval)" + "- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/retrieval)" ] }, { diff --git a/docs/docs/integrations/vectorstores/faiss.ipynb b/docs/docs/integrations/vectorstores/faiss.ipynb index 854e5a25dc..2837abd360 100644 --- a/docs/docs/integrations/vectorstores/faiss.ipynb +++ b/docs/docs/integrations/vectorstores/faiss.ipynb @@ -366,7 +366,7 @@ "\n", "- [Tutorials: working with external knowledge](https://python.langchain.com/docs/tutorials/#working-with-external-knowledge)\n", "- [How-to: Question and answer with RAG](https://python.langchain.com/docs/how_to/#qa-with-rag)\n", - "- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/#retrieval)" + "- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/retrieval)" ] }, { diff --git a/docs/docs/integrations/vectorstores/index.mdx b/docs/docs/integrations/vectorstores/index.mdx index cc4d33418d..d6631ce945 100644 --- a/docs/docs/integrations/vectorstores/index.mdx +++ b/docs/docs/integrations/vectorstores/index.mdx @@ -7,7 +7,7 @@ sidebar_class_name: hidden import { CategoryTable, IndexTable } from "@theme/FeatureTables"; -A [vector store](/docs/concepts/#vector-stores) stores [embedded](/docs/concepts/#embedding-models) data and 
performs similarity search. +A [vector store](/docs/concepts/#vector-stores) stores [embedded](/docs/concepts/embedding_models) data and performs similarity search. diff --git a/docs/docs/integrations/vectorstores/milvus.ipynb b/docs/docs/integrations/vectorstores/milvus.ipynb index 2dfbbbb2b6..34bda120d8 100644 --- a/docs/docs/integrations/vectorstores/milvus.ipynb +++ b/docs/docs/integrations/vectorstores/milvus.ipynb @@ -397,7 +397,7 @@ "\n", "- [Tutorials: working with external knowledge](https://python.langchain.com/docs/tutorials/#working-with-external-knowledge)\n", "- [How-to: Question and answer with RAG](https://python.langchain.com/docs/how_to/#qa-with-rag)\n", - "- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/#retrieval)" + "- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/retrieval)" ] }, { diff --git a/docs/docs/integrations/vectorstores/mongodb_atlas.ipynb b/docs/docs/integrations/vectorstores/mongodb_atlas.ipynb index dd29cf56dc..812b2f12f2 100644 --- a/docs/docs/integrations/vectorstores/mongodb_atlas.ipynb +++ b/docs/docs/integrations/vectorstores/mongodb_atlas.ipynb @@ -490,7 +490,7 @@ "\n", "- [Tutorials: working with external knowledge](https://python.langchain.com/docs/tutorials/#working-with-external-knowledge)\n", "- [How-to: Question and answer with RAG](https://python.langchain.com/docs/how_to/#qa-with-rag)\n", - "- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/#retrieval)" + "- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/retrieval)" ] }, { diff --git a/docs/docs/integrations/vectorstores/pgvector.ipynb b/docs/docs/integrations/vectorstores/pgvector.ipynb index afbd082aa1..8a532b627b 100644 --- a/docs/docs/integrations/vectorstores/pgvector.ipynb +++ b/docs/docs/integrations/vectorstores/pgvector.ipynb @@ -438,7 +438,7 @@ "\n", "- [Tutorials: working with external knowledge](https://python.langchain.com/docs/tutorials/#working-with-external-knowledge)\n", "- [How-to: Question and answer with RAG](https://python.langchain.com/docs/how_to/#qa-with-rag)\n", - "- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/#retrieval)" + "- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/retrieval)" ] }, { diff --git a/docs/docs/integrations/vectorstores/pinecone.ipynb b/docs/docs/integrations/vectorstores/pinecone.ipynb index ecdf557f3d..a54e4edbee 100644 --- a/docs/docs/integrations/vectorstores/pinecone.ipynb +++ b/docs/docs/integrations/vectorstores/pinecone.ipynb @@ -412,7 +412,7 @@ "\n", "- [Tutorials: working with external knowledge](https://python.langchain.com/docs/tutorials/#working-with-external-knowledge)\n", "- [How-to: Question and answer with RAG](https://python.langchain.com/docs/how_to/#qa-with-rag)\n", - "- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/#retrieval)" + "- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/retrieval)" ] }, { diff --git a/docs/docs/integrations/vectorstores/qdrant.ipynb b/docs/docs/integrations/vectorstores/qdrant.ipynb index 0ab58c7285..b4f2b5a428 100644 --- a/docs/docs/integrations/vectorstores/qdrant.ipynb +++ b/docs/docs/integrations/vectorstores/qdrant.ipynb @@ -717,7 +717,7 @@ "\n", "- [Tutorials: working with external knowledge](https://python.langchain.com/docs/tutorials/#working-with-external-knowledge)\n", "- [How-to: Question and answer with RAG](https://python.langchain.com/docs/how_to/#qa-with-rag)\n", - "- [Retrieval conceptual 
docs](https://python.langchain.com/docs/concepts/#retrieval)" + "- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/retrieval)" ] }, { diff --git a/docs/docs/integrations/vectorstores/redis.ipynb b/docs/docs/integrations/vectorstores/redis.ipynb index 7cd3704b5d..d43ec71cf4 100644 --- a/docs/docs/integrations/vectorstores/redis.ipynb +++ b/docs/docs/integrations/vectorstores/redis.ipynb @@ -764,7 +764,7 @@ "\n", "- [Tutorials: working with external knowledge](https://python.langchain.com/docs/tutorials/#working-with-external-knowledge)\n", "- [How-to: Question and answer with RAG](https://python.langchain.com/docs/how_to/#qa-with-rag)\n", - "- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/#retrieval)" + "- [Retrieval conceptual docs](https://python.langchain.com/docs/concepts/retrieval)" ] }, { diff --git a/docs/docs/troubleshooting/errors/INVALID_PROMPT_INPUT.mdx b/docs/docs/troubleshooting/errors/INVALID_PROMPT_INPUT.mdx index ee2662bc66..74647e4b06 100644 --- a/docs/docs/troubleshooting/errors/INVALID_PROMPT_INPUT.mdx +++ b/docs/docs/troubleshooting/errors/INVALID_PROMPT_INPUT.mdx @@ -8,7 +8,7 @@ The following may help resolve this error: - Double-check your prompt template to ensure that it is correct. - If you are using the default f-string format and you are using curly braces `{` anywhere in your template, they should be double escaped like this: `{{` (and if you want to render a double curly brace, you should use four curly braces: `{{{{`). -- If you are using a [`MessagesPlaceholder`](/docs/concepts/#messagesplaceholder), make sure that you are passing in an array of messages or message-like objects. +- If you are using a [`MessagesPlaceholder`](/docs/concepts/messages/#messagesplaceholder), make sure that you are passing in an array of messages or message-like objects. - If you are using shorthand tuples to declare your prompt template, make sure that the variable name is wrapped in curly braces (`["placeholder", "{messages}"]`). - Try viewing the inputs into your prompt template using [LangSmith](https://docs.smith.langchain.com/) or log statements to confirm they appear as expected. - If you are pulling a prompt from the [LangChain Prompt Hub](https://smith.langchain.com/prompts), try pulling and logging it or running it in isolation with a sample input to confirm that it is what you expect. 
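To make the brace-escaping and `MessagesPlaceholder` advice in the troubleshooting entry above concrete, here is a minimal sketch; the placeholder variable name `messages` and the JSON-style system prompt are assumptions chosen for illustration:

```python
from langchain_core.messages import HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

# Literal curly braces in the default f-string format must be double escaped: {{ and }}.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", 'Reply with JSON shaped like {{"answer": "..."}}.'),
        # Shorthand equivalent: ("placeholder", "{messages}")
        MessagesPlaceholder("messages"),
    ]
)

# The placeholder variable must receive a list of messages (or message-like objects).
prompt_value = prompt.invoke({"messages": [HumanMessage(content="What is LangChain?")]})
print(prompt_value.to_messages())
```

Rendering the prompt in isolation like this is also a quick way to confirm the inputs look as expected before wiring the template into a larger chain.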
diff --git a/docs/docs/tutorials/agents.ipynb b/docs/docs/tutorials/agents.ipynb index 6f688255e5..fc92024894 100644 --- a/docs/docs/tutorials/agents.ipynb +++ b/docs/docs/tutorials/agents.ipynb @@ -25,9 +25,9 @@ "\n", "This guide assumes familiarity with the following concepts:\n", "\n", - "- [Chat Models](/docs/concepts/#chat-models)\n", - "- [Tools](/docs/concepts/#tools)\n", - "- [Agents](/docs/concepts/#agents)\n", + "- [Chat Models](/docs/concepts/chat_models)\n", + "- [Tools](/docs/concepts/tools)\n", + "- [Agents](/docs/concepts/agents)\n", "\n", ":::\n", "\n", diff --git a/docs/docs/tutorials/chatbot.ipynb b/docs/docs/tutorials/chatbot.ipynb index dd6ab2544d..afc8142c26 100644 --- a/docs/docs/tutorials/chatbot.ipynb +++ b/docs/docs/tutorials/chatbot.ipynb @@ -29,9 +29,9 @@ "\n", "This guide assumes familiarity with the following concepts:\n", "\n", - "- [Chat Models](/docs/concepts/#chat-models)\n", - "- [Prompt Templates](/docs/concepts/#prompt-templates)\n", - "- [Chat History](/docs/concepts/#chat-history)\n", + "- [Chat Models](/docs/concepts/chat_models)\n", + "- [Prompt Templates](/docs/concepts/prompt_templates)\n", + "- [Chat History](/docs/concepts/chat_history)\n", "\n", "This guide requires `langgraph >= 0.2.28`.\n", ":::\n", diff --git a/docs/docs/tutorials/extraction.ipynb b/docs/docs/tutorials/extraction.ipynb index 3fa1f7cabf..4470d47770 100644 --- a/docs/docs/tutorials/extraction.ipynb +++ b/docs/docs/tutorials/extraction.ipynb @@ -21,9 +21,9 @@ "\n", "This guide assumes familiarity with the following concepts:\n", "\n", - "- [Chat Models](/docs/concepts/#chat-models)\n", - "- [Tools](/docs/concepts/#tools)\n", - "- [Tool calling](/docs/concepts/#function-tool-calling)\n", + "- [Chat Models](/docs/concepts/chat_models)\n", + "- [Tools](/docs/concepts/tools)\n", + "- [Tool calling](/docs/concepts/tool_calling)\n", "\n", ":::\n", "\n", diff --git a/docs/docs/tutorials/llm_chain.ipynb b/docs/docs/tutorials/llm_chain.ipynb index c698c34473..d9bd9500bc 100644 --- a/docs/docs/tutorials/llm_chain.ipynb +++ b/docs/docs/tutorials/llm_chain.ipynb @@ -21,11 +21,11 @@ "\n", "After reading this tutorial, you'll have a high level overview of:\n", "\n", - "- Using [language models](/docs/concepts/#chat-models)\n", + "- Using [language models](/docs/concepts/chat_models)\n", "\n", - "- Using [PromptTemplates](/docs/concepts/#prompt-templates) and [OutputParsers](/docs/concepts/#output-parsers)\n", + "- Using [PromptTemplates](/docs/concepts/prompt_templates) and [OutputParsers](/docs/concepts/output_parsers)\n", "\n", - "- Using [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language-lcel) to chain components together\n", + "- Using [LangChain Expression Language (LCEL)](/docs/concepts/lcel) to chain components together\n", "\n", "- Debugging and tracing your application using [LangSmith](/docs/concepts/#langsmith)\n", "\n", @@ -443,7 +443,7 @@ "id": "0b19cecb", "metadata": {}, "source": [ - "This is a simple example of using [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language-lcel) to chain together LangChain modules. There are several benefits to this approach, including optimized streaming and tracing support.\n", + "This is a simple example of using [LangChain Expression Language (LCEL)](/docs/concepts/lcel) to chain together LangChain modules. 
There are several benefits to this approach, including optimized streaming and tracing support.\n", "\n", "If we take a look at the LangSmith trace, we can see all three components show up in the [LangSmith trace](https://smith.langchain.com/public/bc49bec0-6b13-4726-967f-dbd3448b786d/r)." ] diff --git a/docs/docs/tutorials/local_rag.ipynb b/docs/docs/tutorials/local_rag.ipynb index 01712fa94a..d98b07732a 100644 --- a/docs/docs/tutorials/local_rag.ipynb +++ b/docs/docs/tutorials/local_rag.ipynb @@ -11,9 +11,9 @@ "\n", "This guide assumes familiarity with the following concepts:\n", "\n", - "- [Chat Models](/docs/concepts/#chat-models)\n", + "- [Chat Models](/docs/concepts/chat_models)\n", "- [Chaining runnables](/docs/how_to/sequence/)\n", - "- [Embeddings](/docs/concepts/#embedding-models)\n", + "- [Embeddings](/docs/concepts/embedding_models)\n", "- [Vector stores](/docs/concepts/#vector-stores)\n", "- [Retrieval-augmented generation](/docs/tutorials/rag/)\n", "\n", @@ -25,7 +25,7 @@ "\n", "This guide will show how to run `LLaMA 3.1` via one provider, [Ollama](/docs/integrations/providers/ollama/) locally (e.g., on your laptop) using local embeddings and a local LLM. However, you can set up and swap in other local providers, such as [LlamaCPP](/docs/integrations/chat/llamacpp/) if you prefer.\n", "\n", - "**Note:** This guide uses a [chat model](/docs/concepts/#chat-models) wrapper that takes care of formatting your input prompt for the specific local model you're using. However, if you are prompting local models directly with a [text-in/text-out LLM](/docs/concepts/#llms) wrapper, you may need to use a prompt tailed for your specific model. This will often [require the inclusion of special tokens](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). [Here's an example for LLaMA 2](https://smith.langchain.com/hub/rlm/rag-prompt-llama).\n", + "**Note:** This guide uses a [chat model](/docs/concepts/chat_models) wrapper that takes care of formatting your input prompt for the specific local model you're using. However, if you are prompting local models directly with a [text-in/text-out LLM](/docs/concepts/text_llms) wrapper, you may need to use a prompt tailored for your specific model. This will often [require the inclusion of special tokens](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). 
[Here's an example for LLaMA 2](https://smith.langchain.com/hub/rlm/rag-prompt-llama).\n", "\n", "## Setup\n", "\n", @@ -445,7 +445,7 @@ "\n", "- [Video: Reliable, fully local RAG agents with LLaMA 3](https://www.youtube.com/watch?v=-ROS6gfYIts) for an agentic approach to RAG with local models\n", "- [Video: Building Corrective RAG from scratch with open-source, local LLMs](https://www.youtube.com/watch?v=E2shqsYwxck)\n", - "- [Conceptual guide on retrieval](/docs/concepts/#retrieval) for an overview of various retrieval techniques you can apply to improve performance\n", + "- [Conceptual guide on retrieval](/docs/concepts/retrieval) for an overview of various retrieval techniques you can apply to improve performance\n", "- [How to guides on RAG](/docs/how_to/#qa-with-rag) for a deeper dive into different specifics around of RAG\n", "- [How to run models locally](/docs/how_to/local_llms/) for different approaches to setting up different providers" ] diff --git a/docs/docs/tutorials/pdf_qa.ipynb b/docs/docs/tutorials/pdf_qa.ipynb index 32dd23c091..d5af436708 100644 --- a/docs/docs/tutorials/pdf_qa.ipynb +++ b/docs/docs/tutorials/pdf_qa.ipynb @@ -23,9 +23,9 @@ "\n", "This guide assumes familiarity with the following concepts:\n", "\n", - "- [Document loaders](/docs/concepts/#document-loaders)\n", - "- [Chat models](/docs/concepts/#chat-models)\n", - "- [Embeddings](/docs/concepts/#embedding-models)\n", + "- [Document loaders](/docs/concepts/document_loaders)\n", + "- [Chat models](/docs/concepts/chat_models)\n", + "- [Embeddings](/docs/concepts/embedding_models)\n", "- [Vector stores](/docs/concepts/#vector-stores)\n", "- [Retrieval-augmented generation](/docs/tutorials/rag/)\n", "\n", @@ -33,7 +33,7 @@ "\n", "PDF files often hold crucial unstructured data unavailable from other sources. They can be quite lengthy, and unlike plain text files, cannot generally be fed directly into the prompt of a language model.\n", "\n", - "In this tutorial, you'll create a system that can answer questions about PDF files. More specifically, you'll use a [Document Loader](/docs/concepts/#document-loaders) to load text in a format usable by an LLM, then build a retrieval-augmented generation (RAG) pipeline to answer questions, including citations from the source material.\n", + "In this tutorial, you'll create a system that can answer questions about PDF files. 
More specifically, you'll use a [Document Loader](/docs/concepts/document_loaders) to load text in a format usable by an LLM, then build a retrieval-augmented generation (RAG) pipeline to answer questions, including citations from the source material.\n", "\n", "This tutorial will gloss over some concepts more deeply covered in our [RAG](/docs/tutorials/rag/) tutorial, so you may want to go through those first if you haven't already.\n", "\n", @@ -111,13 +111,13 @@ "\n", "- The loader reads the PDF at the specified path into memory.\n", "- It then extracts text data using the `pypdf` package.\n", - "- Finally, it creates a LangChain [Document](/docs/concepts/#documents) for each page of the PDF with the page's content and some metadata about where in the document the text came from.\n", + "- Finally, it creates a LangChain [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) for each page of the PDF with the page's content and some metadata about where in the document the text came from.\n", "\n", "LangChain has [many other document loaders](/docs/integrations/document_loaders/) for other data sources, or you can create a [custom document loader](/docs/how_to/document_loader_custom/).\n", "\n", "## Question answering with RAG\n", "\n", - "Next, you'll prepare the loaded documents for later retrieval. Using a [text splitter](/docs/concepts/#text-splitters), you'll split your loaded documents into smaller documents that can more easily fit into an LLM's context window, then load them into a [vector store](/docs/concepts/#vector-stores). You can then create a [retriever](/docs/concepts/#retrievers) from the vector store for use in our RAG chain:\n", + "Next, you'll prepare the loaded documents for later retrieval. Using a [text splitter](/docs/concepts/text_splitters), you'll split your loaded documents into smaller documents that can more easily fit into an LLM's context window, then load them into a [vector store](/docs/concepts/#vector-stores). 
You can then create a [retriever](/docs/concepts/retrievers) from the vector store for use in our RAG chain:\n", "\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", @@ -315,7 +315,7 @@ "\n", "For more on document loaders, you can check out:\n", "\n", - "- [The entry in the conceptual guide](/docs/concepts/#document-loaders)\n", + "- [The entry in the conceptual guide](/docs/concepts/document_loaders)\n", "- [Related how-to guides](/docs/how_to/#document-loaders)\n", "- [Available integrations](/docs/integrations/document_loaders/)\n", "- [How to create a custom document loader](/docs/how_to/document_loader_custom/)\n", diff --git a/docs/docs/tutorials/qa_chat_history.ipynb b/docs/docs/tutorials/qa_chat_history.ipynb index 72cde13832..a74959d05b 100644 --- a/docs/docs/tutorials/qa_chat_history.ipynb +++ b/docs/docs/tutorials/qa_chat_history.ipynb @@ -21,13 +21,13 @@ "\n", "This guide assumes familiarity with the following concepts:\n", "\n", - "- [Chat history](/docs/concepts/#chat-history)\n", - "- [Chat models](/docs/concepts/#chat-models)\n", - "- [Embeddings](/docs/concepts/#embedding-models)\n", + "- [Chat history](/docs/concepts/chat_history)\n", + "- [Chat models](/docs/concepts/chat_models)\n", + "- [Embeddings](/docs/concepts/embedding_models)\n", "- [Vector stores](/docs/concepts/#vector-stores)\n", "- [Retrieval-augmented generation](/docs/tutorials/rag/)\n", - "- [Tools](/docs/concepts/#tools)\n", - "- [Agents](/docs/concepts/#agents)\n", + "- [Tools](/docs/concepts/tools)\n", + "- [Agents](/docs/concepts/agents)\n", "\n", ":::\n", "\n", diff --git a/docs/docs/tutorials/query_analysis.ipynb b/docs/docs/tutorials/query_analysis.ipynb index 29c70c26f3..102febcf1a 100644 --- a/docs/docs/tutorials/query_analysis.ipynb +++ b/docs/docs/tutorials/query_analysis.ipynb @@ -21,11 +21,11 @@ "\n", "This guide assumes familiarity with the following concepts:\n", "\n", - "- [Document loaders](/docs/concepts/#document-loaders)\n", - "- [Chat models](/docs/concepts/#chat-models)\n", - "- [Embeddings](/docs/concepts/#embedding-models)\n", + "- [Document loaders](/docs/concepts/document_loaders)\n", + "- [Chat models](/docs/concepts/chat_models)\n", + "- [Embeddings](/docs/concepts/embedding_models)\n", "- [Vector stores](/docs/concepts/#vector-stores)\n", - "- [Retrieval](/docs/concepts/#retrieval)\n", + "- [Retrieval](/docs/concepts/retrieval)\n", "\n", ":::\n", "\n", diff --git a/docs/docs/tutorials/rag.ipynb b/docs/docs/tutorials/rag.ipynb index dcd60fbc55..dd2299346d 100644 --- a/docs/docs/tutorials/rag.ipynb +++ b/docs/docs/tutorials/rag.ipynb @@ -17,7 +17,7 @@ "complexity.\n", "\n", "If you're already familiar with basic retrieval, you might also be interested in\n", - "this [high-level overview of different retrieval techinques](/docs/concepts/#retrieval).\n", + "this [high-level overview of different retrieval techniques](/docs/concepts/retrieval).\n", "\n", "## What is RAG?\n", "\n", @@ -39,15 +39,15 @@ "The most common full sequence from raw data to answer looks like:\n", "\n", "### Indexing\n", - "1. **Load**: First we need to load our data. This is done with [Document Loaders](/docs/concepts/#document-loaders).\n", - "2. **Split**: [Text splitters](/docs/concepts/#text-splitters) break large `Documents` into smaller chunks. This is useful both for indexing data and for passing it in to a model, since large chunks are harder to search over and won't fit in a model's finite context window.\n", - "3. 
**Store**: We need somewhere to store and index our splits, so that they can later be searched over. This is often done using a [VectorStore](/docs/concepts/#vector-stores) and [Embeddings](/docs/concepts/#embedding-models) model.\n", + "1. **Load**: First we need to load our data. This is done with [Document Loaders](/docs/concepts/document_loaders).\n", + "2. **Split**: [Text splitters](/docs/concepts/text_splitters) break large `Documents` into smaller chunks. This is useful both for indexing data and for passing it in to a model, since large chunks are harder to search over and won't fit in a model's finite context window.\n", + "3. **Store**: We need somewhere to store and index our splits, so that they can later be searched over. This is often done using a [VectorStore](/docs/concepts/#vector-stores) and [Embeddings](/docs/concepts/embedding_models) model.\n", "\n", "![index_diagram](../../static/img/rag_indexing.png)\n", "\n", "### Retrieval and generation\n", - "4. **Retrieve**: Given a user input, relevant splits are retrieved from storage using a [Retriever](/docs/concepts/#retrievers).\n", - "5. **Generate**: A [ChatModel](/docs/concepts/#chat-models) / [LLM](/docs/concepts/#llms) produces an answer using a prompt that includes the question and the retrieved data\n", + "4. **Retrieve**: Given a user input, relevant splits are retrieved from storage using a [Retriever](/docs/concepts/retrievers).\n", + "5. **Generate**: A [ChatModel](/docs/concepts/chat_models) / [LLM](/docs/concepts/text_llms) produces an answer using a prompt that includes the question and the retrieved data\n", "\n", "![retrieval_diagram](../../static/img/rag_retrieval_generation.png)\n", "\n", @@ -946,7 +946,7 @@ "- [Return sources](/docs/how_to/qa_sources): Learn how to return source documents\n", "- [Streaming](/docs/how_to/streaming): Learn how to stream outputs and intermediate steps\n", "- [Add chat history](/docs/how_to/message_history): Learn how to add chat history to your app\n", - "- [Retrieval conceptual guide](/docs/concepts/#retrieval): A high-level overview of specific retrieval techniques\n", + "- [Retrieval conceptual guide](/docs/concepts/retrieval): A high-level overview of specific retrieval techniques\n", "- [Build a local RAG application](/docs/tutorials/local_rag): Create an app similar to the one above using all local components" ] } diff --git a/docs/docs/tutorials/retrievers.ipynb b/docs/docs/tutorials/retrievers.ipynb index acec0cc303..2eefbdd749 100644 --- a/docs/docs/tutorials/retrievers.ipynb +++ b/docs/docs/tutorials/retrievers.ipynb @@ -309,7 +309,7 @@ "\n", "## Retrievers\n", "\n", - "LangChain `VectorStore` objects do not subclass [Runnable](https://python.langchain.com/api_reference/core/index.html#module-langchain_core.runnables), and so cannot immediately be integrated into LangChain Expression Language [chains](/docs/concepts/#langchain-expression-language-lcel).\n", + "LangChain `VectorStore` objects do not subclass [Runnable](https://python.langchain.com/api_reference/core/index.html#module-langchain_core.runnables), and so cannot immediately be integrated into LangChain Expression Language [chains](/docs/concepts/lcel).\n", "\n", "LangChain [Retrievers](https://python.langchain.com/api_reference/core/index.html#module-langchain_core.retrievers) are Runnables, so they implement a standard set of methods (e.g., synchronous and asynchronous `invoke` and `batch` operations) and are designed to be incorporated in LCEL chains.\n", "\n", diff --git 
a/docs/docs/tutorials/sql_qa.ipynb b/docs/docs/tutorials/sql_qa.ipynb index 529bfaac49..2ba4c6acbb 100644 --- a/docs/docs/tutorials/sql_qa.ipynb +++ b/docs/docs/tutorials/sql_qa.ipynb @@ -11,9 +11,9 @@ "This guide assumes familiarity with the following concepts:\n", "\n", "- [Chaining runnables](/docs/how_to/sequence/)\n", - "- [Chat models](/docs/concepts/#chat-models)\n", - "- [Tools](/docs/concepts/#tools)\n", - "- [Agents](/docs/concepts/#agents)\n", + "- [Chat models](/docs/concepts/chat_models)\n", + "- [Tools](/docs/concepts/tools)\n", + "- [Agents](/docs/concepts/agents)\n", "\n", ":::\n", "\n", diff --git a/docs/docs/tutorials/summarization.ipynb b/docs/docs/tutorials/summarization.ipynb index 7e178db1b1..c63f4c6479 100644 --- a/docs/docs/tutorials/summarization.ipynb +++ b/docs/docs/tutorials/summarization.ipynb @@ -52,9 +52,9 @@ "\n", "Concepts we will cover are:\n", "\n", - "- Using [language models](/docs/concepts/#chat-models).\n", + "- Using [language models](/docs/concepts/chat_models).\n", "\n", - "- Using [document loaders](/docs/concepts/#document-loaders), specifically the [WebBaseLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html) to load content from an HTML webpage.\n", + "- Using [document loaders](/docs/concepts/document_loaders), specifically the [WebBaseLoader](https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html) to load content from an HTML webpage.\n", "\n", "- Two ways to summarize or otherwise combine documents.\n", " 1. [Stuff](/docs/tutorials/summarization#stuff), which simply concatenates documents into a prompt;\n", diff --git a/docs/docs/versions/migrating_chains/constitutional_chain.ipynb b/docs/docs/versions/migrating_chains/constitutional_chain.ipynb index 06adc77c9b..66ac36f687 100644 --- a/docs/docs/versions/migrating_chains/constitutional_chain.ipynb +++ b/docs/docs/versions/migrating_chains/constitutional_chain.ipynb @@ -15,7 +15,7 @@ "\n", "- Leverage tool-calling capabilities of chat models that have been fine-tuned for this purpose;\n", "- Reduce parsing errors from extracting expression from a string LLM response;\n", - "- Delegation of instructions to [message roles](/docs/concepts/#messages) (e.g., chat models can understand what a `ToolMessage` represents without the need for additional prompting);\n", + "- Delegation of instructions to [message roles](/docs/concepts/messages) (e.g., chat models can understand what a `ToolMessage` represents without the need for additional prompting);\n", "- Support for streaming, both of individual tokens and chain steps." ] }, diff --git a/docs/docs/versions/migrating_chains/conversation_chain.ipynb b/docs/docs/versions/migrating_chains/conversation_chain.ipynb index 87af17a655..781505daca 100644 --- a/docs/docs/versions/migrating_chains/conversation_chain.ipynb +++ b/docs/docs/versions/migrating_chains/conversation_chain.ipynb @@ -243,7 +243,7 @@ "\n", "See [this tutorial](/docs/tutorials/chatbot) for a more end-to-end guide on building with [`RunnableWithMessageHistory`](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html).\n", "\n", - "Check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) for more background information." + "Check out the [LCEL conceptual docs](/docs/concepts/lcel) for more background information." 
] } ], diff --git a/docs/docs/versions/migrating_chains/conversation_retrieval_chain.ipynb b/docs/docs/versions/migrating_chains/conversation_retrieval_chain.ipynb index 3193008287..2e1db7be5d 100644 --- a/docs/docs/versions/migrating_chains/conversation_retrieval_chain.ipynb +++ b/docs/docs/versions/migrating_chains/conversation_retrieval_chain.ipynb @@ -254,7 +254,7 @@ "\n", "You've now seen how to migrate existing usage of some legacy chains to LCEL.\n", "\n", - "Next, check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) for more background information." + "Next, check out the [LCEL conceptual docs](/docs/concepts/lcel) for more background information." ] }, { diff --git a/docs/docs/versions/migrating_chains/index.ipynb b/docs/docs/versions/migrating_chains/index.ipynb index 9d9ddbba0c..7a46b74d67 100644 --- a/docs/docs/versions/migrating_chains/index.ipynb +++ b/docs/docs/versions/migrating_chains/index.ipynb @@ -41,7 +41,7 @@ "LangChain maintains a number of legacy abstractions. Many of these can be reimplemented via short combinations of LCEL and LangGraph primitives.\n", "\n", "### LCEL\n", - "[LCEL](/docs/concepts/#langchain-expression-language-lcel) is designed to streamline the process of building useful apps with LLMs and combining related components. It does this by providing:\n", + "[LCEL](/docs/concepts/lcel) is designed to streamline the process of building useful apps with LLMs and combining related components. It does this by providing:\n", "\n", "1. **A unified interface**: Every LCEL object implements the `Runnable` interface, which defines a common set of invocation methods (`invoke`, `batch`, `stream`, `ainvoke`, ...). This makes it possible to also automatically and consistently support useful operations like streaming of intermediate steps and batching, since every chain composed of LCEL objects is itself an LCEL object.\n", "2. **Composition primitives**: LCEL provides a number of primitives that make it easy to compose chains, parallelize components, add fallbacks, dynamically configure chain internals, and more.\n", @@ -75,7 +75,7 @@ "- [LLMMathChain](./llm_math_chain.ipynb)\n", "- [ConstitutionalChain](./constitutional_chain.ipynb)\n", "\n", - "Check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) and [LangGraph docs](https://langchain-ai.github.io/langgraph/) for more background information." + "Check out the [LCEL conceptual docs](/docs/concepts/lcel) and [LangGraph docs](https://langchain-ai.github.io/langgraph/) for more background information." ] } ], diff --git a/docs/docs/versions/migrating_chains/llm_chain.ipynb b/docs/docs/versions/migrating_chains/llm_chain.ipynb index a4addea4a4..6069a7ccfb 100644 --- a/docs/docs/versions/migrating_chains/llm_chain.ipynb +++ b/docs/docs/versions/migrating_chains/llm_chain.ipynb @@ -200,7 +200,7 @@ "\n", "See [this tutorial](/docs/tutorials/llm_chain) for more detail on building with prompt templates, LLMs, and output parsers.\n", "\n", - "Check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) for more background information." + "Check out the [LCEL conceptual docs](/docs/concepts/lcel) for more background information." 
] } ], diff --git a/docs/docs/versions/migrating_chains/llm_math_chain.ipynb b/docs/docs/versions/migrating_chains/llm_math_chain.ipynb index 2697d02fd1..36bbd884cf 100644 --- a/docs/docs/versions/migrating_chains/llm_math_chain.ipynb +++ b/docs/docs/versions/migrating_chains/llm_math_chain.ipynb @@ -9,11 +9,11 @@ "\n", "[`LLMMathChain`](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.llm_math.base.LLMMathChain.html) enabled the evaluation of mathematical expressions generated by a LLM. Instructions for generating the expressions were formatted into the prompt, and the expressions were parsed out of the string response before evaluation using the [numexpr](https://numexpr.readthedocs.io/en/latest/user_guide.html) library.\n", "\n", - "This is more naturally achieved via [tool calling](/docs/concepts/#functiontool-calling). We can equip a chat model with a simple calculator tool leveraging `numexpr` and construct a simple chain around it using [LangGraph](https://langchain-ai.github.io/langgraph/). Some advantages of this approach include:\n", + "This is more naturally achieved via [tool calling](/docs/concepts/tool_calling). We can equip a chat model with a simple calculator tool leveraging `numexpr` and construct a simple chain around it using [LangGraph](https://langchain-ai.github.io/langgraph/). Some advantages of this approach include:\n", "\n", "- Leverage tool-calling capabilities of chat models that have been fine-tuned for this purpose;\n", "- Reduce parsing errors from extracting expression from a string LLM response;\n", - "- Delegation of instructions to [message roles](/docs/concepts/#messages) (e.g., chat models can understand what a `ToolMessage` represents without the need for additional prompting);\n", + "- Delegation of instructions to [message roles](/docs/concepts/messages) (e.g., chat models can understand what a `ToolMessage` represents without the need for additional prompting);\n", "- Support for streaming, both of individual tokens and chain steps." ] }, diff --git a/docs/docs/versions/migrating_chains/llm_router_chain.ipynb b/docs/docs/versions/migrating_chains/llm_router_chain.ipynb index 5e37f2874c..24391baaf2 100644 --- a/docs/docs/versions/migrating_chains/llm_router_chain.ipynb +++ b/docs/docs/versions/migrating_chains/llm_router_chain.ipynb @@ -9,7 +9,7 @@ "\n", "The [`LLMRouterChain`](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.router.llm_router.LLMRouterChain.html) routed an input query to one of multiple destinations-- that is, given an input query, it used a LLM to select from a list of destination chains, and passed its inputs to the selected chain.\n", "\n", - "`LLMRouterChain` does not support common [chat model](/docs/concepts/#chat-models) features, such as message roles and [tool calling](/docs/concepts/#functiontool-calling). Under the hood, `LLMRouterChain` routes a query by instructing the LLM to generate JSON-formatted text, and parsing out the intended destination.\n", + "`LLMRouterChain` does not support common [chat model](/docs/concepts/chat_models) features, such as message roles and [tool calling](/docs/concepts/tool_calling). Under the hood, `LLMRouterChain` routes a query by instructing the LLM to generate JSON-formatted text, and parsing out the intended destination.\n", "\n", "Consider an example from a [MultiPromptChain](/docs/versions/migrating_chains/multi_prompt_chain), which uses `LLMRouterChain`. 
Below is an (example) default prompt:" ] @@ -240,7 +240,7 @@ "\n", "See [this tutorial](/docs/tutorials/llm_chain) for more detail on building with prompt templates, LLMs, and output parsers.\n", "\n", - "Check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) for more background information." + "Check out the [LCEL conceptual docs](/docs/concepts/lcel) for more background information." ] }, { diff --git a/docs/docs/versions/migrating_chains/map_rerank_docs_chain.ipynb b/docs/docs/versions/migrating_chains/map_rerank_docs_chain.ipynb index 6b979eeaed..f52e68e2bd 100644 --- a/docs/docs/versions/migrating_chains/map_rerank_docs_chain.ipynb +++ b/docs/docs/versions/migrating_chains/map_rerank_docs_chain.ipynb @@ -15,7 +15,7 @@ "\n", "A common process in this scenario is question-answering using pieces of context from a document. Forcing the model to generate score along with its answer helps to select for answers generated only by relevant context.\n", "\n", - "An [LangGraph](https://langchain-ai.github.io/langgraph/) implementation allows for the incorporation of [tool calling](/docs/concepts/#functiontool-calling) and other features for this problem. Below we will go through both `MapRerankDocumentsChain` and a corresponding LangGraph implementation on a simple example for illustrative purposes." + "A [LangGraph](https://langchain-ai.github.io/langgraph/) implementation allows for the incorporation of [tool calling](/docs/concepts/tool_calling) and other features for this problem. Below we will go through both `MapRerankDocumentsChain` and a corresponding LangGraph implementation on a simple example for illustrative purposes." ] }, { diff --git a/docs/docs/versions/migrating_chains/multi_prompt_chain.ipynb b/docs/docs/versions/migrating_chains/multi_prompt_chain.ipynb index 6c98788035..0efaee8b91 100644 --- a/docs/docs/versions/migrating_chains/multi_prompt_chain.ipynb +++ b/docs/docs/versions/migrating_chains/multi_prompt_chain.ipynb @@ -9,7 +9,7 @@ "\n", "The [`MultiPromptChain`](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.router.multi_prompt.MultiPromptChain.html) routed an input query to one of multiple LLMChains-- that is, given an input query, it used a LLM to select from a list of prompts, formatted the query into the prompt, and generated a response.\n", "\n", - "`MultiPromptChain` does not support common [chat model](/docs/concepts/#chat-models) features, such as message roles and [tool calling](/docs/concepts/#functiontool-calling).\n", + "`MultiPromptChain` does not support common [chat model](/docs/concepts/chat_models) features, such as message roles and [tool calling](/docs/concepts/tool_calling).\n", "\n", "A [LangGraph](https://langchain-ai.github.io/langgraph/) implementation confers a number of advantages for this problem:\n", "\n", diff --git a/docs/docs/versions/migrating_chains/refine_docs_chain.ipynb b/docs/docs/versions/migrating_chains/refine_docs_chain.ipynb index 80ee5afbac..6b8b3549a1 100644 --- a/docs/docs/versions/migrating_chains/refine_docs_chain.ipynb +++ b/docs/docs/versions/migrating_chains/refine_docs_chain.ipynb @@ -20,7 +20,7 @@ "\n", "- Where `RefineDocumentsChain` refines the summary via a `for` loop inside the class, a LangGraph implementation lets you step through the execution to monitor or otherwise steer it if needed.\n", "- The LangGraph implementation supports streaming of both execution steps and individual tokens.\n", - "- Because it is assembled from modular components, it is 
also simple to extend or modify (e.g., to incorporate [tool calling](/docs/concepts/#functiontool-calling) or other behavior).\n", + "- Because it is assembled from modular components, it is also simple to extend or modify (e.g., to incorporate [tool calling](/docs/concepts/tool_calling) or other behavior).\n", "\n", "Below we will go through both `RefineDocumentsChain` and a corresponding LangGraph implementation on a simple example for illustrative purposes.\n", "\n", diff --git a/docs/docs/versions/migrating_chains/retrieval_qa.ipynb b/docs/docs/versions/migrating_chains/retrieval_qa.ipynb index fec2a75f1b..64ec58b3eb 100644 --- a/docs/docs/versions/migrating_chains/retrieval_qa.ipynb +++ b/docs/docs/versions/migrating_chains/retrieval_qa.ipynb @@ -226,7 +226,7 @@ "\n", "## Next steps\n", "\n", - "Check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) for more background information on the LangChain expression language." + "Check out the [LCEL conceptual docs](/docs/concepts/lcel) for more background information on the LangChain expression language." ] } ], diff --git a/docs/docs/versions/migrating_chains/stuff_docs_chain.ipynb b/docs/docs/versions/migrating_chains/stuff_docs_chain.ipynb index c09b0279b6..3d2a97b3a9 100644 --- a/docs/docs/versions/migrating_chains/stuff_docs_chain.ipynb +++ b/docs/docs/versions/migrating_chains/stuff_docs_chain.ipynb @@ -9,7 +9,7 @@ "\n", "[StuffDocumentsChain](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.combine_documents.stuff.StuffDocumentsChain.html) combines documents by concatenating them into a single context window. It is a straightforward and effective strategy for combining documents for question-answering, summarization, and other purposes.\n", "\n", - "[create_stuff_documents_chain](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.combine_documents.stuff.create_stuff_documents_chain.html) is the recommended alternative. It functions the same as `StuffDocumentsChain`, with better support for streaming and batch functionality. Because it is a simple combination of [LCEL primitives](/docs/concepts/#langchain-expression-language-lcel), it is also easier to extend and incorporate into other LangChain applications.\n", + "[create_stuff_documents_chain](https://python.langchain.com/api_reference/langchain/chains/langchain.chains.combine_documents.stuff.create_stuff_documents_chain.html) is the recommended alternative. It functions the same as `StuffDocumentsChain`, with better support for streaming and batch functionality. 
Because it is a simple combination of [LCEL primitives](/docs/concepts/lcel), it is also easier to extend and incorporate into other LangChain applications.\n", "\n", "Below we will go through both `StuffDocumentsChain` and `create_stuff_documents_chain` on a simple example for illustrative purposes.\n", "\n", @@ -245,7 +245,7 @@ "\n", "## Next steps\n", "\n", - "Check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) for more background information.\n", + "Check out the [LCEL conceptual docs](/docs/concepts/lcel) for more background information.\n", "\n", "See these [how-to guides](/docs/how_to/#qa-with-rag) for more on question-answering tasks with RAG.\n", "\n", diff --git a/docs/docs/versions/migrating_memory/chat_history.ipynb b/docs/docs/versions/migrating_memory/chat_history.ipynb index fc164ee135..164e897369 100644 --- a/docs/docs/versions/migrating_memory/chat_history.ipynb +++ b/docs/docs/versions/migrating_memory/chat_history.ipynb @@ -10,7 +10,7 @@ ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", - "* [Chat History](/docs/concepts/#chat-history)\n", + "* [Chat History](/docs/concepts/chat_history)\n", "* [RunnableWithMessageHistory](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html)\n", "* [LangGraph](https://langchain-ai.github.io/langgraph/concepts/high_level/)\n", "* [Memory](https://langchain-ai.github.io/langgraph/concepts/agentic_concepts/#memory)\n", @@ -236,7 +236,7 @@ "\n", "This how-to guide used the `messages` and `add_messages` interface of `BaseChatMessageHistory` directly. \n", "\n", - "Alternatively, you can use [RunnableWithMessageHistory](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html), as [LCEL](/docs/concepts/#langchain-expression-language-lcel/) can be used inside any [LangGraph node](https://langchain-ai.github.io/langgraph/concepts/low_level/#nodes).\n", + "Alternatively, you can use [RunnableWithMessageHistory](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.history.RunnableWithMessageHistory.html), as [LCEL](/docs/concepts/lcel/) can be used inside any [LangGraph node](https://langchain-ai.github.io/langgraph/concepts/low_level/#nodes).\n", "\n", "To do that replace the following code:\n", "\n", diff --git a/docs/docs/versions/migrating_memory/index.mdx b/docs/docs/versions/migrating_memory/index.mdx index fbd7cd6d3e..8b8e668133 100644 --- a/docs/docs/versions/migrating_memory/index.mdx +++ b/docs/docs/versions/migrating_memory/index.mdx @@ -18,7 +18,7 @@ The main advantages of persistence in LangGraph are: - Error recovery - Allowing human intervention in AI workflows - Exploring different conversation paths ("time travel") -- Full compatibility with both traditional [language models](/docs/concepts/#llms) and modern [chat models](/docs/concepts/#chat-models). Early memory implementations in LangChain weren't designed for newer chat model APIs, causing issues with features like tool-calling. LangGraph memory can persist any custom state. +- Full compatibility with both traditional [language models](/docs/concepts/text_llms) and modern [chat models](/docs/concepts/chat_models). Early memory implementations in LangChain weren't designed for newer chat model APIs, causing issues with features like tool-calling. LangGraph memory can persist any custom state. 
- Highly customizable, allowing you to fully control how memory works and use different storage backends. ## Evolution of memory in LangChain