docs[patch]: Adds LangGraph and LangSmith links, adds more crosslinks between pages (#22656)

@baskaryan @hwchase17
pull/22684/head
Jacob Lee 4 months ago committed by GitHub
parent c3a8716589
commit 02ff78deb8
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194

@@ -153,6 +153,8 @@ Generally, such models are better at tool calling than non-fine-tuned models, an
Please see the [tool calling section](/docs/concepts/#functiontool-calling) for more information.
:::
For specifics on how to use chat models, see the [relevant how-to guides here](/docs/how_to/#chat-models).
### LLMs
<span data-heading-keywords="llm,llms"></span>
@@ -165,6 +167,8 @@ When messages are passed in as input, they will be formatted into a string under
LangChain does not provide any LLMs; rather, we rely on third-party integrations.
For specifics on how to use LLMs, see the [relevant how-to guides here](/docs/how_to/#llms).
### Messages
Some language models take a list of messages as input and return a message.
@@ -228,7 +232,7 @@ Prompt Templates take as input a dictionary, where each key represents a variabl
Prompt Templates output a PromptValue. This PromptValue can be passed to an LLM or a ChatModel, and can also be cast to a string or a list of messages.
The reason this PromptValue exists is to make it easy to switch between strings and messages.
There are a few different types of prompt templates:
#### String PromptTemplates
@@ -296,12 +300,15 @@ prompt_template = ChatPromptTemplate.from_messages([
])
```
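The PromptValue behavior described above can be illustrated with a stdlib-only toy sketch; `ToyPromptValue` and `format_prompt` are hypothetical names for illustration, not LangChain's real API:

```python
from dataclasses import dataclass

# Toy stand-in for the PromptValue idea: the same formatted prompt can be
# viewed either as a plain string or as a list of messages.
@dataclass
class ToyPromptValue:
    text: str

    def to_string(self) -> str:
        return self.text

    def to_messages(self) -> list[dict]:
        # A single human message, in a simple role/content shape.
        return [{"role": "human", "content": self.text}]


def format_prompt(template: str, variables: dict) -> ToyPromptValue:
    """Fill a {placeholder}-style template and wrap the result."""
    return ToyPromptValue(template.format(**variables))


pv = format_prompt("Tell me a joke about {topic}", {"topic": "cats"})
print(pv.to_string())    # Tell me a joke about cats
print(pv.to_messages())  # [{'role': 'human', 'content': 'Tell me a joke about cats'}]
```

The point of the two views is that a downstream LLM (string in) or ChatModel (messages in) can consume the same formatted prompt without the caller changing anything.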
For specifics on how to use prompt templates, see the [relevant how-to guides here](/docs/how_to/#prompt-templates).
### Example selectors
One common prompting technique for achieving better performance is to include examples as part of the prompt.
This gives the language model concrete examples of how it should behave.
Sometimes these examples are hardcoded into the prompt, but for more advanced situations it may be nice to dynamically select them.
Example Selectors are classes responsible for selecting and then formatting examples into prompts.
For specifics on how to use example selectors, see the [relevant how-to guides here](/docs/how_to/#example-selectors).
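The dynamic-selection idea can be sketched with a toy length-based selector; `select_examples` and `format_examples` are hypothetical names for illustration, not LangChain's classes:

```python
# Pick as many examples as fit under a word budget, then format them
# into a block of text that can be dropped into a prompt.
def select_examples(examples: list[dict], max_words: int) -> list[dict]:
    selected, used = [], 0
    for ex in examples:
        cost = len(ex["input"].split()) + len(ex["output"].split())
        if used + cost > max_words:
            break
        selected.append(ex)
        used += cost
    return selected


def format_examples(examples: list[dict]) -> str:
    return "\n".join(f"Input: {ex['input']}\nOutput: {ex['output']}" for ex in examples)


examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "energetic", "output": "lethargic"},
]
# Only the examples that fit the budget make it into the prompt.
prompt_block = format_examples(select_examples(examples, max_words=4))
```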
### Output parsers
<span data-heading-keywords="output parser"></span>
@@ -348,6 +355,8 @@ LangChain has lots of different types of output parsers. This is a list of outpu
| [Datetime](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.datetime.DatetimeOutputParser.html#langchain.output_parsers.datetime.DatetimeOutputParser) | | ✅ | | `str` \| `Message` | `datetime.datetime` | Parses response into a datetime. |
| [Structured](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.structured.StructuredOutputParser.html#langchain.output_parsers.structured.StructuredOutputParser) | | ✅ | | `str` \| `Message` | `Dict[str, str]` | An output parser that returns structured information. It is less powerful than other output parsers since it only allows for fields to be strings. This can be useful when you are working with smaller LLMs. |
For specifics on how to use output parsers, see the [relevant how-to guides here](/docs/how_to/#output-parsers).
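A toy version of the Structured row's behavior, assuming a simple `key: value` response format (illustrative only, not the real `StructuredOutputParser`):

```python
import re

# Pull "key: value" lines out of a model's text response into a dict of
# strings -- the str | Message -> Dict[str, str] shape from the table above.
def parse_structured(text: str) -> dict[str, str]:
    result = {}
    for line in text.splitlines():
        match = re.match(r"\s*(\w+)\s*:\s*(.+)", line)
        if match:
            result[match.group(1)] = match.group(2).strip()
    return result


response = "answer: Paris\nsource: geography textbook"
parsed = parse_structured(response)
print(parsed)  # {'answer': 'Paris', 'source': 'geography textbook'}
```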
### Chat history
Most LLM applications have a conversational interface.
An essential component of a conversation is being able to refer to information introduced earlier in the conversation.
@@ -382,6 +391,8 @@ loader = CSVLoader(
data = loader.load()
```
For specifics on how to use document loaders, see the [relevant how-to guides here](/docs/how_to/#document-loaders).
### Text splitters
Once you've loaded documents, you'll often want to transform them to better suit your application. The simplest example is that you may want to split a long document into smaller chunks that can fit into your model's context window. LangChain has a number of built-in document transformers that make it easy to split, combine, filter, and otherwise manipulate documents.
@@ -399,6 +410,8 @@ That means there are two different axes along which you can customize your text
1. How the text is split
2. How the chunk size is measured
For specifics on how to use text splitters, see the [relevant how-to guides here](/docs/how_to/#text-splitters).
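The two axes above can be sketched in a toy splitter where the split strategy (here: on whitespace) and the length function vary independently; this is a sketch of the idea, not LangChain's implementation:

```python
from typing import Callable

def split_text(text: str, chunk_size: int, length_fn: Callable[[str], int]) -> list[str]:
    """Greedily pack whitespace-separated words into chunks no longer
    than chunk_size, as measured by the pluggable length_fn (axis 2)."""
    chunks, current = [], ""
    for word in text.split():
        candidate = (current + " " + word).strip()
        if current and length_fn(candidate) > chunk_size:
            chunks.append(current)
            current = word
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

# Measure chunk size in characters...
char_chunks = split_text("one two three four five", chunk_size=9, length_fn=len)
# ...or in words, without changing how the text is split (axis 1 unchanged).
word_chunks = split_text("one two three four five", chunk_size=2,
                         length_fn=lambda s: len(s.split()))
```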
### Embedding models
<span data-heading-keywords="embedding,embeddings"></span>
@@ -408,6 +421,8 @@ Embeddings create a vector representation of a piece of text. This is useful bec
The base Embeddings class in LangChain provides two methods: one for embedding documents and one for embedding a query. The former takes as input multiple texts, while the latter takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself).
For specifics on how to use embedding models, see the [relevant how-to guides here](/docs/how_to/#embedding-models).
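The two-method interface described above might be sketched as follows; `ToyEmbeddings` is hypothetical, and hash-based fake vectors stand in for a real provider's model call:

```python
import hashlib

class ToyEmbeddings:
    def _embed(self, text: str) -> list[float]:
        # Deterministic fake 8-dim vector derived from a hash of the text.
        digest = hashlib.sha256(text.encode()).digest()
        return [b / 255 for b in digest[:8]]

    def embed_documents(self, texts: list[str]) -> list[list[float]]:
        """Embed many texts (the documents to be searched over)."""
        return [self._embed(t) for t in texts]

    def embed_query(self, text: str) -> list[float]:
        """Embed a single text (the search query itself)."""
        return self._embed(text)


emb = ToyEmbeddings()
doc_vectors = emb.embed_documents(["hello world", "goodbye world"])
query_vector = emb.embed_query("hello world")
```

In this toy version both methods share one embedding function; a real provider might route them to different models, which is exactly why the interface keeps them separate.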
### Vector stores
<span data-heading-keywords="vector,vectorstore,vectorstores,vector store,vector stores"></span>
@@ -422,6 +437,8 @@ vectorstore = MyVectorStore()
retriever = vectorstore.as_retriever()
```
For specifics on how to use vector stores, see the [relevant how-to guides here](/docs/how_to/#vector-stores).
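A toy in-memory vector store can illustrate the `as_retriever()` pattern above; `ToyVectorStore` is hypothetical, not a real integration, and the retriever it returns is simply a callable from query vector to documents:

```python
import math

class ToyVectorStore:
    """Store (vector, text) pairs; rank by cosine similarity on search."""

    def __init__(self):
        self._entries: list[tuple[list[float], str]] = []

    def add(self, vector: list[float], text: str) -> None:
        self._entries.append((vector, text))

    def similarity_search(self, query: list[float], k: int = 1) -> list[str]:
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
            return dot / norm if norm else 0.0
        ranked = sorted(self._entries, key=lambda e: cosine(query, e[0]), reverse=True)
        return [text for _, text in ranked[:k]]

    def as_retriever(self, k: int = 1):
        # A retriever is just: query in, documents out.
        return lambda query: self.similarity_search(query, k=k)


store = ToyVectorStore()
store.add([1.0, 0.0], "doc about cats")
store.add([0.0, 1.0], "doc about dogs")
retriever = store.as_retriever()
print(retriever([0.9, 0.1]))  # ['doc about cats']
```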
### Retrievers
<span data-heading-keywords="retriever,retrievers"></span>
@@ -432,6 +449,8 @@ Retrievers can be created from vectorstores, but are also broad enough to includ
Retrievers accept a string query as input and return a list of Documents as output.
For specifics on how to use retrievers, see the [relevant how-to guides here](/docs/how_to/#retrievers).
### Tools
<span data-heading-keywords="tool,tools"></span>
@@ -459,6 +478,8 @@ Generally, when designing tools to be used by a chat model or LLM, it is importa
- Models will perform better if the tools have well-chosen names, descriptions, and JSON schemas.
- Simpler tools are generally easier for models to use than more complex tools.
For specifics on how to use tools, see the [relevant how-to guides here](/docs/how_to/#tools).
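The guidance above, applied to a hypothetical `multiply` tool: a clear name, a short description, and a minimal JSON schema. The dict shape here is illustrative, not the output of LangChain's real tool decorators:

```python
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

multiply_tool = {
    "name": "multiply",                     # well-chosen, unambiguous name
    "description": "Multiply two integers and return the product.",
    "parameters": {                         # simple JSON schema the model sees
        "type": "object",
        "properties": {
            "a": {"type": "integer"},
            "b": {"type": "integer"},
        },
        "required": ["a", "b"],
    },
    "func": multiply,
}

# When the model emits a call like {"name": "multiply", "args": {"a": 6, "b": 7}},
# the application looks up the tool and runs the implementation:
result = multiply_tool["func"](6, 7)  # 42
```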
### Toolkits
Toolkits are collections of tools that are designed to be used together for specific tasks. They have convenient loading methods.
@@ -491,7 +512,7 @@ In order to solve that we built LangGraph to be this flexible, highly-controllab
If you are still using AgentExecutor, do not fear: we still have a guide on [how to use AgentExecutor](/docs/how_to/agent_executor).
It is recommended, however, that you start to transition to LangGraph.
In order to assist in this we have put together a [transition guide on how to do so](/docs/how_to/migrate_agent).
### Multimodal
@@ -499,6 +520,8 @@ Some models are multimodal, accepting images, audio and even video as inputs. Th
In LangChain, most chat models that support multimodal inputs also accept those values in OpenAI's content blocks format. So far this is restricted to image inputs. For models like Gemini which support video and other bytes input, the APIs also support the native, model-specific representations.
For specifics on how to use multimodal models, see the [relevant how-to guides here](/docs/how_to/#multimodal).
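For reference, a user message carrying an image in OpenAI's content blocks format looks roughly like this (the URL is a placeholder):

```python
# A multimodal chat message: the content is a list of typed blocks
# rather than a single string.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
    ],
}
```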
### Callbacks
LangChain provides a callbacks system that allows you to hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks.
@@ -569,6 +592,8 @@ This is a common reason why you may fail to see events being emitted from custom
runnables or tools.
:::
For specifics on how to use callbacks, see the [relevant how-to guides here](/docs/how_to/#callbacks).
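The hook-into-stages idea can be sketched with a toy handler; this is illustrative only, and LangChain's real interface (`BaseCallbackHandler`) exposes many more events:

```python
class LoggingHandler:
    """Records start/end events from an LLM call, e.g. for logging."""

    def __init__(self):
        self.events = []

    def on_llm_start(self, prompt: str) -> None:
        self.events.append(("start", prompt))

    def on_llm_end(self, output: str) -> None:
        self.events.append(("end", output))


def run_llm(prompt: str, callbacks: list) -> str:
    # Fire the start hooks, do the work, then fire the end hooks.
    for cb in callbacks:
        cb.on_llm_start(prompt)
    output = prompt.upper()  # stand-in for a real model call
    for cb in callbacks:
        cb.on_llm_end(output)
    return output


handler = LoggingHandler()
run_llm("hello", callbacks=[handler])
print(handler.events)  # [('start', 'hello'), ('end', 'HELLO')]
```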
## Techniques
### Function/tool calling
@@ -640,6 +665,7 @@ LangChain provides several advanced retrieval types. A full list is below, along
| [Multi-Query Retriever](/docs/how_to/MultiQueryRetriever/) | Any | Yes | If users are asking questions that are complex and require multiple pieces of distinct information to respond | This uses an LLM to generate multiple queries from the original one. This is useful when the original query needs pieces of information about multiple topics to be properly answered. By generating multiple queries, we can then fetch documents for each of them. |
| [Ensemble](/docs/how_to/ensemble_retriever/) | Any | No | If you have multiple retrieval methods and want to try combining them. | This fetches documents from multiple retrievers and then combines them. |
For a high-level guide on retrieval, see this [tutorial on RAG](/docs/tutorials/rag/).
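The Multi-Query row above can be sketched as follows; all function names are hypothetical, and the LLM rewriting step is stubbed out:

```python
def generate_queries(question: str) -> list[str]:
    # Stand-in for the LLM step that rewrites the question several ways.
    return [question, question.lower(), f"background on {question}"]

def retrieve(query: str, corpus: list[str]) -> list[str]:
    # Trivial keyword retriever: a doc matches if it contains any query word.
    return [doc for doc in corpus if any(w in doc.lower() for w in query.lower().split())]

def multi_query_retrieve(question: str, corpus: list[str]) -> list[str]:
    # Fetch documents for each generated query, deduplicating the union.
    seen, results = set(), []
    for q in generate_queries(question):
        for doc in retrieve(q, corpus):
            if doc not in seen:
                seen.add(doc)
                results.append(doc)
    return results

corpus = ["Cats sleep a lot.", "Background on dogs.", "Paris is in France."]
docs = multi_query_retrieve("cats", corpus)
```

The broadened third query pulls in a document the original question alone would have missed, which is the benefit the table describes.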
### Text splitting

@@ -49,7 +49,7 @@ These are the core building blocks you can use when building applications.
### Prompt templates
[Prompt Templates](/docs/concepts/#prompt-templates) are responsible for formatting user input into a format that can be passed to a language model.
- [How to: use few shot examples](/docs/how_to/few_shot_examples)
- [How to: use few shot examples in chat models](/docs/how_to/few_shot_examples_chat/)
@@ -58,7 +58,7 @@ Prompt Templates are responsible for formatting user input into a format that ca
### Example selectors
[Example Selectors](/docs/concepts/#example-selectors) are responsible for selecting the correct few shot examples to pass to the prompt.
- [How to: use example selectors](/docs/how_to/example_selectors)
- [How to: select examples by length](/docs/how_to/example_selectors_length_based)
@@ -68,7 +68,7 @@ Example Selectors are responsible for selecting the correct few shot examples to
### Chat models
[Chat Models](/docs/concepts/#chat-models) are newer forms of language models that take messages in and output a message.
- [How to: do function/tool calling](/docs/how_to/tool_calling)
- [How to: get models to return structured output](/docs/how_to/structured_output)
@@ -82,7 +82,7 @@ Chat Models are newer forms of language models that take messages in and output
### LLMs
What LangChain calls [LLMs](/docs/concepts/#llms) are older forms of language models that take a string in and output a string.
- [How to: cache model responses](/docs/how_to/llm_caching)
- [How to: create a custom LLM class](/docs/how_to/custom_llm)
@@ -92,7 +92,7 @@ What LangChain calls LLMs are older forms of language models that take a string
### Output parsers
[Output Parsers](/docs/concepts/#output-parsers) are responsible for taking the output of an LLM and parsing it into a more structured format.
- [How to: use output parsers to parse an LLM response into structured format](/docs/how_to/output_parser_structured)
- [How to: parse JSON output](/docs/how_to/output_parser_json)
@@ -104,7 +104,7 @@ Output Parsers are responsible for taking the output of an LLM and parsing into
### Document loaders
[Document Loaders](/docs/concepts/#document-loaders) are responsible for loading documents from a variety of sources.
- [How to: load CSV data](/docs/how_to/document_loader_csv)
- [How to: load data from a directory](/docs/how_to/document_loader_directory)
@@ -117,7 +117,7 @@ Document Loaders are responsible for loading documents from a variety of sources
### Text splitters
[Text Splitters](/docs/concepts/#text-splitters) take a document and split it into chunks that can be used for retrieval.
- [How to: recursively split text](/docs/how_to/recursive_text_splitter)
- [How to: split by HTML headers](/docs/how_to/HTML_header_metadata_splitter)
@@ -131,20 +131,20 @@ Text Splitters take a document and split into chunks that can be used for retrie
### Embedding models
[Embedding Models](/docs/concepts/#embedding-models) take a piece of text and create a numerical representation of it.
- [How to: embed text data](/docs/how_to/embed_text)
- [How to: cache embedding results](/docs/how_to/caching_embeddings)
### Vector stores
[Vector stores](/docs/concepts/#vector-stores) are databases that can efficiently store and retrieve embeddings.
- [How to: use a vector store to retrieve data](/docs/how_to/vectorstores)
### Retrievers
[Retrievers](/docs/concepts/#retrievers) are responsible for taking a query and returning relevant documents.
- [How to: use a vector store to retrieve data](/docs/how_to/vectorstore_retriever)
- [How to: generate multiple queries to retrieve data for](/docs/how_to/MultiQueryRetriever)
@@ -167,7 +167,7 @@ Indexing is the process of keeping your vectorstore in-sync with the underlying
### Tools
LangChain [Tools](/docs/concepts/#tools) contain a description of the tool (to pass to the language model) as well as the implementation of the function to call.
- [How to: create custom tools](/docs/how_to/custom_tools)
- [How to: use built-in tools and built-in toolkits](/docs/how_to/tools_builtin)
@@ -196,6 +196,8 @@ For in depth how-to guides for agents, please check out [LangGraph](https://gith
### Callbacks
[Callbacks](/docs/concepts/#callbacks) allow you to hook into the various stages of your LLM application's execution.
- [How to: pass in callbacks at runtime](/docs/how_to/callbacks_runtime)
- [How to: attach callbacks to a module](/docs/how_to/callbacks_attach)
- [How to: pass callbacks into a module constructor](/docs/how_to/callbacks_constructor)
@@ -222,6 +224,7 @@ These guides cover use-case specific details.
### Q&A with RAG
Retrieval Augmented Generation (RAG) is a way to connect LLMs to external sources of data.
For a high-level tutorial on RAG, check out [this guide](/docs/tutorials/rag/).
- [How to: add chat history](/docs/how_to/qa_chat_history_how_to/)
- [How to: stream](/docs/how_to/qa_streaming/)
@@ -233,6 +236,7 @@ Retrieval Augmented Generation (RAG) is a way to connect LLMs to external source
### Extraction
Extraction is when you use LLMs to extract structured information from unstructured text.
For a high-level tutorial on extraction, check out [this guide](/docs/tutorials/extraction/).
- [How to: use reference examples](/docs/how_to/extraction_examples/)
- [How to: handle long text](/docs/how_to/extraction_long_text/)
@@ -241,6 +245,7 @@ Extraction is when you use LLMs to extract structured information from unstructu
### Chatbots
Chatbots involve using an LLM to have a conversation.
For a high-level tutorial on building chatbots, check out [this guide](/docs/tutorials/chatbot/).
- [How to: manage memory](/docs/how_to/chatbots_memory)
- [How to: do retrieval](/docs/how_to/chatbots_retrieval)
@@ -249,6 +254,7 @@ Chatbots involve using an LLM to have a conversation.
### Query analysis
Query Analysis is the task of using an LLM to generate a query to send to a retriever.
For a high-level tutorial on query analysis, check out [this guide](/docs/tutorials/query_analysis/).
- [How to: add examples to the prompt](/docs/how_to/query_few_shot)
- [How to: handle cases where no queries are generated](/docs/how_to/query_no_queries)
@@ -260,6 +266,7 @@ Query Analysis is the task of using an LLM to generate a query to send to a retr
### Q&A over SQL + CSV
You can use LLMs to do question answering over tabular data.
For a high-level tutorial, check out [this guide](/docs/tutorials/sql_qa/).
- [How to: use prompting to improve results](/docs/how_to/sql_prompting)
- [How to: do query validation](/docs/how_to/sql_query_checking)
@@ -269,8 +276,25 @@ You can use LLMs to do question answering over tabular data.
### Q&A over graph databases
You can use an LLM to do question answering over graph databases.
For a high-level tutorial, check out [this guide](/docs/tutorials/graph/).
- [How to: map values to a database](/docs/how_to/graph_mapping)
- [How to: add a semantic layer over the database](/docs/how_to/graph_semantic)
- [How to: improve results with prompting](/docs/how_to/graph_prompting)
- [How to: construct knowledge graphs](/docs/how_to/graph_constructing)
## [LangGraph](https://langchain-ai.github.io/langgraph)
LangGraph is an extension of LangChain aimed at
building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
LangGraph documentation is currently hosted on a separate site.
You can peruse [LangGraph how-to guides here](https://langchain-ai.github.io/langgraph/how-tos/).
## [LangSmith](https://docs.smith.langchain.com/)
LangSmith allows you to closely trace, monitor and evaluate your LLM application.
It seamlessly integrates with LangChain, and you can use it to inspect and debug individual steps of your chains as you build.
LangSmith documentation is hosted on a separate site.
You can peruse [LangSmith how-to guides here](https://docs.smith.langchain.com/how_to_guides/).

@@ -27,5 +27,20 @@ New to LangChain or to LLM app development in general? Read this material to qui
- [Classify text into labels](/docs/tutorials/classification)
- [Summarize text](/docs/tutorials/summarization)
### LangGraph
LangGraph is an extension of LangChain aimed at
building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
LangGraph documentation is currently hosted on a separate site.
You can peruse [LangGraph tutorials here](https://langchain-ai.github.io/langgraph/tutorials/).
### LangSmith
LangSmith allows you to closely trace, monitor and evaluate your LLM application.
It seamlessly integrates with LangChain, and you can use it to inspect and debug individual steps of your chains as you build.
LangSmith documentation is hosted on a separate site.
You can peruse [LangSmith tutorials here](https://docs.smith.langchain.com/tutorials/).
For a longer list of tutorials, see our [cookbook section](https://github.com/langchain-ai/langchain/tree/master/cookbook).
