mv module integrations docs (#8101)

pull/8169/head^2
Bagatur 1 year ago committed by GitHub
parent 8ea840432f
commit c8c8635dc9
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23

@@ -51,7 +51,7 @@ Walkthroughs and best-practices for common end-to-end use cases, like:
 Learn best practices for developing with LangChain.
 ### [Ecosystem](/docs/ecosystem/)
-LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/ecosystem/integrations/) and [dependent repos](/docs/ecosystem/dependents.html).
+LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/integrations/) and [dependent repos](/docs/ecosystem/dependents).
 ### [Additional resources](/docs/additional_resources/)
 Our community is full of prolific developers, creative builders, and fantastic teachers. Check out [YouTube tutorials](/docs/additional_resources/youtube.html) for great tutorials from folks in the community, and [Gallery](https://github.com/kyrolabs/awesome-langchain) for a list of awesome LangChain projects, compiled by the folks at [KyroLabs](https://kyrolabs.com).

@@ -19,7 +19,7 @@ This prompt can include things like:
 2. Background context for the agent (useful for giving it more context on the types of tasks it's being asked to do)
 3. Prompting strategies to invoke better reasoning (the most famous/widely used being [ReAct](https://arxiv.org/abs/2210.03629))
-LangChain provides a few different agent types to get started.
+LangChain provides a few different types of agents to get started.
 Even then, you will likely want to customize those agents with parts (1) and (2).
 For a full list of agent types see [agent types](/docs/modules/agents/agent_types/)

@@ -3,8 +3,8 @@ sidebar_position: 3
 ---
 # Toolkits
-Toolkits are collections of tools that are designed to be used together for specific tasks and have convenience loading methods.
-import DocCardList from "@theme/DocCardList";
-<DocCardList />
+:::info
+Head to [Integrations](/docs/integrations/toolkits/) for documentation on built-in toolkit integrations.
+:::
+Toolkits are collections of tools that are designed to be used together for specific tasks and have convenience loading methods.

@@ -3,6 +3,10 @@ sidebar_position: 2
 ---
 # Tools
+:::info
+Head to [Integrations](/docs/integrations/tools/) for documentation on built-in tool integrations.
+:::
 Tools are interfaces that an agent can use to interact with the world.
 ## Get started
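The hunk above describes tools as interfaces an agent uses to interact with the world. A minimal sketch in plain Python of what such an interface looks like (an illustrative stand-in with hypothetical names, not LangChain's actual `Tool` class):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    # A tool pairs a name and description (shown to the agent so it can
    # decide when to use the tool) with the callable that does the work.
    name: str
    description: str
    func: Callable[[str], str]

    def run(self, tool_input: str) -> str:
        return self.func(tool_input)

# Hypothetical example tool: reverse the input string.
reverse_tool = Tool(
    name="reverse",
    description="Reverses the input string.",
    func=lambda s: s[::-1],
)
```

The agent only ever sees `name` and `description`; the framework dispatches to `func` when the agent chooses the tool.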

@@ -3,6 +3,10 @@ sidebar_position: 5
 ---
 # Callbacks
+:::info
+Head to [Integrations](/docs/integrations/callbacks/) for documentation on built-in callbacks integrations with 3rd-party tools.
+:::
 LangChain provides a callbacks system that allows you to hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks.
 import GetStarted from "@snippets/modules/callbacks/get_started.mdx"
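The callbacks docs above describe hooking into the stages of an LLM call. A toy sketch of the pattern (hypothetical handler and `fake_llm` names, not LangChain's real callback API):

```python
class CallbackHandler:
    # Hooks fired at different stages of a model call; no-ops by default.
    def on_llm_start(self, prompt: str) -> None: ...
    def on_llm_new_token(self, token: str) -> None: ...
    def on_llm_end(self, output: str) -> None: ...

class LoggingHandler(CallbackHandler):
    # Records every event, useful for logging or monitoring.
    def __init__(self):
        self.events = []
    def on_llm_start(self, prompt):
        self.events.append(("start", prompt))
    def on_llm_new_token(self, token):
        self.events.append(("token", token))
    def on_llm_end(self, output):
        self.events.append(("end", output))

def fake_llm(prompt: str, handlers: list) -> str:
    # A stand-in "model" that streams tokens and fires callbacks,
    # showing where streaming handlers would plug in.
    for h in handlers:
        h.on_llm_start(prompt)
    tokens = ["Hello", " ", "world"]
    for t in tokens:
        for h in handlers:
            h.on_llm_new_token(t)
    output = "".join(tokens)
    for h in handlers:
        h.on_llm_end(output)
    return output
```

Because handlers only observe events, logging, monitoring, and streaming can all be layered on without touching the model-calling code.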

@@ -3,6 +3,10 @@ sidebar_position: 0
 ---
 # Document loaders
+:::info
+Head to [Integrations](/docs/integrations/document_loaders/) for documentation on built-in document loader integrations with 3rd-party tools.
+:::
 Use document loaders to load data from a source as `Document`'s. A `Document` is a piece of text
 and associated metadata. For example, there are document loaders for loading a simple `.txt` file, for loading the text
 contents of any web page, or even for loading a transcript of a YouTube video.
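The loader docs above define a `Document` as text plus metadata. A minimal sketch of that idea for the `.txt` case (an illustrative stand-in, not LangChain's actual classes):

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    # A Document is a piece of text plus associated metadata.
    page_content: str
    metadata: dict = field(default_factory=dict)

def load_txt(path: str) -> list[Document]:
    # Minimal .txt loader: one Document per file, with the source
    # path recorded in metadata so downstream steps can cite it.
    with open(path, encoding="utf-8") as f:
        text = f.read()
    return [Document(page_content=text, metadata={"source": path})]
```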

@@ -3,6 +3,10 @@ sidebar_position: 1
 ---
 # Document transformers
+:::info
+Head to [Integrations](/docs/integrations/document_transformers/) for documentation on built-in document transformer integrations with 3rd-party tools.
+:::
 Once you've loaded documents, you'll often want to transform them to better suit your application. The simplest example
 is you may want to split a long document into smaller chunks that can fit into your model's context window. LangChain
 has a number of built-in document transformers that make it easy to split, combine, filter, and otherwise manipulate documents.
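The simplest transformer named above is splitting a long document into context-window-sized chunks. A toy character-level splitter with overlap, to illustrate the mechanics (not LangChain's actual text splitters):

```python
def split_text(text: str, chunk_size: int = 100, chunk_overlap: int = 20) -> list[str]:
    # Slide a window of chunk_size characters over the text, stepping by
    # chunk_size - chunk_overlap so adjacent chunks share some context.
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    step = chunk_size - chunk_overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks
```

The overlap is the interesting design choice: it keeps a sentence that straddles a chunk boundary at least partially intact in both chunks.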

@@ -3,6 +3,10 @@ sidebar_position: 4
 ---
 # Retrievers
+:::info
+Head to [Integrations](/docs/integrations/retrievers/) for documentation on built-in retriever integrations with 3rd-party tools.
+:::
 A retriever is an interface that returns documents given an unstructured query. It is more general than a vector store.
 A retriever does not need to be able to store documents, only to return (or retrieve) them. Vector stores can be used
 as the backbone of a retriever, but there are other types of retrievers as well.
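The retriever docs above stress that a retriever is just "query in, documents out" and need not involve vectors at all. A toy keyword-overlap retriever makes that concrete (hypothetical class, not a LangChain retriever):

```python
class KeywordRetriever:
    # Returns the documents whose text shares the most words with the
    # query — a retriever that stores nothing but plain strings and
    # uses no embeddings at all.
    def __init__(self, docs: list[str]):
        self.docs = docs

    def get_relevant_documents(self, query: str, k: int = 2) -> list[str]:
        q = set(query.lower().split())
        scored = sorted(
            self.docs,
            key=lambda d: len(q & set(d.lower().split())),
            reverse=True,
        )
        return scored[:k]
```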

@@ -3,6 +3,10 @@ sidebar_position: 2
 ---
 # Text embedding models
+:::info
+Head to [Integrations](/docs/integrations/text_embedding/) for documentation on built-in integrations with text embedding model providers.
+:::
 The Embeddings class is a class designed for interfacing with text embedding models. There are lots of embedding model providers (OpenAI, Cohere, Hugging Face, etc) - this class is designed to provide a standard interface for all of them.
 Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space.
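To make "similar in the vector space" concrete, here is a toy bag-of-words "embedding" plus cosine similarity (real providers return dense learned vectors; this sketch only illustrates the geometry):

```python
import math
from collections import Counter

def embed(text: str, vocab: list[str]) -> list[float]:
    # Toy "embedding": word-count vector over a fixed vocabulary.
    counts = Counter(text.lower().split())
    return [float(counts[w]) for w in vocab]

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: 1.0 for parallel vectors, 0.0 for orthogonal.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0
```

Semantic search is then just "embed the query, rank stored vectors by cosine similarity".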

@@ -3,6 +3,10 @@ sidebar_position: 3
 ---
 # Vector stores
+:::info
+Head to [Integrations](/docs/integrations/vectorstores/) for documentation on built-in integrations with 3rd-party vector stores.
+:::
 One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding
 vectors, and then at query time to embed the unstructured query and retrieve the embedding vectors that are
 'most similar' to the embedded query. A vector store takes care of storing embedded data and performing vector search
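The two responsibilities named above — storing embedded data and performing vector search — can be sketched as a toy in-memory store (hypothetical class; production stores use approximate-nearest-neighbor indexes instead of a linear scan):

```python
import math

class InMemoryVectorStore:
    # Stores (vector, text) pairs and returns the k texts whose vectors
    # are most similar (by cosine) to a query vector.
    def __init__(self):
        self._entries = []

    def add(self, vector: list[float], text: str) -> None:
        self._entries.append((vector, text))

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def similarity_search(self, query_vector: list[float], k: int = 1) -> list[str]:
        # Linear scan over all entries; fine for a sketch, not at scale.
        ranked = sorted(
            self._entries,
            key=lambda e: self._cosine(e[0], query_vector),
            reverse=True,
        )
        return [text for _, text in ranked[:k]]
```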

@@ -6,6 +6,10 @@ sidebar_position: 3
 🚧 _Docs under construction_ 🚧
+:::info
+Head to [Integrations](/docs/integrations/memory/) for documentation on built-in memory integrations with 3rd-party tools.
+:::
 By default, Chains and Agents are stateless,
 meaning that they treat each incoming query independently (like the underlying LLMs and chat models themselves).
 In some applications, like chatbots, it is essential
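The memory docs above note that chains are stateless by default, so memory is what carries earlier turns into later calls. A minimal conversation-buffer sketch (hypothetical class, loosely modeled on the buffer-memory idea):

```python
class ConversationBufferMemory:
    # Accumulates (human, ai) turns and renders them as a transcript
    # that can be prepended to the next prompt.
    def __init__(self):
        self.turns = []

    def save_context(self, human: str, ai: str) -> None:
        self.turns.append((human, ai))

    def load_memory(self) -> str:
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.turns)
```

A chatbot would call `save_context` after each exchange and splice `load_memory()` into the prompt of the next one.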

@@ -3,18 +3,16 @@ sidebar_position: 1
 ---
 # Chat models
+:::info
+Head to [Integrations](/docs/integrations/chat/) for documentation on built-in integrations with chat model providers.
+:::
 Chat models are a variation on language models.
 While chat models use language models under the hood, the interface they expose is a bit different.
 Rather than expose a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.
 Chat model APIs are fairly new, so we are still figuring out the correct abstractions.
-The following sections of documentation are provided:
-- **How-to guides**: Walkthroughs of core functionality, like streaming, creating chat prompts, etc.
-- **Integrations**: How to use different chat model providers (OpenAI, Anthropic, etc).
 ## Get started
 import GetStarted from "@snippets/modules/model_io/models/chat/get_started.mdx"
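The "chat messages in, chat messages out" interface described above can be sketched with typed message classes and a stand-in model (hypothetical names; not LangChain's actual schema):

```python
from dataclasses import dataclass

@dataclass
class HumanMessage:
    content: str

@dataclass
class AIMessage:
    content: str

def echo_chat_model(messages: list) -> AIMessage:
    # A stand-in chat model: takes a list of messages, returns one
    # AI message — unlike a plain LLM, which maps string to string.
    last = messages[-1].content
    return AIMessage(content=f"You said: {last}")
```

The key contrast with the "text in, text out" LLM interface is the structured roles: the model sees who said what, not just concatenated text.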

@@ -3,15 +3,13 @@ sidebar_position: 0
 ---
 # LLMs
+:::info
+Head to [Integrations](/docs/integrations/llms/) for documentation on built-in integrations with LLM providers.
+:::
 Large Language Models (LLMs) are a core component of LangChain.
 LangChain does not serve it's own LLMs, but rather provides a standard interface for interacting with many different LLMs.
-For more detailed documentation check out our:
-- **How-to guides**: Walkthroughs of core functionality, like streaming, async, etc.
-- **Integrations**: How to use different LLM providers (OpenAI, Anthropic, etc.)
 ## Get started
 There are lots of LLM providers (OpenAI, Cohere, Hugging Face, etc) - the `LLM` class is designed to provide a standard interface for all of them.
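The "standard interface over many providers" idea above is the classic adapter pattern: one abstract base, one subclass per provider. A minimal sketch (hypothetical classes, not LangChain's real `LLM` base):

```python
from abc import ABC, abstractmethod

class BaseLLM(ABC):
    # One "text in, text out" contract; each provider subclass only
    # has to implement _call with its own API plumbing.
    @abstractmethod
    def _call(self, prompt: str) -> str: ...

    def __call__(self, prompt: str) -> str:
        return self._call(prompt)

class UppercaseLLM(BaseLLM):
    # A fake provider for testing: "generates" by uppercasing the prompt.
    def _call(self, prompt: str) -> str:
        return prompt.upper()
```

Application code written against `BaseLLM` can swap providers without changing anything but the constructor.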

(File diff suppressed because it is too large.)

@@ -0,0 +1,9 @@
+---
+sidebar_position: 0
+---
+# Callbacks
+import DocCardList from "@theme/DocCardList";
+<DocCardList />

@@ -0,0 +1,9 @@
+---
+sidebar_position: 0
+---
+# Chat models
+import DocCardList from "@theme/DocCardList";
+<DocCardList />

(Some files were not shown because too many files have changed in this diff.)
