docs: standardize capitalization (#21641)

pull/21623/head
Bagatur 2 weeks ago committed by GitHub
parent 89aae3e043
commit b514a479c0

@@ -87,7 +87,7 @@ With LCEL, **all** steps are automatically logged to [LangSmith](/docs/langsmith
[**Seamless LangServe deployment**](/docs/langserve)
Any chain created with LCEL can be easily deployed using [LangServe](/docs/langserve).
-### Runnable Interface
+### Runnable interface
To make it as easy as possible to create custom chains, we've implemented a ["Runnable"](https://api.python.langchain.com/en/stable/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable) protocol. Many LangChain components implement the `Runnable` protocol, including chat models, LLMs, output parsers, retrievers, prompt templates, and more. There are also several useful primitives for working with runnables, which you can read about below.
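The idea behind the protocol can be illustrated with a small pure-Python sketch. This is only an illustration of the invoke/batch/pipe pattern, not the actual `langchain_core` implementation (the real `Runnable` also has `stream` and async variants, and `SimpleRunnable` is a made-up name here):

```python
from typing import Any, Callable, List


class SimpleRunnable:
    """Toy illustration of the Runnable idea: a wrapped callable with
    invoke/batch, composable into chains with the | operator."""

    def __init__(self, func: Callable[[Any], Any]):
        self.func = func

    def invoke(self, value: Any) -> Any:
        # Run the step on a single input.
        return self.func(value)

    def batch(self, values: List[Any]) -> List[Any]:
        # Run the step on each input in turn.
        return [self.invoke(v) for v in values]

    def __or__(self, other: "SimpleRunnable") -> "SimpleRunnable":
        # Compose two steps: self first, then other.
        return SimpleRunnable(lambda v: other.invoke(self.invoke(v)))


# A two-step "chain": format a prompt, then stand in for a model call.
prompt = SimpleRunnable(lambda topic: f"Tell me a joke about {topic}")
model = SimpleRunnable(lambda p: p.upper())
chain = prompt | model
print(chain.invoke("bears"))  # TELL ME A JOKE ABOUT BEARS
```

Because every component exposes the same interface, larger chains are built by composing small ones.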
@@ -281,7 +281,7 @@ prompt_template = ChatPromptTemplate.from_messages([
])
```
-### Example Selectors
+### Example selectors
One common prompting technique for achieving better performance is to include examples as part of the prompt.
This gives the language model concrete examples of how it should behave.
Sometimes these examples are hardcoded into the prompt, but for more advanced situations it may be nice to dynamically select them.
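A minimal sketch of dynamic example selection (a hypothetical word-overlap heuristic, not one of LangChain's built-in example selectors):

```python
from typing import Dict, List


def select_examples_by_overlap(
    examples: List[Dict[str, str]], query: str, k: int = 2
) -> List[Dict[str, str]]:
    """Rank stored examples by word overlap with the query and keep the top k."""
    query_words = set(query.lower().split())

    def overlap(example: Dict[str, str]) -> int:
        return len(query_words & set(example["input"].lower().split()))

    return sorted(examples, key=overlap, reverse=True)[:k]


examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall building", "output": "short building"},
    {"input": "sunny weather", "output": "rainy weather"},
]
# The example sharing a word with the query is selected.
print(select_examples_by_overlap(examples, "is the weather sunny", k=1))
```

Real selectors typically rank by embedding similarity rather than raw word overlap, but the shape is the same: examples in, a query in, the best few examples out.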
@@ -332,7 +332,7 @@ LangChain has lots of different types of output parsers. This is a list of outpu
| [Datetime](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.datetime.DatetimeOutputParser.html#langchain.output_parsers.datetime.DatetimeOutputParser) | | ✅ | | `str` \| `Message` | `datetime.datetime` | Parses response into a datetime string. |
| [Structured](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.structured.StructuredOutputParser.html#langchain.output_parsers.structured.StructuredOutputParser) | | ✅ | | `str` \| `Message` | `Dict[str, str]` | An output parser that returns structured information. It is less powerful than other output parsers since it only allows for fields to be strings. This can be useful when you are working with smaller LLMs. |
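As a rough sketch of what a parser like the datetime one does (the helper name and format below are hypothetical): take the model's raw string reply and turn it into a structured Python value:

```python
from datetime import datetime


def parse_datetime_output(text: str, fmt: str = "%Y-%m-%dT%H:%M:%S") -> datetime:
    # Strip whitespace the model may add, then parse with the expected format.
    return datetime.strptime(text.strip(), fmt)


print(parse_datetime_output("2024-01-15T09:30:00"))  # 2024-01-15 09:30:00
```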
-### Chat History
+### Chat history
Most LLM applications have a conversational interface.
An essential component of a conversation is being able to refer to information introduced earlier in the conversation.
At bare minimum, a conversational system should be able to access some window of past messages directly.
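That "window of past messages" reduces to something as simple as this sketch (the message representation here is hypothetical; LangChain's actual chat history abstractions are richer):

```python
from typing import List, Tuple

Message = Tuple[str, str]  # (role, content): a deliberately minimal stand-in


def last_n_messages(history: List[Message], n: int) -> List[Message]:
    """Return the most recent n messages: a bare-minimum chat-history window."""
    return history[-n:] if n > 0 else []


history = [
    ("human", "Hi, my name is Bob."),
    ("ai", "Hello Bob!"),
    ("human", "What is my name?"),
]
# Only the last two messages are passed back to the model.
print(last_n_messages(history, 2))
```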
@@ -388,12 +388,12 @@ Embeddings create a vector representation of a piece of text. This is useful bec
The base Embeddings class in LangChain provides two methods: one for embedding documents and one for embedding a query. The former takes as input multiple texts, while the latter takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself).
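The two-method shape can be sketched with a toy class (the "embedding" below is a meaningless character statistic, purely to show the interface; `ToyEmbeddings` is not a LangChain class):

```python
from typing import List


class ToyEmbeddings:
    """Hypothetical stand-in for an embeddings implementation, with the
    same two-method shape as the base Embeddings class."""

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        # Multiple texts in, one vector per text out.
        return [self._embed(t) for t in texts]

    def embed_query(self, text: str) -> List[float]:
        # A single query text in, a single vector out.
        return self._embed(text)

    def _embed(self, text: str) -> List[float]:
        # A (meaningless) 2-dim "embedding": length and vowel count.
        return [float(len(text)), float(sum(c in "aeiou" for c in text.lower()))]
```

A provider with distinct document and query endpoints would simply route the two methods differently; here they happen to share `_embed`.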
-### Vectorstores
+### Vector stores
One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors,
and then at query time to embed the unstructured query and retrieve the embedding vectors that are 'most similar' to the embedded query.
A vector store takes care of storing embedded data and performing vector search for you.
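A minimal in-memory sketch of that store-then-search loop, using cosine similarity (illustrative only; `TinyVectorStore` is not a LangChain class, and real vector stores use indexes rather than a linear scan):

```python
import math
from typing import List, Tuple


def cosine(a: List[float], b: List[float]) -> float:
    # Cosine similarity: dot product normalized by vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


class TinyVectorStore:
    """Keeps (vector, text) pairs and returns the texts whose vectors are
    most similar to a query vector."""

    def __init__(self) -> None:
        self._entries: List[Tuple[List[float], str]] = []

    def add(self, vector: List[float], text: str) -> None:
        self._entries.append((vector, text))

    def search(self, query: List[float], k: int = 1) -> List[str]:
        ranked = sorted(self._entries, key=lambda e: cosine(query, e[0]), reverse=True)
        return [text for _, text in ranked[:k]]


store = TinyVectorStore()
store.add([1.0, 0.0], "about cats")
store.add([0.0, 1.0], "about finance")
print(store.search([0.9, 0.1]))  # ['about cats']
```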
-Vectorstores can be converted to the retriever interface by doing:
+Vector stores can be converted to the retriever interface by doing:
```python
vectorstore = MyVectorStore()
retriever = vectorstore.as_retriever()
```
@@ -462,7 +462,7 @@ In order to assist in this we have put together a [transition guide on how to do
## Techniques
-### Function/Tool Calling
+### Function/tool calling
:::info
We use the term tool calling interchangeably with function calling. Although

@@ -3,7 +3,7 @@ sidebar_position: 0
sidebar_class_name: hidden
---
-# How-to Guides
+# How-to guides
Here you'll find short answers to “How do I…?” types of questions.
These how-to guides don't cover topics in depth; you'll find that material in the [Tutorials](/docs/tutorials) and the [API Reference](https://api.python.langchain.com/en/latest/).
@@ -38,7 +38,7 @@ LangChain Expression Language is a way to create arbitrary custom chains.
These are the core building blocks you can use when building applications.
-### Prompt Templates
+### Prompt templates
Prompt Templates are responsible for formatting user input into a format that can be passed to a language model.
@@ -47,7 +47,7 @@ Prompt Templates are responsible for formatting user input into a format that ca
- [How to partially format prompt templates](/docs/how_to/prompts_partial)
- [How to compose prompts together](/docs/how_to/prompts_composition)
-### Example Selectors
+### Example selectors
Example Selectors are responsible for selecting the correct few-shot examples to pass to the prompt.
@@ -57,7 +57,7 @@ Example Selectors are responsible for selecting the correct few shot examples to
- [How to select examples by semantic ngram overlap](/docs/how_to/example_selectors_ngram)
- [How to select examples by maximal marginal relevance](/docs/how_to/example_selectors_mmr)
-### Chat Models
+### Chat models
Chat Models are newer forms of language models that take messages in and output a message.
@@ -80,7 +80,7 @@ What LangChain calls LLMs are older forms of language models that take a string
- [How to track token usage](/docs/how_to/llm_token_usage_tracking)
- [How to work with local LLMs](/docs/how_to/local_llms)
-### Output Parsers
+### Output parsers
Output Parsers are responsible for taking the output of an LLM and parsing it into a more structured format.
@@ -92,7 +92,7 @@ Output Parsers are responsible for taking the output of an LLM and parsing into
- [How to try to fix errors in output parsing](/docs/how_to/output_parser_fixing)
- [How to write a custom output parser class](/docs/how_to/output_parser_custom)
-### Document Loaders
+### Document loaders
Document Loaders are responsible for loading documents from a variety of sources.
@@ -105,7 +105,7 @@ Document Loaders are responsible for loading documents from a variety of sources
- [How to load PDF files](/docs/how_to/document_loader_pdf)
- [How to write a custom document loader](/docs/how_to/document_loader_custom)
-### Text Splitters
+### Text splitters
Text Splitters take a document and split it into chunks that can be used for retrieval.
@@ -119,16 +119,16 @@ Text Splitters take a document and split into chunks that can be used for retrie
- [How to split text into semantic chunks](/docs/how_to/semantic-chunker)
- [How to split by tokens](/docs/how_to/split_by_token)
-### Embedding Models
+### Embedding models
Embedding Models take a piece of text and create a numerical representation of it.
- [How to embed text data](/docs/how_to/embed_text)
- [How to cache embedding results](/docs/how_to/caching_embeddings)
-### Vector Stores
+### Vector stores
-Vector Stores are databases that can efficiently store and retrieve embeddings.
+Vector stores are databases that can efficiently store and retrieve embeddings.
- [How to use a vector store to retrieve data](/docs/how_to/vectorstores)
@@ -193,9 +193,7 @@ All of LangChain components can easily be extended to support your own versions.
- [How to define a custom tool](/docs/how_to/custom_tools)
-## Use Cases
+## Use cases
These guides cover use-case specific details.
@@ -226,7 +224,7 @@ Chatbots involve using an LLM to have a conversation.
- [How to do retrieval](/docs/how_to/chatbots_retrieval)
- [How to use tools](/docs/how_to/chatbots_tools)
-### Query Analysis
+### Query analysis
Query Analysis is the task of using an LLM to generate a query to send to a retriever.
@@ -246,7 +244,7 @@ You can use LLMs to do question answering over tabular data.
- [How to deal with large databases](/docs/how_to/sql_large_db)
- [How to deal with CSV files](/docs/how_to/sql_csv)
-### Q&A over Graph Databases
+### Q&A over graph databases
You can use an LLM to do question answering over graph databases.

@@ -54,13 +54,13 @@ These are the best ones to get started with:
Explore the full list of tutorials [here](/docs/tutorials).
-## [How-To Guides](/docs/how_to)
+## [How-to guides](/docs/how_to)
[Here](/docs/how_to) you'll find short answers to “How do I…?” types of questions.
These how-to guides don't cover topics in depth; you'll find that material in the [Tutorials](/docs/tutorials) and the [API Reference](https://api.python.langchain.com/en/latest/).
However, these guides will help you quickly accomplish common tasks.
-## [Conceptual Guide](/docs/concepts)
+## [Conceptual guide](/docs/concepts)
Introductions to all the key parts of LangChain you'll need to know! [Here](/docs/concepts) you'll find high-level explanations of all LangChain concepts.

@@ -2,7 +2,7 @@
LangChain has a large ecosystem of integrations with various external resources like local and remote file systems, APIs and databases. These integrations allow developers to create versatile applications that combine the power of LLMs with the ability to access, interact with and manipulate external resources.
-## Best Practices
+## Best practices
When building such applications, developers should remember to follow good security practices:
@@ -25,6 +25,6 @@ If you're building applications that access external resources like file systems
or databases, consider speaking with your company's security team to determine how to best
design and secure your applications.
-## Reporting a Vulnerability
+## Reporting a vulnerability
Please report security vulnerabilities by email to security@langchain.dev. This will ensure the issue is promptly triaged and acted upon as needed.

@@ -3,7 +3,7 @@ sidebar_position: 0
sidebar_label: Overview
---
-# LangChain Over Time
+# LangChain over time
## What's new in LangChain?
@@ -45,7 +45,7 @@ This document serves to outline at a high level what has changed and why.
- `langchain` was split into the following component packages: `langchain-core`, `langchain`, `langchain-community`, `langchain-[partner]` to improve the usability of langchain code in production settings. You can read more about it on our [blog](https://blog.langchain.dev/langchain-v0-1-0/).
-### Ecosystem Organization
+### Ecosystem organization
By the release of 0.1.0, LangChain had grown to a large ecosystem with many integrations and a large community.

@@ -3,7 +3,7 @@ sidebar_position: 3
sidebar_label: Packages
---
-# 📕 Package Versioning
+# 📕 Package versioning
As of now, LangChain has an ad hoc release process: releases are cut with high frequency by
a maintainer and published to [PyPI](https://pypi.org/).

@@ -3,7 +3,7 @@ sidebar_position: 2
sidebar_label: Release Policy
---
-# LangChain Releases
+# LangChain releases
The LangChain ecosystem is composed of different component packages (e.g., `langchain-core`, `langchain`, `langchain-community`, `langgraph`, `langserve`, partner packages etc.)
@@ -32,13 +32,13 @@ From time to time, we will version packages as **release candidates**. These are
Other packages in the ecosystem (including user packages) can follow a different versioning scheme, but are generally expected to pin to specific minor versions of `langchain` and `langchain-core`.
-## Release Cadence
+## Release cadence
We expect to space out **minor** releases (e.g., from 0.2.0 to 0.3.0) of `langchain` and `langchain-core` by at least 2-3 months, as such releases may contain breaking changes.
Patch versions are released frequently as they contain bug fixes and new features.
-## API Stability
+## API stability
The development of LLM applications is a rapidly evolving field, and we are constantly learning from our users and the community. As such, we expect that the APIs in `langchain` and `langchain-core` will continue to evolve to better serve the needs of our users.
@@ -49,14 +49,14 @@ Even though both `langchain` and `langchain-core` are currently in a pre-1.0 sta
We will generally try to avoid making unnecessary changes, and will provide a deprecation policy for features that are being removed.
-### Stability of Other Packages
+### Stability of other packages
The stability of other packages in the LangChain ecosystem may vary:
- `langchain-community` is a community maintained package that contains 3rd party integrations. While we do our best to review and test changes in `langchain-community`, `langchain-community` is expected to experience more breaking changes than `langchain` and `langchain-core` as it contains many community contributions.
- Partner packages may follow different stability and versioning policies, and users should refer to the documentation of those packages for more information; however, in general these packages are expected to be stable.
-### What is a "API Stability"?
+### What is "API stability"?
API stability means:
@@ -72,7 +72,7 @@ Certain APIs are explicitly marked as “internal” in a couple of ways:
- Functions, methods, and other objects prefixed by a leading underscore (**`_`**). This is the standard Python convention of indicating that something is private; if any method starts with a single **`_`**, it's an internal API.
- **Exception:** Certain methods are prefixed with `_` , but do not contain an implementation. These methods are *meant* to be overridden by sub-classes that provide the implementation. Such methods are generally part of the **Public API** of LangChain.
-## Deprecation Policy
+## Deprecation policy
We will generally avoid deprecating features until a better alternative is available.

@@ -41,7 +41,7 @@ Here is an example of the import changes that the migration script can help appl
| langchain | langchain-text-splitters | from langchain.text_splitter import RecursiveCharacterTextSplitter | from langchain_text_splitters import RecursiveCharacterTextSplitter |
-#### Deprecation Timeline
+#### Deprecation timeline
We have two main types of deprecations:
@@ -102,7 +102,7 @@ langchain-cli migrate [path to code] --diff # Preview
langchain-cli migrate [path to code] # Apply
```
-#### Other Options
+#### Other options
```bash
# See help menu
@@ -114,11 +114,11 @@ langchain-cli migrate --diff [path to code]
langchain-cli migrate --disable langchain_to_core --include-ipynb [path to code]
```
-## Deprecations and Breaking Changes
+## Deprecations and breaking changes
This code contains a list of deprecations and removals in the `langchain` and `langchain-core` packages.
-### Breaking Changes in 0.2.0
+### Breaking changes in 0.2.0
As of release 0.2.0, `langchain` is required to be integration-agnostic. This means that code in `langchain` should not, by default, instantiate any specific chat models, LLMs, embedding models, vector stores, etc.; instead, the user will be required to specify those explicitly.

@@ -42,7 +42,7 @@ module.exports = {
{
type: "category",
link: {type: 'doc', id: 'how_to/index'},
-label: "How-To Guides",
+label: "How-to guides",
collapsible: false,
items: [{
type: 'autogenerated',
