- Add support for local build and linkchecking of docs
- Add GitHub Action to automatically check links prior to
publication
- Minor reformat of Contributing readme
- Fix existing broken links
Co-authored-by: Hunter Gerlach <hunter@huntergerlach.com>
Co-authored-by: Hunter Gerlach <HunterGerlach@users.noreply.github.com>
@ -55,9 +55,7 @@ even patch releases may contain [non-backwards-compatible changes](https://semve
If your contribution has made its way into a release, we will want to give you credit on Twitter (only if you want though)!
If you have a Twitter account you would like us to mention, please let us know in the PR or in another manner.
## 🤖Developer Setup
### 🚀Quick Start
## 🚀Quick Start
This project uses [Poetry](https://python-poetry.org/) as a dependency manager. Check out Poetry's [documentation on how to install it](https://python-poetry.org/docs/#installation) on your system before proceeding.
@ -77,9 +75,9 @@ This will install all requirements for running the package, examples, linting, f
Now, you should be able to run the common tasks in the following section.
### ✅Common Tasks
## ✅Common Tasks
#### Code Formatting
### Code Formatting
Formatting for this project is done via a combination of [Black](https://black.readthedocs.io/en/stable/) and [isort](https://pycqa.github.io/isort/).
@ -89,7 +87,7 @@ To run formatting for this project:
make format
```
#### Linting
### Linting
Linting for this project is done via a combination of [Black](https://black.readthedocs.io/en/stable/), [isort](https://pycqa.github.io/isort/), [flake8](https://flake8.pycqa.org/en/latest/), and [mypy](http://mypy-lang.org/).
@ -101,7 +99,7 @@ make lint
We recognize linting can be annoying - if you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
#### Coverage
### Coverage
Code coverage (i.e. the amount of code that is covered by unit tests) helps identify areas of the code that are potentially more or less brittle.
@ -111,7 +109,7 @@ To get a report of current coverage, run the following:
make coverage
```
#### Testing
### Testing
Unit tests cover modular logic that does not require calls to outside APIs.
@ -133,7 +131,7 @@ make integration_tests
If you add support for a new external API, please add a new integration test.
#### Adding a Jupyter Notebook
### Adding a Jupyter Notebook
If you are adding a Jupyter notebook example, you'll want to install the optional `dev` dependencies.
@ -151,10 +149,32 @@ poetry run jupyter notebook
When you run `poetry install`, the `langchain` package is installed as editable in the virtualenv, so your new logic can be imported into the notebook.
#### Contribute Documentation
## Documentation
### Contribute Documentation
Docs are largely autogenerated by [Sphinx](https://www.sphinx-doc.org/en/master/) from the code.
For that reason, we ask that you add good documentation to all classes and methods.
Similar to linting, we recognize documentation can be annoying. If you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
### Build Documentation Locally
Before building the documentation, it is always a good idea to clean the build directory:
```bash
make docs_clean
```
Next, you can run the linkchecker to make sure all links are valid:
```bash
make docs_linkcheck
```
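The `docs_linkcheck` target delegates to Sphinx's `linkcheck` builder. Conceptually, a link checker just collects every link target and probes each one; the sketch below is illustrative only (the function names and the offline `is_alive` probe are assumptions, not Sphinx's or LangChain's actual implementation):

```python
import re

def find_links(markdown: str) -> list[str]:
    """Extract the target of every inline Markdown link, e.g. [text](url)."""
    return re.findall(r"\[[^\]]*\]\(([^)\s]+)\)", markdown)

def broken_links(markdown: str, is_alive) -> list[str]:
    """Return every link target that the is_alive probe reports as dead."""
    return [url for url in find_links(markdown) if not is_alive(url)]

# Offline example: probe against a known set of pages instead of the network.
known = {"https://python-poetry.org/", "./modules/prompts.html"}
doc = "See [Poetry](https://python-poetry.org/) and [Prompts](./modules/broken.html)."
print(broken_links(doc, lambda url: url in known))  # -> ['./modules/broken.html']
```

The real builder also follows redirects, rate-limits requests, and resolves relative links against the build output, which is why running `make docs_linkcheck` locally is preferable to hand-rolling a checker.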
Finally, you can build the documentation as outlined below:
- [Language Model Cascades](https://arxiv.org/abs/2207.10342)
- [ICE Primer Book](https://primer.ought.org/)
@ -52,25 +57,28 @@ Resources:
## Memetic Proxy
Encouraging the LLM to respond in a certain way by framing the discussion in a context that the model knows of and that will result in that type of response. For example, as a conversation between a student and a teacher.
Resources:
- [Paper](https://arxiv.org/pdf/2102.07350.pdf)
## Self Consistency
A decoding strategy that samples a diverse set of reasoning paths and then selects the most consistent answer.
Is most effective when combined with Chain-of-thought prompting.
Resources:
- [Paper](https://arxiv.org/pdf/2203.11171.pdf)
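As a sketch, self-consistency is sampling plus a majority vote over final answers; here `sample_answer` is a stand-in for calling an LLM with temperature > 0 (names are illustrative, not a LangChain API):

```python
from collections import Counter

def self_consistency(sample_answer, n_paths: int = 5):
    """Sample several reasoning paths and keep the most common final answer."""
    answers = [sample_answer() for _ in range(n_paths)]
    answer, _count = Counter(answers).most_common(1)[0]
    return answer

# Stand-in for sampling an LLM: three reasoning paths agree, two do not.
paths = iter(["18", "18", "21", "18", "17"])
print(self_consistency(lambda: next(paths)))  # -> 18
```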
## Inception
Also called “First Person Instruction”.
Encouraging the model to think a certain way by including the start of the model’s response in the prompt.
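A minimal sketch, assuming a simple question/answer prompt layout; the only point is that the prompt ends with the opening words of the desired answer, so the model continues from there:

```python
def inception_prompt(question: str, response_start: str) -> str:
    """Build a prompt that already contains the start of the model's answer."""
    return f"Question: {question}\nAnswer: {response_start}"

print(inception_prompt(
    "What weighs more, a pound of feathers or a pound of bricks?",
    "Let's think step by step.",
))
```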
Check out the guide below for a walkthrough of how to get started using LangChain to create a Language Model application.
- `Getting Started Documentation <getting_started/getting_started.html>`_
- `Getting Started Documentation <./getting_started/getting_started.html>`_
.. toctree::
:maxdepth: 1
@ -32,17 +32,17 @@ For each module we provide some examples to get started, how-to guides, referenc
These modules are, in increasing order of complexity:
- `Prompts <modules/prompts.html>`_: This includes prompt management, prompt optimization, and prompt serialization.
- `Prompts <./modules/prompts.html>`_: This includes prompt management, prompt optimization, and prompt serialization.
- `LLMs <modules/llms.html>`_: This includes a generic interface for all LLMs, and common utilities for working with LLMs.
- `LLMs <./modules/llms.html>`_: This includes a generic interface for all LLMs, and common utilities for working with LLMs.
- `Utils <modules/utils.html>`_: Language models are often more powerful when interacting with other sources of knowledge or computation. This can include Python REPLs, embeddings, search engines, and more. LangChain provides a large collection of common utils to use in your application.
- `Utils <./modules/utils.html>`_: Language models are often more powerful when interacting with other sources of knowledge or computation. This can include Python REPLs, embeddings, search engines, and more. LangChain provides a large collection of common utils to use in your application.
- `Chains <modules/chains.html>`_: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.
- `Chains <./modules/chains.html>`_: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.
- `Agents <modules/agents.html>`_: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.
- `Agents <./modules/agents.html>`_: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.
- `Memory <modules/memory.html>`_: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.
- `Memory <./modules/memory.html>`_: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.
.. toctree::
@ -51,33 +51,33 @@ These modules are, in increasing order of complexity:
:name: modules
:hidden:
modules/prompts.md
modules/llms.md
modules/utils.md
modules/chains.md
modules/agents.md
modules/memory.md
./modules/prompts.md
./modules/llms.md
./modules/utils.md
./modules/chains.md
./modules/agents.md
./modules/memory.md
Use Cases
----------
The above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.
- `Agents <use_cases/agents.html>`_: Agents are systems that use a language model to interact with other tools. These can be used to do more grounded question/answering, interact with APIs, or even take actions.
- `Agents <./use_cases/agents.html>`_: Agents are systems that use a language model to interact with other tools. These can be used to do more grounded question/answering, interact with APIs, or even take actions.
- `Chatbots <use_cases/chatbots.html>`_: Since language models are good at producing text, that makes them ideal for creating chatbots.
- `Chatbots <./use_cases/chatbots.html>`_: Since language models are good at producing text, that makes them ideal for creating chatbots.
- `Data Augmented Generation <use_cases/combine_docs.html>`_: Data Augmented Generation involves specific types of chains that first interact with an external datasource to fetch data to use in the generation step. Examples of this include summarization of long pieces of text and question/answering over specific data sources.
- `Data Augmented Generation <./use_cases/combine_docs.html>`_: Data Augmented Generation involves specific types of chains that first interact with an external datasource to fetch data to use in the generation step. Examples of this include summarization of long pieces of text and question/answering over specific data sources.
- `Question Answering <use_cases/question_answering.html>`_: Answering questions over specific documents, only utilizing the information in those documents to construct an answer. A type of Data Augmented Generation.
- `Question Answering <./use_cases/question_answering.html>`_: Answering questions over specific documents, only utilizing the information in those documents to construct an answer. A type of Data Augmented Generation.
- `Summarization <use_cases/summarization.html>`_: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.
- `Summarization <./use_cases/summarization.html>`_: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.
- `Evaluation <use_cases/evaluation.html>`_: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.
- `Evaluation <./use_cases/evaluation.html>`_: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.
- `Generate similar examples <use_cases/generate_examples.html>`_: Generating similar examples to a given input. This is a common use case for many applications, and LangChain provides some prompts/chains for assisting in this.
- `Generate similar examples <./use_cases/generate_examples.html>`_: Generating similar examples to a given input. This is a common use case for many applications, and LangChain provides some prompts/chains for assisting in this.
- `Compare models <model_laboratory.html>`_: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.
- `Compare models <./use_cases/model_laboratory.html>`_: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.
@ -87,14 +87,14 @@ The above modules can be used in a variety of ways. LangChain also provides guid
:name: use_cases
:hidden:
use_cases/agents.md
use_cases/chatbots.md
use_cases/generate_examples.ipynb
use_cases/combine_docs.md
use_cases/question_answering.md
use_cases/summarization.md
use_cases/evaluation.rst
use_cases/model_laboratory.ipynb
./use_cases/agents.md
./use_cases/chatbots.md
./use_cases/generate_examples.ipynb
./use_cases/combine_docs.md
./use_cases/question_answering.md
./use_cases/summarization.md
./use_cases/evaluation.rst
./use_cases/model_laboratory.ipynb
Reference Docs
@ -103,16 +103,16 @@ Reference Docs
All of LangChain's reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.
- `Reference Documentation <reference.html>`_
- `Reference Documentation <./reference.html>`_
.. toctree::
:maxdepth: 1
:caption: Reference
:name: reference
:hidden:
reference/installation.md
reference/integrations.md
reference.rst
./reference/installation.md
./reference/integrations.md
./reference.rst
LangChain Ecosystem
@ -120,7 +120,7 @@ LangChain Ecosystem
Guides for how other companies/products can be used with LangChain
- `LangChain Ecosystem <ecosystem.html>`_
- `LangChain Ecosystem <./ecosystem.html>`_
.. toctree::
:maxdepth: 1
@ -129,7 +129,7 @@ Guides for how other companies/products can be used with LangChain
:name: ecosystem
:hidden:
ecosystem.rst
./ecosystem.rst
Additional Resources
@ -137,9 +137,9 @@ Additional Resources
Additional collection of resources we think may be useful as you develop your application!
- `Glossary <glossary.html>`_: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!
- `Glossary <./glossary.html>`_: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!
- `Gallery <gallery.html>`_: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.
- `Gallery <./gallery.html>`_: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.
- `Discord <https://discord.gg/6adMQxSpJS>`_: Join us on our Discord to discuss all things LangChain!
@ -150,5 +150,5 @@ Additional collection of resources we think may be useful as you develop your ap
@ -8,13 +8,13 @@ Depending on the user input, the agent can then decide which, if any, of these t
The following sections of documentation are provided:
- `Getting Started <agents/getting_started.html>`_: A notebook to help you get started working with agents as quickly as possible.
- `Getting Started <./agents/getting_started.html>`_: A notebook to help you get started working with agents as quickly as possible.
- `Key Concepts <agents/key_concepts.html>`_: A conceptual guide going over the various concepts related to agents.
- `Key Concepts <./agents/key_concepts.html>`_: A conceptual guide going over the various concepts related to agents.
- `How-To Guides <agents/how_to_guides.html>`_: A collection of how-to guides. These highlight how to integrate various types of tools, how to work with different types of agent, and how to customize agents.
- `How-To Guides <./agents/how_to_guides.html>`_: A collection of how-to guides. These highlight how to integrate various types of tools, how to work with different types of agent, and how to customize agents.
- `Reference </reference/modules/agents.html>`_: API reference documentation for all Agent classes.
- `Reference <../reference/modules/agents.html>`_: API reference documentation for all Agent classes.
@ -24,7 +24,7 @@ The following sections of documentation are provided:
The first category of how-to guides here cover specific parts of working with agents.
`Custom Tools <examples/custom_tools.html>`_: How to create custom tools that an agent can use.
`Custom Tools <./examples/custom_tools.html>`_: How to create custom tools that an agent can use.
`Intermediate Steps <examples/intermediate_steps.html>`_: How to access and use intermediate steps to get more visibility into the internals of an agent.
`Intermediate Steps <./examples/intermediate_steps.html>`_: How to access and use intermediate steps to get more visibility into the internals of an agent.
`Custom Agent <examples/custom_agent.html>`_: How to create a custom agent (specifically, a custom LLM + prompt to drive that agent).
`Custom Agent <./examples/custom_agent.html>`_: How to create a custom agent (specifically, a custom LLM + prompt to drive that agent).
`Multi Input Tools <examples/multi_input_tool.html>`_: How to use a tool that requires multiple inputs with an agent.
`Multi Input Tools <./examples/multi_input_tool.html>`_: How to use a tool that requires multiple inputs with an agent.
`Search Tools <examples/search_tools.html>`_: How to use the different type of search tools that LangChain supports.
`Search Tools <./examples/search_tools.html>`_: How to use the different type of search tools that LangChain supports.
`Max Iterations <examples/max_iterations.html>`_: How to restrict an agent to a certain number of iterations.
`Max Iterations <./examples/max_iterations.html>`_: How to restrict an agent to a certain number of iterations.
The next set of examples are all end-to-end agents for specific applications.
In all examples there is an Agent with a particular set of tools.
- Tools: A tool can be anything that takes in a string and returns a string. This means that you can use both the primitives AND the chains found in `this <chains.html>`_ documentation. LangChain also provides a list of easily loadable tools. For detailed information on those, please see `this documentation <../explanation/tools.html>`_
- Agents: An agent uses an LLMChain to determine which tools to use. For a list of all available agent types, see `here <../explanation/agents.html>`_.
- Tools: A tool can be anything that takes in a string and returns a string. This means that you can use both the primitives AND the chains found in `this <../chains.html>`_ documentation. LangChain also provides a list of easily loadable tools. For detailed information on those, please see `this documentation <./tools.html>`_
- Agents: An agent uses an LLMChain to determine which tools to use. For a list of all available agent types, see `here <./agents.html>`_.
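A minimal sketch of that contract, with tools as plain `str -> str` callables; the keyword router below is a toy stand-in for the LLMChain a real agent would use to pick a tool (all names here are illustrative):

```python
# Tools are plain str -> str callables; the registry maps tool names to them.
tools = {
    "calculator": lambda q: str(eval(q, {"__builtins__": {}})),  # toy math tool
    "echo": lambda q: q,
}

def run_agent(query: str) -> str:
    # A real agent asks an LLMChain which tool to use; a keyword rule stands in here.
    name = "calculator" if any(ch.isdigit() for ch in query) else "echo"
    return tools[name](query)

print(run_agent("2 + 3"))   # -> 5
print(run_agent("hello"))   # -> hello
```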
**MRKL**
@ -28,21 +28,21 @@ In all examples there is an Agent with a particular set of tools.
- **Agent used**: `zero-shot-react-description`
- `Paper <https://arxiv.org/pdf/2205.00445.pdf>`_
- **Note**: This is the most general purpose example, so if you are looking to use an agent with arbitrary tools, please start here.
@ -5,15 +5,15 @@ A chain is made up of links, which can be either primitives or other chains.
Primitives can be either `prompts <../prompts.html>`_, `llms <../llms.html>`_, `utils <../utils.html>`_, or other chains.
The examples here are all end-to-end chains for working with documents.
`Question Answering <combine_docs_examples/question_answering.html>`_: A walkthrough of how to use LangChain for question answering over specific documents.
`Question Answering <./combine_docs_examples/question_answering.html>`_: A walkthrough of how to use LangChain for question answering over specific documents.
`Question Answering with Sources <combine_docs_examples/qa_with_sources.html>`_: A walkthrough of how to use LangChain for question answering (with sources) over specific documents.
`Question Answering with Sources <./combine_docs_examples/qa_with_sources.html>`_: A walkthrough of how to use LangChain for question answering (with sources) over specific documents.
`Summarization <combine_docs_examples/summarize.html>`_: A walkthrough of how to use LangChain for summarization over specific documents.
`Summarization <./combine_docs_examples/summarize.html>`_: A walkthrough of how to use LangChain for summarization over specific documents.
`Vector DB Question Answering <combine_docs_examples/vector_db_qa.html>`_: A walkthrough of how to use LangChain for question answering over a vector database.
`Vector DB Question Answering <./combine_docs_examples/vector_db_qa.html>`_: A walkthrough of how to use LangChain for question answering over a vector database.
`Vector DB Question Answering with Sources <combine_docs_examples/vector_db_qa_with_sources.html>`_: A walkthrough of how to use LangChain for question answering (with sources) over a vector database.
`Vector DB Question Answering with Sources <./combine_docs_examples/vector_db_qa_with_sources.html>`_: A walkthrough of how to use LangChain for question answering (with sources) over a vector database.
.. toctree::
@ -23,4 +23,4 @@ The examples here are all end-to-end chains for working with documents.
@ -9,19 +9,19 @@ The examples here are all generic end-to-end chains that are meant to be used to
- **Links Used**: PromptTemplate, LLM
- **Notes**: This chain is the simplest chain, and is widely used by almost every other chain. This chain takes arbitrary user input, creates a prompt with it from the PromptTemplate, passes that to the LLM, and then returns the output of the LLM as the final output.
- `Example Notebook <generic/llm_chain.html>`_
- `Example Notebook <./generic/llm_chain.html>`_
**Transformation Chain**
- **Links Used**: TransformationChain
- **Notes**: This notebook shows how to use the Transformation Chain, which takes an arbitrary python function and applies it to inputs/outputs of other chains.
@ -6,15 +6,15 @@ Primitives can be either `prompts <../prompts.html>`_, `llms <../llms.html>`_, `
The examples here are all end-to-end chains for specific applications.
They are broken up into three categories:
1. `Generic Chains <generic_how_to.html>`_: Generic chains, that are meant to help build other chains rather than serve a particular purpose.
2. `CombineDocuments Chains <combine_docs_how_to.html>`_: Chains aimed at making it easy to work with documents (question answering, summarization, etc).
3. `Utility Chains <utility_how_to.html>`_: Chains consisting of an LLMChain interacting with a specific util.
1. `Generic Chains <./generic_how_to.html>`_: Generic chains, that are meant to help build other chains rather than serve a particular purpose.
2. `CombineDocuments Chains <./combine_docs_how_to.html>`_: Chains aimed at making it easy to work with documents (question answering, summarization, etc).
3. `Utility Chains <./utility_how_to.html>`_: Chains consisting of an LLMChain interacting with a specific util.
@ -9,44 +9,44 @@ The examples here are all end-to-end chains for specific applications, focused o
- **Links Used**: Python REPL, LLMChain
- **Notes**: This chain takes user input (a math question), uses an LLMChain to convert it to a Python code snippet to run in the Python REPL, and then returns that as the result.
- `Example Notebook <examples/llm_math.html>`_
- `Example Notebook <./examples/llm_math.html>`_
**PAL**
- **Links Used**: Python REPL, LLMChain
- **Notes**: This chain takes user input (a reasoning question), uses an LLMChain to convert it to a Python code snippet to run in the Python REPL, and then returns that as the result.
- `Paper <https://arxiv.org/abs/2211.10435>`_
- `Example Notebook <examples/pal.html>`_
- `Example Notebook <./examples/pal.html>`_
**SQLDatabase Chain**
- **Links Used**: SQLDatabase, LLMChain
- **Notes**: This chain takes user input (a question), uses a first LLM chain to construct a SQL query to run against the SQL database, and then uses another LLMChain to take the results of that query and use it to answer the original question.
- `Example Notebook <examples/sqlite.html>`_
- `Example Notebook <./examples/sqlite.html>`_
**LLMBash Chain**
- **Links Used**: BashProcess, LLMChain
- **Notes**: This chain takes user input (a question), uses an LLM chain to convert it to a bash command to run in the terminal, and then returns that as the result.
- `Example Notebook <examples/llm_bash.html>`_
- `Example Notebook <./examples/llm_bash.html>`_
**LLMChecker Chain**
- **Links Used**: LLMChain
- **Notes**: This chain takes user input (a question), uses an LLM chain to answer that question, and then uses other LLMChains to self-check that answer.
- **Notes**: This chain takes a URL and other inputs, uses Requests to get the data at that URL, and then passes that along with the other inputs into an LLMChain to generate a response. The example included shows how to ask a question to Google - it first constructs a Google URL, then fetches the data there, then passes that data + the original question into an LLMChain to get an answer.
@ -7,13 +7,13 @@ you can interact with a variety of LLMs.
The following sections of documentation are provided:
- `Getting Started <llms/getting_started.html>`_: An overview of all the functionality the LangChain LLM class provides.
- `Getting Started <./llms/getting_started.html>`_: An overview of all the functionality the LangChain LLM class provides.
- `Key Concepts <llms/key_concepts.html>`_: A conceptual guide going over the various concepts related to LLMs.
- `Key Concepts <./llms/key_concepts.html>`_: A conceptual guide going over the various concepts related to LLMs.
- `How-To Guides <llms/how_to_guides.html>`_: A collection of how-to guides. These highlight how to accomplish various objectives with our LLM class, as well as how to integrate with various LLM providers.
- `How-To Guides <./llms/how_to_guides.html>`_: A collection of how-to guides. These highlight how to accomplish various objectives with our LLM class, as well as how to integrate with various LLM providers.
- `Reference </reference/modules/llms.html>`_: API reference documentation for all LLM classes.
- `Reference <../reference/modules/llms.html>`_: API reference documentation for all LLM classes.
.. toctree::
@ -21,7 +21,7 @@ The following sections of documentation are provided:
The examples here all address certain "how-to" guides for working with LLMs.
`LLM Serialization <examples/llm_serialization.html>`_: A walkthrough of how to serialize LLMs to and from disk.
`LLM Serialization <./examples/llm_serialization.html>`_: A walkthrough of how to serialize LLMs to and from disk.
`LLM Caching <examples/llm_caching.html>`_: Covers different types of caches, and how to use a cache to save results of LLM calls.
`LLM Caching <./examples/llm_caching.html>`_: Covers different types of caches, and how to use a cache to save results of LLM calls.
`Custom LLM <examples/custom_llm.html>`_: How to create and use a custom LLM class, in case you have an LLM not from one of the standard providers (including one that you host yourself).
`Custom LLM <./examples/custom_llm.html>`_: How to create and use a custom LLM class, in case you have an LLM not from one of the standard providers (including one that you host yourself).
.. toctree::
@ -17,4 +17,4 @@ The examples here all address certain "how-to" guides for working with LLMs.
@ -9,11 +9,11 @@ The concept of “Memory” exists to do exactly that.
The following sections of documentation are provided:
- `Getting Started <memory/getting_started.html>`_: An overview of how to get started with different types of memory.
- `Getting Started <./memory/getting_started.html>`_: An overview of how to get started with different types of memory.
- `Key Concepts <memory/key_concepts.html>`_: A conceptual guide going over the various concepts related to memory.
- `Key Concepts <./memory/key_concepts.html>`_: A conceptual guide going over the various concepts related to memory.
- `How-To Guides <memory/how_to_guides.html>`_: A collection of how-to guides. These highlight how to work with different types of memory, as well as how to customize memory.
- `How-To Guides <./memory/how_to_guides.html>`_: A collection of how-to guides. These highlight how to work with different types of memory, as well as how to customize memory.
@ -22,6 +22,6 @@ The following sections of documentation are provided:
The examples here all highlight how to use memory in different ways.
`Adding Memory <examples/adding_memory.html>`_: How to add a memory component to any single input chain.
`Adding Memory <./examples/adding_memory.html>`_: How to add a memory component to any single input chain.
`ChatGPT Clone <examples/chatgpt_clone.html>`_: How to recreate ChatGPT with LangChain prompting + memory components.
`ChatGPT Clone <./examples/chatgpt_clone.html>`_: How to recreate ChatGPT with LangChain prompting + memory components.
`Adding Memory to Multi-Input Chain <examples/adding_memory_chain_multiple_inputs.html>`_: How to add a memory component to any multiple input chain.
`Adding Memory to Multi-Input Chain <./examples/adding_memory_chain_multiple_inputs.html>`_: How to add a memory component to any multiple input chain.
`Conversational Memory Customization <examples/conversational_customization.html>`_: How to customize existing conversation memory components.
`Conversational Memory Customization <./examples/conversational_customization.html>`_: How to customize existing conversation memory components.
`Custom Memory <examples/custom_memory.html>`_: How to write your own custom memory component.
`Custom Memory <./examples/custom_memory.html>`_: How to write your own custom memory component.
`Adding Memory to Agents <examples/agent_with_memory.html>`_: How to add a memory component to any agent.
`Adding Memory to Agents <./examples/agent_with_memory.html>`_: How to add a memory component to any agent.
.. toctree::
@ -21,4 +21,4 @@ The examples here all highlight how to use memory in different ways.
@ -7,13 +7,13 @@ LangChain provides several classes and functions to make constructing and workin
The following sections of documentation are provided:
- `Getting Started <prompts/getting_started.html>`_: An overview of all the functionality LangChain provides for working with and constructing prompts.
- `Getting Started <./prompts/getting_started.html>`_: An overview of all the functionality LangChain provides for working with and constructing prompts.
- `Key Concepts <prompts/key_concepts.html>`_: A conceptual guide going over the various concepts related to prompts.
- `Key Concepts <./prompts/key_concepts.html>`_: A conceptual guide going over the various concepts related to prompts.
- `How-To Guides <prompts/how_to_guides.html>`_: A collection of how-to guides. These highlight how to accomplish various objectives with our prompt class.
- `How-To Guides <./prompts/how_to_guides.html>`_: A collection of how-to guides. These highlight how to accomplish various objectives with our prompt class.
- `Reference </reference/prompts.html>`_: API reference documentation for all prompt classes.
- `Reference <../reference/prompts.html>`_: API reference documentation for all prompt classes.
@ -24,7 +24,7 @@ The following sections of documentation are provided:
If you're new to the library, you may want to start with the `Quickstart <getting_started.html>`_.
If you're new to the library, you may want to start with the `Quickstart <./getting_started.html>`_.
The user guide here shows more advanced workflows and how to use the library in different ways.
-`Custom Prompt Template <examples/custom_prompt_template.html>`_: How to create and use a custom PromptTemplate, the logic that decides how input variables get formatted into a prompt.
+`Custom Prompt Template <./examples/custom_prompt_template.html>`_: How to create and use a custom PromptTemplate, the logic that decides how input variables get formatted into a prompt.
-`Custom Example Selector <examples/custom_example_selector.html>`_: How to create and use a custom ExampleSelector (the class responsible for choosing which examples to use in a prompt).
+`Custom Example Selector <./examples/custom_example_selector.html>`_: How to create and use a custom ExampleSelector (the class responsible for choosing which examples to use in a prompt).
-`Few Shot Prompt Templates <examples/few_shot_examples.html>`_: How to include examples in the prompt.
+`Few Shot Prompt Templates <./examples/few_shot_examples.html>`_: How to include examples in the prompt.
-`Prompt Serialization <examples/prompt_serialization.html>`_: A walkthrough of how to serialize prompts to and from disk.
+`Prompt Serialization <./examples/prompt_serialization.html>`_: A walkthrough of how to serialize prompts to and from disk.
-`Few Shot Prompt Examples <examples/few_shot_examples.html>`_: Examples of Few Shot Prompt Templates.
+`Few Shot Prompt Examples <./examples/few_shot_examples.html>`_: Examples of Few Shot Prompt Templates.
@@ -27,8 +27,8 @@ The user guide here shows more advanced workflows and how to use the library in
A prompt is the input to a language model. It is a string of text that is used to generate a response from the language model.
## Prompt Templates
`PromptTemplates` are a way to create prompts in a reproducible way. They contain a template string, and a set of input variables. The template string can be formatted with the input variables to generate a prompt. The template string often contains instructions to the language model, a few shot examples, and a question to the language model.
@@ -26,7 +25,6 @@ Capital:
"""
```
### Input Variables
Input variables are the variables that are used to fill in the template string. In the example above, the input variable is `country`.
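The template-plus-input-variables idea described above can be sketched in a few lines. This is an illustrative stand-in (the class and method names here are hypothetical), not LangChain's actual `PromptTemplate` implementation:

```python
# Minimal sketch of a prompt template: a template string plus the names of
# the input variables that fill it. Illustrative only.

class SimplePromptTemplate:
    """Formats a template string with named input variables."""

    def __init__(self, template: str, input_variables: list[str]):
        self.template = template
        self.input_variables = input_variables

    def format(self, **kwargs: str) -> str:
        # Fail loudly if a declared input variable was not supplied.
        missing = set(self.input_variables) - set(kwargs)
        if missing:
            raise ValueError(f"Missing input variables: {missing}")
        return self.template.format(**kwargs)


prompt = SimplePromptTemplate(
    template="Country: {country}\nCapital:",
    input_variables=["country"],
)
print(prompt.format(country="Canada"))  # -> "Country: Canada\nCapital:"
```

Here `country` is the single input variable, matching the example in the surrounding text.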
@@ -57,20 +55,21 @@ Capital: Ottawa
```
To learn more about how to provide few shot examples, see [Few Shot Examples](examples/few_shot_examples.ipynb).
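The few-shot pattern above — prepending worked examples so the model can infer the task — can be sketched without any library at all. The example data and prompt layout below are assumptions chosen to match the `Country:`/`Capital:` snippets in this document:

```python
# Hedged sketch of a few-shot prompt: join worked examples, then append the
# new query in the same format. Illustrative only.

examples = [
    {"country": "Canada", "capital": "Ottawa"},
    {"country": "France", "capital": "Paris"},
]

example_block = "\n".join(
    f"Country: {e['country']}\nCapital: {e['capital']}" for e in examples
)
prompt = f"{example_block}\nCountry: Japan\nCapital:"
print(prompt)
```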
<!-- TODO(shreya): Add correct link here. -->
## Example selection
If there are multiple examples that are relevant to a prompt, it is important to select the most relevant examples. Generally, the quality of the response from the LLM can be significantly improved by selecting the most relevant examples. This is because the language model will be able to better understand the context of the prompt, and also potentially learn failure modes to avoid.
To help the user with selecting the most relevant examples, we provide example selectors that select the most relevant based on different criteria, such as length, semantic similarity, etc. The example selector takes in a list of examples and returns a list of selected examples, formatted as a string. The user can also provide their own example selector. To learn more about example selectors, see [Example Selection](example_selection.md).
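One of the criteria mentioned above — selecting by length — can be sketched as follows. This is an illustrative stand-in under the assumption of a simple character budget, not LangChain's `ExampleSelector` API:

```python
# Sketch of a length-based example selector: keep adding examples until a
# character budget is exhausted, then return them formatted as one string.

def select_examples_by_length(examples: list[str], max_chars: int) -> str:
    """Return the examples that fit within max_chars, joined by newlines."""
    selected: list[str] = []
    used = 0
    for example in examples:
        if used + len(example) > max_chars:
            break  # budget exhausted; drop the rest
        selected.append(example)
        used += len(example)
    return "\n".join(selected)
```

A real selector might instead rank by semantic similarity to the query; the interface (examples in, formatted string out) is the part that matches the text above.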
<!-- TODO(shreya): Add correct link here. -->
## Serialization
To make it easy to share `PromptTemplates`, we provide a `serialize` method that returns a JSON string. The JSON string can be saved to a file, and then loaded back into a `PromptTemplate` using the `deserialize` method. This allows users to share `PromptTemplates` with others, and also to save them for later use.
To learn more about serialization, see [Serialization](examples/prompt_serialization.ipynb).
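The `serialize`/`deserialize` round trip described above can be sketched with plain JSON. The method names follow the prose; the exact API of the real library may differ, so treat this as an assumption:

```python
# Sketch of prompt-template serialization: dump the template and its input
# variables to a JSON string, and rebuild the object from it. Illustrative only.
import json


class SimplePromptTemplate:
    def __init__(self, template: str, input_variables: list[str]):
        self.template = template
        self.input_variables = input_variables

    def serialize(self) -> str:
        """Return a JSON string that fully describes this template."""
        return json.dumps(
            {"template": self.template, "input_variables": self.input_variables}
        )

    @classmethod
    def deserialize(cls, data: str) -> "SimplePromptTemplate":
        """Rebuild a template from a JSON string produced by serialize()."""
        payload = json.loads(data)
        return cls(payload["template"], payload["input_variables"])


original = SimplePromptTemplate("Country: {country}\nCapital:", ["country"])
restored = SimplePromptTemplate.deserialize(original.serialize())
```

The JSON string can equally be written to a file and loaded later, which is the sharing use case the text describes.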
@@ -5,13 +5,13 @@ There are a lot of different utilities that LangChain provides integrations for
These guides go over how to use them.
The utilities here are all utilities that make it easier to work with documents.
-`Text Splitters <combine_docs_examples/textsplitter.html>`_: A walkthrough of how to split large documents up into smaller, more manageable pieces of text.
+`Text Splitters <./combine_docs_examples/textsplitter.html>`_: A walkthrough of how to split large documents up into smaller, more manageable pieces of text.
-`VectorStores <combine_docs_examples/vectorstores.html>`_: A walkthrough of vectorstore functionalities, and different types of vectorstores, that LangChain supports.
+`VectorStores <./combine_docs_examples/vectorstores.html>`_: A walkthrough of vectorstore functionalities, and different types of vectorstores, that LangChain supports.
-`Embeddings <combine_docs_examples/embeddings.html>`_: A walkthrough of embedding functionalities, and different types of embeddings, that LangChain supports.
+`Embeddings <./combine_docs_examples/embeddings.html>`_: A walkthrough of embedding functionalities, and different types of embeddings, that LangChain supports.
-`HyDE <combine_docs_examples/hyde.html>`_: How to use Hypothetical Document Embeddings, a novel way of constructing embeddings for document retrieval systems.
+`HyDE <./combine_docs_examples/hyde.html>`_: How to use Hypothetical Document Embeddings, a novel way of constructing embeddings for document retrieval systems.
@@ -5,13 +5,13 @@ There are a lot of different utilities that LangChain provides integrations for
These guides go over how to use them.
These can largely be grouped into two categories:
-1. `Generic Utilities <generic_how_to.html>`_: Generic utilities, including search, python REPLs, etc.
-2. `Utilities for working with Documents <combine_docs_how_to.html>`_: Utilities aimed at making it easy to work with documents (text splitting, embeddings, vectorstores, etc).
+1. `Generic Utilities <./generic_how_to.html>`_: Generic utilities, including search, python REPLs, etc.
+2. `Utilities for working with Documents <./combine_docs_how_to.html>`_: Utilities aimed at making it easy to work with documents (text splitting, embeddings, vectorstores, etc).
@@ -5,11 +5,11 @@ Generative models are notoriously hard to evaluate with traditional metrics. One
The examples here all highlight how to use language models to assist in evaluation of themselves.
-`Question Answering <evaluation/question_answering.html>`_: An overview of LLMs aimed at evaluating question answering systems in general.
+`Question Answering <./evaluation/question_answering.html>`_: An overview of LLMs aimed at evaluating question answering systems in general.
-`Data Augmented Question Answering <evaluation/data_augmented_question_answering.html>`_: An end-to-end example of evaluating a question answering system focused on a specific document (a VectorDBQAChain to be precise). This example highlights how to use LLMs to come up with question/answer examples to evaluate over, and then highlights how to use LLMs to evaluate performance on those generated examples.
+`Data Augmented Question Answering <./evaluation/data_augmented_question_answering.html>`_: An end-to-end example of evaluating a question answering system focused on a specific document (a VectorDBQAChain to be precise). This example highlights how to use LLMs to come up with question/answer examples to evaluate over, and then highlights how to use LLMs to evaluate performance on those generated examples.
-`Hugging Face Datasets <evaluation/huggingface_datasets.html>`_: Covers an example of loading and using a dataset from Hugging Face for evaluation.
+`Hugging Face Datasets <./evaluation/huggingface_datasets.html>`_: Covers an example of loading and using a dataset from Hugging Face for evaluation.