Key Concepts
Memory
By default, Chains and Agents are stateless, meaning that they treat each incoming query independently. In some applications (chatbots being a GREAT example) it is highly important to remember previous interactions, at both a short-term and a long-term level. The concept of "Memory" exists to do exactly that.
Conversational Memory
One of the simpler forms of memory occurs in chatbots, where they remember previous conversations. There are a few different ways to accomplish this:
- Buffer: This is just passing the past `N` interactions in as context. `N` can be chosen based on a fixed number, the length of the interactions, or other!
- Summary: This involves summarizing previous conversations and passing that summary in, instead of the raw dialogue itself. Compared to `Buffer`, this compresses information: meaning it is more lossy, but also less likely to run into context length limits.
- Combination: A combination of the above two approaches, where you compute a summary but also pass in some previous interactions directly!
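To make the Buffer approach concrete, here is a minimal self-contained sketch (a hypothetical `BufferMemory` class, not the langchain API itself) that keeps only the last `N` interactions and renders them as context:

```python
from collections import deque


class BufferMemory:
    """Hypothetical sketch of the "Buffer" approach: keep the last N turns verbatim."""

    def __init__(self, n: int = 5):
        # deque with maxlen drops the oldest turn automatically once full
        self.turns = deque(maxlen=n)

    def add(self, human: str, ai: str) -> None:
        self.turns.append((human, ai))

    def context(self) -> str:
        # Render the retained turns as the context string passed to the model
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.turns)


memory = BufferMemory(n=2)
memory.add("Hi, I'm Bob.", "Hello Bob!")
memory.add("What's the weather?", "Sunny today.")
memory.add("Thanks.", "You're welcome.")
# Only the 2 most recent interactions remain; the first turn has been dropped
print(memory.context())
```

The Summary and Combination variants would replace or augment `context()` with an LLM-generated summary of the dropped turns, trading exactness for a shorter prompt.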