update chain docs (#15495)

Co-authored-by: Bagatur <baskaryan@gmail.com>
Harrison Chase 5 months ago committed by GitHub
parent 00dfbd2a99
commit 9b9449750c

@@ -20,4 +20,4 @@ wget https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O
yarn
quarto preview docs
poetry run quarto preview docs

@@ -6,7 +6,7 @@
"metadata": {},
"source": [
"---\n",
"sidebar_position: 3\n",
"sidebar_position: 4\n",
"sidebar_class_name: hidden\n",
"title: Agents\n",
"---"

@@ -0,0 +1,176 @@
{
"cells": [
{
"cell_type": "raw",
"id": "bcb4ca40-c3cb-4f23-b09f-4d6c3c46999f",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 3\n",
"title: Chains\n",
"sidebar_class_name: hidden\n",
"hide_table_of_contents: true\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "b872d874-ad6e-49b5-9435-66063a64d1a8",
"metadata": {},
"source": [
"Chains refer to sequences of calls - whether to an LLM, a tool, or a data preprocessing step. The primary supported way to do this is with [LCEL](/docs/expression_language). \n",
"\n",
"LCEL is great for constructing your own chains, but it's also nice to have chains that you can use off-the-shelf. There are two types of off-the-shelf chains that LangChain supports:\n",
"\n",
"- Chains that are built with LCEL. In this case, LangChain offers a higher-level constructor method. However, all that is being done under the hood is constructing a chain with LCEL. \n",
"\n",
"- [Legacy] Chains constructed by subclassing from a legacy `Chain` class. These chains do not use LCEL under the hood but are rather standalone classes.\n",
"\n",
"We are working creating methods that create LCEL versions of all chains. We are doing this for a few reasons.\n",
"\n",
"1. Chains constructed in this way are nice because if you want to modify the internals of a chain you can simply modify the LCEL.\n",
"\n",
"2. These chains natively support streaming, async, and batch out of the box.\n",
"\n",
"3. These chains automatically get observability at each step.\n",
"\n",
"This page contains two lists. First, a list of all LCEL chain constructors. Second, a list of all legacy Chains."
]
},
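{
"cell_type": "markdown",
"id": "e1f2a3b4-c5d6-4789-9abc-def012345678",
"metadata": {},
"source": [
"Before diving into the lists, here is what a minimal LCEL chain looks like: a prompt template piped into a chat model and an output parser. This is only a sketch; it assumes an OpenAI API key is configured in your environment."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f2a3b4c5-d6e7-4890-8bcd-ef0123456789",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.schema import StrOutputParser\n",
"from langchain_community.chat_models import ChatOpenAI\n",
"\n",
"# prompt -> model -> output parser, composed with the | operator\n",
"chain = (\n",
"    ChatPromptTemplate.from_template(\"Tell me a joke about {topic}\")\n",
"    | ChatOpenAI()\n",
"    | StrOutputParser()\n",
")\n",
"chain.invoke({\"topic\": \"bears\"})"
]
},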
{
"cell_type": "markdown",
"id": "6aedf9f6-b53f-4456-90cb-be3cfec04b4e",
"metadata": {},
"source": [
"## LCEL Chains\n",
"\n",
"Below is a table of all LCEL chain constructors. In addition, we report on:\n",
"\n",
"**Chain Constructor**\n",
"\n",
"The constructor function for this chain. These are all methods that return LCEL runnables. We also link to the API documentation.\n",
"\n",
"**Function Calling**\n",
"\n",
"Whether this requires OpenAI function calling.\n",
"\n",
"**Other Tools**\n",
"\n",
"What other tools (if any) are used in this chain.\n",
"\n",
"**When to Use**\n",
"\n",
"Our commentary on when to use this chain.\n",
"\n",
"\n",
"\n",
"| Chain Constructor | Function Calling | Other Tools | When to Use |\n",
"|----------------------------------|-------------------------|--------------|--------------------------------------------------------------------------------|\n",
"| [create_stuff_documents_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.stuff.create_stuff_documents_chain.html#langchain.chains.combine_documents.stuff.create_stuff_documents_chain) | | | This chain takes a list of documents and formats them all into a prompt, then passes that prompt to an LLM. It passes ALL documents, so you should make sure it fits within the context window the LLM you are using. |\n",
"| [create_openai_fn_runnable](https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.base.create_openai_fn_runnable.html#langchain.chains.openai_functions.base.create_openai_fn_runnable) | ✅ | | If you want to use OpenAI function calling to OPTIONALLY structured an output response. You may pass in multiple functions for it call, but it does not have to call it. |\n",
"| [create_structured_output_runnable](https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.base.create_structured_output_runnable.html#langchain.chains.openai_functions.base.create_structured_output_runnable) | ✅ | | If you want to use OpenAI function calling to FORCE the LLM to respond with a certain function. You may only pass in one function, and the chain will ALWAYS return this response. |\n",
"| [load_query_constructor_runnable](https://api.python.langchain.com/en/latest/chains/langchain.chains.query_constructor.base.load_query_constructor_runnable.html#langchain.chains.query_constructor.base.load_query_constructor_runnable) | | | Can be used to generates queries. You must specify a list of allowed operations, and then will return a runnable that converts a natural language query into those allowed operations. |\n",
"| [create_sql_query_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.sql_database.query.create_sql_query_chain.html#langchain.chains.sql_database.query.create_sql_query_chain) | | SQL Database | If you want to construct a query for a SQL database from natural language. |\n",
"| [create_history_aware_retriever](https://api.python.langchain.com/en/latest/chains/langchain.chains.history_aware_retriever.create_history_aware_retriever.html#langchain.chains.history_aware_retriever.create_history_aware_retriever) | | Retriever | This chain takes in conversation history and then uses that to generate a search query which is passed to the underlying retriever. |\n",
"| [create_retrieval_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html#langchain.chains.retrieval.create_retrieval_chain) | | Retriever | This chain takes in a user inquiry, which is then passed to the retriever to fetch relevant documents. Those documents (and original inputs) are then passed to an LLM to generate a response |"
]
},
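{
"cell_type": "markdown",
"id": "a9b8c7d6-e5f4-4321-9876-543210fedcba",
"metadata": {},
"source": [
"As an illustration of one row from this table, here is a sketch of using `create_stuff_documents_chain`. It assumes an OpenAI API key is configured in your environment."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b8c7d6e5-f4a3-4210-8765-43210fedcba9",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.schema import Document\n",
"from langchain_community.chat_models import ChatOpenAI\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\"Summarize the following:\\n\\n{context}\")\n",
"# The constructor returns an LCEL runnable; ALL documents are stuffed into the prompt.\n",
"chain = create_stuff_documents_chain(ChatOpenAI(), prompt)\n",
"chain.invoke({\"context\": [Document(page_content=\"LangChain is a framework for building LLM applications.\")]})"
]
},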
{
"cell_type": "markdown",
"id": "4b32f348",
"metadata": {},
"source": [
"## Legacy Chains\n",
"\n",
"Below we report on the legacy chain types that exist. We will maintain support for these until we are able to create a LCEL alternative. We report on:\n",
"\n",
"**Chain**\n",
"\n",
"Name of the chain, or name of the constructor method. If constructor method, this will return a `Chain` subclass.\n",
"\n",
"**Function Calling**\n",
"\n",
"Whether this requires OpenAI Function Calling.\n",
"\n",
"**Other Tools**\n",
"\n",
"Other tools used in the chain.\n",
"\n",
"**When to Use**\n",
"\n",
"Our commentary on when to use.\n",
"\n",
"| Chain | Function Calling | Other Tools | When to Use |\n",
"|------------------------------|--------------------|------------------------|-------------|\n",
"| [APIChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.api.base.APIChain.html#langchain.chains.api.base.APIChain) | | Requests Wrapper | This chain uses an LLM to convert a query into an API request, then executes that request, gets back a response, and then passes that request to an LLM to respond |\n",
"| [OpenAPIEndpointChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.api.openapi.chain.OpenAPIEndpointChain.html#langchain.chains.api.openapi.chain.OpenAPIEndpointChain) | | OpenAPI Spec | Similar to APIChain, this chain is designed to interact with APIs. The main difference is this is optimized for ease of use with OpenAPI endpoints |\n",
"| [ConversationalRetrievalChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain.html#langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain) | | Retriever |This chain can be used to have **conversations** with a document. It takes in a question and (optional) previous conversation history. If there is previous conversation history, it uses an LLM to rewrite the conversation into a query to send to a retriever (otherwise it just uses the newest user input). It then fetches those documents and passes them (along with the conversation) to an LLM to respond. |\n",
"| [StuffDocumentsChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.stuff.StuffDocumentsChain.html#langchain.chains.combine_documents.stuff.StuffDocumentsChain) | | |This chain takes a list of documents and formats them all into a prompt, then passes that prompt to an LLM. It passes ALL documents, so you should make sure it fits within the context window the LLM you are using. |\n",
"| [ReduceDocumentsChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.reduce.ReduceDocumentsChain.html#langchain.chains.combine_documents.reduce.ReduceDocumentsChain) | | |This chain combines documents by iterative reducing them. It groups documents into chunks (less than some context length) then passes them into an LLM. It then takes the responses and continues to do this until it can fit everything into one final LLM call. Useful when you have a lot of documents, you want to have the LLM run over all of them, and you can do in parallel. |\n",
"| [MapReduceDocumentsChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.map_reduce.MapReduceDocumentsChain.html#langchain.chains.combine_documents.map_reduce.MapReduceDocumentsChain) | | |This chain first passes each document through an LLM, then reduces them using the ReduceDocumentsChain. Useful in the same situations as ReduceDocumentsChain, but does an initial LLM call before trying to reduce the documents. |\n",
"| [RefineDocumentsChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.refine.RefineDocumentsChain.html#langchain.chains.combine_documents.refine.RefineDocumentsChain) | | |This chain collapses documents by generating an initial answer based on the first document and then looping over the remaining documents to *refine* its answer. This operates sequentially, so it cannot be parallelized. It is useful in similar situatations as MapReduceDocuments Chain, but for cases where you want to build up an answer by refining the previous answer (rather than parallelizing calls). | |\n",
"| [MapRerankDocumentsChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.map_rerank.MapRerankDocumentsChain.html#langchain.chains.combine_documents.map_rerank.MapRerankDocumentsChain) | | | This calls on LLM on each document, asking it to not only answer but also produce a score of how confident it is. The answer with the highest confidence is then returned. This is useful when you have a lot of documents, but only want to answer based on a single document, rather than trying to combine answers (like Refine and Reduce methods do).|\n",
"| [ConstitutionalChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.base.ConstitutionalChain.html#langchain.chains.constitutional_ai.base.ConstitutionalChain) | | |This chain answers, then attempts to refine its answer based on constitutional principles that are provided. Use this when you want to enforce that a chain's answer follows some principles. |\n",
"| [LLMChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.llm.LLMChain.html#langchain.chains.llm.LLMChain) | | | |This chain simply combines a prompt with an LLM and an output parser. The recommended way to do this is just to use LCEL. |\n",
"| [ElasticsearchDatabaseChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.elasticsearch_database.base.ElasticsearchDatabaseChain.html#langchain.chains.elasticsearch_database.base.ElasticsearchDatabaseChain) | | ElasticSearch Instance |This chain converts a natural language question to an ElasticSearch query, and then runs it, and then summarizes the response. This is useful for when you want to ask natural language questions of an Elastic Search database |\n",
"| [FlareChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.flare.base.FlareChain.html#langchain.chains.flare.base.FlareChain) | | |This implements [FLARE](https://arxiv.org/abs/2305.06983), an advanced retrieval technique. It is primarily meant as an exploratory advanced retrieval method. |\n",
"| [ArangoGraphQAChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.arangodb.ArangoGraphQAChain.html#langchain.chains.graph_qa.arangodb.ArangoGraphQAChain) | |Arango Graph |This chain constructs an Arango query from natural language, executes that query against the graph, and then passes the results back to an LLM to respond. |\n",
"|[GraphCypherQAChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.cypher.GraphCypherQAChain.html#langchain.chains.graph_qa.cypher.GraphCypherQAChain) | |A graph that works with Cypher query language |This chain constructs an Cypher query from natural language, executes that query against the graph, and then passes the results back to an LLM to respond. |\n",
"|[FalkorDBGraphQAChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.falkordb.FalkorDBQAChain.html#langchain.chains.graph_qa.falkordb.FalkorDBQAChain) | |Falkor Database | This chain constructs a FalkorDB query from natural language, executes that query against the graph, and then passes the results back to an LLM to respond. |\n",
"|[HugeGraphQAChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.hugegraph.HugeGraphQAChain.html#langchain.chains.graph_qa.hugegraph.HugeGraphQAChain) | |HugeGraph |This chain constructs an HugeGraph query from natural language, executes that query against the graph, and then passes the results back to an LLM to respond. |\n",
"|[KuzuQAChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.kuzu.KuzuQAChain.html#langchain.chains.graph_qa.kuzu.KuzuQAChain) | |Kuzu Graph |This chain constructs a Kuzu Graph query from natural language, executes that query against the graph, and then passes the results back to an LLM to respond. |\n",
"|[NebulaGraphQAChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.nebulagraph.NebulaGraphQAChain.html#langchain.chains.graph_qa.nebulagraph.NebulaGraphQAChain) | |Nebula Graph |This chain constructs a Nebula Graph query from natural language, executes that query against the graph, and then passes the results back to an LLM to respond. |\n",
"|[NeptuneOpenCypherQAChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.neptune_cypher.NeptuneOpenCypherQAChain.html#langchain.chains.graph_qa.neptune_cypher.NeptuneOpenCypherQAChain) | |Neptune Graph |This chain constructs an Neptune Graph query from natural language, executes that query against the graph, and then passes the results back to an LLM to respond. |\n",
"|[GraphSparqlChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.graph_qa.sparql.GraphSparqlQAChain.html#langchain.chains.graph_qa.sparql.GraphSparqlQAChain) | |Graph that works with SparQL |This chain constructs an SparQL query from natural language, executes that query against the graph, and then passes the results back to an LLM to respond. |\n",
"|[LLMMath](https://api.python.langchain.com/en/latest/chains/langchain.chains.llm_math.base.LLMMathChain.html#langchain.chains.llm_math.base.LLMMathChain) | | |This chain converts a user question to a math problem and then executes it (using [numexpr](https://github.com/pydata/numexpr)) |\n",
"|[LLMCheckerChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.llm_checker.base.LLMCheckerChain.html#langchain.chains.llm_checker.base.LLMCheckerChain) | | |This chain uses a second LLM call to varify its initial answer. Use this when you to have an extra layer of validation on the initial LLM call. |\n",
"|[LLMSummarizationChecker](https://api.python.langchain.com/en/latest/chains/langchain.chains.llm_summarization_checker.base.LLMSummarizationCheckerChain.html#langchain.chains.llm_summarization_checker.base.LLMSummarizationCheckerChain) | | |This chain creates a summary using a sequence of LLM calls to make sure it is extra correct. Use this over the normal summarization chain when you are okay with multiple LLM calls (eg you care more about accuracy than speed/cost). |\n",
"|[create_citation_fuzzy_match_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.citation_fuzzy_match.create_citation_fuzzy_match_chain.html#langchain.chains.openai_functions.citation_fuzzy_match.create_citation_fuzzy_match_chain) |✅ | |Uses OpenAI function calling to answer questions and cite its sources. |\n",
"|[create_extraction_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.extraction.create_extraction_chain.html#langchain.chains.openai_functions.extraction.create_extraction_chain) | ✅ | |Uses OpenAI Function calling to extract information from text. |\n",
"|[create_extraction_chain_pydantic](https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.extraction.create_extraction_chain_pydantic.html#langchain.chains.openai_functions.extraction.create_extraction_chain_pydantic) | ✅ | |Uses OpenAI function calling to extract information from text into a Pydantic model. Compared to `create_extraction_chain` this has a tighter integration with Pydantic. |\n",
"|[get_openapi_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.openapi.get_openapi_chain.html#langchain.chains.openai_functions.openapi.get_openapi_chain) | ✅ |OpenAPI Spec |Uses OpenAI function calling to query an OpenAPI. |\n",
"|[create_qa_with_structure_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.qa_with_structure.create_qa_with_structure_chain.html#langchain.chains.openai_functions.qa_with_structure.create_qa_with_structure_chain) | ✅ | |Uses OpenAI function calling to do question answering over text and respond in a specific format. |\n",
"|[create_qa_with_sources_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.openai_functions.qa_with_structure.create_qa_with_sources_chain.html#langchain.chains.openai_functions.qa_with_structure.create_qa_with_sources_chain) | ✅ | |Uses OpenAI function calling to answer questions with citations. |\n",
"|[QAGenerationChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.qa_generation.base.QAGenerationChain.html#langchain.chains.qa_generation.base.QAGenerationChain) | | |Creates both questions and answers from documents. Can be used to generate question/answer pairs for evaluation of retrieval projects. | \n",
"|[RetrievalQAWithSourcesChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.qa_with_sources.retrieval.RetrievalQAWithSourcesChain.html#langchain.chains.qa_with_sources.retrieval.RetrievalQAWithSourcesChain) | | Retriever |Does question answering over retrieved documents, and cites it sources. Use this when you want the answer response to have sources in the text response. Use this over `load_qa_with_sources_chain` when you want to use a retriever to fetch the relevant document as part of the chain (rather than pass them in).| \n",
"|[load_qa_with_sources_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.qa_with_sources.loading.load_qa_with_sources_chain.html#langchain.chains.qa_with_sources.loading.load_qa_with_sources_chain) | |Retriever |Does question answering over documents you pass in, and cites it sources. Use this when you want the answer response to have sources in the text response. Use this over RetrievalQAWithSources when you want to pass in the documents directly (rather than rely on a retriever to get them).| \n",
"|[RetrievalQA](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval_qa.base.RetrievalQA.html#langchain.chains.retrieval_qa.base.RetrievalQA) | |Retriever |This chain first does a retrieval step to fetch relevant documents, then passes those documents into an LLM to generate a respoinse.|\n",
"|[MultiPromptChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.router.multi_prompt.MultiPromptChain.html#langchain.chains.router.multi_prompt.MultiPromptChain) | | |This chain routes input between multiple prompts. Use this when you have multiple potential prompts you could use to respond and want to route to just one. | \n",
"|[MultiRetrievalQAChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.router.multi_retrieval_qa.MultiRetrievalQAChain.html#langchain.chains.router.multi_retrieval_qa.MultiRetrievalQAChain)| |Retriever |This chain routes input between multiple retrievers. Use this when you have multiple potential retrievers you could fetch relevant documents from and want to route to just one. | \n",
"|[EmbeddingRouterChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.router.embedding_router.EmbeddingRouterChain.html#langchain.chains.router.embedding_router.EmbeddingRouterChain)| | |This chain uses embedding similarity to route incoming queries.| \n",
"|[LLMRouterChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.router.llm_router.LLMRouterChain.html#langchain.chains.router.llm_router.LLMRouterChain)| | |This chain uses an LLM to route between potential options. \n",
"|load_summarize_chain| | | |This chain summarizes text| \n",
"|[LLMRequestsChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.llm_requests.LLMRequestsChain.html#langchain.chains.llm_requests.LLMRequestsChain)| | |This chain constructs a URL from user input, gets data at that URL, and then summarizes the response. Compared to APIChain, this chain is not focused on a single API spec but is more general | \n"
]
},
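{
"cell_type": "markdown",
"id": "c7d6e5f4-a3b2-4109-8654-3210fedcba98",
"metadata": {},
"source": [
"As a sketch of what the legacy interface looks like, here is `LLMChain` from the table above (the LCEL equivalent is simply `prompt | llm | output_parser`). It assumes an OpenAI API key is configured in your environment."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d6e5f4a3-b2c1-4098-a543-210fedcba987",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_community.chat_models import ChatOpenAI\n",
"\n",
"# A legacy Chain subclass that combines a prompt with an LLM.\n",
"legacy_chain = LLMChain(\n",
"    llm=ChatOpenAI(),\n",
"    prompt=PromptTemplate.from_template(\"Tell me a joke about {topic}\"),\n",
")\n",
"legacy_chain.invoke({\"topic\": \"bears\"})"
]
},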
{
"cell_type": "code",
"execution_count": null,
"id": "17868bf7",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@@ -1,21 +0,0 @@
---
sidebar_position: 2
---
# Documents
These are the core chains for working with documents. They are useful for summarizing documents, answering questions over documents, extracting information from documents, and more.
These chains all implement a common interface:
```python
class BaseCombineDocumentsChain(Chain, ABC):
    """Base interface for chains combining documents."""

    @abstractmethod
    def combine_docs(self, docs: List[Document], **kwargs: Any) -> Tuple[str, dict]:
        """Combine documents into a single string."""
```
import DocCardList from "@theme/DocCardList";
<DocCardList />

@@ -1,255 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "2accd7d9-20b6-47f9-9cec-923809cc36c7",
"metadata": {},
"source": [
"# Map reduce\n",
"\n",
"The map reduce documents chain first applies an LLM chain to each document individually (the Map step), treating the chain output as a new document. It then passes all the new documents to a separate combine documents chain to get a single output (the Reduce step). It can optionally first compress, or collapse, the mapped documents to make sure that they fit in the combine documents chain (which will often pass them to an LLM). This compression step is performed recursively if necessary.\n",
"\n",
"![map_reduce_diagram](../../../../static/img/map_reduce.jpg)"
]
},
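{
"cell_type": "markdown",
"id": "1a2b3c4d-5e6f-4a0b-9c8d-7e6f5a4b3c2d",
"metadata": {},
"source": [
"For reference, the legacy chain can be assembled with the `load_summarize_chain` helper. A minimal sketch (assuming an Anthropic API key is configured, to match the rest of this notebook):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2b3c4d5e-6f7a-4b1c-8d9e-8f7a6b5c4d3e",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.summarize import load_summarize_chain\n",
"from langchain_community.chat_models import ChatAnthropic\n",
"\n",
"# Assembles the legacy MapReduceDocumentsChain: map each document, then reduce.\n",
"legacy_map_reduce = load_summarize_chain(ChatAnthropic(), chain_type=\"map_reduce\")\n",
"# legacy_map_reduce.invoke({\"input_documents\": docs}) -> {..., \"output_text\": \"...\"}"
]
},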
{
"cell_type": "markdown",
"id": "343fc972-40be-44f8-8ed3-305322661a00",
"metadata": {},
"source": [
"## Recreating with LCEL\n",
"\n",
"With [LangChain Expression Language](/docs/expression_language) we can recreate the `MapReduceDocumentsChain` functionality, with the additional benefit of getting all the built-in LCEL features (batch, async, etc.) and with much more ability to customize specific parts of the chain."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "7bc161bc-4054-457a-9d04-7245093acd16",
"metadata": {},
"outputs": [],
"source": [
"from functools import partial\n",
"\n",
"from langchain.chains.combine_documents import collapse_docs, split_list_of_docs\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.schema import StrOutputParser\n",
"from langchain_community.chat_models import ChatAnthropic\n",
"from langchain_core.prompts import format_document\n",
"from langchain_core.runnables import RunnableParallel, RunnablePassthrough"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a93ba908-5b81-4e91-a598-ee6fa05eac01",
"metadata": {},
"outputs": [],
"source": [
"llm = ChatAnthropic()\n",
"\n",
"# Prompt and method for converting Document -> str.\n",
"document_prompt = PromptTemplate.from_template(\"{page_content}\")\n",
"partial_format_document = partial(format_document, prompt=document_prompt)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "75918baa-df6b-4570-91eb-1acd1c87e09b",
"metadata": {},
"outputs": [],
"source": [
"# The chain we'll apply to each individual document.\n",
"# Returns a summary of the document.\n",
"map_chain = (\n",
" {\"context\": partial_format_document}\n",
" | PromptTemplate.from_template(\"Summarize this content:\\n\\n{context}\")\n",
" | llm\n",
" | StrOutputParser()\n",
")\n",
"\n",
"# A wrapper chain to keep the original Document metadata\n",
"map_as_doc_chain = (\n",
" RunnableParallel({\"doc\": RunnablePassthrough(), \"content\": map_chain})\n",
" | (lambda x: Document(page_content=x[\"content\"], metadata=x[\"doc\"].metadata))\n",
").with_config(run_name=\"Summarize (return doc)\")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "720cb117-8a3e-4595-9e2c-dbbd7a3777b5",
"metadata": {},
"outputs": [],
"source": [
"# The chain we'll repeatedly apply to collapse subsets of the documents\n",
"# into a consolidate document until the total token size of our\n",
"# documents is below some max size.\n",
"def format_docs(docs):\n",
" return \"\\n\\n\".join(partial_format_document(doc) for doc in docs)\n",
"\n",
"\n",
"collapse_chain = (\n",
" {\"context\": format_docs}\n",
" | PromptTemplate.from_template(\"Collapse this content:\\n\\n{context}\")\n",
" | llm\n",
" | StrOutputParser()\n",
")\n",
"\n",
"\n",
"def get_num_tokens(docs):\n",
" return llm.get_num_tokens(format_docs(docs))\n",
"\n",
"\n",
"def collapse(\n",
" docs,\n",
" config,\n",
" token_max=4000,\n",
"):\n",
" collapse_ct = 1\n",
" while get_num_tokens(docs) > token_max:\n",
" config[\"run_name\"] = f\"Collapse {collapse_ct}\"\n",
" invoke = partial(collapse_chain.invoke, config=config)\n",
" split_docs = split_list_of_docs(docs, get_num_tokens, token_max)\n",
" docs = [collapse_docs(_docs, invoke) for _docs in split_docs]\n",
" collapse_ct += 1\n",
" return docs"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "fe5c5597-3ea3-443e-ad9f-e2f055cf092f",
"metadata": {},
"outputs": [],
"source": [
"# The chain we'll use to combine our individual document summaries\n",
"# (or summaries over subset of documents if we had to collapse the map results)\n",
"# into a final summary.\n",
"\n",
"reduce_chain = (\n",
" {\"context\": format_docs}\n",
" | PromptTemplate.from_template(\"Combine these summaries:\\n\\n{context}\")\n",
" | llm\n",
" | StrOutputParser()\n",
").with_config(run_name=\"Reduce\")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "fd1148ce-f693-42b5-91e4-304983e26be6",
"metadata": {},
"outputs": [],
"source": [
"# The final full chain\n",
"map_reduce = (map_as_doc_chain.map() | collapse | reduce_chain).with_config(\n",
" run_name=\"Map reduce\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "5a10615c-f3ab-4603-bf0c-e6aea73c5450",
"metadata": {},
"source": [
"## Example run"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "28a8f74c-2441-431a-b352-541d0ad1e75b",
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema import Document\n",
"\n",
"text = \"\"\"Nuclear power in space is the use of nuclear power in outer space, typically either small fission systems or radioactive decay for electricity or heat. Another use is for scientific observation, as in a Mössbauer spectrometer. The most common type is a radioisotope thermoelectric generator, which has been used on many space probes and on crewed lunar missions. Small fission reactors for Earth observation satellites, such as the TOPAZ nuclear reactor, have also been flown.[1] A radioisotope heater unit is powered by radioactive decay and can keep components from becoming too cold to function, potentially over a span of decades.[2]\n",
"\n",
"The United States tested the SNAP-10A nuclear reactor in space for 43 days in 1965,[3] with the next test of a nuclear reactor power system intended for space use occurring on 13 September 2012 with the Demonstration Using Flattop Fission (DUFF) test of the Kilopower reactor.[4]\n",
"\n",
"After a ground-based test of the experimental 1965 Romashka reactor, which used uranium and direct thermoelectric conversion to electricity,[5] the USSR sent about 40 nuclear-electric satellites into space, mostly powered by the BES-5 reactor. The more powerful TOPAZ-II reactor produced 10 kilowatts of electricity.[3]\n",
"\n",
"Examples of concepts that use nuclear power for space propulsion systems include the nuclear electric rocket (nuclear powered ion thruster(s)), the radioisotope rocket, and radioisotope electric propulsion (REP).[6] One of the more explored concepts is the nuclear thermal rocket, which was ground tested in the NERVA program. Nuclear pulse propulsion was the subject of Project Orion.[7]\n",
"\n",
"Regulation and hazard prevention[edit]\n",
"After the ban of nuclear weapons in space by the Outer Space Treaty in 1967, nuclear power has been discussed at least since 1972 as a sensitive issue by states.[8] Particularly its potential hazards to Earth's environment and thus also humans has prompted states to adopt in the U.N. General Assembly the Principles Relevant to the Use of Nuclear Power Sources in Outer Space (1992), particularly introducing safety principles for launches and to manage their traffic.[8]\n",
"\n",
"Benefits\n",
"\n",
"Both the Viking 1 and Viking 2 landers used RTGs for power on the surface of Mars. (Viking launch vehicle pictured)\n",
"While solar power is much more commonly used, nuclear power can offer advantages in some areas. Solar cells, although efficient, can only supply energy to spacecraft in orbits where the solar flux is sufficiently high, such as low Earth orbit and interplanetary destinations close enough to the Sun. Unlike solar cells, nuclear power systems function independently of sunlight, which is necessary for deep space exploration. Nuclear-based systems can have less mass than solar cells of equivalent power, allowing more compact spacecraft that are easier to orient and direct in space. In the case of crewed spaceflight, nuclear power concepts that can power both life support and propulsion systems may reduce both cost and flight time.[9]\n",
"\n",
"Selected applications and/or technologies for space include:\n",
"\n",
"Radioisotope thermoelectric generator\n",
"Radioisotope heater unit\n",
"Radioisotope piezoelectric generator\n",
"Radioisotope rocket\n",
"Nuclear thermal rocket\n",
"Nuclear pulse propulsion\n",
"Nuclear electric rocket\n",
"\"\"\"\n",
"\n",
"docs = [\n",
" Document(\n",
" page_content=split,\n",
" metadata={\"source\": \"https://en.wikipedia.org/wiki/Nuclear_power_in_space\"},\n",
" )\n",
" for split in text.split(\"\\n\\n\")\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "846fe4ce-7016-4bc7-a8e0-7e914675d568",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Here is a summary that combines the key points about nuclear power in space:\n",
"\n",
"Nuclear power is used in space for electricity, heat, and scientific observation. The most common type is a radioisotope thermoelectric generator, which has powered space probes and lunar missions using the heat from radioactive decay. Small nuclear fission reactors have also been used to generate electricity for Earth observation satellites like the TOPAZ reactor. In addition, radioisotope heater units use radioactive decay to provide reliable heat that can keep components functioning properly over decades in the harsh space environment. Overall, nuclear power has proven useful for providing long-lasting power for space applications where solar power is not practical. Technologies like radioisotope decay heat and small fission reactors allow probes, satellites, and missions to operate far from the Sun and for extended periods by generating electricity and heat without reliance on solar energy.\n"
]
}
],
"source": [
"print(map_reduce.invoke(docs[0:1], config={\"max_concurrency\": 5}))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a8b77e13-6db4-4096-a1a4-d0abe2979b6b",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv",
"language": "python",
"name": "poetry-venv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@@ -1,204 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "3bbf49ee-f3f1-40c1-b48a-828e4166bfe0",
"metadata": {},
"source": [
"# Map re-rank\n",
"\n",
"The map re-rank documents chain runs an initial prompt on each document, that not only tries to complete a task but also gives a score for how certain it is in its answer. The highest scoring response is returned.\n",
"\n",
"![map_rerank_diagram](../../../../static/img/map_rerank.jpg)"
]
},
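{
"cell_type": "markdown",
"id": "3c4d5e6f-7a8b-4c2d-9e0f-9a8b7c6d5e4f",
"metadata": {},
"source": [
"For reference, the legacy chain can be assembled with the `load_qa_chain` helper. A minimal sketch (assuming an OpenAI API key is configured, to match the rest of this notebook):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4d5e6f7a-8b9c-4d3e-8f1a-0b9c8d7e6f5a",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.question_answering import load_qa_chain\n",
"from langchain_community.chat_models import ChatOpenAI\n",
"\n",
"# Assembles the legacy MapRerankDocumentsChain: answer and score per document,\n",
"# then return the highest-scoring answer.\n",
"legacy_map_rerank = load_qa_chain(ChatOpenAI(temperature=0), chain_type=\"map_rerank\")\n",
"# legacy_map_rerank.invoke({\"input_documents\": docs, \"question\": \"...\"})"
]
},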
{
"cell_type": "markdown",
"id": "d4cfac68-f2c4-49bf-9aad-d3e07eb9ee53",
"metadata": {},
"source": [
"## Recreating with LCEL\n",
"\n",
"With [LangChain Expression Language](/docs/expression_language) we can recreate the `MapRerankDocumentsChain` functionality, with the additional benefit of getting all the built-in LCEL features (batch, async, etc.) and with much more ability to customize specific parts of the chain."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "27c523f9-f9f1-4ad5-8bdb-38f8faa9c6e3",
"metadata": {},
"outputs": [],
"source": [
"from langchain.output_parsers.openai_functions import PydanticOutputFunctionsParser\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.utils.openai_functions import convert_pydantic_to_openai_function\n",
"from langchain_community.chat_models import ChatOpenAI\n",
"from langchain_core.prompts import format_document\n",
"from langchain_core.pydantic_v1 import BaseModel, Field"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "ea687400-9410-445c-9adc-f8c8b9f66327",
"metadata": {},
"outputs": [],
"source": [
"# Chain to apply to each individual document. Chain\n",
"# provides an answer to the question based on the document\n",
"# and scores it's confidence in the answer.\n",
"\n",
"map_prompt = PromptTemplate.from_template(\n",
" \"Answer the user question using the context.\"\n",
" \"\\n\\nContext:\\n\\n{context}\\n\\nQuestion: {question}\"\n",
")\n",
"\n",
"\n",
"class AnswerAndScore(BaseModel):\n",
" \"\"\"Return the answer to the question and a relevance score.\"\"\"\n",
"\n",
" answer: str = Field(\n",
" description=\"The answer to the question, which is based ONLY on the provided context.\"\n",
" )\n",
" score: float = Field(\n",
" description=\"A 0.0-1.0 relevance score, where 1.0 indicates the provided context answers the question completely and 0.0 indicates the provided context does not answer the question at all.\"\n",
" )\n",
"\n",
"\n",
"function = convert_pydantic_to_openai_function(AnswerAndScore)\n",
"map_chain = (\n",
" map_prompt\n",
" | ChatOpenAI().bind(\n",
" temperature=0, functions=[function], function_call={\"name\": \"AnswerAndScore\"}\n",
" )\n",
" | PydanticOutputFunctionsParser(pydantic_schema=AnswerAndScore)\n",
").with_config(run_name=\"Map\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "ace2b1d5-a9ea-4a70-8d39-2826a5445aa7",
"metadata": {},
"outputs": [],
"source": [
"# Final chain, which after answer and scoring based on\n",
"# each doc return the answer with the highest score.\n",
"\n",
"\n",
"def top_answer(scored_answers):\n",
" return max(scored_answers, key=lambda x: x.score).answer\n",
"\n",
"\n",
"document_prompt = PromptTemplate.from_template(\"{page_content}\")\n",
"map_rerank_chain = (\n",
" (\n",
" lambda x: [\n",
" {\n",
" \"context\": format_document(doc, document_prompt),\n",
" \"question\": x[\"question\"],\n",
" }\n",
" for doc in x[\"docs\"]\n",
" ]\n",
" )\n",
" | map_chain.map()\n",
" | top_answer\n",
").with_config(run_name=\"Map rerank\")"
]
},
{
"cell_type": "markdown",
"id": "62b863c9-2316-42ad-9581-ebc688889855",
"metadata": {},
"source": [
"## Example run"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "7b46c373-64c9-4b69-b64f-9bc6e52ae91c",
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema import Document\n",
"\n",
"text = \"\"\"Nuclear power in space is the use of nuclear power in outer space, typically either small fission systems or radioactive decay for electricity or heat. Another use is for scientific observation, as in a Mössbauer spectrometer. The most common type is a radioisotope thermoelectric generator, which has been used on many space probes and on crewed lunar missions. Small fission reactors for Earth observation satellites, such as the TOPAZ nuclear reactor, have also been flown.[1] A radioisotope heater unit is powered by radioactive decay and can keep components from becoming too cold to function, potentially over a span of decades.[2]\n",
"\n",
"The United States tested the SNAP-10A nuclear reactor in space for 43 days in 1965,[3] with the next test of a nuclear reactor power system intended for space use occurring on 13 September 2012 with the Demonstration Using Flattop Fission (DUFF) test of the Kilopower reactor.[4]\n",
"\n",
"After a ground-based test of the experimental 1965 Romashka reactor, which used uranium and direct thermoelectric conversion to electricity,[5] the USSR sent about 40 nuclear-electric satellites into space, mostly powered by the BES-5 reactor. The more powerful TOPAZ-II reactor produced 10 kilowatts of electricity.[3]\n",
"\n",
"Examples of concepts that use nuclear power for space propulsion systems include the nuclear electric rocket (nuclear powered ion thruster(s)), the radioisotope rocket, and radioisotope electric propulsion (REP).[6] One of the more explored concepts is the nuclear thermal rocket, which was ground tested in the NERVA program. Nuclear pulse propulsion was the subject of Project Orion.[7]\n",
"\n",
"Regulation and hazard prevention[edit]\n",
"After the ban of nuclear weapons in space by the Outer Space Treaty in 1967, nuclear power has been discussed at least since 1972 as a sensitive issue by states.[8] Particularly its potential hazards to Earth's environment and thus also humans has prompted states to adopt in the U.N. General Assembly the Principles Relevant to the Use of Nuclear Power Sources in Outer Space (1992), particularly introducing safety principles for launches and to manage their traffic.[8]\n",
"\n",
"Benefits\n",
"\n",
"Both the Viking 1 and Viking 2 landers used RTGs for power on the surface of Mars. (Viking launch vehicle pictured)\n",
"While solar power is much more commonly used, nuclear power can offer advantages in some areas. Solar cells, although efficient, can only supply energy to spacecraft in orbits where the solar flux is sufficiently high, such as low Earth orbit and interplanetary destinations close enough to the Sun. Unlike solar cells, nuclear power systems function independently of sunlight, which is necessary for deep space exploration. Nuclear-based systems can have less mass than solar cells of equivalent power, allowing more compact spacecraft that are easier to orient and direct in space. In the case of crewed spaceflight, nuclear power concepts that can power both life support and propulsion systems may reduce both cost and flight time.[9]\n",
"\n",
"Selected applications and/or technologies for space include:\n",
"\n",
"Radioisotope thermoelectric generator\n",
"Radioisotope heater unit\n",
"Radioisotope piezoelectric generator\n",
"Radioisotope rocket\n",
"Nuclear thermal rocket\n",
"Nuclear pulse propulsion\n",
"Nuclear electric rocket\n",
"\"\"\"\n",
"\n",
"docs = [\n",
" Document(\n",
" page_content=split,\n",
" metadata={\"source\": \"https://en.wikipedia.org/wiki/Nuclear_power_in_space\"},\n",
" )\n",
" for split in text.split(\"\\n\\n\")\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "d1998c41-1ebb-4d55-9c28-d2f3feb12657",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The Viking missions were powered by radioisotope thermoelectric generators (RTGs). These generators used the heat produced by the natural decay of plutonium-238 to generate electricity.\n"
]
}
],
"source": [
"print(\n",
" map_rerank_chain.invoke({\"docs\": docs, \"question\": \"How were the vikings powered\"})\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv",
"language": "python",
"name": "poetry-venv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@@ -1,236 +0,0 @@
{
"cells": [
{
"cell_type": "raw",
"id": "15868084-aec1-4e58-8524-32cbb12aa272",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 1\n",
"title: Refine\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "2d592270-ebbc-4310-abd8-86ef8d70fe81",
"metadata": {},
"source": [
"# Refine\n",
"\n",
"The Refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the latest intermediate answer to an LLM chain to get a new answer.\n",
"\n",
"Since the Refine chain only passes a single document to the LLM at a time, it is well-suited for tasks that require analyzing more documents than can fit in the model's context.\n",
"The obvious tradeoff is that this chain will make far more LLM calls than, for example, the Stuff documents chain.\n",
"There are also certain tasks which are difficult to accomplish iteratively. For example, the Refine chain can perform poorly when documents frequently cross-reference one another or when a task requires detailed information from many documents.\n",
"\n",
"![refine_diagram](../../../../static/img/refine.jpg)\n"
]
},
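{
"cell_type": "markdown",
"id": "5e6f7a8b-9c0d-4e4f-9a2b-1c0d9e8f7a6b",
"metadata": {},
"source": [
"For reference, the legacy chain can be assembled with the `load_summarize_chain` helper. A minimal sketch (assuming an Anthropic API key is configured, to match the rest of this notebook):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6f7a8b9c-0d1e-4f5a-8b3c-2d1e0f9a8b7c",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.summarize import load_summarize_chain\n",
"from langchain_community.chat_models import ChatAnthropic\n",
"\n",
"# Assembles the legacy RefineDocumentsChain: initial answer on the first\n",
"# document, then one refine call per remaining document.\n",
"legacy_refine = load_summarize_chain(ChatAnthropic(), chain_type=\"refine\")\n",
"# legacy_refine.invoke({\"input_documents\": docs}) -> {..., \"output_text\": \"...\"}"
]
},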
{
"cell_type": "markdown",
"id": "644838eb-6dbd-4d29-a557-628c1fa2d4c6",
"metadata": {},
"source": [
"## Recreating with LCEL\n",
"\n",
"With [LangChain Expression Language](/docs/expression_language) we can easily recreate the `RefineDocumentsChain`, with the additional benefit of getting all the built-in LCEL features (batch, async, etc.) and with much more ability to customize specific parts of the chain."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "2560d994-e1a4-4fe0-97c7-182b6bd7798b",
"metadata": {},
"outputs": [],
"source": [
"from functools import partial\n",
"from operator import itemgetter\n",
"\n",
"from langchain.callbacks.manager import trace_as_chain_group\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.schema import StrOutputParser\n",
"from langchain_community.chat_models import ChatAnthropic\n",
"from langchain_core.prompts import format_document"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "f8bb92d8-3961-4a75-b4ad-568a224c78b2",
"metadata": {},
"outputs": [],
"source": [
"# Chain for generating initial summary based on the first document\n",
"\n",
"llm = ChatAnthropic()\n",
"first_prompt = PromptTemplate.from_template(\"Summarize this content:\\n\\n{context}\")\n",
"document_prompt = PromptTemplate.from_template(\"{page_content}\")\n",
"partial_format_doc = partial(format_document, prompt=document_prompt)\n",
"summary_chain = {\"context\": partial_format_doc} | first_prompt | llm | StrOutputParser()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "9aedc203-b842-4d1b-84f0-23946b0acc7a",
"metadata": {},
"outputs": [],
"source": [
"# Chain for refining an existing summary based on\n",
"# an additional document\n",
"\n",
"refine_prompt = PromptTemplate.from_template(\n",
" \"Here's your first summary: {prev_response}. \"\n",
" \"Now add to it based on the following context: {context}\"\n",
")\n",
"refine_chain = (\n",
" {\n",
" \"prev_response\": itemgetter(\"prev_response\"),\n",
" \"context\": lambda x: partial_format_doc(x[\"doc\"]),\n",
" }\n",
" | refine_prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "cd118a07-1678-4274-9248-2c88656da686",
"metadata": {},
"outputs": [],
"source": [
"# The final refine loop, which generates an initial summary\n",
"# then iteratively refines it based on each of the rest of the documents\n",
"\n",
"\n",
"def refine_loop(docs):\n",
" with trace_as_chain_group(\"refine loop\", inputs={\"input\": docs}) as manager:\n",
" summary = summary_chain.invoke(\n",
" docs[0], config={\"callbacks\": manager, \"run_name\": \"initial summary\"}\n",
" )\n",
" for i, doc in enumerate(docs[1:]):\n",
" summary = refine_chain.invoke(\n",
" {\"prev_response\": summary, \"doc\": doc},\n",
" config={\"callbacks\": manager, \"run_name\": f\"refine {i}\"},\n",
" )\n",
" manager.on_chain_end({\"output\": summary})\n",
" return summary"
]
},
{
"cell_type": "markdown",
"id": "0d2ec3b0-a47a-4344-8f12-8bb644fdccae",
"metadata": {},
"source": [
"## Example run"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "d8242337-3aa4-4377-bd4c-3bf0c01f9dd7",
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema import Document\n",
"\n",
"text = \"\"\"Nuclear power in space is the use of nuclear power in outer space, typically either small fission systems or radioactive decay for electricity or heat. Another use is for scientific observation, as in a Mössbauer spectrometer. The most common type is a radioisotope thermoelectric generator, which has been used on many space probes and on crewed lunar missions. Small fission reactors for Earth observation satellites, such as the TOPAZ nuclear reactor, have also been flown.[1] A radioisotope heater unit is powered by radioactive decay and can keep components from becoming too cold to function, potentially over a span of decades.[2]\n",
"\n",
"The United States tested the SNAP-10A nuclear reactor in space for 43 days in 1965,[3] with the next test of a nuclear reactor power system intended for space use occurring on 13 September 2012 with the Demonstration Using Flattop Fission (DUFF) test of the Kilopower reactor.[4]\n",
"\n",
"After a ground-based test of the experimental 1965 Romashka reactor, which used uranium and direct thermoelectric conversion to electricity,[5] the USSR sent about 40 nuclear-electric satellites into space, mostly powered by the BES-5 reactor. The more powerful TOPAZ-II reactor produced 10 kilowatts of electricity.[3]\n",
"\n",
"Examples of concepts that use nuclear power for space propulsion systems include the nuclear electric rocket (nuclear powered ion thruster(s)), the radioisotope rocket, and radioisotope electric propulsion (REP).[6] One of the more explored concepts is the nuclear thermal rocket, which was ground tested in the NERVA program. Nuclear pulse propulsion was the subject of Project Orion.[7]\n",
"\n",
"Regulation and hazard prevention[edit]\n",
"After the ban of nuclear weapons in space by the Outer Space Treaty in 1967, nuclear power has been discussed at least since 1972 as a sensitive issue by states.[8] Particularly its potential hazards to Earth's environment and thus also humans has prompted states to adopt in the U.N. General Assembly the Principles Relevant to the Use of Nuclear Power Sources in Outer Space (1992), particularly introducing safety principles for launches and to manage their traffic.[8]\n",
"\n",
"Benefits\n",
"\n",
"Both the Viking 1 and Viking 2 landers used RTGs for power on the surface of Mars. (Viking launch vehicle pictured)\n",
"While solar power is much more commonly used, nuclear power can offer advantages in some areas. Solar cells, although efficient, can only supply energy to spacecraft in orbits where the solar flux is sufficiently high, such as low Earth orbit and interplanetary destinations close enough to the Sun. Unlike solar cells, nuclear power systems function independently of sunlight, which is necessary for deep space exploration. Nuclear-based systems can have less mass than solar cells of equivalent power, allowing more compact spacecraft that are easier to orient and direct in space. In the case of crewed spaceflight, nuclear power concepts that can power both life support and propulsion systems may reduce both cost and flight time.[9]\n",
"\n",
"Selected applications and/or technologies for space include:\n",
"\n",
"Radioisotope thermoelectric generator\n",
"Radioisotope heater unit\n",
"Radioisotope piezoelectric generator\n",
"Radioisotope rocket\n",
"Nuclear thermal rocket\n",
"Nuclear pulse propulsion\n",
"Nuclear electric rocket\n",
"\"\"\"\n",
"\n",
"docs = [\n",
" Document(\n",
" page_content=split,\n",
" metadata={\"source\": \"https://en.wikipedia.org/wiki/Nuclear_power_in_space\"},\n",
" )\n",
" for split in text.split(\"\\n\\n\")\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "68f23401-cb6b-4500-8576-7e1c9254dfef",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Here is the updated summary with the additional context:\n",
"\n",
"Here is a summary of the key points about nuclear power in space:\n",
"\n",
"- Nuclear power is used in space for electricity, heat, and scientific observation. The most common type is a radioisotope thermoelectric generator (RTG), which uses radioactive decay to generate electricity. RTGs have powered space probes and crewed lunar missions. \n",
"\n",
"- Small nuclear fission reactors have also been used to power Earth observation satellites, like the TOPAZ reactor. The United States tested the SNAP-10A nuclear reactor in space for 43 days in 1965.\n",
"\n",
"- After a ground-based test of the experimental 1965 Romashka reactor, which used uranium and direct thermoelectric conversion to electricity, the USSR sent about 40 nuclear-electric satellites into space, mostly powered by the BES-5 reactor. The more powerful TOPAZ-II reactor produced 10 kilowatts of electricity.\n",
"\n",
"- Radioisotope heater units use radioactive decay for heat. They can keep components warm enough to function over decades.\n",
"\n",
"- Nuclear power concepts have also been proposed and tested for space propulsion. Examples include the nuclear electric rocket (nuclear powered ion thruster(s)), the radioisotope\n"
]
}
],
"source": [
"print(refine_loop(docs))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1db2ea25-0f93-4e23-8793-b7b94df8be07",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv",
"language": "python",
"name": "poetry-venv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@@ -1,177 +0,0 @@
{
"cells": [
{
"cell_type": "raw",
"id": "bafbef65-ace3-42ce-83f3-553909b48685",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 0\n",
"title: Stuff\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "1c473378-ff18-45cf-b718-43f6f94af040",
"metadata": {},
"source": [
"The stuff documents chain (\"stuff\" as in \"to stuff\" or \"to fill\") is the most straightforward of the document chains. It takes a list of documents, inserts them all into a prompt and passes that prompt to an LLM.\n",
"\n",
"This chain is well-suited for applications where documents are small and only a few are passed in for most calls.\n",
"\n",
"![stuff_diagram](../../../../static/img/stuff.jpg)"
]
},
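{
"cell_type": "markdown",
"id": "7a8b9c0d-1e2f-4a6b-9c4d-3e2f1a0b9c8d",
"metadata": {},
"source": [
"For reference, here is a sketch of constructing the legacy `StuffDocumentsChain` directly (assuming an Anthropic API key is configured, to match the rest of this notebook):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8b9c0d1e-2f3a-4b7c-8d5e-4f3a2b1c0d9e",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import LLMChain, StuffDocumentsChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_community.chat_models import ChatAnthropic\n",
"\n",
"stuff_prompt = PromptTemplate.from_template(\"Summarize the following content:\\n\\n{context}\")\n",
"# Stuffs all documents into a single prompt and makes one LLM call.\n",
"legacy_stuff = StuffDocumentsChain(\n",
"    llm_chain=LLMChain(llm=ChatAnthropic(), prompt=stuff_prompt),\n",
"    document_variable_name=\"context\",\n",
")\n",
"# legacy_stuff.invoke({\"input_documents\": docs}) -> {..., \"output_text\": \"...\"}"
]
},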
{
"cell_type": "markdown",
"id": "1f20da9a-2a7f-4fd1-9dbc-c01f164b078b",
"metadata": {},
"source": [
"## Recreating with LCEL\n",
"\n",
"With [LangChain Expression Language](/docs/expression_language) we can easily recreate the `StuffDocumentsChain` functionality, with the additional benefit of getting all the built-in LCEL features (batch, async, etc.) and with much more ability to customize specific parts of the chain."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c0f33d9f-2cf2-4064-92f6-39fbd5a597e3",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"from langchain.schema import StrOutputParser\n",
"from langchain_community.chat_models import ChatAnthropic\n",
"from langchain_core.prompts import format_document"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "3e355e46-6375-4f5e-a102-fdbf72476422",
"metadata": {},
"outputs": [],
"source": [
"doc_prompt = PromptTemplate.from_template(\"{page_content}\")\n",
"\n",
"chain = (\n",
" {\n",
" \"content\": lambda docs: \"\\n\\n\".join(\n",
" format_document(doc, doc_prompt) for doc in docs\n",
" )\n",
" }\n",
" | PromptTemplate.from_template(\"Summarize the following content:\\n\\n{content}\")\n",
" | ChatAnthropic()\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "markdown",
"id": "ece7db13-5d4b-490f-8677-387bf4eac944",
"metadata": {},
"source": [
"### Example run\n",
"\n",
"Lets run this summarization chain on some sample data."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "a0ddd725-d97c-4e22-8509-7337d3a71dff",
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema import Document\n",
"\n",
"text = \"\"\"Nuclear power in space is the use of nuclear power in outer space, typically either small fission systems or radioactive decay for electricity or heat. Another use is for scientific observation, as in a Mössbauer spectrometer. The most common type is a radioisotope thermoelectric generator, which has been used on many space probes and on crewed lunar missions. Small fission reactors for Earth observation satellites, such as the TOPAZ nuclear reactor, have also been flown.[1] A radioisotope heater unit is powered by radioactive decay and can keep components from becoming too cold to function, potentially over a span of decades.[2]\n",
"\n",
"The United States tested the SNAP-10A nuclear reactor in space for 43 days in 1965,[3] with the next test of a nuclear reactor power system intended for space use occurring on 13 September 2012 with the Demonstration Using Flattop Fission (DUFF) test of the Kilopower reactor.[4]\n",
"\n",
"After a ground-based test of the experimental 1965 Romashka reactor, which used uranium and direct thermoelectric conversion to electricity,[5] the USSR sent about 40 nuclear-electric satellites into space, mostly powered by the BES-5 reactor. The more powerful TOPAZ-II reactor produced 10 kilowatts of electricity.[3]\n",
"\n",
"Examples of concepts that use nuclear power for space propulsion systems include the nuclear electric rocket (nuclear powered ion thruster(s)), the radioisotope rocket, and radioisotope electric propulsion (REP).[6] One of the more explored concepts is the nuclear thermal rocket, which was ground tested in the NERVA program. Nuclear pulse propulsion was the subject of Project Orion.[7]\n",
"\n",
"Regulation and hazard prevention[edit]\n",
"After the ban of nuclear weapons in space by the Outer Space Treaty in 1967, nuclear power has been discussed at least since 1972 as a sensitive issue by states.[8] Particularly its potential hazards to Earth's environment and thus also humans has prompted states to adopt in the U.N. General Assembly the Principles Relevant to the Use of Nuclear Power Sources in Outer Space (1992), particularly introducing safety principles for launches and to manage their traffic.[8]\n",
"\n",
"Benefits\n",
"\n",
"Both the Viking 1 and Viking 2 landers used RTGs for power on the surface of Mars. (Viking launch vehicle pictured)\n",
"While solar power is much more commonly used, nuclear power can offer advantages in some areas. Solar cells, although efficient, can only supply energy to spacecraft in orbits where the solar flux is sufficiently high, such as low Earth orbit and interplanetary destinations close enough to the Sun. Unlike solar cells, nuclear power systems function independently of sunlight, which is necessary for deep space exploration. Nuclear-based systems can have less mass than solar cells of equivalent power, allowing more compact spacecraft that are easier to orient and direct in space. In the case of crewed spaceflight, nuclear power concepts that can power both life support and propulsion systems may reduce both cost and flight time.[9]\n",
"\n",
"Selected applications and/or technologies for space include:\n",
"\n",
"Radioisotope thermoelectric generator\n",
"Radioisotope heater unit\n",
"Radioisotope piezoelectric generator\n",
"Radioisotope rocket\n",
"Nuclear thermal rocket\n",
"Nuclear pulse propulsion\n",
"Nuclear electric rocket\n",
"\"\"\"\n",
"\n",
"docs = [\n",
" Document(\n",
" page_content=split,\n",
" metadata={\"source\": \"https://en.wikipedia.org/wiki/Nuclear_power_in_space\"},\n",
" )\n",
" for split in text.split()\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "bd18f647-bdb4-46ce-b01f-fe8afc208ffa",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Here is a summary of the key points:\n",
"\n",
"- Nuclear power has been used in space for electricity, heat, and scientific observation. The most common type is a radioisotope thermoelectric generator, used on many probes and lunar missions. \n",
"\n",
"- Small fission reactors have been used for Earth observation satellites. Radioisotope heater units use radioactive decay to keep components warm for decades.\n",
"\n",
"- The US tested a nuclear reactor in space in 1965. The Soviet Union launched around 40 nuclear-powered satellites, mostly with BES-5 reactors.\n",
"\n",
"- Concepts for nuclear propulsion include nuclear thermal rockets, nuclear electric rockets, and nuclear pulse propulsion. The NERVA program ground tested nuclear thermal rockets.\n",
"\n",
"- After the 1967 Outer Space Treaty banned nuclear weapons in space, safety principles were introduced for nuclear power launch and traffic management.\n",
"\n",
"- Benefits of nuclear power in space include functioning independently of sunlight needed for deep space exploration, less mass than equivalent solar power, and ability to power both life support and propulsion.\n"
]
}
],
"source": [
"print(chain.invoke(docs))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -1,8 +0,0 @@
---
sidebar_position: 1
---
# Foundational
import DocCardList from "@theme/DocCardList";
<DocCardList />

@ -1,375 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "7d46647d-638f-497e-b51a-52bf8dd76e39",
"metadata": {},
"source": [
"# LLM\n",
"\n",
"The most common type of chaining in any LLM application is combining a prompt template with an LLM and optionally an output parser.\n",
"\n",
"The recommended way to do this is using LangChain Expression Language. We also continue to support the legacy `LLMChain`, which is a single class for composing these three components."
]
},
{
"cell_type": "markdown",
"id": "0ad20b88-f2e8-4ba0-b8e6-1892ab4d2190",
"metadata": {},
"source": [
"## Using LCEL\n",
"\n",
"`BasePromptTemplate`, `BaseLanguageModel` and `BaseOutputParser` all implement the `Runnable` interface and are designed to be piped into one another, making LCEL composition very easy:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "92ad7c9d-a1d2-49bd-a4a3-0f6f0fd1656b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'VibrantSocks'"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.prompts import PromptTemplate\n",
"from langchain.schema import StrOutputParser\n",
"from langchain_community.chat_models import ChatOpenAI\n",
"\n",
"prompt = PromptTemplate.from_template(\n",
" \"What is a good name for a company that makes {product}?\"\n",
")\n",
"runnable = prompt | ChatOpenAI() | StrOutputParser()\n",
"runnable.invoke({\"product\": \"colorful socks\"})"
]
},
{
"cell_type": "markdown",
"id": "784d8083-a2c8-4172-92b8-0bd0d74f032a",
"metadata": {},
"source": [
"Head to the [LCEL](/docs/expression_language) section for more on the interface, built-in features, and cookbook examples."
]
},
{
"cell_type": "markdown",
"id": "efee07bb-fc45-4e06-999f-a776e6d53333",
"metadata": {},
"source": [
"## [Legacy] LLMChain\n",
"\n",
":::note\n",
"\n",
"This is a legacy class, using LCEL as shown above is preferred.\n",
"\n",
":::\n",
"\n",
"An `LLMChain` is a simple chain that adds some functionality around language models. It is used widely throughout LangChain, including in other chains and agents.\n",
"\n",
"An `LLMChain` consists of a `PromptTemplate` and a language model (either an LLM or chat model). It formats the prompt template using the input key values provided (and also memory key values, if available), passes the formatted string to LLM and returns the LLM output.\n",
"\n",
"### Get started"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "fc0b7d6c-b808-48d9-bdb5-818ab4a1ccca",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'product': 'colorful socks', 'text': '\\n\\nSocktastic!'}"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_community.llms import OpenAI\n",
"\n",
"prompt_template = \"What is a good name for a company that makes {product}?\"\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"llm_chain = LLMChain(llm=llm, prompt=PromptTemplate.from_template(prompt_template))\n",
"llm_chain(\"colorful socks\")"
]
},
{
"cell_type": "markdown",
"id": "040634f0-fe60-4b0e-b3f6-e9c15146e2cd",
"metadata": {},
"source": [
"### Additional ways of running `LLMChain`\n",
"\n",
"Aside from `__call__` and `run` methods shared by all `Chain` object, `LLMChain` offers a few more ways of calling the chain logic:\n",
"\n",
"- `apply` allows you run the chain against a list of inputs:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "8cd8dd72-6d5a-488f-80a6-1a9324c743e8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'text': '\\n\\nSocktastic!'},\n",
" {'text': '\\n\\nTechCore Solutions.'},\n",
" {'text': '\\n\\nFootwear Factory.'}]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"input_list = [{\"product\": \"socks\"}, {\"product\": \"computer\"}, {\"product\": \"shoes\"}]\n",
"llm_chain.apply(input_list)"
]
},
{
"cell_type": "markdown",
"id": "18624d04-474a-425e-bcf3-58748b747e08",
"metadata": {},
"source": [
"- `generate` is similar to `apply`, except it return an `LLMResult` instead of string. `LLMResult` often contains useful generation such as token usages and finish reason."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "67e72139-686d-40eb-9c1e-4342d3b1abfe",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"LLMResult(generations=[[Generation(text='\\n\\nSocktastic!', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nTechCore Solutions.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\\n\\nFootwear Factory.', generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 36, 'total_tokens': 55}, 'model_name': 'text-davinci-003'}, run=[RunInfo(run_id=UUID('9a423a43-6d35-4e8f-9aca-cacfc8e0dc49')), RunInfo(run_id=UUID('a879c077-b521-461c-8f29-ba63adfc327c')), RunInfo(run_id=UUID('40b892fa-e8c2-47d0-a309-4f7a4ed5b64a'))])"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_chain.generate(input_list)"
]
},
{
"cell_type": "markdown",
"id": "0480da3a-865d-4ec5-9366-e29c3967fef3",
"metadata": {},
"source": [
"- `predict` is similar to `run` method except that the input keys are specified as keyword arguments instead of a Python dict."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "f4afb8a4-9113-4082-85cb-55a2d406c99a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\nSocktastic!'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Single input example\n",
"llm_chain.predict(product=\"colorful socks\")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "e58eaab1-4db4-43cb-b523-7b3380332cad",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\nQ: What did the duck say when his friend died?\\nA: Quack, quack, goodbye.'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Multiple inputs example\n",
"template = \"\"\"Tell me a {adjective} joke about {subject}.\"\"\"\n",
"prompt = PromptTemplate(template=template, input_variables=[\"adjective\", \"subject\"])\n",
"llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0))\n",
"\n",
"llm_chain.predict(adjective=\"sad\", subject=\"ducks\")"
]
},
{
"cell_type": "markdown",
"id": "63f02d9e-6470-41d3-b91c-b064baf84733",
"metadata": {},
"source": [
"### Parsing the outputs\n",
"\n",
"By default, `LLMChain` does not parse the output even if the underlying `prompt` object has an output parser. If you would like to apply that output parser on the LLM output, use `predict_and_parse` instead of `predict` and `apply_and_parse` instead of `apply`.\n",
"\n",
"With `predict`:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "134126ca-2f1c-4829-94ba-810d91c92138",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\nRed, orange, yellow, green, blue, indigo, violet'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.output_parsers import CommaSeparatedListOutputParser\n",
"\n",
"output_parser = CommaSeparatedListOutputParser()\n",
"template = \"\"\"List all the colors in a rainbow\"\"\"\n",
"prompt = PromptTemplate(\n",
" template=template, input_variables=[], output_parser=output_parser\n",
")\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"\n",
"llm_chain.predict()"
]
},
{
"cell_type": "markdown",
"id": "7a46f1e8-daaf-43d6-8045-9b187655631b",
"metadata": {},
"source": [
"With `predict_and_parse`:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "7ef9b74d-7ef5-4b80-80cc-f8226f79259b",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/bagatur/langchain/libs/langchain/langchain/chains/llm.py:280: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.\n",
" warnings.warn(\n"
]
},
{
"data": {
"text/plain": [
"['Red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_chain.predict_and_parse()"
]
},
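{
"cell_type": "markdown",
"id": "8f2a4c61",
"metadata": {},
"source": [
"As the deprecation warning above suggests, the preferred pattern is to pass the output parser to the chain directly. A minimal sketch of that approach, reusing the `prompt`, `llm`, and `output_parser` defined above:\n",
"\n",
"```python\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm, output_parser=output_parser)\n",
"# predict() now returns the parsed output, e.g. ['Red', 'orange', ...]\n",
"llm_chain.predict()\n",
"```"
]
},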
{
"cell_type": "markdown",
"id": "93446f7f-0a2d-4fc5-99a1-a26cc0605b4b",
"metadata": {},
"source": [
"### Initialize from string\n",
"\n",
"You can also construct an `LLMChain` from a string template directly."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "7e324174-e8ab-4095-87cb-17874a058da9",
"metadata": {},
"outputs": [],
"source": [
"template = \"\"\"Tell me a {adjective} joke about {subject}.\"\"\"\n",
"llm_chain = LLMChain.from_string(llm=llm, template=template)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "a4f10407-6519-4174-89fe-e7507765f1ae",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\nQ: What did the duck say when his friend died?\\nA: Quack, quack, goodbye.'"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_chain.predict(adjective=\"sad\", subject=\"ducks\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -1,567 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "a5cf6c49",
"metadata": {},
"source": [
"# Router\n",
"\n",
"Routing allows you to create non-deterministic chains where the output of a previous step defines the next step. Routing helps provide structure and consistency around interactions with LLMs.\n",
"\n",
"As a very simple example, let's suppose we have two templates optimized for different types of questions, and we want to choose the template based on the user input."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "8d11fa5c",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"\n",
"physics_template = \"\"\"You are a very smart physics professor. \\\n",
"You are great at answering questions about physics in a concise and easy to understand manner. \\\n",
"When you don't know the answer to a question you admit that you don't know.\n",
"\n",
"Here is a question:\n",
"{input}\"\"\"\n",
"physics_prompt = PromptTemplate.from_template(physics_template)\n",
"\n",
"math_template = \"\"\"You are a very good mathematician. You are great at answering math questions. \\\n",
"You are so good because you are able to break down hard problems into their component parts, \\\n",
"answer the component parts, and then put them together to answer the broader question.\n",
"\n",
"Here is a question:\n",
"{input}\"\"\"\n",
"math_prompt = PromptTemplate.from_template(math_template)"
]
},
{
"cell_type": "markdown",
"id": "892bb71f-e4f4-431e-8321-fe6a40e71b78",
"metadata": {},
"source": [
"## Using LCEL\n",
"\n",
"We can easily do this using a `RunnableBranch`. A `RunnableBranch` is initialized with a list of (condition, runnable) pairs and a default runnable. It selects which branch by passing each condition the input it's invoked with. It selects the first condition to evaluate to True, and runs the corresponding runnable to that condition with the input. \n",
"\n",
"If no provided conditions match, it runs the default runnable."
]
},
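{
"cell_type": "markdown",
"id": "5d7e2b90",
"metadata": {},
"source": [
"As a toy sketch of these first-match semantics (plain functions are coerced to runnables here; the branches are purely illustrative):\n",
"\n",
"```python\n",
"from langchain_core.runnables import RunnableBranch\n",
"\n",
"branch = RunnableBranch(\n",
"    (lambda x: isinstance(x, str), lambda x: x.upper()),\n",
"    (lambda x: isinstance(x, int), lambda x: x + 1),\n",
"    lambda x: \"default\",  # runs when no condition matches\n",
")\n",
"branch.invoke(\"hello\")  # -> 'HELLO', from the first condition that returned True\n",
"```"
]
},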
{
"cell_type": "code",
"execution_count": 7,
"id": "f2c4cdb4-1108-491c-9f6f-bbceeb452e29",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatOpenAI\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnableBranch"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "49308c5c-8722-4fb0-b78d-3b1dac0e656d",
"metadata": {},
"outputs": [],
"source": [
"general_prompt = PromptTemplate.from_template(\n",
" \"You are a helpful assistant. Answer the question as accurately as you can.\\n\\n{input}\"\n",
")\n",
"prompt_branch = RunnableBranch(\n",
" (lambda x: x[\"topic\"] == \"math\", math_prompt),\n",
" (lambda x: x[\"topic\"] == \"physics\", physics_prompt),\n",
" general_prompt,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "750da8ec-7c1c-4a0e-9c94-e3a1da49319b",
"metadata": {},
"outputs": [],
"source": [
"from typing import Literal\n",
"\n",
"from langchain.output_parsers.openai_functions import PydanticAttrOutputFunctionsParser\n",
"from langchain.utils.openai_functions import convert_pydantic_to_openai_function\n",
"from langchain_core.pydantic_v1 import BaseModel\n",
"\n",
"\n",
"class TopicClassifier(BaseModel):\n",
" \"Classify the topic of the user question\"\n",
"\n",
" topic: Literal[\"math\", \"physics\", \"general\"]\n",
" \"The topic of the user question. One of 'math', 'physics' or 'general'.\"\n",
"\n",
"\n",
"classifier_function = convert_pydantic_to_openai_function(TopicClassifier)\n",
"llm = ChatOpenAI().bind(\n",
" functions=[classifier_function], function_call={\"name\": \"TopicClassifier\"}\n",
")\n",
"parser = PydanticAttrOutputFunctionsParser(\n",
" pydantic_schema=TopicClassifier, attr_name=\"topic\"\n",
")\n",
"classifier_chain = llm | parser"
]
},
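{
"cell_type": "markdown",
"id": "2c9b7e44",
"metadata": {},
"source": [
"The `classifier_chain` forces the model to call the `TopicClassifier` function and extracts just its `topic` attribute, so it maps a raw question string to one of `\"math\"`, `\"physics\"`, or `\"general\"`. We can now use it to populate the `topic` key that `prompt_branch` expects:"
]
},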
{
"cell_type": "code",
"execution_count": 13,
"id": "35be97db-2b31-4503-af56-2cae802a9822",
"metadata": {},
"outputs": [],
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"final_chain = (\n",
" RunnablePassthrough.assign(topic=itemgetter(\"input\") | classifier_chain)\n",
" | prompt_branch\n",
" | ChatOpenAI()\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "9b161436-432b-4ecd-9752-5f458a7b1d54",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Thank you for your kind words! I'll be happy to help you with this math question.\\n\\nTo find the first prime number greater than 40 that satisfies the given condition, we need to follow a step-by-step approach. \\n\\nFirstly, let's list the prime numbers greater than 40:\\n41, 43, 47, 53, 59, 61, 67, 71, ...\\n\\nNow, we need to check if one plus each of these prime numbers is divisible by 3. We can do this by calculating the remainder when dividing each number by 3.\\n\\nFor 41, (41 + 1) % 3 = 42 % 3 = 0. It is divisible by 3.\\n\\nFor 43, (43 + 1) % 3 = 44 % 3 = 2. It is not divisible by 3.\\n\\nFor 47, (47 + 1) % 3 = 48 % 3 = 0. It is divisible by 3.\\n\\nSince 41 and 47 are both greater than 40 and satisfy the condition, the first prime number greater than 40 such that one plus the prime number is divisible by 3 is 41.\\n\\nTherefore, the answer to the question is 41.\""
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"final_chain.invoke(\n",
" {\n",
" \"input\": \"What is the first prime number greater than 40 such that one plus the prime number is divisible by 3?\"\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "aa11d8fb-b9f2-4427-9c7f-2146f84cba72",
"metadata": {},
"source": [
"For more on routing with LCEL [head here](/docs/expression_language/how_to/routing)."
]
},
{
"cell_type": "markdown",
"id": "681af961-388e-4b37-9572-4f084365abba",
"metadata": {},
"source": [
"## [Legacy] RouterChain\n",
"\n",
":::note The preferred approach as of version `0.0.293` is to use LCEL as above.\n",
"\n",
"Here we show how to use the `RouterChain` paradigm to create a chain that dynamically selects the next chain to use for a given input. \n",
"\n",
"Router chains are made up of two components:\n",
"\n",
"- The `RouterChain` itself (responsible for selecting the next chain to call)\n",
"- `destination_chains`: chains that the router chain can route to\n",
"\n",
"\n",
"In this example, we will focus on the different types of routing chains. We will show these routing chains used in a `MultiPromptChain` to create a question-answering chain that selects the prompt which is most relevant for a given question, and then answers the question using that prompt."
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "e8d624d4",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import ConversationChain\n",
"from langchain.chains.llm import LLMChain\n",
"from langchain.chains.router import MultiPromptChain\n",
"from langchain_community.llms import OpenAI"
]
},
{
"cell_type": "markdown",
"id": "83cea2d5",
"metadata": {},
"source": [
"### [Legacy] LLMRouterChain\n",
"\n",
"This chain uses an LLM to determine how to route things."
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "d0b8856e",
"metadata": {},
"outputs": [],
"source": [
"prompt_infos = [\n",
" {\n",
" \"name\": \"physics\",\n",
" \"description\": \"Good for answering questions about physics\",\n",
" \"prompt_template\": physics_template,\n",
" },\n",
" {\n",
" \"name\": \"math\",\n",
" \"description\": \"Good for answering math questions\",\n",
" \"prompt_template\": math_template,\n",
" },\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "de2dc0f0",
"metadata": {},
"outputs": [],
"source": [
"llm = OpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "f27c154a",
"metadata": {},
"outputs": [],
"source": [
"destination_chains = {}\n",
"for p_info in prompt_infos:\n",
" name = p_info[\"name\"]\n",
" prompt_template = p_info[\"prompt_template\"]\n",
" prompt = PromptTemplate(template=prompt_template, input_variables=[\"input\"])\n",
" chain = LLMChain(llm=llm, prompt=prompt)\n",
" destination_chains[name] = chain\n",
"default_chain = ConversationChain(llm=llm, output_key=\"text\")"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "60142895",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser\n",
"from langchain.chains.router.multi_prompt_prompt import MULTI_PROMPT_ROUTER_TEMPLATE"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "60769f96",
"metadata": {},
"outputs": [],
"source": [
"destinations = [f\"{p['name']}: {p['description']}\" for p in prompt_infos]\n",
"destinations_str = \"\\n\".join(destinations)\n",
"router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations_str)\n",
"router_prompt = PromptTemplate(\n",
" template=router_template,\n",
" input_variables=[\"input\"],\n",
" output_parser=RouterOutputParser(),\n",
")\n",
"router_chain = LLMRouterChain.from_llm(llm, router_prompt)"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "db679975",
"metadata": {},
"outputs": [],
"source": [
"chain = MultiPromptChain(\n",
" router_chain=router_chain,\n",
" destination_chains=destination_chains,\n",
" default_chain=default_chain,\n",
" verbose=True,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "90fd594c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new MultiPromptChain chain...\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/bagatur/langchain/libs/langchain/langchain/chains/llm.py:280: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.\n",
" warnings.warn(\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"physics: {'input': 'What is black body radiation?'}\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\n",
"Black body radiation is the thermal electromagnetic radiation within or surrounding a body in thermodynamic equilibrium with its environment, or emitted by a black body (an idealized physical body which absorbs all incident electromagnetic radiation). It is a characteristic of the temperature of the body; if the body has a uniform temperature, the radiation is also uniform across the spectrum of frequencies. The spectral characteristics of the radiation are determined by the temperature of the body, which implies that a black body at a given temperature will emit the same amount of radiation at every frequency.\n"
]
}
],
"source": [
"print(chain.run(\"What is black body radiation?\"))"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "b8c83765",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new MultiPromptChain chain...\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/bagatur/langchain/libs/langchain/langchain/chains/llm.py:280: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.\n",
" warnings.warn(\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"math: {'input': 'What is the first prime number greater than 40 such that one plus the prime number is divisible by 3?'}\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\n",
"The first prime number greater than 40 such that one plus the prime number is divisible by 3 is 43. This can be seen by breaking down the problem:\n",
"\n",
"1) We know that a prime number is a number that is only divisible by itself and one. \n",
"2) We also know that if a number is divisible by 3, the sum of its digits must be divisible by 3. \n",
"\n",
"So, if we want to find the first prime number greater than 40 such that one plus the prime number is divisible by 3, we can start counting up from 40, testing each number to see if it is prime and if the sum of the number and one is divisible by three. \n",
"\n",
"The first number we come to that satisfies these conditions is 43.\n"
]
}
],
"source": [
"print(\n",
" chain.run(\n",
" \"What is the first prime number greater than 40 such that one plus the prime number is divisible by 3?\"\n",
" )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "74c6bba7",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new MultiPromptChain chain...\u001b[0m\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/bagatur/langchain/libs/langchain/langchain/chains/llm.py:280: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.\n",
" warnings.warn(\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"physics: {'input': 'What is the name of the type of cloud that rains?'}\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\n",
"The type of cloud that rains is called a cumulonimbus cloud.\n"
]
}
],
"source": [
"print(chain.run(\"What is the name of the type of cloud that rains?\"))"
]
},
{
"cell_type": "markdown",
"id": "239d4743",
"metadata": {},
"source": [
"## [Legacy] EmbeddingRouterChain\n",
"\n",
"The `EmbeddingRouterChain` uses embeddings and similarity to route between destination chains."
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "55c3ed0e",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.router.embedding_router import EmbeddingRouterChain\n",
"from langchain_community.embeddings import CohereEmbeddings\n",
"from langchain_community.vectorstores import Chroma"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "572a5082",
"metadata": {},
"outputs": [],
"source": [
"names_and_descriptions = [\n",
" (\"physics\", [\"for questions about physics\"]),\n",
" (\"math\", [\"for questions about math\"]),\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "50221efe",
"metadata": {},
"outputs": [],
"source": [
"router_chain = EmbeddingRouterChain.from_names_and_descriptions(\n",
" names_and_descriptions, Chroma, CohereEmbeddings(), routing_keys=[\"input\"]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "ff7996a0",
"metadata": {},
"outputs": [],
"source": [
"chain = MultiPromptChain(\n",
" router_chain=router_chain,\n",
" destination_chains=destination_chains,\n",
" default_chain=default_chain,\n",
" verbose=True,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "99270cc9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new MultiPromptChain chain...\u001b[0m\n",
"physics: {'input': 'What is black body radiation?'}\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\n",
"Black body radiation is the electromagnetic radiation emitted by a black body, which is an idealized physical body that absorbs all incident electromagnetic radiation. This radiation is related to the temperature of the body, with higher temperatures leading to higher radiation levels. The spectrum of the radiation is continuous, and is described by the Planck's law of black body radiation.\n"
]
}
],
"source": [
"print(chain.run(\"What is black body radiation?\"))"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "b5ce6238",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new MultiPromptChain chain...\u001b[0m\n",
"math: {'input': 'What is the first prime number greater than 40 such that one plus the prime number is divisible by 3?'}\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\n",
"The first prime number greater than 40 such that one plus the prime number is divisible by 3 is 43. This is because 43 is a prime number, and 1 + 43 = 44, which is divisible by 3.\n"
]
}
],
"source": [
"print(\n",
" chain.run(\n",
" \"What is the first prime number greater than 40 such that one plus the prime number is divisible by 3?\"\n",
" )\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -1,440 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "119af92c-f02c-4729-84ac-0f69d6208c1b",
"metadata": {},
"source": [
"# Sequential\n",
"\n",
"The next step after calling a language model is to make a series of calls to a language model. This is particularly useful when you want to take the output from one call and use it as the input to another.\n",
"\n",
"The recommended way to do this is using the LangChain Expression Language. The legacy way is using the `SequentialChain`, which we continue to document here for backwards compatibility.\n",
"\n",
"As a toy example, let's suppose we want to create a chain that first creates a play synopsis and then generates a play review based on the synopsis."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "443e62b9-8a68-468e-b91d-f19de2993fe8",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"\n",
"synopsis_prompt = PromptTemplate.from_template(\n",
" \"\"\"You are a playwright. Given the title of play, it is your job to write a synopsis for that title.\n",
"\n",
"Title: {title}\n",
"Playwright: This is a synopsis for the above play:\"\"\"\n",
")\n",
"\n",
"review_prompt = PromptTemplate.from_template(\n",
" \"\"\"You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.\n",
"\n",
"Play Synopsis:\n",
"{synopsis}\n",
"Review from a New York Times play critic of the above play:\"\"\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "7d1b284f-73b4-4f3c-ab88-e4c6f4b0bf76",
"metadata": {},
"source": [
"## Using LCEL\n",
"\n",
"Creating a sequence of calls (to LLMs or any other component/arbitrary function) is precisely what LangChain Expression Language was designed for."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "c0a43154-7624-41b7-9832-f2022af41fba",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'In \"Tragedy at Sunset on the Beach,\" playwright has crafted a deeply affecting drama that delves into the complexities of human relationships and the consequences that arise from one fateful evening. Set against the breathtaking backdrop of a serene beach at sunset, the play takes audiences on an emotional journey as it explores the lives of four individuals whose paths intertwine in unexpected and tragic ways.\\n\\nAt the center of the story is Sarah, a young woman grappling with the recent loss of her husband. Seeking solace and a fresh start, she embarks on a solitary trip to the beach, hoping to find peace and clarity. It is here that she encounters James, a charismatic but troubled artist, lost in his own world of anguish and self-doubt. The unlikely connection they form becomes the catalyst for a series of heart-wrenching events, as their emotional baggage and personal demons collide.\\n\\nThe play skillfully weaves together the narratives of Sarah, James, and Rachel, Sarah\\'s best friend. As Rachel arrives on the beach with the intention of helping Sarah heal, she unknowingly carries a secret that threatens to shatter their friendship forever. Against the backdrop of crashing waves and vibrant sunsets, the characters\\' lives unravel, exposing hidden desires, betrayals, and deeply buried secrets. The boundaries of love, friendship, and loyalty blur, forcing each character to confront their own vulnerabilities and face the consequences of their choices.\\n\\nWhat sets \"Tragedy at Sunset on the Beach\" apart is its ability to evoke genuine emotion from its audience. The playwright\\'s poignant exploration of the human condition touches upon universal themes of loss, forgiveness, and the lengths we go to protect the ones we love. The richly drawn characters come alive on stage, their struggles and triumphs resonating deeply with the audience. Moments of intense emotion are skillfully crafted, leaving spectators captivated and moved.\\n\\nThe play\\'s evocative setting adds another layer of depth to the storytelling. The picturesque beach at sunset becomes a metaphor for the fragility of life and the fleeting nature of happiness. The crashing waves and vibrant colors serve as a backdrop to the characters\\' unraveling lives, heightening the emotional impact of their stories.\\n\\nWhile \"Tragedy at Sunset on the Beach\" is undeniably a heavy and somber play, it ultimately leaves audiences questioning the power of redemption. The characters\\' journeys, though tragic, offer glimpses of hope and the potential for healing. It reminds us that even amidst the darkest moments, there is still a chance for redemption and forgiveness.\\n\\nOverall, \"Tragedy at Sunset on the Beach\" is a thought-provoking and emotionally charged play that will captivate audiences from start to finish. The playwright\\'s skillful storytelling, evocative setting, and richly drawn characters make for a truly memorable theatrical experience. This is a play that will leave spectators questioning their own lives and the choices they make, long after the curtain falls.'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.schema import StrOutputParser\n",
"from langchain_community.chat_models import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI()\n",
"chain = (\n",
" {\"synopsis\": synopsis_prompt | llm | StrOutputParser()}\n",
" | review_prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")\n",
"chain.invoke({\"title\": \"Tragedy at sunset on the beach\"})"
]
},
{
"cell_type": "markdown",
"id": "c37f72d5-a005-444b-b97e-39df86c515c7",
"metadata": {},
"source": [
"If we wanted to get back the synopsis as well we could do:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "9f9fb8ad-b6eb-49c3-a1d1-83f4460525e6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'synopsis': 'Tragedy at Sunset on the Beach is a gripping and emotionally charged drama that delves into the complexities of human relationships and the fragility of life. Set against the backdrop of a picturesque beach at sunset, the play follows a group of friends who gather to celebrate a joyous occasion.\\n\\nAs the sun begins its descent, tensions simmer beneath the surface, and long-held secrets and resentments come to light. The characters find themselves entangled in a web of love, betrayal, and loss, as they confront their deepest fears and desires.\\n\\nThe main focus revolves around Sarah, a vibrant and free-spirited woman who becomes the center of a tragic event. Through a series of flashback scenes, we witness the unraveling of her life, exploring her complicated relationships with her closest friends and romantic partners.\\n\\nThe play explores themes of regret, redemption, and the consequences of our choices. It delves into the human condition, questioning the nature of happiness and the value of time. The audience is taken on an emotional rollercoaster, experiencing moments of laughter, heartache, and profound reflection.\\n\\nTragedy at Sunset on the Beach challenges conventional notions of tragedy, evoking a sense of empathy and understanding for the flawed and vulnerable characters. It serves as a reminder that life is unpredictable and fragile, urging us to cherish every moment and embrace the beauty that exists even amidst tragedy.',\n",
" 'review': \"In Tragedy at Sunset on the Beach, playwright John Smithson delivers a powerful and thought-provoking exploration of the human experience. Set against the stunning backdrop of a beach at sunset, this emotionally charged drama takes the audience on a journey through the complexities of relationships, the fragility of life, and the profound impact of our choices.\\n\\nSmithson skillfully weaves together a tale of love, betrayal, and loss, as a group of friends gather to celebrate a joyous occasion. As the sun sets, tensions rise, and long-held secrets and resentments are exposed, leaving the characters entangled in a web of emotions. Through a series of poignant flashback scenes, we witness the unraveling of Sarah's life, a vibrant and free-spirited woman who becomes the center of a tragic event.\\n\\nWhat sets Tragedy at Sunset on the Beach apart is its ability to challenge conventional notions of tragedy. Smithson masterfully portrays flawed and vulnerable characters with such empathy and understanding that the audience can't help but empathize with their struggles. This play serves as a reminder that life is unpredictable and fragile, urging us to cherish every moment and embrace the beauty that exists even amidst tragedy.\\n\\nThe performances in this production are nothing short of extraordinary. The actors effortlessly navigate the emotional rollercoaster of the script, eliciting moments of laughter, heartache, and profound reflection from the audience. Their ability to convey the complexities of their characters' relationships and inner turmoil is truly commendable.\\n\\nThe direction by Jane Anderson is impeccable, capturing the essence of the beach at sunset and utilizing the space to create an immersive experience for the audience. The use of flashbacks adds depth and nuance to the narrative, allowing for a deeper understanding of the characters and their motivations.\\n\\nTragedy at Sunset on the Beach is not a play for the faint of heart. It tackles heavy themes of regret, redemption, and the consequences of our choices. However, it is precisely this raw and unflinching exploration of the human condition that makes it such a compelling piece of theater. Smithson's writing, combined with the exceptional performances and direction, make this play a must-see for theatergoers looking for a thought-provoking and emotionally resonant experience.\\n\\nIn a city renowned for its theater scene, Tragedy at Sunset on the Beach stands out as a shining example of the power of live performance to evoke empathy, provoke contemplation, and remind us of the fragile beauty of life. It is a production that will linger in the minds and hearts of its audience long after the final curtain falls.\"}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"synopsis_chain = synopsis_prompt | llm | StrOutputParser()\n",
"review_chain = review_prompt | llm | StrOutputParser()\n",
"chain = {\"synopsis\": synopsis_chain} | RunnablePassthrough.assign(review=review_chain)\n",
"chain.invoke({\"title\": \"Tragedy at sunset on the beach\"})"
]
},
{
"cell_type": "markdown",
"id": "5b145aac-cd8f-466c-a5ba-92b376a711f8",
"metadata": {},
"source": [
"Head to the [LCEL](/docs/expression_language) section for more on the interface, built-in features, and cookbook examples."
]
},
{
"cell_type": "markdown",
"id": "9af35228-d3ff-4c95-8168-506c72618ace",
"metadata": {},
"source": [
"## [Legacy] SequentialChain\n",
"\n",
":::note\n",
"\n",
"This is a legacy class, using LCEL as shown above is preferred.\n",
"\n",
":::\n",
"\n",
"Sequential chains allow you to connect multiple chains and compose them into pipelines that execute some specific scenario. There are two types of sequential chains:\n",
"\n",
"- `SimpleSequentialChain`: The simplest form of sequential chains, where each step has a singular input/output, and the output of one step is the input to the next.\n",
"- `SequentialChain`: A more general form of sequential chains, allowing for multiple inputs/outputs."
]
},
{
"cell_type": "markdown",
"id": "6c25c84e-c9f6-43be-8282-78fbd1525091",
"metadata": {},
"source": [
"### SimpleSequentialChain"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "7ed84b1a-66a6-463c-ba61-1e98434e1958",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_community.llms import OpenAI\n",
"\n",
"# This is an LLMChain to write a synopsis given a title of a play.\n",
"llm = OpenAI(temperature=0.7)\n",
"synopsis_chain = LLMChain(llm=llm, prompt=synopsis_prompt)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "a3173022-e2c7-478b-a9b8-4a535d905a1c",
"metadata": {},
"outputs": [],
"source": [
"# This is an LLMChain to write a review of a play given a synopsis.\n",
"llm = OpenAI(temperature=0.7)\n",
"review_chain = LLMChain(llm=llm, prompt=review_prompt)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "b2fdd3d9-cd49-4606-b016-678e27d2b6e0",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new SimpleSequentialChain chain...\u001b[0m\n",
"\u001b[36;1m\u001b[1;3m\n",
"\n",
"Tragedy at Sunset on the Beach is a modern tragedy about a young couple in love. The couple, Jack and Jill, are deeply in love and plan to spend the day together on the beach at sunset. However, when they arrive, they are shocked to discover that the beach is an abandoned, dilapidated wasteland. With no one else around, they explore the beach and start to reminisce about their relationship and the good times theyve shared. \n",
"\n",
"But then, out of the blue, a mysterious figure emerges from the shadows and reveals a dark secret. The figure tells the couple that the beach is no ordinary beach, but is in fact the site of a terrible tragedy that took place many years ago. As the figure explains what happened, Jack and Jill become overwhelmed with grief. \n",
"\n",
"In the end, Jack and Jill are forced to confront the truth about the tragedy and its consequences. The play is ultimately a reflection on the power of tragedy and the human capacity to confront and overcome it.\u001b[0m\n",
"\u001b[33;1m\u001b[1;3m\n",
"\n",
"Tragedy at Sunset on the Beach is a powerful, thought-provoking modern tragedy that is sure to leave a lasting impression on its audience. The play follows the story of Jack and Jill, a young couple deeply in love, as they explore an abandoned beach and discover a dark secret from the past.\n",
"\n",
"The play brilliantly captures the raw emotions of Jack and Jill as they learn of the tragedy that has occurred on the beach. The writing is masterful, and the actors do a wonderful job of conveying the couples grief and pain. The play is ultimately a reflection on the power of tragedy and the human capacity to confront and overcome it.\n",
"\n",
"Overall, Tragedy at Sunset on the Beach is a must-see for anyone looking for a thought-provoking and emotionally moving play. This play is sure to stay with its audience long after the curtain closes. Highly recommended.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
}
],
"source": [
"# This is the overall chain where we run these two chains in sequence.\n",
"from langchain.chains import SimpleSequentialChain\n",
"\n",
"overall_chain = SimpleSequentialChain(\n",
" chains=[synopsis_chain, review_chain], verbose=True\n",
")\n",
"\n",
"review = overall_chain.run(\"Tragedy at sunset on the beach\")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "5f023a0c-9305-4a14-ae24-23fff9933861",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"Tragedy at Sunset on the Beach is a powerful, thought-provoking modern tragedy that is sure to leave a lasting impression on its audience. The play follows the story of Jack and Jill, a young couple deeply in love, as they explore an abandoned beach and discover a dark secret from the past.\n",
"\n",
"The play brilliantly captures the raw emotions of Jack and Jill as they learn of the tragedy that has occurred on the beach. The writing is masterful, and the actors do a wonderful job of conveying the couples grief and pain. The play is ultimately a reflection on the power of tragedy and the human capacity to confront and overcome it.\n",
"\n",
"Overall, Tragedy at Sunset on the Beach is a must-see for anyone looking for a thought-provoking and emotionally moving play. This play is sure to stay with its audience long after the curtain closes. Highly recommended.\n"
]
}
],
"source": [
"print(review)"
]
},
{
"cell_type": "markdown",
"id": "d09df151-6a66-4982-8424-44ec3c92422d",
"metadata": {},
"source": [
"### SequentialChain\n",
"Of course, not all sequential chains will be as simple as passing a single string as an argument and getting a single string as output for all steps in the chain. In this next example, we will experiment with more complex chains that involve multiple inputs, and where there also multiple final outputs.\n",
"\n",
"Of particular importance is how we name the input/output variables. In the above example we didn't have to think about that because we were just passing the output of one chain directly as input to the next, but here we do have worry about that because we have multiple inputs."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "7481ed64-22f3-47dc-9796-c372eeb4f7bd",
"metadata": {},
"outputs": [],
"source": [
"# This is an LLMChain to write a synopsis given a title of a play and the era it is set in.\n",
"llm = OpenAI(temperature=0.7)\n",
"synopsis_template = \"\"\"You are a playwright. Given the title of play and the era it is set in, it is your job to write a synopsis for that title.\n",
"\n",
"Title: {title}\n",
"Era: {era}\n",
"Playwright: This is a synopsis for the above play:\"\"\"\n",
"synopsis_prompt_template = PromptTemplate(\n",
" input_variables=[\"title\", \"era\"], template=synopsis_template\n",
")\n",
"synopsis_chain = LLMChain(\n",
" llm=llm, prompt=synopsis_prompt_template, output_key=\"synopsis\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "6d5ec9be-7101-460a-9fc1-7ef2d02434e7",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new SequentialChain chain...\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'title': 'Tragedy at sunset on the beach',\n",
" 'era': 'Victorian England',\n",
" 'synopsis': \"\\n\\nThe play is set in Victorian England and follows the story of a young couple, Mary and John, who were deeply in love and had just gotten engaged. On the night of their engagement, they decided to take a romantic walk along the beach at sunset. Unexpectedly, John is shot by a stranger and killed right in front of Mary. In a state of shock and anguish, Mary is left alone, struggling to comprehend what has just occurred. \\n\\nThe play follows Mary as she searches for answers to John's death. As Mary's investigation begins, she discovers that John was actually involved in a dark and dangerous plot to overthrow the government. Unbeknownst to Mary, John had been working as a spy in a secret mission to uncover the truth behind a political scandal. \\n\\nNow, Mary must face the consequences of her beloved's actions and find a way to save the future of England. As the story unfolds, Mary must confront her own beliefs as well as the powerful people who are determined to end her mission. \\n\\nAt the end of the play, all of Mary's questions are answered and she is able to make a choice that will ultimately decide the fate of the nation. Tragedy at Sunset on the Beach is a\",\n",
" 'review': \"\\n\\nSet against the backdrop of Victorian England, Tragedy at Sunset on the Beach tells a heart-wrenching story of love, loss, and tragedy. The play follows Mary and John, a young couple deeply in love, who experience an unexpected tragedy on the night of their engagement. When John is shot and killed by a stranger, Mary is left alone to uncover the truth behind her beloved's death.\\n\\nWhat follows is an intense and gripping journey as Mary discovers that John was a spy in a secret mission to uncover a powerful political scandal. As Mary faces off against those determined to end her mission, she must confront her own beliefs and ultimately decide the fate of the nation.\\n\\nThe play is skillfully crafted and brilliantly performed. The actors portray a range of emotions from joy to sorrow that will leave the audience moved and captivated. The production is a beautiful testament to the power of love and the strength of the human spirit, and it is sure to leave a lasting impression. Highly recommended.\"}"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# This is an LLMChain to write a review of a play given a synopsis.\n",
"llm = OpenAI(temperature=0.7)\n",
"template = \"\"\"You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.\n",
"\n",
"Play Synopsis:\n",
"{synopsis}\n",
"Review from a New York Times play critic of the above play:\"\"\"\n",
"prompt_template = PromptTemplate(input_variables=[\"synopsis\"], template=template)\n",
"review_chain = LLMChain(llm=llm, prompt=prompt_template, output_key=\"review\")\n",
"\n",
"# This is the overall chain where we run these two chains in sequence.\n",
"from langchain.chains import SequentialChain\n",
"\n",
"overall_chain = SequentialChain(\n",
" chains=[synopsis_chain, review_chain],\n",
" input_variables=[\"era\", \"title\"],\n",
" # Here we return multiple variables\n",
" output_variables=[\"synopsis\", \"review\"],\n",
" verbose=True,\n",
")\n",
"\n",
"\n",
"overall_chain({\"title\": \"Tragedy at sunset on the beach\", \"era\": \"Victorian England\"})"
]
},
{
"cell_type": "markdown",
"id": "282f3c01-566b-4285-9615-dd07c8d43d54",
"metadata": {},
"source": [
"#### Memory in Sequential Chains\n",
"Sometimes you may want to pass along some context to use in each step of the chain or in a later part of the chain, but maintaining and chaining together the input/output variables can quickly get messy. Using `SimpleMemory` is a convenient way to do manage this and clean up your chains.\n",
"\n",
"For example, using the previous playwright `SequentialChain`, lets say you wanted to include some context about date, time and location of the play, and using the generated synopsis and review, create some social media post text. You could add these new context variables as `input_variables`, or we can add a `SimpleMemory` to the chain to manage this context:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "e50d0da6-dea1-428f-94eb-c7dfc3d298e3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new SequentialChain chain...\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'title': 'Tragedy at sunset on the beach',\n",
" 'era': 'Victorian England',\n",
" 'time': 'December 25th, 8pm PST',\n",
" 'location': 'Theater in the Park',\n",
" 'social_post_text': \"Experience a heartbreaking love story this Christmas as we bring you 'Tragedy at Sunset on the Beach', set in Victorian England on December 25th at 8pm PST at the Theater in the Park. Follow the story of two young lovers, George and Mary, and their fight against overwhelming odds. Will their love prevail? Find out this Christmas Day! #TragedyAtSunset #LoveStory #Christmas #VictorianEngland\"}"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import SequentialChain\n",
"from langchain.memory import SimpleMemory\n",
"\n",
"llm = OpenAI(temperature=0.7)\n",
"template = \"\"\"You are a social media manager for a theater company. Given the title of play, the era it is set in, the date,time and location, the synopsis of the play, and the review of the play, it is your job to write a social media post for that play.\n",
"\n",
"Here is some context about the time and location of the play:\n",
"Date and Time: {time}\n",
"Location: {location}\n",
"\n",
"Play Synopsis:\n",
"{synopsis}\n",
"Review from a New York Times play critic of the above play:\n",
"{review}\n",
"\n",
"Social Media Post:\n",
"\"\"\"\n",
"prompt_template = PromptTemplate(\n",
" input_variables=[\"synopsis\", \"review\", \"time\", \"location\"], template=template\n",
")\n",
"social_chain = LLMChain(llm=llm, prompt=prompt_template, output_key=\"social_post_text\")\n",
"\n",
"overall_chain = SequentialChain(\n",
" memory=SimpleMemory(\n",
" memories={\"time\": \"December 25th, 8pm PST\", \"location\": \"Theater in the Park\"}\n",
" ),\n",
" chains=[synopsis_chain, review_chain, social_chain],\n",
" input_variables=[\"era\", \"title\"],\n",
" # Here we return multiple variables\n",
" output_variables=[\"social_post_text\"],\n",
" verbose=True,\n",
")\n",
"\n",
"overall_chain({\"title\": \"Tragedy at sunset on the beach\", \"era\": \"Victorian England\"})"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -1,198 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "872bb8b5",
"metadata": {},
"source": [
"# Transformation\n",
"\n",
"Often we want to transform inputs as they are passed from one component to another.\n",
"\n",
"As an example, we will create a dummy transformation that takes in a super long text, filters the text to only the first 3 paragraphs, and then passes that into a chain to summarize those."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "d257f50d-c53d-41b7-be8a-df23fbd7c017",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"\n",
"prompt = PromptTemplate.from_template(\n",
" \"\"\"Summarize this text:\n",
"\n",
"{output_text}\n",
"\n",
"Summary:\"\"\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "8ae5937c",
"metadata": {},
"outputs": [],
"source": [
"with open(\"../../state_of_the_union.txt\") as f:\n",
" state_of_the_union = f.read()"
]
},
{
"cell_type": "markdown",
"id": "4c938536-e3fb-45eb-a1b3-cb82be410e32",
"metadata": {},
"source": [
"## Using LCEL\n",
"\n",
"With LCEL this is trivial, since we can add functions in any `RunnableSequence`."
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "1e53e851-b1bd-424f-a144-5f2e8b413dcf",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'The speaker acknowledges the presence of important figures in the government and addresses the audience as fellow Americans. They highlight the impact of COVID-19 on keeping people apart in the previous year but express joy in being able to come together again. The speaker emphasizes the unity of Democrats, Republicans, and Independents as Americans.'"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.schema import StrOutputParser\n",
"from langchain_community.chat_models import ChatOpenAI\n",
"\n",
"runnable = (\n",
" {\"output_text\": lambda text: \"\\n\\n\".join(text.split(\"\\n\\n\")[:3])}\n",
" | prompt\n",
" | ChatOpenAI()\n",
" | StrOutputParser()\n",
")\n",
"runnable.invoke(state_of_the_union)"
]
},
{
"cell_type": "markdown",
"id": "a9b9bd07-155f-4777-9215-509d39ecfe3f",
"metadata": {},
"source": [
"## [Legacy] TransformationChain\n",
"\n",
":::note\n",
"\n",
"This is a legacy class, using LCEL as shown above is preferred.\n",
"\n",
":::\n",
"\n",
"This notebook showcases using a generic transformation chain."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "bbbb4330",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import LLMChain, SimpleSequentialChain, TransformChain\n",
"from langchain_community.llms import OpenAI"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "98739592",
"metadata": {},
"outputs": [],
"source": [
"def transform_func(inputs: dict) -> dict:\n",
" text = inputs[\"text\"]\n",
" shortened_text = \"\\n\\n\".join(text.split(\"\\n\\n\")[:3])\n",
" return {\"output_text\": shortened_text}\n",
"\n",
"\n",
"transform_chain = TransformChain(\n",
" input_variables=[\"text\"], output_variables=[\"output_text\"], transform=transform_func\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "e9397934",
"metadata": {},
"outputs": [],
"source": [
"template = \"\"\"Summarize this text:\n",
"\n",
"{output_text}\n",
"\n",
"Summary:\"\"\"\n",
"prompt = PromptTemplate(input_variables=[\"output_text\"], template=template)\n",
"llm_chain = LLMChain(llm=OpenAI(), prompt=prompt)"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "06f51f17",
"metadata": {},
"outputs": [],
"source": [
"sequential_chain = SimpleSequentialChain(chains=[transform_chain, llm_chain])"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "f7caa1ee",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' In an address to the nation, the speaker acknowledges the hardships of the past year due to the COVID-19 pandemic, but emphasizes that regardless of political affiliation, all Americans can come together.'"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"sequential_chain.run(state_of_the_union)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -1,141 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "593f7553-7038-498e-96d4-8255e5ce34f0",
"metadata": {},
"source": [
"# Async API\n",
"\n",
"LangChain provides async support by leveraging the [asyncio](https://docs.python.org/3/library/asyncio.html) library."
]
},
{
"cell_type": "raw",
"id": "0c0f45ed-9cef-4798-975c-d2912a248591",
"metadata": {},
"source": [
":::info\n",
"Async support is built into all `Runnable` objects (the building block of [LangChain Expression Language (LCEL)](/docs/expression_language) by default. Using LCEL is preferred to using `Chain`s. Head to [Interface](/docs/expression_language/interface) for more on the `Runnable` interface.\n",
":::"
]
},
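{
"cell_type": "markdown",
"id": "lcel-async-note",
"metadata": {},
"source": [
"Since async support comes for free with LCEL, a minimal sketch looks like the following (assuming an OpenAI API key is configured; the prompt is the same one used in the benchmark below, and `lcel_chain` is just an illustrative name):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "lcel-async-sketch",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"from langchain_community.llms import OpenAI\n",
"\n",
"lcel_chain = PromptTemplate.from_template(\n",
"    \"What is a good name for a company that makes {product}?\"\n",
") | OpenAI(temperature=0.9)\n",
"\n",
"# Every Runnable exposes ainvoke/abatch out of the box;\n",
"# abatch runs all five requests concurrently.\n",
"await lcel_chain.abatch([{\"product\": \"toothpaste\"}] * 5)"
]
},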
{
"cell_type": "code",
"execution_count": 1,
"id": "c19c736e-ca74-4726-bb77-0a849bcc2960",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"BrightSmile Toothpaste Company\n",
"\n",
"\n",
"BrightSmile Toothpaste Co.\n",
"\n",
"\n",
"BrightSmile Toothpaste\n",
"\n",
"\n",
"Gleaming Smile Inc.\n",
"\n",
"\n",
"SparkleSmile Toothpaste\n",
"\u001b[1mConcurrent executed in 1.54 seconds.\u001b[0m\n",
"\n",
"\n",
"BrightSmile Toothpaste Co.\n",
"\n",
"\n",
"MintyFresh Toothpaste Co.\n",
"\n",
"\n",
"SparkleSmile Toothpaste.\n",
"\n",
"\n",
"Pearly Whites Toothpaste Co.\n",
"\n",
"\n",
"BrightSmile Toothpaste.\n",
"\u001b[1mSerial executed in 6.38 seconds.\u001b[0m\n"
]
}
],
"source": [
"import asyncio\n",
"import time\n",
"\n",
"from langchain.chains import LLMChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain_community.llms import OpenAI\n",
"\n",
"\n",
"def generate_serially():\n",
" llm = OpenAI(temperature=0.9)\n",
" prompt = PromptTemplate(\n",
" input_variables=[\"product\"],\n",
" template=\"What is a good name for a company that makes {product}?\",\n",
" )\n",
" chain = LLMChain(llm=llm, prompt=prompt)\n",
" for _ in range(5):\n",
" resp = chain.run(product=\"toothpaste\")\n",
" print(resp)\n",
"\n",
"\n",
"async def async_generate(chain):\n",
" resp = await chain.arun(product=\"toothpaste\")\n",
" print(resp)\n",
"\n",
"\n",
"async def generate_concurrently():\n",
" llm = OpenAI(temperature=0.9)\n",
" prompt = PromptTemplate(\n",
" input_variables=[\"product\"],\n",
" template=\"What is a good name for a company that makes {product}?\",\n",
" )\n",
" chain = LLMChain(llm=llm, prompt=prompt)\n",
" tasks = [async_generate(chain) for _ in range(5)]\n",
" await asyncio.gather(*tasks)\n",
"\n",
"\n",
"s = time.perf_counter()\n",
"# If running this outside of Jupyter, use asyncio.run(generate_concurrently())\n",
"await generate_concurrently()\n",
"elapsed = time.perf_counter() - s\n",
"print(\"\\033[1m\" + f\"Concurrent executed in {elapsed:0.2f} seconds.\" + \"\\033[0m\")\n",
"\n",
"s = time.perf_counter()\n",
"generate_serially()\n",
"elapsed = time.perf_counter() - s\n",
"print(\"\\033[1m\" + f\"Serial executed in {elapsed:0.2f} seconds.\" + \"\\033[0m\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -1,191 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Different call methods\n",
"\n",
"All classes inherited from `Chain` offer a few ways of running chain logic. The most direct one is by using `__call__`:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'adjective': 'corny',\n",
" 'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains.llm import LLMChain\n",
"from langchain_community.chat_models.openai import ChatOpenAI\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"chat = ChatOpenAI(temperature=0)\n",
"prompt_template = \"Tell me a {adjective} joke\"\n",
"llm_chain = LLMChain(llm=chat, prompt=PromptTemplate.from_template(prompt_template))\n",
"\n",
"llm_chain(inputs={\"adjective\": \"corny\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"By default, `__call__` returns both the input and output key values. You can configure it to only return output key values by setting `return_only_outputs` to `True`."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_chain(\"corny\", return_only_outputs=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If the `Chain` only outputs one output key (i.e. only has one element in its `output_keys`), you can use `run` method. Note that `run` outputs a string instead of a dictionary."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['text']"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# llm_chain only has one output key, so we can use run\n",
"llm_chain.output_keys"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Why did the tomato turn red? Because it saw the salad dressing!'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_chain.run({\"adjective\": \"corny\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the case of one input key, you can input the string directly without specifying the input mapping."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'adjective': 'corny',\n",
" 'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# These two are equivalent\n",
"llm_chain.run({\"adjective\": \"corny\"})\n",
"llm_chain.run(\"corny\")\n",
"\n",
"# These two are also equivalent\n",
"llm_chain(\"corny\")\n",
"llm_chain({\"adjective\": \"corny\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Tips: You can easily integrate a `Chain` object as a `Tool` in your `Agent` via its `run` method. See an example [here](/docs/modules/agents/tools/how_to/custom_tools)."
]
},
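{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of that tip (the tool name and description are illustrative, not from a real agent setup):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import Tool\n",
"\n",
"# Wrap the chain's run method so an agent can call it as a tool.\n",
"joke_tool = Tool(\n",
"    name=\"joke_teller\",\n",
"    func=llm_chain.run,\n",
"    description=\"Tells a joke matching the given adjective.\",\n",
")\n",
"\n",
"joke_tool.run(\"corny\")"
]
},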
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
},
"vscode": {
"interpreter": {
"hash": "b1677b440931f40d89ef8be7bf03acb108ce003de0ac9b18e8d43753ea2e7103"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}

@ -1,188 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "593f7553-7038-498e-96d4-8255e5ce34f0",
"metadata": {},
"source": [
"# Custom chain\n",
"\n",
"To implement your own custom chain you can subclass `Chain` and implement the following methods:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "c19c736e-ca74-4726-bb77-0a849bcc2960",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from __future__ import annotations\n",
"\n",
"from typing import Any, Dict, List, Optional\n",
"\n",
"from langchain.callbacks.manager import (\n",
" AsyncCallbackManagerForChainRun,\n",
" CallbackManagerForChainRun,\n",
")\n",
"from langchain.chains.base import Chain\n",
"from langchain.prompts.base import BasePromptTemplate\n",
"from langchain_core.language_models import BaseLanguageModel\n",
"from pydantic import Extra\n",
"\n",
"\n",
"class MyCustomChain(Chain):\n",
" \"\"\"\n",
" An example of a custom chain.\n",
" \"\"\"\n",
"\n",
" prompt: BasePromptTemplate\n",
" \"\"\"Prompt object to use.\"\"\"\n",
" llm: BaseLanguageModel\n",
" output_key: str = \"text\" #: :meta private:\n",
"\n",
" class Config:\n",
" \"\"\"Configuration for this pydantic object.\"\"\"\n",
"\n",
" extra = Extra.forbid\n",
" arbitrary_types_allowed = True\n",
"\n",
" @property\n",
" def input_keys(self) -> List[str]:\n",
" \"\"\"Will be whatever keys the prompt expects.\n",
"\n",
" :meta private:\n",
" \"\"\"\n",
" return self.prompt.input_variables\n",
"\n",
" @property\n",
" def output_keys(self) -> List[str]:\n",
" \"\"\"Will always return text key.\n",
"\n",
" :meta private:\n",
" \"\"\"\n",
" return [self.output_key]\n",
"\n",
" def _call(\n",
" self,\n",
" inputs: Dict[str, Any],\n",
" run_manager: Optional[CallbackManagerForChainRun] = None,\n",
" ) -> Dict[str, str]:\n",
" # Your custom chain logic goes here\n",
" # This is just an example that mimics LLMChain\n",
" prompt_value = self.prompt.format_prompt(**inputs)\n",
"\n",
" # Whenever you call a language model, or another chain, you should pass\n",
" # a callback manager to it. This allows the inner run to be tracked by\n",
" # any callbacks that are registered on the outer run.\n",
" # You can always obtain a callback manager for this by calling\n",
" # `run_manager.get_child()` as shown below.\n",
" response = self.llm.generate_prompt(\n",
" [prompt_value], callbacks=run_manager.get_child() if run_manager else None\n",
" )\n",
"\n",
" # If you want to log something about this run, you can do so by calling\n",
" # methods on the `run_manager`, as shown below. This will trigger any\n",
" # callbacks that are registered for that event.\n",
" if run_manager:\n",
" run_manager.on_text(\"Log something about this run\")\n",
"\n",
" return {self.output_key: response.generations[0][0].text}\n",
"\n",
" async def _acall(\n",
" self,\n",
" inputs: Dict[str, Any],\n",
" run_manager: Optional[AsyncCallbackManagerForChainRun] = None,\n",
" ) -> Dict[str, str]:\n",
" # Your custom chain logic goes here\n",
" # This is just an example that mimics LLMChain\n",
" prompt_value = self.prompt.format_prompt(**inputs)\n",
"\n",
" # Whenever you call a language model, or another chain, you should pass\n",
" # a callback manager to it. This allows the inner run to be tracked by\n",
" # any callbacks that are registered on the outer run.\n",
" # You can always obtain a callback manager for this by calling\n",
" # `run_manager.get_child()` as shown below.\n",
" response = await self.llm.agenerate_prompt(\n",
" [prompt_value], callbacks=run_manager.get_child() if run_manager else None\n",
" )\n",
"\n",
" # If you want to log something about this run, you can do so by calling\n",
" # methods on the `run_manager`, as shown below. This will trigger any\n",
" # callbacks that are registered for that event.\n",
" if run_manager:\n",
" await run_manager.on_text(\"Log something about this run\")\n",
"\n",
" return {self.output_key: response.generations[0][0].text}\n",
"\n",
" @property\n",
" def _chain_type(self) -> str:\n",
" return \"my_custom_chain\""
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "18361f89",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new MyCustomChain chain...\u001b[0m\n",
"Log something about this run\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Why did the callback function feel lonely? Because it was always waiting for someone to call it back!'"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.callbacks.stdout import StdOutCallbackHandler\n",
"from langchain.prompts.prompt import PromptTemplate\n",
"from langchain_community.chat_models.openai import ChatOpenAI\n",
"\n",
"chain = MyCustomChain(\n",
" prompt=PromptTemplate.from_template(\"tell us a joke about {topic}\"),\n",
" llm=ChatOpenAI(),\n",
")\n",
"\n",
"chain.run({\"topic\": \"callbacks\"}, callbacks=[StdOutCallbackHandler()])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -1,8 +0,0 @@
---
sidebar_position: 0
---
# How to
import DocCardList from "@theme/DocCardList";
<DocCardList />

@ -1,32 +0,0 @@
# Adding memory (state)
Chains can be initialized with a Memory object, which will persist data across calls to the chain. This makes a Chain stateful.
## Get started
```python
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain_community.chat_models import ChatOpenAI

chat = ChatOpenAI()
conversation = ConversationChain(
    llm=chat,
    memory=ConversationBufferMemory()
)
conversation.run("Answer briefly. What are the first 3 colors of a rainbow?")
# -> The first three colors of a rainbow are red, orange, and yellow.
conversation.run("And the next 4?")
# -> The next four colors of a rainbow are green, blue, indigo, and violet.
```
<CodeOutputBlock lang="python">
```
'The next four colors of a rainbow are green, blue, indigo, and violet.'
```
</CodeOutputBlock>
Essentially, `BaseMemory` defines an interface for how `langchain` stores memory. It allows reading stored data through the `load_memory_variables` method and storing new data through the `save_context` method. You can learn more about it in the [Memory](/docs/modules/memory/) section.
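A minimal sketch of that interface in action (standalone, no chain required):
```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
# save_context stores an input/output pair...
memory.save_context({"input": "hi"}, {"output": "hello there"})
# ...and load_memory_variables reads the accumulated history back.
memory.load_memory_variables({})
# -> {'history': 'Human: hi\nAI: hello there'}
```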

@ -1,566 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "54ccb772",
"metadata": {},
"source": [
"# Using OpenAI functions\n",
"This walkthrough demonstrates how to incorporate OpenAI function-calling API's in a chain. We'll go over: \n",
"1. How to use functions to get structured outputs from ChatOpenAI\n",
"2. How to create a generic chain that uses (multiple) functions\n",
"3. How to create a chain that actually executes the chosen function"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "767ac575",
"metadata": {},
"outputs": [],
"source": [
"from typing import Optional\n",
"\n",
"from langchain.chains.openai_functions import (\n",
" create_openai_fn_chain,\n",
" create_openai_fn_runnable,\n",
" create_structured_output_chain,\n",
" create_structured_output_runnable,\n",
")\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain_community.chat_models import ChatOpenAI"
]
},
{
"cell_type": "markdown",
"id": "976b6496",
"metadata": {},
"source": [
"## Getting structured outputs\n",
"We can take advantage of OpenAI functions to try and force the model to return a particular kind of structured output. We'll use `create_structured_output_runnable` to create our chain, which takes the desired structured output either as a Pydantic class or as JsonSchema."
]
},
{
"cell_type": "markdown",
"id": "e052faae",
"metadata": {},
"source": [
"### Using Pydantic classes\n",
"When passing in Pydantic classes to structure our text, we need to make sure to have a docstring description for the class. It also helps to have descriptions for each of the classes attributes."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "0e085c99",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class Person(BaseModel):\n",
" \"\"\"Identifying information about a person.\"\"\"\n",
"\n",
" name: str = Field(..., description=\"The person's name\")\n",
" age: int = Field(..., description=\"The person's age\")\n",
" fav_food: Optional[str] = Field(None, description=\"The person's favorite food\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "b459a33e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Person(name='Sally', age=13, fav_food='Unknown')"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# For better results in OpenAI function-calling API, it is recommended to explicitly pass the latest model.\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-1106\", temperature=0)\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a world class algorithm for extracting information in structured formats.\",\n",
" ),\n",
" (\n",
" \"human\",\n",
" \"Use the given format to extract information from the following input: {input}\",\n",
" ),\n",
" (\"human\", \"Tip: Make sure to answer in the correct format\"),\n",
" ]\n",
")\n",
"\n",
"runnable = create_structured_output_runnable(Person, llm, prompt)\n",
"runnable.invoke({\"input\": \"Sally is 13\"})"
]
},
{
"cell_type": "markdown",
"id": "e3539936",
"metadata": {},
"source": [
"To extract arbitrarily many structured outputs of a given format, we can just create a wrapper Pydantic class that takes a sequence of the original class."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "4d8ea815",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"People(people=[Person(name='Sally', age=13, fav_food=''), Person(name='Joey', age=12, fav_food='spinach'), Person(name='Caroline', age=23, fav_food='')])"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from typing import Sequence\n",
"\n",
"\n",
"class People(BaseModel):\n",
" \"\"\"Identifying information about all people in a text.\"\"\"\n",
"\n",
" people: Sequence[Person] = Field(..., description=\"The people in the text\")\n",
"\n",
"\n",
"runnable = create_structured_output_runnable(People, llm, prompt)\n",
"runnable.invoke(\n",
" {\n",
" \"input\": \"Sally is 13, Joey just turned 12 and loves spinach. Caroline is 10 years older than Sally.\"\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "ea66e10e",
"metadata": {},
"source": [
"### Using JsonSchema\n",
"\n",
"We can also pass in JsonSchema instead of Pydantic classes to specify the desired structure. When we do this, our chain will output JSON corresponding to the properties described in the JsonSchema, instead of a Pydantic class."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "3484415e",
"metadata": {},
"outputs": [],
"source": [
"json_schema = {\n",
" \"title\": \"Person\",\n",
" \"description\": \"Identifying information about a person.\",\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"name\": {\"title\": \"Name\", \"description\": \"The person's name\", \"type\": \"string\"},\n",
" \"age\": {\"title\": \"Age\", \"description\": \"The person's age\", \"type\": \"integer\"},\n",
" \"fav_food\": {\n",
" \"title\": \"Fav Food\",\n",
" \"description\": \"The person's favorite food\",\n",
" \"type\": \"string\",\n",
" },\n",
" },\n",
" \"required\": [\"name\", \"age\"],\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "be9b76b3",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'name': 'Sally', 'age': 13}"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"runnable = create_structured_output_runnable(json_schema, llm, prompt)\n",
"runnable.invoke({\"input\": \"Sally is 13\"})"
]
},
{
"cell_type": "markdown",
"id": "5f38ca2d-eb65-4836-9a21-9eaaa8c6c47c",
"metadata": {},
"source": [
"### [Legacy] LLMChain-based approach"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "4cf8d9b8-043b-414d-81e5-1a53c4881845",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mSystem: You are a world class algorithm for extracting information in structured formats.\n",
"Human: Use the given format to extract information from the following input: Sally is 13\n",
"Human: Tip: Make sure to answer in the correct format\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"Person(name='Sally', age=13, fav_food='Unknown')"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain = create_structured_output_chain(Person, llm, prompt, verbose=True)\n",
"chain.run(\"Sally is 13\")"
]
},
{
"cell_type": "markdown",
"id": "12394696",
"metadata": {},
"source": [
"## Creating a generic OpenAI functions chain\n",
"To create a generic OpenAI functions chain, we can use the `create_openai_fn_runnable` method. This is the same as `create_structured_output_runnable` except that instead of taking a single output schema, it takes a sequence of function definitions.\n",
"\n",
"Functions can be passed in as:\n",
"- dicts conforming to OpenAI functions spec,\n",
"- Pydantic classes, in which case they should have docstring descriptions of the function they represent and descriptions for each of the parameters,\n",
"- Python functions, in which case they should have docstring descriptions of the function and args, along with type hints."
]
},
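{
"cell_type": "markdown",
"id": "fn-dict-note",
"metadata": {},
"source": [
"### Using OpenAI function dicts\n",
"A minimal sketch of the first option. The schema below is hand-written for illustration (it conforms to the OpenAI functions spec), and we reuse the `llm` and extraction `prompt` from above just to show the call shape:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fn-dict-sketch",
"metadata": {},
"outputs": [],
"source": [
"record_dog_function = {\n",
"    \"name\": \"record_dog\",\n",
"    \"description\": \"Record some identifying information about a dog.\",\n",
"    \"parameters\": {\n",
"        \"type\": \"object\",\n",
"        \"properties\": {\n",
"            \"name\": {\"type\": \"string\", \"description\": \"The dog's name\"},\n",
"            \"color\": {\"type\": \"string\", \"description\": \"The dog's color\"},\n",
"        },\n",
"        \"required\": [\"name\", \"color\"],\n",
"    },\n",
"}\n",
"\n",
"runnable = create_openai_fn_runnable([record_dog_function], llm, prompt)\n",
"runnable.invoke({\"input\": \"Harry was a chubby brown beagle who loved chicken\"})"
]
},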
{
"cell_type": "markdown",
"id": "ff19be25",
"metadata": {},
"source": [
"### Using Pydantic classes"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "17f52508",
"metadata": {},
"outputs": [],
"source": [
"class RecordPerson(BaseModel):\n",
" \"\"\"Record some identifying information about a pe.\"\"\"\n",
"\n",
" name: str = Field(..., description=\"The person's name\")\n",
" age: int = Field(..., description=\"The person's age\")\n",
" fav_food: Optional[str] = Field(None, description=\"The person's favorite food\")\n",
"\n",
"\n",
"class RecordDog(BaseModel):\n",
" \"\"\"Record some identifying information about a dog.\"\"\"\n",
"\n",
" name: str = Field(..., description=\"The dog's name\")\n",
" color: str = Field(..., description=\"The dog's color\")\n",
" fav_food: Optional[str] = Field(None, description=\"The dog's favorite food\")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "a4658ad8",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.openai_functions import (\n",
" convert_to_openai_function,\n",
" get_openai_output_parser,\n",
")\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"You are a world class algorithm for recording entities.\"),\n",
" (\n",
" \"human\",\n",
" \"Make calls to the relevant function to record the entities in the following input: {input}\",\n",
" ),\n",
" (\"human\", \"Tip: Make sure to answer in the correct format\"),\n",
" ]\n",
")\n",
"\n",
"openai_functions = [convert_to_openai_function(f) for f in (RecordPerson, RecordDog)]\n",
"llm_kwargs = {\"functions\": openai_functions}\n",
"if len(openai_functions) == 1:\n",
" llm_kwargs[\"function_call\"] = {\"name\": openai_functions[0][\"name\"]}\n",
"output_parser = get_openai_output_parser((RecordPerson, RecordDog))\n",
"runnable = prompt | llm.bind(**llm_kwargs) | output_parser"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "a32148a2-8495-4a2b-942a-d605b131bf69",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"RecordDog(name='Harry', color='brown', fav_food='chicken')"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"runnable.invoke({\"input\": \"Harry was a chubby brown beagle who loved chicken\"})"
]
},
{
"cell_type": "markdown",
"id": "b57b2ca4-6519-4f7e-9b62-9ce14aad914f",
"metadata": {},
"source": [
"For convenience we can use the `create_openai_fn_runnable` method to help build our Runnable"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "88538970-91b3-4eea-9c2b-47210713492a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"RecordDog(name='Harry', color='brown', fav_food='chicken')"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"runnable = create_openai_fn_runnable([RecordPerson, RecordDog], llm, prompt)\n",
"runnable.invoke({\"input\": \"Harry was a chubby brown beagle who loved chicken\"})"
]
},
{
"cell_type": "markdown",
"id": "df6d9147",
"metadata": {},
"source": [
"### Using Python functions\n",
"We can pass in functions as Pydantic classes, directly as OpenAI function dicts, or Python functions. To pass Python function in directly, we'll want to make sure our parameters have type hints, we have a docstring, and we use [Google Python style docstrings](https://google.github.io/styleguide/pyguide.html#doc-function-args) to describe the parameters.\n",
"\n",
"**NOTE**: To use Python functions, make sure the function arguments are of primitive types (str, float, int, bool) or that they are Pydantic objects."
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "95ac5825",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'name': 'Tommy', 'age': 12, 'fav_food': {'food': 'apple pie'}}"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"class OptionalFavFood(BaseModel):\n",
" \"\"\"Either a food or null.\"\"\"\n",
"\n",
" food: Optional[str] = Field(\n",
" None,\n",
" description=\"Either the name of a food or null. Should be null if the food isn't known.\",\n",
" )\n",
"\n",
"\n",
"def record_person(name: str, age: int, fav_food: OptionalFavFood) -> str:\n",
" \"\"\"Record some basic identifying information about a person.\n",
"\n",
" Args:\n",
" name: The person's name.\n",
" age: The person's age in years.\n",
" fav_food: An OptionalFavFood object that either contains the person's favorite food or a null value. Food should be null if it's not known.\n",
" \"\"\"\n",
" return f\"Recording person {name} of age {age} with favorite food {fav_food.food}!\"\n",
"\n",
"\n",
"runnable = create_openai_fn_runnable([record_person], llm, prompt)\n",
"runnable.invoke(\n",
" {\n",
" \"input\": \"The most important thing to remember about Tommy, my 12 year old, is that he'll do anything for apple pie.\"\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "403ea5dd",
"metadata": {},
"source": [
"If we pass in multiple Python functions or OpenAI functions, then the returned output will be of the form:\n",
"```python\n",
"{\"name\": \"<<function_name>>\", \"arguments\": {<<function_arguments>>}}\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "8b0d11de",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'name': 'record_dog',\n",
" 'arguments': {'name': 'Henry', 'color': 'brown', 'fav_food': {'food': None}}}"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"def record_dog(name: str, color: str, fav_food: OptionalFavFood) -> str:\n",
" \"\"\"Record some basic identifying information about a dog.\n",
"\n",
" Args:\n",
" name: The dog's name.\n",
" color: The dog's color.\n",
" fav_food: An OptionalFavFood object that either contains the dog's favorite food or a null value. Food should be null if it's not known.\n",
" \"\"\"\n",
" return f\"Recording dog {name} of color {color} with favorite food {fav_food}!\"\n",
"\n",
"\n",
"runnable = create_openai_fn_runnable([record_person, record_dog], llm, prompt)\n",
"runnable.invoke(\n",
" {\n",
" \"input\": \"I can't find my dog Henry anywhere, he's a small brown beagle. Could you send a message about him?\"\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "c81e301d-3125-4b25-8a74-86ba9562952c",
"metadata": {},
"source": [
"## [Legacy] LLMChain-based approach"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "32711985-8dac-448a-ad65-cd3dd5e45fbe",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mSystem: You are a world class algorithm for recording entities.\n",
"Human: Make calls to the relevant function to record the entities in the following input: Harry was a chubby brown beagle who loved chicken\n",
"Human: Tip: Make sure to answer in the correct format\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"RecordDog(name='Harry', color='brown', fav_food='chicken')"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain = create_openai_fn_chain([RecordPerson, RecordDog], llm, prompt, verbose=True)\n",
"chain.run(\"Harry was a chubby brown beagle who loved chicken\")"
]
},
{
"cell_type": "markdown",
"id": "5f93686b",
"metadata": {},
"source": [
"## Other Chains using OpenAI functions\n",
"\n",
"There are a number of more specific chains that use OpenAI functions.\n",
"- [Extraction](/docs/use_cases/extraction): very similar to structured output chain, intended for information/entity extraction specifically.\n",
"- [Tagging](/docs/use_cases/tagging): tag inputs.\n",
"- [OpenAPI](/docs/use_cases/apis/openapi_openai): take an OpenAPI spec and create + execute valid requests against the API, using OpenAI functions under the hood.\n",
"- [QA with citations](/docs/use_cases/question_answering/qa_citations): use OpenAI functions ability to extract citations from text."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv",
"language": "python",
"name": "poetry-venv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -1,196 +0,0 @@
{
"cells": [
{
"cell_type": "raw",
"id": "bcb4ca40-c3cb-4f23-b09f-4d6c3c46999f",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 2\n",
"title: Chains\n",
"sidebar_class_name: hidden\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "b872d874-ad6e-49b5-9435-66063a64d1a8",
"metadata": {},
"source": [
"Using an LLM in isolation is fine for simple applications, but more complex applications require chaining LLMs - either with each other or with other components.\n",
"\n",
"LangChain provides two high-level frameworks for \"chaining\" components. The legacy approach is to use the `Chain` interface. The updated approach is to use the [LangChain Expression Language (LCEL)](/docs/expression_language/). When building new applications we recommend using LCEL for chain composition. But there are a number of useful, built-in `Chain`'s that we continue to support, so we document both frameworks here. As we'll touch on below, `Chain`'s can also themselves be used in LCEL, so the two are not mutually exclusive."
]
},
{
"cell_type": "markdown",
"id": "6aedf9f6-b53f-4456-90cb-be3cfec04b4e",
"metadata": {},
"source": [
"## LCEL\n",
"\n",
"The most visible part of LCEL is that it provides an intuitive and readable syntax for composition. But more importantly, it also provides first-class support for:\n",
"\n",
"* [streaming](/docs/expression_language/interface#stream),\n",
"* [async calls](/docs/expression_language/interface#async-stream),\n",
"* [batching](/docs/expression_language/interface#batch),\n",
"* [parallelization](/docs/expression_language/interface#parallelism),\n",
"* retries,\n",
"* [fallbacks](/docs/expression_language/how_to/fallbacks),\n",
"* tracing,\n",
"* [and more.](/docs/expression_language/why)\n",
"\n",
"As a simple and common example, we can see what it's like to combine a prompt, model and output parser:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "beb2a555-866d-4837-bfe5-988dd4ab09a5",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.schema import StrOutputParser\n",
"from langchain_community.chat_models import ChatAnthropic\n",
"\n",
"model = ChatAnthropic()\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You're a very knowledgeable historian who provides accurate and eloquent answers to historical questions.\",\n",
" ),\n",
" (\"human\", \"{question}\"),\n",
" ]\n",
")\n",
"runnable = prompt | model | StrOutputParser()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "2c14e850-32fa-4de7-9a9d-9ed0a3fb5e99",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Mansa Musa was the emperor of the Mali Empire in West Africa during the 14th century. He accumulated immense wealth through several means:\n",
"\n",
"- Gold mining - Mali contained very rich gold deposits, especially in the region of Bambuk. Gold mining and gold trade was a major source of wealth for the empire.\n",
"\n",
"- Control of trade routes - Mali dominated the trans-Saharan trade routes connecting West Africa to North Africa and beyond. By taxing the goods that passed through its territory, Mali profited greatly.\n",
"\n",
"- Tributary states - Many lands surrounding Mali paid tribute to the empire. This came in the form of gold, slaves, and other valuable resources.\n",
"\n",
"- Agriculture - Mali also had extensive agricultural lands irrigated by the Niger River. Surplus food produced could be sold or traded. \n",
"\n",
"- Royal monopolies - The emperor claimed monopoly rights over the production and sale of certain goods like salt from the Taghaza mines. This added to his personal wealth.\n",
"\n",
"- Inheritance - As an emperor, Mansa Musa inherited a wealthy state. His predecessors had already consolidated lands and accumulated riches which fell to Musa.\n",
"\n",
"So in summary, mining, trade, taxes,"
]
}
],
"source": [
"for chunk in runnable.stream({\"question\": \"How did Mansa Musa accumulate his wealth?\"}):\n",
" print(chunk, end=\"\", flush=True)"
]
},
{
"cell_type": "markdown",
"id": "b09e115c-ca2d-4ec8-9676-5a37bd2692ab",
"metadata": {},
"source": [
"For more head to the [LCEL section](/docs/expression_language/)."
]
},
{
"cell_type": "markdown",
"id": "e0cc6c2c-bc35-4415-b894-9ef88014ba33",
"metadata": {},
"source": [
"## [Legacy] `Chain` interface\n",
"\n",
"**Chain**'s are the legacy interface for \"chained\" applications. We define a Chain very generically as a sequence of calls to components, which can include other chains. The base interface is simple:\n",
"\n",
"```python\n",
"class Chain(BaseModel, ABC):\n",
" \"\"\"Base interface that all chains should implement.\"\"\"\n",
"\n",
" memory: BaseMemory\n",
" callbacks: Callbacks\n",
"\n",
" def __call__(\n",
" self,\n",
" inputs: Any,\n",
" return_only_outputs: bool = False,\n",
" callbacks: Callbacks = None,\n",
" ) -> Dict[str, Any]:\n",
" ...\n",
"```\n",
"\n",
"We can recreate the LCEL runnable we made above using the built-in `LLMChain`:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "1026a508-b2c7-4567-8ecf-487628737c16",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\" Mansa Musa was the emperor of the Mali Empire in West Africa in the early 14th century. He accumulated his vast wealth through several means:\\n\\n- Gold mining - Mali contained very rich gold deposits, especially in the southern part of the empire. Gold mining and trade was a major source of wealth.\\n\\n- Control of trade routes - Mali dominated the trans-Saharan trade routes connecting West Africa to North Africa and beyond. By taxing and controlling this lucrative trade, Mansa Musa reaped great riches.\\n\\n- Tributes from conquered lands - The Mali Empire expanded significantly under Mansa Musa's rule. As new lands were conquered, they paid tribute to the mansa in the form of gold, salt, and slaves.\\n\\n- Inheritance - Mansa Musa inherited a wealthy empire from his predecessor. He continued to build the wealth of Mali through the factors above.\\n\\n- Sound fiscal management - Musa is considered to have managed the empire and its finances very effectively, including keeping taxes reasonable and promoting a robust economy. This allowed him to accumulate and maintain wealth.\\n\\nSo in summary, conquest, trade, taxes, mining, and inheritance all contributed to Mansa Musa growing the M\""
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import LLMChain\n",
"\n",
"chain = LLMChain(llm=model, prompt=prompt, output_parser=StrOutputParser())\n",
"chain.run(question=\"How did Mansa Musa accumulate his wealth?\")"
]
},
{
"cell_type": "markdown",
"id": "c1767c9e-eebd-4e07-9805-cf445920c38d",
"metadata": {},
"source": [
"For more specifics check out:\n",
"- [How-to](/docs/modules/chains/how_to/) for walkthroughs of different chain features\n",
"- [Foundational](/docs/modules/chains/foundational/) to get acquainted with core building block chains\n",
"- [Document](/docs/modules/chains/document/) to learn how to incorporate documents into chains\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv",
"language": "python",
"name": "poetry-venv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@ -51,12 +51,12 @@ module.exports = {
{ type: "category", label: "Model I/O", collapsed: true, items: [{type:"autogenerated", dirName: "modules/model_io" }], link: { type: 'doc', id: "modules/model_io/index" }},
{ type: "category", label: "Retrieval", collapsed: true, items: [{type:"autogenerated", dirName: "modules/data_connection" }], link: { type: 'doc', id: "modules/data_connection/index" }},
{ type: "category", label: "Agents", collapsed: true, items: [{type:"autogenerated", dirName: "modules/agents" }], link: { type: 'doc', id: "modules/agents/index" }},
"modules/chains",
{
type: "category",
label: "More",
collapsed: true,
items: [
{ type: "category", label: "Chains", collapsed: true, items: [{type:"autogenerated", dirName: "modules/chains" }], link: { type: 'doc', id: "modules/chains/index" }},
{ type: "category", label: "Memory", collapsed: true, items: [{type:"autogenerated", dirName: "modules/memory" }], link: { type: 'doc', id: "modules/memory/index" }},
{ type: "category", label: "Callbacks", collapsed: true, items: [{type:"autogenerated", dirName: "modules/callbacks" }], link: { type: 'doc', id: "modules/callbacks/index" }},
]

@ -1,5 +1,9 @@
{
"redirects": [
{
"source": "/docs/modules/chains/:path*",
"destination": "/docs/modules/chains"
},
{
"source": "/docs/modules/agents/how_to/custom_llm_agent",
"destination": "/docs/modules/agents/how_to/custom_agent"
@ -1424,10 +1428,6 @@
"source": "/docs/modules/callbacks/integrations/argilla",
"destination": "/docs/integrations/callbacks/argilla"
},
{
"source": "/en/latest/modules/chains/examples/extraction.html",
"destination": "/docs/modules/chains/additional/extraction"
},
{
"source": "/en/latest/modules/chains/examples/flare.html",
"destination": "/cookbook"
@ -3660,102 +3660,6 @@
"source": "/en/latest/:path*",
"destination": "/docs/:path*"
},
{
"source": "/docs/modules/chains/additional/constitutional_chain",
"destination": "/docs/guides/safety/constitutional_chain"
},
{
"source": "/docs/modules/chains/additional/moderation",
"destination": "/docs/guides/safety/moderation"
},
{
"source": "/docs/modules/chains/popular/api",
"destination": "/docs/use_cases/apis/api"
},
{
"source": "/docs/modules/chains/additional/analyze_document",
"destination": "/cookbook"
},
{
"source": "/docs/modules/chains/popular/chat_vector_db",
"destination": "/docs/use_cases/question_answering/"
},
{
"source": "/docs/modules/chains/additional/multi_retrieval_qa_router",
"destination": "/docs/use_cases/question_answering/multi_retrieval_qa_router"
},
{
"source": "/docs/modules/chains/additional/question_answering",
"destination": "/docs/use_cases/question_answering/question_answering"
},
{
"source": "/docs/modules/chains/popular/vector_db_qa",
"destination": "/docs/use_cases/question_answering/vector_db_qa"
},
{
"source": "/docs/modules/chains/popular/summarize",
"destination": "/docs/use_cases/summarization/summarize"
},
{
"source": "/docs/modules/chains/popular/sqlite",
"destination": "/docs/use_cases/qa_structured/sql"
},
{
"source": "/docs/modules/chains/popular/openai_functions",
"destination": "/docs/modules/chains/how_to/openai_functions"
},
{
"source": "/docs/modules/chains/additional/llm_requests",
"destination": "/docs/use_cases/apis/llm_requests"
},
{
"source": "/docs/modules/chains/additional/openai_openapi",
"destination": "/docs/use_cases/apis/openai_openapi"
},
{
"source": "/docs/modules/chains/additional/openapi",
"destination": "/docs/use_cases/apis/openapi"
},
{
"source": "/docs/modules/chains/additional/openapi_openai",
"destination": "/docs/use_cases/apis/openapi_openai"
},
{
"source": "/docs/modules/chains/additional/cpal",
"destination": "/docs/use_cases/more/code_writing/cpal"
},
{
"source": "/docs/use_cases/code_writing/cpal",
"destination": "/docs/use_cases/more/code_writing/cpal"
},
{
"source": "/docs/modules/chains/additional/llm_bash",
"destination": "/docs/use_cases/more/code_writing/llm_bash"
},
{
"source": "/docs/use_cases/code_writing/llm_bash",
"destination": "/docs/use_cases/more/code_writing/llm_bash"
},
{
"source": "/docs/modules/chains/additional/llm_math",
"destination": "/docs/use_cases/more/code_writing/llm_math"
},
{
"source": "/docs/use_cases/code_writing/llm_math",
"destination": "/docs/use_cases/more/code_writing/llm_math"
},
{
"source": "/docs/modules/chains/additional/llm_symbolic_math",
"destination": "/docs/use_cases/more/code_writing/llm_symbolic_math"
},
{
"source": "/docs/use_cases/code_writing/llm_symbolic_math",
"destination": "/docs/use_cases/more/code_writing/llm_symbolic_math"
},
{
"source": "/docs/modules/chains/additional/pal",
"destination": "/docs/use_cases/more/code_writing/pal"
},
{
"source": "/docs/use_cases/code_writing/pal",
"destination": "/docs/use_cases/more/code_writing/pal"
@ -3768,74 +3672,6 @@
"source": "/docs/use_cases/code_writing(/?)",
"destination": "/docs/use_cases/more/code_writing/"
},
{
"source": "/docs/modules/chains/additional/graph_arangodb_qa",
"destination": "/docs/use_cases/graph/graph_arangodb_qa"
},
{
"source": "/docs/modules/chains/additional/graph_cypher_qa",
"destination": "/docs/use_cases/graph/graph_cypher_qa"
},
{
"source": "/docs/modules/chains/additional/graph_hugegraph_qa",
"destination": "/docs/use_cases/graph/graph_hugegraph_qa"
},
{
"source": "/docs/modules/chains/additional/graph_kuzu_qa",
"destination": "/docs/use_cases/graph/graph_kuzu_qa"
},
{
"source": "/docs/modules/chains/additional/graph_falkordb_qa",
"destination": "/docs/use_cases/graph/graph_falkordb_qa"
},
{
"source": "/docs/modules/chains/additional/graph_nebula_qa",
"destination": "/docs/use_cases/graph/graph_nebula_qa"
},
{
"source": "/docs/modules/chains/additional/graph_qa",
"destination": "/docs/use_cases/graph/graph_qa"
},
{
"source": "/docs/modules/chains/additional/graph_sparql_qa",
"destination": "/docs/use_cases/graph/graph_sparql_qa"
},
{
"source": "/docs/modules/chains/additional/neptune_cypher_qa",
"destination": "/docs/use_cases/graph/neptune_cypher_qa"
},
{
"source": "/docs/modules/chains/additional/tot",
"destination": "/docs/use_cases/graph/tot"
},
{
"source": "/docs/modules/chains/additional/flare",
"destination": "/cookbook"
},
{
"source": "/docs/modules/chains/additional/hyde",
"destination": "/cookbook"
},
{
"source": "/docs/modules/chains/additional/qa_citations",
"destination": "/cookbook"
},
{
"source": "/docs/modules/chains/additional/vector_db_text_generation",
"destination": "/docs/use_cases/question_answering/vector_db_text_generation"
},
{
"source": "/docs/modules/chains/additional/llm_checker",
"destination": "/docs/use_cases/more/self_check/llm_checker"
},
{
"source": "/docs/use_cases/self_check/llm_checker",
"destination": "/docs/use_cases/more/self_check/llm_checker"
},
{
"source": "/docs/modules/chains/additional/llm_summarization_checker",
"destination": "/docs/use_cases/more/self_check/llm_summarization_checker"
},
{
"source": "/docs/use_cases/self_check/llm_summarization_checker",
"destination": "/docs/use_cases/more/self_check/llm_summarization_checker"
@ -3847,13 +3683,6 @@
{
"source": "/docs/use_cases/self_check(/?)",
"destination": "/docs/use_cases/more/self_check/"
}, {
"source": "/docs/modules/chains/additional/elasticsearch_database",
"destination": "/docs/use_cases/qa_structured/integrations/elasticsearch"
},
{
"source": "/docs/modules/chains/additional/tagging",
"destination": "/docs/use_cases/tagging"
},
{
"source": "docs/integrations/providers/agent_with_wandb_tracing",

@ -80,6 +80,7 @@ from langchain.chains.router import (
)
from langchain.chains.sequential import SequentialChain, SimpleSequentialChain
from langchain.chains.sql_database.query import create_sql_query_chain
from langchain.chains.summarize import load_summarize_chain
from langchain.chains.transform import TransformChain
__all__ = [
@ -140,4 +141,5 @@ __all__ = [
"create_sql_query_chain",
"create_retrieval_chain",
"create_history_aware_retriever",
"load_summarize_chain",
]

@ -59,6 +59,7 @@ EXPECTED_ALL = [
"create_sql_query_chain",
"create_history_aware_retriever",
"create_retrieval_chain",
"load_summarize_chain",
]
