docs: fix broken links (#16855)

Erick Friis 4 months ago committed by GitHub
parent a265878d71
commit 6fc2835255

@@ -93,6 +93,3 @@ Head to the reference section for full documentation of all classes and methods
### [Developer's guide](/docs/contributing)
Check out the developer's guide for guidelines on contributing and help getting your dev environment set up.
-### [Community](/docs/community)
-Head to the [Community navigator](/docs/community) to find places to ask questions, share feedback, meet other developers, and dream about the future of LLMs.

@@ -101,7 +101,7 @@
"metadata": {},
"source": [
"You can create custom prompt templates that format the prompt in any way you want.\n",
"For more information, see [Custom Prompt Templates](./custom_prompt_template).\n",
"For more information, see [Prompt Template Composition](./composition).\n",
"\n",
"## `ChatPromptTemplate`\n",
"\n",

@@ -752,7 +752,7 @@
"\n",
"* [SQL use case](/docs/use_cases/sql/): Many of the challenges of working with SQL db's and CSV's are generic to any structured data type, so it's useful to read the SQL techniques even if you're using Pandas for CSV data analysis.\n",
"* [Tool use](/docs/use_cases/tool_use/): Guides on general best practices when working with chains and agents that invoke tools\n",
"* [Agents](/docs/use_cases/agents/): Understand the fundamentals of building LLM agents.\n",
"* [Agents](/docs/modules/agents/): Understand the fundamentals of building LLM agents.\n",
"* Integrations: Sandboxed envs like [E2B](/docs/integrations/tools/e2b_data_analysis) and [Bearly](/docs/integrations/tools/bearly), utilities like [SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase), related agents like [Spark DataFrame agent](/docs/integrations/toolkits/spark)."
]
}
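A minimal sketch of the agent-driven CSV analysis these links cover, assuming `langchain-experimental` and `langchain-openai` are installed; the CSV path and question are illustrative:

```python
import pandas as pd
from langchain_experimental.agents import create_pandas_dataframe_agent
from langchain_openai import ChatOpenAI

df = pd.read_csv("titanic.csv")  # illustrative dataset path
# The agent generates and executes Python code against `df` to answer questions.
agent = create_pandas_dataframe_agent(ChatOpenAI(temperature=0), df, verbose=True)
agent.invoke({"input": "How many rows are in the dataframe?"})
```

The sandboxed environments linked above are relevant precisely because this agent executes model-generated code.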

@@ -586,11 +586,12 @@
"Vector stores are commonly used for retrieval, but there are other ways to do retrieval, too.\n",
"\n",
"`Retriever`: An object that returns `Document`s given a text query\n",
"\n",
"- [Docs](/docs/modules/data_connection/retrievers/): Further documentation on the interface and built-in retrieval techniques. Some of which include:\n",
" - `MultiQueryRetriever` [generates variants of the input question](/docs/modules/data_connection/retrievers/MultiQueryRetriever) to improve retrieval hit rate.\n",
" - `MultiVectorRetriever` (diagram below) instead generates [variants of the embeddings](/docs/modules/data_connection/retrievers/multi_vector), also in order to improve retrieval hit rate.\n",
" - `Max marginal relevance` selects for [relevance and diversity](https://www.cs.cmu.edu/~jgc/publication/The_Use_MMR_Diversity_Based_LTMIR_1998.pdf) among the retrieved documents to avoid passing in duplicate context.\n",
" - Documents can be filtered during vector store retrieval using [`metadata` filters](/docs/use_cases/question_answering/document-context-aware-QA).\n",
" - Documents can be filtered during vector store retrieval using metadata filters, such as with a [Self Query Retriever](/docs/modules/data_connection/retrievers/self_query).\n",
"- [Integrations](/docs/integrations/retrievers/): Integrations with retrieval services.\n",
"- [Interface](https://api.python.langchain.com/en/latest/retrievers/langchain_core.retrievers.BaseRetriever.html): API reference for the base interface."
]

@@ -45,9 +45,9 @@
"\n",
"A central question for building a summarizer is how to pass your documents into the LLM's context window. Two common approaches for this are:\n",
"\n",
"1. `Stuff`: Simply \"stuff\" all your documents into a single prompt. This is the simplest approach (see [here](/docs/modules/chains/document/stuff) for more on the `StuffDocumentsChains`, which is used for this method).\n",
"1. `Stuff`: Simply \"stuff\" all your documents into a single prompt. This is the simplest approach (see [here](/docs/modules/chains#lcel-chains) for more on the `create_stuff_documents_chain` constructor, which is used for this method).\n",
"\n",
"2. `Map-reduce`: Summarize each document on it's own in a \"map\" step and then \"reduce\" the summaries into a final summary (see [here](/docs/modules/chains/document/map_reduce) for more on the `MapReduceDocumentsChain`, which is used for this method)."
"2. `Map-reduce`: Summarize each document on it's own in a \"map\" step and then \"reduce\" the summaries into a final summary (see [here](/docs/modules/chains#legacy-chains) for more on the `MapReduceDocumentsChain`, which is used for this method)."
]
},
{
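A minimal sketch of the "stuff" approach from option 1 above, using the `create_stuff_documents_chain` constructor the updated link points to; the prompt wording and document text are illustrative, and `langchain-openai` plus an `OPENAI_API_KEY` are assumed:

```python
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.documents import Document
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Every document is concatenated into the single {context} slot of one prompt.
prompt = ChatPromptTemplate.from_template("Summarize the following:\n\n{context}")
chain = create_stuff_documents_chain(ChatOpenAI(temperature=0), prompt)
summary = chain.invoke(
    {"context": [Document(page_content="LangChain is a framework for LLM apps.")]}
)
```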
@@ -523,7 +523,7 @@
"source": [
"## Option 3. Refine\n",
" \n",
"[Refine](/docs/modules/chains/document/refine) is similar to map-reduce:\n",
"[RefineDocumentsChain](/docs/modules/chains#legacy-chains) is similar to map-reduce:\n",
"\n",
"> The refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the latest intermediate answer to an LLM chain to get a new answer.\n",
"\n",
@@ -647,24 +647,10 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": null,
"id": "0ddd522e-30dc-4f6a-b993-c4f97e656c4f",
"metadata": {},
"outputs": [
{
"ename": "ValueError",
"evalue": "`run` not supported when there is not exactly one output key. Got ['output_text', 'intermediate_steps'].",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mValueError\u001b[0m Traceback (most recent call last)",
"Cell \u001b[0;32mIn[17], line 4\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mchains\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m AnalyzeDocumentChain\n\u001b[1;32m 3\u001b[0m summarize_document_chain \u001b[38;5;241m=\u001b[39m AnalyzeDocumentChain(combine_docs_chain\u001b[38;5;241m=\u001b[39mchain, text_splitter\u001b[38;5;241m=\u001b[39mtext_splitter)\n\u001b[0;32m----> 4\u001b[0m \u001b[43msummarize_document_chain\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mrun\u001b[49m\u001b[43m(\u001b[49m\u001b[43mdocs\u001b[49m\u001b[43m[\u001b[49m\u001b[38;5;241;43m0\u001b[39;49m\u001b[43m]\u001b[49m\u001b[43m)\u001b[49m\n",
"File \u001b[0;32m~/langchain/libs/langchain/langchain/chains/base.py:496\u001b[0m, in \u001b[0;36mChain.run\u001b[0;34m(self, callbacks, tags, metadata, *args, **kwargs)\u001b[0m\n\u001b[1;32m 459\u001b[0m \u001b[38;5;250m\u001b[39m\u001b[38;5;124;03m\"\"\"Convenience method for executing chain.\u001b[39;00m\n\u001b[1;32m 460\u001b[0m \n\u001b[1;32m 461\u001b[0m \u001b[38;5;124;03mThe main difference between this method and `Chain.__call__` is that this\u001b[39;00m\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 493\u001b[0m \u001b[38;5;124;03m # -> \"The temperature in Boise is...\"\u001b[39;00m\n\u001b[1;32m 494\u001b[0m \u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m 495\u001b[0m \u001b[38;5;66;03m# Run at start to make sure this is possible/defined\u001b[39;00m\n\u001b[0;32m--> 496\u001b[0m _output_key \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_run_output_key\u001b[49m\n\u001b[1;32m 498\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m args \u001b[38;5;129;01mand\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m kwargs:\n\u001b[1;32m 499\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mlen\u001b[39m(args) \u001b[38;5;241m!=\u001b[39m \u001b[38;5;241m1\u001b[39m:\n",
"File \u001b[0;32m~/langchain/libs/langchain/langchain/chains/base.py:445\u001b[0m, in \u001b[0;36mChain._run_output_key\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 442\u001b[0m \u001b[38;5;129m@property\u001b[39m\n\u001b[1;32m 443\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m_run_output_key\u001b[39m(\u001b[38;5;28mself\u001b[39m) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m \u001b[38;5;28mstr\u001b[39m:\n\u001b[1;32m 444\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mlen\u001b[39m(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39moutput_keys) \u001b[38;5;241m!=\u001b[39m \u001b[38;5;241m1\u001b[39m:\n\u001b[0;32m--> 445\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mValueError\u001b[39;00m(\n\u001b[1;32m 446\u001b[0m \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m`run` not supported when there is not exactly \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 447\u001b[0m \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mone output key. Got \u001b[39m\u001b[38;5;132;01m{\u001b[39;00m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39moutput_keys\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m.\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 448\u001b[0m )\n\u001b[1;32m 449\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39moutput_keys[\u001b[38;5;241m0\u001b[39m]\n",
"\u001b[0;31mValueError\u001b[0m: `run` not supported when there is not exactly one output key. Got ['output_text', 'intermediate_steps']."
]
}
],
"outputs": [],
"source": [
"from langchain.chains import AnalyzeDocumentChain\n",
"\n",

@@ -1,6 +1,6 @@
# Chat Bot Feedback Template
-This template shows how to evaluate your chat bot without explicit user feedback. It defines a simple chat bot in [chain.py](./chat_bot_feedback/chain.py) and a custom evaluator that scores bot response effectiveness based on the subsequent user response. You can apply this run evaluator to your own chat bot by calling `with_config` on the chat bot before serving. You can also directly deploy your chat app using this template.
+This template shows how to evaluate your chat bot without explicit user feedback. It defines a simple chat bot in [chain.py](https://github.com/langchain-ai/langchain/blob/master/templates/chat-bot-feedback/chat_bot_feedback/chain.py) and a custom evaluator that scores bot response effectiveness based on the subsequent user response. You can apply this run evaluator to your own chat bot by calling `with_config` on the chat bot before serving. You can also directly deploy your chat app using this template.
[Chat bots](https://python.langchain.com/docs/use_cases/chatbots) are one of the most common interfaces for deploying LLMs. The quality of chat bots varies, making continuous development important. But users rarely leave explicit feedback through mechanisms like thumbs-up or thumbs-down buttons. Furthermore, traditional analytics such as "session length" or "conversation length" often lack clarity. However, multi-turn conversations with a chat bot can provide a wealth of information, which we can transform into metrics for fine-tuning, evaluation, and product analytics.
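A sketch of attaching such a run evaluator with `with_config`; `serve_chain` and `my_response_evaluator` are hypothetical stand-ins for the template's chain and the evaluator defined in chain.py, not the template's exact names:

```python
from langchain.callbacks.tracers.evaluation import EvaluatorCallbackHandler

# Hypothetical names: `my_response_evaluator` is a RunEvaluator like the one
# defined in chain.py, and `serve_chain` is the chat bot chain you serve.
handler = EvaluatorCallbackHandler(evaluators=[my_response_evaluator])
serve_chain = serve_chain.with_config(callbacks=[handler])
```

Wired in this way, each traced conversation turn is scored in the background as feedback arrives.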

@@ -2,7 +2,7 @@
This template enables the user to use `pgvector` to combine PostgreSQL with semantic search / RAG.
-It uses the [PGVector](https://github.com/pgvector/pgvector) extension as shown in the [RAG empowered SQL cookbook](cookbook/retrieval_in_sql.ipynb).
+It uses the [PGVector](https://github.com/pgvector/pgvector) extension as shown in the [RAG empowered SQL cookbook](https://github.com/langchain-ai/langchain/blob/master/cookbook/retrieval_in_sql.ipynb).
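For orientation, a minimal sketch of using the `PGVector` vector store against such a database; the connection string, collection name, and texts are all illustrative, and `langchain-community`, `pgvector`, and an OpenAI key are assumed:

```python
from langchain_community.vectorstores.pgvector import PGVector
from langchain_openai import OpenAIEmbeddings

# Illustrative connection string; point it at your own Postgres + pgvector instance.
CONNECTION_STRING = "postgresql+psycopg2://user:pass@localhost:5432/vectordb"

store = PGVector.from_texts(
    ["pgvector adds vector similarity search to PostgreSQL."],
    OpenAIEmbeddings(),
    collection_name="demo_docs",
    connection_string=CONNECTION_STRING,
)
docs = store.similarity_search("What does pgvector do?", k=1)
```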
## Environment Setup
