docs: add wikipedia integration docs (#24932)

Dear LangChain maintainers,

I added the Wikipedia integration docs according to the [web
docs](https://python.langchain.com/v0.2/docs/integrations/retrievers/wikipedia/),
following the format of the [Tavily
example](https://github.com/langchain-ai/langchain/blob/master/docs/docs/integrations/retrievers/tavily.ipynb)
and the [retriever
template](https://github.com/langchain-ai/langchain/blob/master/libs/cli/langchain_cli/integration_template/docs/retrievers.ipynb).
This is my first time contributing to a large repo, so please let me know
if I'm doing anything wrong. Thank you!

Related: #24908

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
David Gao 2024-08-02 10:12:04 -04:00 committed by GitHub
parent 71c0564c9f
commit fe1820cdaf
2 changed files with 152 additions and 161 deletions


@@ -38,3 +38,4 @@ The below retrievers will search over an external index (e.g., constructed from
|-----------|--------|---------|
| [ArxivRetriever](/docs/integrations/retrievers/arxiv) | Scholarly articles on [arxiv.org](https://arxiv.org/) | [langchain_community](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.arxiv.ArxivRetriever.html) |
| [TavilySearchAPIRetriever](/docs/integrations/retrievers/tavily) | Internet search | [langchain_community](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.tavily_search_api.TavilySearchAPIRetriever.html) |
| [WikipediaRetriever](/docs/integrations/retrievers/wikipedia) | [Wikipedia](https://www.wikipedia.org/) articles | [langchain_community](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.wikipedia.WikipediaRetriever.html) |


@@ -2,14 +2,51 @@
"cells": [
{
"cell_type": "markdown",
"id": "9fc6205b",
"id": "62727aaa-bcff-4087-891c-e539f824ee1f",
"metadata": {},
"source": [
"# Wikipedia\n",
"---\n",
"sidebar_label: Wikipedia\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "d62a16c1-10de-4f99-b392-c4ad2e6123a1",
"metadata": {},
"source": [
"# WikipediaRetriever\n",
"\n",
"## Overview\n",
">[Wikipedia](https://wikipedia.org/) is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. `Wikipedia` is the largest and most-read reference work in history.\n",
"\n",
"This notebook shows how to retrieve wiki pages from `wikipedia.org` into the Document format that is used downstream."
"This notebook shows how to retrieve wiki pages from `wikipedia.org` into the [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) format that is used downstream.\n",
"\n",
"### Integration details\n",
"\n",
"| Retriever | Source | Package |\n",
"| :--- | :--- | :---: |\n",
"[WikipediaRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.wikipedia.WikipediaRetriever.html) | [Wikipedia](https://www.wikipedia.org/) articles | langchain_community |"
]
},
{
"cell_type": "markdown",
"id": "eb7d377c-168b-40e8-bd61-af6a4fb1b44f",
"metadata": {},
"source": [
"## Setup\n",
"If you want to get automated tracing from runs of individual tools, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1bbc6013-2617-4f7e-9d8b-7453d09315c0",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
@@ -17,15 +54,9 @@
"id": "51489529-5dcd-4b86-bda6-de0a39d8ffd1",
"metadata": {},
"source": [
"## Installation"
]
},
{
"cell_type": "markdown",
"id": "1435c804-069d-4ade-9a7b-006b97b767c1",
"metadata": {},
"source": [
"First, you need to install `wikipedia` python package."
"### Installation\n",
"\n",
"The integration lives in the `langchain-community` package. We also need to install the `wikipedia` python package itself."
]
},
{
@@ -37,7 +68,15 @@
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet wikipedia"
"%pip install -qU langchain_community wikipedia"
]
},
{
"cell_type": "markdown",
"id": "ae622ac6-d18a-4754-a4bd-d30a078c19b5",
"metadata": {},
"source": [
"## Instantiation"
]
},
{
@@ -45,7 +84,9 @@
"id": "6c15470b-a16b-4e0d-bc6a-6998bafbb5a4",
"metadata": {},
"source": [
"`WikipediaRetriever` has these arguments:\n",
"Now we can instantiate our retriever:\n",
"\n",
"`WikipediaRetriever` parameters include:\n",
"- optional `lang`: default=\"en\". Use it to search in a specific language part of Wikipedia\n",
"- optional `load_max_docs`: default=100. Use it to limit number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now.\n",
"- optional `load_all_available_meta`: default=False. By default only the most important fields downloaded: `Published` (date when document was published/last updated), `title`, `Summary`. If True, other fields also downloaded.\n",
@@ -53,200 +94,149 @@
"`get_relevant_documents()` has one argument, `query`: free text which used to find documents in Wikipedia"
]
},
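{
"cell_type": "markdown",
"id": "c2f7a8d4",
"metadata": {},
"source": [
"As a minimal sketch, the optional parameters described above can be passed at construction time (the values below are illustrative, not recommendations):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8e3b51f0",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.retrievers import WikipediaRetriever\n",
"\n",
"# Illustrative configuration: search the German-language Wikipedia,\n",
"# download at most two documents, and keep the default trimmed metadata.\n",
"retriever_de = WikipediaRetriever(\n",
"    lang=\"de\",\n",
"    load_max_docs=2,\n",
"    load_all_available_meta=False,\n",
")"
]
},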
{
"cell_type": "markdown",
"id": "ae3c3d16",
"metadata": {},
"source": [
"## Examples"
]
},
{
"cell_type": "markdown",
"id": "6fafb73b-d6ec-4822-b161-edf0aaf5224a",
"metadata": {},
"source": [
"### Running retriever"
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "d0e6f506",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_community.retrievers import WikipediaRetriever"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "f381f642",
"execution_count": 1,
"id": "b78f0cd0-ffea-4fe3-9d1d-54639c4ef1ff",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.retrievers import WikipediaRetriever\n",
"\n",
"retriever = WikipediaRetriever()"
]
},
{
"cell_type": "markdown",
"id": "12aead36-7b97-4d9c-82e7-ec644a3127f9",
"metadata": {},
"source": [
"## Usage"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "20ae1a74",
"execution_count": 2,
"id": "54a76605-6b1e-44bf-b8a2-7d48119290c4",
"metadata": {},
"outputs": [],
"source": [
"docs = retriever.invoke(\"HUNTER X HUNTER\")"
"docs = retriever.invoke(\"TOKYO GHOUL\")"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "1d5a5088",
"execution_count": 3,
"id": "65ada2b7-3507-4dcb-9982-5f8f4e97a2e1",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'title': 'Hunter × Hunter',\n",
" 'summary': 'Hunter × Hunter (stylized as HUNTER×HUNTER and pronounced \"hunter hunter\") is a Japanese manga series written and illustrated by Yoshihiro Togashi. It has been serialized in Shueisha\\'s shōnen manga magazine Weekly Shōnen Jump since March 1998, although the manga has frequently gone on extended hiatuses since 2006. Its chapters have been collected in 37 tankōbon volumes as of November 2022. The story focuses on a young boy named Gon Freecss who discovers that his father, who left him at a young age, is actually a world-renowned Hunter, a licensed professional who specializes in fantastical pursuits such as locating rare or unidentified animal species, treasure hunting, surveying unexplored enclaves, or hunting down lawless individuals. Gon departs on a journey to become a Hunter and eventually find his father. Along the way, Gon meets various other Hunters and encounters the paranormal.\\nHunter × Hunter was adapted into a 62-episode anime television series produced by Nippon Animation and directed by Kazuhiro Furuhashi, which ran on Fuji Television from October 1999 to March 2001. Three separate original video animations (OVAs) totaling 30 episodes were subsequently produced by Nippon Animation and released in Japan from 2002 to 2004. A second anime television series by Madhouse aired on Nippon Television from October 2011 to September 2014, totaling 148 episodes, with two animated theatrical films released in 2013. There are also numerous audio albums, video games, musicals, and other media based on Hunter × Hunter.\\nThe manga has been translated into English and released in North America by Viz Media since April 2005. Both television series have been also licensed by Viz Media, with the first series having aired on the Funimation Channel in 2009 and the second series broadcast on Adult Swim\\'s Toonami programming block from April 2016 to June 2019.\\nHunter × Hunter has been a huge critical and financial success and has become one of the best-selling manga series of all time, having over 84 million copies in circulation by July 2022.\\n\\n'}"
]
},
"execution_count": 31,
"metadata": {},
"output_type": "execute_result"
"name": "stdout",
"output_type": "stream",
"text": [
"Tokyo Ghoul (Japanese: 東京喰種(トーキョーグール), Hepburn: Tōkyō Gūru) is a Japanese dark fantasy manga series written and illustrated by Sui Ishida. It was serialized in Shueisha's seinen manga magazine Weekly Young Jump from September 2011 to September 2014, with its chapters collected in 14 tankōbon volumes. The story is set in an alternate version of Tokyo where humans coexist with ghouls, beings who loo\n"
]
}
],
"source": [
"docs[0].metadata # meta-information of the Document"
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "c0ccd0c7-f6a6-43e7-b842-5f57afb94224",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Hunter × Hunter (stylized as HUNTER×HUNTER and pronounced \"hunter hunter\") is a Japanese manga series written and illustrated by Yoshihiro Togashi. It has been serialized in Shueisha\\'s shōnen manga magazine Weekly Shōnen Jump since March 1998, although the manga has frequently gone on extended hiatuses since 2006. Its chapters have been collected in 37 tankōbon volumes as of November 2022. The sto'"
]
},
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0].page_content[:400] # a content of the Document"
"print(docs[0].page_content[:400])"
]
},
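{
"cell_type": "markdown",
"id": "4a9d0c6b",
"metadata": {},
"source": [
"Each result is a `Document`, so you can also inspect its metadata; by default this includes the fields described above, such as `title` and `summary`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d7e2f935",
"metadata": {},
"outputs": [],
"source": [
"docs[0].metadata  # meta-information of the first Document"
]
},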
{
"cell_type": "markdown",
"id": "2670363b-3806-4c7e-b14d-90a4d5d2a200",
"id": "ae3c3d16",
"metadata": {},
"source": [
"### Question Answering on facts"
"## Use within a chain\n",
"Like other retrievers, `WikipediaRetriever` can be incorporated into LLM applications via [chains](/docs/how_to/sequence/).\n",
"\n",
"We will need a LLM or chat model:\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs customVarName=\"llm\" />\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "bb3601df-53ea-4826-bdbe-554387bc3ad4",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" ········\n"
]
}
],
"source": [
"# get a token: https://platform.openai.com/account/api-keys\n",
"\n",
"from getpass import getpass\n",
"\n",
"OPENAI_API_KEY = getpass()"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "e9c1a114-0410-4804-be30-05f34a9760f9",
"metadata": {
"tags": []
},
"execution_count": 4,
"id": "4bd3d268-eb8c-46e9-930a-18f5e2a50008",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"# | output: false\n",
"# | echo: false\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = OPENAI_API_KEY"
]
},
{
"cell_type": "code",
"execution_count": 33,
"id": "51a33cc9-ec42-4afc-8a2d-3bfff476aa59",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chains import ConversationalRetrievalChain\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-3.5-turbo\") # switch to 'gpt-4'\n",
"qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)"
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "ea537767-a8bf-4adf-ae03-b353c9145d58",
"metadata": {
"tags": []
},
"execution_count": 5,
"id": "9b52bc65-1b2e-4c30-ab43-41eaa5bf79c3",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\n",
" \"\"\"\n",
" Answer the question based only on the context provided.\n",
" Context: {context}\n",
" Question: {question}\n",
" \"\"\"\n",
")\n",
"\n",
"\n",
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"\n",
"chain = (\n",
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "0d268905-3b19-4338-ac10-223c0fe4d5e4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"-> **Question**: What is Apify? \n",
"\n",
"**Answer**: Apify is a platform that allows you to easily automate web scraping, data extraction and web automation. It provides a cloud-based infrastructure for running web crawlers and other automation tasks, as well as a web-based tool for building and managing your crawlers. Additionally, Apify offers a marketplace for buying and selling pre-built crawlers and related services. \n",
"\n",
"-> **Question**: When the Monument to the Martyrs of the 1830 Revolution was created? \n",
"\n",
"**Answer**: Apify is a web scraping and automation platform that enables you to extract data from websites, turn unstructured data into structured data, and automate repetitive tasks. It provides a user-friendly interface for creating web scraping scripts without any coding knowledge. Apify can be used for various web scraping tasks such as data extraction, web monitoring, content aggregation, and much more. Additionally, it offers various features such as proxy support, scheduling, and integration with other tools to make web scraping and automation tasks easier and more efficient. \n",
"\n",
"-> **Question**: What is the Abhayagiri Vihāra? \n",
"\n",
"**Answer**: Abhayagiri Vihāra was a major monastery site of Theravada Buddhism that was located in Anuradhapura, Sri Lanka. It was founded in the 2nd century BCE and is considered to be one of the most important monastic complexes in Sri Lanka. \n",
"\n"
]
"data": {
"text/plain": [
"'The main character in Tokyo Ghoul is Ken Kaneki, who transforms into a ghoul after receiving an organ transplant from a ghoul named Rize.'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"questions = [\n",
" \"What is Apify?\",\n",
" \"When the Monument to the Martyrs of the 1830 Revolution was created?\",\n",
" \"What is the Abhayagiri Vihāra?\",\n",
" # \"How big is Wikipédia en français?\",\n",
"]\n",
"chat_history = []\n",
"chain.invoke(\n",
" \"Who is the main character in `Tokyo Ghoul` and does he transform into a ghoul?\"\n",
")"
]
},
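{
"cell_type": "markdown",
"id": "f59c3e07",
"metadata": {},
"source": [
"As with any LCEL chain, you can also stream the answer as it is generated rather than waiting for the full string (a minimal sketch; chunk sizes depend on the model):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1b84da62",
"metadata": {},
"outputs": [],
"source": [
"# Stream the answer chunk by chunk as the model generates it.\n",
"for chunk in chain.stream(\"Who wrote the `Tokyo Ghoul` manga?\"):\n",
"    print(chunk, end=\"\", flush=True)"
]
},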
{
"cell_type": "markdown",
"id": "236bbafb-ebd4-4165-9b8f-d47605f6eef3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"for question in questions:\n",
" result = qa({\"question\": question, \"chat_history\": chat_history})\n",
" chat_history.append((question, result[\"answer\"]))\n",
" print(f\"-> **Question**: {question} \\n\")\n",
" print(f\"**Answer**: {result['answer']} \\n\")"
"For detailed documentation of all `WikipediaRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.wikipedia.WikipediaRetriever.html#langchain-community-retrievers-wikipedia-wikipediaretriever)."
]
}
],
@@ -266,7 +256,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.10.4"
}
},
"nbformat": 4,