added `Wikipedia` retriever (#4302)

- added `Wikipedia` retriever. It is effectively a wrapper for
`WikipediaAPIWrapper`: it wraps `load()` into `get_relevant_documents()` (see the usage sketch below)
- sorted `__all__` in `retrievers/__init__.py`
- added integration tests for the WikipediaRetriever
- added an example (as Jupyter notebook) for the WikipediaRetriever
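
A minimal usage sketch of the new retriever (illustrative only; the query string is arbitrary):

```python
from langchain.retrievers import WikipediaRetriever

# WikipediaRetriever subclasses WikipediaAPIWrapper, so the wrapper's arguments
# (lang, top_k_results, load_all_available_meta, doc_content_chars_max) apply unchanged
retriever = WikipediaRetriever()

# get_relevant_documents() simply delegates to WikipediaAPIWrapper.load()
docs = retriever.get_relevant_documents("HUNTER X HUNTER")
print(docs[0].metadata["title"])
```
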
Leonid Ganeline 1 year ago committed by GitHub
parent ea83eed9ba
commit ce15ffae6a

@@ -0,0 +1,274 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "9fc6205b",
"metadata": {},
"source": [
"# Wikipedia\n",
"\n",
">[Wikipedia](https://wikipedia.org/) is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. `Wikipedia` is the largest and most-read reference work in history.\n",
"\n",
"This notebook shows how to retrieve wiki pages from `wikipedia.org` into the Document format that is used downstream."
]
},
{
"cell_type": "markdown",
"id": "51489529-5dcd-4b86-bda6-de0a39d8ffd1",
"metadata": {},
"source": [
"## Installation"
]
},
{
"cell_type": "markdown",
"id": "1435c804-069d-4ade-9a7b-006b97b767c1",
"metadata": {},
"source": [
"First, you need to install `wikipedia` python package."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1a737220",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"#!pip install wikipedia"
]
},
{
"cell_type": "markdown",
"id": "6c15470b-a16b-4e0d-bc6a-6998bafbb5a4",
"metadata": {},
"source": [
"`WikipediaRetriever` has these arguments:\n",
"- optional `lang`: default=\"en\". Use it to search in a specific language part of Wikipedia\n",
"- optional `load_max_docs`: default=100. Use it to limit number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now.\n",
"- optional `load_all_available_meta`: default=False. By default only the most important fields downloaded: `Published` (date when document was published/last updated), `title`, `Summary`. If True, other fields also downloaded.\n",
"\n",
"`get_relevant_documents()` has one argument, `query`: free text which used to find documents in Wikipedia"
]
},
{
"cell_type": "markdown",
"id": "ae3c3d16",
"metadata": {},
"source": [
"## Examples"
]
},
{
"cell_type": "markdown",
"id": "6fafb73b-d6ec-4822-b161-edf0aaf5224a",
"metadata": {},
"source": [
"### Running retriever"
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "d0e6f506",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.retrievers import WikipediaRetriever"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "f381f642",
"metadata": {},
"outputs": [],
"source": [
"retriever = WikipediaRetriever()"
]
},
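{
"cell_type": "markdown",
"id": "c3f5a8d2",
"metadata": {},
"source": [
"You can also pass the arguments described above when constructing the retriever. A minimal sketch (the values and the `custom_retriever` name are illustrative, not defaults):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a7e41b90",
"metadata": {},
"outputs": [],
"source": [
"# illustrative only: fewer results, all metadata fields, shorter page_content\n",
"custom_retriever = WikipediaRetriever(\n",
"    lang=\"en\", top_k_results=2, load_all_available_meta=True, doc_content_chars_max=1000\n",
")"
]
},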
{
"cell_type": "code",
"execution_count": 30,
"id": "20ae1a74",
"metadata": {},
"outputs": [],
"source": [
"docs = retriever.get_relevant_documents(query='HUNTER X HUNTER')"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "1d5a5088",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'title': 'Hunter × Hunter',\n",
" 'summary': 'Hunter × Hunter (stylized as HUNTER×HUNTER and pronounced \"hunter hunter\") is a Japanese manga series written and illustrated by Yoshihiro Togashi. It has been serialized in Shueisha\\'s shōnen manga magazine Weekly Shōnen Jump since March 1998, although the manga has frequently gone on extended hiatuses since 2006. Its chapters have been collected in 37 tankōbon volumes as of November 2022. The story focuses on a young boy named Gon Freecss who discovers that his father, who left him at a young age, is actually a world-renowned Hunter, a licensed professional who specializes in fantastical pursuits such as locating rare or unidentified animal species, treasure hunting, surveying unexplored enclaves, or hunting down lawless individuals. Gon departs on a journey to become a Hunter and eventually find his father. Along the way, Gon meets various other Hunters and encounters the paranormal.\\nHunter × Hunter was adapted into a 62-episode anime television series produced by Nippon Animation and directed by Kazuhiro Furuhashi, which ran on Fuji Television from October 1999 to March 2001. Three separate original video animations (OVAs) totaling 30 episodes were subsequently produced by Nippon Animation and released in Japan from 2002 to 2004. A second anime television series by Madhouse aired on Nippon Television from October 2011 to September 2014, totaling 148 episodes, with two animated theatrical films released in 2013. There are also numerous audio albums, video games, musicals, and other media based on Hunter × Hunter.\\nThe manga has been translated into English and released in North America by Viz Media since April 2005. Both television series have been also licensed by Viz Media, with the first series having aired on the Funimation Channel in 2009 and the second series broadcast on Adult Swim\\'s Toonami programming block from April 2016 to June 2019.\\nHunter × Hunter has been a huge critical and financial success and has become one of the best-selling manga series of all time, having over 84 million copies in circulation by July 2022.\\n\\n'}"
]
},
"execution_count": 31,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0].metadata # meta-information of the Document"
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "c0ccd0c7-f6a6-43e7-b842-5f57afb94224",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Hunter × Hunter (stylized as HUNTER×HUNTER and pronounced \"hunter hunter\") is a Japanese manga series written and illustrated by Yoshihiro Togashi. It has been serialized in Shueisha\\'s shōnen manga magazine Weekly Shōnen Jump since March 1998, although the manga has frequently gone on extended hiatuses since 2006. Its chapters have been collected in 37 tankōbon volumes as of November 2022. The sto'"
]
},
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0].page_content[:400] # a content of the Document "
]
},
{
"cell_type": "markdown",
"id": "2670363b-3806-4c7e-b14d-90a4d5d2a200",
"metadata": {},
"source": [
"### Question Answering on facts"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "bb3601df-53ea-4826-bdbe-554387bc3ad4",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdin",
"output_type": "stream",
"text": [
" ········\n"
]
}
],
"source": [
"# get a token: https://platform.openai.com/account/api-keys\n",
"\n",
"from getpass import getpass\n",
"\n",
"OPENAI_API_KEY = getpass()"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "e9c1a114-0410-4804-be30-05f34a9760f9",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = OPENAI_API_KEY"
]
},
{
"cell_type": "code",
"execution_count": 33,
"id": "51a33cc9-ec42-4afc-8a2d-3bfff476aa59",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.chains import ConversationalRetrievalChain\n",
"\n",
"model = ChatOpenAI(model='gpt-3.5-turbo') # switch to 'gpt-4'\n",
"qa = ConversationalRetrievalChain.from_llm(model,retriever=retriever)"
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "ea537767-a8bf-4adf-ae03-b353c9145d58",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"-> **Question**: What is Apify? \n",
"\n",
"**Answer**: Apify is a platform that allows you to easily automate web scraping, data extraction and web automation. It provides a cloud-based infrastructure for running web crawlers and other automation tasks, as well as a web-based tool for building and managing your crawlers. Additionally, Apify offers a marketplace for buying and selling pre-built crawlers and related services. \n",
"\n",
"-> **Question**: When the Monument to the Martyrs of the 1830 Revolution was created? \n",
"\n",
"**Answer**: Apify is a web scraping and automation platform that enables you to extract data from websites, turn unstructured data into structured data, and automate repetitive tasks. It provides a user-friendly interface for creating web scraping scripts without any coding knowledge. Apify can be used for various web scraping tasks such as data extraction, web monitoring, content aggregation, and much more. Additionally, it offers various features such as proxy support, scheduling, and integration with other tools to make web scraping and automation tasks easier and more efficient. \n",
"\n",
"-> **Question**: What is the Abhayagiri Vihāra? \n",
"\n",
"**Answer**: Abhayagiri Vihāra was a major monastery site of Theravada Buddhism that was located in Anuradhapura, Sri Lanka. It was founded in the 2nd century BCE and is considered to be one of the most important monastic complexes in Sri Lanka. \n",
"\n"
]
}
],
"source": [
"questions = [\n",
" \"What is Apify?\",\n",
" \"When the Monument to the Martyrs of the 1830 Revolution was created?\",\n",
" \"What is the Abhayagiri Vihāra?\", \n",
" # \"How big is Wikipédia en français?\",\n",
"] \n",
"chat_history = []\n",
"\n",
"for question in questions: \n",
" result = qa({\"question\": question, \"chat_history\": chat_history})\n",
" chat_history.append((question, result['answer']))\n",
" print(f\"-> **Question**: {question} \\n\")\n",
" print(f\"**Answer**: {result['answer']} \\n\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@@ -14,20 +14,22 @@ from langchain.retrievers.time_weighted_retriever import (
)
from langchain.retrievers.vespa_retriever import VespaRetriever
from langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetriever
from langchain.retrievers.wikipedia import WikipediaRetriever
__all__ = [
"ChatGPTPluginRetriever",
"ContextualCompressionRetriever",
"RemoteLangChainRetriever",
"PineconeHybridSearchRetriever",
"MetalRetriever",
"DataberryRetriever",
"ElasticSearchBM25Retriever",
"KNNRetriever",
"MetalRetriever",
"PineconeHybridSearchRetriever",
"RemoteLangChainRetriever",
"SVMRetriever",
"SelfQueryRetriever",
"TFIDFRetriever",
"WeaviateHybridSearchRetriever",
"DataberryRetriever",
"TimeWeightedVectorStoreRetriever",
"SVMRetriever",
"KNNRetriever",
"VespaRetriever",
"SelfQueryRetriever",
"WeaviateHybridSearchRetriever",
"WikipediaRetriever",
]

@@ -0,0 +1,18 @@
from typing import List

from langchain.schema import BaseRetriever, Document
from langchain.utilities.wikipedia import WikipediaAPIWrapper


class WikipediaRetriever(BaseRetriever, WikipediaAPIWrapper):
    """
    It is effectively a wrapper for WikipediaAPIWrapper.
    It wraps load() into get_relevant_documents().
    It uses all WikipediaAPIWrapper arguments without any change.
    """

    def get_relevant_documents(self, query: str) -> List[Document]:
        return self.load(query=query)

    async def aget_relevant_documents(self, query: str) -> List[Document]:
        raise NotImplementedError
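
Because `WikipediaRetriever` inherits from both `BaseRetriever` and `WikipediaAPIWrapper`, the wrapper's own methods remain available on the retriever; only the async retriever path is left unimplemented. A small sketch (illustrative; the query strings are arbitrary):

```python
from langchain.retrievers import WikipediaRetriever

retriever = WikipediaRetriever(top_k_results=1)

# retriever interface: delegates to WikipediaAPIWrapper.load()
docs = retriever.get_relevant_documents("Python (programming language)")

# wrapper interface is still available, since the retriever *is* a WikipediaAPIWrapper
summary_text = retriever.run("Python (programming language)")

# aget_relevant_documents() is not implemented in this PR and raises NotImplementedError
```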

@@ -17,13 +17,15 @@ class WikipediaAPIWrapper(BaseModel):
To use, you should have the ``wikipedia`` python package installed.
This wrapper will use the Wikipedia API to conduct searches and
fetch page summaries. By default, it will return the page summaries
of the top-k results of an input search.
of the top-k results.
It limits the Document content by doc_content_chars_max.
"""
wiki_client: Any #: :meta private:
top_k_results: int = 3
lang: str = "en"
load_all_available_meta: bool = False
doc_content_chars_max: int = 4000
class Config:
"""Configuration for this pydantic object."""
@@ -55,7 +57,7 @@ class WikipediaAPIWrapper(BaseModel):
summaries.append(summary)
if not summaries:
return "No good Wikipedia Search Result was found"
return "\n\n".join(summaries)
return "\n\n".join(summaries)[: self.doc_content_chars_max]
@staticmethod
def _formatted_page_summary(page_title: str, wiki_page: Any) -> Optional[str]:
@@ -82,7 +84,7 @@ class WikipediaAPIWrapper(BaseModel):
else {}
)
doc = Document(
page_content=wiki_page.content,
page_content=wiki_page.content[: self.doc_content_chars_max],
metadata={
**main_meta,
**add_meta,
@@ -104,7 +106,7 @@ class WikipediaAPIWrapper(BaseModel):
Run Wikipedia search and get the article text plus the meta information.
See
Returns: a list of documents with the document.page_content in PDF format
Returns: a list of documents.
"""
page_titles = self.wiki_client.search(query[:WIKIPEDIA_MAX_QUERY_LENGTH])
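
A minimal sketch of how the new `doc_content_chars_max` cap behaves (illustrative only; the 1,000-character value below is arbitrary, not the default):

```python
from langchain.utilities.wikipedia import WikipediaAPIWrapper

# both run() and load() now truncate their output to doc_content_chars_max
wiki = WikipediaAPIWrapper(top_k_results=2, doc_content_chars_max=1000)

summaries = wiki.run("Hunter x Hunter")  # joined page summaries, capped at 1000 characters
docs = wiki.load("Hunter x Hunter")      # each doc.page_content capped at 1000 characters

print(len(summaries))
print(max(len(d.page_content) for d in docs))
```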

@@ -131,6 +131,7 @@ sentence-transformers = "^2"
gptcache = "^0.1.9"
promptlayer = "^0.1.80"
tair = "^1.3.3"
wikipedia = "^1"
pymongo = "^4.3.3"
[tool.poetry.group.lint.dependencies]

@@ -0,0 +1,53 @@
"""Integration test for Wikipedia API Wrapper."""
from typing import List
import pytest
from langchain.retrievers import WikipediaRetriever
from langchain.schema import Document
@pytest.fixture
def retriever() -> WikipediaRetriever:
return WikipediaRetriever()
def assert_docs(docs: List[Document], all_meta: bool = False) -> None:
for doc in docs:
assert doc.page_content
assert doc.metadata
main_meta = {"title", "summary"}
assert set(doc.metadata).issuperset(main_meta)
if all_meta:
assert len(set(doc.metadata)) > len(main_meta)
else:
assert len(set(doc.metadata)) == len(main_meta)
def test_load_success(retriever: WikipediaRetriever) -> None:
docs = retriever.get_relevant_documents("HUNTER X HUNTER")
assert len(docs) > 1
assert_docs(docs, all_meta=False)
def test_load_success_all_meta(retriever: WikipediaRetriever) -> None:
retriever.load_all_available_meta = True
docs = retriever.get_relevant_documents("HUNTER X HUNTER")
assert len(docs) > 1
assert_docs(docs, all_meta=True)
def test_load_success_init_args() -> None:
retriever = WikipediaRetriever(
lang="en", top_k_results=1, load_all_available_meta=True
)
docs = retriever.get_relevant_documents("HUNTER X HUNTER")
assert len(docs) == 1
assert_docs(docs, all_meta=True)
def test_load_no_result(retriever: WikipediaRetriever) -> None:
docs = retriever.get_relevant_documents(
"NORESULTCALL_NORESULTCALL_NORESULTCALL_NORESULTCALL_NORESULTCALL_NORESULTCALL"
)
assert not docs