mirror of https://github.com/hwchase17/langchain (synced 2024-11-08 07:10:35 +00:00)
bd4865b6fe
Description: This PR improves recursive_url_loader by adding a configurable crawl depth and customizable extractors (mapping the raw webpage to the text of the Document object), so that users can plug in other tools to extract the webpage text. It also includes documentation and tests for the new loader. The old PR (#7756) was closed due to a project structure change; because socket requests are not allowed, its old unit test was removed. Issue: N/A Dependencies: asyncio, aiohttp Tag maintainer: @rlancemartin Twitter handle: @Zend_Nihility --------- Co-authored-by: Lance Martin <lance@langchain.dev>
179 lines
4.9 KiB
Plaintext
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "5a7cc773",
   "metadata": {},
   "source": [
    "# Recursive URL Loader\n",
    "\n",
    "We may want to load all of the URLs under a root directory.\n",
    "\n",
    "For example, let's look at the [Python 3.9 documentation](https://docs.python.org/3.9/).\n",
    "\n",
    "It has many interesting child pages that we may want to read in bulk.\n",
    "\n",
    "Of course, the `WebBaseLoader` can load a list of pages.\n",
    "\n",
    "But the challenge is traversing the tree of child pages and actually assembling that list!\n",
    "\n",
    "We do this using the `RecursiveUrlLoader`.\n",
    "\n",
    "This also gives us the flexibility to exclude some children, customize the extractor, and more."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1be8094f",
   "metadata": {},
   "source": [
    "# Parameters\n",
    "- url: str, the target URL to crawl.\n",
    "- exclude_dirs: Optional[str], webpage directories to exclude.\n",
    "- use_async: Optional[bool], whether to use async requests. Async requests are usually faster for large crawls, but they disable the lazy-loading feature (the function still works, it just is not lazy). By default, it is set to False.\n",
    "- extractor: Optional[Callable[[str], str]], a function to extract the text of the document from the raw webpage. It is recommended to use tools like goose3 or Beautiful Soup for this. By default, the page is returned as is.\n",
    "- max_depth: Optional[int], the maximum depth to crawl. By default, it is set to 2. If you need to crawl the whole website, setting it to a large enough number will do the job.\n",
    "- timeout: Optional[int], the timeout for each request, in seconds. By default, it is set to 10.\n",
    "- prevent_outside: Optional[bool], whether to prevent crawling outside the root URL. By default, it is set to True.\n",
    "\n",
    "A configuration sketch using these parameters follows the list."
   ]
  },
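  {
   "cell_type": "markdown",
   "id": "a1f2c3d4",
   "metadata": {},
   "source": [
    "As a minimal sketch (not executed here) of how these parameters fit together; the `exclude_dirs` value, its shape as a list of URL prefixes, and the timeout are illustrative assumptions, not defaults:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b2e3d4f5",
   "metadata": {},
   "outputs": [],
   "source": [
    "from bs4 import BeautifulSoup as Soup\n",
    "\n",
    "from langchain.document_loaders.recursive_url_loader import RecursiveUrlLoader\n",
    "\n",
    "loader = RecursiveUrlLoader(\n",
    "    url=\"https://docs.python.org/3.9/\",\n",
    "    max_depth=2,  # follow links at most two levels below the root\n",
    "    use_async=True,  # usually faster for large crawls, but disables lazy loading\n",
    "    extractor=lambda x: Soup(x, \"html.parser\").text,  # raw HTML -> plain text\n",
    "    exclude_dirs=[\"https://docs.python.org/3.9/faq/\"],  # assumed: URL prefixes to skip\n",
    "    timeout=10,  # seconds per request\n",
    "    prevent_outside=True,  # never follow links that leave the root URL\n",
    ")\n",
    "docs = loader.load()"
   ]
  },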
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "23c18539",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.document_loaders.recursive_url_loader import RecursiveUrlLoader"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6384c057",
   "metadata": {},
   "source": [
    "Let's try a simple example."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "55394afe",
   "metadata": {},
   "outputs": [],
   "source": [
    "from bs4 import BeautifulSoup as Soup\n",
    "\n",
    "url = \"https://docs.python.org/3.9/\"\n",
    "loader = RecursiveUrlLoader(\n",
    "    url=url, max_depth=2, extractor=lambda x: Soup(x, \"html.parser\").text\n",
    ")\n",
    "docs = loader.load()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "084fb2ce",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'\\n\\n\\n\\n\\nPython Frequently Asked Questions — Python 3.'"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "docs[0].page_content[:50]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "13bd7e16",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'source': 'https://docs.python.org/3.9/library/index.html',\n",
       " 'title': 'The Python Standard Library — Python 3.9.17 documentation',\n",
       " 'language': None}"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "docs[-1].metadata"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5866e5a6",
   "metadata": {},
   "source": [
    "However, since a perfect filter is hard to achieve, you may still see some irrelevant pages among the results. If needed, you can filter the returned documents yourself; most of the time the results are good enough as is."
   ]
  },
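  {
   "cell_type": "markdown",
   "id": "c3f4e5a6",
   "metadata": {},
   "source": [
    "For example, here is a minimal post-filter sketch, independent of the loader itself; the URL prefix is an illustrative choice. It keeps only the pages whose `source` metadata sits under one subtree:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4a5f6b7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Keep only documents from the library reference (illustrative prefix).\n",
    "prefix = \"https://docs.python.org/3.9/library/\"\n",
    "filtered_docs = [doc for doc in docs if doc.metadata[\"source\"].startswith(prefix)]\n",
    "len(filtered_docs)"
   ]
  },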
  {
   "cell_type": "markdown",
   "id": "4ec8ecef",
   "metadata": {},
   "source": [
    "Testing on LangChain docs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "349b5598",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "8"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "url = \"https://js.langchain.com/docs/modules/memory/integrations/\"\n",
    "loader = RecursiveUrlLoader(url=url)\n",
    "docs = loader.load()\n",
    "len(docs)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}