Add text_content kwarg to BrowserlessLoader (#7856)

Added a keyword argument to toggle between getting the text content of a
site and its raw HTML when using the `BrowserlessLoader`.
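
A minimal sketch of the new toggle (the token and URL are placeholders, and the import path assumes the loader is exported from `langchain.document_loaders`):

from langchain.document_loaders import BrowserlessLoader

# Default (text_content=True): Documents carry the innerText of the
# page's body element, fetched via Browserless' /scrape endpoint.
text_loader = BrowserlessLoader(
    api_token="YOUR_BROWSERLESS_API_TOKEN",
    urls=["https://en.wikipedia.org/wiki/Document_classification"],
    text_content=True,
)

# text_content=False: Documents carry the raw HTML of the page,
# fetched via the /content endpoint.
html_loader = BrowserlessLoader(
    api_token="YOUR_BROWSERLESS_API_TOKEN",
    urls=["https://en.wikipedia.org/wiki/Document_classification"],
    text_content=False,
)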
Authored by Jasper on 2023-07-17 17:02:19 -07:00, committed by GitHub
parent 2aa3cf4e5f
commit 5b4d53e8ef
2 changed files with 75 additions and 26 deletions

Changed file 1 of 2: the Browserless loader example notebook

@@ -5,12 +5,16 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Browserless"
"# Browserless\n",
"\n",
"Browserless is a service that allows you to run headless Chrome instances in the cloud. It's a great way to run browser-based automation at scale without having to worry about managing your own infrastructure.\n",
"\n",
"To use Browserless as a document loader, initialize a `BrowserlessLoader` instance as shown in this notebook. Note that by default, `BrowserlessLoader` returns the `innerText` of the page's `body` element. To disable this and get the raw HTML, set `text_content` to `False`."
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
@@ -19,26 +23,44 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"BROWSERLESS_API_TOKEN = \"YOUR_API_TOKEN\""
"BROWSERLESS_API_TOKEN = \"YOUR_BROWSERLESS_API_TOKEN\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<!DOCTYPE html><html class=\"client-js vector-feature-language-in-header-enabled vector-feature-language-in-main-page-header-disabled vector-feature-sticky-header-disabled vector-feature-page-tools-pinned-disabled vector-feature-toc-pinned-enabled vector-feature-main-menu-pinned-disabled vector-feature-limited-width-enabled vector-feature-limited-width-content-enabled vector-feature-zebra-design-disabled\" lang=\"en\" dir=\"ltr\"><head>\n",
"<meta charset=\"UTF-8\">\n",
"<title>Document classification - Wikipedia</title>\n",
"<script>document.documentElement.className=\"client-js vector-feature-language-in-header-enabled vector-feature-language-in-main-page-header-disabled vector-feature-sticky-header-disabled vector-feature-page-tools-pinned-disabled vector-feature-toc-pinned-enabled vector-feature-main-menu-pinned-disabled vector-feature-limited-width-enabled vector-feature-limited-width-content-enabled vector-feature-zebra-design-disabled\";(function(){var cookie=document.cookie.match(/(?:^|; )enwikimwclien\n"
"Jump to content\n",
"Main menu\n",
"Search\n",
"Create account\n",
"Log in\n",
"Personal tools\n",
"Toggle the table of contents\n",
"Document classification\n",
"17 languages\n",
"Article\n",
"Talk\n",
"Read\n",
"Edit\n",
"View history\n",
"Tools\n",
"From Wikipedia, the free encyclopedia\n",
"\n",
"Document classification or document categorization is a problem in library science, information science and computer science. The task is to assign a document to one or more classes or categories. This may be done \"manually\" (or \"intellectually\") or algorithmically. The intellectual classification of documents has mostly been the province of library science, while the algorithmic classification of documents is mainly in information science and computer science. The problems are overlapping, however, and there is therefore interdisciplinary research on document classification.\n",
"\n",
"The documents to be classified may be texts, images, music, etc. Each kind of document possesses its special classification problems. When not otherwise specified, text classification is implied.\n",
"\n",
"Do\n"
]
}
],
@@ -48,6 +70,7 @@
" urls=[\n",
" \"https://en.wikipedia.org/wiki/Document_classification\",\n",
" ],\n",
" text_content=True,\n",
")\n",
"\n",
"documents = loader.load()\n",
@@ -72,7 +95,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.1"
"version": "3.10.9"
},
"orig_nbformat": 4
},

Changed file 2 of 2: the BrowserlessLoader module

@@ -9,32 +9,58 @@ from langchain.document_loaders.base import BaseLoader
class BrowserlessLoader(BaseLoader):
    """Loads the content of webpages using Browserless' /content endpoint"""

-    def __init__(self, api_token: str, urls: Union[str, List[str]]):
+    def __init__(
+        self, api_token: str, urls: Union[str, List[str]], text_content: bool = True
+    ):
        """Initialize with API token and the URLs to scrape"""
        self.api_token = api_token
        """Browserless API token."""
        self.urls = urls
        """List of URLs to scrape."""
+        self.text_content = text_content

    def lazy_load(self) -> Iterator[Document]:
        """Lazy load Documents from URLs."""
        for url in self.urls:
-            response = requests.post(
-                "https://chrome.browserless.io/content",
-                params={
-                    "token": self.api_token,
-                },
-                json={
-                    "url": url,
-                },
-            )
-            yield Document(
-                page_content=response.text,
-                metadata={
-                    "source": url,
-                },
-            )
+            if self.text_content:
+                response = requests.post(
+                    "https://chrome.browserless.io/scrape",
+                    params={
+                        "token": self.api_token,
+                    },
+                    json={
+                        "url": url,
+                        "elements": [
+                            {
+                                "selector": "body",
+                            }
+                        ],
+                    },
+                )
+                yield Document(
+                    page_content=response.json()["data"][0]["results"][0]["text"],
+                    metadata={
+                        "source": url,
+                    },
+                )
+            else:
+                response = requests.post(
+                    "https://chrome.browserless.io/content",
+                    params={
+                        "token": self.api_token,
+                    },
+                    json={
+                        "url": url,
+                    },
+                )
+                yield Document(
+                    page_content=response.text,
+                    metadata={
+                        "source": url,
+                    },
+                )

    def load(self) -> List[Document]:
        """Load Documents from URLs."""