{
"cells": [
{
"cell_type": "markdown",
"id": "f70e6118",
"metadata": {},
"source": [
"# PDF\n",
"\n",
"This covers how to load PDFs into a document format that we can use downstream."
]
},
{
"cell_type": "markdown",
"id": "743f9413",
"metadata": {},
"source": [
"## Using PyPDF\n",
"\n",
"This approach allows for tracking of page numbers as well."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "c428b0c5",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import PagedPDFSplitter\n",
"\n",
"loader = PagedPDFSplitter(\"example_data/layout-parser-paper.pdf\")\n",
"pages = loader.load_and_split()"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "d333cabb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='LayoutParser : A Uni\\x0ced Toolkit for Deep\\nLearning Based Document Image Analysis\\nZejiang Shen1( \\x00), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\\nLee4, Jacob Carlson3, and Weining Li5\\n1Allen Institute for AI\\nshannons@allenai.org\\n2Brown University\\nruochen zhang@brown.edu\\n3Harvard University\\nfmelissadell,jacob carlson g@fas.harvard.edu\\n4University of Washington\\nbcgl@cs.washington.edu\\n5University of Waterloo\\nw422li@uwaterloo.ca\\nAbstract. Recent advances in document image analysis (DIA) have been\\nprimarily driven by the application of neural networks. Ideally, research\\noutcomes could be easily deployed in production and extended for further\\ninvestigation. However, various factors like loosely organized codebases\\nand sophisticated model con\\x0cgurations complicate the easy reuse of im-\\nportant innovations by a wide audience. Though there have been on-going\\ne\\x0borts to improve reusability and simplify deep learning (DL) model\\ndevelopment in disciplines like natural language processing and computer\\nvision, none of them are optimized for challenges in the domain of DIA.\\nThis represents a major gap in the existing toolkit, as DIA is central to\\nacademic research across a wide range of disciplines in the social sciences\\nand humanities. This paper introduces LayoutParser , an open-source\\nlibrary for streamlining the usage of DL in DIA research and applica-\\ntions. The core LayoutParser library comes with a set of simple and\\nintuitive interfaces for applying and customizing DL models for layout de-\\ntection, character recognition, and many other document processing tasks.\\nTo promote extensibility, LayoutParser also incorporates a community\\nplatform for sharing both pre-trained models and full document digiti-\\nzation pipelines. We demonstrate that LayoutParser is helpful for both\\nlightweight and large-scale digitization pipelines in real-word use cases.\\nThe library is publicly available at https://layout-parser.github.io .\\nKeywords: Document Image Analysis ·Deep Learning ·Layout Analysis\\n·Character Recognition ·Open Source library ·Toolkit.\\n1 Introduction\\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\\ndocument image analysis (DIA) tasks including document image classi\\x0ccation [ 11,arXiv:2103.15348v2 [cs.CV] 21 Jun 2021', lookup_str='', metadata={'source': 'example_data/layout-parser-paper.pdf', 'page': '0'}, lookup_index=0)"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pages[0]"
]
},
{
"cell_type": "markdown",
"id": "ebd895e4",
"metadata": {},
"source": [
"An advantage of this approach is that documents can be retrieved with page numbers."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "87fa7b3a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"9: 10 Z. Shen et al.\n",
"Fig. 4: Illustration of (a) the original historical Japanese document with layout\n",
"detection results and (b) a recreated version of the document image that achieves\n",
"much better character recognition recall. The reorganization algorithm rearranges\n",
"the tokens based on the their detected bounding boxes given a maximum allowed\n",
"height.\n",
"4LayoutParser Community Platform\n",
"Another focus of LayoutParser is promoting the reusability of layout detection\n",
"models and full digitization pipelines. Similar to many existing deep learning\n",
"libraries, LayoutParser comes with a community model hub for distributing\n",
"layout models. End-users can upload their self-trained models to the model hub,\n",
"and these models can be loaded into a similar interface as the currently available\n",
"LayoutParser pre-trained models. For example, the model trained on the News\n",
"Navigator dataset [17] has been incorporated in the model hub.\n",
"Beyond DL models, LayoutParser also promotes the sharing of entire doc-\n",
"ument digitization pipelines. For example, sometimes the pipeline requires the\n",
"combination of multiple DL models to achieve better accuracy. Currently, pipelines\n",
"are mainly described in academic papers and implementations are often not pub-\n",
"licly available. To this end, the LayoutParser community platform also enables\n",
"the sharing of layout pipelines to promote the discussion and reuse of techniques.\n",
"For each shared pipeline, it has a dedicated project page, with links to the source\n",
"code, documentation, and an outline of the approaches. A discussion panel is\n",
"provided for exchanging ideas. Combined with the core LayoutParser library,\n",
"users can easily build reusable components based on the shared pipelines and\n",
"apply them to solve their unique problems.\n",
"5 Use Cases\n",
"The core objective of LayoutParser is to make it easier to create both large-scale\n",
"and light-weight document digitization pipelines. Large-scale document processing\n",
"3: 4 Z. Shen et al.\n",
"Efficient Data AnnotationC u s t o m i z e d M o d e l T r a i n i n gModel Cust omizationDI A Model HubDI A Pipeline SharingCommunity PlatformLa y out Detection ModelsDocument Images \n",
"T h e C o r e L a y o u t P a r s e r L i b r a r yOCR ModuleSt or age & VisualizationLa y out Data Structur e\n",
"Fig. 1: The overall architecture of LayoutParser . For an input document image,\n",
"the core LayoutParser library provides a set of o\u000b-the-shelf tools for layout\n",
"detection, OCR, visualization, and storage, backed by a carefully designed layout\n",
"data structure. LayoutParser also supports high level customization via e\u000ecient\n",
"layout annotation and model training functions. These improve model accuracy\n",
"on the target samples. The community platform enables the easy sharing of DIA\n",
"models and whole digitization pipelines to promote reusability and reproducibility.\n",
"A collection of detailed documentation, tutorials and exemplar projects make\n",
"LayoutParser easy to learn and use.\n",
"AllenNLP [ 8] and transformers [ 34] have provided the community with complete\n",
"DL-based support for developing and deploying models for general computer\n",
"vision and natural language processing problems. LayoutParser , on the other\n",
"hand, specializes speci\fcally in DIA tasks. LayoutParser is also equipped with a\n",
"community platform inspired by established model hubs such as Torch Hub [23]\n",
"andTensorFlow Hub [1]. It enables the sharing of pretrained models as well as\n",
"full document processing pipelines that are unique to DIA tasks.\n",
"There have been a variety of document data collections to facilitate the\n",
"development of DL models. Some examples include PRImA [ 3](magazine layouts),\n",
"PubLayNet [ 38](academic paper layouts), Table Bank [ 18](tables in academic\n",
"papers), Newspaper Navigator Dataset [ 16,17](newspaper \fgure layouts) and\n",
"HJDataset [31](historical Japanese document layouts). A spectrum of models\n",
"trained on these datasets are currently available in the LayoutParser model zoo\n",
"to support di\u000berent use cases.\n",
"3 The Core LayoutParser Library\n",
"At the core of LayoutParser is an o\u000b-the-shelf toolkit that streamlines DL-\n",
"based document image analysis. Five components support a simple interface\n",
"with comprehensive functionalities: 1) The layout detection models enable using\n",
"pre-trained or self-trained DL models for layout detection with just four lines\n",
"of code. 2) The detected layout information is stored in carefully engineered\n"
]
}
],
"source": [
"from langchain.vectorstores import FAISS\n",
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"\n",
"faiss_index = FAISS.from_documents(pages, OpenAIEmbeddings())\n",
"docs = faiss_index.similarity_search(\"How will the community be engaged?\", k=2)\n",
"for doc in docs:\n",
"    print(str(doc.metadata[\"page\"]) + \":\", doc.page_content)"
]
},
{
"cell_type": "markdown",
"id": "09d64998",
"metadata": {},
"source": [
"## Using Unstructured"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "0cc0cd42",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import UnstructuredPDFLoader"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "082d557c",
"metadata": {},
"outputs": [],
"source": [
"loader = UnstructuredPDFLoader(\"example_data/layout-parser-paper.pdf\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "df11c953",
"metadata": {},
"outputs": [],
"source": [
"data = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "09957371",
"metadata": {},
"source": [
"### Retain Elements\n",
"\n",
"Under the hood, Unstructured creates different \"elements\" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying `mode=\"elements\"`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0fab833b",
"metadata": {},
"outputs": [],
"source": [
"loader = UnstructuredPDFLoader(\"example_data/layout-parser-paper.pdf\", mode=\"elements\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c3e8ff1b",
"metadata": {},
"outputs": [],
"source": [
"data = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "43c23d2d",
"metadata": {},
"outputs": [],
"source": [
"data[0]"
]
},
{
"cell_type": "markdown",
"id": "21998d18",
"metadata": {},
"source": [
"## Using PDFMiner"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "2f0cc9ff",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import PDFMinerLoader"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "42b531e8",
"metadata": {},
"outputs": [],
"source": [
"loader = PDFMinerLoader(\"example_data/layout-parser-paper.pdf\")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "010d5cdd",
"metadata": {},
"outputs": [],
"source": [
"data = loader.load()"
]
},
{
"cell_type": "markdown",
"source": [
"## Using PyMuPDF\n",
"\n",
"This is the fastest of the PDF parsing options. It returns one document per page and includes detailed metadata about the PDF and its pages."
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": 1,
"outputs": [],
"source": [
"from langchain.document_loaders import PyMuPDFLoader"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": 2,
"outputs": [],
"source": [
"loader = PyMuPDFLoader(\"example_data/layout-parser-paper.pdf\")"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": 3,
"outputs": [],
"source": [
"data = loader.load()"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": 4,
"outputs": [
{
"data": {
"text/plain": "Document(page_content='LayoutParser: A Unified Toolkit for Deep\\nLearning Based Document Image Analysis\\nZejiang Shen1 (<28>), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\\nLee4, Jacob Carlson3, and Weining Li5\\n1 Allen Institute for AI\\nshannons@allenai.org\\n2 Brown University\\nruochen zhang@brown.edu\\n3 Harvard University\\n{melissadell,jacob carlson}@fas.harvard.edu\\n4 University of Washington\\nbcgl@cs.washington.edu\\n5 University of Waterloo\\nw422li@uwaterloo.ca\\nAbstract. Recent advances in document image analysis (DIA) have been\\nprimarily driven by the application of neural networks. Ideally, research\\noutcomes could be easily deployed in production and extended for further\\ninvestigation. However, various factors like loosely organized codebases\\nand sophisticated model configurations complicate the easy reuse of im-\\nportant innovations by a wide audience. Though there have been on-going\\nefforts to improve reusability and simplify deep learning (DL) model\\ndevelopment in disciplines like natural language processing and computer\\nvision, none of them are optimized for challenges in the domain of DIA.\\nThis represents a major gap in the existing toolkit, as DIA is central to\\nacademic research across a wide range of disciplines in the social sciences\\nand humanities. This paper introduces LayoutParser, an open-source\\nlibrary for streamlining the usage of DL in DIA research and applica-\\ntions. The core LayoutParser library comes with a set of simple and\\nintuitive interfaces for applying and customizing DL models for layout de-\\ntection, character recognition, and many other document processing tasks.\\nTo promote extensibility, LayoutParser also incorporates a community\\nplatform for sharing both pre-trained models and full document digiti-\\nzation pipelines. We demonstrate that LayoutParser is helpful for both\\nlightweight and large-scale digitization pipelines in real-word use cases.\\nThe library is publicly available at https://layout-parser.github.io.\\nKeywords: Document Image Analysis · Deep Learning · Layout Analysis\\n· Character Recognition · Open Source library · Toolkit.\\n1\\nIntroduction\\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\\ndocument image analysis (DIA) tasks including document image classification [11,\\narXiv:2103.15348v2 [cs.CV] 21 Jun 2021\\n', lookup_str='', metadata={'file_path': 'example_data/layout-parser-paper.pdf', 'page_number': 1, 'total_pages': 16, 'format': 'PDF 1.5', 'title': '', 'author': '', 'subject': '', 'keywords': '', 'creator': 'LaTeX with hyperref', 'producer': 'pdfTeX-1.40.21', 'creationDate': 'D:20210622012710Z', 'modDate': 'D:20210622012710Z', 'trapped': '', 'encryption': None}, lookup_index=0)"
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data[0]"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"source": [
"Additionally, you can pass along any of the options from the [PyMuPDF documentation](https://pymupdf.readthedocs.io/en/latest/app1.html#plain-text/) as keyword arguments in the `load` call, and it will be pass along to the `get_text()` call."
|
||
],
|
||
"metadata": {
|
||
"collapsed": false
|
||
}
|
||
},
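{
"cell_type": "markdown",
"source": [
"For example, a minimal sketch: `sort` is just one such `get_text()` option (an assumption here; it requires a PyMuPDF version that supports the `sort` keyword)."
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"# Any `get_text()` keyword can be forwarded through `load`. Here, `sort=True`\n",
"# asks PyMuPDF to emit text in natural reading order, which can help with\n",
"# multi-column layouts.\n",
"data = loader.load(sort=True)"
],
"metadata": {
"collapsed": false
}
},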
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [],
"metadata": {
"collapsed": false
}
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}