Remove redundant .docx loader (closes #1716) + update how_to_guides.rst (#1891)

In https://github.com/hwchase17/langchain/issues/1716 it was identified that two .py files implemented essentially the same .docx loading functionality. This PR removes the redundant one, `langchain/document_loaders/docx.py` (and its `UnstructuredDocxLoader`), since the remaining Word document loader already covers that purpose, and updates `langchain/document_loaders/__init__.py` accordingly.
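
For downstream users the practical effect is an import change. A minimal sketch (assuming the `example_data/fake.docx` fixture used in the docs notebooks):

```python
# Before this change (now removed):
# from langchain.document_loaders import UnstructuredDocxLoader
# loader = UnstructuredDocxLoader("example_data/fake.docx")

# After this change, use the remaining Word document loader instead:
from langchain.document_loaders import UnstructuredWordDocumentLoader

loader = UnstructuredWordDocumentLoader("example_data/fake.docx")
data = loader.load()  # list of Document objects

# Pass mode="elements" to keep Unstructured's per-element chunks separate.
elements_loader = UnstructuredWordDocumentLoader("example_data/fake.docx", mode="elements")
```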

Furthermore, how_to_guides.rst has been updated to add links to loader documentation that was previously missing. The existing list at
https://langchain.readthedocs.io/en/latest/modules/document_loaders/how_to_guides.html
was incomplete, which caused confusion for users who rely on the left sidebar of the site for the full list of document loader guides.
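
To keep the list from drifting again, a hypothetical helper (not part of this PR, and assuming the docs live under `docs/modules/document_loaders/`) could flag example notebooks that have no link in how_to_guides.rst:

```python
from pathlib import Path

# Assumed repo layout; adjust if the docs tree differs.
docs_dir = Path("docs/modules/document_loaders")
rst_text = (docs_dir / "how_to_guides.rst").read_text()

# Report every example notebook whose generated HTML page is not linked.
for notebook in sorted((docs_dir / "examples").glob("*.ipynb")):
    link = f"./examples/{notebook.stem}.html"
    if link not in rst_text:
        print(f"Missing from how_to_guides.rst: {notebook.name}")
```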
Klein Tahiraj 2023-03-22 23:19:42 +01:00 committed by GitHub
parent 1f93c5cf69
commit d3d4503ce2
5 changed files with 22 additions and 164 deletions


@@ -1,145 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "34c90eed",
"metadata": {},
"source": [
"# Microsoft Word\n",
"\n",
"This notebook shows how to load text from Microsoft word documents."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "28ded768",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import UnstructuredDocxLoader"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "f1f26035",
"metadata": {},
"outputs": [],
"source": [
"loader = UnstructuredDocxLoader('example_data/fake.docx')"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "2c87dde9",
"metadata": {},
"outputs": [],
"source": [
"data = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "0e4a884c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 'example_data/fake.docx'}, lookup_index=0)]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data"
]
},
{
"cell_type": "markdown",
"id": "5d1472e9",
"metadata": {},
"source": [
"## Retain Elements\n",
"\n",
"Under the hood, Unstructured creates different \"elements\" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying `mode=\"elements\"`."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "93abf60b",
"metadata": {},
"outputs": [],
"source": [
"loader = UnstructuredDocxLoader('example_data/fake.docx', mode=\"elements\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "c35cdbcc",
"metadata": {},
"outputs": [],
"source": [
"data = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "fae2d730",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 'example_data/fake.docx'}, lookup_index=0)]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "961a7b1d",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -27,7 +27,7 @@
"metadata": {},
"outputs": [],
"source": [
"loader = UnstructuredWordDocumentLoader(\"fake.docx\")"
"loader = UnstructuredWordDocumentLoader(\"example_data/fake.docx\")"
]
},
{
@@ -78,7 +78,7 @@
"metadata": {},
"outputs": [],
"source": [
"loader = UnstructuredWordDocumentLoader(\"fake.docx\", mode=\"elements\")"
"loader = UnstructuredWordDocumentLoader(\"example_data/fake.docx\", mode=\"elements\")"
]
},
{


@@ -21,8 +21,6 @@ There are a lot of different document loaders that LangChain supports. Below are
`GoogleDrive <./examples/googledrive.html>`_: A walkthrough of how to load data from Google drive.
-`Microsoft Word <./examples/microsoft_word.html>`_: A walkthrough of how to load data from Microsoft Word files.
`Obsidian <./examples/obsidian.html>`_: A walkthrough of how to load data from an Obsidian file dump.
`Roam <./examples/roam.html>`_: A walkthrough of how to load data from a Roam file export.
@@ -59,6 +57,26 @@ There are a lot of different document loaders that LangChain supports. Below are
`iFixit <./examples/ifixit.html>`_: A walkthrough of how to search and load data like guides, technical Q&A's, and device wikis from iFixit.com
`Notebook <./examples/notebook.html>`_: A walkthrough of how to load data from .ipynb notebook.
`Copypaste <./examples/copypaste.html>`_: A walkthrough of how to load a document object from something you just want to copy and paste.
+`CSV <./examples/csv.html>`_: A walkthrough of how to load data from a .csv file.
+`Facebook Chat <./examples/facebook_chat.html>`_: A walkthrough of how to load data from a Facebook Chat json file.
+`Image <./examples/image.html>`_: A walkthrough of how to load images such as JPGs PNGs into a document format that can be used downstream.
+`Markdown <./examples/markdown.html>`_: A walkthrough of how to load data from a markdown file.
+`SRT <./examples/srt.html>`_: A walkthrough of how to load data from a subtitle (`.srt`) file.
+`Telegram <./examples/telegram.html>`_: A walkthrough of how to load data from a Telegram Chat json file.
+`URL <./examples/url.html>`_: A walkthrough of how to load HTML documents from a list of URLs into a document format that we can use downstream.
+`Word Document <./examples/word_document.html>`_: A walkthrough of how to load data from Microsoft Word files.
+`Blackboard <./examples/blackboard.html>`_: A walkthrough of how to load data from a Blackboard course.
.. toctree::


@@ -7,7 +7,6 @@ from langchain.document_loaders.college_confidential import CollegeConfidentialL
from langchain.document_loaders.conllu import CoNLLULoader
from langchain.document_loaders.csv_loader import CSVLoader
from langchain.document_loaders.directory import DirectoryLoader
-from langchain.document_loaders.docx import UnstructuredDocxLoader
from langchain.document_loaders.email import UnstructuredEmailLoader
from langchain.document_loaders.evernote import EverNoteLoader
from langchain.document_loaders.facebook_chat import FacebookChatLoader
@@ -72,7 +71,6 @@ __all__ = [
"UnstructuredPDFLoader",
"UnstructuredImageLoader",
"ObsidianLoader",
"UnstructuredDocxLoader",
"UnstructuredEmailLoader",
"UnstructuredMarkdownLoader",
"RoamLoader",


@@ -1,13 +0,0 @@
"""Loader that loads Microsoft Word files."""
from typing import List
from langchain.document_loaders.unstructured import UnstructuredFileLoader
class UnstructuredDocxLoader(UnstructuredFileLoader):
"""Loader that uses unstructured to load Microsoft Word files."""
def _get_elements(self) -> List:
from unstructured.partition.docx import partition_docx
return partition_docx(filename=self.file_path)