langchain: adds recursive json splitter (#17144)

- **Description:** This adds a recursive JSON splitter class alongside the
existing text splitters, as well as unit tests
- **Issue:** Splitting text from structured data can cause problems: if you
have a large nested JSON object and split it as regular text, you may
lose the structure of the JSON. To mitigate this you can split the
nested JSON into large chunks and overlap them, but this causes
unnecessary text processing, and there will still be times when the
nested JSON is so big that the chunks get separated from their parent
keys.
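
For illustration, here is a minimal sketch of that naive approach (the `RecursiveCharacterTextSplitter` usage, chunk sizes, and stand-in data below are illustrative, not part of this PR):
```python
import json

from langchain.text_splitter import RecursiveCharacterTextSplitter

# Naive approach: serialize the nested JSON and split it as plain text.
# Chunk boundaries fall wherever the character budget runs out, so nested
# objects can be cut off from their parent keys.
nested = {"val0": "x", "val1": {"val16": {"val160": "y" * 50}}}  # stand-in data
text_splitter = RecursiveCharacterTextSplitter(chunk_size=300, chunk_overlap=50)
chunks = text_splitter.split_text(json.dumps(nested))
```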

As an example, you wouldn't want the following to be split in half:
```python
{'val0': 'DFWeNdWhapbR',
 'val1': {'val10': 'QdJo',
          'val11': 'FWSDVFHClW',
          'val12': 'bkVnXMMlTiQh',
          'val13': 'tdDMKRrOY',
          'val14': 'zybPALvL',
          'val15': 'JMzGMNH',
          'val16': {'val160': 'qLuLKusFw',
                    'val161': 'DGuotLh',
                    'val162': 'KztlcSBropT',
-----------------------------------------------------------------------split-----
                    'val163': 'YlHHDrN',
                    'val164': 'CtzsxlGBZKf',
                    'val165': 'bXzhcrWLmBFp',
                    'val166': 'zZAqC',
                    'val167': 'ZtyWno',
                    'val168': 'nQQZRsLnaBhb',
                    'val169': 'gSpMbJwA'},
          'val17': 'JhgiyF',
          'val18': 'aJaqjUSFFrI',
          'val19': 'glqNSvoyxdg'}}
```
Any LLM processing the second chunk of text may not have the context of
val1 and val16, reducing accuracy. Embeddings will also lack this
context, making retrieval less accurate.

Instead you want it to be split into chunks that retain the JSON
structure:
```python
{'val0': 'DFWeNdWhapbR',
 'val1': {'val10': 'QdJo',
          'val11': 'FWSDVFHClW',
          'val12': 'bkVnXMMlTiQh',
          'val13': 'tdDMKRrOY',
          'val14': 'zybPALvL',
          'val15': 'JMzGMNH',
          'val16': {'val160': 'qLuLKusFw',
                    'val161': 'DGuotLh',
                    'val162': 'KztlcSBropT',
                    'val163': 'YlHHDrN',
                    'val164': 'CtzsxlGBZKf'}}}
```
and
```python
{'val1':{'val16':{
                    'val165': 'bXzhcrWLmBFp',
                    'val166': 'zZAqC',
                    'val167': 'ZtyWno',
                    'val168': 'nQQZRsLnaBhb',
                    'val169': 'gSpMbJwA'},
          'val17': 'JhgiyF',
          'val18': 'aJaqjUSFFrI',
          'val19': 'glqNSvoyxdg'}}
```
This recursive JSON text splitter does exactly that. Values that contain a
list can be converted to a dict first by passing convert_lists=True to the
split methods; otherwise long lists will not be split, and you may end up
with chunks larger than the max chunk size.
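
A minimal usage sketch of the API added in this diff (the data below is a stand-in):
```python
from langchain.text_splitter import RecursiveJsonSplitter

# Stand-in data: a dict containing a long list of nested objects.
data = {"val0": "x", "vals": [{"k": "y" * 10} for _ in range(50)]}

splitter = RecursiveJsonSplitter(max_chunk_size=300)

# Without convert_lists, the list is kept intact as a single value,
# so this chunk can exceed max_chunk_size.
chunks = splitter.split_json(json_data=data)

# With convert_lists=True, the list is first converted to a dict
# ({"0": item0, "1": item1, ...}) so it can be split like any nested object.
chunks = splitter.split_json(json_data=data, convert_lists=True)
```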

In my testing, large JSON objects could be split into small chunks, with:
- increased question-answering accuracy
- the ability to split into smaller chunks, meaning retrieval queries can
use fewer tokens


- **Dependencies:** `json` import added to text_splitter.py, and `random`
added to the unit tests
- **Twitter handle:** @joelsprunger

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>

@@ -0,0 +1,225 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "a678d550",
"metadata": {},
"source": [
"# Recursively split JSON\n",
"\n",
"This json splitter traverses json data depth first and builds smaller json chunks. It attempts to keep nested json objects whole but will split them if needed to keep chunks between a min_chunk_size and the max_chunk_size. If the value is not a nested json, but rather a very large string the string will not be split. If you need a hard cap on the chunk size considder following this with a Recursive Text splitter on those chunks. There is an optional pre-processing step to split lists, by first converting them to json (dict) and then splitting them as such.\n",
"\n",
"1. How the text is split: json value.\n",
"2. How the chunk size is measured: by number of characters."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a504e1e7",
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"import requests"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "3390ae1d",
"metadata": {},
"outputs": [],
"source": [
"# This is a large nested json object and will be loaded as a python dict\n",
"json_data = requests.get(\"https://api.smith.langchain.com/openapi.json\").json()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "7bfe2c1e",
"metadata": {},
"outputs": [],
"source": [
"from langchain.text_splitter import RecursiveJsonSplitter"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "2833c409",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"splitter = RecursiveJsonSplitter(max_chunk_size=300)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "f941aa56",
"metadata": {
"scrolled": false
},
"outputs": [],
"source": [
"# Recursively split json data - If you need to access/manipulate the smaller json chunks\n",
"json_chunks = splitter.split_json(json_data=json_data)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "0839f4f0",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\"openapi\": \"3.0.2\", \"info\": {\"title\": \"LangChainPlus\", \"version\": \"0.1.0\"}, \"paths\": {\"/sessions/{session_id}\": {\"get\": {\"tags\": [\"tracer-sessions\"], \"summary\": \"Read Tracer Session\", \"description\": \"Get a specific session.\", \"operationId\": \"read_tracer_session_sessions__session_id__get\"}}}}\n",
"{\"paths\": {\"/sessions/{session_id}\": {\"get\": {\"parameters\": [{\"required\": true, \"schema\": {\"title\": \"Session Id\", \"type\": \"string\", \"format\": \"uuid\"}, \"name\": \"session_id\", \"in\": \"path\"}, {\"required\": false, \"schema\": {\"title\": \"Include Stats\", \"type\": \"boolean\", \"default\": false}, \"name\": \"include_stats\", \"in\": \"query\"}, {\"required\": false, \"schema\": {\"title\": \"Accept\", \"type\": \"string\"}, \"name\": \"accept\", \"in\": \"header\"}]}}}}\n"
]
}
],
"source": [
"# The splitter can also output documents\n",
"docs = splitter.create_documents(texts=[json_data])\n",
"\n",
"# or a list of strings\n",
"texts = splitter.split_text(json_data=json_data)\n",
"\n",
"print(texts[0])\n",
"print(texts[1])"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "c34b1f7f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[293, 431, 203, 277, 230, 194, 162, 280, 223, 193]\n",
"{\"paths\": {\"/sessions/{session_id}\": {\"get\": {\"parameters\": [{\"required\": true, \"schema\": {\"title\": \"Session Id\", \"type\": \"string\", \"format\": \"uuid\"}, \"name\": \"session_id\", \"in\": \"path\"}, {\"required\": false, \"schema\": {\"title\": \"Include Stats\", \"type\": \"boolean\", \"default\": false}, \"name\": \"include_stats\", \"in\": \"query\"}, {\"required\": false, \"schema\": {\"title\": \"Accept\", \"type\": \"string\"}, \"name\": \"accept\", \"in\": \"header\"}]}}}}\n"
]
}
],
"source": [
"# Let's look at the size of the chunks\n",
"print([len(text) for text in texts][:10])\n",
"\n",
"# Reviewing one of these chunks that was bigger we see there is a list object there\n",
"print(texts[1])"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "992477c2",
"metadata": {},
"outputs": [],
"source": [
"# The json splitter by default does not split lists\n",
"# the following will preprocess the json and convert list to dict with index:item as key:val pairs\n",
"texts = splitter.split_text(json_data=json_data, convert_lists=True)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "2d23b3aa",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[293, 431, 203, 277, 230, 194, 162, 280, 223, 193]\n"
]
}
],
"source": [
"# Let's look at the size of the chunks. Now they are all under the max\n",
"print([len(text) for text in texts][:10])"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "d2c2773e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\"paths\": {\"/sessions/{session_id}\": {\"get\": {\"parameters\": [{\"required\": true, \"schema\": {\"title\": \"Session Id\", \"type\": \"string\", \"format\": \"uuid\"}, \"name\": \"session_id\", \"in\": \"path\"}, {\"required\": false, \"schema\": {\"title\": \"Include Stats\", \"type\": \"boolean\", \"default\": false}, \"name\": \"include_stats\", \"in\": \"query\"}, {\"required\": false, \"schema\": {\"title\": \"Accept\", \"type\": \"string\"}, \"name\": \"accept\", \"in\": \"header\"}]}}}}\n"
]
}
],
"source": [
"# The list has been converted to a dict, but retains all the needed contextual information even if split into many chunks\n",
"print(texts[1])"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "8963b01a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='{\"paths\": {\"/sessions/{session_id}\": {\"get\": {\"parameters\": [{\"required\": true, \"schema\": {\"title\": \"Session Id\", \"type\": \"string\", \"format\": \"uuid\"}, \"name\": \"session_id\", \"in\": \"path\"}, {\"required\": false, \"schema\": {\"title\": \"Include Stats\", \"type\": \"boolean\", \"default\": false}, \"name\": \"include_stats\", \"in\": \"query\"}, {\"required\": false, \"schema\": {\"title\": \"Accept\", \"type\": \"string\"}, \"name\": \"accept\", \"in\": \"header\"}]}}}}')"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# We can also look at the documents\n",
"docs[1]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "168da4f0",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@@ -22,6 +22,7 @@ Note: **MarkdownHeaderTextSplitter** and **HTMLHeaderTextSplitter do not derive
from __future__ import annotations
import copy
import json
import logging
import pathlib
import re
@@ -1489,3 +1490,116 @@ class LatexTextSplitter(RecursiveCharacterTextSplitter):
"""Initialize a LatexTextSplitter."""
separators = self.get_separators_for_language(Language.LATEX)
super().__init__(separators=separators, **kwargs)
class RecursiveJsonSplitter:
def __init__(
self, max_chunk_size: int = 2000, min_chunk_size: Optional[int] = None
):
super().__init__()
self.max_chunk_size = max_chunk_size
self.min_chunk_size = (
min_chunk_size
if min_chunk_size is not None
else max(max_chunk_size - 200, 50)
)
@staticmethod
def _json_size(data: Dict) -> int:
"""Calculate the size of the serialized JSON object."""
return len(json.dumps(data))
@staticmethod
def _set_nested_dict(d: Dict, path: List[str], value: Any) -> None:
"""Set a value in a nested dictionary based on the given path."""
for key in path[:-1]:
d = d.setdefault(key, {})
d[path[-1]] = value
def _list_to_dict_preprocessing(self, data: Any) -> Any:
if isinstance(data, dict):
# Process each key-value pair in the dictionary
return {k: self._list_to_dict_preprocessing(v) for k, v in data.items()}
elif isinstance(data, list):
# Convert the list to a dictionary with index-based keys
return {
str(i): self._list_to_dict_preprocessing(item)
for i, item in enumerate(data)
}
else:
# Base case: the item is neither a dict nor a list, so return it unchanged
return data
    def _json_split(
        self,
        data: Dict[str, Any],
        current_path: Optional[List[str]] = None,
        chunks: Optional[List[Dict]] = None,
    ) -> List[Dict]:
        """
        Split json into maximum size dictionaries while preserving structure.
        """
        # Use None defaults to avoid shared mutable state across calls.
        current_path = current_path if current_path is not None else []
        chunks = chunks if chunks is not None else [{}]
if isinstance(data, dict):
for key, value in data.items():
new_path = current_path + [key]
chunk_size = self._json_size(chunks[-1])
size = self._json_size({key: value})
remaining = self.max_chunk_size - chunk_size
if size < remaining:
# Add item to current chunk
self._set_nested_dict(chunks[-1], new_path, value)
else:
if chunk_size >= self.min_chunk_size:
# Chunk is big enough, start a new chunk
chunks.append({})
# Iterate
self._json_split(value, new_path, chunks)
else:
# handle single item
self._set_nested_dict(chunks[-1], current_path, data)
return chunks
def split_json(
self,
json_data: Dict[str, Any],
convert_lists: bool = False,
) -> List[Dict]:
"""Splits JSON into a list of JSON chunks"""
if convert_lists:
chunks = self._json_split(self._list_to_dict_preprocessing(json_data))
else:
chunks = self._json_split(json_data)
# Remove the last chunk if it's empty
if not chunks[-1]:
chunks.pop()
return chunks
def split_text(
self, json_data: Dict[str, Any], convert_lists: bool = False
) -> List[str]:
"""Splits JSON into a list of JSON formatted strings"""
chunks = self.split_json(json_data=json_data, convert_lists=convert_lists)
# Convert to string
return [json.dumps(chunk) for chunk in chunks]
def create_documents(
self,
texts: List[Dict],
convert_lists: bool = False,
metadatas: Optional[List[dict]] = None,
) -> List[Document]:
"""Create documents from a list of json objects (Dict)."""
_metadatas = metadatas or [{}] * len(texts)
documents = []
for i, text in enumerate(texts):
for chunk in self.split_text(json_data=text, convert_lists=convert_lists):
metadata = copy.deepcopy(_metadatas[i])
new_doc = Document(page_content=chunk, metadata=metadata)
documents.append(new_doc)
return documents

@@ -1,7 +1,9 @@
"""Test text splitting functionality."""
import random
import re
import string
from pathlib import Path
from typing import List
from typing import Any, List
import pytest
from langchain_core.documents import Document
@@ -13,6 +15,7 @@ from langchain.text_splitter import (
MarkdownHeaderTextSplitter,
PythonCodeTextSplitter,
RecursiveCharacterTextSplitter,
RecursiveJsonSplitter,
TextSplitter,
Tokenizer,
split_text_on_tokens,
@@ -1302,3 +1305,48 @@ def test_split_text_on_tokens() -> None:
output = split_text_on_tokens(text=text, tokenizer=tokenizer)
expected_output = ["foo bar", "bar baz", "baz 123"]
assert output == expected_output
def test_split_json() -> None:
"""Test json text splitter"""
max_chunk = 800
splitter = RecursiveJsonSplitter(max_chunk_size=max_chunk)
def random_val() -> str:
return "".join(random.choices(string.ascii_letters, k=random.randint(4, 12)))
test_data: Any = {
"val0": random_val(),
"val1": {f"val1{i}": random_val() for i in range(100)},
}
test_data["val1"]["val16"] = {f"val16{i}": random_val() for i in range(100)}
# uses create_documents and split_text
docs = splitter.create_documents(texts=[test_data])
output = [len(doc.page_content) < max_chunk * 1.05 for doc in docs]
expected_output = [True for doc in docs]
assert output == expected_output
def test_split_json_with_lists() -> None:
"""Test json text splitter with list conversion"""
max_chunk = 800
splitter = RecursiveJsonSplitter(max_chunk_size=max_chunk)
def random_val() -> str:
return "".join(random.choices(string.ascii_letters, k=random.randint(4, 12)))
test_data: Any = {
"val0": random_val(),
"val1": {f"val1{i}": random_val() for i in range(100)},
}
test_data["val1"]["val16"] = {f"val16{i}": random_val() for i in range(100)}
test_data_list: Any = {"testPreprocessing": [test_data]}
# test text splitter
texts = splitter.split_text(json_data=test_data)
texts_list = splitter.split_text(json_data=test_data_list, convert_lists=True)
assert len(texts_list) >= len(texts)
