From 05664a6f20f5059fc00cc888a5dc3500c860d88f Mon Sep 17 00:00:00 2001
From: Pu Cao <48318302+caopulan@users.noreply.github.com>
Date: Mon, 4 Sep 2023 05:45:45 +0800
Subject: [PATCH] docs(text_splitter): update document of character splitter
 with tiktoken (#10001)

The current document does not mention that splits larger than the chunk
size can occur. This change updates the document to explain why that
happens and how to avoid it.

Related issues: #1349, #3838, #2140
---
 .../document_transformers/text_splitters/split_by_token.ipynb | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/docs/extras/modules/data_connection/document_transformers/text_splitters/split_by_token.ipynb b/docs/extras/modules/data_connection/document_transformers/text_splitters/split_by_token.ipynb
index 1a99e3c417..dfa81c3e28 100644
--- a/docs/extras/modules/data_connection/document_transformers/text_splitters/split_by_token.ipynb
+++ b/docs/extras/modules/data_connection/document_transformers/text_splitters/split_by_token.ipynb
@@ -91,7 +91,9 @@
    "id": "de5b6a6e",
    "metadata": {},
    "source": [
-    "We can also load a tiktoken splitter directly"
+    "Note that if we use `CharacterTextSplitter.from_tiktoken_encoder`, the text is only split by `CharacterTextSplitter`; the `tiktoken` tokenizer is only used to merge splits. This means a split can still be larger than the chunk size as measured by the `tiktoken` tokenizer. We can use `RecursiveCharacterTextSplitter.from_tiktoken_encoder` to make sure splits are not larger than the chunk size of tokens allowed by the language model, since any split that is still too large will be recursively split again.\n",
+    "\n",
+    "We can also load a tiktoken splitter directly, which ensures each split is smaller than the chunk size."
   ]
  },
  {
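
A minimal sketch (not part of the patch) illustrating the behavior the new doc text describes. It assumes the 2023-era `langchain.text_splitter` import path and `tiktoken`'s `cl100k_base` encoding; `long_text` is a made-up, separator-free example string.

```python
import tiktoken
from langchain.text_splitter import (
    CharacterTextSplitter,
    RecursiveCharacterTextSplitter,
)

# A single block of text with no "\n\n" separators (hypothetical example).
long_text = "lorem ipsum " * 200

# CharacterTextSplitter only splits on its separator ("\n\n" by default) and
# uses tiktoken solely to count tokens when merging splits, so a
# separator-free block comes back as one chunk that exceeds chunk_size.
char_splitter = CharacterTextSplitter.from_tiktoken_encoder(
    encoding_name="cl100k_base", chunk_size=100, chunk_overlap=0
)

# RecursiveCharacterTextSplitter falls back to finer separators ("\n", " ",
# "") whenever a piece is still too large, so every chunk fits in chunk_size.
recursive_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    encoding_name="cl100k_base", chunk_size=100, chunk_overlap=0
)

enc = tiktoken.get_encoding("cl100k_base")
for name, splitter in [("character", char_splitter),
                       ("recursive", recursive_splitter)]:
    sizes = [len(enc.encode(chunk)) for chunk in splitter.split_text(long_text)]
    print(name, "max chunk tokens:", max(sizes))
```

If this sketch behaves as described, the character splitter reports a maximum well above 100 tokens (LangChain also logs a "Created a chunk of size ..." warning), while the recursive splitter stays at or below 100.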