c998569c8f
#docs: text splitters improvements

Changes are only in the Jupyter notebooks.

- added links to the source packages and short descriptions of these packages
- removed the " Text Splitters" suffixes from the TOC elements (they made the list of the text splitters messy)
- moved the text splitters that are based on the length function into a separate list; they can be mixed with any classes from "Text Splitters", so this is a different classification

## Who can review?

@hwchase17 - project lead
@eyurtsev
@vowelparrot

NOTE: please check out the results of the `Python code` text splitter example (text_splitters/examples/python.ipynb). It looks suboptimal.
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "13dc0983",
   "metadata": {},
   "source": [
    "# Hugging Face tokenizer\n",
    "\n",
    ">[Hugging Face](https://huggingface.co/docs/tokenizers/index) has many tokenizers.\n",
    "\n",
"We use Hugging Face tokenizer, the [GPT2TokenizerFast](https://huggingface.co/Ransaka/gpt2-tokenizer-fast) to count the text length in tokens.\n",
|
|
"\n",
|
|
"1. How the text is split: by character passed in\n",
|
|
"2. How the chunk size is measured: by number of tokens calculated by the `Hugging Face` tokenizer\n"
|
|
]
|
|
},
|
|
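  {
   "cell_type": "markdown",
   "id": "2a7c9f41",
   "metadata": {},
   "source": [
    "First make sure the [transformers](https://github.com/huggingface/transformers) package, which provides the tokenizer, is installed; skip this cell if it already is.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6c1e8d2f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Install the Hugging Face `transformers` package (skip if already installed)\n",
    "%pip install transformers"
   ]
  },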
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "a8ce51d5",
   "metadata": {},
   "outputs": [],
   "source": [
    "from transformers import GPT2TokenizerFast\n",
    "\n",
    "tokenizer = GPT2TokenizerFast.from_pretrained(\"gpt2\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "388369ed",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.text_splitter import CharacterTextSplitter\n",
    "\n",
    "# This is a long document we can split up.\n",
    "with open('../../../state_of_the_union.txt') as f:\n",
    "    state_of_the_union = f.read()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "ca5e72c0",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Chunk size is measured in tokens counted by the Hugging Face tokenizer\n",
    "text_splitter = CharacterTextSplitter.from_huggingface_tokenizer(tokenizer, chunk_size=100, chunk_overlap=0)\n",
    "texts = text_splitter.split_text(state_of_the_union)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "37cdfbeb",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n",
      "\n",
      "Last year COVID-19 kept us apart. This year we are finally together again. \n",
      "\n",
      "Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n",
      "\n",
      "With a duty to one another to the American people to the Constitution.\n"
     ]
    }
   ],
   "source": [
    "print(texts[0])"
   ]
  },
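  {
   "cell_type": "markdown",
   "id": "f0e4d5a1",
   "metadata": {},
   "source": [
    "As a quick sanity check, we can re-encode each chunk with the same tokenizer and inspect the resulting token counts. Note that `CharacterTextSplitter` only merges separator-delimited splits, so a single section longer than `chunk_size` can still yield an oversized chunk.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9b1c2d3e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Count tokens per chunk with the same tokenizer the splitter used\n",
    "token_counts = [len(tokenizer.encode(text)) for text in texts]\n",
    "print(f\"{len(texts)} chunks; largest is {max(token_counts)} tokens\")"
   ]
  }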
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.6"
  },
  "vscode": {
   "interpreter": {
    "hash": "aee8b7b246df8f9039afb4144a1f6fd8d2ca17a180786b69acc140d282b71a49"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}