diff --git a/docs/docs/guides/privacy/presidio_data_anonymization/qa_privacy_protection.ipynb b/docs/docs/guides/privacy/presidio_data_anonymization/qa_privacy_protection.ipynb
index 51e3a05c0c..12027c923b 100644
--- a/docs/docs/guides/privacy/presidio_data_anonymization/qa_privacy_protection.ipynb
+++ b/docs/docs/guides/privacy/presidio_data_anonymization/qa_privacy_protection.ipynb
@@ -21,7 +21,7 @@
     "\n",
     "In this notebook, we will look at building a basic system for question answering, based on private data. Before feeding the LLM with this data, we need to protect it so that it doesn't go to an external API (e.g. OpenAI, Anthropic). Then, after receiving the model output, we would like the data to be restored to its original form. Below you can observe an example flow of this QA system:\n",
     "\n",
-    "\n",
+    "\n",
     "\n",
     "\n",
     "In the following notebook, we will not go into the details of how the anonymizer works. If you are interested, please visit [this part of the documentation](https://python.langchain.com/docs/guides/privacy/presidio_data_anonymization/).\n",
@@ -839,6 +839,8 @@
     "metadata": {},
     "outputs": [],
     "source": [
+    "documents = [Document(page_content=document_content)]\n",
+    "\n",
     "text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)\n",
     "chunks = text_splitter.split_documents(documents)\n",
diff --git a/docs/static/img/qa_privacy_protection.png b/docs/static/img/qa_privacy_protection.png
index 41557fd6a3..28b68da1c0 100644
Binary files a/docs/static/img/qa_privacy_protection.png and b/docs/static/img/qa_privacy_protection.png differ
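The notebook fix above defines `documents` by wrapping the raw string `document_content` in a `Document` before handing it to `split_documents`, which was previously called on an undefined name. The sketch below illustrates the mechanics under stated assumptions: `Document` is re-implemented as a plain dataclass, and the splitter is a naive fixed-window stand-in (LangChain's `RecursiveCharacterTextSplitter` is smarter and prefers natural boundaries such as newlines), so only the `chunk_size`/`chunk_overlap` semantics carry over.

```python
from dataclasses import dataclass


@dataclass
class Document:
    """Minimal stand-in for langchain's Document: just the text payload."""
    page_content: str


def split_documents(documents, chunk_size=1000, chunk_overlap=100):
    """Naive fixed-window splitter: slide a chunk_size window over each
    document, stepping by (chunk_size - chunk_overlap) so consecutive
    chunks share chunk_overlap characters."""
    step = chunk_size - chunk_overlap
    chunks = []
    for doc in documents:
        text = doc.page_content
        for start in range(0, max(len(text) - chunk_overlap, 1), step):
            chunks.append(Document(page_content=text[start:start + chunk_size]))
    return chunks


# The shape of the notebook fix: wrap the raw string before splitting.
document_content = "x" * 2500  # placeholder for the notebook's private text
documents = [Document(page_content=document_content)]
chunks = split_documents(documents, chunk_size=1000, chunk_overlap=100)
print([len(c.page_content) for c in chunks])  # → [1000, 1000, 700]
```

The key point of the diff is only the first of those last three lines: `split_documents` expects a list of `Document` objects, not a bare string, so the raw text must be wrapped exactly once before chunking.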