langchain/docs/extras/integrations/text_embedding/gpt4all.ipynb

{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "d63d56c2",
"metadata": {},
"source": [
"# GPT4All\n",
"\n",
"[GPT4All](https://gpt4all.io/index.html) is a free-to-use, locally running, privacy-aware chatbot. No GPU or internet connection is required. It features popular models as well as its own, such as GPT4All Falcon and Wizard.\n",
"\n",
"This notebook explains how to use [GPT4All embeddings](https://docs.gpt4all.io/gpt4all_python_embedding.html#gpt4all.gpt4all.Embed4All) with LangChain."
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "46b7aa85",
"metadata": {},
"source": [
"## Install GPT4All's Python Bindings"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cdd68231",
"metadata": {},
"outputs": [],
"source": [
"%pip install gpt4all > /dev/null"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "d80f4b92",
"metadata": {},
"source": [
"Note: you may need to restart the kernel to use updated packages."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "08f267d6",
"metadata": {},
"outputs": [],
"source": [
"from langchain.embeddings import GPT4AllEmbeddings"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "0120e939",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"100%|████████████████████████| 45.5M/45.5M [00:02<00:00, 18.5MiB/s]\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Model downloaded at: /Users/rlm/.cache/gpt4all/ggml-all-MiniLM-L6-v2-f16.bin\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"objc[45711]: Class GGMLMetalClass is implemented in both /Users/rlm/anaconda3/envs/lcn2/lib/python3.9/site-packages/gpt4all/llmodel_DO_NOT_MODIFY/build/libreplit-mainline-metal.dylib (0x29fe18208) and /Users/rlm/anaconda3/envs/lcn2/lib/python3.9/site-packages/gpt4all/llmodel_DO_NOT_MODIFY/build/libllamamodel-mainline-metal.dylib (0x2a0244208). One of the two will be used. Which one is undefined.\n"
]
}
],
"source": [
"gpt4all_embd = GPT4AllEmbeddings()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "53134a38",
"metadata": {},
"outputs": [],
"source": [
"text = \"This is a test document.\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "eef36bde",
"metadata": {},
"source": [
"## Embed the Textual Data"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a55adf9f",
"metadata": {},
"outputs": [],
"source": [
"query_result = gpt4all_embd.embed_query(text)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "12b24e69",
"metadata": {},
"source": [
"With `embed_documents` you can embed multiple pieces of text at once. You can also map these embeddings with [Nomic's Atlas](https://docs.nomic.ai/index.html) to see a visual representation of your data."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "6ebd42d7",
"metadata": {},
"outputs": [],
"source": [
"doc_result = gpt4all_embd.embed_documents([text])"
]
}
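,
{
"attachments": {},
"cell_type": "markdown",
"id": "f3a9c2e1",
"metadata": {},
"source": [
"The returned embeddings are plain Python lists of floats, so you can compare them directly. As a minimal sketch (the `cosine_similarity` helper below is illustrative, not part of LangChain):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b8d41e07",
"metadata": {},
"outputs": [],
"source": [
"import math\n",
"\n",
"\n",
"def cosine_similarity(a, b):\n",
"    # Illustrative helper: cosine of the angle between two embedding vectors.\n",
"    dot = sum(x * y for x, y in zip(a, b))\n",
"    norm_a = math.sqrt(sum(x * x for x in a))\n",
"    norm_b = math.sqrt(sum(x * x for x in b))\n",
"    return dot / (norm_a * norm_b)\n",
"\n",
"\n",
"# query_result and doc_result[0] embed the same text, so their\n",
"# similarity should be close to 1.0.\n",
"cosine_similarity(query_result, doc_result[0])"
]
}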
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}