{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Llama.cpp\n",
    "\n",
    "[llama-cpp-python](https://github.com/abetlen/llama-cpp-python) is a Python binding for [llama.cpp](https://github.com/ggerganov/llama.cpp).\n",
    "It supports [several LLMs](https://github.com/ggerganov/llama.cpp).\n",
    "\n",
    "This notebook goes over how to run `llama-cpp-python` within LangChain."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Installation\n",
    "\n",
    "There are different options for installing the llama-cpp package:\n",
    "- CPU usage only\n",
    "- CPU + GPU (using one of many BLAS backends)\n",
    "- Metal GPU (macOS with an Apple Silicon chip)\n",
    "\n",
    "### CPU only installation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "!pip install llama-cpp-python"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Installation with OpenBLAS / cuBLAS / CLBlast\n",
    "\n",
    "`llama.cpp` supports multiple BLAS backends for faster processing. Use the `FORCE_CMAKE=1` environment variable to force the use of cmake and install the pip package for the desired BLAS backend ([source](https://github.com/abetlen/llama-cpp-python#installation-with-openblas--cublas--clblast)).\n",
    "\n",
    "Example installation with the cuBLAS backend:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!CMAKE_ARGS=\"-DLLAMA_CUBLAS=on\" FORCE_CMAKE=1 pip install llama-cpp-python"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**IMPORTANT**: If you have already installed the CPU-only version of the package, you need to reinstall it from scratch with the following command:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!CMAKE_ARGS=\"-DLLAMA_CUBLAS=on\" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Installation with Metal\n",
    "\n",
    "`llama.cpp` supports Apple Silicon as a first-class citizen, optimized via the ARM NEON, Accelerate and Metal frameworks. Use the `FORCE_CMAKE=1` environment variable to force the use of cmake and install the pip package with Metal support ([source](https://github.com/abetlen/llama-cpp-python/blob/main/docs/install/macos.md)).\n",
    "\n",
    "Example installation with Metal support:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!CMAKE_ARGS=\"-DLLAMA_METAL=on\" FORCE_CMAKE=1 pip install llama-cpp-python"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**IMPORTANT**: If you have already installed the CPU-only version of the package, you need to reinstall it from scratch with the following command:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!CMAKE_ARGS=\"-DLLAMA_METAL=on\" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Installation with Windows\n",
    "\n",
    "The most stable way to install the `llama-cpp-python` library on Windows is to compile it from source. You can follow most of the instructions in the repository itself, but there are some Windows-specific instructions which might be useful.\n",
    "\n",
    "Requirements to install `llama-cpp-python`:\n",
    "\n",
    "- git\n",
    "- python\n",
    "- cmake\n",
    "- Visual Studio Community (make sure you install this with the following settings)\n",
    "    - Desktop development with C++\n",
    "    - Python development\n",
    "    - Linux embedded development with C++\n",
    "\n",
    "1. Clone the git repository recursively to get the `llama.cpp` submodule as well:\n",
    "\n",
    "```\n",
    "git clone --recursive -j8 https://github.com/abetlen/llama-cpp-python.git\n",
    "```\n",
    "\n",
    "2. Open a command prompt (or Anaconda prompt if you have it installed) and set the environment variables before installing. If you do not have a GPU, you must set both of the following variables:\n",
    "\n",
    "```\n",
    "set FORCE_CMAKE=1\n",
    "set CMAKE_ARGS=-DLLAMA_CUBLAS=OFF\n",
    "```\n",
    "You can ignore the second environment variable if you have an NVIDIA GPU.\n",
    "\n",
    "#### Compiling and installing\n",
    "\n",
    "In the same command prompt (Anaconda prompt) where you set the variables, `cd` into the `llama-cpp-python` directory and run the following commands.\n",
    "\n",
    "```\n",
    "python setup.py clean\n",
    "python setup.py install\n",
    "```"
   ]
  },
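  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Optionally, you can verify the build afterwards. A minimal sanity check (this simply asks pip for the installed package metadata):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional check: confirm llama-cpp-python is installed and show its version\n",
    "!python -m pip show llama-cpp-python"
   ]
  },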
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Usage"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Make sure you are following all instructions to [install all necessary model files](https://github.com/ggerganov/llama.cpp).\n",
    "\n",
    "You don't need an `API_TOKEN` as you will run the LLM locally.\n",
    "\n",
    "It is worth understanding which models are suitable for use on the desired machine."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from langchain.llms import LlamaCpp\n",
    "from langchain import PromptTemplate, LLMChain\n",
    "from langchain.callbacks.manager import CallbackManager\n",
    "from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Consider using a template that suits your model! Check the model's page on Hugging Face, etc., to get the correct prompting template.**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "template = \"\"\"Question: {question}\n",
    "\n",
    "Answer: Let's work this out in a step by step way to be sure we have the right answer.\"\"\"\n",
    "\n",
    "prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "# Callbacks support token-wise streaming\n",
    "callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])\n",
    "# Verbose is required to pass to the callback manager"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### CPU"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Example using a LLaMA 2 7B model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Make sure the model path is correct for your system!\n",
    "llm = LlamaCpp(\n",
    "    model_path=\"/Users/rlm/Desktop/Code/llama/llama-2-7b-ggml/llama-2-7b-chat.ggmlv3.q4_0.bin\",\n",
    "    temperature=0.75,\n",
    "    max_tokens=2000,\n",
    "    top_p=1,\n",
    "    callback_manager=callback_manager,\n",
    "    verbose=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "Stephen Colbert:\n",
      "Yo, John, I heard you've been talkin' smack about me on your show.\n",
      "Let me tell you somethin', pal, I'm the king of late-night TV\n",
      "My satire is sharp as a razor, it cuts deeper than a knife\n",
      "While you're just a british bloke tryin' to be funny with your accent and your wit.\n",
      "John Oliver:\n",
      "Oh Stephen, don't be ridiculous, you may have the ratings but I got the real talk.\n",
      "My show is the one that people actually watch and listen to, not just for the laughs but for the facts.\n",
      "While you're busy talkin' trash, I'm out here bringing the truth to light.\n",
      "Stephen Colbert:\n",
      "Truth? Ha! You think your show is about truth? Please, it's all just a joke to you.\n",
      "You're just a fancy-pants british guy tryin' to be funny with your news and your jokes.\n",
      "While I'm the one who's really makin' a difference, with my sat"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n",
      "llama_print_timings: load time = 358.60 ms\n",
      "llama_print_timings: sample time = 172.55 ms / 256 runs ( 0.67 ms per token, 1483.59 tokens per second)\n",
      "llama_print_timings: prompt eval time = 613.36 ms / 16 tokens ( 38.33 ms per token, 26.09 tokens per second)\n",
      "llama_print_timings: eval time = 10151.17 ms / 255 runs ( 39.81 ms per token, 25.12 tokens per second)\n",
      "llama_print_timings: total time = 11332.41 ms\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "\"\\nStephen Colbert:\\nYo, John, I heard you've been talkin' smack about me on your show.\\nLet me tell you somethin', pal, I'm the king of late-night TV\\nMy satire is sharp as a razor, it cuts deeper than a knife\\nWhile you're just a british bloke tryin' to be funny with your accent and your wit.\\nJohn Oliver:\\nOh Stephen, don't be ridiculous, you may have the ratings but I got the real talk.\\nMy show is the one that people actually watch and listen to, not just for the laughs but for the facts.\\nWhile you're busy talkin' trash, I'm out here bringing the truth to light.\\nStephen Colbert:\\nTruth? Ha! You think your show is about truth? Please, it's all just a joke to you.\\nYou're just a fancy-pants british guy tryin' to be funny with your news and your jokes.\\nWhile I'm the one who's really makin' a difference, with my sat\""
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "prompt = \"\"\"\n",
    "Question: A rap battle between Stephen Colbert and John Oliver\n",
    "\"\"\"\n",
    "llm(prompt)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Example using a LLaMA v1 model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Make sure the model path is correct for your system!\n",
    "llm = LlamaCpp(\n",
    "    model_path=\"./ggml-model-q4_0.bin\", callback_manager=callback_manager, verbose=True\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "llm_chain = LLMChain(prompt=prompt, llm=llm)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "1. First, find out when Justin Bieber was born.\n",
      "2. We know that Justin Bieber was born on March 1, 1994.\n",
      "3. Next, we need to look up when the Super Bowl was played in that year.\n",
      "4. The Super Bowl was played on January 28, 1995.\n",
      "5. Finally, we can use this information to answer the question. The NFL team that won the Super Bowl in the year Justin Bieber was born is the San Francisco 49ers."
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n",
      "llama_print_timings: load time = 434.15 ms\n",
      "llama_print_timings: sample time = 41.81 ms / 121 runs ( 0.35 ms per token)\n",
      "llama_print_timings: prompt eval time = 2523.78 ms / 48 tokens ( 52.58 ms per token)\n",
      "llama_print_timings: eval time = 23971.57 ms / 121 runs ( 198.11 ms per token)\n",
      "llama_print_timings: total time = 28945.95 ms\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'\\n\\n1. First, find out when Justin Bieber was born.\\n2. We know that Justin Bieber was born on March 1, 1994.\\n3. Next, we need to look up when the Super Bowl was played in that year.\\n4. The Super Bowl was played on January 28, 1995.\\n5. Finally, we can use this information to answer the question. The NFL team that won the Super Bowl in the year Justin Bieber was born is the San Francisco 49ers.'"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "question = \"What NFL team won the Super Bowl in the year Justin Bieber was born?\"\n",
    "\n",
    "llm_chain.run(question)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### GPU\n",
    "\n",
    "If the installation with a BLAS backend was successful, you will see a `BLAS = 1` indicator in the model properties.\n",
    "\n",
    "Two of the most important parameters for use with GPU are:\n",
    "\n",
    "- `n_gpu_layers` - determines how many layers of the model are offloaded to your GPU.\n",
    "- `n_batch` - how many tokens are processed in parallel.\n",
    "\n",
    "Setting these parameters correctly will dramatically improve the evaluation speed (see [wrapper code](https://github.com/mmagnesium/langchain/blob/master/langchain/llms/llamacpp.py) for more details)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "n_gpu_layers = 40  # Change this value based on your model and your GPU VRAM pool.\n",
    "n_batch = 512  # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.\n",
    "\n",
    "# Make sure the model path is correct for your system!\n",
    "llm = LlamaCpp(\n",
    "    model_path=\"./ggml-model-q4_0.bin\",\n",
    "    n_gpu_layers=n_gpu_layers,\n",
    "    n_batch=n_batch,\n",
    "    callback_manager=callback_manager,\n",
    "    verbose=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "llm_chain = LLMChain(prompt=prompt, llm=llm)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      " We are looking for an NFL team that won the Super Bowl when Justin Bieber (born March 1, 1994) was born. \n",
      "\n",
      "First, let's look up which year is closest to when Justin Bieber was born:\n",
      "\n",
      "* The year before he was born: 1993\n",
      "* The year of his birth: 1994\n",
      "* The year after he was born: 1995\n",
      "\n",
      "We want to know what NFL team won the Super Bowl in the year that is closest to when Justin Bieber was born. Therefore, we should look up the NFL team that won the Super Bowl in either 1993 or 1994.\n",
      "\n",
      "Now let's find out which NFL team did win the Super Bowl in either of those years:\n",
      "\n",
      "* In 1993, the San Francisco 49ers won the Super Bowl against the Dallas Cowboys by a score of 20-16.\n",
      "* In 1994, the San Francisco 49ers won the Super Bowl again, this time against the San Diego Chargers by a score of 49-26.\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n",
      "llama_print_timings: load time = 238.10 ms\n",
      "llama_print_timings: sample time = 84.23 ms / 256 runs ( 0.33 ms per token)\n",
      "llama_print_timings: prompt eval time = 238.04 ms / 49 tokens ( 4.86 ms per token)\n",
      "llama_print_timings: eval time = 10391.96 ms / 255 runs ( 40.75 ms per token)\n",
      "llama_print_timings: total time = 15664.80 ms\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "\" We are looking for an NFL team that won the Super Bowl when Justin Bieber (born March 1, 1994) was born. \\n\\nFirst, let's look up which year is closest to when Justin Bieber was born:\\n\\n* The year before he was born: 1993\\n* The year of his birth: 1994\\n* The year after he was born: 1995\\n\\nWe want to know what NFL team won the Super Bowl in the year that is closest to when Justin Bieber was born. Therefore, we should look up the NFL team that won the Super Bowl in either 1993 or 1994.\\n\\nNow let's find out which NFL team did win the Super Bowl in either of those years:\\n\\n* In 1993, the San Francisco 49ers won the Super Bowl against the Dallas Cowboys by a score of 20-16.\\n* In 1994, the San Francisco 49ers won the Super Bowl again, this time against the San Diego Chargers by a score of 49-26.\\n\""
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "question = \"What NFL team won the Super Bowl in the year Justin Bieber was born?\"\n",
    "\n",
    "llm_chain.run(question)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Metal\n",
    "\n",
    "If the installation with Metal was successful, you will see a `NEON = 1` indicator in the model properties.\n",
    "\n",
    "The most important GPU parameters are:\n",
    "\n",
    "- `n_gpu_layers` - determines how many layers of the model are offloaded to your Metal GPU; in most cases, setting it to `1` is enough for Metal.\n",
    "- `n_batch` - how many tokens are processed in parallel; the default is 8, set it to a bigger number.\n",
    "- `f16_kv` - for some reason, Metal only supports `True`; otherwise you will get an error such as `Asserting on type 0\n",
    "GGML_ASSERT: .../ggml-metal.m:706: false && \"not implemented\"`\n",
    "\n",
    "Setting these parameters correctly will dramatically improve the evaluation speed (see [wrapper code](https://github.com/mmagnesium/langchain/blob/master/langchain/llms/llamacpp.py) for more details)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "n_gpu_layers = 1  # Metal set to 1 is enough.\n",
    "n_batch = 512  # Should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon Chip.\n",
    "\n",
    "# Make sure the model path is correct for your system!\n",
    "llm = LlamaCpp(\n",
    "    model_path=\"./ggml-model-q4_0.bin\",\n",
    "    n_gpu_layers=n_gpu_layers,\n",
    "    n_batch=n_batch,\n",
    "    f16_kv=True,  # MUST set to True, otherwise you will run into problem after a couple of calls\n",
    "    callback_manager=callback_manager,\n",
    "    verbose=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The console log will show the following to indicate that Metal was enabled properly:\n",
    "\n",
    "```\n",
    "ggml_metal_init: allocating\n",
    "ggml_metal_init: using MPS\n",
    "...\n",
    "```\n",
    "\n",
    "You can also check `Activity Monitor` and watch the GPU usage of the process; CPU usage will drop dramatically after turning on `n_gpu_layers=1`.\n",
    "\n",
    "The first call to the LLM may be slow due to model compilation on the Metal GPU."
   ]
  },
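  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With Metal enabled, the same `LLMChain` pattern from the CPU and GPU sections above can be reused. A minimal sketch, assuming the `prompt` template defined earlier in this notebook:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Reuse the prompt template from above with the Metal-backed model\n",
    "llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
    "\n",
    "question = \"What NFL team won the Super Bowl in the year Justin Bieber was born?\"\n",
    "llm_chain.run(question)"
   ]
  }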
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}