From 73901ef132b563084a27dc2d779d4d52ffa0713a Mon Sep 17 00:00:00 2001
From: Bhashithe Abeysinghe <137561690+bhashithe-air@users.noreply.github.com>
Date: Thu, 20 Jul 2023 09:31:25 -0400
Subject: [PATCH] Added windows specific instructions to Llama.cpp
 documentation. (#8000)

- Description: Added Windows-specific instructions for llama.cpp in the notebook file
- Issue: #6356
- Dependencies: None
- Tag maintainer: @baskaryan
---
 .../models/llms/integrations/llamacpp.ipynb  | 44 ++++++++++++++++++-
 1 file changed, 43 insertions(+), 1 deletion(-)

diff --git a/docs/extras/modules/model_io/models/llms/integrations/llamacpp.ipynb b/docs/extras/modules/model_io/models/llms/integrations/llamacpp.ipynb
index 345aa0b7dc..c7c3a46446 100644
--- a/docs/extras/modules/model_io/models/llms/integrations/llamacpp.ipynb
+++ b/docs/extras/modules/model_io/models/llms/integrations/llamacpp.ipynb
@@ -18,7 +18,7 @@
    "source": [
     "## Installation\n",
     "\n",
-    "There is a banch of options how to install the llama-cpp package: \n",
+    "There are several options for installing the llama-cpp package:\n",
     "- only CPU usage\n",
     "- CPU + GPU (using one of many BLAS backends)\n",
     "- Metal GPU (MacOS with Apple Silicon Chip) \n",
@@ -109,6 +109,48 @@
     "!CMAKE_ARGS=\"-DLLAMA_METAL=on\" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Installation on Windows\n",
+    "\n",
+    "The most stable way to install the `llama-cpp-python` library is to compile it from source. You can follow most of the instructions in the repository itself, but there are some Windows-specific instructions which might be useful.\n",
+    "\n",
+    "Requirements for installing `llama-cpp-python`:\n",
+    "\n",
+    "- git\n",
+    "- python\n",
+    "- cmake\n",
+    "- Visual Studio Community (make sure you install this with the following settings)\n",
+    "  - Desktop development with C++\n",
+    "  - Python development\n",
+    "  - Linux embedded development with C++\n",
+    "\n",
+    "1. Clone the git repository recursively to get the `llama.cpp` submodule as well:\n",
+    "\n",
+    "```\n",
+    "git clone --recursive -j8 https://github.com/abetlen/llama-cpp-python.git\n",
+    "```\n",
+    "\n",
+    "2. Open up a Command Prompt (or Anaconda Prompt if you have it installed) and set the environment variables before installing. If you do not have a GPU, you must set both of the following variables:\n",
+    "\n",
+    "```\n",
+    "set FORCE_CMAKE=1\n",
+    "set CMAKE_ARGS=-DLLAMA_CUBLAS=OFF\n",
+    "```\n",
+    "You can skip the second environment variable if you have an NVIDIA GPU.\n",
+    "\n",
+    "#### Compiling and installing\n",
+    "\n",
+    "In the same command prompt (or Anaconda Prompt) where you set the variables, `cd` into the `llama-cpp-python` directory and run the following commands:\n",
+    "\n",
+    "```\n",
+    "python setup.py clean\n",
+    "python setup.py install\n",
+    "```"
+   ]
+  },
   {
    "cell_type": "markdown",
    "metadata": {},
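
Once the steps in the new Windows section complete, a quick smoke test confirms that the source-built `llama-cpp-python` actually loads a model. This is a minimal sketch, not part of the patch: it assumes the install succeeded, and the model path is a placeholder for any quantized GGML model file you have downloaded locally.

```
# Minimal smoke test for the source-built llama-cpp-python install.
# The model path below is a hypothetical placeholder -- point it at a
# quantized GGML model file available on your machine.
from llama_cpp import Llama

llm = Llama(model_path="./models/ggml-model-q4_0.bin")  # placeholder path
result = llm("Q: Name the planets in the solar system. A:", max_tokens=48)
print(result["choices"][0]["text"])
```

The same model path can then be passed to LangChain's `LlamaCpp` wrapper (`from langchain.llms import LlamaCpp`), which is what the rest of this notebook uses.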