# Python GPT4All
This package contains a set of Python bindings around the `llmodel` C-API.
Package on PyPI: https://pypi.org/project/gpt4all/
## Documentation
https://docs.gpt4all.io/gpt4all_python.html
## Installation
```
pip install gpt4all
```
## Local Build Instructions
### Prerequisites
On Windows and Linux, building GPT4All requires the complete Vulkan SDK, which you can download from https://vulkan.lunarg.com/sdk/home
macOS users do not need Vulkan, as GPT4All will use Metal instead.
### Building the Python bindings
**NOTE**: If you are doing this on a Windows machine, you must build the GPT4All backend using the [MinGW64](https://www.mingw-w64.org/) compiler.
1. Setup `llmodel`
```
git clone --recurse-submodules https://github.com/nomic-ai/gpt4all.git
cd gpt4all/gpt4all-backend/
mkdir build
cd build
cmake ..
cmake --build . --parallel # optionally append: --config Release
```
Confirm that `libllmodel.*` exists in `gpt4all-backend/build`.
2. Setup Python package
```
cd ../../gpt4all-bindings/python
pip3 install -e .
```
## Usage
Test it out! In a Python script or console:
```python
from gpt4all import GPT4All
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
output = model.generate("The capital of France is ", max_tokens=3)
print(output)
```
### GPU Usage
```python
from gpt4all import GPT4All
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf", device='gpu')  # also accepts device='amd' or device='intel'
output = model.generate("The capital of France is ", max_tokens=3)
print(output)
```
## Troubleshooting a Local Build
- If you're on Windows and have compiled with a MinGW toolchain, you might run into an error like:
```
FileNotFoundError: Could not find module '<...>\gpt4all-bindings\python\gpt4all\llmodel_DO_NOT_MODIFY\build\libllmodel.dll'
(or one of its dependencies). Try using the full path with constructor syntax.
```
The key phrase in this case is _"or one of its dependencies"_. The Python interpreter you're using
probably doesn't see the MinGW runtime dependencies. At the moment, the following three are required:
`libgcc_s_seh-1.dll`, `libstdc++-6.dll` and `libwinpthread-1.dll`. You should copy them from MinGW
into a folder where Python will see them, preferably next to `libllmodel.dll`.
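On Python 3.8+, an alternative to copying the DLLs is to extend the interpreter's DLL search path at runtime before importing the bindings. A minimal sketch, assuming a MinGW installation at `C:\mingw64\bin` (a placeholder; point it at your actual MinGW `bin` directory):

```python
import os

# Placeholder path -- adjust to your actual MinGW "bin" directory.
MINGW_BIN = r"C:\mingw64\bin"

def add_dll_search_path(path):
    """On Windows (Python 3.8+), add `path` to the DLL search path.

    Returns True if the directory was added, False if os.add_dll_directory
    is unavailable (non-Windows) or the directory does not exist.
    """
    if hasattr(os, "add_dll_directory") and os.path.isdir(path):
        os.add_dll_directory(path)
        return True
    return False

# Call this before importing gpt4all so the loader can resolve
# libgcc_s_seh-1.dll, libstdc++-6.dll and libwinpthread-1.dll:
add_dll_search_path(MINGW_BIN)
# from gpt4all import GPT4All
```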
- Note regarding the Microsoft toolchain: Compiling with MSVC is possible, but not the official way to
go about it at the moment. MSVC doesn't produce DLLs with a `lib` prefix, which the bindings expect.
You'd have to amend that yourself.
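If you do experiment with MSVC anyway, one possible amendment (purely a sketch, not an officially supported workflow) is to copy each built DLL to the `lib`-prefixed name the bindings look for:

```python
import shutil
from pathlib import Path

def add_lib_prefix(build_dir):
    """Copy every .dll in build_dir lacking the 'lib' prefix to a
    'lib'-prefixed sibling (e.g. llmodel.dll -> libllmodel.dll)."""
    copied = []
    # Materialize the listing first so newly created copies aren't re-visited.
    for dll in list(Path(build_dir).glob("*.dll")):
        if not dll.name.startswith("lib"):
            target = dll.with_name("lib" + dll.name)
            shutil.copy2(dll, target)
            copied.append(target.name)
    return sorted(copied)
```

Run it against your MSVC build output directory after each rebuild, since the copies will not track the originals automatically.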