Python GPT4All
This package contains a set of Python bindings around the llmodel
C-API.
Package on PyPI: https://pypi.org/project/gpt4all/
Documentation
https://docs.gpt4all.io/gpt4all_python.html
Installation
```
pip install gpt4all
```
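To confirm the install is visible to your interpreter, a minimal import check should run without errors; it relies only on the GPT4All class shown in the Usage section below:

```python
# Minimal post-install sanity check: the import should succeed without errors.
from gpt4all import GPT4All

print("gpt4all bindings imported:", GPT4All.__name__)
```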
Local Build Instructions
NOTE: If you are doing this on a Windows machine, you must build the GPT4All backend using the MinGW64 compiler.
- Setup llmodel:

```
git clone --recurse-submodules https://github.com/nomic-ai/gpt4all
cd gpt4all/gpt4all-backend/
mkdir build
cd build
cmake ..
cmake --build . --parallel
```
Confirm that libllmodel.* exists in gpt4all-backend/build.
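If you would rather check from Python than by eye, a small sketch like the one below can search the build directory for the library; the relative path is an assumption based on the layout above, so adjust it to match your checkout:

```python
from pathlib import Path

# Assumed location of the backend build directory, relative to the repository root.
build_dir = Path("gpt4all-backend/build")

# The compiled library extension varies by platform (.so, .dylib, .dll), so glob for libllmodel.*
matches = sorted(build_dir.glob("**/libllmodel.*"))
if matches:
    for path in matches:
        print("Found backend library:", path)
else:
    print("libllmodel.* not found - check the cmake output above")
```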
- Setup the Python package:

```
cd ../../gpt4all-bindings/python
pip3 install -e .
```
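As a quick sanity check that the editable install resolves to your local checkout rather than a copy from PyPI, you can print the package's import location; this relies only on the standard __file__ attribute, not on any gpt4all-specific API:

```python
import gpt4all

# For an editable install, this path should point into gpt4all-bindings/python.
print(gpt4all.__file__)
```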
Usage
Test it out! In a Python script or console:
```python
from gpt4all import GPT4All

# Load the GPT4All-J model and ask for a chat completion.
gptj = GPT4All("ggml-gpt4all-j-v1.3-groovy")
messages = [{"role": "user", "content": "Name 3 colors"}]
gptj.chat_completion(messages)
```
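Since each message is just a dict with a role and content, a longer conversation is simply a longer list. The sketch below reuses the same chat_completion call; the assistant turn in the history is an assumption about how you might carry context forward, not an additional API:

```python
from gpt4all import GPT4All

gptj = GPT4All("ggml-gpt4all-j-v1.3-groovy")

# A short multi-turn history: each entry is a dict with "role" and "content".
messages = [
    {"role": "user", "content": "Name 3 colors"},
    {"role": "assistant", "content": "Red, green, and blue."},
    {"role": "user", "content": "Pick one of those colors and explain why you like it."},
]
gptj.chat_completion(messages)
```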