gpt4all-chat

Cross-platform Qt-based GUI for GPT4All, with GPT-J as the base model. NOTE: the model shown in the screenshot is actually the original GPT-J model.

[Screenshot of the gpt4all-chat GUI]

Features

  • Cross-platform (Linux, Windows, macOS, iOS, Android, Embedded Linux, QNX)
  • Fast CPU-based inference using ggml for GPT-J based models
  • A UI with the look and feel you've come to expect from a chat-based GPT
  • Easy to install: the plan is to provide precompiled binaries for major platforms with a simple installer that includes the model

Building

git clone --recurse-submodules https://github.com/manyoso/gpt4all-chat.git
cd gpt4all-chat
mkdir build
cd build
cmake ..
cmake --build . --parallel
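
If CMake cannot find Qt automatically, you may need to point it at your Qt installation via CMAKE_PREFIX_PATH; the path below is only an example and will differ on your system:

cmake -DCMAKE_PREFIX_PATH=/path/to/Qt/6.4.2/gcc_64 ..
cmake --build . --parallel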

Running

  • Place the appropriate model file, named ggml-model-q4_0.bin, in the same directory as the executable produced above and start the executable, for example as shown below
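
A minimal sketch, assuming the build produced an executable named chat in the build directory (the actual binary name may differ) and that you already have a quantized GPT-J model file on disk:

cp /path/to/ggml-model-q4_0.bin .
./chat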

Contributing

  • Pull requests welcome :)