Commit Graph

343 Commits (dfe85386b5a2e38ae438c5884b97a82dde77d341)

Author SHA1 Message Date
Adam Treat 993a43d33a Minor cleanup. 1 year ago
Adam Treat cca2a88e47 Getting ready for next update. 1 year ago
Adam Treat bec8072fe1 Fix logic. 1 year ago
eachadea 116f740fb5 Don't build test_hw on apple silicon 1 year ago
Adam Treat 3e7cf346d6 Restore basic functionality. 1 year ago
Adam Treat 670bbe4db5 Make the settings dialog persist across sessions. 1 year ago
Adam Treat 294f2d6041 Revamp hardware tester to print to stdout the result in single word. 1 year ago
Adam Treat e4d75cbfcd Remove this as clang does not support. 1 year ago
AT 6f1fe51087 Update README.md 1 year ago
Adam Treat 14831cd1c0 Add a small program that tests hardware. 1 year ago
AT 2dc26cfd09 Update README.md 1 year ago
Adam Treat 4d26f5daeb Silence a warning now that we're forked. 1 year ago
Adam Treat 442ca09b32 Remove ggml submodule in favor of llama.cpp 1 year ago
Adam Treat bb78ee0025 Back out the prompt/response finding in gptj since it doesn't seem to help.
Guard against reaching the end of the context window which we don't handle
gracefully except for avoiding a crash.
1 year ago
Tom Jobbins 154f35ce53 Update HTTP link to model to point to the latest Jazzy model (in the CLI-only build section) (#78) 1 year ago
Adam Treat 65abaa19e5 Fix warning and update llama.cpp submodule to latest. 1 year ago
Adam Treat 51768bfbda Use default params unless we override them. 1 year ago
Adam Treat b15feb5a4c Crop the filename. 1 year ago
Adam Treat 5a00c83139 Display filesize info in the model downloader. 1 year ago
Adam Treat cd5f525950 Add multi-line prompt support. 1 year ago
Adam Treat 4c970fdc9c Pin the llama.cpp to a slightly older version. 1 year ago
Adam Treat 43e6d05d21 Don't crash starting with no model. 1 year ago
Adam Treat d336db9fe9 Don't use versions for model downloader. 1 year ago
eachadea b09ca009c5 Don't build a universal binary
unless -DBUILD_UNIVERSAL=ON
1 year ago
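The eachadea commit above (b09ca009c5) gates universal (multi-architecture) macOS binaries behind an explicit opt-in. A minimal sketch of what configuring with and without that option could look like, assuming BUILD_UNIVERSAL is exposed as an ordinary CMake cache variable as the commit message suggests; the rest of the invocation is illustrative:

```sh
# Default configure: build for the host architecture only (assumed default).
cmake -B build -S .

# Opt in to a universal (x86_64 + arm64) macOS binary; the flag name comes
# from the commit message, everything else here is an assumption.
cmake -B build -S . -DBUILD_UNIVERSAL=ON
cmake --build build
```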
Adam Treat 55084333a9 Add llama.cpp support for loading llama based models in the gui. We now
support loading both gptj derived models and llama derived models.
1 year ago
Aaron Miller f1b87d0b56 Add thread count setting 1 year ago
Adam Treat e6cb6a2ae3 Add a new model download feature. 1 year ago
Adam Treat 1eda8f030e Allow unloading/loading/changing of models. 1 year ago
Aaron Miller 3a82a1d96c remove fill color for prompt template box 1 year ago
Adam Treat a842f6c33f Fix link color to have consistency across platforms. 1 year ago
Adam Treat 0928c01ddb Make the gui accessible. 1 year ago
Pavol Rusnak 0e599e6b8a readme: GPL -> MIT license 1 year ago
Adam Treat ef711b305b Changing to MIT license. 1 year ago
Adam Treat bbf838354e Don't add version number to the installer or the install location. 1 year ago
Adam Treat 9f4e3cb7f4 Bump the version for the context bug fix. 1 year ago
Adam Treat 15ae0a4441 Fix the context. 1 year ago
Adam Treat 801107a12c Set a new default temp that is more conservative. 1 year ago
AT ea7179e2e8 Update README.md 1 year ago
Adam Treat 7dbf81ed8f Update submodule. 1 year ago
Adam Treat 42fb215f61 Bump version to 2.1 as this has been referred to far and wide as
GPT4All v2 so doing this to decrease confusion. Also, making the version
number visible in the title bar.
1 year ago
Adam Treat 1dcd4dce58 Update the bundled model name. 1 year ago
Adam Treat 7ea548736b New version. 1 year ago
Adam Treat 659ab13665 Don't allow empty prompts. Context past always equal or greater than zero. 1 year ago
Adam Treat 7e9ca06366 Trim trailing whitespace at the end of generation. 1 year ago
Adam Treat fdf7f20d90 Remove newlines too. 1 year ago
Adam Treat f8b962d50a More conservative default params and trim leading whitespace from response. 1 year ago
TheBloke 7215b9f3fb Change the example CLI prompt to something more appropriate, as this is not a Llama model! :) 1 year ago
TheBloke 16f6b04a47 Fix repo name 1 year ago
TheBloke 67fcfeea8b Update README to include instructions for building CLI only
Users may want to play around with gpt4all-j from the command line. But they may not have Qt, and might not want to get it, or may find it very hard to do so - eg when using a Google Colab or similar hosted service.

It's easy to build the CLI tools just by building the `ggml` sub folder.  So this commit adds instructions on doing that, including an example invocation of the `gpt-j` binary.
1 year ago
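TheBloke's commit above (67fcfeea8b) documents a CLI-only build done by building just the `ggml` sub folder, plus an example invocation of the `gpt-j` binary. A rough sketch of what those steps might look like; the directory layout, output path, model filename, and prompt are assumptions based on the commit message and typical ggml-style builds, not verified against that revision:

```sh
# Build only the ggml sub folder (CLI tools, no Qt required) -- assumed layout.
cd ggml
mkdir -p build && cd build
cmake ..
make -j

# Hypothetical invocation of the gpt-j binary; model path and prompt are illustrative.
./bin/gpt-j -m ../../models/ggml-gpt4all-j.bin -p "Tell me about the history of the internet."
```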
TheBloke 605b3d18ad Update .gitignore to ignore a local `build` directory. 1 year ago