Commit Graph

184 Commits

Author SHA1 Message Date
Adam Treat
14831cd1c0 Add a small program that tests hardware. 2023-04-20 19:34:56 -04:00
AT
2dc26cfd09 Update README.md 2023-04-20 18:56:38 -04:00
Adam Treat
4d26f5daeb Silence a warning now that we're forked. 2023-04-20 17:27:06 -04:00
Adam Treat
442ca09b32 Remove ggml submodule in favor of llama.cpp 2023-04-20 17:20:44 -04:00
Adam Treat
bb78ee0025 Back out the prompt/response finding in gptj since it doesn't seem to help.
Guard against reaching the end of the context window, which we don't handle
gracefully beyond avoiding a crash.
2023-04-20 17:15:46 -04:00
Tom Jobbins
154f35ce53 Update HTTP link to model to point to the latest Jazzy model (in the CLI-only build section) (#78) 2023-04-20 14:15:07 -04:00
Adam Treat
65abaa19e5 Fix warning and update llama.cpp submodule to latest. 2023-04-20 13:27:11 -04:00
Adam Treat
51768bfbda Use default params unless we override them. 2023-04-20 12:07:43 -04:00
Adam Treat
b15feb5a4c Crop the filename. 2023-04-20 10:54:42 -04:00
Adam Treat
5a00c83139 Display filesize info in the model downloader. 2023-04-20 09:32:51 -04:00
Adam Treat
cd5f525950 Add multi-line prompt support. 2023-04-20 08:31:33 -04:00
Adam Treat
4c970fdc9c Pin the llama.cpp to a slightly older version. 2023-04-20 07:34:15 -04:00
Adam Treat
43e6d05d21 Don't crash starting with no model. 2023-04-20 07:17:07 -04:00
Adam Treat
d336db9fe9 Don't use versions for model downloader. 2023-04-20 06:48:13 -04:00
eachadea
b09ca009c5 Don't build a universal binary
unless -DBUILD_UNIVERSAL=ON
2023-04-20 06:37:54 -04:00
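
The `-DBUILD_UNIVERSAL` gate described above would typically be declared in CMake along these lines (a hypothetical sketch, not the project's actual CMakeLists.txt; only the option name comes from the commit, everything else is assumed):

```cmake
# Hypothetical sketch: gate a macOS universal (x86_64 + arm64) build behind an option
option(BUILD_UNIVERSAL "Build a universal macOS binary" OFF)
if(APPLE AND BUILD_UNIVERSAL)
  # Produce a fat binary containing both Intel and Apple Silicon slices
  set(CMAKE_OSX_ARCHITECTURES "x86_64;arm64" CACHE STRING "" FORCE)
endif()
```

With a change like this, a plain `cmake ..` builds for the host architecture only, while `cmake -DBUILD_UNIVERSAL=ON ..` opts back into the universal build.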
Adam Treat
55084333a9 Add llama.cpp support for loading llama-based models in the GUI. We now
support loading both gptj-derived and llama-derived models.
2023-04-20 06:19:09 -04:00
Aaron Miller
f1b87d0b56 Add thread count setting 2023-04-19 08:33:13 -04:00
Adam Treat
e6cb6a2ae3 Add a new model download feature. 2023-04-18 21:10:06 -04:00
Adam Treat
1eda8f030e Allow unloading/loading/changing of models. 2023-04-18 11:42:38 -04:00
Aaron Miller
3a82a1d96c remove fill color for prompt template box 2023-04-18 08:47:37 -04:00
Adam Treat
a842f6c33f Fix link color to have consistency across platforms. 2023-04-18 08:45:21 -04:00
Adam Treat
0928c01ddb Make the gui accessible. 2023-04-18 08:40:04 -04:00
Pavol Rusnak
0e599e6b8a readme: GPL -> MIT license 2023-04-17 16:45:29 -04:00
Adam Treat
ef711b305b Changing to MIT license. 2023-04-17 16:37:50 -04:00
Adam Treat
bbf838354e Don't add version number to the installer or the install location. 2023-04-17 15:59:14 -04:00
Adam Treat
9f4e3cb7f4 Bump the version for the context bug fix. 2023-04-17 15:37:24 -04:00
Adam Treat
15ae0a4441 Fix the context. 2023-04-17 14:11:41 -04:00
Adam Treat
801107a12c Set a new default temp that is more conservative. 2023-04-17 09:49:59 -04:00
AT
ea7179e2e8 Update README.md 2023-04-17 09:02:26 -04:00
Adam Treat
7dbf81ed8f Update submodule. 2023-04-17 08:04:40 -04:00
Adam Treat
42fb215f61 Bump version to 2.1, since this has been referred to far and wide as
GPT4All v2; doing this should decrease confusion. Also, make the version
number visible in the title bar.
2023-04-17 07:50:39 -04:00
Adam Treat
1dcd4dce58 Update the bundled model name. 2023-04-16 22:10:26 -04:00
Adam Treat
7ea548736b New version. 2023-04-16 19:20:43 -04:00
Adam Treat
659ab13665 Don't allow empty prompts. Keep the context past value greater than or equal to zero. 2023-04-16 14:57:58 -04:00
Adam Treat
7e9ca06366 Trim trailing whitespace at the end of generation. 2023-04-16 14:19:59 -04:00
Adam Treat
fdf7f20d90 Remove newlines too. 2023-04-16 14:04:25 -04:00
Adam Treat
f8b962d50a More conservative default params and trim leading whitespace from response. 2023-04-16 13:56:56 -04:00
TheBloke
7215b9f3fb Change the example CLI prompt to something more appropriate, as this is not a Llama model! :) 2023-04-16 12:52:23 -04:00
TheBloke
16f6b04a47 Fix repo name 2023-04-16 12:52:23 -04:00
TheBloke
67fcfeea8b Update README to include instructions for building CLI only
Users may want to play around with gpt4all-j from the command line. But they may not have Qt, might not want to get it, or may find it very hard to do so, e.g. when using Google Colab or a similar hosted service.

It's easy to build the CLI tools just by building the `ggml` subfolder. So this commit adds instructions for doing that, including an example invocation of the `gpt-j` binary.
2023-04-16 12:52:23 -04:00
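
The CLI-only build this commit documents can be sketched as a short shell session (a hypothetical sketch: the repository URL, build directory, binary path, and model filename are assumptions, not taken from the actual README):

```shell
# Clone the repo with its submodules, then build only the ggml subfolder
git clone --recurse-submodules https://github.com/nomic-ai/gpt4all-chat.git
cd gpt4all-chat/ggml
mkdir -p build && cd build
cmake ..
cmake --build . --parallel

# Example invocation of the resulting gpt-j binary (paths and model file assumed)
./bin/gpt-j -m ./ggml-gpt4all-j.bin -p "Write a haiku about the command line."
```

This avoids the Qt dependency entirely, which is the point of the commit: the GUI needs Qt, but the `ggml` CLI tools do not.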
TheBloke
605b3d18ad Update .gitignore to ignore a local build directory. 2023-04-16 12:52:23 -04:00
TheBloke
0abea1db35 Update git clone command in README to point to main nomic repo
I'm not sure if it was intentional that the build instructions tell the user to clone `manyoso/gpt4all-chat.git`?

But I would think this should be cloning the main repo at `nomic-ai/gpt4all-chat` instead. Otherwise users following this command might get changes not yet merged into the main repo, which could be confusing.
2023-04-16 12:52:23 -04:00
AT
a29420cbc8 Update README.md 2023-04-16 11:53:02 -04:00
Adam Treat
71ff6bc6f4 Rearrange the buttons and provide a message explaining what the copy button does. 2023-04-16 11:44:55 -04:00
Adam Treat
185dc2460e Check for ###Prompt: or ###Response and stop generating, and modify the default template a little bit. 2023-04-16 11:25:48 -04:00
Aaron Miller
d4767478fc add tooltips to settings dialog 2023-04-16 11:16:30 -04:00
Aaron Miller
421a3ed8e7 add "restore defaults" button 2023-04-16 11:16:30 -04:00
Aaron Miller
cb6d2128d3 use the settings dialog settings when generating 2023-04-16 11:16:30 -04:00
Aaron Miller
17c3fa820b add settings dialog 2023-04-16 11:16:30 -04:00
Aaron Miller
be0375e32d add settings icon 2023-04-16 11:16:30 -04:00