AT
01ab758df9
Update README.md
2023-04-20 18:56:38 -04:00
Adam Treat
ad210589d3
Silence a warning now that we're forked.
2023-04-20 17:27:06 -04:00
Adam Treat
f225b238cb
Remove ggml submodule in favor of llama.cpp
2023-04-20 17:20:44 -04:00
Adam Treat
bfee4994f3
Back out the prompt/response finding in gptj since it doesn't seem to help.
Guard against reaching the end of the context window, which we don't handle gracefully beyond avoiding a crash.
2023-04-20 17:15:46 -04:00
Tom Jobbins
26b1402b7c
Update HTTP link to model to point to the latest Jazzy model (in the CLI-only build section) (#78)
2023-04-20 14:15:07 -04:00
Adam Treat
4e06ed4f0a
Fix warning and update llama.cpp submodule to latest.
2023-04-20 13:27:11 -04:00
Adam Treat
d7f1214bf1
Use default params unless we override them.
2023-04-20 12:07:43 -04:00
Adam Treat
2ef8857393
Crop the filename.
2023-04-20 10:54:42 -04:00
Adam Treat
33beec0cdd
Display filesize info in the model downloader.
2023-04-20 09:32:51 -04:00
Adam Treat
9c85a2ceb2
Add multi-line prompt support.
2023-04-20 08:31:33 -04:00
Adam Treat
963ef4b617
Pin the llama.cpp to a slightly older version.
2023-04-20 07:34:15 -04:00
Adam Treat
795715fb59
Don't crash when starting with no model.
2023-04-20 07:17:07 -04:00
Adam Treat
338b5ca703
Don't use versions for model downloader.
2023-04-20 06:48:13 -04:00
eachadea
b36e235112
Don't build a universal binary unless -DBUILD_UNIVERSAL=ON
2023-04-20 06:37:54 -04:00
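The universal-binary switch above would be toggled at CMake configure time; a minimal sketch, assuming the project's top-level CMakeLists.txt exposes the `BUILD_UNIVERSAL` option exactly as named in the commit (the architecture pair is an assumption about macOS fat binaries, not stated in the log):

```shell
# Default configure: build only for the host architecture.
cmake -B build -S .

# Opt in to a universal (e.g. x86_64 + arm64) macOS binary via the
# BUILD_UNIVERSAL option named in the commit above.
cmake -B build -S . -DBUILD_UNIVERSAL=ON
cmake --build build --parallel
```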
Adam Treat
71b308e914
Add llama.cpp support for loading llama-based models in the GUI. We now support loading both gptj-derived and llama-derived models.
2023-04-20 06:19:09 -04:00
Zach Nussbaum
43a49e9590
add model + data revisions
2023-04-19 11:35:08 -07:00
Aaron Miller
00cb5fe2a5
Add thread count setting
2023-04-19 08:33:13 -04:00
Adam Treat
169afbdc80
Add a new model download feature.
2023-04-18 21:10:06 -04:00
Adam Treat
2b1cae5a7e
Allow unloading/loading/changing of models.
2023-04-18 11:42:38 -04:00
Aaron Miller
8a4f7897f4
remove fill color for prompt template box
2023-04-18 08:47:37 -04:00
Adam Treat
708bc1bbe6
Fix link color to have consistency across platforms.
2023-04-18 08:45:21 -04:00
Adam Treat
e540edcb1e
Make the gui accessible.
2023-04-18 08:40:04 -04:00
AT
ca6e2b19bb
Update README.md
2023-04-17 18:14:42 -04:00
Pavol Rusnak
da0912d531
readme: GPL -> MIT license
2023-04-17 16:45:29 -04:00
Adam Treat
99d4a7f573
Changing to MIT license.
2023-04-17 16:37:50 -04:00
Adam Treat
74757d4d1b
Don't add version number to the installer or the install location.
2023-04-17 15:59:14 -04:00
Adam Treat
cf27c3b1a7
Bump the version for the context bug fix.
2023-04-17 15:37:24 -04:00
Adam Treat
f73fbf28a4
Fix the context.
2023-04-17 14:11:41 -04:00
Adam Treat
b0ce635338
Set a new default temp that is more conservative.
2023-04-17 09:49:59 -04:00
AT
0ad1e21f78
Update README.md
2023-04-17 09:02:26 -04:00
Adam Treat
90e1eb0c37
Update submodule.
2023-04-17 08:04:40 -04:00
Adam Treat
29990fd27b
Bump version to 2.1, since this release has been referred to far and wide as GPT4All v2; doing this to decrease confusion. Also make the version number visible in the title bar.
2023-04-17 07:50:39 -04:00
Adam Treat
c924acffa7
Update the bundled model name.
2023-04-16 22:10:26 -04:00
Adam Treat
cbf44d61b0
New version.
2023-04-16 19:20:43 -04:00
Adam Treat
a7c2d65824
Don't allow empty prompts. Ensure context past is always greater than or equal to zero.
2023-04-16 14:57:58 -04:00
Adam Treat
4bf4b2a080
Trim trailing whitespace at the end of generation.
2023-04-16 14:19:59 -04:00
Adam Treat
9381a69b2b
Remove newlines too.
2023-04-16 14:04:25 -04:00
Adam Treat
b39acea516
More conservative default params and trim leading whitespace from response.
2023-04-16 13:56:56 -04:00
TheBloke
f63a2df715
Change the example CLI prompt to something more appropriate, as this is not a Llama model! :)
2023-04-16 12:52:23 -04:00
TheBloke
a03bac57bf
Fix repo name
2023-04-16 12:52:23 -04:00
TheBloke
032bc11d36
Update README to include instructions for building the CLI only
Users may want to play around with gpt4all-j from the command line, but they may not have Qt, might not want to install it, or may find it very hard to do so, e.g. when using Google Colab or a similar hosted service.
It's easy to build the CLI tools just by building the `ggml` sub folder, so this commit adds instructions on doing that, including an example invocation of the `gpt-j` binary.
2023-04-16 12:52:23 -04:00
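The Qt-free build path described in that commit could look roughly like the following; this is a sketch assuming the `ggml` subfolder carries its own CMakeLists.txt and produces a `gpt-j` binary as the commit message says — the clone URL is the main repo named later in this log, and the binary path, model filename, and flags are illustrative assumptions, not verified invocations:

```shell
# Clone with submodules so the ggml sources are present.
git clone --recurse-submodules https://github.com/nomic-ai/gpt4all-chat
cd gpt4all-chat/ggml

# Build only the CLI tools; no Qt required.
cmake -B build -S .
cmake --build build --parallel

# Hypothetical invocation of the resulting gpt-j binary
# (model path and flag names are assumptions).
./build/bin/gpt-j -m ./models/ggml-gpt4all-j.bin -p "Hello"
```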
TheBloke
e85f5ee9b8
Update .gitignore to ignore a local build directory.
2023-04-16 12:52:23 -04:00
TheBloke
7c3f8c2168
Update git clone command in README to point to the main nomic repo
I'm not sure it was intentional that the build instructions tell the user to clone `manyoso/gpt4all-chat.git`, but I would think this should clone the main repo at `nomic-ai/gpt4all-chat` instead. Otherwise users following this command might get changes not yet merged into the main repo, which could be confusing.
2023-04-16 12:52:23 -04:00
AT
33e9f350e5
Update README.md
2023-04-16 11:53:02 -04:00
Adam Treat
ef789b354b
Rearrange the buttons and provide a message what the copy button does.
2023-04-16 11:44:55 -04:00
Adam Treat
f19abd6a18
Check for ###Prompt: or ###Response and stop generating; also modify the default template a little.
2023-04-16 11:25:48 -04:00
Aaron Miller
7ec47c659b
add tooltips to settings dialog
2023-04-16 11:16:30 -04:00
Aaron Miller
8204e7d047
add "restore defaults" button
2023-04-16 11:16:30 -04:00
Aaron Miller
5bfb3f8229
use the settings dialog settings when generating
2023-04-16 11:16:30 -04:00
Aaron Miller
098ab803a0
add settings dialog
2023-04-16 11:16:30 -04:00