Adam Treat
e0c9d7f8e0
Fail early/gracefully if incompatible hardware is detected, and default to universal builds on mac.
2023-05-08 08:23:00 -04:00
Adam Treat
280ad04c63
The GUI should come up immediately and not wait on deserializing from disk.
2023-05-06 20:01:14 -04:00
Adam Treat
01e582f15b
First attempt at providing a persistent chat list experience.
...
Limitations:
1) Context is not restored for gpt-j models
2) When you switch between different model types in an existing chat,
the context and all of the conversation are lost
3) The settings are not chat or conversation specific
4) The persisted chat files are very large due to how much data the
llama.cpp backend tries to persist. We need to investigate how we can
shrink this.
2023-05-04 15:31:41 -04:00
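The large persisted files in limitation 4 make the on-disk side worth sketching. Here is a minimal example of serializing a chat list with Qt's QDataStream; ChatEntry and saveChats are hypothetical names, not the app's actual serialization code:

```cpp
#include <QDataStream>
#include <QFile>
#include <QString>
#include <QVector>

// Hypothetical chat entry; the real app persists much more (including the
// llama.cpp backend's context state, which is what inflates file sizes).
struct ChatEntry {
    QString prompt;
    QString response;
};

bool saveChats(const QVector<ChatEntry> &chats, const QString &path)
{
    QFile file(path);
    if (!file.open(QIODevice::WriteOnly))
        return false;
    QDataStream out(&file);
    out << qint32(chats.size());
    for (const ChatEntry &c : chats)
        out << c.prompt << c.response;
    return true;
}
```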
Adam Treat
c3d923cdc7
Don't add new chats willy-nilly.
2023-05-02 07:53:09 -04:00
Adam Treat
d91dd567e2
Hot swapping of conversations. Destroys context for now.
2023-05-01 20:27:07 -04:00
Adam Treat
925ad70833
Turn the chat list into a model.
2023-05-01 17:13:20 -04:00
Adam Treat
463c1474dc
Provide convenience methods for adding/removing/changing chats.
2023-05-01 14:24:16 -04:00
Adam Treat
482f543675
Forward important signals from the LLM object so QML doesn't have to deal with which chat is current.
2023-05-01 12:41:03 -04:00
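A sketch of the forwarding pattern this commit describes: the LLM object re-emits the current chat's signals under stable names, so QML only ever binds to one object. Class and signal names here are assumptions, not the project's exact API.

```cpp
#include <QObject>

class LLM : public QObject {
    Q_OBJECT
signals:
    void responseChanged();

public:
    void setCurrentChat(QObject *chat) {
        // Re-emit the current chat's signal under a stable name so QML
        // never has to know which chat is active.
        connect(chat, SIGNAL(responseChanged()),
                this, SIGNAL(responseChanged()));
    }
};
```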
Adam Treat
414a12c33d
Major refactor in prep for multiple conversations.
2023-05-01 09:10:05 -04:00
Adam Treat
bbffa7364b
Add new C++ version of the chat model. Getting ready for chat history.
2023-04-30 20:28:43 -04:00
Adam Treat
037a9a6ec5
Remove these, as they are mitigated by the repeat penalty and models really should train this out.
2023-04-30 08:02:39 -04:00
Adam Treat
9b467f2dee
Use the universal separator.
2023-04-29 21:03:10 -04:00
Adam Treat
2a5b34b193
Load models from filepath only.
2023-04-28 20:15:10 -04:00
Adam Treat
c6c5e0bb4f
Always try to load the default model first. Groovy is the default default.
2023-04-27 13:52:29 -04:00
Adam Treat
a3253c4ab1
Move the saving of the tokens to the impl rather than making it the callback's responsibility.
2023-04-27 11:16:51 -04:00
Adam Treat
9a65f73392
Move the promptCallback to its own function.
2023-04-27 11:08:15 -04:00
Adam Treat
5c3c1317f8
Track the check for updates.
2023-04-27 07:41:23 -04:00
Adam Treat
eafb98b3a9
Initial support for opt-in telemetry.
2023-04-26 22:05:56 -04:00
Adam Treat
70e6b45123
Don't crash when prompt is too large.
2023-04-26 19:08:37 -04:00
Aaron Miller
aa20bafc91
New settings (model path, repeat penalty) with tabs
2023-04-25 16:24:55 -04:00
Adam Treat
b6937c39db
Infinite context window through trimming.
2023-04-25 11:20:51 -04:00
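A minimal sketch of what "infinite context through trimming" can mean in practice: when the fixed window fills, drop the oldest half of the accumulated tokens and re-evaluate the survivors. The names follow llama.cpp conventions (n_past, n_ctx), but the code is illustrative, not the backend's.

```cpp
#include <cstdint>
#include <vector>

// When the window is full, drop the oldest half of the accumulated tokens;
// the survivors are then re-evaluated into a fresh context.
void trimContext(std::vector<int32_t> &tokens, int &n_past, int n_ctx)
{
    if (n_past < n_ctx)
        return;                                // still room in the window
    const size_t keep = tokens.size() / 2;     // keep the most recent half
    tokens.erase(tokens.begin(), tokens.end() - keep);
    n_past = 0;                                // force re-evaluation of survivors
}
```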
Adam Treat
8b1ddabe3e
Implement repeat penalty for both llama and gptj in gui.
2023-04-25 08:38:29 -04:00
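A minimal sketch of a repeat penalty, assuming the common llama.cpp-style rule: divide positive logits (and multiply negative ones) for tokens that already appear in the recent window. This is illustrative, not the gui's exact code.

```cpp
#include <cstdint>
#include <vector>

void applyRepeatPenalty(std::vector<float> &logits,
                        const std::vector<int32_t> &recentTokens,
                        float penalty /* e.g. 1.1f */)
{
    // Real implementations typically deduplicate the window first so a
    // repeated token isn't penalized more than once per sampling step.
    for (int32_t tok : recentTokens) {
        float &l = logits[tok];
        l = (l > 0.0f) ? l / penalty : l * penalty;
    }
}
```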
Adam Treat
cd2e559db4
Don't crash right out of the installer ;)
2023-04-24 21:07:16 -04:00
Aaron Miller
6e92d93b53
Persistent threadcount setting
...
threadcount is now on the Settings object and
gets reapplied after a model switch
2023-04-24 18:05:08 -04:00
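A sketch of persisting the threadcount via QSettings; the key name and default are assumptions, not the app's actual schema, and the default-constructed QSettings assumes the organization/application names are already set.

```cpp
#include <QSettings>
#include <QThread>

int loadThreadCount()
{
    QSettings settings;
    // Fall back to roughly half the hardware threads if nothing is stored.
    const int fallback = qMax(1, QThread::idealThreadCount() / 2);
    return settings.value("threadCount", fallback).toInt();
}

void saveThreadCount(int count)
{
    QSettings settings;
    settings.setValue("threadCount", count);
}
```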
Adam Treat
e4b110639c
Add a fixme for dubious code.
2023-04-24 14:03:04 -04:00
Adam Treat
29685b3eab
Provide a non-privileged place for model downloads when the exe is installed to root.
2023-04-23 11:28:17 -04:00
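A sketch of picking a per-user, writable download directory when the install prefix itself is not writable; the exact location the app chose is an assumption here.

```cpp
#include <QDir>
#include <QStandardPaths>
#include <QString>

QString modelDownloadPath()
{
    // A per-user application data directory is writable even when the
    // executable lives under a root-owned prefix.
    const QString dir =
        QStandardPaths::writableLocation(QStandardPaths::AppLocalDataLocation);
    QDir().mkpath(dir); // ensure it exists
    return dir;
}
```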
Adam Treat
795715fb59
Don't crash when starting with no model.
2023-04-20 07:17:07 -04:00
Adam Treat
71b308e914
Add llama.cpp support for loading llama-based models in the gui. We now
...
support loading both gptj-derived models and llama-derived models.
2023-04-20 06:19:09 -04:00
Aaron Miller
00cb5fe2a5
Add a thread count setting
2023-04-19 08:33:13 -04:00
Adam Treat
169afbdc80
Add a new model download feature.
2023-04-18 21:10:06 -04:00
Adam Treat
2b1cae5a7e
Allow unloading/loading/changing of models.
2023-04-18 11:42:38 -04:00
Adam Treat
f73fbf28a4
Fix the context.
2023-04-17 14:11:41 -04:00
Adam Treat
a7c2d65824
Don't allow empty prompts. Keep the context past always greater than or equal to zero.
2023-04-16 14:57:58 -04:00
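The non-negative invariant is small enough to state in code; a sketch with a hypothetical helper name:

```cpp
#include <algorithm>

// Clamp so erasing context can never drive the past-token counter negative.
inline int erasePast(int n_past, int erased)
{
    return std::max(0, n_past - erased);
}
```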
Adam Treat
4bf4b2a080
Trim trailing whitespace at the end of generation.
2023-04-16 14:19:59 -04:00
Adam Treat
9381a69b2b
Remove newlines too.
2023-04-16 14:04:25 -04:00
Adam Treat
b39acea516
More conservative default params and trim leading whitespace from response.
2023-04-16 13:56:56 -04:00
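The whitespace commits above amount to trimming both ends of the response, newlines included; in Qt that is a single call. A sketch, not the app's exact cleanup path:

```cpp
#include <QString>

// QString::trimmed() strips leading and trailing whitespace, including newlines.
QString cleanResponse(const QString &raw)
{
    return raw.trimmed();
}
```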
Aaron Miller
5bfb3f8229
Use the settings from the settings dialog when generating
2023-04-16 11:16:30 -04:00
Adam Treat
a77946e745
Provide an instruct/chat template.
2023-04-15 16:33:37 -04:00
Aaron Miller
391904efae
Use completeBaseName to display model name
...
this cuts the filename at the *final* dot instead of the first, allowing
model names with version numbers to be displayed correctly.
2023-04-15 13:29:51 -04:00
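A sketch illustrating the fix: QFileInfo::baseName() cuts at the first dot, completeBaseName() at the final one, so versioned filenames survive. The filename is a hypothetical example.

```cpp
#include <QFileInfo>
#include <QString>

QString displayName(const QString &path)
{
    QFileInfo info(path);            // e.g. "gpt4all-v1.3.bin"
    return info.completeBaseName();  // "gpt4all-v1.3", not baseName()'s "gpt4all-v1"
}
```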
Adam Treat
078b755ab8
Erase the correct amount of logits when regenerating, which is not the same
...
as the number of tokens.
2023-04-15 09:19:54 -04:00
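One plausible reading of the distinction, sketched below: the backend stores logits per evaluated token, each row n_vocab wide, so the number of floats to erase is the token count times the vocabulary size. Hypothetical names, not the backend's API.

```cpp
#include <vector>

void eraseForRegenerate(std::vector<int> &tokens, std::vector<float> &logits,
                        size_t tokensToErase, size_t n_vocab)
{
    // Each evaluated token contributed one row of n_vocab logits, so the two
    // erase counts differ by a factor of n_vocab.
    tokens.erase(tokens.end() - tokensToErase, tokens.end());
    logits.erase(logits.end() - tokensToErase * n_vocab, logits.end());
}
```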
Adam Treat
b1bb9866ab
Fix crash with recent change to erase context.
2023-04-15 09:10:34 -04:00
Adam Treat
1c5dd6710d
When regenerating, erase the previous response and prompt from the context.
2023-04-15 09:10:27 -04:00
Adam Treat
a9eced2d1e
Add an abstraction around gpt-j that will allow models of other architectures to be loaded in the ui.
2023-04-13 22:15:40 -04:00
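A sketch of the shape such an abstraction typically takes: a small virtual interface the UI can hold, with gpt-j as the first concrete implementation. Method names are illustrative, not the project's exact API.

```cpp
#include <functional>
#include <string>

class Model {
public:
    virtual ~Model() = default;
    virtual bool loadModel(const std::string &path) = 0;
    virtual bool isModelLoaded() const = 0;
    // The callback receives each response fragment; returning false stops
    // generation.
    virtual void prompt(const std::string &text,
                        std::function<bool(const std::string &)> onResponse) = 0;
};

class GPTJ : public Model { /* gpt-j implementation of the interface */ };
```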
Adam Treat
661191ce12
Fix the check for updates on mac.
2023-04-12 17:57:02 -04:00
Adam Treat
a06fd8a487
Provide a busy indicator if we're processing a long prompt and make the
...
stop button work in the middle of processing a long prompt as well.
2023-04-12 15:31:32 -04:00
Adam Treat
1e13f8648c
Fix the name of the updates tool.
2023-04-11 12:16:04 -04:00
Adam Treat
01dee6f20d
Programmatically get the model name from the LLM. The LLM now searches
...
for applicable models in the directory of the executable given a pattern
match and then loads the first one it finds.
Also, add a busy indicator for model loading.
2023-04-11 08:29:55 -04:00
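A sketch of the search the commit describes, using Qt's directory listing; the glob pattern is an assumption.

```cpp
#include <QCoreApplication>
#include <QDir>
#include <QString>

QString findFirstModel()
{
    // Scan the executable's directory for files matching a model pattern
    // and take the first hit, in name order.
    QDir dir(QCoreApplication::applicationDirPath());
    const QStringList matches =
        dir.entryList({QStringLiteral("ggml-*.bin")}, QDir::Files, QDir::Name);
    return matches.isEmpty() ? QString() : dir.filePath(matches.first());
}
```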
Adam Treat
f1bbe97a5c
Big updates to the UI.
2023-04-10 23:34:34 -04:00
Adam Treat
c62ebdb81c
Add a reset context feature to clear the chat history and the context for now.
2023-04-10 17:13:22 -04:00
Adam Treat
b088929df4
Fixes for Linux and macOS.
2023-04-10 16:33:14 -04:00