Commit Graph

28 Commits

Author SHA1 Message Date
Adam Treat
f291853e51 First attempt at providing a persistent chat list experience.
Limitations:

1) Context is not restored for gpt-j models
2) When you switch between different model types in an existing chat,
   the context and the entire conversation are lost
3) The settings are not chat- or conversation-specific
4) The sizes of the persisted chat files are very large due to how much
   data the llama.cpp backend tries to persist. Need to investigate how
   we can shrink this (see the sketch below).
2023-05-04 15:31:41 -04:00
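The large files in limitation 4 come from serializing the entire model context. A minimal sketch of what such persistence might look like, assuming llama.cpp's `llama_get_state_size` / `llama_copy_state_data` API (the actual commit may persist differently); the full state includes the KV cache, which dominates the file size:

```cpp
#include <llama.h>   // assumed llama.cpp header
#include <cstdint>
#include <cstdio>
#include <vector>

// Serialize the whole llama.cpp context (KV cache, logits, RNG state) to
// disk; this buffer is what makes the persisted chat files so large.
bool saveChatState(llama_context *ctx, const char *path) {
    std::vector<uint8_t> state(llama_get_state_size(ctx));
    llama_copy_state_data(ctx, state.data());
    std::FILE *f = std::fopen(path, "wb");
    if (!f)
        return false;
    bool ok = std::fwrite(state.data(), 1, state.size(), f) == state.size();
    std::fclose(f);
    return ok;
}
```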
Adam Treat
412cad99f2 Hot-swapping of conversations. Destroys context for now. 2023-05-01 20:27:07 -04:00
Adam Treat
a48226613c Turn the chat list into a model. 2023-05-01 17:13:20 -04:00
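Turning the chat list into a model lets QML views bind to it directly. A sketch following Qt's list-model conventions, with a hypothetical `Chat` record and role name that may not match the project's:

```cpp
#include <QAbstractListModel>
#include <QList>
#include <QString>

struct Chat { QString name; };   // hypothetical chat record

class ChatListModel : public QAbstractListModel {
    Q_OBJECT
public:
    enum Roles { NameRole = Qt::UserRole + 1 };

    int rowCount(const QModelIndex &parent = QModelIndex()) const override {
        return parent.isValid() ? 0 : int(m_chats.count());
    }
    QVariant data(const QModelIndex &index, int role) const override {
        if (!index.isValid() || index.row() >= m_chats.count())
            return QVariant();
        return role == NameRole ? QVariant(m_chats.at(index.row()).name)
                                : QVariant();
    }
    QHash<int, QByteArray> roleNames() const override {
        return { { NameRole, "name" } };
    }
    void addChat(const Chat &chat) {
        // Wrap mutations in begin/endInsertRows so bound views update.
        beginInsertRows(QModelIndex(), int(m_chats.count()), int(m_chats.count()));
        m_chats.append(chat);
        endInsertRows();
    }
private:
    QList<Chat> m_chats;
};
```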
Adam Treat
679b61ee07 Provide convenience methods for adding/removing/changing chat. 2023-05-01 14:24:16 -04:00
Adam Treat
6e6b96375d Handle the forwarding of important signals from the LLM object so QML doesn't have to deal with which chat is current. 2023-05-01 12:41:03 -04:00
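One common way to do this forwarding: the LLM object re-emits the current chat's signals, so QML binds to a single stable object. Class and signal names here are assumptions:

```cpp
#include <QObject>

// Hypothetical chat object emitting a signal the UI cares about.
class Chat : public QObject {
    Q_OBJECT
signals:
    void responseChanged();
};

class LLM : public QObject {
    Q_OBJECT
signals:
    void responseChanged();
public:
    void setCurrentChat(Chat *chat) {
        // Drop connections to the previous chat, then forward the new
        // chat's signal through this object (signal-to-signal connect).
        if (m_currentChat)
            disconnect(m_currentChat, nullptr, this, nullptr);
        m_currentChat = chat;
        connect(chat, &Chat::responseChanged, this, &LLM::responseChanged);
    }
private:
    Chat *m_currentChat = nullptr;
};
```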
Adam Treat
4d87c46948 Major refactor in prep for multiple conversations. 2023-05-01 09:10:05 -04:00
Adam Treat
d1e3198b65 Add new C++ version of the chat model. Getting ready for chat history. 2023-04-30 20:28:43 -04:00
Adam Treat
ba4b28fcd5 Move the promptCallback to its own function. 2023-04-27 11:08:15 -04:00
Adam Treat
ee5c58c26c Initial support for opt-in telemetry. 2023-04-26 22:05:56 -04:00
Adam Treat
3c9139b5d2 Move the backend code into its own subdirectory and make it a shared library. Begin fleshing out the C API wrapper that bindings can use. 2023-04-26 08:22:38 -04:00
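A hedged sketch of the usual shape of a C API wrapper over a C++ backend: an opaque handle plus plain-C entry points that bindings can link against. The `llmodel_*` names and stub `Backend` class are illustrative, not the project's actual exports:

```cpp
/* C ABI that language bindings can call without seeing any C++. */
#ifdef __cplusplus
extern "C" {
#endif

typedef void *llmodel_handle;   /* opaque pointer to a C++ object */

llmodel_handle llmodel_create(const char *model_path);
void llmodel_destroy(llmodel_handle h);

#ifdef __cplusplus
}
#endif

// C++ side of the sketch: a stand-in backend class for illustration.
struct Backend {
    bool load(const char *path) { return path != nullptr; }
};

extern "C" llmodel_handle llmodel_create(const char *model_path) {
    auto *m = new Backend;
    if (!m->load(model_path)) {
        delete m;
        return nullptr;
    }
    return m;   // the handle is just the object pointer, type-erased
}

extern "C" void llmodel_destroy(llmodel_handle h) {
    delete static_cast<Backend *>(h);
}
```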
Aaron Miller
15a979b327 New settings (model path, repeat penalty) with tabs 2023-04-25 16:24:55 -04:00
Adam Treat
cf8a4dd868 Infinite context window through trimming. 2023-04-25 11:20:51 -04:00
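Trimming gives the effect of an infinite window by discarding the oldest tokens once the context fills. A sketch assuming a flat token history and a protected prompt prefix (both are placeholders; the actual trimming policy may differ):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// When the history reaches the context size, drop the oldest half of the
// non-protected tokens. The tokens after nKeep must then be re-evaluated
// by the model so its internal state matches the trimmed history.
void trimContext(std::vector<int32_t> &tokens, std::size_t nCtx,
                 std::size_t nKeep) {
    if (tokens.size() < nCtx)
        return;
    std::size_t nDiscard = (tokens.size() - nKeep) / 2;
    tokens.erase(tokens.begin() + nKeep,
                 tokens.begin() + nKeep + nDiscard);
}
```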
Adam Treat
a79bc4233c Implement repeat penalty for both llama and gptj in the GUI. 2023-04-25 08:38:29 -04:00
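The standard repeat penalty scales down the logits of recently generated tokens before sampling. A sketch of the common formulation (divide positive logits, multiply negative ones, so repeats become less likely either way); the commit's exact details may differ:

```cpp
#include <cstdint>
#include <vector>

// logits: one float per vocabulary entry; lastTokens: the last N generated
// token ids, assumed to be valid indices into logits.
void applyRepeatPenalty(std::vector<float> &logits,
                        const std::vector<int32_t> &lastTokens,
                        float penalty) {
    for (int32_t tok : lastTokens) {
        float &l = logits[tok];
        l = l > 0.0f ? l / penalty : l * penalty;
    }
}
```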
Aaron Miller
29e3e04fcf Persistent thread count setting.
The thread count now lives on the Settings object and
gets reapplied after a model switch.
2023-04-24 18:05:08 -04:00
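A sketch of persisting a thread count with QSettings and a hardware-based default; the key name and default policy are assumptions:

```cpp
#include <QSettings>
#include <algorithm>
#include <thread>

// Read the persisted thread count, defaulting to half the hardware threads.
int loadThreadCount() {
    QSettings settings;
    unsigned def = std::max(1u, std::thread::hardware_concurrency() / 2);
    return settings.value("threadCount", static_cast<int>(def)).toInt();
}

// Store it so it survives restarts and can be reapplied after a model switch.
void saveThreadCount(int n) {
    QSettings settings;
    settings.setValue("threadCount", n);
}
```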
Adam Treat
7ce6b6ba89 Don't define this twice. 2023-04-24 07:59:42 -04:00
Adam Treat
55084333a9 Add llama.cpp support for loading llama-based models in the GUI. We now
support loading both gptj-derived and llama-derived models.
2023-04-20 06:19:09 -04:00
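Supporting two model families at load time typically means sniffing the file before choosing a backend. A sketch that dispatches on the leading ggml magic bytes, with stub classes standing in for the real implementations (built on the abstraction from commit 9de185488c below); the magic-to-architecture mapping is an assumption for illustration:

```cpp
#include <cstdint>
#include <cstdio>

struct LLModel { virtual ~LLModel() = default; };  // stub base class
struct GPTJ : LLModel {};                          // stub gptj backend
struct LLamaModel : LLModel {};                    // stub llama backend

LLModel *createModelFor(const char *path) {
    std::FILE *f = std::fopen(path, "rb");
    if (!f)
        return nullptr;
    uint32_t magic = 0;
    std::size_t read = std::fread(&magic, sizeof magic, 1, f);
    std::fclose(f);
    if (read != 1)
        return nullptr;
    if (magic == 0x67676d6c)   // 'ggml'
        return new GPTJ;
    if (magic == 0x67676a74)   // 'ggjt'
        return new LLamaModel;
    return nullptr;            // unrecognized file format
}
```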
Aaron Miller
f1b87d0b56 Add thread count setting 2023-04-19 08:33:13 -04:00
Adam Treat
1eda8f030e Allow unloading/loading/changing of models. 2023-04-18 11:42:38 -04:00
Adam Treat
15ae0a4441 Fix the context. 2023-04-17 14:11:41 -04:00
Aaron Miller
cb6d2128d3 Use the settings from the settings dialog when generating 2023-04-16 11:16:30 -04:00
Adam Treat
2f3a46c17f Erase the correct number of logits when regenerating, which is not the same
as the number of tokens.
2023-04-15 09:19:54 -04:00
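The distinction matters when logits are stored flat, one vocabulary-sized row per evaluated token. A sketch of the arithmetic, with hypothetical state:

```cpp
#include <cstddef>
#include <vector>

// Remove the last n tokens and their logits. Each evaluated token
// contributed nVocab floats, so n tokens means n * nVocab logits.
void eraseLastN(std::vector<int> &tokens, std::vector<float> &logits,
                std::size_t n, std::size_t nVocab) {
    tokens.resize(tokens.size() - n);
    logits.resize(logits.size() - n * nVocab);
}
```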
Adam Treat
f8005cff45 When regenerating, erase the previous response and prompt from the context. 2023-04-15 09:10:27 -04:00
Adam Treat
9de185488c Add an abstraction around gpt-j that will allow models of other architectures to be loaded in the UI. 2023-04-13 22:15:40 -04:00
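A sketch of what such an abstraction might look like: a pure-virtual interface the UI talks to, with gpt-j as the first concrete backend. The method names are assumptions, not the project's actual interface:

```cpp
#include <functional>
#include <string>

class LLModel {
public:
    virtual ~LLModel() = default;
    virtual bool loadModel(const std::string &path) = 0;
    virtual bool isModelLoaded() const = 0;
    // Stream the response back token by token; return false to stop.
    virtual void prompt(const std::string &prompt,
                        std::function<bool(const std::string &)> respond) = 0;
};

class GPTJ : public LLModel {
    // gpt-j specific overrides of loadModel/isModelLoaded/prompt go here;
    // later architectures implement the same interface.
};
```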
Adam Treat
0ea31487e3 Programmatically get the model name from the LLM. The LLM now searches
for applicable models in the directory of the executable given a pattern
match and then loads the first one it finds.

Also, add a busy indicator for model loading.
2023-04-11 08:29:55 -04:00
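A sketch of that directory scan using Qt, returning the first matching file; the glob pattern and function name are assumptions:

```cpp
#include <QCoreApplication>
#include <QDir>

// Look next to the executable for model files matching a pattern and
// return the first match, or an empty string if none is found.
QString findFirstModel() {
    QDir dir(QCoreApplication::applicationDirPath());
    const QStringList files =
        dir.entryList({ "ggml-*.bin" }, QDir::Files, QDir::Name);
    return files.isEmpty() ? QString() : dir.filePath(files.first());
}
```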
Adam Treat
a56a258099 Big updates to the UI. 2023-04-10 23:34:34 -04:00
Adam Treat
b1b7744241 Add a reset context feature to clear the chat history and the context for now. 2023-04-10 17:13:22 -04:00
Adam Treat
47d3fd1621 Comment out the list of chat features until it is ready. 2023-04-09 20:23:52 -04:00
Adam Treat
ff2fdecce1 Initial commit. 2023-04-08 23:28:39 -04:00