Commit Graph

28 Commits (56e9fd7e63d4d8f49f3270b2a46d5f09fc65f92d)

Author SHA1 Message Date
Adam Treat 01e582f15b First attempt at providing a persistent chat list experience.
Limitations:

1) Context is not restored for gpt-j models
2) When you switch between different model types in an existing chat,
   the context and the entire conversation are lost
3) The settings are not chat- or conversation-specific
4) The persisted chat files are very large due to how much data the
   llama.cpp backend tries to persist. Need to investigate how we can
   shrink this.
1 year ago
Adam Treat d91dd567e2 Hot swapping of conversations. Destroys context for now. 1 year ago
Adam Treat 925ad70833 Turn the chat list into a model. 1 year ago
Adam Treat 463c1474dc Provide convenience methods for adding/removing/changing chat. 1 year ago
Adam Treat 482f543675 Handle the forwarding of important signals from the LLM object so QML doesn't have to deal with which chat is current. 1 year ago
Adam Treat 414a12c33d Major refactor in prep for multiple conversations. 1 year ago
Adam Treat bbffa7364b Add new C++ version of the chat model. Getting ready for chat history. 1 year ago
Adam Treat 9a65f73392 Move the promptCallback to own function. 1 year ago
Adam Treat eafb98b3a9 Initial support for opt-in telemetry. 1 year ago
Adam Treat 4b47478626 Move the backend code into own subdirectory and make it a shared library. Begin fleshing out the C api wrapper that bindings can use. 1 year ago
Aaron Miller aa20bafc91 new settings (model path, repeat penalty) w/ tabs 1 year ago
Adam Treat b6937c39db Infinite context window through trimming. 1 year ago
Adam Treat 8b1ddabe3e Implement repeat penalty for both llama and gptj in the GUI. 1 year ago
Aaron Miller 6e92d93b53 Persistent thread count setting
The thread count is now on the Settings object and
gets reapplied after a model switch.
1 year ago
Adam Treat 4c5a772b12 Don't define this twice. 1 year ago
Adam Treat 71b308e914 Add llama.cpp support for loading llama-based models in the GUI. We now
support loading both gptj-derived models and llama-derived models.
1 year ago
Aaron Miller 00cb5fe2a5 Add thread count setting 1 year ago
Adam Treat 2b1cae5a7e Allow unloading/loading/changing of models. 1 year ago
Adam Treat f73fbf28a4 Fix the context. 1 year ago
Aaron Miller 5bfb3f8229 use the settings dialog settings when generating 1 year ago
Adam Treat 078b755ab8 Erase the correct number of logits when regenerating, which is not the
same as the number of tokens.
1 year ago
Adam Treat 1c5dd6710d When regenerating, erase the previous response and prompt from the context. 1 year ago
Adam Treat a9eced2d1e Add an abstraction around gpt-j that will allow other arch models to be loaded in ui. 1 year ago
Adam Treat 01dee6f20d Programmatically get the model name from the LLM. The LLM now searches
for applicable models in the directory of the executable given a pattern
match and then loads the first one it finds.

Also, add a busy indicator for model loading.
1 year ago
Adam Treat f1bbe97a5c Big updates to the UI. 1 year ago
Adam Treat c62ebdb81c Add a reset context feature to clear the chat history and the context for now. 1 year ago
Adam Treat f4ab48b0fa Comment out the list of chat features until it is ready. 1 year ago
Adam Treat ff2fdecce1 Initial commit. 1 year ago