Commit Graph

15 Commits (cd2e559db4e52cc03cd995e919800a459d40ecb6)

Author SHA1 Message Date
Aaron Miller 6e92d93b53 persistent threadcount setting: threadcount is now on the Settings object and gets reapplied after a model switch 1 year ago
Adam Treat 4c5a772b12 Don't define this twice. 1 year ago
Adam Treat 71b308e914 Add llama.cpp support for loading llama based models in the gui. We now support loading both gptj derived models and llama derived models. 1 year ago
Aaron Miller 00cb5fe2a5 Add thread count setting 1 year ago
Adam Treat 2b1cae5a7e Allow unloading/loading/changing of models. 1 year ago
Adam Treat f73fbf28a4 Fix the context. 1 year ago
Aaron Miller 5bfb3f8229 use the settings dialog settings when generating 1 year ago
Adam Treat 078b755ab8 Erase the correct amount of logits when regenerating which is not the same as the number of tokens. 1 year ago
Adam Treat 1c5dd6710d When regenerating, erase the previous response and prompt from the context. 1 year ago
Adam Treat a9eced2d1e Add an abstraction around gpt-j that will allow other arch models to be loaded in ui. 1 year ago
Adam Treat 01dee6f20d Programmatically get the model name from the LLM. The LLM now searches for applicable models in the directory of the executable given a pattern match and then loads the first one it finds. Also, add a busy indicator for model loading. 1 year ago
Adam Treat f1bbe97a5c Big updates to the UI. 1 year ago
Adam Treat c62ebdb81c Add a reset context feature to clear the chat history and the context for now. 1 year ago
Adam Treat f4ab48b0fa Comment out the list of chat features until it is ready. 1 year ago
Adam Treat ff2fdecce1 Initial commit. 1 year ago