Commit Graph

47 Commits (a48226613c04a287fc303f642e69b219eb7822a4)

Author SHA1 Message Date
Adam Treat a48226613c Turn the chat list into a model. 1 year ago
Adam Treat 679b61ee07 Provide convenience methods for adding/removing/changing chat. 1 year ago
Adam Treat 6e6b96375d Handle the fwd of important signals from LLM object so qml doesn't have to deal with which chat is current. 1 year ago
Adam Treat 4d87c46948 Major refactor in prep for multiple conversations. 1 year ago
Adam Treat d1e3198b65 Add new C++ version of the chat model. Getting ready for chat history. 1 year ago
Adam Treat 9f323759ce Remove these as it is mitigated by repeat penalty and models really should train this out. 1 year ago
Adam Treat a6ca45c9dd Use the universal sep. 1 year ago
Adam Treat 69f92d8ea8 Load models from filepath only. 1 year ago
Adam Treat 62a885de40 Always try and load default model first. Groovy is the default default. 1 year ago
Adam Treat 5a7d40f604 Move the saving of the tokens to the impl and not the callback's responsibility. 1 year ago
Adam Treat ba4b28fcd5 Move the promptCallback to own function. 1 year ago
Adam Treat 386ce08fca Track check for updates. 1 year ago
Adam Treat ee5c58c26c Initial support for opt-in telemetry. 1 year ago
Adam Treat a3d97fa009 Don't crash when prompt is too large. 1 year ago
Aaron Miller 15a979b327 new settings (model path, repeat penalty) w/ tabs 1 year ago
Adam Treat cf8a4dd868 Infinite context window through trimming. 1 year ago
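The "Infinite context window through trimming" commit above refers to the general technique of evicting the oldest tokens once the window fills. A minimal, self-contained sketch of that idea; the container and function names are assumptions for illustration, not the repository's code:

```cpp
#include <cstdint>
#include <vector>

// Drop the oldest tokens once the window would overflow, so generation can keep
// going indefinitely instead of failing when the context limit is reached.
void trimContext(std::vector<int32_t> &tokens, size_t nCtx, size_t reserve)
{
    if (tokens.size() + reserve <= nCtx)
        return; // still fits, nothing to trim
    const size_t excess = tokens.size() + reserve - nCtx;
    tokens.erase(tokens.begin(), tokens.begin() + excess);
}
```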
Adam Treat a79bc4233c Implement repeat penalty for both llama and gptj in gui. 1 year ago
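Repeat penalty, referenced in several commits here, is conventionally applied by damping the logits of recently seen tokens before sampling. A hedged sketch of that standard pass; the names and the exact formula shown are illustrative, not taken from this codebase:

```cpp
#include <cstdint>
#include <unordered_set>
#include <vector>

// Push down the logits of tokens that already occur in the recent history.
// With penalty > 1.0, positive logits shrink and negative logits grow more negative.
void applyRepeatPenalty(std::vector<float> &logits,
                        const std::vector<int32_t> &recentTokens,
                        float penalty)
{
    const std::unordered_set<int32_t> seen(recentTokens.begin(), recentTokens.end());
    for (int32_t tok : seen) {
        float &l = logits[static_cast<size_t>(tok)];
        l = l > 0.0f ? l / penalty : l * penalty;
    }
}
```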
Adam Treat a02b0c14ca Don't crash right out of the installer ;) 1 year ago
Aaron Miller 29e3e04fcf persistent threadcount setting: threadcount is now on the Settings object and gets reapplied after a model switch. 1 year ago
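The persistent threadcount commit describes keeping the preference on a settings object and reapplying it after a model switch. One plausible Qt-style sketch using QSettings; the "threadCount" key is an assumption, not the app's real key, and it presumes the usual organization/application names are set:

```cpp
#include <QSettings>
#include <QThread>

// Read the persisted thread count, falling back to the hardware default.
int loadThreadCount()
{
    QSettings settings;
    return settings.value("threadCount", QThread::idealThreadCount()).toInt();
}

// Persist the user's choice so it survives restarts and model switches.
void saveThreadCount(int count)
{
    QSettings settings;
    settings.setValue("threadCount", count);
}
```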
Adam Treat 74621109c9 Add a fixme for dubious code. 1 year ago
Adam Treat c086a45173 Provide a non-privileged place for model downloads when exe is installed to root. 1 year ago
Adam Treat 43e6d05d21 Don't crash starting with no model. 1 year ago
Adam Treat 55084333a9 Add llama.cpp support for loading llama based models in the gui. We now support loading both gptj derived models and llama derived models. 1 year ago
Aaron Miller f1b87d0b56 Add thread count setting 1 year ago
Adam Treat e6cb6a2ae3 Add a new model download feature. 1 year ago
Adam Treat 1eda8f030e Allow unloading/loading/changing of models. 1 year ago
Adam Treat 15ae0a4441 Fix the context. 1 year ago
Adam Treat 659ab13665 Don't allow empty prompts. Context past is always equal to or greater than zero. 1 year ago
Adam Treat 7e9ca06366 Trim trailing whitespace at the end of generation. 1 year ago
Adam Treat fdf7f20d90 Remove newlines too. 1 year ago
Adam Treat f8b962d50a More conservative default params and trim leading whitespace from response. 1 year ago
Aaron Miller cb6d2128d3 use the settings dialog settings when generating 1 year ago
Adam Treat 2354779ac1 Provide an instruct/chat template. 1 year ago
Aaron Miller 0f9b80e6b6 Use completeBaseName to display model name: this cuts the filename at the *final* dot instead of the first, allowing model names with version numbers to be displayed correctly. 1 year ago
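For reference, the behavior that commit relies on is Qt's QFileInfo API: baseName() cuts at the first dot, completeBaseName() at the last. A small example; the file name is only illustrative:

```cpp
#include <QDebug>
#include <QFileInfo>

int main()
{
    // Illustrative file name; any model file with dots in its version behaves the same way.
    QFileInfo info("ggml-gpt4all-j-v1.3-groovy.bin");
    qDebug() << info.baseName();         // "ggml-gpt4all-j-v1"          (cut at the first dot)
    qDebug() << info.completeBaseName(); // "ggml-gpt4all-j-v1.3-groovy" (cut at the final dot)
    return 0;
}
```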
Adam Treat 2f3a46c17f Erase the correct amount of logits when regenerating, which is not the same as the number of tokens. 1 year ago
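The distinction that commit draws comes from each evaluated token producing a full row of vocabulary logits, so the number of floats to erase is the token count times the vocabulary size rather than the token count itself. A hedged illustration; the member names and layout are assumptions, not the project's actual data structures:

```cpp
#include <cstdint>
#include <vector>

// Remove the last `tokensToErase` tokens and their logits. Each token contributes a
// whole row of nVocab logits, so the logits vector shrinks by tokensToErase * nVocab.
void eraseLastTokens(std::vector<int32_t> &tokens, std::vector<float> &logits,
                     size_t tokensToErase, size_t nVocab)
{
    tokens.erase(tokens.end() - tokensToErase, tokens.end());
    logits.erase(logits.end() - tokensToErase * nVocab, logits.end());
}
```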
Adam Treat 12bf78bf24 Fix crash with recent change to erase context. 1 year ago
Adam Treat f8005cff45 When regenerating erase the previous response and prompt from the context. 1 year ago
Adam Treat 9de185488c Add an abstraction around gpt-j that will allow other arch models to be loaded in ui. 1 year ago
Adam Treat 0d8b5bbd49 Fix the check for updates on mac. 1 year ago
Adam Treat c183702aa4 Provide a busy indicator if we're processing a long prompt and make the stop button work in the middle of processing a long prompt as well. 1 year ago
Adam Treat 72b964e064 Fix the name of the updates tool. 1 year ago
Adam Treat 0ea31487e3 Programmatically get the model name from the LLM. The LLM now searches for applicable models in the directory of the executable given a pattern match and then loads the first one it finds. Also, add a busy indicator for model loading. 1 year ago
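That model-discovery commit describes scanning the executable's directory for files matching a pattern and loading the first hit. A rough Qt sketch of that behavior; the "ggml-*.bin" name filter is an assumed pattern used purely for illustration:

```cpp
#include <QCoreApplication>
#include <QDir>
#include <QString>
#include <QStringList>

// Return the first model file found next to the executable, or an empty string.
QString findFirstModel()
{
    QDir dir(QCoreApplication::applicationDirPath());
    // Assumed name filter for illustration; sorted so the result is deterministic.
    const QStringList matches =
        dir.entryList(QStringList{"ggml-*.bin"}, QDir::Files, QDir::Name);
    return matches.isEmpty() ? QString() : dir.absoluteFilePath(matches.first());
}
```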
Adam Treat a56a258099 Big updates to the UI. 1 year ago
Adam Treat b1b7744241 Add a reset context feature to clear the chat history and the context for now. 1 year ago
Adam Treat ae91bfa48a Fixes for linux and macosx. 1 year ago
Adam Treat 6ce4089c4f Prelim support for past context. 1 year ago
Adam Treat ff2fdecce1 Initial commit. 1 year ago