Commit Graph

340 Commits (005c092943e573c015f0bb23727fd8e576ef0ee8)

Author SHA1 Message Date
Adam Treat 908aec27fe Always save chats to disk, but save them as text by default. This also changes the UI behavior to always open a 'New Chat' and setting it as current instead of setting a restored chat as current. This improves usability by not requiring the user to wait if they want to immediately start chatting. 1 year ago
cebtenzzre 04499d1c7d chatllm: do not write uninitialized data to stream (#1486) 1 year ago
Adam Treat f0742c22f4 Restore state from text if necessary. 1 year ago
Adam Treat 35f9cdb70a Do not delete saved chats if we fail to serialize properly. 1 year ago
cebtenzzre 9fb135e020 cmake: install the GPT-J plugin (#1487) 1 year ago
Aaron Miller 3c25d81759 make codespell happy 1 year ago
Jan Philipp Harries 4f0cee9330 added EM German Mistral Model 1 year ago
Adam Treat 56c0d2898d Update the language here to avoid misunderstanding. 1 year ago
Adam Treat b2cd3bdb3f Fix crasher with an empty string for prompt template. 1 year ago
Cebtenzzre 5fe685427a chat: clearer CPU fallback messages 1 year ago
Aaron Miller 9325075f80 fix stray comma in models2.json (Signed-off-by: Aaron Miller <apage43@ninjawhale.com>) 1 year ago
Adam Treat f028f67c68 Add starcoder, rift and sbert to our models2.json. 1 year ago
Adam Treat 4528f73479 Reorder and refresh our models2.json. 1 year ago
Cebtenzzre 1534df3e9f backend: do not use Vulkan with non-LLaMA models 1 year ago
Cebtenzzre 672cb850f9 differentiate between init failure and unsupported models 1 year ago
Cebtenzzre a5b93cf095 more accurate fallback descriptions 1 year ago
Cebtenzzre 75deee9adb chat: make sure to clear fallback reason on success 1 year ago
Cebtenzzre 2eb83b9f2a chat: report reason for fallback to CPU 1 year ago
Adam Treat ea66669cef Switch to new models2.json for new gguf release and bump our version to 2.5.0. 1 year ago
Adam Treat 12f943e966 Fix regenerate button to be deterministic and bump the llama version to latest we have for gguf. 1 year ago
Cebtenzzre a49a1dcdf4 chatllm: grammar fix 1 year ago
Cebtenzzre 31b20f093a modellist: fix the system prompt 1 year ago
Cebtenzzre 8f3abb37ca fix references to removed model types 1 year ago
Adam Treat d90d003a1d Latest rebase on llama.cpp with gguf support. 1 year ago
Akarshan Biswas 5f3d739205 appdata: update software description 1 year ago
Akarshan Biswas b4cf12e1bd Update to 2.4.19 1 year ago
Akarshan Biswas 21a5709b07 Remove unnecessary stuffs from manifest 1 year ago
Akarshan Biswas 4426640f44 Add flatpak manifest 1 year ago
Aaron Miller 6711bddc4c launch browser instead of maintenancetool from offline builds 1 year ago
Aaron Miller 7f979c8258 Build offline installers in CircleCI 1 year ago
Adam Treat dc80d1e578 Fix up the offline installer. 1 year ago
Adam Treat f47e698193 Release notes for v2.4.19 and bump the version. 1 year ago
Adam Treat ecf014f03b Release notes for v2.4.18 and bump the version. 1 year ago
Adam Treat e6e724d2dc Actually bump the version. 1 year ago
Adam Treat 06a833e652 Send actual and requested device info for those who have opt-in. 1 year ago
Adam Treat 045f6e6cdc Link against ggml in bin so we can get the available devices without loading a model. 1 year ago
Adam Treat 655372dbfa Release notes for v2.4.17 and bump the version. 1 year ago
Adam Treat aa33419c6e Fallback to CPU more robustly. 1 year ago
Adam Treat 79843c269e Release notes for v2.4.16 and bump the version. 1 year ago
Adam Treat 3076e0bf26 Only show GPU when we're actually using it. 1 year ago
Adam Treat 1fa67a585c Report the actual device we're using. 1 year ago
Adam Treat 21a3244645 Fix a bug where we're not properly falling back to CPU. 1 year ago
Adam Treat 0458c9b4e6 Add version 2.4.15 and bump the version number. 1 year ago
Aaron Miller 6f038c136b init at most one vulkan device, submodule update (fixes issues w/ multiple of the same gpu) 1 year ago
Adam Treat 86e862df7e Fix up the name and formatting. 1 year ago
Adam Treat 358ff2a477 Show the device we're currently using. 1 year ago
Adam Treat 891ddafc33 When device is Auto (the default) then we will only consider discrete GPU's otherwise fallback to CPU. 1 year ago
Adam Treat 8f99dca70f Bring the vulkan backend to the GUI. 1 year ago
Adam Treat 987546c63b Nomic vulkan backend licensed under the Software for Open Models License (SOM), version 1.0. 1 year ago
Adam Treat d55cbbee32 Update to newer llama.cpp and disable older forks. 1 year ago