Commit Graph

51 Commits (30692a2dfc32569bfee73373815a50bc5a5f97c5)

Author SHA1 Message Date
Jared Van Bortel 01870b4a46
chat: fix blank device in UI and improve Mixpanel reporting (#2409)
Also remove LLModel::hasGPUDevice.

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
3 months ago
AT 9273b49b62
chat: major UI redesign for v3.0.0 (#2396)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
3 months ago
Jared Van Bortel 41c9013fa4
chat: don't use incomplete types with signals/slots/Q_INVOKABLE (#2408)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
4 months ago
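
The constraint behind the commit above is a Qt one: in Qt 6, the moc-generated code registers the metatypes of signal, slot, and Q_INVOKABLE parameter and return types, so those types must be complete wherever that generated code is compiled; a bare forward declaration is no longer enough. A minimal sketch with hypothetical class and header names, not the actual gpt4all declarations:

    #include <QObject>
    #include "chatitem.h"  // hypothetical header with the complete ChatItem definition,
                           // replacing a bare "class ChatItem;" forward declaration

    class Chat : public QObject
    {
        Q_OBJECT
    signals:
        // Qt 6 moc auto-registers the metatype of every signal parameter, so
        // ChatItem must be complete in the translation unit that compiles the
        // moc output, even for a const-reference parameter.
        void itemAdded(const ChatItem &item);

    public:
        // Returned by value to QML: also requires the complete type.
        Q_INVOKABLE ChatItem itemAt(int index) const;
    };
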
Jared Van Bortel d3d777bc51
chat: fix #includes with include-what-you-use (#2401)
Also use qGuiApp instead of qApp.

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
4 months ago
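
On the qGuiApp change above: qGuiApp (from <QGuiApplication>) yields the application instance already cast to QGuiApplication*, which is the appropriate type for a QML-only app, whereas which type the qApp macro resolves to depends on which application header is in scope. A small sketch; the helper is hypothetical:

    #include <QGuiApplication>

    // Hypothetical helper: true if any window of the app currently has focus.
    bool appHasFocusedWindow()
    {
        // qGuiApp is QCoreApplication::instance() already cast to QGuiApplication*,
        // so GUI-level members such as focusWindow() are directly available
        // without relying on the QtWidgets-flavoured qApp from <QApplication>.
        return qGuiApp->focusWindow() != nullptr;
    }
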
Jared Van Bortel 7e1e00f331
chat: fix issues with quickly switching between multiple chats (#2343)
* prevent load progress from getting out of sync with the current chat
* fix memory leak on exit if the LLModelStore contains a model
* do not report cancellation as a failure in console/Mixpanel
* show "waiting for model" separately from "switching context" in UI
* do not show lower "reload" button on error
* skip context switch if unload is pending
* skip unnecessary calls to LLModel::saveState

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
4 months ago
Jared Van Bortel c622921894
improve mixpanel usage statistics (#2238)
Other changes:
- Always display first start dialog if privacy options are unset (e.g. if the user closed GPT4All without selecting them); see the sketch below this entry
- LocalDocs scanQueue is now always deferred
- Fix a potential crash in magic_match
- LocalDocs indexing is now started after the first start dialog is dismissed so usage stats are included

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
5 months ago
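
A minimal sketch of the first listed change, assuming hypothetical QSettings keys for the two privacy options; the real option names in gpt4all may differ:

    #include <QSettings>

    bool shouldShowFirstStartDialog()
    {
        QSettings settings;
        // Each privacy option stays "unset" until the user has explicitly answered,
        // so the dialog is shown again whenever either answer is still missing.
        const bool statsAnswered    = settings.contains("network/usageStatsActive");
        const bool datalakeAnswered = settings.contains("network/isActive");
        return !statsAnswered || !datalakeAnswered;
    }
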
Adam Treat 94a9943782 Change the behavior of the show references setting for localdocs.
Signed-off-by: Adam Treat <treat.adam@gmail.com>
5 months ago
Adam Treat 17dee02287 Fix for issue #2080 where the GUI appears to hang when a chat with a large
model is deleted. There is no reason to save the context for a chat that
is being deleted.

Signed-off-by: Adam Treat <treat.adam@gmail.com>
7 months ago
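
A minimal sketch of the idea behind the fix above; the class body, member names, and flag are hypothetical, not the actual gpt4all code:

    class ChatLLM /* sketch only, not the real class */
    {
    public:
        void unloadModel()
        {
            if (!m_isBeingDeleted)   // hypothetical flag set when the chat is removed
                saveState();         // serializing a large model's context is the slow part
            releaseModel();          // hypothetical: return the model to the shared store
        }
    private:
        bool m_isBeingDeleted = false;
        void saveState();
        void releaseModel();
    };
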
Jared Van Bortel 44717682a7
chat: implement display of model loading warnings (#2034)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
7 months ago
Adam Treat 67099f80ba Add comment to make this clear.
Signed-off-by: Adam Treat <treat.adam@gmail.com>
7 months ago
Adam Treat d948a4f2ee Complete revamp of model loading to allow for more discrete control by
the user over the model's loading behavior.

Signed-off-by: Adam Treat <treat.adam@gmail.com>
7 months ago
Jared Van Bortel 29d2c936d1
chat: don't show "retrieving localdocs" for zero collections (#1874)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
8 months ago
Gerhard Stein 3e99b90c0b Some cleanups 9 months ago
Jared Van Bortel 0600f551b3
chatllm: do not attempt to serialize incompatible state (#1742) 9 months ago
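
One common way to avoid restoring incompatible saved state is to tag the serialized blob with a version and bail out on mismatch. The sketch below uses a hypothetical helper and a made-up version constant, not the actual gpt4all serialization format:

    #include <QDataStream>

    bool deserializeState(QDataStream &in)
    {
        qint32 version = 0;
        in >> version;
        if (version != 5)   // hypothetical current version; on mismatch, refuse to
            return false;   // restore and let the caller rebuild the context from text
        // ... read the rest of the saved state ...
        return true;
    }
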
Adam Treat 371e2a5cbc LocalDocs version 2 with text embeddings. 10 months ago
Adam Treat f529d55380 Move this logic to QML. 11 months ago
Adam Treat 131cfcdeae Don't regenerate the name for deserialized chats. 11 months ago
Adam Treat 908aec27fe Always save chats to disk, but save them as text by default. This also changes
the UI behavior to always open a 'New Chat' and set it as current instead
of setting a restored chat as current. This improves usability by not requiring
the user to wait if they want to immediately start chatting.
11 months ago
Adam Treat f0742c22f4 Restore state from text if necessary. 11 months ago
Cebtenzzre 2eb83b9f2a chat: report reason for fallback to CPU 12 months ago
Adam Treat 1fa67a585c Report the actual device we're using. 1 year ago
Adam Treat 9dccc96e70 Immediately signal when the model is in a new loading state. 1 year ago
Adam Treat eab92a9d73 Fix typo and add new show references setting to localdocs. 1 year ago
Adam Treat 6d9cdf228c Huge change that completely revamps the settings dialog and implements
per-model settings as well as the ability to clone a model into a "character."
This also implements system prompts, plus quite a few bugfixes; for instance,
this fixes chatgpt.
1 year ago
Adam Treat 285aa50b60 Consolidate generation and application settings on the new settings object. 1 year ago
Adam Treat 64e98b8ea9 Fix bug with model loading on initial load. 1 year ago
Adam Treat 7f01b153b3 Modellist temp 1 year ago
Adam Treat c8a590bc6f Get rid of last blocking operations and make the chat/llm thread safe. 1 year ago
Adam Treat 7d2ce06029 Start working on more thread safety and model load error handling. 1 year ago
Adam Treat a3a6a20146 Don't store db results in ChatLLM. 1 year ago
AT 2b6cc99a31
Show token generation speed in gui. (#1020) 1 year ago
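
A minimal sketch of how a tokens-per-second figure can be computed for display; the counter class is hypothetical and the real UI wiring (exposing the string to QML) is omitted:

    #include <QElapsedTimer>
    #include <QString>

    class TokenSpeedCounter
    {
    public:
        void start()    { m_tokens = 0; m_timer.start(); }
        void addToken() { ++m_tokens; }
        QString text() const
        {
            const double secs = m_timer.elapsed() / 1000.0;
            if (secs <= 0.0 || m_tokens == 0)
                return {};
            return QString::number(m_tokens / secs, 'f', 2) + QStringLiteral(" tokens/sec");
        }
    private:
        int m_tokens = 0;
        QElapsedTimer m_timer;  // monotonic, unaffected by wall-clock changes
    };
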
Adam Treat 782e1e77a4 Fix up model names that don't begin with 'ggml-' 1 year ago
AT a576220b18
Support loading files if 'ggml' is found anywhere in the name, not just at
the beginning (#1001). Also add a deprecated flag to models.json so older versions will
show a model, but later versions don't. This will allow us to transition
away from models < ggmlv2 and still allow older installs of gpt4all to work.
1 year ago
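
A minimal sketch of the relaxed filename check described above; the helper name is hypothetical, and only the matching rule is the point:

    #include <QString>

    bool looksLikeSupportedModelFile(const QString &fileName)
    {
        // previously equivalent to: fileName.startsWith("ggml-")
        return fileName.contains(QStringLiteral("ggml"));
    }
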
Adam Treat 9f590db98d Better error handling when the model fails to load. 1 year ago
niansa/tuxifan f3564ac6b9
Fixed tons of warnings and clazy findings (#811) 1 year ago
Adam Treat 28944ac01b Fix for stale references after we regenerate. 1 year ago
Adam Treat aea94f756d Better name for database results. 1 year ago
Adam Treat f62e439a2d Make localdocs work with server mode. 1 year ago
Adam Treat 912cb2a842 Get rid of blocking behavior for regenerate response. 1 year ago
Adam Treat 9b0629db8b Add context link to references. 1 year ago
Adam Treat db9eecdce4 Store the references separately so they are not sent to datalake. 1 year ago
Adam Treat b5380c9b7f Add the collections to serialization and implement references for localdocs. 1 year ago
Adam Treat 01b8c7617f Add more of the UI for selecting collections for chats. 1 year ago
Adam Treat c800291e7f Add prompt processing and localdocs to the busy indicator in UI. 1 year ago
Adam Treat 7e42af5f33 localdocs 1 year ago
Adam Treat f931de21c5 Add save/restore to chatgpt chats and allow serialize/deserialize from disk. 1 year ago
Adam Treat dd27c10f54 Preliminary support for chatgpt models. 1 year ago
Adam Treat ddc24acf33 Much better memory management for multi-threaded model loading/unloading. 1 year ago
Adam Treat 2989b74d43 httpserver 1 year ago
Adam Treat 76675536b0 Cleanup the chatllm properly. 1 year ago