Commit Graph

20 Commits

Jared Van Bortel
d3d777bc51
chat: fix #includes with include-what-you-use (#2401)
Also use qGuiApp instead of qApp.

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-06-04 14:47:11 -04:00
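The qGuiApp note above is worth a brief illustration. In Qt, `qApp`'s type depends on which application header is in scope, while `qGuiApp` always yields a `QGuiApplication *` and needs only the `<QGuiApplication>` include, which fits the include-what-you-use cleanup. A minimal standalone sketch, not the commit's actual code:

```cpp
// Minimal sketch: prefer qGuiApp in GUI (non-widgets) code.
#include <QGuiApplication>
#include <QDebug>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);
    app.setApplicationName("example");

    // qGuiApp expands to a QGuiApplication* cast of QCoreApplication::instance(),
    // so no Qt Widgets header (and no QApplication) is required.
    qDebug() << qGuiApp->applicationName();
    return 0;
}
```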
Jared Van Bortel
fbbf810020
chat: fix issues with the initial "New Chat" (#2330)
* select the existing new chat if there already is one when "New Chat" is clicked
* scroll to the new chat when "New Chat" is clicked
* fix the "New Chat" being scrolled past the top of the chat list

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-05-15 14:09:32 -04:00
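The first fix above, reusing an existing fresh chat rather than stacking duplicates, boils down to a lookup-before-create. A hypothetical sketch of that logic; all names here are invented for illustration, and the real implementation lives in the chat list model:

```cpp
// Hypothetical sketch: "New Chat" selects an existing empty chat if present.
#include <cstdio>
#include <string>
#include <vector>

struct Chat { std::string name; bool hasMessages = false; };

// Return an existing fresh chat if there is one; otherwise create it.
Chat *newChat(std::vector<Chat> &chats)
{
    for (Chat &c : chats)
        if (c.name == "New Chat" && !c.hasMessages)
            return &c;                 // reuse instead of duplicating
    chats.push_back({"New Chat"});
    return &chats.back();
}

int main()
{
    std::vector<Chat> chats;
    Chat *a = newChat(chats);
    Chat *b = newChat(chats);          // same chat selected again
    std::printf("%zu chat(s), same object: %d\n", chats.size(), a == b);
    return 0;
}
```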
Jared Van Bortel
7e1e00f331
chat: fix issues with quickly switching between multiple chats (#2343)
* prevent load progress from getting out of sync with the current chat
* fix memory leak on exit if the LLModelStore contains a model
* do not report cancellation as a failure in console/Mixpanel
* show "waiting for model" separately from "switching context" in UI
* do not show lower "reload" button on error
* skip context switch if unload is pending
* skip unnecessary calls to LLModel::saveState

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-05-15 14:07:03 -04:00
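Two of the fixes above, skipping the context switch while an unload is pending and skipping unnecessary `LLModel::saveState` calls, follow a common guard pattern. A hypothetical sketch; apart from the `saveState` name, everything here is invented for illustration and is not GPT4All's actual ChatLLM implementation:

```cpp
// Hypothetical guard pattern: skip work that a pending unload or a clean
// state makes unnecessary.
#include <atomic>
#include <cstdio>

struct ChatLLM {
    std::atomic<bool> unloadRequested{false};
    bool stateDirty = false;

    void saveState() { std::puts("saving state"); stateDirty = false; }

    void switchContext()
    {
        if (unloadRequested.load()) {          // skip if unload is pending
            std::puts("unload pending: skipping context switch");
            return;
        }
        if (stateDirty)                        // skip unnecessary saveState calls
            saveState();
        std::puts("context switched");
    }
};

int main()
{
    ChatLLM llm;
    llm.stateDirty = true;
    llm.switchContext();                       // saves, then switches
    llm.unloadRequested = true;
    llm.switchContext();                       // skipped entirely
    return 0;
}
```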
Adam Treat
17dee02287
Fix for issue #2080 where the GUI appears to hang when a chat with a large
model is deleted. There is no reason to save the context for a chat that
is being deleted.

Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-03-06 16:52:17 -06:00
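The reasoning in the message reduces to one guard: never run the slow context save on a chat that is about to be destroyed. A hypothetical sketch, with all types and members invented for illustration:

```cpp
// Hypothetical sketch of the #2080 fix: the context save is what made the
// GUI appear to hang, so skip it when the chat is being deleted anyway.
#include <cstdio>

struct Chat {
    bool markedForDeletion = false;

    void saveContext() { std::puts("saving context (slow for large models)"); }

    void unload()
    {
        if (!markedForDeletion)   // no reason to save a chat being deleted
            saveContext();
        std::puts("model released");
    }
};

int main()
{
    Chat doomed;
    doomed.markedForDeletion = true;
    doomed.unload();              // releases immediately, no slow save
    return 0;
}
```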
Jared Van Bortel
a0bd96f75d
chat: join ChatLLM threads without calling destructors (#2043)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-03-06 16:42:59 -05:00
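The general Qt pattern behind this commit: `QThread::quit()` asks the thread's event loop to exit and `QThread::wait()` joins it, after which the thread and worker objects can be intentionally leaked at exit so their destructors never run. A minimal sketch of that pattern, not GPT4All's actual shutdown code:

```cpp
// Minimal sketch: join a QThread at shutdown without destroying it.
#include <QCoreApplication>
#include <QThread>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    auto *thread = new QThread;
    thread->start();          // default run() enters an event loop

    // ... move worker objects onto `thread` and drive them via signals ...

    thread->quit();           // ask the event loop to exit
    thread->wait();           // join: block until the thread finishes
    // Deliberately no `delete thread` (or worker teardown): the process is
    // exiting, and skipping destructors avoids order-of-destruction issues.
    return 0;
}
```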
Adam Treat
d948a4f2ee
Complete revamp of model loading to allow for more discrete control by
the user over the model's loading behavior.

Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-02-21 10:15:20 -06:00
Jared Van Bortel
0a40e71652
Maxwell/Pascal GPU support and crash fix (#1895)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-01-31 16:32:32 -05:00
Jared Van Bortel
061d1969f8
expose n_gpu_layers parameter of llama.cpp (#1890)
Also dynamically limit the GPU layers and context length fields to the maximum supported by the model.

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-01-31 14:17:44 -05:00
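For context, the llama.cpp parameter in question is a field on the model-load params struct. A sketch against the C API as it stood around the time of this commit; the path and layer count are placeholders, and initialization and error handling are trimmed:

```cpp
// Sketch of the llama.cpp knob the commit exposes in the GUI.
#include "llama.h"

int main()
{
    llama_model_params mparams = llama_model_default_params();
    mparams.n_gpu_layers = 32;  // layers offloaded to the GPU; the UI clamps
                                // this to the maximum the model supports

    llama_model *model = llama_load_model_from_file("model.gguf", mparams);
    if (model == nullptr)
        return 1;
    llama_free_model(model);
    return 0;
}
```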
Gerhard Stein
3e99b90c0b
Some cleanups
2024-01-03 08:41:40 -06:00
Adam Treat
908aec27fe
Always save chats to disk, but save them as text by default. This also changes
the UI behavior to always open a 'New Chat' and set it as current, instead of
setting a restored chat as current. This improves usability by not requiring
the user to wait if they want to start chatting immediately.
2023-10-12 07:52:11 -04:00
Adam Treat
285aa50b60
Consolidate generation and application settings on the new settings object.
2023-06-28 20:36:43 -03:00
Adam Treat
7f01b153b3
Modellist temp
2023-06-26 14:14:46 -04:00
Adam Treat
968868415e
Move saving chats to a thread and display what we're doing to the user.
2023-06-20 17:18:33 -04:00
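A minimal sketch of the shape of this change, moving serialization off the main thread while telling the user what is happening. It uses QtConcurrent for brevity; the actual code may use a dedicated thread instead:

```cpp
// Minimal sketch: save off the main thread, with a user-visible status.
#include <QCoreApplication>
#include <QtConcurrent>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    qDebug() << "Saving chats...";      // what the UI would display
    QFuture<void> f = QtConcurrent::run([] {
        // serialize chats to disk here, off the main thread
    });
    f.waitForFinished();                // demo only; a real UI stays responsive
    qDebug() << "Done.";
    return 0;
}
```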
Adam Treat
c8a590bc6f
Get rid of the last blocking operations and make the chat/llm thread safe.
2023-06-20 18:18:10 -03:00
Adam Treat
9f590db98d
Better error handling when the model fails to load.
2023-06-04 14:55:05 -04:00
Adam Treat
f931de21c5
Add save/restore to ChatGPT chats and allow serialize/deserialize from disk.
2023-05-16 10:31:55 -04:00
Adam Treat
b71c0ac3bd
The server has different lifetime management than the other chats.
2023-05-13 19:34:54 -04:00
Adam Treat
ddc24acf33
Much better memory management for multi-threaded model loading/unloading.
2023-05-13 19:10:56 -04:00
Adam Treat
2989b74d43
httpserver
2023-05-13 19:07:06 -04:00
Adam Treat
6015154bef
Moving everything to a subdir for the monorepo merge.
2023-05-10 10:26:55 -04:00