Adam Treat
7094fd0788
Gracefully handle the case where a previous chat's model is no longer available.
2023-05-08 20:51:03 -04:00
Adam Treat
9da4fac023
Fix gptj to have lower memory requirements for the kv cache, and add versioning to the internal state so such fixes can be handled smoothly in the future.
2023-05-08 17:23:02 -04:00
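The versioning added in the commit above reflects a common pattern: prefix the serialized state with a version field so older saves can be migrated instead of rejected. A minimal sketch using Qt's QDataStream; STATE_VERSION, the helper names, and the idea of dropping a stale kv cache are illustrative assumptions, not the project's actual format.

```cpp
#include <QByteArray>
#include <QDataStream>

// Illustrative version constant; bumped when the kv cache layout changed.
static const qint32 STATE_VERSION = 2;

void saveState(QDataStream &out, const QByteArray &kvCache)
{
    out << STATE_VERSION;
    out << kvCache;
}

bool restoreState(QDataStream &in, QByteArray &kvCache)
{
    qint32 version = 0;
    in >> version;
    if (version < 1 || version > STATE_VERSION)
        return false; // corrupt data or a format from a newer build
    in >> kvCache;
    if (version < 2) {
        // A version-1 save used the old, larger kv cache layout; drop it
        // here rather than failing the whole load.
        kvCache.clear();
    }
    return in.status() == QDataStream::Ok;
}
```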
Adam Treat
c7f5280f9f
Fix the version.
2023-05-08 16:50:21 -04:00
Adam Treat
4bcc88b051
Convert the old format properly.
2023-05-08 05:53:16 -04:00
Adam Treat
fb464bb60e
Add debug for chatllm model loading and fix the order of getting rid of the dummy chat when no models are restored.
2023-05-07 14:40:02 -04:00
Adam Treat
3a039c8dc1
Deserialize the chats one at a time and don't block the GUI until all of them are done.
2023-05-07 09:20:09 -04:00
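Loading chats incrementally like this is typically done on a worker thread that hands each deserialized chat back to the GUI thread as it completes. A minimal sketch; ChatRestorer, its signals, and the *.chat file layout are hypothetical names for illustration, not the application's actual classes.

```cpp
#include <QDataStream>
#include <QDir>
#include <QFile>
#include <QObject>

// Loads saved chats one at a time on a worker thread and emits each one
// back to the GUI thread as it completes, instead of blocking the GUI
// until every chat is deserialized.
class ChatRestorer : public QObject {
    Q_OBJECT
public:
    explicit ChatRestorer(const QString &dir) : m_dir(dir) {}

public slots:
    void run() {
        const QDir dir(m_dir);
        for (const QString &name : dir.entryList({"*.chat"}, QDir::Files)) {
            QFile file(dir.filePath(name));
            if (!file.open(QIODevice::ReadOnly))
                continue;
            QDataStream in(&file);
            QByteArray serialized;
            in >> serialized;
            emit chatRestored(serialized); // delivered to the GUI thread
        }
        emit finished();
    }

signals:
    void chatRestored(const QByteArray &serialized);
    void finished();

private:
    QString m_dir;
};
```

Connecting chatRestored across threads gives a queued connection by default, so the GUI can insert each chat as it arrives while staying responsive.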
Adam Treat
fc8c158fac
Use the last LTS for this.
2023-05-07 06:39:32 -04:00
Adam Treat
280ad04c63
The GUI should come up immediately and not wait on deserializing from disk.
2023-05-06 20:01:14 -04:00
Adam Treat
ec7ea8a550
Move the chat files to the model download directory and add a magic number and version.
2023-05-06 18:51:49 -04:00
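A magic number plus version at the front of each chat file lets the loader reject foreign files immediately and migrate older formats. A minimal sketch with QDataStream; both constants are placeholders, not the values the application actually writes.

```cpp
#include <QDataStream>

// Placeholder header constants for the chat file format.
static const quint32 CHAT_MAGIC   = 0x43484154; // "CHAT", illustrative
static const qint32  CHAT_VERSION = 1;

void writeHeader(QDataStream &out)
{
    out << CHAT_MAGIC << CHAT_VERSION;
}

bool readHeader(QDataStream &in, qint32 &version)
{
    quint32 magic = 0;
    in >> magic >> version;
    // Reject files that are not ours or that come from a newer build.
    return magic == CHAT_MAGIC && version >= 1 && version <= CHAT_VERSION;
}
```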
Adam Treat
6ba0a1b693
Turn off saving chats to disk by default, as it consumes a large amount of disk space.
2023-05-05 12:30:11 -04:00
Adam Treat
c2a81e5692
Add an About dialog.
2023-05-05 10:47:05 -04:00
Adam Treat
01e582f15b
First attempt at providing a persistent chat list experience.

Limitations:
1) Context is not restored for gpt-j models
2) When you switch between different model types in an existing chat, the context and the entire conversation are lost
3) The settings are not chat or conversation specific
4) The persisted chat files are very large due to how much data the llama.cpp backend tries to persist (see the sketch below). Need to investigate how we can shrink this.
2023-05-04 15:31:41 -04:00
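Limitation 4 stems from persisting the backend's full context state. With llama.cpp's 2023-era C API this is done via llama_get_state_size and llama_copy_state_data; a minimal sketch follows (these calls were later renamed, so treat the signatures as historical):

```cpp
#include <cstdint>
#include <vector>
#include "llama.h"

// Capture the full llama.cpp context state for persistence. The reported
// size covers the kv cache, rng state, logits, and embeddings, which is
// why the persisted chat files grow so large.
std::vector<uint8_t> captureState(llama_context *ctx)
{
    std::vector<uint8_t> buffer(llama_get_state_size(ctx));
    llama_copy_state_data(ctx, buffer.data());
    return buffer;
}
```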