Commit Graph

12 Commits (88a0ee35098004a6599949e7a1a699170ab94b28)

Author SHA1 Message Date
Adam Treat 5f372bd881 Gracefully handle when we have a previous chat where the model that it used has gone away. 1 year ago
Adam Treat 8c4b8f215f Fix gptj to have lower memory requirements for kv cache and add versioning to the internal state to smoothly handle such a fix in the future. 1 year ago
Adam Treat ccbd16cf18 Fix the version. 1 year ago
Adam Treat 3c30310539 Convert the old format properly. 1 year ago
Adam Treat 7b66cb7119 Add debug for chatllm model loading and fix order of getting rid of the dummy chat when no models are restored. 1 year ago
Adam Treat 9bd5609ba0 Deserialize one at a time and don't block gui until all of them are done. 1 year ago
Adam Treat 86da175e1c Use the last LTS for this. 1 year ago
Adam Treat ab13148430 The GUI should come up immediately and not wait on deserializing from disk. 1 year ago
Adam Treat eb7b61a76d Move the location of the chat files to the model download directory and add a magic+version. 1 year ago
Adam Treat 8d2c8c8cb0 Turn off saving chats to disk by default as it eats so much disk space. 1 year ago
Adam Treat 06bb6960d4 Add about dialog. 1 year ago
Adam Treat f291853e51 First attempt at providing a persistent chat list experience.
Limitations:

1) Context is not restored for gpt-j models
2) When you switch between different model types in an existing chat,
   the context and the entire conversation are lost
3) The settings are not chat- or conversation-specific
4) The persisted chat files are very large due to how much data the
   llama.cpp backend tries to persist. Need to investigate how we can
   shrink this.
1 year ago
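Two of the commits above describe a common serialization pattern: eb7b61a76d adds a "magic+version" header to the persisted chat files, and 8c4b8f215f versions the internal state so a format fix can be handled smoothly later. A minimal sketch of that pattern follows; the constants, function names, and layout here are hypothetical illustrations, not gpt4all's actual on-disk format.

```python
import io
import struct

# Hypothetical constants; the real project's magic number and
# version values are different.
CHAT_MAGIC = 0xC4A7F11E
CHAT_VERSION = 2

def write_header(f):
    # The magic number identifies the file type up front; the version
    # lets future readers adapt their deserialization logic when the
    # format changes (the idea behind commit 8c4b8f215f's versioning).
    f.write(struct.pack("<II", CHAT_MAGIC, CHAT_VERSION))

def read_header(f):
    magic, version = struct.unpack("<II", f.read(8))
    if magic != CHAT_MAGIC:
        raise ValueError("not a chat file")
    if version > CHAT_VERSION:
        raise ValueError("chat file written by a newer version")
    return version  # caller branches on this to read old formats

buf = io.BytesIO()
write_header(buf)
buf.seek(0)
print(read_header(buf))  # 2
```

The payoff is exactly what these commits needed: an old reader rejects files it cannot understand instead of misparsing them, and a new reader can keep a conversion path for old versions (as 3c30310539, "Convert the old format properly", does).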