Commit Graph

8 Commits (fb464bb60ee9f5510dc3180b6a41c282abfa4dc7)

Author SHA1 Message Date
Adam Treat fb464bb60e Add debug for chatllm model loading and fix order of getting rid of the dummy chat when no models are restored. 1 year ago
Adam Treat 3a039c8dc1 Deserialize one at a time and don't block gui until all of them are done. 1 year ago
Adam Treat fc8c158fac Use last LTS for this. 1 year ago
Adam Treat 280ad04c63 The GUI should come up immediately and not wait on deserializing from disk. 1 year ago
Adam Treat ec7ea8a550 Move the location of the chat files to the model download directory and add a magic+version. 1 year ago
Adam Treat 6ba0a1b693 Turn off saving chats to disk by default as it eats so much disk space. 1 year ago
Adam Treat c2a81e5692 Add about dialog. 1 year ago
Adam Treat 01e582f15b First attempt at providing a persistent chat list experience. 1 year ago
Limitations:

1) Context is not restored for gpt-j models
2) When you switch between different model types in an existing chat,
   the context and the entire conversation are lost
3) The settings are not chat- or conversation-specific
4) The persisted chat files are very large due to how much data the
   llama.cpp backend tries to persist. Need to investigate how we can
   shrink this.
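
Commit ec7ea8a550 mentions adding a magic+version to the persisted chat files. As a rough illustration only (the constants, file layout, and helper names below are assumptions, not taken from the repository), a magic number and format version can be written and checked with Qt's QDataStream like this:

```cpp
// Sketch only: CHAT_FILE_MAGIC, CHAT_FILE_VERSION, writeChatFile and
// readChatFile are illustrative names, not the project's actual code.
#include <QDataStream>
#include <QFile>
#include <QByteArray>
#include <QString>

static const quint32 CHAT_FILE_MAGIC   = 0xC4A7F11E; // arbitrary placeholder value
static const qint32  CHAT_FILE_VERSION = 1;

bool writeChatFile(const QString &path, const QByteArray &chatData)
{
    QFile file(path);
    if (!file.open(QIODevice::WriteOnly))
        return false;

    QDataStream out(&file);
    out << CHAT_FILE_MAGIC;    // identifies the file as a chat file
    out << CHAT_FILE_VERSION;  // lets future code handle older layouts
    out.writeRawData(chatData.constData(), chatData.size());
    return out.status() == QDataStream::Ok;
}

bool readChatFile(const QString &path, QByteArray &chatData)
{
    QFile file(path);
    if (!file.open(QIODevice::ReadOnly))
        return false;

    QDataStream in(&file);
    quint32 magic = 0;
    qint32 version = 0;
    in >> magic >> version;
    if (in.status() != QDataStream::Ok || magic != CHAT_FILE_MAGIC || version > CHAT_FILE_VERSION)
        return false;          // not a chat file, or written by a newer build

    chatData = file.readAll(); // the rest of the file is the serialized chat
    return true;
}
```

Rejecting files whose magic or version does not match is what makes it safe to later move or reorganize the chat directory, as the same commit does.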
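Commits 280ad04c63 and 3a039c8dc1 describe bringing the GUI up immediately and deserializing the saved chats one at a time instead of blocking until all of them are done. A minimal sketch of that pattern, assuming a hypothetical ChatRestoreWorker class and a "*.chat" file extension (neither name is taken from the repository), could look like:

```cpp
#include <QObject>
#include <QThread>
#include <QDir>
#include <QFile>
#include <QStringList>

// Hypothetical worker: restores saved chats one file at a time off the GUI
// thread, emitting a signal per chat so the window can populate as they arrive.
class ChatRestoreWorker : public QObject
{
    Q_OBJECT
public:
    explicit ChatRestoreWorker(const QString &dir) : m_dir(dir) {}

public slots:
    void run()
    {
        const QDir dir(m_dir);
        const QStringList files = dir.entryList(QStringList() << "*.chat", QDir::Files);
        for (const QString &name : files) {
            QFile file(dir.filePath(name));
            if (!file.open(QIODevice::ReadOnly))
                continue;
            // ... validate magic/version and deserialize a single chat here ...
            emit chatRestored(name); // GUI appends this chat to its list
        }
        emit finished();
    }

signals:
    void chatRestored(const QString &name);
    void finished();

private:
    QString m_dir;
};

// Usage sketch: the GUI shows immediately; restoration streams in afterwards.
// auto *thread = new QThread;
// auto *worker = new ChatRestoreWorker(chatsDirectory);
// worker->moveToThread(thread);
// QObject::connect(thread, &QThread::started, worker, &ChatRestoreWorker::run);
// QObject::connect(worker, &ChatRestoreWorker::finished, thread, &QThread::quit);
// thread->start();
```

Because the worker emits one signal per restored chat, the chat list fills in incrementally rather than stalling the interface until every file has been read.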