Zach Nussbaum
58069dc8b9
chore: import for mpt
2023-05-08 12:21:30 -04:00
Adam Treat
40b976436a
Only generate three words max.
2023-05-08 12:21:30 -04:00
Adam Treat
3c30310539
Convert the old format properly.
2023-05-08 05:53:16 -04:00
Adam Treat
7b66cb7119
Add debug for chatllm model loading and fix order of getting rid of the dummy chat when no models are restored.
2023-05-07 14:40:02 -04:00
Adam Treat
e397fda250
Bump the version and save up to an order of magnitude of disk space for chat files.
2023-05-05 20:12:00 -04:00
Adam Treat
5b71d39024
Don't crash if state has not been set.
2023-05-05 10:00:17 -04:00
Adam Treat
f291853e51
First attempt at providing a persistent chat list experience.
Limitations:
1) Context is not restored for gpt-j models
2) When you switch between different model types in an existing chat,
the context and the entire conversation are lost
3) The settings are not chat- or conversation-specific
4) The persisted chat files are very large due to how much data the
llama.cpp backend tries to persist. Need to investigate how we can
shrink this.
2023-05-04 15:31:41 -04:00
Adam Treat
081d32bd97
Restore the model when switching chats.
2023-05-03 12:45:14 -04:00
Adam Treat
4a09f0f0ec
More extensive usage stats to help diagnose errors and problems in the UI.
2023-05-02 20:31:17 -04:00
Adam Treat
f13f4f4700
Generate names via LLM.
2023-05-02 11:19:17 -04:00
Adam Treat
412cad99f2
Hot swapping of conversations. Destroys context for now.
2023-05-01 20:27:07 -04:00
Adam Treat
4d87c46948
Major refactor in prep for multiple conversations.
2023-05-01 09:10:05 -04:00