Adam Treat
7094fd0788
Gracefully handle the case where a previous chat's model has gone away.
2023-05-08 20:51:03 -04:00
Adam Treat
9da4fac023
Fix gptj to lower memory requirements for the kv cache, and add versioning to the internal state to smoothly handle such fixes in the future.
2023-05-08 17:23:02 -04:00
Zach Nussbaum
d928540a08
feat: load model
2023-05-08 12:21:30 -04:00
Zach Nussbaum
f8f248c18a
chore: import for mpt
2023-05-08 12:21:30 -04:00
Adam Treat
da5b057041
Only generate three words max.
2023-05-08 12:21:30 -04:00
Adam Treat
4bcc88b051
Convert the old format properly.
2023-05-08 05:53:16 -04:00
Adam Treat
fb464bb60e
Add debug for chatllm model loading and fix order of getting rid of the dummy chat when no models are restored.
2023-05-07 14:40:02 -04:00
Adam Treat
eb294d5623
Bump the version and save up to an order of magnitude of disk space for chat files.
2023-05-05 20:12:00 -04:00
Adam Treat
a548448fcf
Don't crash if state has not been set.
2023-05-05 10:00:17 -04:00
Adam Treat
01e582f15b
First attempt at providing a persistent chat list experience.

Limitations:
1) Context is not restored for gpt-j models
2) When you switch between different model types in an existing chat,
the context and the entire conversation are lost
3) The settings are not chat or conversation specific
4) The persisted chat files are very large due to how much data the
llama.cpp backend tries to persist. Need to investigate how we can
shrink this.
2023-05-04 15:31:41 -04:00
Adam Treat
02c9bb4ac7
Restore the model when switching chats.
2023-05-03 12:45:14 -04:00
Adam Treat
db094c5b92
More extensive usage stats to help diagnose errors and problems in the UI.
2023-05-02 20:31:17 -04:00
Adam Treat
c217b7538a
Generate names via LLM.
2023-05-02 11:19:17 -04:00
Adam Treat
d91dd567e2
Hot swapping of conversations. Destroys context for now.
2023-05-01 20:27:07 -04:00
Adam Treat
414a12c33d
Major refactor in prep for multiple conversations.
2023-05-01 09:10:05 -04:00