Commit Graph

323 Commits (3c30310539d10a82406dbb9a2a5becbc6b315d11)

Author SHA1 Message Date
Adam Treat 3c30310539 Convert the old format properly. 1 year ago
Adam Treat 7b66cb7119 Add debug for chatllm model loading and fix order of getting rid of the dummy chat when no models are restored. 1 year ago
Adam Treat 9bd5609ba0 Deserialize one at a time and don't block the GUI until all of them are done. 1 year ago
Adam Treat 86da175e1c Use the last LTS for this. 1 year ago
Adam Treat ab13148430 The GUI should come up immediately and not wait on deserializing from disk. 1 year ago
Adam Treat eb7b61a76d Move the location of the chat files to the model download directory and add a magic+version. 1 year ago
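The commits above change how chats are persisted: the files move to the model download directory and gain a magic+version header, and deserialization happens off the GUI thread so the window can come up immediately. A minimal sketch of that scheme, assuming Qt's QDataStream and QtConcurrent; the constants, names, and payload format here are illustrative, not the project's actual values:

```cpp
// Illustrative sketch: chat save file with a magic number + version header,
// loaded on a worker thread so the GUI is never blocked on disk I/O.
// CHAT_MAGIC, CHAT_VERSION, and the payload layout are hypothetical.
#include <QDataStream>
#include <QFile>
#include <QtConcurrent>

static const quint32 CHAT_MAGIC   = 0xAABBCCDD; // hypothetical magic number
static const qint32  CHAT_VERSION = 2;          // bump when the format changes

bool saveChat(const QString &path, const QByteArray &chatPayload)
{
    QFile file(path);
    if (!file.open(QIODevice::WriteOnly))
        return false;
    QDataStream out(&file);
    out << CHAT_MAGIC << CHAT_VERSION; // header lets old formats be detected and converted
    out << chatPayload;                // serialized chat contents
    return out.status() == QDataStream::Ok;
}

void loadChatsInBackground(const QStringList &paths)
{
    // Deserialize one file at a time on a worker thread.
    (void)QtConcurrent::run([paths]() {
        for (const QString &path : paths) {
            QFile file(path);
            if (!file.open(QIODevice::ReadOnly))
                continue;
            QDataStream in(&file);
            quint32 magic; qint32 version;
            in >> magic >> version;
            if (magic != CHAT_MAGIC || version > CHAT_VERSION)
                continue; // unknown or newer format: skip (or convert the old format here)
            QByteArray chatPayload;
            in >> chatPayload;
            // ...hand the restored chat back to the GUI thread, e.g. via a queued signal
        }
    });
}
```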
Aaron Miller 7a8f437f8f add name to LICENSE 1 year ago
Adam Treat e397fda250 Bump the version and save up to an order of magnitude of disk space for chat files. 1 year ago
Adam Treat 8d2c8c8cb0 Turn off saving chats to disk by default as it eats so much disk space. 1 year ago
Adam Treat 6d4d86d07c Bump the version. 1 year ago
Adam Treat d0d5d84e06 Add reverse prompt support for gptj too. 1 year ago
Adam Treat 06bb6960d4 Add about dialog. 1 year ago
Adam Treat 659442394f Persistent state for gpt-j models too. 1 year ago
Adam Treat 5b71d39024 Don't crash if state has not been set. 1 year ago
Aaron Miller 019f6d0103 include <cstdint> in llmodel.h 1 year ago
Adam Treat f291853e51 First attempt at providing a persistent chat list experience.
Limitations:

1) Context is not restored for gpt-j models
2) When you switch between different model types in an existing chat,
   the context and the entire conversation are lost
3) The settings are not chat- or conversation-specific
4) The persisted chat files are very large due to how much data the
   llama.cpp backend tries to persist; need to investigate how we can
   shrink this (see the sketch below)
1 year ago
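Limitation 4 above comes from how much backend state has to be captured per chat. A rough sketch of what saving and restoring a llama.cpp context involves, assuming the llama.cpp C state API of that era (function names have since changed across versions); error handling is omitted:

```cpp
// Sketch of why persisted chats are large: the llama.cpp backend state
// (KV cache, logits, embeddings) is copied wholesale into the save file.
#include "llama.h"
#include <cstdint>
#include <vector>

std::vector<uint8_t> captureState(llama_context *ctx)
{
    // State size scales with the context window and KV cache, hence the large files.
    const size_t size = llama_get_state_size(ctx);
    std::vector<uint8_t> state(size);
    llama_copy_state_data(ctx, state.data());
    return state;
}

void restoreState(llama_context *ctx, std::vector<uint8_t> &state)
{
    // Restoring this blob is what lets a conversation pick up where it left off
    // when switching between chats.
    llama_set_state_data(ctx, state.data());
}
```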
Adam Treat 081d32bd97 Restore the model when switching chats. 1 year ago
Adam Treat 0bb52fc5fe Experiment with a much shorter default prompt template. 1 year ago
Adam Treat 82c1d08b33 Add reverse prompts for llama models. 1 year ago
Adam Treat 01accf9e33 Don't exceed the window size for dialogs. 1 year ago
Adam Treat 0f70289ba4 Changes the datalake feature so all conversations are captured when opted-in. 1 year ago
Aaron Miller edad3baa99 download: make model downloads resumable
* save files as `incomplete-{filename}` in the dest folder
* rename into place after hash is confirmed or delete if hash is bad
* resume downloads using http `range`
* if the download is resumed from a different app session, rewind a bit;
  this deals with the case where the file size changes before the
  content is fully flushed out
* flush the dest file at the end of readyRead; this mitigates the above
  and provides backpressure on the download if the destination disk
  is slower than the network connection
1 year ago
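A minimal sketch of the resume scheme described in this commit, assuming Qt's QNetworkAccessManager; the function name, rewind amount, and file handling are illustrative only, and hash verification and error handling are omitted:

```cpp
// Illustrative sketch of resumable downloads: partial data is written to
// "incomplete-{filename}", resumed via an HTTP Range header, and flushed
// on every readyRead for crash safety and backpressure.
#include <QDir>
#include <QFile>
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>

QNetworkReply *resumeDownload(QNetworkAccessManager &nam, const QUrl &url,
                              const QDir &destDir, const QString &filename)
{
    // Partial data lives in "incomplete-{filename}" until the hash checks out.
    auto *partial = new QFile(destDir.filePath("incomplete-" + filename));
    partial->open(QIODevice::ReadWrite);

    // When resuming from a different app session, rewind a bit (1 MiB here,
    // arbitrary) in case the tail of the file was never flushed last time.
    qint64 offset = qMax<qint64>(0, partial->size() - 1024 * 1024);
    partial->seek(offset);

    QNetworkRequest request(url);
    // Ask the server for the remainder of the file via an HTTP Range header.
    request.setRawHeader("Range", QByteArray("bytes=") + QByteArray::number(offset) + "-");

    QNetworkReply *reply = nam.get(request);
    QObject::connect(reply, &QNetworkReply::readyRead, partial, [reply, partial]() {
        partial->write(reply->readAll());
        // Flushing here bounds data loss on a crash and applies backpressure
        // when the destination disk is slower than the network.
        partial->flush();
    });
    QObject::connect(reply, &QNetworkReply::finished, partial,
                     [reply, partial, destDir, filename]() {
        partial->close();
        // Verify the hash here, then rename into place, or delete on mismatch.
        partial->rename(destDir.filePath(filename));
        partial->deleteLater();
        reply->deleteLater();
    });
    return reply;
}
```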
Adam Treat 4a09f0f0ec More extensive usage stats to help diagnose errors and problems in the UI. 1 year ago
Adam Treat cb085a6418 Some more startup info to help determine what hardware we need to support. 1 year ago
Adam Treat 21dc522200 Don't block the GUI when reloading via combobox. 1 year ago
Adam Treat 48837a62fa Provide a confirm button for deletion of chats. 1 year ago
Adam Treat bb3e08e3dd Use different border colors if we're current or being edited. 1 year ago
Adam Treat f4f27fc38f Update the right index when removing. 1 year ago
Adam Treat f13f4f4700 Generate names via llm. 1 year ago
Adam Treat a62fafc308 Always have a chat. 1 year ago
Adam Treat 86132cfc8b Don't add new chats willy-nilly. 1 year ago
Adam Treat 118e0bdc44 Allow removing chats. 1 year ago
Adam Treat 412cad99f2 Hot swapping of conversations. Destroys context for now. 1 year ago
Adam Treat a48226613c Turn the chat list into a model. 1 year ago
Adam Treat 679b61ee07 Provide convenience methods for adding/removing/changing chat. 1 year ago
Adam Treat 8f80f8e3a2 Break out the drawer into own component. 1 year ago
Adam Treat 6e6b96375d Handle the forwarding of important signals from the LLM object so QML doesn't have to deal with which chat is current. 1 year ago
Adam Treat c0d4a9d426 Continue to shrink the API space for QML and the backend. 1 year ago
Adam Treat ed59190e48 Consolidate these into a single API from QML to the backend. 1 year ago
Adam Treat 4d87c46948 Major refactor in prep for multiple conversations. 1 year ago
Adam Treat e005ab8c0a Move the reset and id into the chat object. 1 year ago
Adam Treat d1e3198b65 Add new C++ version of the chat model. Getting ready for chat history. 1 year ago
AT 65d4b8a886 Update README.md 1 year ago
AT d3d8229b04 Update README.md 1 year ago
AT 8e696bb4e4 Update README.md 1 year ago
Adam Treat 9f323759ce Remove these as it is mitigated by the repeat penalty, and models really should train this out. 1 year ago
AT ef2e1bd4fe Update README.md 1 year ago
AT f37e9f9039 Update README.md 1 year ago
Adam Treat 13401fc52f Bump the version. 1 year ago
Adam Treat a6ca45c9dd Use the universal sep. 1 year ago