gpt4all/llmodel
Adam Treat 01e582f15b First attempt at providing a persistent chat list experience. 2023-05-04 15:31:41 -04:00
Limitations:

1) Context is not restored for gpt-j models
2) When you switch between different model types in an existing chat,
   the context and the entire conversation are lost
3) The settings are not chat- or conversation-specific
4) The persisted chat files are very large because of how much data
   the llama.cpp backend tries to persist. Need to investigate how
   we can shrink this.
llama.cpp@7296c961d9 Update to latest llama.cpp 2023-04-28 11:03:16 -04:00
CMakeLists.txt Don't set the app version in the llmodel. 2023-04-29 10:31:12 -04:00
gptj.cpp Load models from filepath only. 2023-04-28 20:15:10 -04:00
gptj.h Load models from filepath only. 2023-04-28 20:15:10 -04:00
llamamodel.cpp First attempt at providing a persistent chat list experience. 2023-05-04 15:31:41 -04:00
llamamodel.h First attempt at providing a persistent chat list experience. 2023-05-04 15:31:41 -04:00
llmodel_c.cpp First attempt at providing a persistent chat list experience. 2023-05-04 15:31:41 -04:00
llmodel_c.h First attempt at providing a persistent chat list experience. 2023-05-04 15:31:41 -04:00
llmodel.h First attempt at providing a persistent chat list experience. 2023-05-04 15:31:41 -04:00
utils.cpp Move the backend code into own subdirectory and make it a shared library. Begin fleshing out the C api wrapper that bindings can use. 2023-04-26 08:22:38 -04:00
utils.h Move the backend code into own subdirectory and make it a shared library. Begin fleshing out the C api wrapper that bindings can use. 2023-04-26 08:22:38 -04:00