gpt4all/llmodel (latest commit 2023-05-08 12:21:30 -04:00)
| File | Last commit message | Date |
| --- | --- | --- |
| `llama.cpp` @ 7296c961d9 (submodule) | Update to latest llama.cpp | 2023-04-28 11:03:16 -04:00 |
| `CMakeLists.txt` | Scaffolding for the mpt <-> ggml project. | 2023-05-08 12:21:30 -04:00 |
| `gptj.cpp` | Add debug for chatllm model loading and fix order of getting rid of the … | 2023-05-07 14:40:02 -04:00 |
| `gptj.h` | Persistent state for gpt-j models too. | 2023-05-05 10:00:17 -04:00 |
| `llamamodel.cpp` | First attempt at providing a persistent chat list experience. | 2023-05-04 15:31:41 -04:00 |
| `llamamodel.h` | First attempt at providing a persistent chat list experience. | 2023-05-04 15:31:41 -04:00 |
| `llmodel_c.cpp` | feat: load model | 2023-05-08 12:21:30 -04:00 |
| `llmodel_c.h` | feat: load model | 2023-05-08 12:21:30 -04:00 |
| `llmodel.h` | include `<cstdint>` in llmodel.h | 2023-05-04 20:36:19 -04:00 |
| `mpt.cpp` | Match Helly's impl of kv cache. | 2023-05-08 12:21:30 -04:00 |
| `mpt.h` | Scaffolding for the mpt <-> ggml project. | 2023-05-08 12:21:30 -04:00 |
| `utils.cpp` | Move the backend code into own subdirectory and make it a shared library. Begin fleshing out the C api wrapper that bindings can use. | 2023-04-26 08:22:38 -04:00 |
| `utils.h` | Move the backend code into own subdirectory and make it a shared library. Begin fleshing out the C api wrapper that bindings can use. | 2023-04-26 08:22:38 -04:00 |
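As the `utils.cpp` commit message notes, the backend lives in its own subdirectory, is built as a shared library, and is exposed through a C API wrapper (`llmodel_c.h` / `llmodel_c.cpp`) that language bindings can call. A common way to bridge a C++ class hierarchy like `LLModel` to a C ABI is the opaque-handle pattern. Below is a minimal, self-contained sketch of that pattern under stated assumptions: the names `StubModel`, `llmodel_create`, `llmodel_load`, and `llmodel_destroy` are illustrative placeholders, not the actual `llmodel_c.h` API.

```cpp
// Minimal sketch of the opaque-handle pattern a C API wrapper such as
// llmodel_c.h typically uses to expose a C++ backend to bindings.
// All names below are illustrative, not the real llmodel_c.h API.
#include <cstdio>
#include <string>

// C++ side: an abstract interface that concrete backends
// (gptj, llama, mpt) would implement.
class LLModel {
public:
    virtual ~LLModel() = default;
    virtual bool loadModel(const std::string &path) = 0;
    virtual bool isModelLoaded() const = 0;
};

// Stand-in concrete backend so the sketch compiles on its own.
class StubModel : public LLModel {
    bool loaded = false;
public:
    bool loadModel(const std::string &path) override {
        std::printf("loading %s\n", path.c_str());
        loaded = true;
        return true;
    }
    bool isModelLoaded() const override { return loaded; }
};

// C side: an opaque handle plus plain functions, so any language with
// a C FFI can drive the shared library without seeing C++ types.
extern "C" {

typedef void *llmodel_model; // opaque to callers

llmodel_model llmodel_create() { return new StubModel(); }

bool llmodel_load(llmodel_model m, const char *path) {
    return static_cast<LLModel *>(m)->loadModel(path);
}

void llmodel_destroy(llmodel_model m) {
    delete static_cast<LLModel *>(m);
}

} // extern "C"

int main() {
    llmodel_model m = llmodel_create();
    llmodel_load(m, "ggml-model.bin");
    llmodel_destroy(m);
}
```

The opaque handle keeps C++ types out of the exported ABI, which is what lets bindings in other languages link against the shared library while the concrete model classes behind `LLModel` evolve freely.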