| File | Last commit | Date |
| --- | --- | --- |
| llama.cpp @ 7296c961d9 | Update to latest llama.cpp | 2023-04-28 11:03:16 -04:00 |
| CMakeLists.txt | Scaffolding for the mpt <-> ggml project. | 2023-05-08 12:21:30 -04:00 |
| gptj.h | Persistent state for gpt-j models too. | 2023-05-05 10:00:17 -04:00 |
| llmodel_c.cpp | feat: load model | 2023-05-08 12:21:30 -04:00 |
| llmodel_c.h | feat: load model | 2023-05-08 12:21:30 -04:00 |
| llmodel.h | include <cstdint> in llmodel.h | 2023-05-04 20:36:19 -04:00 |
| mpt.cpp | Match Helly's impl of kv cache. | 2023-05-08 12:21:30 -04:00 |
| mpt.h | Scaffolding for the mpt <-> ggml project. | 2023-05-08 12:21:30 -04:00 |