Commit Graph

12 Commits (e1eac00ee067a9fd336e89a3191d89fd01daa934)

Author SHA1 Message Date
Jared Van Bortel 061d1969f8 expose n_gpu_layers parameter of llama.cpp (#1890) 8 months ago
    Also dynamically limit the GPU layers and context length fields to the maximum supported by the model.
    Signed-off-by: Jared Van Bortel <jared@nomic.ai>
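
For context, a minimal sketch of how this knob reaches llama.cpp through its C API; the names below are from llama.h, not GPT4All's own wrapper code:

```cpp
// Sketch: offload model layers to the GPU via llama.cpp's C API.
#include "llama.h"

llama_model *load_with_offload(const char *path, int n_gpu_layers) {
    llama_model_params params = llama_model_default_params();
    params.n_gpu_layers = n_gpu_layers; // 0 = CPU only; cap at the model's layer count
    return llama_load_model_from_file(path, params);
}
```

Clamping the UI field to the model's actual layer count, as the commit describes, keeps this value inside the supported range.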
Jared Van Bortel d1c56b8b28 Implement configurable context length (#1749) 9 months ago
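
Likewise, a minimal sketch of a configurable context length in llama.cpp's C API terms (GPT4All plumbs the setting through its own backend):

```cpp
// Sketch: create a llama.cpp context with a user-chosen token window.
#include <cstdint>
#include "llama.h"

llama_context *make_context(llama_model *model, uint32_t n_ctx) {
    llama_context_params params = llama_context_default_params();
    params.n_ctx = n_ctx; // requested context length in tokens
    return llama_new_context_with_model(model, params);
}
```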
Aaron Miller ad0e7fd01f chatgpt: ensure no extra newline in header 1 year ago
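
The bug class here is a header value picking up a stray trailing newline, for example from an API key read off disk; a hedged Qt sketch (helper name hypothetical, not the actual chatgpt backend code):

```cpp
// Sketch: trim whitespace before embedding a value in an HTTP header.
#include <QNetworkRequest>
#include <QString>

void setAuthHeader(QNetworkRequest &request, const QString &apiKey) {
    const QString key = apiKey.trimmed(); // drop any trailing '\n'
    request.setRawHeader("Authorization", ("Bearer " + key).toUtf8());
}
```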
Adam Treat 0d726b22b8 When we explicitly cancel an operation we shouldn't throw an error. 1 year ago
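
In Qt networking terms, an explicit abort() reports a distinct error code that can be filtered from real failures; a minimal sketch:

```cpp
// Sketch: treat a user-initiated cancel as a non-error outcome.
#include <QNetworkReply>

void handleFinished(QNetworkReply *reply) {
    if (reply->error() == QNetworkReply::OperationCanceledError) {
        // we called abort() ourselves: clean up quietly
    } else if (reply->error() != QNetworkReply::NoError) {
        // a genuine failure: surface it to the user
    }
    reply->deleteLater();
}
```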
Adam Treat 34a3b9c857 Don't block on exit when not connected. 1 year ago
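
The shape of that fix, sketched with hypothetical member names (the real class and fields differ):

```cpp
// Sketch: only block for teardown when a connection is actually live.
#include <QNetworkReply>
#include <QThread>

class Worker {
public:
    void shutdown() {
        if (!m_connected)
            return;           // nothing in flight: exit immediately
        if (m_reply)
            m_reply->abort(); // cancel the live request first
        m_thread.quit();
        m_thread.wait();      // blocking is safe: the thread winds down promptly
    }
private:
    bool m_connected = false;         // hypothetical state flag
    QNetworkReply *m_reply = nullptr; // hypothetical in-flight reply
    QThread m_thread;
};
```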
Adam Treat 4f9e489093 Don't use a local event loop which can lead to recursion and crashes. 1 year ago
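
The hazard and its replacement, in generic Qt terms: a nested QEventLoop re-enters application code while waiting, whereas a signal/slot connection keeps control flow flat.

```cpp
// Sketch: nested event loop (problematic) vs. asynchronous handling (safer).
#include <QEventLoop>
#include <QNetworkReply>
#include <QObject>

// Problematic: exec() spins a nested loop, so other slots can fire and
// re-enter the caller while we "wait".
void waitBlocking(QNetworkReply *reply) {
    QEventLoop loop;
    QObject::connect(reply, &QNetworkReply::finished, &loop, &QEventLoop::quit);
    loop.exec();
}

// Safer: handle completion when it arrives; no re-entrancy window.
void handleAsync(QNetworkReply *reply) {
    QObject::connect(reply, &QNetworkReply::finished, reply, [reply] {
        // process the finished reply here
        reply->deleteLater();
    });
}
```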
Adam Treat 8467e69f24 Check that we're not null. This is necessary because the loop can make us recursive. Need to fix that. 1 year ago
Aaron Miller b19a3e5b2c add requiredMem method to llmodel impls 1 year ago
    Most of these can just shortcut out of the model loading logic; llama is a bit worse to deal with because we submodule it, so I have to at least parse the hparams. Then I just use the size on disk as an estimate for the mem size (which seems reasonable, since we mmap() the llama files anyway).
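
A sketch of the estimate described above: since the llama files are mmap()ed, on-disk size approximates resident memory. The free function is hypothetical; the real requiredMem is a method on the llmodel implementations.

```cpp
// Sketch: use file size as a memory-requirement estimate for mmap()ed models.
#include <cstddef>
#include <filesystem>
#include <string>
#include <system_error>

size_t requiredMem(const std::string &modelPath) {
    std::error_code ec;
    const auto bytes = std::filesystem::file_size(modelPath, ec);
    return ec ? 0 : static_cast<size_t>(bytes); // 0 = size unknown
}
```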
Juuso Alasuutari 81fdc28e58 llmodel: constify LLModel::threadCount() 1 year ago
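
The change in miniature, on a toy stand-in rather than the real class: a getter that mutates nothing is declared const, so it can be called through const references to the model.

```cpp
// Sketch: constify a read-only getter.
#include <cstdint>

class LLModel {
public:
    int32_t threadCount() const { return m_threadCount; } // previously non-const
private:
    int32_t m_threadCount = 4; // illustrative default
};
```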
Adam Treat 79d6243fe1 Use the default for max_tokens to avoid errors. 1 year ago
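
One way to "use the default" for an OpenAI-style API is simply not to send the field; a hedged Qt/JSON sketch (function name hypothetical):

```cpp
// Sketch: omit "max_tokens" so the server applies its own default.
#include <QJsonArray>
#include <QJsonObject>
#include <QString>

QJsonObject buildRequestBody(const QString &model, const QJsonArray &messages) {
    QJsonObject body;
    body.insert("model", model);
    body.insert("messages", messages);
    // no "max_tokens" key: an explicit out-of-range value can be rejected
    return body;
}
```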
Adam Treat f931de21c5 Add save/restore to chatgpt chats and allow serialize/deserialize from disk. 1 year ago
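
A minimal sketch of one plausible disk round-trip with QDataStream; the field layout is hypothetical, not the app's actual on-disk format:

```cpp
// Sketch: serialize a chat transcript to disk and restore it.
#include <QDataStream>
#include <QFile>
#include <QString>
#include <QStringList>

bool saveChat(const QString &path, const QStringList &messages) {
    QFile file(path);
    if (!file.open(QIODevice::WriteOnly))
        return false;
    QDataStream out(&file);
    out << messages; // QDataStream serializes Qt value types natively
    return true;
}

bool restoreChat(const QString &path, QStringList &messages) {
    QFile file(path);
    if (!file.open(QIODevice::ReadOnly))
        return false;
    QDataStream in(&file);
    in >> messages;
    return in.status() == QDataStream::Ok;
}
```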
Adam Treat dd27c10f54 Preliminary support for chatgpt models. 1 year ago