Commit Graph

9 Commits (f0faa23ad5c248c0597c81c853bf8bdfe7d43a80)

Author SHA1 Message Date
Adam Treat 0d726b22b8 When we explicitly cancel an operation we shouldn't throw an error. 1 year ago
Adam Treat 34a3b9c857 Don't block on exit when not connected. 1 year ago
Adam Treat 4f9e489093 Don't use a local event loop which can lead to recursion and crashes. 1 year ago
Adam Treat 8467e69f24 Check that we're not null. This is necessary because the loop can make us recursive. Need to fix that. 1 year ago
Aaron Miller b19a3e5b2c add requiredMem method to llmodel impls 1 year ago
    Most of these can just shortcut out of the model loading logic. llama is a bit worse to deal with because we submodule it, so I have to at least parse the hparams; then I just use the size on disk as an estimate for the mem size (which seems reasonable since we mmap() the llama files anyway).
Juuso Alasuutari 81fdc28e58 llmodel: constify LLModel::threadCount() 1 year ago
Adam Treat 79d6243fe1 Use the default for max_tokens to avoid errors. 1 year ago
Adam Treat f931de21c5 Add save/restore to chatgpt chats and allow serialize/deserialize from disk. 1 year ago
Adam Treat dd27c10f54 Preliminary support for chatgpt models. 1 year ago