backend: bump llama.cpp for VRAM leak fix when switching models

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
pull/1788/merge
Jared Van Bortel 8 months ago
parent 6db5307730
commit eadc3b8d80

@@ -1 +1 @@
-Subproject commit e18ff04f9fcff1c56fa50e455e3da6807a057612
+Subproject commit 47aec1bcc09e090f0b8f196dc0a4e43b89507e4a
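The diff above is just a submodule pointer bump: the superproject records a new pinned commit for the `llama.cpp` subproject. A minimal sketch of those mechanics, using throwaway local repositories in place of the real GPT4All and llama.cpp repos (all paths, identities, and messages here are hypothetical):

```shell
set -e
tmp=$(mktemp -d)
# Hypothetical committer identity for the throwaway repos
G="git -c user.email=ci@example.com -c user.name=ci"

# Stand-in for the llama.cpp repo, with an old commit and a newer fix commit
$G init -q "$tmp/dep"
$G -C "$tmp/dep" commit -q --allow-empty -m "old"
old=$($G -C "$tmp/dep" rev-parse HEAD)
$G -C "$tmp/dep" commit -q --allow-empty -m "fix VRAM leak on model switch"
new=$($G -C "$tmp/dep" rev-parse HEAD)

# Superproject that pins the dependency at the old commit
$G init -q "$tmp/app"
cd "$tmp/app"
$G -c protocol.file.allow=always submodule add -q "$tmp/dep" llama.cpp
git -C llama.cpp checkout -q "$old"
$G commit -qam "pin llama.cpp"

# The bump: move the submodule checkout to the new commit and record it,
# which produces exactly a one-line "Subproject commit" diff like the one above
git -C llama.cpp checkout -q "$new"
$G commit -qam "backend: bump llama.cpp"

# The superproject now pins the new commit
pinned=$(git rev-parse HEAD:llama.cpp)
echo "$pinned"
```

Committing only the gitlink change (no submodule source is copied) is what keeps such bumps to a single-line diff in the superproject.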