mirror of https://github.com/nomic-ai/gpt4all
backend: bump llama.cpp for VRAM leak fix when switching models
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
pull/1788/merge
parent 6db5307730
commit eadc3b8d80
@@ -1 +1 @@
-Subproject commit e18ff04f9fcff1c56fa50e455e3da6807a057612
+Subproject commit 47aec1bcc09e090f0b8f196dc0a4e43b89507e4a