Commit Graph

17 Commits

Aaron Miller
d3ba1295a7
Metal+LLama take two (#929)
Support latest llama with Metal
---------

Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-09 16:48:46 -04:00
Adam Treat
b162b5c64e Revert "llama on Metal (#885)"
This reverts commit c55f81b860.
2023-06-09 15:08:46 -04:00
Aaron Miller
c55f81b860
llama on Metal (#885)
Support latest llama with Metal

---------

Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-09 14:58:12 -04:00
Adam Treat
301d2fdbea Fix up for newer models on reset context. This prevents the model from failing completely after a context reset. 2023-06-04 19:31:20 -04:00
AT
bbe195ee02
Backend prompt dedup (#822)
* Deduplicated prompt() function code
2023-06-04 08:59:24 -04:00
Peter Gagarinov
23391d44e0 Only default mlock on macOS where swap seems to be a problem
Repeating the change that was once made in https://github.com/nomic-ai/gpt4all/pull/663 but was then overridden by 48275d0dcc (see the sketch after this entry).

Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
2023-06-03 07:51:18 -04:00
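
A minimal sketch of the platform-conditional default described in the entry above, assuming an illustrative parameter struct rather than the actual gpt4all types:

```cpp
// Sketch only: default use_mlock on macOS, where swapping model weights out
// appears to hurt responsiveness; leave it off elsewhere. The struct and
// field names are illustrative, not the real gpt4all API.
struct ModelParams {
#ifdef __APPLE__
    bool use_mlock = true;   // pin model weights in RAM on macOS
#else
    bool use_mlock = false;  // let the OS page as usual on other platforms
#endif
};
```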
niansa/tuxifan
f3564ac6b9
Fixed tons of warnings and clazy findings (#811) 2023-06-02 15:46:41 -04:00
Adam Treat
a41bd6ac0a Shrink the copied-and-pasted code and share more code between the backend model implementations. 2023-06-02 07:20:59 -04:00
niansa
a3d08cdcd5 Dlopen better implementation management (Version 2) 2023-06-01 07:44:15 -04:00
AT
48275d0dcc
Dlopen backend 5 (#779)
Major change to the backend that allows for pluggable versions of llama.cpp/ggml. This was squash-merged from dlopen_backend_5, where the history is preserved; a sketch of the dlopen pattern follows after this entry.
2023-05-31 17:04:01 -04:00
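
As a rough illustration of the pluggable-backend idea above: each llama.cpp/ggml build variant is compiled into its own shared library that exports a factory function, and the loader picks one at runtime with dlopen. The export name create_model and the loader function are assumptions for this sketch, not the repository's actual symbols.

```cpp
// Sketch of dlopen-based backend selection; symbol and type names are
// illustrative, not the actual gpt4all-backend interface.
#include <dlfcn.h>
#include <stdexcept>
#include <string>

class LLModel;                    // opaque backend interface
using CreateFn = LLModel* (*)();  // assumed factory signature

LLModel* loadBackend(const std::string& libraryPath) {
    void* handle = dlopen(libraryPath.c_str(), RTLD_NOW | RTLD_LOCAL);
    if (!handle)
        throw std::runtime_error(dlerror());
    // "create_model" is an assumed export name for illustration only.
    auto create = reinterpret_cast<CreateFn>(dlsym(handle, "create_model"));
    if (!create)
        throw std::runtime_error(dlerror());
    return create();
}
```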
Adam Treat
9bfff8bfcb Add new reverse prompt for new localdocs context feature. 2023-05-25 11:28:06 -04:00
Juuso Alasuutari
81fdc28e58 llmodel: constify LLModel::threadCount() 2023-05-22 08:54:46 -04:00
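
Constifying an accessor like this is a small API-hygiene change; a simplified sketch (not the full LLModel class) of what the const-qualified getter looks like:

```cpp
// Simplified sketch: marking the getter const lets callers query the thread
// count through const references or pointers to the model.
#include <cstdint>

class LLModel {
public:
    void setThreadCount(int32_t n) { m_threadCount = n; }
    int32_t threadCount() const { return m_threadCount; }  // now const

private:
    int32_t m_threadCount = 4;
};
```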
Adam Treat
8204c2eb80 Only default mlock on macOS where swap seems to be a problem. 2023-05-21 10:27:04 -04:00
Adam Treat
aba1147a22 Always default mlock to true. 2023-05-20 21:16:15 -04:00
aaron miller
e6fd0a240d backend: fix buffer overrun in repeat penalty code
Caught with AddressSanitizer running a basic prompt test against llmodel standalone. This fix allows ASan builds to complete a simple prompt without illegal accesses, though notably several leaks remain (see the sketch after this entry).
2023-05-17 07:54:10 -04:00
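
A hedged sketch of the kind of bounds clamp such a fix typically involves; the function below is illustrative and not the actual llmodel code:

```cpp
// Illustrative fix: when applying a repeat penalty over the last N tokens,
// clamp the window to the number of tokens actually generated so the loop
// never reads before the start of the token history.
#include <algorithm>
#include <cstddef>
#include <vector>

void applyRepeatPenalty(std::vector<float>& logits,
                        const std::vector<int>& lastTokens,
                        std::size_t repeatLastN,
                        float penalty) {
    // Clamp the lookback window instead of trusting repeatLastN blindly.
    const std::size_t n = std::min(repeatLastN, lastTokens.size());
    for (std::size_t i = lastTokens.size() - n; i < lastTokens.size(); ++i) {
        float& l = logits[lastTokens[i]];
        l = l > 0 ? l / penalty : l * penalty;  // discourage repeated tokens
    }
}
```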
kuvaus
507e913faf
gpt4all-backend: Add MSVC support to backend (#595)
* Add MSVC compatibility

* Add _MSC_VER macro

---------

Co-authored-by: kuvaus <kuvaus@users.noreply.github.com>
2023-05-16 11:35:33 -04:00
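
The _MSC_VER macro is the standard way to gate MSVC-specific code paths; an illustrative example of such gating (not the actual diff from #595):

```cpp
// Illustrative _MSC_VER gating; the specific workaround is an example only.
#ifdef _MSC_VER
    #include <intrin.h>  // MSVC intrinsics header
    #define BACKEND_ALWAYS_INLINE __forceinline
#else
    #define BACKEND_ALWAYS_INLINE inline __attribute__((always_inline))
#endif
```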
Adam Treat
d918b02c29 Move the llmodel C API to new top-level directory and version it. 2023-05-10 11:46:40 -04:00
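
One common way to version a C API, sketched here with hypothetical names rather than the real llmodel C header:

```cpp
// Hypothetical sketch of C API versioning: export a version query so language
// bindings can check compatibility at load time. Names are illustrative.
extern "C" {

#define LLMODEL_C_API_VERSION 1

int llmodel_c_api_version(void) {
    return LLMODEL_C_API_VERSION;
}

}  // extern "C"
```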