Adam Treat
b36ea3dde5
Fix up for newer models on reset context. This prevents the model from failing completely after a context reset.
2023-06-04 19:31:20 -04:00
AT
b5971b0d41
Backend prompt dedup (#822)
* Deduplicated prompt() function code
2023-06-04 08:59:24 -04:00
niansa/tuxifan
8203d65445
Fixed tons of warnings and clazy findings (#811)
2023-06-02 15:46:41 -04:00
Adam Treat
7ee32d605f
Trying to shrink the copy-and-paste code and share more code between backend model implementations.
2023-06-02 07:20:59 -04:00
niansa
b68d359b4f
Better implementation management for the dlopen backend (Version 2)
2023-06-01 07:44:15 -04:00
AT
9c6c09cbd2
Dlopen backend 5 (#779)
A major change to the backend that allows for pluggable versions of llama.cpp/ggml. This was squash-merged from dlopen_backend_5, where the history is preserved.
2023-05-31 17:04:01 -04:00
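The commit above describes an architecture in which each supported build of llama.cpp/ggml is compiled as its own shared library and loaded at runtime. Below is a minimal sketch of that pattern, assuming a hypothetical `create_model` factory symbol exported by each plugin; the names are illustrative, not the actual gpt4all-backend symbols.

```cpp
// Minimal sketch of a dlopen-based pluggable backend, assuming each
// implementation is built as a shared library exporting a C factory
// function. The names (create_model, LLModelImpl) are placeholders.
#include <dlfcn.h>
#include <stdexcept>
#include <string>

struct LLModelImpl;  // opaque handle returned by the plugin

using create_model_fn = LLModelImpl *(*)();

LLModelImpl *load_model_impl(const std::string &library_path) {
    void *handle = dlopen(library_path.c_str(), RTLD_NOW | RTLD_LOCAL);
    if (!handle)
        throw std::runtime_error(dlerror());

    // Look up the factory symbol exported by this particular build
    // of llama.cpp/ggml.
    auto create = reinterpret_cast<create_model_fn>(dlsym(handle, "create_model"));
    if (!create) {
        dlclose(handle);
        throw std::runtime_error("plugin does not export create_model");
    }
    return create();
}
```

The benefit of this design is that several otherwise incompatible llama.cpp/ggml builds can coexist in one process, with the loader picking whichever implementation understands a given model file.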
Adam Treat
4a317eeb33
Revert "New tokenizer implementation for MPT and GPT-J"
This reverts commit ee3469ba6c.
2023-05-30 12:59:00 -04:00
Aaron Miller
ee3469ba6c
New tokenizer implementation for MPT and GPT-J
Improves output quality by making these tokenizers more closely
match the behavior of the Hugging Face `tokenizers`-based BPE
tokenizers these models were trained with.
Featuring:
* Fixed unicode handling (via ICU)
* Fixed BPE token merge handling
* Complete added vocabulary handling
2023-05-30 12:05:57 -04:00
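The improvements above concern BPE tokenization. As a rough illustration of the core idea only (not the actual implementation), a BPE tokenizer repeatedly merges the adjacent pair of pieces with the lowest merge rank learned during training. A minimal sketch follows, with unicode normalization (ICU) and added-vocabulary handling omitted.

```cpp
// Simplified sketch of the core BPE merge loop: repeatedly merge the
// adjacent token pair with the lowest merge rank until no ranked pair
// remains. Data structures and names are illustrative.
#include <limits>
#include <map>
#include <string>
#include <utility>
#include <vector>

std::vector<std::string> bpe_merge(
        std::vector<std::string> pieces,
        const std::map<std::pair<std::string, std::string>, int> &merge_ranks) {
    while (pieces.size() > 1) {
        int best_rank = std::numeric_limits<int>::max();
        size_t best_i = 0;
        for (size_t i = 0; i + 1 < pieces.size(); ++i) {
            auto it = merge_ranks.find({pieces[i], pieces[i + 1]});
            if (it != merge_ranks.end() && it->second < best_rank) {
                best_rank = it->second;
                best_i = i;
            }
        }
        if (best_rank == std::numeric_limits<int>::max())
            break;  // no more applicable merges
        pieces[best_i] += pieces[best_i + 1];
        pieces.erase(pieces.begin() + best_i + 1);
    }
    return pieces;
}
```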
Adam Treat
80024a029c
Add a new reverse prompt for the new localdocs context feature.
2023-05-25 11:28:06 -04:00
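A reverse prompt is a string that, once it appears at the end of the generated text, signals the runner to stop generating and hand control back. Below is a minimal sketch of that check; the function name and signature are assumptions for illustration.

```cpp
// Illustrative check for a reverse prompt: stop generating once the tail
// of the accumulated output matches one of the configured stop strings.
#include <string>
#include <vector>

bool hits_reverse_prompt(const std::string &output,
                         const std::vector<std::string> &reverse_prompts) {
    for (const auto &rp : reverse_prompts) {
        if (output.size() >= rp.size() &&
            output.compare(output.size() - rp.size(), rp.size(), rp) == 0)
            return true;  // generation should stop here
    }
    return false;
}
```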
Juuso Alasuutari
f2528e6f62
llmodel: constify LLModel::threadCount()
2023-05-22 08:54:46 -04:00
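Const-qualifying a read-only accessor such as `threadCount()` documents that it does not mutate the model and allows it to be called through a `const LLModel &`. A minimal sketch of the pattern, with member names assumed:

```cpp
// Sketch of const-qualifying a read-only accessor so it can be called on
// a const reference. Member names here are placeholders.
#include <cstdint>

class LLModel {
public:
    void setThreadCount(int32_t n) { m_threads = n; }
    int32_t threadCount() const { return m_threads; }  // const: no mutation

private:
    int32_t m_threads = 4;
};
```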
aaron miller
08f3bd2a82
backend: fix buffer overrun in repeat penalty code
Caught with AddressSanitizer while running a basic prompt test against llmodel
standalone. This fix allows ASan builds to complete a simple prompt
without illegal accesses, though notably several leaks remain.
2023-05-17 07:54:10 -04:00
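The overrun class fixed here typically comes from indexing a fixed-size window of recent tokens before that many tokens exist. A hedged sketch of a bounds-safe repetition penalty follows; the names are illustrative rather than the exact backend code.

```cpp
// Sketch of applying a repetition penalty over the last `repeat_last_n`
// tokens without reading past the start of the history: the window is
// clamped to the number of tokens actually generated so far.
#include <algorithm>
#include <cstddef>
#include <vector>

void apply_repeat_penalty(std::vector<float> &logits,
                          const std::vector<int> &last_tokens,
                          size_t repeat_last_n, float penalty) {
    // Only look at as many tokens as we actually have.
    size_t window = std::min(repeat_last_n, last_tokens.size());
    for (size_t i = last_tokens.size() - window; i < last_tokens.size(); ++i) {
        int tok = last_tokens[i];
        if (tok < 0 || static_cast<size_t>(tok) >= logits.size())
            continue;  // ignore ids outside the vocabulary
        if (logits[tok] > 0.0f)
            logits[tok] /= penalty;
        else
            logits[tok] *= penalty;
    }
}
```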
kuvaus
4f021ebcbb
gpt4all-backend: Add MSVC support to backend (#595)
* Add MSVC compatibility
* Add _MSC_VER macro
---------
Co-authored-by: kuvaus <kuvaus@users.noreply.github.com>
2023-05-16 11:35:33 -04:00
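The `_MSC_VER` macro is the standard way to gate MSVC-specific spellings of compiler extensions. A small sketch of the pattern; the particular attribute chosen here is just an example, not necessarily what this commit changed.

```cpp
// Typical _MSC_VER guard: pick the MSVC spelling of a compiler-specific
// attribute, and the GCC/Clang spelling otherwise.
#if defined(_MSC_VER)
    #define LLMODEL_NOINLINE __declspec(noinline)
#else
    #define LLMODEL_NOINLINE __attribute__((noinline))
#endif

LLMODEL_NOINLINE void debug_hook() {
    // kept out-of-line so it is easy to set a breakpoint on
}
```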
Aaron Miller
9aaa355d41
backend: dedupe tokenizing code in mpt/gptj
2023-05-16 10:30:19 -04:00
Aaron Miller
fc2869f0b7
backend: dedupe tokenizing code in gptj/mpt
2023-05-16 10:30:19 -04:00
Aaron Miller
16b7bf01a8
backend: make initial buf_size const in model impls
Further unifies the mpt and gptj code; this buffer is never written to, so
the name is also changed to make that clearer.
2023-05-16 10:30:19 -04:00
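A minimal sketch of the change described above, with placeholder names and an assumed size formula: the scratch-buffer size is computed once from the hyperparameters and never written again, so marking it `const` documents that and lets the compiler reject accidental writes.

```cpp
// Sketch: the eval buffer size is computed once and never modified
// afterwards, so it can be const. The formula here is illustrative only.
#include <cstddef>
#include <vector>

void eval_sketch(size_t n_ctx, size_t n_embd) {
    const size_t buf_size = 256u * 1024 * 1024 + n_ctx * n_embd * sizeof(float);
    std::vector<unsigned char> buf(buf_size);  // scratch memory for ggml
    // ... build and run the graph inside buf ...
}
```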
Aaron Miller
0c9b7a6ae8
mpt: use buf in model struct (thread safety)
2023-05-16 10:30:19 -04:00
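Keeping the eval scratch buffer in the model struct, rather than in a function-local `static`, gives each loaded model its own buffer so two instances can run evaluation concurrently without racing on shared memory. A sketch with placeholder types and names:

```cpp
// Sketch of moving a shared static scratch buffer into the per-model
// struct so two model instances can run eval concurrently.
#include <cstddef>
#include <vector>

struct mpt_model {
    // ... weights, hyperparameters ...
    std::vector<unsigned char> eval_buf;  // per-instance scratch buffer
};

void mpt_eval(mpt_model &model, size_t required) {
    // Before: `static std::vector<unsigned char> buf;` shared by every model.
    if (model.eval_buf.size() < required)
        model.eval_buf.resize(required);
    // ... build and run the ggml graph inside model.eval_buf ...
}
```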
Adam Treat
8e7b96bd92
Move the llmodel C API to a new top-level directory and version it.
2023-05-10 11:46:40 -04:00
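Versioning a C API usually means exposing an opaque handle plus a version constant that callers can check before use. A hedged sketch of what such a header might look like; the names here are illustrative, not the actual llmodel C API.

```cpp
// Sketch of a versioned C-compatible API header: an opaque handle plus a
// version constant callers can compare against at load time.
#ifdef __cplusplus
extern "C" {
#endif

#define LLMODEL_C_API_VERSION 1

typedef struct llmodel_model_t *llmodel_model;  // opaque handle

// Callers compare this against the LLMODEL_C_API_VERSION they built with.
unsigned llmodel_c_api_version(void);

#ifdef __cplusplus
}
#endif
```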