niansa/tuxifan
8203d65445
Fixed tons of warnings and clazy findings ( #811 )
2023-06-02 15:46:41 -04:00
niansa/tuxifan
1832a887b5
Fixed model type for GPT-J ( #815 )
...
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-02 15:46:33 -04:00
Richard Guo
d051ac889c
more cleanup
2023-06-02 12:32:26 -04:00
Richard Guo
fb09f412ff
cleanup
2023-06-02 12:32:26 -04:00
Richard Guo
67b7641390
fixed finding model libs
2023-06-02 12:32:26 -04:00
Adam Treat
15c6bf09e9
Fix mac build again.
2023-06-02 10:51:09 -04:00
Adam Treat
1b755b6cba
Try and fix build on mac.
2023-06-02 10:47:12 -04:00
Adam Treat
7ee32d605f
Trying to shrink the copy+paste code and do more code sharing between backend model implementations.
2023-06-02 07:20:59 -04:00
Tim Miller
455e6aa7ce
Fix MSVC Build, Update C# Binding Scripts
2023-06-01 14:24:23 -04:00
niansa/tuxifan
c4f9535fd0
Allow user to specify custom search path via $GPT4ALL_IMPLEMENTATIONS_PATH ( #789 )
2023-06-01 17:41:04 +02:00
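A minimal sketch of how an implementation search path override like this is typically consulted. The helper name, the fallback directory handling, and the ':' separator are assumptions for illustration; the backend's actual lookup logic may differ.

```cpp
#include <cstdlib>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical helper: build the list of directories to scan for backend
// implementation libraries, preferring $GPT4ALL_IMPLEMENTATIONS_PATH when set.
static std::vector<std::string> implementationSearchPaths(const std::string &defaultDir) {
    std::vector<std::string> paths;
    if (const char *env = std::getenv("GPT4ALL_IMPLEMENTATIONS_PATH")) {
        std::stringstream ss(env);
        std::string entry;
        while (std::getline(ss, entry, ':'))   // ':' separator is an assumption
            if (!entry.empty())
                paths.push_back(entry);
    }
    paths.push_back(defaultDir);               // fall back to the built-in location
    return paths;
}
```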
niansa
ab56119470
Fixed double-free in LLModel::Implementation destructor
2023-06-01 11:19:08 -04:00
niansa/tuxifan
8aa707fdb4
Cleaned up implementation management ( #787 )
...
* Cleaned up implementation management
* Initialize LLModel::m_implementation to nullptr
* llmodel.h: Moved dlhandle fwd declare above LLModel class
2023-06-01 16:51:46 +02:00
Adam Treat
8be42683ac
Add FIXMEs and clean up a bit.
2023-06-01 07:57:10 -04:00
niansa
b68d359b4f
Dlopen better implementation management (Version 2)
2023-06-01 07:44:15 -04:00
niansa/tuxifan
991a0e4bd8
Advanced avxonly autodetection ( #744 )
...
* Advanced avxonly requirement detection
2023-05-31 21:26:18 -04:00
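A hedged sketch of the kind of runtime check such autodetection performs, using GCC/Clang builtins; an MSVC build would need a CPUID-based check instead. Whether the decision is keyed on AVX2 specifically is an assumption here.

```cpp
// Illustrative only: decide whether the AVX-only build of the backend is
// required, based on what the host CPU actually supports.
static bool needAvxOnlyBuild() {
#if defined(__GNUC__) || defined(__clang__)
    __builtin_cpu_init();
    return !__builtin_cpu_supports("avx2");
#else
    return true;   // conservative fallback when no detection is available
#endif
}
```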
AT
9c6c09cbd2
Dlopen backend 5 ( #779 )
...
Major change to the backend that allows for pluggable versions of llama.cpp/ggml. This was squash-merged from dlopen_backend_5, where the history is preserved.
2023-05-31 17:04:01 -04:00
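For context, a hedged sketch of the general dlopen pattern a pluggable backend like this relies on. The symbol name and error handling are placeholders, not the actual interface exported by the gpt4all implementation libraries.

```cpp
#include <dlfcn.h>   // POSIX; a Windows build would use LoadLibrary/GetProcAddress
#include <cstdio>

// Illustrative only: load one candidate implementation library and resolve an
// entry point from it. A real registry would iterate over all candidate
// .so/.dylib/.dll files and keep the handles alive for later construction.
void *loadImplementationSymbol(const char *libPath, const char *symbolName) {
    void *handle = dlopen(libPath, RTLD_NOW | RTLD_LOCAL);
    if (!handle) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return nullptr;
    }
    void *sym = dlsym(handle, symbolName);  // e.g. a factory function (placeholder name)
    if (!sym) {
        std::fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return nullptr;
    }
    return sym;  // caller must keep 'handle' loaded for as long as 'sym' is used
}
```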
Adam Treat
4a317eeb33
Revert "New tokenizer implementation for MPT and GPT-J"
...
This reverts commit ee3469ba6c.
2023-05-30 12:59:00 -04:00
Adam Treat
06434f0042
Revert "buf_ref.into() can be const now"
...
This reverts commit 840e011b75.
2023-05-30 12:58:53 -04:00
Adam Treat
92bc92d232
Revert "add tokenizer readme w/ instructions for convert script"
...
This reverts commit 9c15d1f83e.
2023-05-30 12:58:18 -04:00
aaron miller
9c15d1f83e
add tokenizer readme w/ instructions for convert script
2023-05-30 12:05:57 -04:00
Aaron Miller
840e011b75
buf_ref.into() can be const now
2023-05-30 12:05:57 -04:00
Aaron Miller
ee3469ba6c
New tokenizer implementation for MPT and GPT-J
...
Improves output quality by making these tokenizers more closely match the
behavior of the Hugging Face `tokenizers`-based BPE tokenizers these models
were trained with.
Featuring:
* Fixed unicode handling (via ICU)
* Fixed BPE token merge handling
* Complete added vocabulary handling
2023-05-30 12:05:57 -04:00
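For context, a compact sketch of the BPE merge loop such a tokenizer implements: repeatedly merge the adjacent symbol pair with the best (lowest) merge rank until no ranked pair remains. The rank table and its loading are assumed, and the Unicode (ICU) and added-vocabulary handling mentioned above are omitted.

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// Greedy BPE merging over a pre-split word, given a table mapping a symbol pair
// to its merge rank (lower rank = merged earlier). Illustrative only.
std::vector<std::string> bpeMerge(std::vector<std::string> symbols,
                                  const std::map<std::pair<std::string, std::string>, int> &ranks) {
    while (symbols.size() > 1) {
        int bestRank = -1;
        size_t bestPos = 0;
        for (size_t i = 0; i + 1 < symbols.size(); ++i) {
            auto it = ranks.find({symbols[i], symbols[i + 1]});
            if (it != ranks.end() && (bestRank < 0 || it->second < bestRank)) {
                bestRank = it->second;
                bestPos = i;
            }
        }
        if (bestRank < 0)
            break;                                   // no more mergeable pairs
        symbols[bestPos] += symbols[bestPos + 1];    // merge the best-ranked pair
        symbols.erase(symbols.begin() + bestPos + 1);
    }
    return symbols;
}
```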
Adam Treat
d40735a2d2
Get the backend as well as the client building/working with MSVC.
2023-05-25 15:22:45 -04:00
Adam Treat
80024a029c
Add new reverse prompt for new localdocs context feature.
2023-05-25 11:28:06 -04:00
Juuso Alasuutari
8d822f9898
llmodel: constify some casts in LLModelWrapper
2023-05-22 08:54:46 -04:00
Juuso Alasuutari
f2528e6f62
llmodel: constify LLModel::threadCount()
2023-05-22 08:54:46 -04:00
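The constification changes above amount to marking accessors that do not mutate state as const so they can be called through const references; a minimal sketch (class and member names are illustrative, not the real LLModel declaration).

```cpp
#include <cstdint>

class ExampleModel {
public:
    // Before: int32_t threadCount();  -- could not be called on a const object.
    int32_t threadCount() const { return m_threads; }  // after: promises not to modify state
private:
    int32_t m_threads = 4;
};
```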
Juuso Alasuutari
f0b942d323
llmodel: fix wrong and/or missing prompt callback type
...
Fix occurrences of the prompt callback being incorrectly specified, or
the response callback's prototype being incorrectly used in its place.
Signed-off-by: Juuso Alasuutari <juuso.alasuutari@gmail.com>
2023-05-21 16:02:11 -04:00
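To illustrate the distinction the fix above concerns: the prompt and response callbacks in the llmodel C API have different prototypes (the response callback additionally receives the generated text), so using one prototype in place of the other is a type mismatch. The shapes below are assumptions based on the API at the time; see llmodel_c.h for the authoritative definitions.

```cpp
#include <cstdint>

// Assumed callback shapes; returning false requests that generation stop.
typedef bool (*llmodel_prompt_callback)(int32_t token_id);
typedef bool (*llmodel_response_callback)(int32_t token_id, const char *response);
```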
Adam Treat
9e13f813d5
Only default mlock on macOS, where swap seems to be a problem.
2023-05-21 10:27:04 -04:00
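A hedged illustration of what platform-conditional defaulting like this looks like; the actual setting lives elsewhere in the application, and the function here is purely a placeholder.

```cpp
// Illustrative only: default mlock (pinning model weights in RAM) on macOS,
// where swapping was observed to be a problem, and leave it off elsewhere.
static bool defaultUseMlock() {
#if defined(__APPLE__)
    return true;
#else
    return false;
#endif
}
```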
Adam Treat
d5dd4e87de
Always default mlock to true.
2023-05-20 21:16:15 -04:00
aaron miller
08f3bd2a82
backend: fix buffer overrun in repeat penalty code
...
Caught with AddressSanitizer running a basic prompt test against llmodel
standalone. This fix allows ASan builds to complete a simple prompt
without illegal accesses, though notably several leaks remain.
2023-05-17 07:54:10 -04:00
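The overrun described above is the classic pattern of walking a fixed-size penalty window back over the token history without clamping at the start of the buffer. A hedged sketch of a bounds-safe version of such a loop; names and the exact penalty rule are illustrative, not the backend's actual code.

```cpp
#include <cstddef>
#include <vector>

// Penalize logits of tokens that appear in the last `windowSize` entries of
// `history`, clamping the window so it never reads before index 0.
void applyRepeatPenalty(std::vector<float> &logits,
                        const std::vector<int> &history,
                        std::size_t windowSize, float penalty) {
    std::size_t start = history.size() > windowSize ? history.size() - windowSize : 0;
    for (std::size_t i = start; i < history.size(); ++i) {
        int tok = history[i];
        if (tok < 0 || static_cast<std::size_t>(tok) >= logits.size())
            continue;                                // skip ids outside the vocabulary
        float &l = logits[tok];
        l = (l > 0.0f) ? l / penalty : l * penalty;  // common llama.cpp-style rule
    }
}
```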
kuvaus
4f2b7f7be4
Bugfix on llmodel_model_create function
...
Fixes the bug where llmodel_model_create prints "Invalid model file" even though the model is loaded correctly. Credits and thanks to @serendipity for the fix.
2023-05-17 07:49:32 -04:00
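A hedged sketch of the kind of guard such a fix amounts to: report "Invalid model file" only when the load actually failed, rather than unconditionally. This is illustrative pseudocode, not the actual diff; `tryLoad` is a stand-in for however the C wrapper constructs the underlying model.

```cpp
#include <cstdio>

// Hypothetical wrapper logic around model creation.
void *createModelChecked(const char *modelPath, void *(*tryLoad)(const char *)) {
    void *model = tryLoad(modelPath);
    if (!model)                          // only complain on actual failure
        std::fprintf(stderr, "Invalid model file: %s\n", modelPath);
    return model;
}
```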
kuvaus
a0b98dc55d
gpt4all-backend: Add llmodel create and destroy functions ( #554 )
...
* Add llmodel create and destroy functions
* Fix capitalization
* Fix capitalization
* Fix capitalization
* Update CMakeLists.txt
---------
Co-authored-by: kuvaus <kuvaus@users.noreply.github.com>
2023-05-16 11:36:46 -04:00
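The commit above adds paired create/destroy entry points to the C API. A minimal usage sketch, assuming prototypes along the lines of those in llmodel_c.h at the time; the exact signatures are an assumption and have been revised since, so treat this as illustrative client code only.

```cpp
#include <cstdio>

// Assumed C API surface; the authoritative declarations live in llmodel_c.h.
extern "C" {
typedef void *llmodel_model;
llmodel_model llmodel_model_create(const char *model_path);
void llmodel_model_destroy(llmodel_model model);
}

int main() {
    llmodel_model model = llmodel_model_create("ggml-model.bin");  // path is illustrative
    if (!model) {
        std::fprintf(stderr, "failed to create model\n");
        return 1;
    }
    // ... drive prompting through the rest of the C API ...
    llmodel_model_destroy(model);  // release the handle when done
    return 0;
}
```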
kuvaus
4f021ebcbb
gpt4all-backend: Add MSVC support to backend ( #595 )
...
* Add MSVC compatibility
* Add _MSC_VER macro
---------
Co-authored-by: kuvaus <kuvaus@users.noreply.github.com>
2023-05-16 11:35:33 -04:00
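A small illustration of the _MSC_VER guard pattern mentioned above, used to pick between MSVC and GCC/Clang constructs. The CPUID shim shown is just an example of the kind of difference such guards handle, not necessarily what this commit changed.

```cpp
// Illustrative portability shim: MSVC and GCC/Clang expose CPUID through
// different intrinsics, so code guards on _MSC_VER to select the right one.
#if defined(_MSC_VER)
#include <intrin.h>
static void cpuid(int regs[4], int leaf) { __cpuid(regs, leaf); }
#else
#include <cpuid.h>
static void cpuid(int regs[4], int leaf) {
    __cpuid(leaf, regs[0], regs[1], regs[2], regs[3]);
}
#endif
```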
Aaron Miller
9aaa355d41
backend: dedupe tokenizing code in mpt/gptj
2023-05-16 10:30:19 -04:00
Aaron Miller
fc2869f0b7
backend: dedupe tokenizing code in gptj/mpt
2023-05-16 10:30:19 -04:00
Aaron Miller
16b7bf01a8
backend: make initial buf_size const in model impls
...
More unification of the mpt and gptj code; this value is never written to,
so it is also renamed to make that clearer.
2023-05-16 10:30:19 -04:00
Aaron Miller
0c9b7a6ae8
mpt: use buf in model struct (thread safety)
2023-05-16 10:30:19 -04:00
AT
48ca4a047c
Update README.md
...
Signed-off-by: AT <manyoso@users.noreply.github.com>
2023-05-14 15:26:00 -04:00
Zach Nussbaum
53730c5f7f
fix: use right conversion script
2023-05-11 11:20:43 -04:00
Adam Treat
8e7b96bd92
Move the llmodel C API to new top-level directory and version it.
2023-05-10 11:46:40 -04:00
Richard Guo
6304f6d322
mono repo structure
2023-05-01 15:45:23 -04:00