Adam Treat
3cd7d2f3c7
Make installers work on macOS/Windows for the big backend change.
2023-06-05 09:23:17 -04:00
AT
964e2ffc1b
We no longer have an avx_only repository, and error handling for minimum hardware requirements is improved. (#833)
2023-06-04 15:28:58 -04:00
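The commit above mentions better error handling for minimum hardware requirements. A minimal sketch of such a startup gate, assuming an AVX requirement and a GCC/Clang build (the `meetsMinimumHardware` name and the error message are illustrative, not the repository's actual code):

```cpp
#include <cstdio>
#include <cstdlib>

static bool meetsMinimumHardware() {
    // GCC/Clang builtin for runtime x86 feature detection; an MSVC build
    // would need a __cpuid-based check instead.
    return __builtin_cpu_supports("avx");
}

int main() {
    if (!meetsMinimumHardware()) {
        std::fprintf(stderr, "Error: this CPU lacks AVX, which is the minimum requirement.\n");
        return EXIT_FAILURE;
    }
    std::puts("Minimum hardware requirements met.");
    return EXIT_SUCCESS;
}
```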
Richard Guo
fb09f412ff
cleanup
2023-06-02 12:32:26 -04:00
Richard Guo
67b7641390
fixed finding model libs
2023-06-02 12:32:26 -04:00
Adam Treat
1b755b6cba
Try to fix the build on macOS.
2023-06-02 10:47:12 -04:00
Adam Treat
7ee32d605f
Shrink the copy+paste code and share more code between backend model implementations.
2023-06-02 07:20:59 -04:00
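One common way to reduce copy+paste between backend model implementations is a shared abstract base class. This is only a sketch under that assumption; the class and method names are illustrative and not the repository's actual interface:

```cpp
#include <functional>
#include <string>

class ModelBackend {
public:
    virtual ~ModelBackend() = default;
    virtual bool loadModel(const std::string &path) = 0;
    virtual void prompt(const std::string &text,
                        const std::function<bool(const std::string &)> &onToken) = 0;

    // Shared behaviour lives once in the base instead of being copy+pasted
    // into every concrete backend.
    bool isLoaded() const { return m_loaded; }

protected:
    bool m_loaded = false;
};

class LlamaBackend : public ModelBackend {
public:
    bool loadModel(const std::string &path) override {
        // Backend-specific loading would go here.
        m_loaded = !path.empty();
        return m_loaded;
    }
    void prompt(const std::string &text,
                const std::function<bool(const std::string &)> &onToken) override {
        if (m_loaded) onToken(text); // placeholder for real token generation
    }
};
```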
niansa/tuxifan
c4f9535fd0
Allow user to specify custom search path via $GPT4ALL_IMPLEMENTATIONS_PATH (#789)
2023-06-01 17:41:04 +02:00
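A minimal sketch of how an extra search path taken from $GPT4ALL_IMPLEMENTATIONS_PATH could be read and split; the separator handling and fallback are assumptions, not necessarily what the commit implements:

```cpp
#include <cstdlib>
#include <sstream>
#include <string>
#include <vector>

std::vector<std::string> implementationSearchPaths() {
    std::vector<std::string> paths;
    if (const char *env = std::getenv("GPT4ALL_IMPLEMENTATIONS_PATH")) {
        // Assume the usual platform path-list separator; the real code may differ.
#ifdef _WIN32
        const char sep = ';';
#else
        const char sep = ':';
#endif
        std::stringstream ss(env);
        std::string entry;
        while (std::getline(ss, entry, sep)) {
            if (!entry.empty())
                paths.push_back(entry);
        }
    }
    paths.push_back("."); // fall back to a default location
    return paths;
}
```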
niansa
ab56119470
Fixed double-free in LLModel::Implementation destructor
2023-06-01 11:19:08 -04:00
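The double-free fixed above is typical of an owning raw pointer combined with compiler-generated copies: two objects end up deleting the same handle. A hedged sketch of the pattern and one way to fix it (the type names are illustrative, not the repository's actual code):

```cpp
class Dlhandle {};

class Implementation {
public:
    explicit Implementation(Dlhandle *handle) : m_dlhandle(handle) {}

    // Buggy variant: the implicit copy constructor duplicates m_dlhandle, so
    // the copy and the original both delete it -> double free.

    // Fixed variant: make the type move-only so exactly one object owns the handle.
    Implementation(const Implementation &) = delete;
    Implementation &operator=(const Implementation &) = delete;
    Implementation(Implementation &&other) noexcept : m_dlhandle(other.m_dlhandle) {
        other.m_dlhandle = nullptr;
    }

    ~Implementation() { delete m_dlhandle; } // deleting nullptr is a safe no-op

private:
    Dlhandle *m_dlhandle = nullptr;
};
```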
niansa/tuxifan
8aa707fdb4
Cleaned up implementation management (#787)
* Cleaned up implementation management
* Initialize LLModel::m_implementation to nullptr
* llmodel.h: Moved dlhandle fwd declare above LLModel class
2023-06-01 16:51:46 +02:00
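A small header sketch of the layout the bullet points describe: the Dlhandle forward declaration placed above the class, and the implementation pointer explicitly initialized to nullptr. Anything beyond those two details is an assumption:

```cpp
#pragma once

class Dlhandle; // forward declaration kept above the class that refers to it

class LLModel {
public:
    class Implementation; // defined elsewhere

    virtual ~LLModel() = default;

protected:
    const Implementation *m_implementation = nullptr; // explicit nullptr init
};
```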
Adam Treat
8be42683ac
Add FIXMEs and clean up a bit.
2023-06-01 07:57:10 -04:00
niansa
b68d359b4f
Dlopen better implementation management (Version 2)
2023-06-01 07:44:15 -04:00
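A POSIX-only sketch of the dlopen-based loading this commit refers to: open a backend library at runtime and resolve a factory symbol. The symbol name and factory signature are assumptions, not the repository's actual ABI; a Windows build would use LoadLibrary/GetProcAddress instead:

```cpp
#include <dlfcn.h>
#include <cstdio>

class ModelBackend; // see the earlier base-class sketch

ModelBackend *loadBackend(const char *libraryPath) {
    void *handle = dlopen(libraryPath, RTLD_NOW | RTLD_LOCAL);
    if (!handle) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return nullptr;
    }
    using FactoryFn = ModelBackend *(*)();
    auto create = reinterpret_cast<FactoryFn>(dlsym(handle, "construct_backend"));
    if (!create) {
        std::fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return nullptr;
    }
    return create(); // note: the handle must outlive the returned object
}
```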
niansa/tuxifan
991a0e4bd8
Advanced avxonly autodetection (#744)
* Advanced avxonly requirement detection
2023-05-31 21:26:18 -04:00
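A hedged sketch of what avxonly autodetection can look like: pick which build variant of the backend library to load based on what the CPU supports. The variant names and the naming scheme in the closing comment are illustrative, and the builtin shown requires GCC/Clang:

```cpp
#include <string>

std::string implementationVariant() {
    // GCC/Clang builtin; an MSVC build would need a __cpuid-based check.
    if (__builtin_cpu_supports("avx2"))
        return "default";   // full build, can use AVX2
    if (__builtin_cpu_supports("avx"))
        return "avxonly";   // reduced build for AVX-without-AVX2 CPUs
    return "unsupported";
}

// A loader could then look for e.g. "libllmodel-" + implementationVariant() + ".so".
```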
AT
9c6c09cbd2
Dlopen backend 5 (#779)
Major change to the backend that allows for pluggable versions of llama.cpp/ggml. This was squash-merged from dlopen_backend_5, where the history is preserved.
2023-05-31 17:04:01 -04:00
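To make the pluggable-backend idea concrete, here is a C++17 sketch of enumerating candidate implementation libraries in a directory; each candidate could then be opened as in the earlier dlopen sketch. The filename prefix is an assumption, not the repository's actual naming scheme:

```cpp
#include <filesystem>
#include <string>
#include <vector>

std::vector<std::string> findImplementationLibraries(const std::string &dir) {
    namespace fs = std::filesystem;
    std::vector<std::string> candidates;
    std::error_code ec;
    for (const auto &entry : fs::directory_iterator(dir, ec)) {
        if (!entry.is_regular_file())
            continue;
        const std::string name = entry.path().filename().string();
        // Assume implementation libraries share a common prefix.
        if (name.rfind("libllamamodel-", 0) == 0)
            candidates.push_back(entry.path().string());
    }
    return candidates;
}
```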