Commit Graph

48 Commits

Author SHA1 Message Date
AT
9273b49b62
chat: major UI redesign for v3.0.0 (#2396)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2024-06-24 18:49:23 -04:00
Jared Van Bortel
636307160e
backend: fix #includes with include-what-you-use (#2371)
Also fix a PARENT_SCOPE warning when building the backend.

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-05-31 16:34:54 -04:00
Jared Van Bortel
8ba7ef4832
dlhandle: suppress DLL errors on Windows (#2389)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-05-31 16:33:40 -04:00
Jared Van Bortel
4e89a9c44f
backend: support non-ASCII characters in path to llmodel libs on Windows (#2388)
* backend: refactor dlhandle.h into oscompat.{cpp,h}

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* llmodel: alias std::filesystem

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* llmodel: use wide strings for paths on Windows

Using the native path representation allows us to manipulate paths and
call LoadLibraryEx without mangling non-ASCII characters.

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* llmodel: prefer built-in std::filesystem functionality

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* oscompat: fix string type error

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* backend: rename oscompat back to dlhandle

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* dlhandle: fix #includes

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* dlhandle: remove another #include

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* dlhandle: move dlhandle #include

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* dlhandle: remove #includes that are covered by dlhandle.h

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* llmodel: fix #include order

Signed-off-by: Jared Van Bortel <jared@nomic.ai>

---------

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-05-31 13:12:28 -04:00
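The "use wide strings for paths on Windows" step in the commit above is the key idea: std::filesystem::path's native representation on Windows is wchar_t, so the path can be handed to LoadLibraryEx without any narrow-string conversion that could mangle non-ASCII characters. A minimal hedged sketch (not the project's actual code):

```cpp
// Hedged sketch, not the llmodel implementation: load an implementation DLL
// using the path's native wide-character form so non-ASCII characters in the
// path survive untouched on Windows.
#include <filesystem>
#ifdef _WIN32
#  include <windows.h>

void *open_impl_library(const std::filesystem::path &libPath)
{
    // path::c_str() yields const wchar_t* on Windows, so no 8-bit
    // code-page conversion ever happens.
    HMODULE handle = LoadLibraryExW(libPath.c_str(), nullptr,
                                    LOAD_LIBRARY_SEARCH_DEFAULT_DIRS);
    return reinterpret_cast<void *>(handle);
}
#endif
```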
Jared Van Bortel
2025d2d15b
llmodel: add CUDA to the DLL search path if CUDA_PATH is set (#2357)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-05-16 17:39:49 -04:00
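A hedged sketch of the idea in that commit (the helper name and the "bin" subdirectory are assumptions for illustration, not the project's code): read CUDA_PATH from the environment and register it with the loader's DLL search path via AddDllDirectory.

```cpp
// Hedged sketch: add a directory under %CUDA_PATH% to the Windows DLL search
// path if the variable is set, so the CUDA runtime DLLs can be located.
#include <cstdlib>
#include <filesystem>
#ifdef _WIN32
#  include <windows.h>

void maybe_add_cuda_dll_dir()
{
    if (const char *cudaPath = std::getenv("CUDA_PATH")) {
        // The "bin" suffix is an assumption for illustration.
        std::filesystem::path dir = std::filesystem::path(cudaPath) / "bin";
        // AddDllDirectory takes a wide string; it affects LoadLibraryEx calls
        // that use LOAD_LIBRARY_SEARCH_USER_DIRS (or a process that has opted
        // in via SetDefaultDllDirectories).
        AddDllDirectory(dir.c_str());
    }
}
#endif
```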
Jared Van Bortel
d2a99d9bc6
support the llama.cpp CUDA backend (#2310)
* rebase onto llama.cpp commit ggerganov/llama.cpp@d46dbc76f
* support for CUDA backend (enabled by default)
* partial support for Occam's Vulkan backend (disabled by default)
* partial support for HIP/ROCm backend (disabled by default)
* sync llama.cpp.cmake with upstream llama.cpp CMakeLists.txt
* changes to GPT4All backend, bindings, and chat UI to handle choice of llama.cpp backend (Kompute or CUDA)
* ship CUDA runtime with installed version
* make device selection in the UI on macOS actually do something
* model whitelist: remove dbrx, mamba, persimmon, plamo; add internlm and starcoder2

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-05-15 15:27:50 -04:00
Jared Van Bortel
577ebd4826
mixpanel: report cpu_supports_avx2 on startup (#2299)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-05-02 16:09:41 -04:00
Jared Van Bortel
c622921894
improve mixpanel usage statistics (#2238)
Other changes:
- Always display first start dialog if privacy options are unset (e.g. if the user closed GPT4All without selecting them)
- LocalDocs scanQueue is now always deferred
- Fix a potential crash in magic_match
- LocalDocs indexing is now started after the first start dialog is dismissed so usage stats are included

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-04-25 13:16:52 -04:00
Jared Van Bortel
1b84a48c47
python: add list_gpus to the GPT4All API (#2194)
Other changes:
* fix memory leak in llmodel_available_gpu_devices
* drop model argument from llmodel_available_gpu_devices
* breaking: make GPT4All/Embed4All arguments past model_name keyword-only

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-04-04 14:52:13 -04:00
Jared Van Bortel
699410014a
fix non-AVX CPU detection (#2141)
* chat: fix non-AVX CPU detection on Windows
* bindings: throw exception instead of logging to console

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-03-19 10:56:14 -04:00
Jared Van Bortel
406e88b59a
implement local Nomic Embed via llama.cpp (#2086)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-03-13 18:09:24 -04:00
Jared Van Bortel
061d1969f8
expose n_gpu_layers parameter of llama.cpp (#1890)
Also dynamically limit the GPU layers and context length fields to the maximum supported by the model.

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-01-31 14:17:44 -05:00
Jared Van Bortel
7e9786fccf chat: set search path early
This fixes the issues with installed versions of v2.6.0.
2024-01-11 12:04:18 -05:00
ThiloteE
2d566710e5 Address review 2024-01-03 11:13:07 -06:00
ThiloteE
38d81c14d0 Fixes https://github.com/nomic-ai/gpt4all/issues/1760 LLModel ERROR: Could not find CPU LLaMA implementation.
Inspired by Microsoft docs for LoadLibraryExA (https://learn.microsoft.com/en-us/windows/win32/api/libloaderapi/nf-libloaderapi-loadlibraryexa).
When using LOAD_LIBRARY_SEARCH_DLL_LOAD_DIR, the lpFileName parameter must specify a fully qualified path, and the path must use backslashes (\), not forward slashes (/).
2024-01-03 11:13:07 -06:00
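The constraint described in that commit can be shown in a short hedged sketch: make the path fully qualified and convert it to the native backslash form before passing it to LoadLibraryEx with LOAD_LIBRARY_SEARCH_DLL_LOAD_DIR.

```cpp
// Hedged sketch of the LoadLibraryEx requirement described above: with
// LOAD_LIBRARY_SEARCH_DLL_LOAD_DIR the file name must be a fully qualified
// path, and it must use backslashes rather than forward slashes.
#include <filesystem>
#ifdef _WIN32
#  include <windows.h>

HMODULE load_with_own_dir(const std::filesystem::path &libPath)
{
    // absolute() yields a fully qualified path; make_preferred() turns any
    // forward slashes into backslashes, the native Windows separator.
    std::filesystem::path full = std::filesystem::absolute(libPath);
    full.make_preferred();
    return LoadLibraryExW(full.c_str(), nullptr,
                          LOAD_LIBRARY_SEARCH_DLL_LOAD_DIR |
                          LOAD_LIBRARY_SEARCH_DEFAULT_DIRS);
}
#endif
```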
Jared Van Bortel
d1c56b8b28
Implement configurable context length (#1749) 2023-12-16 17:58:15 -05:00
Jared Van Bortel
3acbef14b7
fix AVX support by removing direct linking to AVX2 libs (#1750) 2023-12-13 12:11:09 -05:00
Jared Van Bortel
d4ce9f4a7c
llmodel_c: improve quality of error messages (#1625) 2023-11-07 11:20:14 -05:00
Aaron Miller
f414c28589 llmodel: whitelist library name patterns
This fixes some issues that were being seen on installed Windows builds of 2.5.0.

Only load DLLs that might actually be model implementation DLLs; otherwise we pull all sorts of unrelated libraries into the process before they expect to be loaded.

Signed-off-by: Aaron Miller <apage43@ninjawhale.com>
2023-10-23 21:40:14 -07:00
cebtenzzre
7b611b49f2
llmodel: print an error if the CPU does not support AVX (#1499) 2023-10-11 15:09:40 -04:00
cebtenzzre
7a19047329
llmodel: do not call magic_match unless build variant is correct (#1488) 2023-10-11 11:30:48 -04:00
Aaron Miller
507753a37c macos build fixes 2023-10-05 18:16:19 -04:00
Adam Treat
d90d003a1d Latest rebase on llama.cpp with gguf support. 2023-10-05 18:16:19 -04:00
Cosmic Snow
108d950874 Fix Windows being unable to load models on older Windows builds
- Replace high-level IsProcessorFeaturePresent
- Reintroduce low-level compiler intrinsics implementation
2023-08-09 09:27:43 +02:00
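A hedged sketch of the kind of low-level detection that commit refers to (not the exact code): query CPUID and XGETBV directly with MSVC intrinsics, which works regardless of the Windows version.

```cpp
// Hedged sketch of intrinsics-based AVX detection on MSVC; it does not depend
// on IsProcessorFeaturePresent, so it also works on older Windows builds.
#include <intrin.h>

static bool cpu_supports_avx()
{
    int info[4];
    __cpuid(info, 1);
    bool osxsave = info[2] & (1 << 27); // OS has enabled XGETBV
    bool avx     = info[2] & (1 << 28); // CPU supports AVX instructions
    if (!(osxsave && avx))
        return false;
    // Confirm the OS actually saves/restores the YMM registers (XCR0 bits 1-2).
    unsigned long long xcr0 = _xgetbv(0);
    return (xcr0 & 0x6) == 0x6;
}
```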
cosmic-snow
6200900677
Fix Windows MSVC arch detection (#1194)
- in llmodel.cpp to fix AVX-only handling

Signed-off-by: cosmic-snow <134004613+cosmic-snow@users.noreply.github.com>
2023-07-13 14:44:17 -04:00
Adam Treat
315a1f2aa2 Move it back as internal class. 2023-07-13 14:21:46 -04:00
Adam Treat
1f749d7633 Clean up backend code a bit and hide impl. details. 2023-07-13 14:21:46 -04:00
Adam Treat
33557b1f39 Move the implementation out of llmodel class. 2023-07-13 14:21:46 -04:00
Aaron Miller
432b7ebbd7 include windows.h just to be safe 2023-07-12 12:46:46 -04:00
Aaron Miller
95b8fb312e windows/msvc: use high level processor feature detection API
see https://learn.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-isprocessorfeaturepresent
2023-07-12 12:46:46 -04:00
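For reference, the high-level approach used here looks roughly like the sketch below (an illustration, not the project's code). The AVX/AVX2 feature constants are only reported on relatively recent Windows releases, which is why a later commit further up this log fell back to CPUID intrinsics.

```cpp
// Hedged sketch of the high-level check: ask Windows whether AVX2 is
// available. PF_AVX2_INSTRUCTIONS_AVAILABLE requires a recent SDK and is only
// reported on newer Windows versions.
#ifdef _WIN32
#  include <windows.h>

static bool cpu_supports_avx2_highlevel()
{
    return IsProcessorFeaturePresent(PF_AVX2_INSTRUCTIONS_AVAILABLE) != FALSE;
}
#endif
```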
Aaron Miller
db34a2f670 llmodel: skip attempting Metal if model+kvcache > 53% of system ram 2023-06-26 19:46:49 -03:00
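The heuristic in that commit is a simple threshold check; a hedged sketch with assumed helper names:

```cpp
// Hedged sketch of the 53% heuristic: only attempt Metal when the model plus
// KV cache would fit within 53% of system RAM. Names are illustrative.
#include <cstddef>

static bool should_try_metal(size_t modelBytes, size_t kvCacheBytes,
                             size_t systemRamBytes)
{
    // Integer comparison: model + kvcache must not exceed 53% of system RAM.
    return (modelBytes + kvCacheBytes) * 100 <= systemRamBytes * 53;
}
```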
Aaron Miller
d3ba1295a7
Metal+LLama take two (#929)
Support latest llama with Metal
---------

Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-09 16:48:46 -04:00
Adam Treat
b162b5c64e Revert "llama on Metal (#885)"
This reverts commit c55f81b860.
2023-06-09 15:08:46 -04:00
Aaron Miller
c55f81b860
llama on Metal (#885)
Support latest llama with Metal

---------

Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-09 14:58:12 -04:00
Adam Treat
8a9ad258f4 Fix symbol resolution on windows. 2023-06-05 11:19:02 -04:00
Adam Treat
812b2f4b29 Make installers work with mac/windows for big backend change. 2023-06-05 09:23:17 -04:00
AT
5f95aa9fc6
We no longer have an avx_only repository, and we now have better error handling for minimum hardware requirements. (#833) 2023-06-04 15:28:58 -04:00
Richard Guo
98420ea6d5 cleanup 2023-06-02 12:32:26 -04:00
Richard Guo
c54c42e3fb fixed finding model libs 2023-06-02 12:32:26 -04:00
Adam Treat
70e3b7e907 Try and fix build on mac. 2023-06-02 10:47:12 -04:00
Adam Treat
a41bd6ac0a Try to shrink the copy-and-paste code and share more code between backend model implementations. 2023-06-02 07:20:59 -04:00
niansa/tuxifan
27e80e1d10
Allow user to specify custom search path via $GPT4ALL_IMPLEMENTATIONS_PATH (#789) 2023-06-01 17:41:04 +02:00
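A hedged sketch of how such an environment-variable search path might be consumed (the splitting logic and separator choice are assumptions for illustration, not the project's code):

```cpp
// Hedged sketch: read $GPT4ALL_IMPLEMENTATIONS_PATH and split it into a list
// of directories to scan for implementation libraries.
#include <cstdlib>
#include <sstream>
#include <string>
#include <vector>

static std::vector<std::string> implementation_search_paths()
{
    std::vector<std::string> paths;
    if (const char *env = std::getenv("GPT4ALL_IMPLEMENTATIONS_PATH")) {
#ifdef _WIN32
        const char sep = ';'; // assumed platform path-list separator
#else
        const char sep = ':';
#endif
        std::stringstream ss(env);
        for (std::string dir; std::getline(ss, dir, sep); )
            if (!dir.empty())
                paths.push_back(dir);
    }
    return paths;
}
```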
niansa
5175db2781 Fixed double-free in LLModel::Implementation destructor 2023-06-01 11:19:08 -04:00
niansa/tuxifan
fc60f0c09c
Cleaned up implementation management (#787)
* Cleaned up implementation management

* Initialize LLModel::m_implementation to nullptr

* llmodel.h: Moved dlhandle fwd declare above LLModel class
2023-06-01 16:51:46 +02:00
Adam Treat
1eca524171 Add fixme's and clean up a bit. 2023-06-01 07:57:10 -04:00
niansa
a3d08cdcd5 Dlopen better implementation management (Version 2) 2023-06-01 07:44:15 -04:00
niansa/tuxifan
92407438c8
Advanced avxonly autodetection (#744)
* Advanced avxonly requirement detection
2023-05-31 21:26:18 -04:00
AT
48275d0dcc
Dlopen backend 5 (#779)
Major change to the backend that allows for pluggable versions of llama.cpp/ggml. This was squash-merged from dlopen_backend_5, where the history is preserved.
2023-05-31 17:04:01 -04:00