Mirror of https://github.com/nomic-ai/gpt4all
Synced 2024-11-08 07:10:32 +00:00
Commit 88d85be0f9
* chat: remove unused oscompat source files

  These files are no longer needed now that the hnswlib index is gone. This fixes an issue with the Windows build, as there was a compilation error in oscompat.cpp.

  Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* llm: fix pragma to be recognized by MSVC

  Replaces this MSVC warning:

    C:\msys64\home\Jared\gpt4all\gpt4all-chat\llm.cpp(53,21): warning C4081: expected '('; found 'string'

  With this:

    C:\msys64\home\Jared\gpt4all\gpt4all-chat\llm.cpp : warning : offline installer build will not check for updates!

  Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* usearch: fork usearch to fix `CreateFile` build error

  Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* dlhandle: fix incorrect assertion on Windows

  SetErrorMode returns the previous value of the error mode flags, not an indicator of success.

  Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* llamamodel: fix UB in LLamaModel::embedInternal

  It is undefined behavior to increment an STL iterator past the end of the container. Use offsets to do the math instead.

  Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* cmake: install embedding model to bundle's Resources dir on macOS

  Signed-off-by: Jared Van Bortel <jared@nomic.ai>

* ci: fix macOS build by explicitly installing Rosetta

  Signed-off-by: Jared Van Bortel <jared@nomic.ai>

---------

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
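The llamamodel fix above replaces iterator arithmetic that could step past end() with offset (index) arithmetic. A minimal sketch of the pattern, assuming a hypothetical chunked() helper that is not the actual embedInternal code: indices are clamped to the container size before any iterator is formed, so no out-of-range iterator ever exists.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Split v into chunks of at most n elements. The end of each chunk is
// computed as a clamped offset; forming begin() + j is always valid
// because j never exceeds v.size().
template <typename T>
std::vector<std::vector<T>> chunked(const std::vector<T> &v, std::size_t n) {
    std::vector<std::vector<T>> out;
    for (std::size_t i = 0; i < v.size(); i += n) {
        // Clamp the end offset; never increment an iterator past end().
        std::size_t j = std::min(v.size(), i + n);
        out.emplace_back(v.begin() + i, v.begin() + j);
    }
    return out;
}
```

The naive alternative, `it += n` inside the loop, is undefined behavior on the final partial chunk; doing the math on offsets and clamping with std::min sidesteps that.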
8 lines
253 B
Plaintext
[submodule "llama.cpp-mainline"]
	path = gpt4all-backend/llama.cpp-mainline
	url = https://github.com/nomic-ai/llama.cpp.git
	branch = master
[submodule "gpt4all-chat/usearch"]
	path = gpt4all-chat/usearch
	url = https://github.com/nomic-ai/usearch.git