* new skeleton
Signed-off-by: Max Cembalest <max@nomic.ai>
* v3 docs
Signed-off-by: Max Cembalest <max@nomic.ai>
---------
Signed-off-by: Max Cembalest <max@nomic.ai>
As discussed on Discord, this PR was not ready to be merged. CI fails on
it.
This reverts commit a602f7fde7.
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
* remove outdated comments
Signed-off-by: limez <limez@protonmail.com>
* simpler build from source
Signed-off-by: limez <limez@protonmail.com>
* update unix build script to create .so runtimes correctly
Signed-off-by: limez <limez@protonmail.com>
* configure CI build type, use RelWithDebInfo for the dev build script
Signed-off-by: limez <limez@protonmail.com>
* add clean script
Signed-off-by: limez <limez@protonmail.com>
* fix streamed token decoding / emoji
Signed-off-by: limez <limez@protonmail.com>
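The emoji bug is worth a note: a streamed token can end partway through a multi-byte UTF-8 character, so decoding each chunk independently produces replacement characters. The actual fix lives in the TypeScript bindings; below is a minimal Python sketch of the buffering idea, not the shipped code:

```python
import codecs

# An incremental decoder buffers incomplete multi-byte sequences
# between calls instead of emitting U+FFFD replacement characters.
decoder = codecs.getincrementaldecoder("utf-8")()

def on_token_bytes(chunk: bytes) -> str:
    # Returns only fully decoded text; a partial emoji stays buffered
    # until the rest of its bytes arrive with the next token.
    return decoder.decode(chunk)

# "🚀" is 4 bytes (f0 9f 9a 80); split across two streamed tokens:
assert on_token_bytes(b"\xf0\x9f") == ""
assert on_token_bytes(b"\x9a\x80") == "🚀"
```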
* remove deprecated nCtx
Signed-off-by: limez <limez@protonmail.com>
* update typings
Signed-off-by: jacob <jacoobes@sern.dev>
* readme, fix misspellings
Signed-off-by: jacob <jacoobes@sern.dev>
* cuda/backend logic changes + name napi methods like their js counterparts
Signed-off-by: limez <limez@protonmail.com>
* convert llmodel example into a test, separate test suite that can run in CI
Signed-off-by: limez <limez@protonmail.com>
* update examples / naming
Signed-off-by: limez <limez@protonmail.com>
* update deps, remove the need for binding.ci.gyp, make the node-gyp-build fallback easier to test
Signed-off-by: limez <limez@protonmail.com>
* make sure the assert-backend-sources.js script is published, but not the others
Signed-off-by: limez <limez@protonmail.com>
* build correctly on Windows (regression from node-gyp-build)
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
* codespell
Signed-off-by: limez <limez@protonmail.com>
* make sure dlhandle.cpp gets linked correctly
Signed-off-by: limez <limez@protonmail.com>
* add include for check_cxx_compiler_flag call during aarch64 builds
Signed-off-by: limez <limez@protonmail.com>
* x86 -> arm64 cross-compilation of runtimes and bindings
Signed-off-by: limez <limez@protonmail.com>
* default to cpu instead of kompute on arm64
Signed-off-by: limez <limez@protonmail.com>
* formatting, more minimal example
Signed-off-by: limez <limez@protonmail.com>
---------
Signed-off-by: limez <limez@protonmail.com>
Signed-off-by: jacob <jacoobes@sern.dev>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Co-authored-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Co-authored-by: jacob <jacoobes@sern.dev>
* rebase onto llama.cpp commit ggerganov/llama.cpp@d46dbc76f
* support for CUDA backend (enabled by default)
* partial support for Occam's Vulkan backend (disabled by default)
* partial support for HIP/ROCm backend (disabled by default)
* sync llama.cpp.cmake with upstream llama.cpp CMakeLists.txt
* changes to GPT4All backend, bindings, and chat UI to handle choice of llama.cpp backend (Kompute or CUDA)
* ship CUDA runtime with installed version
* make device selection in the UI on macOS actually do something
* model whitelist: remove dbrx, mamba, persimmon, plamo; add internlm and starcoder2
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
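A rough sketch of how the backend choice surfaces in the Python bindings, assuming the existing `device` argument accepts a "cuda" value as this changelog implies (model name illustrative):

```python
from gpt4all import GPT4All

# Kompute remains the default GPU path; CUDA is selected explicitly.
# The accepted strings here ("cpu", "gpu", "cuda") are an assumption
# based on this changelog, not an exhaustive list.
model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf", device="cuda")
print(model.generate("Why is the sky blue?", max_tokens=64))
```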
* fixed bindings to match new API
Signed-off-by: Jerry Caligiure <jerry@noof.biz>
* added update to readme
Signed-off-by: Jerry Caligiure <jerry@noof.biz>
---------
Signed-off-by: Jerry Caligiure <jerry@noof.biz>
Co-authored-by: Jerry Caligiure <jerry@noof.biz>
* llamamodel: only print device used in verbose mode
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
* python: expose backend and device via GPT4All properties
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
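A usage sketch for the new properties; the property names follow the changelog wording, and the example values are assumptions:

```python
from gpt4all import GPT4All

model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf", device="gpu")

# Introspect what was actually loaded: which llama.cpp backend is in
# use and which device the model ended up on.
print(model.backend)  # e.g. "kompute" or "cuda"
print(model.device)   # e.g. a GPU name, or None when running on CPU
```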
* backend: const correctness fixes
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
* python: bump version
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
* python: typing fixups
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
* python: fix segfault with closed GPT4All
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
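The class of segfault being fixed is a call into a model whose native handle was already freed. A guard of roughly this shape, with hypothetical names rather than the actual gpt4all internals, turns the crash into a Python exception:

```python
class LLModel:
    def __init__(self) -> None:
        self._handle = object()  # stand-in for the native model pointer

    def close(self) -> None:
        # Free native resources, then drop the pointer so later calls
        # can be rejected in Python instead of crashing in C.
        self._handle = None

    def generate(self, prompt: str) -> str:
        if self._handle is None:
            raise ValueError("attempted operation on a closed model")
        ...  # safe to pass self._handle to the native library here
```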
---------
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
* actually submit larger batches with increased n_ctx
* fix crash when llama_tokenize returns no tokens
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Other changes:
* fix memory leak in llmodel_available_gpu_devices
* drop model argument from llmodel_available_gpu_devices
* breaking: make GPT4All/Embed4All arguments past model_name keyword-only (see the sketch below)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Signed-off-by: jacob <jacoobes@sern.dev>
Signed-off-by: limez <limez@protonmail.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: limez <limez@protonmail.com>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
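A before/after sketch of the keyword-only breaking change flagged above; `model_path` and `allow_download` are existing GPT4All constructor parameters used here for illustration:

```python
from gpt4all import GPT4All

# Before: arguments past model_name could be passed positionally.
# model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf", "/path/to/models")

# After: everything past model_name must be named; the positional
# form now raises TypeError.
model = GPT4All(
    "mistral-7b-instruct-v0.1.Q4_0.gguf",
    model_path="/path/to/models",
    allow_download=False,
)
```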
* make sure encoding is identity for Range requests
* use a .part file for partial downloads
* verify using file size and MD5 from models3.json
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
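The three download changes fit together: identity encoding keeps Range offsets aligned with file bytes, the .part file keeps incomplete data away from the real filename, and the expected size and MD5 come from models3.json. A condensed Python sketch of the pattern (not the shipped code):

```python
import hashlib, os, requests

def download(url: str, dest: str, expected_size: int, expected_md5: str) -> None:
    part = dest + ".part"
    offset = os.path.getsize(part) if os.path.exists(part) else 0
    headers = {
        # Identity encoding keeps byte offsets meaningful: a gzipped
        # stream could not be resumed by file position.
        "Accept-Encoding": "identity",
        "Range": f"bytes={offset}-",
    }
    with requests.get(url, headers=headers, stream=True, timeout=30) as resp:
        resp.raise_for_status()
        # 206 means the server honored the Range; anything else restarts.
        mode = "ab" if resp.status_code == 206 else "wb"
        with open(part, mode) as f:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                f.write(chunk)

    # Verify size and MD5 from models3.json before the final rename,
    # so a corrupt download never appears under the real filename.
    md5 = hashlib.md5()
    with open(part, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            md5.update(chunk)
    if os.path.getsize(part) != expected_size or md5.hexdigest() != expected_md5:
        raise RuntimeError(f"verification failed for {dest}")
    os.replace(part, dest)
```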
* chat: fix non-AVX CPU detection on Windows
* bindings: throw exception instead of logging to console
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Key changes:
* honor empty system prompt argument
* current_chat_session is now read-only and defaults to None (see the sketch below)
* deprecate fallback prompt template for unknown models
* fix mistakes from #2086
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Signed-off-by: Tare Ebelo <75279482+TareHimself@users.noreply.github.com>
Signed-off-by: jacob <jacoobes@sern.dev>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: jacob <jacoobes@sern.dev>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
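A sketch of the Python-facing behavior after these key changes, with an illustrative model name; the semantics shown follow the changelog items above:

```python
from gpt4all import GPT4All

model = GPT4All("mistral-7b-instruct-v0.1.Q4_0.gguf")

# An empty system prompt is now honored as "no system prompt" rather
# than being silently replaced by a default.
with model.chat_session(system_prompt=""):
    model.generate("Name a prime number.", max_tokens=16)
    # Read-only view of the accumulated messages; outside a session
    # it defaults to None.
    print(model.current_chat_session)
```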