Commit Graph

1663 Commits (a153cc5b253e0dee78f63d147985bfbc55677db7)

Author SHA1 Message Date
Daniel Salvatierra c72c73a94f
app.py: add --device option for GPU support (#1769)
Signed-off-by: Daniel Salvatierra <dsalvat1@gmail.com>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
9 months ago
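For a sense of what a --device flag like this typically wires up, here is a minimal sketch (not the actual app.py change) that forwards an argparse option to the Python bindings; the device keyword and the model filename are assumptions for illustration:

```python
import argparse
from gpt4all import GPT4All  # assumes the gpt4all Python bindings are installed

parser = argparse.ArgumentParser()
parser.add_argument("--device", default="cpu",
                    help="inference device to request, e.g. 'cpu' or 'gpu'")
args = parser.parse_args()

# Forward the chosen device to the model constructor. The `device` keyword is
# an assumption about the bindings' API, not taken from this commit.
model = GPT4All("rift-coder-v0-7b-q4_0.gguf", device=args.device)
print(model.generate("def fib(n):"))
```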
Cal Alaera 528eb1e7ad
Update server.cpp to return valid created timestamps (#1763)
Signed-off-by: Cal Alaera <59891537+CalAlaera@users.noreply.github.com>
9 months ago
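For reference, an OpenAI-style created field is a Unix timestamp in whole seconds; a quick Python illustration of what a valid value looks like (not the server.cpp change itself):

```python
import time

# OpenAI-compatible responses report `created` as seconds since the Unix epoch,
# so a valid value is simply the current time truncated to an integer.
created = int(time.time())
print(created)
```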
Jared Van Bortel d1c56b8b28
Implement configurable context length (#1749) 9 months ago
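To exercise a configurable context length from the Python bindings, a minimal sketch looks like the following; the n_ctx keyword name is an assumption here, so check the bindings' signature:

```python
from gpt4all import GPT4All  # assumes the gpt4all Python bindings are installed

# n_ctx is the assumed name of the configurable context length parameter;
# larger values allow longer prompts at the cost of more memory.
model = GPT4All("rift-coder-v0-7b-q4_0.gguf", n_ctx=4096)
print(model.generate("def fib(n):", max_tokens=64))
```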
Jacob Nguyen 7aa0f779de
Update mkdocs.yml (#1759)
update doc routing
9 months ago
Jacob Nguyen a1f27072c2
fix/macm1ts (#1746)
* make runtime library backend universal searchable

* corepack enable

* fix

* pass tests

* simpler

* add more jsdoc

* fix tests

* fix up circle ci

* bump version

* remove false positive warning

* add disclaimer

* update readme

* revert

* update ts docs

---------

Co-authored-by: Matthew Nguyen <matthewpnguyen@Matthews-MacBook-Pro-7.local>
9 months ago
Jared Van Bortel 3acbef14b7
fix AVX support by removing direct linking to AVX2 libs (#1750) 9 months ago
Jared Van Bortel 0600f551b3
chatllm: do not attempt to serialize incompatible state (#1742) 9 months ago
Jacob Nguyen 9481762802
Update continue_config.yml, should fix ts docs failing (#1743) 9 months ago
Jared Van Bortel 778264fbab python: don't use importlib as_file for a directory
The only reason to use as_file is to support copying a file from a
frozen package. We don't currently support this anyway, and as_file
isn't supported until Python 3.9, so get rid of it.

Fixes #1605
9 months ago
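Roughly, the trade-off described above looks like this (a sketch of the general idea, not the bindings' exact replacement code): as_file is only needed when package data might be bundled inside an archive, and it requires Python 3.9+, whereas an on-disk package directory can be resolved directly.

```python
import os
import gpt4all  # stands in for whatever package owns the data directory

# importlib.resources.as_file (Python 3.9+) would materialize packaged data as a
# real file even if it lived inside a zip/frozen archive. When the package is
# always an on-disk directory, its location can be taken straight from __file__.
pkg_dir = os.path.dirname(gpt4all.__file__)
print(pkg_dir)
```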
Jared Van Bortel 1df3da0a88 update llama.cpp for clang warning fix 9 months ago
aj-gameon 7facb8207b
docs: golang --recurse-submodules (#1720)
Co-authored-by: aj-gameon <aj@gameontechnology.com>
9 months ago
Jared Van Bortel dfd8ef0186
backend: use ggml_new_graph for GGML backend v2 (#1719) 10 months ago
Adam Treat fb3b1ceba2 Do not attempt to do a blocking retrieval if we don't have any collections. 10 months ago
Jared Van Bortel 9e28dfac9c
Update to latest llama.cpp (#1706) 10 months ago
Moritz Tim W 012f399639
fix typo (#1697) 10 months ago
Adam Treat a328f9ed3f Add a button to the collections dialog. Fix close button. 10 months ago
Adam Treat e4ff972522 Bump and release v2.5.4 10 months ago
Adam Treat 4862e8b650 Networking retry on download error for models. 10 months ago
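The retry-on-download-error idea is the standard pattern sketched below in Python for illustration only; the chat application itself implements this in its Qt/C++ download code:

```python
import time
import requests  # illustration only; the real downloader is Qt/C++

def download_with_retry(url: str, dest: str, attempts: int = 3) -> None:
    """Retry a model download a few times before giving up."""
    for attempt in range(1, attempts + 1):
        try:
            resp = requests.get(url, timeout=60)
            resp.raise_for_status()
            with open(dest, "wb") as f:
                f.write(resp.content)
            return
        except requests.RequestException:
            if attempt == attempts:
                raise
            time.sleep(2 ** attempt)  # simple backoff before the next try
```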
Jared Van Bortel 078c3bd85c
models2.json: add Orca 2 models (#1672) 10 months ago
AT 84749a4ced Update gpt4all_chat.md
Signed-off-by: AT <manyoso@users.noreply.github.com>
10 months ago
AT f1c58d0e2c Update gpt4all_chat.md
Signed-off-by: AT <manyoso@users.noreply.github.com>
10 months ago
dsalvatierra 76413e1d03 Refactor engines module to fetch engine details from API

Update chat.py

Signed-off-by: Daniel Salvatierra <dsalvat1@gmail.com>
10 months ago
dsalvatierra db70f1752a Update .gitignore and Dockerfile, add .env file and modify test batch
10 months ago
dsalvat1 f3eaa33ce7 Fixing API problem - bin files are deprecated 10 months ago
Adam Treat 9e27a118ed Fix system prompt. 10 months ago
Adam Treat 34555c4934 Bump version and release notes for v2.5.3 10 months ago
Adam Treat 9a3dd8815d Fix GUI hang with localdocs by removing file system watcher in modellist. 10 months ago
Adam Treat c1809a23ba Fix text color on mac. 10 months ago
Adam Treat 59ed2a0bea Use a global constant and remove a debug line. 10 months ago
Adam Treat eecf351c64 Reduce copied code. 10 months ago
AT abd4703c79 Update gpt4all-chat/embllm.cpp
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
10 months ago
AT 4b413a60e4 Update gpt4all-chat/embeddings.cpp
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
10 months ago
AT 17b346dfe7 Update gpt4all-chat/embeddings.cpp
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
10 months ago
AT 71e37816cc Update gpt4all-chat/qml/ModelDownloaderDialog.qml
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
10 months ago
Adam Treat cce5fe2045 Fix macos build. 10 months ago
Adam Treat 371e2a5cbc LocalDocs version 2 with text embeddings. 10 months ago
Jared Van Bortel d4ce9f4a7c
llmodel_c: improve quality of error messages (#1625) 11 months ago
aj-gameon 8fabf0be4a
Updated readme for correct install instructions (#1607)
Co-authored-by: aj-gameon <aj@gameontechnology.com>
11 months ago
Jacob Nguyen 45d76d6234
ts/tooling (#1602) 11 months ago
Jacob Nguyen da95bcfb4b
vulkan support for typescript bindings, gguf support (#1390)
* adding some native methods to cpp wrapper

* gpu seems to work

* typings and add availibleGpus method

* fix spelling

* fix syntax

* more

* normalize methods to conform to py

* remove extra dynamic linker deps when building with vulkan

* bump python version (library linking fix)

* Don't link against libvulkan.

* vulkan python bindings on windows fixes

* Bring the vulkan backend to the GUI.

* When device is Auto (the default), we only consider discrete GPUs; otherwise we fall back to CPU.

* Show the device we're currently using.

* Fix up the name and formatting.

* init at most one vulkan device, submodule update

fixes issues w/ multiple of the same gpu

* Update the submodule.

* Add version 2.4.15 and bump the version number.

* Fix a bug where we're not properly falling back to CPU.

* Sync to a newer version of llama.cpp with bugfix for vulkan.

* Report the actual device we're using.

* Only show GPU when we're actually using it.

* Bump to new llama with new bugfix.

* Release notes for v2.4.16 and bump the version.

* Fallback to CPU more robustly.

* Release notes for v2.4.17 and bump the version.

* Bump the Python version to python-v1.0.12 to restrict the quants that vulkan recognizes.

* Link against ggml in bin so we can get the available devices without loading a model.

* Send actual and requested device info for those who have opt-in.

* Actually bump the version.

* Release notes for v2.4.18 and bump the version.

* Fix for crashes on systems where vulkan is not installed properly.

* Release notes for v2.4.19 and bump the version.

* fix typings and vulkan build works on win

* Add flatpak manifest

* Remove unnecessary stuffs from manifest

* Update to 2.4.19

* appdata: update software description

* Latest rebase on llama.cpp with gguf support.

* macos build fixes

* llamamodel: metal supports all quantization types now

* gpt4all.py: GGUF

* pyllmodel: print specific error message

* backend: port BERT to GGUF

* backend: port MPT to GGUF

* backend: port Replit to GGUF

* backend: use gguf branch of llama.cpp-mainline

* backend: use llamamodel.cpp for StarCoder

* conversion scripts: cleanup

* convert scripts: load model as late as possible

* convert_mpt_hf_to_gguf.py: better tokenizer decoding

* backend: use llamamodel.cpp for Falcon

* convert scripts: make them directly executable

* fix references to removed model types

* modellist: fix the system prompt

* backend: port GPT-J to GGUF

* gpt-j: update inference to match latest llama.cpp insights

- Use F16 KV cache
- Store transposed V in the cache
- Avoid unnecessary Q copy

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

ggml upstream commit 0265f0813492602fec0e1159fe61de1bf0ccaf78

* chatllm: grammar fix

* convert scripts: use bytes_to_unicode from transformers

* convert scripts: make gptj script executable

* convert scripts: add feed-forward length for better compatibility

This GGUF key is used by all llama.cpp models with upstream support.

* gptj: remove unused variables

* Refactor for subgroups on mat * vec kernel.

* Add q6_k kernels for vulkan.

* python binding: print debug message to stderr

* Fix regenerate button to be deterministic and bump the llama version to latest we have for gguf.

* Bump to the latest fixes for vulkan in llama.

* llamamodel: fix static vector in LLamaModel::endTokens

* Switch to new models2.json for new gguf release and bump our version to 2.5.0.

* Bump to latest llama/gguf branch.

* chat: report reason for fallback to CPU

* chat: make sure to clear fallback reason on success

* more accurate fallback descriptions

* differentiate between init failure and unsupported models

* backend: do not use Vulkan with non-LLaMA models

* Add q8_0 kernels to kompute shaders and bump to latest llama/gguf.

* backend: fix build with Visual Studio generator

Use the $<CONFIG> generator expression instead of CMAKE_BUILD_TYPE. This
is needed because Visual Studio is a multi-configuration generator, so
we do not know what the build type will be until `cmake --build` is
called.

Fixes #1470

* remove old llama.cpp submodules

* Reorder and refresh our models2.json.

* rebase on newer llama.cpp

* python/embed4all: use gguf model, allow passing kwargs/overriding model

* Add starcoder, rift and sbert to our models2.json.

* Push a new version number for llmodel backend now that it is based on gguf.

* fix stray comma in models2.json

Signed-off-by: Aaron Miller <apage43@ninjawhale.com>

* Speculative fix for build on mac.

* chat: clearer CPU fallback messages

* Fix crasher with an empty string for prompt template.

* Update the language here to avoid misunderstanding.

* added EM German Mistral Model

* make codespell happy

* issue template: remove "Related Components" section

* cmake: install the GPT-J plugin (#1487)

* Do not delete saved chats if we fail to serialize properly.

* Restore state from text if necessary.

* Another codespell attempted fix.

* llmodel: do not call magic_match unless build variant is correct (#1488)

* chatllm: do not write uninitialized data to stream (#1486)

* mat*mat for q4_0, q8_0

* do not process prompts on gpu yet

* python: support Path in GPT4All.__init__ (#1462) (see the sketch after this commit's change list)

* llmodel: print an error if the CPU does not support AVX (#1499)

* python bindings should be quiet by default

* disable llama.cpp logging unless GPT4ALL_VERBOSE_LLAMACPP envvar is
  nonempty
* make verbose flag for retrieve_model default false (but also be
  overridable via gpt4all constructor)

should be able to run a basic test:

```python
import gpt4all
model = gpt4all.GPT4All('/Users/aaron/Downloads/rift-coder-v0-7b-q4_0.gguf')
print(model.generate('def fib(n):'))
```

and see no non-model output when successful

* python: always check status code of HTTP responses (#1502)

* Always save chats to disk, but save them as text by default. This also changes the UI behavior to always open a 'New Chat' and set it as current, instead of setting a restored chat as current. This improves usability by not requiring the user to wait if they want to immediately start chatting.

* Update README.md

Signed-off-by: umarmnaq <102142660+umarmnaq@users.noreply.github.com>

* fix embed4all filename

https://discordapp.com/channels/1076964370942267462/1093558720690143283/1161778216462192692

Signed-off-by: Aaron Miller <apage43@ninjawhale.com>

* Improves Java API signatures maintaining back compatibility

* python: replace deprecated pkg_resources with importlib (#1505)

* Updated chat wishlist (#1351)

* q6k, q4_1 mat*mat

* update mini-orca 3b to gguf2, license

Signed-off-by: Aaron Miller <apage43@ninjawhale.com>

* convert scripts: fix AutoConfig typo (#1512)

* publish config https://docs.npmjs.com/cli/v9/configuring-npm/package-json#publishconfig (#1375)

merge into my branch

* fix appendBin

* fix gpu not initializing first

* sync up

* progress, still wip on destructor

* some detection work

* untested dispose method

* add js side of dispose

* Update gpt4all-bindings/typescript/index.cc

Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* Update gpt4all-bindings/typescript/index.cc

Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* Update gpt4all-bindings/typescript/index.cc

Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* Update gpt4all-bindings/typescript/src/gpt4all.d.ts

Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* Update gpt4all-bindings/typescript/src/gpt4all.js

Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* Update gpt4all-bindings/typescript/src/util.js

Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>

* fix tests

* fix circleci for nodejs

* bump version

---------

Signed-off-by: Aaron Miller <apage43@ninjawhale.com>
Signed-off-by: umarmnaq <102142660+umarmnaq@users.noreply.github.com>
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
Co-authored-by: Aaron Miller <apage43@ninjawhale.com>
Co-authored-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: Akarshan Biswas <akarshan.biswas@gmail.com>
Co-authored-by: Cebtenzzre <cebtenzzre@gmail.com>
Co-authored-by: Jan Philipp Harries <jpdus@users.noreply.github.com>
Co-authored-by: umarmnaq <102142660+umarmnaq@users.noreply.github.com>
Co-authored-by: Alex Soto <asotobu@gmail.com>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
11 months ago
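A minimal sketch of the Path support mentioned in #1462 above, assuming the model_path keyword accepts a pathlib.Path (the directory below is hypothetical):

```python
from pathlib import Path
from gpt4all import GPT4All  # assumes the gpt4all Python bindings are installed

# Hypothetical local model directory; per #1462 a pathlib.Path can be passed
# instead of a plain string.
models_dir = Path.home() / "models"
model = GPT4All("rift-coder-v0-7b-q4_0.gguf", model_path=models_dir)
print(model.generate("def fib(n):"))
```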
cebtenzzre 64101d3af5 update llama.cpp-mainline 11 months ago
cebtenzzre 3c561bcdf2 python: bump bindings version for AMD fixes 11 months ago
Adam Treat ffef60912f Update to llama.cpp 11 months ago
Adam Treat bc88271520 Bump version to v2.5.3 and release notes. 11 months ago
cebtenzzre 5508e43466 build_and_run: clarify which additional Qt libs are needed
Signed-off-by: cebtenzzre <cebtenzzre@gmail.com>
11 months ago
cebtenzzre 79a5522931 fix references to old backend implementations 11 months ago
Adam Treat f529d55380 Move this logic to QML. 11 months ago
Adam Treat f5f22fdbd0 Update llama.cpp for latest bugfixes. 11 months ago
Adam Treat 5c0d077f74 Remove leading whitespace in responses. 11 months ago
Adam Treat 131cfcdeae Don't regenerate the name for deserialized chats. 11 months ago