Commit Graph

1934 Commits (df5d3741870a564065a48f0ef0630c25d34bfc17)

Author SHA1 Message Date
Adam Treat b4d82ea289 Bump to the latest fixes for vulkan in llama. 12 months ago
Adam Treat 12f943e966 Fix regenerate button to be deterministic and bump the llama version to latest we have for gguf. 12 months ago
Cebtenzzre 40c78d2f78 python binding: print debug message to stderr 12 months ago
Adam Treat 5d346e13d7 Add q6_k kernels for vulkan. 12 months ago
Adam Treat 4eefd386d0 Refactor for subgroups on mat * vec kernel. 12 months ago
Cebtenzzre 3c2aa299d8 gptj: remove unused variables 12 months ago
Cebtenzzre f9deb87d20 convert scripts: add feed-forward length for better compatibility
This GGUF key is used by all llama.cpp models with upstream support.
12 months ago
Cebtenzzre cc7675d432 convert scripts: make gptj script executable 12 months ago
Cebtenzzre 0493e6eb07 convert scripts: use bytes_to_unicode from transformers 12 months ago
Cebtenzzre a49a1dcdf4 chatllm: grammar fix 12 months ago
Cebtenzzre d5d72f0361 gpt-j: update inference to match latest llama.cpp insights
- Use F16 KV cache
- Store transposed V in the cache
- Avoid unnecessary Q copy

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

ggml upstream commit 0265f0813492602fec0e1159fe61de1bf0ccaf78
12 months ago
Cebtenzzre 050e7f076e backend: port GPT-J to GGUF 12 months ago
Cebtenzzre 31b20f093a modellist: fix the system prompt 12 months ago
Cebtenzzre 8f3abb37ca fix references to removed model types 12 months ago
Cebtenzzre 4219c0e2e7 convert scripts: make them directly executable 12 months ago
Cebtenzzre ce7be1db48 backend: use llamamodel.cpp for Falcon 12 months ago
Cebtenzzre cca9e6ce81 convert_mpt_hf_to_gguf.py: better tokenizer decoding 12 months ago
Cebtenzzre 25297786db convert scripts: load model as late as possible 12 months ago
Cebtenzzre fd47088f2b conversion scripts: cleanup 12 months ago
Cebtenzzre 6277eac9cc backend: use llamamodel.cpp for StarCoder 12 months ago
Cebtenzzre aa706ab1ff backend: use gguf branch of llama.cpp-mainline 12 months ago
Cebtenzzre 17fc9e3e58 backend: port Replit to GGUF 12 months ago
Cebtenzzre 7c67262a13 backend: port MPT to GGUF 12 months ago
Cebtenzzre 42bcb814b3 backend: port BERT to GGUF 12 months ago
Cebtenzzre 4392bf26e0 pyllmodel: print specific error message 12 months ago
Cebtenzzre 34f2ec2b33 gpt4all.py: GGUF 12 months ago
Cebtenzzre 1d29e4696c llamamodel: metal supports all quantization types now 12 months ago
Aaron Miller 507753a37c macos build fixes 12 months ago
Adam Treat d90d003a1d Latest rebase on llama.cpp with gguf support. 12 months ago
Akarshan Biswas 5f3d739205 appdata: update software description 12 months ago
Akarshan Biswas b4cf12e1bd Update to 2.4.19 12 months ago
Akarshan Biswas 21a5709b07 Remove unnecessary stuff from manifest 12 months ago
Akarshan Biswas 4426640f44 Add flatpak manifest 12 months ago
Aaron Miller 6711bddc4c launch browser instead of maintenancetool from offline builds 12 months ago
Aaron Miller 7f979c8258 Build offline installers in CircleCI 12 months ago
Adam Treat 99c106e6b5 Fix a bug seen on AMD RADEON cards with vulkan backend. 12 months ago
Andriy Mulyar 9611c4081a Update README.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
1 year ago
kevinbazira 17cb4a86d1 Replace git clone SSH URI with HTTPS URL
Running `git clone --recurse-submodules git@github.com:nomic-ai/gpt4all.git`
returns `Permission denied (publickey)` as shown below:
```
git clone --recurse-submodules git@github.com:nomic-ai/gpt4all.git
Cloning into gpt4all...
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.
```

This change replaces `git@github.com:nomic-ai/gpt4all.git` with
`https://github.com/nomic-ai/gpt4all.git` which runs without permission issues.
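The substitution this commit performs can be sketched as a one-line rewrite; the `sed` pattern below is illustrative, not part of the commit itself:

```shell
# Rewrite a GitHub SSH URI into its HTTPS equivalent (GitHub host assumed).
ssh_uri="git@github.com:nomic-ai/gpt4all.git"
https_url=$(printf '%s\n' "$ssh_uri" | sed -e 's#^git@github\.com:#https://github.com/#')
echo "$https_url"   # https://github.com/nomic-ai/gpt4all.git
```

The HTTPS form works for anonymous read access, so `git clone --recurse-submodules` succeeds without an SSH key registered on GitHub.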

resolves nomic-ai/gpt4all#8, resolves nomic-ai/gpt4all#49
1 year ago
Andriy Mulyar 0d1edaf029 Update README.md with GPU support
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
1 year ago
Adam Treat dc80d1e578 Fix up the offline installer. 1 year ago
Jacob Nguyen e86c63750d Update llama.cpp.cmake
Signed-off-by: Jacob Nguyen <76754747+jacoobes@users.noreply.github.com>
1 year ago
Adam Treat f47e698193 Release notes for v2.4.19 and bump the version. 1 year ago
Adam Treat 84905aa281 Fix for crashes on systems where vulkan is not installed properly. 1 year ago
Adam Treat ecf014f03b Release notes for v2.4.18 and bump the version. 1 year ago
Adam Treat e6e724d2dc Actually bump the version. 1 year ago
Adam Treat 06a833e652 Send actual and requested device info for those who have opt-in. 1 year ago
Adam Treat 045f6e6cdc Link against ggml in bin so we can get the available devices without loading a model. 1 year ago
Adam Treat 0f046cf905 Bump the Python version to python-v1.0.12 to restrict the quants that vulkan recognizes. 1 year ago
Adam Treat 655372dbfa Release notes for v2.4.17 and bump the version. 1 year ago
Adam Treat aa33419c6e Fallback to CPU more robustly. 1 year ago