Commit Graph

1483 Commits

Author SHA1 Message Date
cebtenzzre
fd3014016b
docs: clarify Vulkan dep in build instructions for bindings (#1525) 2023-10-18 12:09:52 -04:00
cebtenzzre
ac33bafb91
docs: improve build_and_run.md (#1524) 2023-10-18 11:37:28 -04:00
cebtenzzre
9a19c740ee
kompute: fix library loading issues with kp_logger (#1517) 2023-10-16 16:58:17 -04:00
Aaron Miller
f79557d2aa speedup: just use mat*vec shaders for mat*mat
So far my from-scratch mat*mats are still slower than just running more
invocations of the existing Metal-ported mat*vec shaders. It should be
theoretically possible to make a mat*mat that's faster (for actual
mat*mat cases) than an optimal mat*vec, but it will need to be at
*least* as fast as the mat*vec op and then take special care to be
cache-friendly and to save memory bandwidth, since the number of compute
ops is the same.
2023-10-16 13:45:51 -04:00
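The equal-op-count claim in the commit above is easy to sanity-check outside shader code: computing C = A·B one column at a time as repeated mat*vec performs exactly the same multiply-adds as a single mat*mat. A minimal NumPy sketch (illustrative only, not the actual Kompute/Metal shaders; shapes are arbitrary):

```python
import numpy as np

m, k, n = 64, 128, 32
A = np.random.rand(m, k).astype(np.float32)
B = np.random.rand(k, n).astype(np.float32)

# mat*mat in one call: m*n*k multiply-adds
C_matmat = A @ B

# same result from n mat*vec invocations, one per column of B; the total
# compute-op count is identical, so a dedicated mat*mat kernel can only
# win through better cache reuse and lower memory traffic
C_matvec = np.stack([A @ B[:, j] for j in range(n)], axis=1)

assert np.allclose(C_matmat, C_matvec, atol=1e-4)
```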
cebtenzzre
22de3c56bd
convert scripts: fix AutoConfig typo (#1512) 2023-10-13 14:16:51 -04:00
Aaron Miller
10f9b49313 update mini-orca 3b to gguf2, license
Signed-off-by: Aaron Miller <apage43@ninjawhale.com>
2023-10-12 14:57:07 -04:00
Aaron Miller
2490977f89 q6k, q4_1 mat*mat 2023-10-12 14:56:54 -04:00
niansa/tuxifan
a35f1ab784
Updated chat wishlist (#1351) 2023-10-12 14:01:44 -04:00
cebtenzzre
4d4275d1b8
python: replace deprecated pkg_resources with importlib (#1505) 2023-10-12 13:35:27 -04:00
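The migration in #1505 follows the standard pkg_resources → importlib pattern; a minimal sketch of the general idea (the bindings' exact call sites may differ):

```python
# Old: pkg_resources (setuptools) is slow to import and now deprecated
# from pkg_resources import get_distribution
# version = get_distribution("gpt4all").version

# New: stdlib replacement, available since Python 3.8
from importlib.metadata import version

print(version("gpt4all"))
```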
Alex Soto
3c45a555e9 Improves Java API signatures while maintaining backward compatibility 2023-10-12 07:53:12 -04:00
Aaron Miller
f39df0906e fix embed4all filename
https://discordapp.com/channels/1076964370942267462/1093558720690143283/1161778216462192692

Signed-off-by: Aaron Miller <apage43@ninjawhale.com>
2023-10-12 07:52:56 -04:00
umarmnaq
005c092943 Update README.md
Signed-off-by: umarmnaq <102142660+umarmnaq@users.noreply.github.com>
2023-10-12 07:52:36 -04:00
Adam Treat
908aec27fe Always save chats to disk, but save them as text by default. This also changes
the UI behavior to always open a 'New Chat' and set it as current, instead of
setting a restored chat as current. This improves usability by not requiring
the user to wait if they want to start chatting immediately.
2023-10-12 07:52:11 -04:00
cebtenzzre
aed2068342
python: always check status code of HTTP responses (#1502) 2023-10-11 18:11:28 -04:00
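For #1502, the usual pattern with the requests library looks like the sketch below (assuming the bindings use requests for downloads; the URL is a placeholder):

```python
import requests

url = "https://example.com/models.json"  # placeholder URL
response = requests.get(url, timeout=30)
response.raise_for_status()  # raise requests.HTTPError on 4xx/5xx instead of parsing an error page
data = response.json()
```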
Aaron Miller
afaa291eab python bindings should be quiet by default
* disable llama.cpp logging unless the GPT4ALL_VERBOSE_LLAMACPP env var is
  nonempty
* make the verbose flag for retrieve_model default to false (but allow
  overriding it via the gpt4all constructor)

should be able to run a basic test:

```python
import gpt4all
model = gpt4all.GPT4All('/Users/aaron/Downloads/rift-coder-v0-7b-q4_0.gguf')
print(model.generate('def fib(n):'))
```

and see no non-model output when successful
2023-10-11 14:14:36 -07:00
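To get back the logging that the commit above silences, the env var and constructor flag it mentions can be used together; a sketch (assuming the env var must be set before the backend library is first loaded):

```python
import os
os.environ["GPT4ALL_VERBOSE_LLAMACPP"] = "1"  # any nonempty value re-enables llama.cpp logging

import gpt4all

# verbose is also overridable per-instance via the constructor
model = gpt4all.GPT4All('rift-coder-v0-7b-q4_0.gguf', verbose=True)
```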
cebtenzzre
7b611b49f2
llmodel: print an error if the CPU does not support AVX (#1499) 2023-10-11 15:09:40 -04:00
cebtenzzre
f81b4b45bf
python: support Path in GPT4All.__init__ (#1462) 2023-10-11 14:12:40 -04:00
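The Path support in #1462 boils down to accepting os.PathLike wherever a model path is taken; a minimal sketch of the normalization pattern (the helper name is illustrative, not the bindings' actual code):

```python
import os
from pathlib import Path
from typing import Union

def normalize_model_path(path: Union[str, os.PathLike]) -> str:
    # accept str or pathlib.Path and hand the C backend the str it expects
    return str(Path(path).expanduser())

print(normalize_model_path(Path("~/models") / "rift-coder-v0-7b-q4_0.gguf"))
```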
Aaron Miller
043617168e do not process prompts on gpu yet 2023-10-11 13:15:50 -04:00
Aaron Miller
64001a480a mat*mat for q4_0, q8_0 2023-10-11 13:15:50 -04:00
cebtenzzre
04499d1c7d
chatllm: do not write uninitialized data to stream (#1486) 2023-10-11 11:31:34 -04:00
cebtenzzre
7a19047329
llmodel: do not call magic_match unless build variant is correct (#1488) 2023-10-11 11:30:48 -04:00
Adam Treat
df8528df73 Another attempted codespell fix. 2023-10-11 09:17:38 -04:00
Adam Treat
f0742c22f4 Restore state from text if necessary. 2023-10-11 09:16:02 -04:00
Adam Treat
35f9cdb70a Do not delete saved chats if we fail to serialize properly. 2023-10-11 09:16:02 -04:00
cebtenzzre
9fb135e020
cmake: install the GPT-J plugin (#1487) 2023-10-10 15:50:03 -04:00
Cebtenzzre
df66226f7d issue template: remove "Related Components" section 2023-10-10 10:39:28 -07:00
Aaron Miller
3c25d81759 make codespell happy 2023-10-10 12:00:06 -04:00
Jan Philipp Harries
4f0cee9330 added EM German Mistral Model 2023-10-10 11:44:43 -04:00
Adam Treat
56c0d2898d Update the language here to avoid misunderstanding. 2023-10-06 14:38:42 -04:00
Adam Treat
b2cd3bdb3f Fix crasher with an empty string for prompt template. 2023-10-06 12:44:53 -04:00
Cebtenzzre
5fe685427a chat: clearer CPU fallback messages 2023-10-06 11:35:14 -04:00
Adam Treat
eec906aa05 Speculative fix for the build on macOS. 2023-10-05 18:37:33 -04:00
Aaron Miller
9325075f80 fix stray comma in models2.json
Signed-off-by: Aaron Miller <apage43@ninjawhale.com>
2023-10-05 18:32:23 -04:00
Adam Treat
a9acdd25de Push a new version number for llmodel backend now that it is based on gguf. 2023-10-05 18:18:07 -04:00
Adam Treat
f028f67c68 Add starcoder, rift and sbert to our models2.json. 2023-10-05 18:16:19 -04:00
Aaron Miller
a10f3aea5e python/embed4all: use gguf model, allow passing kwargs/overriding model 2023-10-05 18:16:19 -04:00
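A hedged usage sketch of the Embed4All change above (the override filename in the comment is illustrative):

```python
from gpt4all import Embed4All

# the default now resolves to a gguf embedding model; kwargs are forwarded,
# so the model can be overridden, e.g. Embed4All('some-embedding-model.gguf')
embedder = Embed4All()
vector = embedder.embed("The quick brown fox")
print(len(vector))
```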
Cebtenzzre
8bb6a6c201 rebase on newer llama.cpp 2023-10-05 18:16:19 -04:00
Adam Treat
4528f73479 Reorder and refresh our models2.json. 2023-10-05 18:16:19 -04:00
Cebtenzzre
d87573ea75 remove old llama.cpp submodules 2023-10-05 18:16:19 -04:00
Cebtenzzre
cc6db61c93 backend: fix build with Visual Studio generator
Use the $<CONFIG> generator expression instead of CMAKE_BUILD_TYPE. This
is needed because Visual Studio is a multi-configuration generator, so
we do not know what the build type will be until `cmake --build` is
called.

Fixes #1470
2023-10-05 18:16:19 -04:00
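The fix described above is the standard pattern for multi-configuration generators; a minimal CMake sketch (the definition shown is illustrative, not the backend's actual code):

```cmake
# Before: CMAKE_BUILD_TYPE is fixed at configure time and is typically empty
# under multi-config generators such as Visual Studio
add_compile_definitions(BUILD_VARIANT="${CMAKE_BUILD_TYPE}")

# After: $<CONFIG> is a generator expression expanded at build time, when
# `cmake --build . --config Release` selects the actual configuration
add_compile_definitions(BUILD_VARIANT="$<CONFIG>")
```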
Adam Treat
f605a5b686 Add q8_0 kernels to kompute shaders and bump to latest llama/gguf. 2023-10-05 18:16:19 -04:00
Cebtenzzre
1534df3e9f backend: do not use Vulkan with non-LLaMA models 2023-10-05 18:16:19 -04:00
Cebtenzzre
672cb850f9 differentiate between init failure and unsupported models 2023-10-05 18:16:19 -04:00
Cebtenzzre
a5b93cf095 more accurate fallback descriptions 2023-10-05 18:16:19 -04:00
Cebtenzzre
75deee9adb chat: make sure to clear fallback reason on success 2023-10-05 18:16:19 -04:00
Cebtenzzre
2eb83b9f2a chat: report reason for fallback to CPU 2023-10-05 18:16:19 -04:00
Adam Treat
906699e8e9 Bump to latest llama/gguf branch. 2023-10-05 18:16:19 -04:00
Adam Treat
ea66669cef Switch to the new models2.json for the new gguf release and bump our version to 2.5.0.
2023-10-05 18:16:19 -04:00
Cebtenzzre
088afada49 llamamodel: fix static vector in LLamaModel::endTokens 2023-10-05 18:16:19 -04:00
Adam Treat
b4d82ea289 Bump to the latest fixes for vulkan in llama. 2023-10-05 18:16:19 -04:00