Richard Guo
e0a8480c0e
Generator in Python Bindings - streaming yields tokens one at a time ( #895 )
...
* generator method
* cleanup
* bump version number for clarity
* added errors="replace" in decode to avoid a UnicodeDecodeError
* revert back to _build_prompt
2023-06-09 10:17:44 -04:00
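A minimal sketch of the decode fix mentioned above (the function name and the shape of the token stream are illustrative, not the actual gpt4all binding API): when tokens are decoded one at a time, a token's bytes can split a multi-byte UTF-8 character, and `errors="replace"` substitutes U+FFFD instead of raising `UnicodeDecodeError`:

```python
def stream_decode(token_bytes_iter):
    """Yield one decoded string per token chunk.

    errors="replace" substitutes U+FFFD for byte sequences that split a
    multi-byte UTF-8 character, instead of raising UnicodeDecodeError.
    """
    for chunk in token_bytes_iter:
        yield chunk.decode("utf-8", errors="replace")

# b"\xe2\x82" is an incomplete UTF-8 sequence (the start of "€", b"\xe2\x82\xac")
pieces = list(stream_decode([b"hello ", b"\xe2\x82", b"\xac"]))
```

With strict decoding the second chunk would raise; here it becomes a replacement character, so streaming keeps going.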
niansa/tuxifan
f03da8d732
Removed double-static from variables in replit.cpp
...
The anonymous namespace already makes it static.
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-09 08:55:15 -04:00
pingpongching
0d0fae0ca8
Change the default values for generation in GUI
2023-06-09 08:51:09 -04:00
Adam Treat
8fb73c2114
Forgot to bump.
2023-06-09 08:45:31 -04:00
Richard Guo
be2310322f
update models json with replit model
2023-06-09 08:44:46 -04:00
Adam Treat
f2387d6f77
Always sync for circleci.
2023-06-09 08:42:49 -04:00
Claudius Ellsel
3c1b59f5c6
Move usage in Python bindings readme to own section ( #907 )
...
Have own section for short usage example, as it is not specific to local build
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
2023-06-09 10:13:35 +02:00
niansa
0cb2b86730
Synced llama.cpp.cmake with upstream
2023-06-08 18:21:32 -04:00
Adam Treat
343a6a308f
Circleci builds for Linux, Windows, and macOS for gpt4all-chat.
2023-06-08 18:02:44 -04:00
Aaron Miller
47fbc0e309
non-llama: explicitly greedy sampling for temp<=0 ( #901 )
...
copied directly from llama.cpp - without this temp=0.0 will just
scale all the logits to infinity and give bad output
2023-06-08 11:08:30 -07:00
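A rough Python sketch of the special case described above (illustrative only, not the backend's actual C++ implementation): scaling logits by `1/temp` as `temp` approaches zero pushes them toward ±infinity, so `temp <= 0` is handled as a plain argmax instead:

```python
import math
import random

def sample_token(logits, temp):
    """Pick a token index. temp <= 0 means deterministic greedy sampling."""
    if temp <= 0:
        # Greedy: take the single highest logit. Dividing by a temperature
        # at or near zero would blow every logit up toward +/- infinity.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Standard temperature sampling: softmax over scaled logits.
    scaled = [l / temp for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]
```

With `temp=0.0`, `sample_token([1.0, 3.0, 2.0], 0.0)` always returns index 1, the argmax.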
Aaron Miller
b14953e136
sampling: remove incorrect offset for n_vocab ( #900 )
...
No effect now, but avoids a *potential* bug later if we use
actualVocabSize - which is for when a model has a larger
embedding tensor / number of output logits than actually trained tokens,
to allow room for adding extras in finetuning. Presently all of our
models have had "placeholder" tokens in the vocab, so this hasn't broken
anything; but if the sizes did differ, we want the equivalent of
`logits[:actualVocabSize]` (the start point is unchanged), not
`logits[-actualVocabSize:]` (which this was).
2023-06-08 11:08:10 -07:00
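In Python slice terms (illustrative only; the backend does this with C++ pointer arithmetic), the difference between the two choices for a logit buffer padded at the end with untrained placeholders might look like:

```python
# Hypothetical model: the output layer produces 5 logits, but only the
# first 3 tokens were actually trained; the last 2 are finetuning padding.
logits = [0.1, 0.9, 0.4, -9.0, -9.0]
actual_vocab_size = 3

trained = logits[:actual_vocab_size]    # first 3 logits: the trained tokens
from_end = logits[-actual_vocab_size:]  # last 3 logits: includes the padding
```

Only when the two sizes differ do the slices disagree, which is why the incorrect offset had no visible effect on the shipped models.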
Andriy Mulyar
eb26293205
Update CollectionsDialog.qml ( #856 )
...
Phrasing for localdocs
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-08 13:44:17 -04:00
Claudius Ellsel
39a7c35d03
Update README.md ( #906 )
...
Add PyPI link and add clickable, more specific link to documentation
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
2023-06-08 13:43:31 -04:00
Adam Treat
010a04d96f
Revert "Synced llama.cpp.cmake with upstream ( #887 )"
...
This reverts commit 89910c7ca8.
2023-06-08 07:23:41 -04:00
Adam Treat
7e304106cc
Fix for windows.
2023-06-07 12:58:51 -04:00
niansa/tuxifan
89910c7ca8
Synced llama.cpp.cmake with upstream ( #887 )
2023-06-07 09:18:22 -07:00
Richard Guo
c4706d0c14
Replit Model ( #713 )
...
* porting over replit code model to gpt4all
* replaced memory with kv_self struct
* continuing debug
* welp it built but lot of sus things
* working model loading and somewhat working generate.. need to format response?
* revert back to semi working version
* finally got rid of weird formatting
* figured out problem is with python bindings - this is good to go for testing
* addressing PR feedback
* output refactor
* fixed prompt response collection
* cleanup
* addressing PR comments
* building replit backend with new ggmlver code
* chatllm replit and clean python files
* cleanup
* updated replit to match new llmodel api
* match llmodel api and change size_t to Token
* resolve PR comments
* replit model commit comment
2023-06-06 17:09:00 -04:00
Andriy Mulyar
ef35eb496f
Supports downloading officially supported models not hosted on gpt4all R2
2023-06-06 16:21:02 -04:00
Andriy Mulyar
266f13aee9
Update gpt4all_faq.md ( #861 )
...
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-05 15:41:30 -04:00
Ettore Di Giacinto
44dc1ade62
Set thread counts after loading model ( #836 )
2023-06-05 21:35:40 +02:00
Adam Treat
fdffad9efe
New release notes
2023-06-05 14:55:59 -04:00
Adam Treat
f5bdf7c94c
Bump the version.
2023-06-05 14:32:00 -04:00
Adam Treat
c5de9634c9
Fix llama models on linux and windows.
2023-06-05 14:31:15 -04:00
Andriy Mulyar
d8e821134e
Revert "Fix bug with resetting context with chatgpt model." ( #859 )
...
This reverts commit 031d7149a7.
2023-06-05 14:25:37 -04:00
Adam Treat
ecfeba2710
Revert "Speculative fix for windows llama models with installer."
...
This reverts commit c99e03e22e.
2023-06-05 14:25:01 -04:00
Adam Treat
c99e03e22e
Speculative fix for windows llama models with installer.
2023-06-05 13:21:08 -04:00
Andriy Mulyar
01071efc9c
Documentation for model sideloading ( #851 )
...
* Documentation for model sideloading
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update gpt4all_chat.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-05 12:35:02 -04:00
AT
ec8618628c
Update README.md ( #854 )
...
Signed-off-by: AT <manyoso@users.noreply.github.com>
2023-06-05 12:25:51 -04:00
AT
da757734ea
Release notes for version 2.4.5 ( #853 )
2023-06-05 12:10:17 -04:00
Richard Guo
f5f9f28f74
updated pypi version
2023-06-05 12:02:25 -04:00
Adam Treat
8a9ad258f4
Fix symbol resolution on windows.
2023-06-05 11:19:02 -04:00
Adam Treat
969ff0ee6b
Fix installers for windows and linux.
2023-06-05 10:50:16 -04:00
Adam Treat
1d4c8e7091
These need to be installed for them to be packaged and work for both mac and windows.
2023-06-05 09:57:00 -04:00
Adam Treat
3a9cc329b1
Fix compile on mac.
2023-06-05 09:31:57 -04:00
Adam Treat
25eec33bda
Try and fix mac.
2023-06-05 09:30:50 -04:00
Adam Treat
91f20becef
Need this so the linux installer packages it as a dependency.
2023-06-05 09:23:43 -04:00
Adam Treat
812b2f4b29
Make installers work with mac/windows for big backend change.
2023-06-05 09:23:17 -04:00
Andriy Mulyar
2e5b114364
Update models.json
...
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-04 20:48:45 -04:00
Andriy Mulyar
0db6fd6867
Update models.json ( #838 )
...
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-04 20:36:12 -04:00
AT
d5cf584f8d
Remove older models that are not as popular. ( #837 )
...
* Remove older models that are not as popular.
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-04 20:26:43 -04:00
Adam Treat
f73333c6a1
Update to latest llama.cpp
2023-06-04 19:57:34 -04:00
Adam Treat
301d2fdbea
Fix up for newer models on reset context. This prevents the model from failing entirely after a context reset.
2023-06-04 19:31:20 -04:00
Adam Treat
bdba2e8de6
Allow for download of models hosted on third party hosts.
2023-06-04 19:02:43 -04:00
Adam Treat
5073630759
Try again with the url.
2023-06-04 18:39:36 -04:00
Adam Treat
6ba37f47c1
Trying out a new feature to download directly from huggingface.
2023-06-04 18:34:04 -04:00
AT
be3c63ffcd
Update build_and_run.md ( #834 )
...
Signed-off-by: AT <manyoso@users.noreply.github.com>
2023-06-04 15:39:32 -04:00
AT
5f95aa9fc6
We no longer have an avx_only repository, and there is better error handling for minimum hardware requirements. ( #833 )
2023-06-04 15:28:58 -04:00
Adam Treat
9f590db98d
Better error handling when the model fails to load.
2023-06-04 14:55:05 -04:00
AT
bbe195ee02
Backend prompt dedup ( #822 )
...
* Deduplicated prompt() function code
2023-06-04 08:59:24 -04:00
Ikko Eltociear Ashimine
945297d837
Update README.md
...
huggingface -> Hugging Face
Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
2023-06-04 08:46:37 -04:00