Adam Treat
e986f18904
Add C++/C highlighting support.
2023-06-12 05:08:18 -07:00
Adam Treat
ae46234261
Spelling error.
2023-06-11 14:20:05 -07:00
Adam Treat
318c51c141
Add code blocks and python syntax highlighting.
2023-06-11 14:20:05 -07:00
Adam Treat
b67cba19f0
Don't interfere with selection.
2023-06-11 14:20:05 -07:00
Adam Treat
50c5b82e57
Clean up the context links a bit.
2023-06-11 14:20:05 -07:00
AT
a9c2f47303
Add new solution for context links that does not force regular markdown ( #938 )
...
in responses, which is disruptive to code completions.
2023-06-10 10:15:38 -04:00
Aaron Miller
d3ba1295a7
Metal+LLama take two ( #929 )
...
Support latest llama with Metal
---------
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-09 16:48:46 -04:00
Adam Treat
b162b5c64e
Revert "llama on Metal ( #885 )"
...
This reverts commit c55f81b860.
2023-06-09 15:08:46 -04:00
Aaron Miller
c55f81b860
llama on Metal ( #885 )
...
Support latest llama with Metal
---------
Co-authored-by: Adam Treat <adam@nomic.ai>
Co-authored-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-09 14:58:12 -04:00
niansa/tuxifan
14e9ccbc6a
Do auto detection by default in C++ API
...
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-09 17:01:19 +02:00
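For context on the commit above, the general idea behind auto detection is to peek at a model file's header instead of requiring the caller to name the backend. Below is a minimal, hypothetical Python sketch of that pattern, not the repo's actual C++ code: the magic values are the public ggml/ggmf/ggjt container magics used by llama.cpp at the time, the function name is made up, and the real detection logic checks more than this (e.g. model hyperparameters).

```python
import struct

# Known ggml container magics (little-endian uint32), as defined in llama.cpp at the time.
MAGIC_GGML = 0x67676D6C  # 'ggml' - unversioned
MAGIC_GGMF = 0x67676D66  # 'ggmf' - versioned
MAGIC_GGJT = 0x67676A74  # 'ggjt' - mmap-able

def detect_container(path: str) -> str:
    """Toy illustration only: read the first 4 bytes and map them to a container type."""
    with open(path, "rb") as f:
        header = f.read(4)
    if len(header) < 4:
        return "unknown"
    (magic,) = struct.unpack("<I", header)
    return {
        MAGIC_GGML: "ggml (unversioned)",
        MAGIC_GGMF: "ggmf (versioned)",
        MAGIC_GGJT: "ggjt (mmap-able)",
    }.get(magic, "unknown")
```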
Richard Guo
e0a8480c0e
Generator in Python Bindings - streaming yields tokens one at a time ( #895 )
...
* generator method
* cleanup
* bump version number for clarity
* added replace in decode to avoid UnicodeDecodeError
* revert back to _build_prompt
2023-06-09 10:17:44 -04:00
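A minimal sketch of the streaming pattern the PR above describes, not the bindings' actual code: a callback-driven backend is wrapped in a Python generator so callers can iterate over tokens as they arrive, and each chunk is decoded with errors="replace" so a split multi-byte character cannot raise UnicodeDecodeError. The `model.prompt(text, callback)` API here is assumed for illustration.

```python
from queue import Queue
from threading import Thread

def generate_stream(model, prompt: str):
    """Yield decoded tokens one at a time from a callback-style backend (hypothetical API)."""
    q: Queue = Queue()
    done = object()  # sentinel marking end of generation

    def on_token(token_bytes: bytes) -> bool:
        q.put(token_bytes)
        return True  # returning True tells the backend to keep generating

    def run():
        model.prompt(prompt, on_token)  # assumed blocking call that invokes on_token per token
        q.put(done)

    Thread(target=run, daemon=True).start()
    while (item := q.get()) is not done:
        yield item.decode("utf-8", errors="replace")
```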
niansa/tuxifan
f03da8d732
Removed double-static from variables in replit.cpp
...
The anonymous namespace already makes it static.
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-09 08:55:15 -04:00
pingpongching
0d0fae0ca8
Change the default values for generation in GUI
2023-06-09 08:51:09 -04:00
Adam Treat
8fb73c2114
Forgot to bump.
2023-06-09 08:45:31 -04:00
Richard Guo
be2310322f
update models json with replit model
2023-06-09 08:44:46 -04:00
Adam Treat
f2387d6f77
Always sync for CircleCI.
2023-06-09 08:42:49 -04:00
Claudius Ellsel
3c1b59f5c6
Move usage in Python bindings readme to own section ( #907 )
...
Give the short usage example its own section, as it is not specific to a local build
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
2023-06-09 10:13:35 +02:00
niansa
0cb2b86730
Synced llama.cpp.cmake with upstream
2023-06-08 18:21:32 -04:00
Adam Treat
343a6a308f
CircleCI builds for Linux, Windows, and macOS for gpt4all-chat.
2023-06-08 18:02:44 -04:00
Aaron Miller
47fbc0e309
non-llama: explicitly greedy sampling for temp<=0 ( #901 )
...
copied directly from llama.cpp - without this, temp=0.0 will just
scale all the logits to infinity and give bad output
2023-06-08 11:08:30 -07:00
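To make the reasoning in the commit above concrete, here is a small Python sketch (not the project's C++ sampling code) of why naive temperature scaling breaks down at temp=0 and what a greedy fallback looks like; the logit values are made up.

```python
import numpy as np

logits = np.array([1.2, -0.3, 0.7, 2.5], dtype=np.float32)
temp = 0.0

# Naive temperature scaling divides by temp; at temp == 0 every logit becomes +/-inf
# and the softmax that follows degenerates into garbage:
#   scaled = logits / temp   # -> [inf, -inf, inf, inf]

if temp <= 0.0:
    # Greedy fallback: just take the highest-scoring token.
    next_token = int(np.argmax(logits))
else:
    scaled = logits / temp
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    next_token = int(np.random.choice(len(logits), p=probs))

print(next_token)  # 3, the index of the largest logit
```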
Aaron Miller
b14953e136
sampling: remove incorrect offset for n_vocab ( #900 )
...
no effect, but avoids a *potential* bug later if we use
actualVocabSize - which is for when a model has a larger
embedding tensor/# of output logits than actually trained tokens
to allow room for adding extras in finetuning - presently all of our
models have had "placeholder" tokens in the vocab so this hasn't broken
anything, but if the sizes did differ we want the equivalent of
`logits[:actualVocabSize]` (the start point is unchanged), not
`logits[-actualVocabSize:]` (this).
2023-06-08 11:08:10 -07:00
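The slicing distinction described above, spelled out as a small Python example; the array contents and sizes are illustrative only, not taken from any real model.

```python
import numpy as np

n_logits = 32            # illustrative: output layer padded beyond the trained vocab
actual_vocab_size = 30   # illustrative: tokens that were actually trained

logits = np.arange(n_logits, dtype=np.float32)

# With the old offset, sampling would effectively see the *last* actual_vocab_size
# entries, shifting every token id if the sizes ever differed:
shifted = logits[-actual_vocab_size:]

# What the commit wants instead: keep the start of the buffer unchanged and only
# ignore the padded tail.
trimmed = logits[:actual_vocab_size]

assert trimmed[0] == logits[0] and shifted[0] != logits[0]
```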
Andriy Mulyar
eb26293205
Update CollectionsDialog.qml ( #856 )
...
Phrasing for localdocs
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-08 13:44:17 -04:00
Claudius Ellsel
39a7c35d03
Update README.md ( #906 )
...
Add PyPI link and add clickable, more specific link to documentation
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
2023-06-08 13:43:31 -04:00
Adam Treat
010a04d96f
Revert "Synced llama.cpp.cmake with upstream ( #887 )"
...
This reverts commit 89910c7ca8.
2023-06-08 07:23:41 -04:00
Adam Treat
7e304106cc
Fix for windows.
2023-06-07 12:58:51 -04:00
niansa/tuxifan
89910c7ca8
Synced llama.cpp.cmake with upstream ( #887 )
2023-06-07 09:18:22 -07:00
Richard Guo
c4706d0c14
Replit Model ( #713 )
...
* porting over replit code model to gpt4all
* replaced memory with kv_self struct
* continuing debug
* welp it built but a lot of sus things
* working model loading and somewhat working generate.. need to format response?
* revert back to semi working version
* finally got rid of weird formatting
* figured out problem is with python bindings - this is good to go for testing
* addressing PR feedback
* output refactor
* fixed prompt response collection
* cleanup
* addressing PR comments
* building replit backend with new ggmlver code
* chatllm replit and clean python files
* cleanup
* updated replit to match new llmodel api
* match llmodel api and change size_t to Token
* resolve PR comments
* replit model commit comment
2023-06-06 17:09:00 -04:00
Andriy Mulyar
ef35eb496f
Supports downloading officially supported models not hosted on gpt4all R2
2023-06-06 16:21:02 -04:00
Andriy Mulyar
266f13aee9
Update gpt4all_faq.md ( #861 )
...
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-05 15:41:30 -04:00
Ettore Di Giacinto
44dc1ade62
Set thread counts after loading model ( #836 )
2023-06-05 21:35:40 +02:00
Adam Treat
fdffad9efe
New release notes
2023-06-05 14:55:59 -04:00
Adam Treat
f5bdf7c94c
Bump the version.
2023-06-05 14:32:00 -04:00
Adam Treat
c5de9634c9
Fix llama models on linux and windows.
2023-06-05 14:31:15 -04:00
Andriy Mulyar
d8e821134e
Revert "Fix bug with resetting context with chatgpt model." ( #859 )
...
This reverts commit 031d7149a7.
2023-06-05 14:25:37 -04:00
Adam Treat
ecfeba2710
Revert "Speculative fix for windows llama models with installer."
...
This reverts commit c99e03e22e.
2023-06-05 14:25:01 -04:00
Adam Treat
c99e03e22e
Speculative fix for windows llama models with installer.
2023-06-05 13:21:08 -04:00
Andriy Mulyar
01071efc9c
Documentation for model sideloading ( #851 )
...
* Documentation for model sideloading
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update gpt4all_chat.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-05 12:35:02 -04:00
AT
ec8618628c
Update README.md ( #854 )
...
Signed-off-by: AT <manyoso@users.noreply.github.com>
2023-06-05 12:25:51 -04:00
AT
da757734ea
Release notes for version 2.4.5 ( #853 )
2023-06-05 12:10:17 -04:00
Richard Guo
f5f9f28f74
updated pypi version
2023-06-05 12:02:25 -04:00
Adam Treat
8a9ad258f4
Fix symbol resolution on windows.
2023-06-05 11:19:02 -04:00
Adam Treat
969ff0ee6b
Fix installers for windows and linux.
2023-06-05 10:50:16 -04:00
Adam Treat
1d4c8e7091
These need to be installed so that they are packaged and work for both mac and windows.
2023-06-05 09:57:00 -04:00
Adam Treat
3a9cc329b1
Fix compile on mac.
2023-06-05 09:31:57 -04:00
Adam Treat
25eec33bda
Try and fix mac.
2023-06-05 09:30:50 -04:00
Adam Treat
91f20becef
Need this so the linux installer packages it as a dependency.
2023-06-05 09:23:43 -04:00
Adam Treat
812b2f4b29
Make installers work with mac/windows for big backend change.
2023-06-05 09:23:17 -04:00
Andriy Mulyar
2e5b114364
Update models.json
...
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-04 20:48:45 -04:00
Andriy Mulyar
0db6fd6867
Update models.json ( #838 )
...
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-04 20:36:12 -04:00
AT
d5cf584f8d
Remove older models that are not as popular. ( #837 )
...
* Remove older models that are not as popular.
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-04 20:26:43 -04:00