Lakshay Kansal
70cbff70cc
Created highlighting rules for Java using regex for the gpt4all chat interface
2023-06-29 13:11:37 -03:00
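For context, regex-based highlighting in Qt typically hangs off QSyntaxHighlighter. A minimal sketch in that spirit (the class name, keyword list, and format are illustrative, not the exact rules from this commit):

    #include <QFont>
    #include <QRegularExpression>
    #include <QSyntaxHighlighter>
    #include <QTextCharFormat>

    // Hypothetical highlighter: bold every Java keyword matched by a regex.
    class JavaHighlighter : public QSyntaxHighlighter {
    public:
        explicit JavaHighlighter(QTextDocument *doc) : QSyntaxHighlighter(doc) {
            m_keywordFormat.setFontWeight(QFont::Bold);
            m_keywordRegex = QRegularExpression(
                QStringLiteral("\\b(class|interface|public|private|static|final|void|new|return)\\b"));
        }
    protected:
        void highlightBlock(const QString &text) override {
            auto it = m_keywordRegex.globalMatch(text);
            while (it.hasNext()) {
                auto match = it.next();
                setFormat(match.capturedStart(), match.capturedLength(), m_keywordFormat);
            }
        }
    private:
        QRegularExpression m_keywordRegex;
        QTextCharFormat m_keywordFormat;
    };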
Adam Treat
1cd734efdc
Provide an abstraction to break up the settings dialog into manageable pieces.
2023-06-29 09:59:54 -04:00
Adam Treat
7f252b4970
This completes the work of consolidating all settings that can be changed by the user onto the new settings object.
2023-06-29 00:44:48 -03:00
Adam Treat
285aa50b60
Consolidate generation and application settings on the new settings object.
2023-06-28 20:36:43 -03:00
Adam Treat
7f66c28649
Use the new settings for response generation.
2023-06-28 20:11:24 -03:00
Adam Treat
a8baa4da52
The sync for save should be after.
2023-06-28 20:11:24 -03:00
Adam Treat
705b480d72
Start moving toward a single authoritative class for all settings.
...
This is necessary to get rid of technical debt before we drastically increase the complexity of settings by adding per-model settings, mirostat, and other fun things. Right now the settings are divided between QML and C++, with some convenience methods for settings sync and so on living in other singletons. This change consolidates all the logic for settings into a single class with a single API for both C++ and QML.
2023-06-28 20:11:24 -03:00
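To illustrate the direction described above, here is a minimal sketch of a single settings object with one API for both C++ and QML (assuming Qt/QSettings; the class and property names are placeholders, not the repository's actual code):

    #include <QObject>
    #include <QSettings>

    class ChatSettings : public QObject {
        Q_OBJECT
        // A Q_PROPERTY is readable/writable from QML and C++ alike.
        Q_PROPERTY(double temperature READ temperature WRITE setTemperature NOTIFY temperatureChanged)
    public:
        static ChatSettings *instance() {
            static ChatSettings inst;
            return &inst;
        }
        double temperature() const {
            return m_settings.value("generation/temperature", 0.7).toDouble();
        }
        void setTemperature(double t) {
            if (qFuzzyCompare(temperature(), t))
                return;
            m_settings.setValue("generation/temperature", t);
            m_settings.sync(); // write through to disk right away
            emit temperatureChanged();
        }
    signals:
        void temperatureChanged();
    private:
        explicit ChatSettings(QObject *parent = nullptr) : QObject(parent) {}
        QSettings m_settings;
    };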
Adam Treat
e70899a26c
Make the retrieval/parsing of models.json sync on startup. We were jumping through too many hoops to mitigate the async behavior.
2023-06-28 12:32:22 -03:00
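One common way to make a network fetch synchronous at startup in Qt is to block on a local event loop. A rough sketch under that assumption (the function name, URL handling, and error handling are placeholders, not the app's actual code):

    #include <QEventLoop>
    #include <QNetworkAccessManager>
    #include <QNetworkReply>
    #include <QNetworkRequest>
    #include <QUrl>

    // Hypothetical helper: download models.json and return its bytes, blocking until done.
    QByteArray fetchModelsJsonBlocking(const QUrl &url) {
        QNetworkAccessManager manager;
        QNetworkReply *reply = manager.get(QNetworkRequest(url));
        QEventLoop loop; // spin a local event loop so this call only returns when the reply finishes
        QObject::connect(reply, &QNetworkReply::finished, &loop, &QEventLoop::quit);
        loop.exec();
        const QByteArray data = (reply->error() == QNetworkReply::NoError) ? reply->readAll() : QByteArray();
        reply->deleteLater();
        return data;
    }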
Adam Treat
9560336490
Match on the filename too for server mode.
2023-06-28 09:20:05 -04:00
Adam Treat
58cd346686
Bump release again and new release notes.
2023-06-27 18:01:23 -04:00
Adam Treat
0f8f364d76
Fix mac again for falcon.
2023-06-27 17:20:40 -04:00
Adam Treat
8aae4e52b3
Fix for falcon on mac.
2023-06-27 17:13:13 -04:00
Adam Treat
9375c71aa7
New release notes for 2.4.9 and bump version.
2023-06-27 17:01:49 -04:00
Adam Treat
71449bbc4b
Fix this correctly?
2023-06-27 16:01:11 -04:00
Adam Treat
07a5405618
Make it clear this is our finetune.
2023-06-27 15:33:38 -04:00
Adam Treat
189ac82277
Fix server mode.
2023-06-27 15:01:16 -04:00
Adam Treat
b56cc61ca2
Don't allow setting an invalid prompt template.
2023-06-27 14:52:44 -04:00
Adam Treat
0780393d00
Don't use local.
2023-06-27 14:13:42 -04:00
Adam Treat
924efd9e25
Add falcon to our models.json
2023-06-27 13:56:16 -04:00
Adam Treat
d3b8234106
Fix spelling.
2023-06-27 14:23:56 -03:00
Adam Treat
42c0a6673a
Don't persist the force metal setting.
2023-06-27 14:23:56 -03:00
Adam Treat
267601d670
Enable the force metal setting.
2023-06-27 14:23:56 -03:00
Aaron Miller
e22dd164d8
add falcon to chatllm::serialize
2023-06-27 14:06:39 -03:00
Aaron Miller
198b5e4832
add Falcon 7B model
...
Tested with https://huggingface.co/TheBloke/falcon-7b-instruct-GGML/blob/main/falcon7b-instruct.ggmlv3.q4_0.bin
2023-06-27 14:06:39 -03:00
Adam Treat
985d3bbfa4
Add Orca models to list.
2023-06-27 09:38:43 -04:00
Adam Treat
8558fb4297
Fix models.json for strings spanning multiple lines.
2023-06-26 21:35:56 -04:00
Adam Treat
c24ad02a6a
Wait just a bit to set the model name so that we can display the proper name instead of the filename.
2023-06-26 21:00:09 -04:00
Adam Treat
57fa8644d6
Make spelling check happy.
2023-06-26 17:56:56 -04:00
Adam Treat
d0a3e82ffc
Restore a feature I accidentally erased in the modellist update.
2023-06-26 17:50:45 -04:00
Aaron Miller
b19a3e5b2c
add requiredMem method to llmodel impls
...
Most of these can just shortcut out of the model loading logic. Llama is a bit worse to deal with because we submodule it, so I have to at least parse the hparams, and then I just use the size on disk as an estimate for the mem size (which seems reasonable since we mmap() the llama files anyway).
2023-06-26 18:27:58 -03:00
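The disk-size fallback described above could look roughly like this (a sketch assuming C++17 std::filesystem; the function name and signature are illustrative, not the actual llmodel interface):

    #include <cstddef>
    #include <filesystem>
    #include <string>

    // For mmap()'d model files, the on-disk size is a reasonable proxy for resident memory.
    size_t requiredMemEstimate(const std::string &modelPath) {
        std::error_code ec;
        const auto bytes = std::filesystem::file_size(modelPath, ec);
        return ec ? 0 : static_cast<size_t>(bytes);
    }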
Adam Treat
dead954134
Fix save chats setting.
2023-06-26 16:43:37 -04:00
Adam Treat
26c9193227
Sigh. Windows.
2023-06-26 16:34:35 -04:00
Adam Treat
5deec2afe1
Change this back now that it is ready.
2023-06-26 16:21:09 -04:00
Adam Treat
676248fe8f
Update the language.
2023-06-26 14:14:49 -04:00
Adam Treat
ef92492d8c
Add better warnings and links.
2023-06-26 14:14:49 -04:00
Adam Treat
71c972f8fa
Provide a more stark warning for localdocs and add more size to dialogs.
2023-06-26 14:14:49 -04:00
Adam Treat
1b5aa4617f
Always enable the add button, but show an error in the placeholder text.
2023-06-26 14:14:49 -04:00
Adam Treat
a0f80453e5
Use sysinfo in backend.
2023-06-26 14:14:49 -04:00
Adam Treat
5e520bb775
Fix so that models are searched in subdirectories.
2023-06-26 14:14:49 -04:00
Adam Treat
64e98b8ea9
Fix bug with model loading on initial load.
2023-06-26 14:14:49 -04:00
Adam Treat
3ca9e8692c
Don't try to load incomplete files.
2023-06-26 14:14:49 -04:00
Adam Treat
27f25d5878
Get rid of recursive mutex.
2023-06-26 14:14:49 -04:00
Adam Treat
7f01b153b3
Modellist temp
2023-06-26 14:14:46 -04:00
Adam Treat
c1794597a7
Revert "Enable Wayland in build"
...
This reverts commit d686a583f9.
2023-06-26 14:10:27 -04:00
Akarshan Biswas
d686a583f9
Enable Wayland in build
...
The patch includes support for running natively on a Linux Wayland display server/compositor, which is the successor to the old Xorg.
The CMakeLists was missing WaylandClient, so it was added back.
Will fix #1047.
Signed-off-by: Akarshan Biswas <akarshan.biswas@gmail.com>
2023-06-26 14:58:23 -03:00
AMOGUS
3417a37c54
Change "web server" to "API server" for less confusion ( #1039 )
...
* Change "Web server" to "API server"
* Changed "API server" to "OpenAPI server"
* Reverted back to "API server" and updated the tooltip
2023-06-23 16:28:52 -04:00
cosmic-snow
a423075403
Allow Cross-Origin Resource Sharing (CORS) (#1008)
2023-06-22 09:19:49 -07:00
Martin Mauch
af28173a25
Parse Org Mode files (#1038)
2023-06-22 09:09:39 -07:00
niansa/tuxifan
01acb8d250
Update download speed less often
...
To avoid showing every tiny network spike to the user
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-22 09:29:15 +02:00
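Updating the displayed speed less often usually just means rate-limiting the UI refresh. A minimal sketch of that idea (the 500 ms interval and class name are illustrative assumptions, not the actual implementation):

    #include <QElapsedTimer>
    #include <QString>

    class DownloadSpeedReporter {
    public:
        // Returns a formatted speed only when enough time has passed; otherwise an empty string.
        QString maybeFormatSpeed(qint64 bytesSinceLastUpdate) {
            if (!m_timer.isValid()) {
                m_timer.start();
                return {};
            }
            const qint64 elapsedMs = m_timer.elapsed();
            if (elapsedMs < 500) // skip short intervals so brief network spikes aren't shown
                return {};
            const double mbPerSec = (bytesSinceLastUpdate * 1000.0 / double(elapsedMs)) / (1024.0 * 1024.0);
            m_timer.restart();
            return QString::number(mbPerSec, 'f', 2) + QStringLiteral(" MB/s");
        }
    private:
        QElapsedTimer m_timer;
    };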
Adam Treat
09ae04cee9
This needs to work even when localdocs and codeblocks are detected.
2023-06-20 19:07:02 -04:00