Aaron Miller
c252fd93fd
CI: apt update before apt install (#1046)
...
avoid spurious failures due to apt cache being out of date
2023-06-22 15:21:11 -07:00
cosmic-snow
a423075403
Allow Cross-Origin Resource Sharing (CORS) (#1008)
2023-06-22 09:19:49 -07:00
Martin Mauch
af28173a25
Parse Org Mode files (#1038)
2023-06-22 09:09:39 -07:00
Max Vincent Goldgamer
5a1d22804e
Fix Typos on macOS (#1018)
...
Signed-off-by: Max Vincent Goldgamer <11319871+iMonZ@users.noreply.github.com>
2023-06-22 11:48:12 -04:00
niansa/tuxifan
01acb8d250
Update download speed less often
...
To not show every little tiny network spike to the user
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-22 09:29:15 +02:00
niansa/tuxifan
5eee16c97c
Do not specify "success" as error for unsupported models
...
Signed-off-by: niansa/tuxifan <tuxifan@posteo.de>
2023-06-22 09:28:40 +02:00
Adam Treat
09ae04cee9
This needs to work even when localdocs and codeblocks are detected.
2023-06-20 19:07:02 -04:00
Adam Treat
ce7333029f
Make the copy button a little more tolerant.
2023-06-20 18:59:08 -04:00
Adam Treat
508993de75
Exit early when no chats are saved.
2023-06-20 18:30:17 -04:00
Adam Treat
bd58c46da0
Initialize these to nullptr to prevent double deletion when a model fails to load.
2023-06-20 18:23:45 -04:00
Adam Treat
85bc861835
Fix the alignment.
2023-06-20 17:40:02 -04:00
Adam Treat
eebfe642c4
Add an error message to download dialog if models.json can't be retrieved.
2023-06-20 17:31:36 -04:00
Adam Treat
968868415e
Move saving chats to a thread and display what we're doing to the user.
2023-06-20 17:18:33 -04:00
Adam Treat
c8a590bc6f
Get rid of last blocking operations and make the chat/llm thread safe.
2023-06-20 18:18:10 -03:00
Adam Treat
84ec4311e9
Remove duplicated state tracking for chatgpt.
2023-06-20 18:18:10 -03:00
Adam Treat
7d2ce06029
Start working on more thread safety and model load error handling.
2023-06-20 14:39:22 -03:00
Adam Treat
d5f56d3308
Forgot to add a signal handler.
2023-06-20 14:39:22 -03:00
Richard Guo
a39a897e34
0.3.5 bump
2023-06-20 10:21:51 -04:00
Richard Guo
25ce8c6a1e
revert version
2023-06-20 10:21:51 -04:00
Richard Guo
282a3b5498
setup.py update
2023-06-20 10:21:51 -04:00
Adam Treat
aa2c824258
Initialize these.
2023-06-19 15:38:01 -07:00
Adam Treat
d018b4c821
Make this atomic.
2023-06-19 15:38:01 -07:00
Adam Treat
a3a6a20146
Don't store db results in ChatLLM.
2023-06-19 15:38:01 -07:00
Adam Treat
0cfe225506
Remove this as unnecessary.
2023-06-19 15:38:01 -07:00
Adam Treat
7c28e79644
Fix regenerate response with references.
2023-06-19 17:52:14 -04:00
AT
f76df0deac
Typescript (#1022)
...
* Show token generation speed in gui.
* Add typescript/javascript to list of highlighted languages.
2023-06-19 16:12:37 -04:00
AT
2b6cc99a31
Show token generation speed in gui. (#1020)
2023-06-19 14:34:53 -04:00
cosmic-snow
fd419caa55
Minor models.json description corrections. (#1013)
...
Signed-off-by: cosmic-snow <134004613+cosmic-snow@users.noreply.github.com>
2023-06-18 14:10:29 -04:00
cosmic-snow
b00ac632e3
Update python/README.md with troubleshooting info (#1012)
...
- Add some notes about common Windows problems when trying to make a local build (MinGW and MSVC).
Signed-off-by: cosmic-snow <134004613+cosmic-snow@users.noreply.github.com>
2023-06-18 14:08:43 -04:00
standby24x7
cdea838671
Fix spelling typo in gpt4all.py (#1007)
...
Signed-off-by: Masanari Iida <standby24x7@gmail.com>
2023-06-18 14:07:46 -04:00
Adam Treat
42e8049564
Bump version and new release notes for metal bugfix edition.
2023-06-16 17:43:10 -04:00
Adam Treat
e2c807d4df
Always install metal on apple.
2023-06-16 17:24:20 -04:00
Adam Treat
d5179ac0c0
Fix cmake build.
2023-06-16 17:18:17 -04:00
Adam Treat
d4283c0053
Fix metal and replit.
2023-06-16 17:13:49 -04:00
cosmic-snow
b66d0b4fff
Fix CLI app.py (#910)
...
- the bindings API changed in 057b9, but the CLI was not updated
- change 'std_passthrough' param to the renamed 'streaming'
- remove '_cli_override_response_callback' as it breaks and is no longer needed
- bump version to 0.3.4
2023-06-16 16:06:22 -04:00
niansa/tuxifan
68f9786ed9
Use operator ""_MiB (#991)
2023-06-16 15:56:22 -04:00
Adam Treat
0a0d4a714e
New release and bump the version.
2023-06-16 15:20:23 -04:00
Adam Treat
782e1e77a4
Fix up model names that don't begin with 'ggml-'
2023-06-16 14:43:14 -04:00
Adam Treat
b39a7d4fd9
Fix json.
2023-06-16 14:21:20 -04:00
Adam Treat
6690b49a9f
Converts the following to Q4_0
...
* Snoozy
* Nous Hermes
* Wizard 13b uncensored
Uses the filenames from the actual downloads for these three.
2023-06-16 14:12:56 -04:00
AT
a576220b18
Support loading files if 'ggml' is found anywhere in the name, not just at the beginning (#1001)
...
Also add a deprecated flag to models.json so older versions will
show a model, but later versions won't. This will allow us to transition
away from models < ggmlv2 and still allow older installs of gpt4all to work.
2023-06-16 11:09:33 -04:00
Aaron Miller
abc081e48d
fix llama.cpp k-quants (#988)
...
* enable k-quants on *all* mainline builds
2023-06-15 14:06:14 -07:00
Ettore Di Giacinto
b004c53a7b
Allow setting SetLibrarySearchPath in the golang bindings (#981)
...
This is used to identify the path where all the various implementations are.
2023-06-14 16:27:19 +02:00
Adam Treat
8953b7f6a6
Fix regression in checked state of db and network.
2023-06-13 20:08:46 -04:00
Aaron Miller
c4319d2c8e
dlhandle: prevent libs from using each other's symbols (#977)
...
Use RTLD_LOCAL so that symbols are *only* exposed via dlsym.
Without this, all symbols exported by the libs are available for symbol
resolution, resulting in different lib versions potentially resolving
*each other's* symbols, causing incredibly cursed behavior such as
https://gist.github.com/apage43/085c1ff69f6dd05387793ebc301840f6
2023-06-13 14:52:11 -04:00
Aaron Miller
f71d8efc71
metal replit (#931)
...
Makes replit work with Metal and removes its use of `mem_per_token`
in favor of fixed-size scratch buffers (closer to llama.cpp).
2023-06-13 07:29:14 -07:00
Richard Guo
a9b33c3d10
update setup.py
2023-06-13 09:07:08 -04:00
Richard Guo
a99cc34efb
fix prompt context so it's preserved in class
2023-06-13 09:07:08 -04:00
Aaron Miller
85964a7635
bump llama.cpp mainline to latest (#964)
2023-06-13 08:40:38 -04:00
Tim Miller
797891c995
Initial Library Loader for .NET Bindings / Update bindings to support newest changes (#763)
...
* Initial Library Loader
* Load library as part of Model factory
* Dynamically search and find the dlls
* Update tests to use locally built runtimes
* Fix dylib loading, add macos runtime support for sample/tests
* Bypass automatic loading by default.
* Only set CMAKE_OSX_ARCHITECTURES if not already set, allow cross-compile
* Switch Loading again
* Update build scripts for mac/linux
* Update bindings to support newest breaking changes
* Fix build
* Use llmodel for Windows
* Actually, it does need to be libllmodel
* Name
* Remove TFMs, bypass loading by default
* Fix script
* Delete mac script
---------
Co-authored-by: Tim Miller <innerlogic4321@ghmail.com>
2023-06-13 14:05:34 +02:00