Commit Graph

1405 Commits

Aaron Miller
883775bc5f move 230511 submodule to nomic fork, fix alibi assert 2023-06-30 21:07:21 -03:00
Aaron Miller
6a74e515e1 bindings/python: make target to set up env 2023-06-30 21:07:21 -03:00
Aaron Miller
ac5c8e964f
bindings/python: fix typo (#1111) 2023-06-30 17:00:42 -04:00
Andriy Mulyar
46a0762bd5
Python Bindings: Improved unit tests, documentation and unification of API (#1090)
* Makefiles, black, isort

* Black and isort

* unit tests and generation method

* chat context provider

* context does not reset

* Current state

* Fixup

* Python bindings with unit tests

* GPT4All Python Bindings: chat contexts, tests

* New python bindings and backend fixes

* Black and Isort

* Documentation error

* preserved n_predict for backwards compat with langchain

---------

Co-authored-by: Adam Treat <treat.adam@gmail.com>
2023-06-30 16:02:02 -04:00
Aaron Miller
40a3faeb05
Use ggml scratch bufs for mpt and gptj models (#1104)
* backend/gptj: use scratch buffers

reduces total memory required and makes eval buf not grow with n_past

* backend/mpt: use scratch bufs

* fix format-related compile warnings
2023-06-30 10:53:45 -07:00
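For context on the commit above: a minimal sketch of the scratch-buffer pattern, assuming the ggml C API as of mid-2023 (ggml_set_scratch / struct ggml_scratch). The buffer sizes and layer structure are illustrative, not the actual gpt4all backend code.

```cpp
// Sketch: route intermediate tensors into reusable scratch buffers so the
// main eval buffer stays small and does not grow with n_past.
#include "ggml.h"
#include <cstdint>
#include <vector>

struct eval_buffers {
    std::vector<uint8_t> eval;      // persistent ggml work buffer (small)
    std::vector<uint8_t> scratch0;  // reused for attention intermediates
    std::vector<uint8_t> scratch1;  // reused for feed-forward intermediates
};

static void build_layer(struct ggml_context * ctx, eval_buffers & bufs) {
    // subsequent tensor allocations land in scratch0
    ggml_set_scratch(ctx, { 0, bufs.scratch0.size(), bufs.scratch0.data() });
    // ... build attention ops here ...

    // switch buffers so these ops don't clobber tensors still being consumed
    ggml_set_scratch(ctx, { 0, bufs.scratch1.size(), bufs.scratch1.data() });
    // ... build feed-forward ops here ...

    // back to the default buffer for results that must outlive this layer
    ggml_set_scratch(ctx, { 0, 0, nullptr });
}
```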
Lakshay Kansal
70cbff70cc created highlighting rules for java using regex for the gpt4all chat interface 2023-06-29 13:11:37 -03:00
Adam Treat
1cd734efdc Provide an abstraction to break up the settings dialog into manageable pieces. 2023-06-29 09:59:54 -04:00
Adam Treat
7f252b4970 This completes the work of consolidating all settings that can be changed by the user onto the new settings object. 2023-06-29 00:44:48 -03:00
Aaron Miller
8d19ef3909
backend: factor out common elements in model code (#1089)
* backend: factor out common structs in model code

prepping to hack on these, hopefully leaving fewer places to fix the same bug

rename

* use common buffer wrapper instead of manual malloc

* fix replit compile warnings
2023-06-28 17:35:07 -07:00
Adam Treat
285aa50b60 Consolidate generation and application settings on the new settings object. 2023-06-28 20:36:43 -03:00
Adam Treat
7f66c28649 Use the new settings for response generation. 2023-06-28 20:11:24 -03:00
Adam Treat
a8baa4da52 The sync for save should be after. 2023-06-28 20:11:24 -03:00
Adam Treat
705b480d72 Start moving toward a single authoritative class for all settings. This
is necessary to get rid of technical debt before we drastically increase
the complexity of settings by adding per-model settings, mirostat and
other fun things. Right now the settings are divided between QML and C++,
with convenience methods for settings sync and so on scattered across
other singletons. This change consolidates all the settings logic into a
single class with a single API for both C++ and QML.
2023-06-28 20:11:24 -03:00
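A minimal sketch of the pattern described in the commit above, assuming Qt/QML as used by gpt4all-chat; the class name, property and settings keys are hypothetical, not the actual implementation.

```cpp
#include <QObject>
#include <QSettings>

// Hypothetical single authoritative settings object: one API for C++ and QML.
class AppSettings : public QObject {
    Q_OBJECT
    Q_PROPERTY(double temperature READ temperature WRITE setTemperature NOTIFY temperatureChanged)
public:
    static AppSettings *globalInstance() {
        static AppSettings instance;   // single source of truth
        return &instance;
    }
    double temperature() const {
        return m_settings.value("generation/temperature", 0.7).toDouble();
    }
    void setTemperature(double t) {
        if (temperature() == t)
            return;
        m_settings.setValue("generation/temperature", t);
        m_settings.sync();             // persist immediately so C++ and QML agree
        emit temperatureChanged();
    }
signals:
    void temperatureChanged();
private:
    QSettings m_settings;
    // QML can bind to the same instance, e.g. via
    // qmlRegisterSingletonInstance("appsettings", 1, 0, "AppSettings", globalInstance()).
};
```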
Andriy Mulyar
390994ea5e
Update README.md to include inference example
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-28 16:24:48 -04:00
Andriy Mulyar
a67f8132e1
Update README.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-28 14:29:15 -04:00
Andriy Mulyar
633e2a2137
GPT4All API Scaffolding. Matches OpenAI OpenAPI spec for chats and completions (#839)
* GPT4All API Scaffolding. Matches OpenAI OpenAPI spec for engines, chats and completions

* Edits for docker building

* FastAPI app builds and pydantic models are accurate

* Added groovy download into dockerfile

* improved dockerfile

* Chat completions endpoint edits

* API unit test sketch

* Working example of groovy inference with the OpenAI API

* Added lines to test

* Set default to mpt
2023-06-28 14:28:52 -04:00
Andriy Mulyar
6b8456bf99
Update README.md (#1086)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-28 12:15:05 -04:00
Adam Treat
e70899a26c Make the retrieval/parsing of models.json sync on startup. We were jumping through too many hoops to mitigate the async behavior. 2023-06-28 12:32:22 -03:00
Adam Treat
9560336490 Match on the filename too for server mode. 2023-06-28 09:20:05 -04:00
Aaron Miller
28d41d4f6d
falcon: use *model-local* eval & scratch bufs (#1079)
fixes memory leaks carried over from the ggml/examples-based implementation
2023-06-27 16:09:11 -07:00
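The fix in the commit above comes down to buffer ownership; a rough sketch of the idea (type and member names are illustrative, not the actual falcon backend code):

```cpp
// Leak-prone pattern inherited from example code: buffers shared by every
// instance and never released per model.
//   static std::vector<uint8_t> g_eval_buf;
//   static std::vector<uint8_t> g_scratch_buf;

// Model-local pattern: each loaded model owns its eval/scratch memory, so it
// is freed automatically when the model object is destroyed.
#include <cstdint>
#include <vector>

struct falcon_model_state {
    std::vector<uint8_t> eval_buf;
    std::vector<uint8_t> scratch_buf;
    // ... weights, kv cache, etc. ...
};
```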
Adam Treat
58cd346686 Bump release again and new release notes. 2023-06-27 18:01:23 -04:00
Adam Treat
0f8f364d76 Fix mac again for falcon. 2023-06-27 17:20:40 -04:00
Adam Treat
8aae4e52b3 Fix for falcon on mac. 2023-06-27 17:13:13 -04:00
Adam Treat
9375c71aa7 New release notes for 2.4.9 and bump version. 2023-06-27 17:01:49 -04:00
Adam Treat
71449bbc4b Fix this correctly? 2023-06-27 16:01:11 -04:00
Adam Treat
07a5405618 Make it clear this is our finetune. 2023-06-27 15:33:38 -04:00
Adam Treat
189ac82277 Fix server mode. 2023-06-27 15:01:16 -04:00
Adam Treat
b56cc61ca2 Don't allow setting an invalid prompt template. 2023-06-27 14:52:44 -04:00
Adam Treat
0780393d00 Don't use local. 2023-06-27 14:13:42 -04:00
Adam Treat
924efd9e25 Add falcon to our models.json 2023-06-27 13:56:16 -04:00
Adam Treat
d3b8234106 Fix spelling. 2023-06-27 14:23:56 -03:00
Adam Treat
42c0a6673a Don't persist the force metal setting. 2023-06-27 14:23:56 -03:00
Adam Treat
267601d670 Enable the force metal setting. 2023-06-27 14:23:56 -03:00
Zach Nussbaum
2565f6a94a feat: add conversion script 2023-06-27 14:06:39 -03:00
Aaron Miller
e22dd164d8 add falcon to chatllm::serialize 2023-06-27 14:06:39 -03:00
Aaron Miller
198b5e4832 add Falcon 7B model
Tested with https://huggingface.co/TheBloke/falcon-7b-instruct-GGML/blob/main/falcon7b-instruct.ggmlv3.q4_0.bin
2023-06-27 14:06:39 -03:00
AMOGUS
b8464073b8
Update gpt4all_chat.md (#1050)
* Update gpt4all_chat.md

Cleaned up and made the sideloading part more readable, also moved Replit architecture to supported ones. (+ renamed all "ggML" to "GGML" because who calls it "ggML"??)

Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>

* Removed the prefixing part

Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>

* Bump version

Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

---------

Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-27 10:49:45 -04:00
Adam Treat
985d3bbfa4 Add Orca models to list. 2023-06-27 09:38:43 -04:00
Adam Treat
8558fb4297 Fix models.json parsing for strings spanning multiple lines. 2023-06-26 21:35:56 -04:00
Adam Treat
c24ad02a6a Wait just a bit to set the model name so that we can display the proper name instead of the filename. 2023-06-26 21:00:09 -04:00
Aaron Miller
db34a2f670 llmodel: skip attempting Metal if model+kvcache > 53% of system ram 2023-06-26 19:46:49 -03:00
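A sketch of the heuristic in the commit above; the 53% threshold comes from the commit message, the macOS sysctl lookup is standard, and the function names are illustrative.

```cpp
#include <sys/sysctl.h>
#include <cstddef>
#include <cstdint>

// Total physical RAM on macOS.
static size_t system_ram_bytes() {
    uint64_t mem = 0;
    size_t len = sizeof(mem);
    sysctlbyname("hw.memsize", &mem, &len, nullptr, 0);
    return static_cast<size_t>(mem);
}

// Skip Metal when the model plus its kv cache would use more than ~53% of RAM.
static bool should_try_metal(size_t model_bytes, size_t kv_cache_bytes) {
    const double budget = 0.53 * static_cast<double>(system_ram_bytes());
    return static_cast<double>(model_bytes + kv_cache_bytes) <= budget;
}
```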
Adam Treat
57fa8644d6 Make spelling check happy. 2023-06-26 17:56:56 -04:00
Adam Treat
d0a3e82ffc Restore feature I accidentally erased in modellist update. 2023-06-26 17:50:45 -04:00
Aaron Miller
b19a3e5b2c add requiredMem method to llmodel impls
most of these can just shortcut out of the model loading logic; llama is a bit worse to deal with because we submodule it, so I have to at least parse the hparams, and then I just use the size on disk as an estimate for the mem size (which seems reasonable since we mmap() the llama files anyway)
2023-06-26 18:27:58 -03:00
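A sketch of the estimation idea described above, assuming only that the weights are mmap()ed so the on-disk size approximates their resident footprint (function names are illustrative):

```cpp
#include <cstddef>
#include <filesystem>
#include <string>

// Estimate memory needed to load a llama model: file size (mmap-backed
// weights) plus an estimate for the kv cache derived from the hparams.
static size_t required_mem_estimate(const std::string &model_path,
                                    size_t kv_cache_bytes) {
    std::error_code ec;
    const auto file_bytes = std::filesystem::file_size(model_path, ec);
    if (ec)
        return 0;  // unknown; caller can fall back to attempting the load
    return static_cast<size_t>(file_bytes) + kv_cache_bytes;
}
```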
Adam Treat
dead954134 Fix save chats setting. 2023-06-26 16:43:37 -04:00
Adam Treat
26c9193227 Sigh. Windows. 2023-06-26 16:34:35 -04:00
Adam Treat
5deec2afe1 Change this back now that it is ready. 2023-06-26 16:21:09 -04:00
Adam Treat
676248fe8f Update the language. 2023-06-26 14:14:49 -04:00
Adam Treat
ef92492d8c Add better warnings and links. 2023-06-26 14:14:49 -04:00
Adam Treat
71c972f8fa Provide a more stark warning for localdocs and add more size to dialogs. 2023-06-26 14:14:49 -04:00