Adam Treat
dd3e0f8679
Update README.md
2023-05-09 23:04:54 -04:00
Adam Treat
64aff8a35b
Rename to build_and_run.md
2023-05-09 23:02:41 -04:00
Adam Treat
b185b9da0d
Update dev_setup.md
2023-05-09 23:00:50 -04:00
Adam Treat
cf61b8259f
Update dev_setup.md
2023-05-09 22:36:02 -04:00
Adam Treat
daa0801555
Update dev_setup.md
2023-05-09 22:00:42 -04:00
Adam Treat
1a40be68fd
Update dev_setup.md
2023-05-09 21:59:11 -04:00
Adam Treat
999ed1b560
Add a page to fill in for setting up a dev environment.
2023-05-09 21:38:24 -04:00
Adam Treat
80bd55590f
Shorten text.
2023-05-09 20:54:16 -04:00
Adam Treat
42926a484f
Couple of bugfixes.
2023-05-09 19:15:18 -04:00
Adam Treat
2206fa7f8c
Provide a user default model setting and honor it.
2023-05-09 17:10:47 -04:00
Adam Treat
069c243f1a
Add MPT info to the download list and fix it so that isDefault will work even if the required version isn't there.
2023-05-09 12:09:49 -04:00
Adam Treat
a13dcfb13b
Move this script and rename.
2023-05-09 11:48:32 -04:00
Adam Treat
9c008fb677
Simplify.
2023-05-09 11:46:33 -04:00
Adam Treat
53a39b9ecf
Don't keep this in memory when it is not needed.
2023-05-08 21:05:50 -04:00
Adam Treat
5f372bd881
Gracefully handle when we have a previous chat whose model has gone away.
2023-05-08 20:51:03 -04:00
Adam Treat
8b80345c98
Copy pasta.
2023-05-08 19:10:22 -04:00
Adam Treat
af4a67c109
Fix for special im_end token in mpt-7b-chat model.
2023-05-08 18:57:40 -04:00
Adam Treat
d3ec333314
Allow these to load for gptj too.
2023-05-08 18:31:20 -04:00
Aaron Miller
5002614b20
mpt: allow q4_2 quantized models to load
2023-05-08 18:23:36 -04:00
Aaron Miller
832720dd27
mpt tokenizer: better special token handling
Closer to the behavior of huggingface `tokenizers`: do not attempt to handle additional tokens as if they were part of the original vocabulary, since that cannot prevent them from being split into smaller chunks; instead, handle added tokens *before* the regular tokenizing pass. Note that this is still necessary even with a "proper" tokenizer implementation.
2023-05-08 18:23:36 -04:00
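The commit body above describes the ordering change: match added/special tokens first and emit them whole, running the regular vocabulary pass only on the text between them. A minimal sketch of that idea, assuming a stub byte-level tokenizer and placeholder token ids (none of the identifiers below are the project's actual code):

```cpp
// Sketch: handle added tokens *before* the regular tokenizing pass so they
// can never be split into smaller chunks. Stub tokenizer and placeholder ids.
#include <iostream>
#include <map>
#include <string>
#include <vector>

using token_id = int;

// Stand-in for the real vocabulary/BPE pass; here: one token per byte.
static std::vector<token_id> tokenize_regular(const std::string &text) {
    std::vector<token_id> ids;
    for (unsigned char c : text) ids.push_back(c);
    return ids;
}

// Added tokens handled before the regular pass (ids are placeholders).
static const std::map<std::string, token_id> added_tokens = {
    {"<|im_start|>", 50277},
    {"<|im_end|>",   50278},
};

static std::vector<token_id> tokenize(const std::string &text) {
    std::vector<token_id> out;
    size_t pos = 0;
    while (pos < text.size()) {
        // Find the earliest added token occurring at or after `pos`.
        size_t best_pos = std::string::npos, best_len = 0;
        token_id best_id = -1;
        for (const auto &[tok, id] : added_tokens) {
            const size_t found = text.find(tok, pos);
            if (found != std::string::npos && found < best_pos) {
                best_pos = found; best_len = tok.size(); best_id = id;
            }
        }
        if (best_pos == std::string::npos) {
            // No further added tokens: regular-tokenize the remainder.
            const auto tail = tokenize_regular(text.substr(pos));
            out.insert(out.end(), tail.begin(), tail.end());
            break;
        }
        // Regular-tokenize the span before the added token, then emit it whole.
        if (best_pos > pos) {
            const auto seg = tokenize_regular(text.substr(pos, best_pos - pos));
            out.insert(out.end(), seg.begin(), seg.end());
        }
        out.push_back(best_id);
        pos = best_pos + best_len;
    }
    return out;
}

int main() {
    for (token_id id : tokenize("hi<|im_end|>"))
        std::cout << id << ' ';
    std::cout << '\n';
}
```

Because the added-token scan runs first, a string like "<|im_end|>" reaches the output as a single id instead of being broken apart by the byte/BPE pass.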
Adam Treat
8c4b8f215f
Fix gptj to have lower memory requirements for kv cache and add versioning to the internal state to smoothly handle such a fix in the future.
2023-05-08 17:23:02 -04:00
Adam Treat
ccbd16cf18
Fix the version.
2023-05-08 16:50:21 -04:00
Adam Treat
a549871220
Remove, as upstream has removed it.
2023-05-08 15:09:23 -04:00
Adam Treat
dfe85386b5
This shouldn't have snuck in.
2023-05-08 15:09:23 -04:00
Adam Treat
992e553cfa
Update to the ALiBi version that Zach made.
2023-05-08 12:27:01 -04:00
Adam Treat
98aedd2173
Match Helly's impl of kv cache.
2023-05-08 12:21:30 -04:00
Adam Treat
eb77d5157b
Use F16 for kv cache on mpt.
2023-05-08 12:21:30 -04:00
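A rough illustration of the F16 kv-cache change described above: allocate the key/value cache tensors as GGML_TYPE_F16 rather than GGML_TYPE_F32, halving their footprint. The hyperparameter values below are placeholders, not read from an actual MPT model file:

```cpp
// Sketch: F16 key/value cache allocation with ggml (2 bytes per entry vs. 4).
#include <cstdio>
#include "ggml.h"

int main() {
    const int n_embd = 4096, n_layer = 32, n_ctx = 2048;   // placeholder hparams
    const int64_t n_elements = (int64_t) n_embd * n_layer * n_ctx;

    struct ggml_init_params params = {
        /*.mem_size   =*/ 2 * n_elements * sizeof(ggml_fp16_t) + 1024 * 1024,
        /*.mem_buffer =*/ nullptr,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context *ctx = ggml_init(params);

    // F16 entries take 2 bytes each instead of 4, so the cache shrinks by half.
    struct ggml_tensor *memory_k = ggml_new_tensor_1d(ctx, GGML_TYPE_F16, n_elements);
    struct ggml_tensor *memory_v = ggml_new_tensor_1d(ctx, GGML_TYPE_F16, n_elements);

    printf("kv cache: %.1f MiB\n",
           (ggml_nbytes(memory_k) + ggml_nbytes(memory_v)) / (1024.0 * 1024.0));

    ggml_free(ctx);
    return 0;
}
```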
Adam Treat
dc559c1575
Fix for special tokens.
2023-05-08 12:21:30 -04:00
Adam Treat
b6886c0e31
Fix up mpt.
2023-05-08 12:21:30 -04:00
Zach Nussbaum
61e2aabadb
fix: helly changes
2023-05-08 12:21:30 -04:00
Zach Nussbaum
d30be81506
fix: model loading
2023-05-08 12:21:30 -04:00
Zach Nussbaum
f732ba2d56
fix: convert script working
2023-05-08 12:21:30 -04:00
Zach Nussbaum
6a56bcaf06
feat: load model
2023-05-08 12:21:30 -04:00
Zach Nussbaum
58069dc8b9
chore: import for mpt
2023-05-08 12:21:30 -04:00
Zach Nussbaum
03bde18e49
feat: mpt convert from hf to ggml
2023-05-08 12:21:30 -04:00
Zach Nussbaum
2f6ecbe798
feat: build works + tokenizer
2023-05-08 12:21:30 -04:00
Zach Nussbaum
525b703984
feat: add ln 2, rename vars
2023-05-08 12:21:30 -04:00
Zach Nussbaum
aef524b460
feat: mpt wip
2023-05-08 12:21:30 -04:00
Adam Treat
159053be5a
Scaffolding for the mpt <-> ggml project.
2023-05-08 12:21:30 -04:00
Adam Treat
40b976436a
Only generate three words max.
2023-05-08 12:21:30 -04:00
Adam Treat
49a6a6ed65
Restore defaults for repeat penalty too.
2023-05-08 12:21:30 -04:00
Adam Treat
c054efa6ac
Send info on how many are running into this error.
2023-05-08 08:31:35 -04:00
Adam Treat
6d943917f1
Fail early/gracefully if incompatible hardware is detected, and default to universal builds on Mac.
2023-05-08 08:23:00 -04:00
Adam Treat
3c30310539
Convert the old format properly.
2023-05-08 05:53:16 -04:00
Adam Treat
7b66cb7119
Add debug for chatllm model loading and fix order of getting rid of the dummy chat when no models are restored.
2023-05-07 14:40:02 -04:00
Adam Treat
9bd5609ba0
Deserialize one at a time and don't block the GUI until all of them are done.
2023-05-07 09:20:09 -04:00
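A hedged sketch of the one-at-a-time restore idea in Qt: queue one chat deserialization per event-loop pass so the interface stays responsive while saved chats load. The ChatRestorer class and restoreOne helper are illustrative names, not the chat client's actual types:

```cpp
// Sketch: restore saved chats one per event-loop pass instead of all up front.
#include <QCoreApplication>
#include <QDebug>
#include <QStringList>
#include <QTimer>

class ChatRestorer : public QObject {
public:
    explicit ChatRestorer(QStringList files, QObject *parent = nullptr)
        : QObject(parent), m_pending(std::move(files)) {}

    void start() { scheduleNext(); }

private:
    void scheduleNext() {
        if (m_pending.isEmpty()) {
            QCoreApplication::quit();   // all chats restored
            return;
        }
        // Defer each file to the event loop so pending GUI work runs in between.
        QTimer::singleShot(0, this, [this] {
            restoreOne(m_pending.takeFirst());
            scheduleNext();
        });
    }

    void restoreOne(const QString &file) {
        qDebug() << "restoring" << file;   // stand-in for per-file deserialization
    }

    QStringList m_pending;
};

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);
    ChatRestorer restorer({"chat-1.chat", "chat-2.chat"});
    restorer.start();
    return app.exec();
}
```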
Adam Treat
86da175e1c
Use the last LTS for this.
2023-05-07 06:39:32 -04:00
Adam Treat
ab13148430
The GUI should come up immediately and not wait on deserializing from disk.
2023-05-06 20:01:14 -04:00
Adam Treat
eb7b61a76d
Move the location of the chat files to the model download directory and add a magic+version.
2023-05-06 18:51:49 -04:00
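A minimal Qt-style sketch of the magic+version header this commit mentions, so older or foreign chat files can be detected before deserialization. The constants and function names are placeholders, not the values or APIs used in the repository:

```cpp
// Sketch: write/check a magic number and format version at the top of a chat file.
#include <QDataStream>
#include <QFile>
#include <QString>

static const quint32 CHAT_FILE_MAGIC   = 0xAABBCCDD; // placeholder magic
static const qint32  CHAT_FILE_VERSION = 1;          // placeholder version

bool saveChat(const QString &path /*, const Chat &chat */) {
    QFile file(path);
    if (!file.open(QIODevice::WriteOnly))
        return false;
    QDataStream out(&file);
    out << CHAT_FILE_MAGIC << CHAT_FILE_VERSION;
    // ... serialize the chat payload after the header ...
    return true;
}

bool loadChat(const QString &path /*, Chat *chat */) {
    QFile file(path);
    if (!file.open(QIODevice::ReadOnly))
        return false;
    QDataStream in(&file);
    quint32 magic; qint32 version;
    in >> magic >> version;
    if (magic != CHAT_FILE_MAGIC || version > CHAT_FILE_VERSION)
        return false; // unknown or newer format: refuse to load
    // ... deserialize the chat payload, branching on `version` if needed ...
    return true;
}
```

Versioning the header up front is what lets later format changes (like the kv-cache fix above) be handled smoothly instead of silently misreading old files.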
Aaron Miller
7a8f437f8f
add name to LICENSE
2023-05-06 13:11:39 -04:00