Richard Guo
0534ab59ec
some cleanup and job-specific names for CircleCI
2023-05-10 16:40:24 -04:00
Richard Guo
23e748d1c2
clean up the janky Windows wheel build
2023-05-10 15:58:27 -04:00
Richard Guo
5e873a060c
why is there no way of stopping pipelines on branches
2023-05-10 14:11:13 -04:00
Richard Guo
eb2b19bfaf
fixed paths for the C lib
2023-05-10 14:07:56 -04:00
Richard Guo
c056c6f23d
filter jobs on main branch only
2023-05-10 14:03:13 -04:00
Richard Guo
48c5ab10b9
refactor CircleCI config
2023-05-10 13:57:54 -04:00
Richard Guo
f48eb1a0d7
updated README with new paths
2023-05-10 13:48:36 -04:00
Richard Guo
f1ec61fd17
updated path
2023-05-10 13:41:19 -04:00
Richard Guo
62031c22d3
transfer python bindings code
2023-05-10 13:38:32 -04:00
AT
75591061fd
Update README.md
2023-05-10 12:18:45 -04:00
AT
e66ebb0750
Update README.md
2023-05-10 12:17:57 -04:00
AT
d1a940ea3b
Update README.md
2023-05-10 12:10:33 -04:00
Andriy Mulyar
99b509b3f5
Create old-README.md
2023-05-10 12:06:43 -04:00
Andriy Mulyar
be367019d6
Update README.md
2023-05-10 12:05:42 -04:00
Adam Treat
8e7b96bd92
Move the llmodel C API to a new top-level directory and version it.
2023-05-10 11:46:40 -04:00
Andriy Mulyar
658248205a
Merge pull request #520 from nomic-ai/monorepo
...
Make Monorepos Cool Again 2023
2023-05-10 11:16:36 -04:00
Adam Treat
6f8513bfc2
Fix ignore for build dirs.
2023-05-10 10:51:47 -04:00
Adam Treat
75f7145814
Merge commit gpt4all-chat into monorepo
2023-05-10 10:28:36 -04:00
Adam Treat
3bc9fee16e
Moving everything to subdir for monorepo merge.
2023-05-10 10:26:55 -04:00
AT
41fdd24664
Update README.md
...
Signed-off-by: AT <manyoso@users.noreply.github.com>
2023-05-10 09:09:29 -04:00
AT
707a8e1a0a
Update README.md
...
Signed-off-by: AT <manyoso@users.noreply.github.com>
2023-05-10 09:09:04 -04:00
Adam Treat
0978c260c4
Bump the version to 2.4.2
2023-05-10 09:05:39 -04:00
AT
6ae1e8a842
Update issue templates
2023-05-09 23:57:06 -04:00
Adam Treat
27599d4a0a
Fix some usage events.
2023-05-09 23:43:16 -04:00
Adam Treat
c8e9259bd1
Default to true for compat hardware.
2023-05-09 23:17:36 -04:00
AT
dd71513309
Update README.md
2023-05-09 23:10:53 -04:00
AT
148225372d
Update README.md
2023-05-09 23:10:06 -04:00
AT
ec81db43df
Update README.md
2023-05-09 23:04:54 -04:00
Adam Treat
dcea2f3491
Rename to build_and_run.md
2023-05-09 23:02:41 -04:00
AT
ac665f3739
Update dev_setup.md
2023-05-09 23:00:50 -04:00
AT
bb2ac26459
Update dev_setup.md
2023-05-09 22:36:02 -04:00
AT
37ea0f6c29
Update dev_setup.md
2023-05-09 22:00:42 -04:00
AT
f14458db44
Update dev_setup.md
2023-05-09 21:59:11 -04:00
Adam Treat
3b802fb0f5
Add a page to fill in for setting up a dev environment.
2023-05-09 21:38:24 -04:00
Adam Treat
dfe641222b
Shorten text.
2023-05-09 20:54:16 -04:00
Adam Treat
09b5f87b8d
Couple of bugfixes.
2023-05-09 19:15:18 -04:00
Adam Treat
f3c81c42a7
Provide a user default model setting and honor it.
2023-05-09 17:10:47 -04:00
Adam Treat
ff257eb52c
Add MPT info to the download list and fix it so that isDefault will work even if the required version isn't there.
2023-05-09 12:09:49 -04:00
Adam Treat
5d95085cbe
Move this script and rename.
2023-05-09 11:48:32 -04:00
Adam Treat
8eeca20fd7
Simplify.
2023-05-09 11:46:33 -04:00
Adam Treat
8d295550eb
Don't keep this in memory when it is not needed.
2023-05-08 21:05:50 -04:00
Adam Treat
7094fd0788
Gracefully handle a previous chat whose model has since gone away.
2023-05-08 20:51:03 -04:00
Adam Treat
ad82aaebb1
Copy pasta.
2023-05-08 19:10:22 -04:00
Adam Treat
9c66308922
Fix for special im_end token in mpt-7b-chat model.
2023-05-08 18:57:40 -04:00
Adam Treat
a4bec78ec6
Allow these to load for gptj too.
2023-05-08 18:31:20 -04:00
Aaron Miller
821b28a4fa
mpt: allow q4_2 quantized models to load
2023-05-08 18:23:36 -04:00
Aaron Miller
49fc7b315a
mpt tokenizer: better special token handling
...
Closer to the behavior of Hugging Face `tokenizers`: do not attempt to handle
additional tokens as if they were part of the original vocabulary, since that
cannot prevent them from being split into smaller chunks. Instead, handle
added tokens *before* the regular tokenizing pass.
Note that this is still necessary even with a "proper" tokenizer implementation.
2023-05-08 18:23:36 -04:00
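A minimal sketch of the approach the commit body describes, under stated assumptions: the function name, `token_id` alias, and the caller-supplied `regular_tokenize` callback are hypothetical, not the repository's actual mpt tokenizer code. The idea is to scan the raw string for added/special tokens first and run the ordinary vocabulary pass only on the plain-text segments between them, so an added token can never be split into sub-word pieces.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Hypothetical names for illustration only; the real mpt tokenizer in this
// repository uses different identifiers and data structures.
using token_id = int;

std::vector<token_id> tokenize_with_added_tokens(
        const std::string &text,
        const std::map<std::string, token_id> &added_tokens, // e.g. {"<|im_end|>", id}
        const std::function<std::vector<token_id>(const std::string &)> &regular_tokenize)
{
    std::vector<token_id> out;
    size_t pos = 0;
    while (pos < text.size()) {
        // Find the earliest occurrence of any added token at or after `pos`.
        size_t best_pos = std::string::npos;
        auto best = added_tokens.end();
        for (auto it = added_tokens.begin(); it != added_tokens.end(); ++it) {
            size_t p = text.find(it->first, pos);
            if (p != std::string::npos && p < best_pos) {
                best_pos = p;
                best = it;
            }
        }
        if (best == added_tokens.end()) {
            // No added token left: tokenize the remainder with the regular pass.
            auto tail = regular_tokenize(text.substr(pos));
            out.insert(out.end(), tail.begin(), tail.end());
            break;
        }
        // Tokenize the plain text before the added token, then emit the added
        // token as a single id so it can never be split into smaller chunks.
        if (best_pos > pos) {
            auto seg = regular_tokenize(text.substr(pos, best_pos - pos));
            out.insert(out.end(), seg.begin(), seg.end());
        }
        out.push_back(best->second);
        pos = best_pos + best->first.size();
    }
    return out;
}
```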
Adam Treat
9da4fac023
Fix gptj to have lower memory requirements for kv cache and add versioning to the internal state to smoothly handle such a fix in the future.
2023-05-08 17:23:02 -04:00
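A rough illustration of the state-versioning idea mentioned in the commit above, assuming hypothetical type, field, and function names rather than the repository's actual serialization code: writing a version tag ahead of the serialized internal state lets a newer build detect state saved by an older layout (such as the larger pre-fix kv cache) and migrate or discard it instead of misreading it.

```cpp
#include <cstdint>
#include <istream>
#include <ostream>

// Illustrative only: a version tag written before the serialized internal state.
constexpr uint32_t kStateVersion = 2; // bumped when the internal layout changes

struct InternalState {
    // ... model / kv-cache fields elided in this sketch ...
};

void save_state(std::ostream &out, const InternalState &state) {
    out.write(reinterpret_cast<const char *>(&kStateVersion), sizeof kStateVersion);
    // ... serialize the fields of `state` ...
}

bool load_state(std::istream &in, InternalState &state) {
    uint32_t version = 0;
    in.read(reinterpret_cast<char *>(&version), sizeof version);
    if (!in || version > kStateVersion)
        return false; // unknown or newer format: refuse rather than misread it
    if (version < kStateVersion) {
        // Older layout: migrate or rebuild the state here instead of failing hard.
    }
    // ... deserialize the fields of `state` according to `version` ...
    return true;
}
```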
Adam Treat
c7f5280f9f
Fix the version.
2023-05-08 16:50:21 -04:00
Adam Treat
be9e748abe
Remove, as upstream has removed it.
2023-05-08 15:09:23 -04:00