Commit Graph

13 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Adam Treat | 71b308e914 | Add llama.cpp support for loading llama based models in the gui. We now support loading both gptj derived models and llama derived models. | 2023-04-20 06:19:09 -04:00 |
| Aaron Miller | 00cb5fe2a5 | Add thread count setting | 2023-04-19 08:33:13 -04:00 |
| Adam Treat | f19abd6a18 | Check for ###Prompt: or ###Response and stop generating and modify the default template a little bit. | 2023-04-16 11:25:48 -04:00 |
| Adam Treat | 1c5dd6710d | When regenerating erase the previous response and prompt from the context. | 2023-04-15 09:10:27 -04:00 |
| Adam Treat | a06fd8a487 | Provide a busy indicator if we're processing a long prompt and make the stop button work in the middle of processing a long prompt as well. | 2023-04-12 15:31:32 -04:00 |
| Adam Treat | b088929df4 | Fixes for linux and macosx. | 2023-04-10 16:33:14 -04:00 |
| Adam Treat | f4ab48b0fa | Comment out the list of chat features until it is ready. | 2023-04-09 20:23:52 -04:00 |
| Adam Treat | e1677cb353 | Working efficient chat context. | 2023-04-09 14:03:53 -04:00 |
| Adam Treat | c763d4737d | Prelim support for past context. | 2023-04-09 13:01:29 -04:00 |
| Adam Treat | b0817a2582 | Time how long it takes to process the prompt. | 2023-04-09 10:24:47 -04:00 |
| Adam Treat | 16d2160d40 | Don't display the endoftext token. | 2023-04-09 01:22:12 -04:00 |
| Adam Treat | c430ed12c6 | Don't repeat the prompt in the response. | 2023-04-09 01:11:52 -04:00 |
| Adam Treat | ff2fdecce1 | Initial commit. | 2023-04-08 23:28:39 -04:00 |