Commit Graph

15 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Adam Treat | 4d26f5daeb | Silence a warning now that we're forked. | 2023-04-20 17:27:06 -04:00 |
| Adam Treat | bb78ee0025 | Back out the prompt/response finding in gptj since it doesn't seem to help. Guard against reaching the end of the context window which we don't handle gracefully except for avoiding a crash. | 2023-04-20 17:15:46 -04:00 |
| Adam Treat | 55084333a9 | Add llama.cpp support for loading llama based models in the gui. We now support loading both gptj derived models and llama derived models. | 2023-04-20 06:19:09 -04:00 |
| Aaron Miller | f1b87d0b56 | Add thread count setting | 2023-04-19 08:33:13 -04:00 |
| Adam Treat | 185dc2460e | Check for ###Prompt: or ###Response and stop generating and modify the default template a little bit. | 2023-04-16 11:25:48 -04:00 |
| Adam Treat | f8005cff45 | When regenerating erase the previous response and prompt from the context. | 2023-04-15 09:10:27 -04:00 |
| Adam Treat | c183702aa4 | Provide a busy indicator if we're processing a long prompt and make the stop button work in the middle of processing a long prompt as well. | 2023-04-12 15:31:32 -04:00 |
| Adam Treat | ae91bfa48a | Fixes for linux and macosx. | 2023-04-10 16:33:14 -04:00 |
| Adam Treat | 47d3fd1621 | Comment out the list of chat features until it is ready. | 2023-04-09 20:23:52 -04:00 |
| Adam Treat | b8f8a37d87 | Working efficient chat context. | 2023-04-09 14:03:53 -04:00 |
| Adam Treat | 6ce4089c4f | Prelim support for past context. | 2023-04-09 13:01:29 -04:00 |
| Adam Treat | 596592ce12 | Time how long it takes to process the prompt. | 2023-04-09 10:24:47 -04:00 |
| Adam Treat | bd5e279621 | Don't display the endoftext token. | 2023-04-09 01:22:12 -04:00 |
| Adam Treat | 02e13737f3 | Don't repeat the prompt in the response. | 2023-04-09 01:11:52 -04:00 |
| Adam Treat | ff2fdecce1 | Initial commit. | 2023-04-08 23:28:39 -04:00 |