Commit Graph

16 Commits (8b1ddabe3ec01d9638dd2377e09cfbc0e45b75d4)

Author SHA1 Message Date
Adam Treat 8b1ddabe3e Implement repeat penalty for both llama and gptj in gui. 1 year ago
Adam Treat ad210589d3 Silence a warning now that we're forked. 1 year ago
Adam Treat bfee4994f3 Back out the prompt/response finding in gptj since it doesn't seem to help. Guard against reaching the end of the context window which we don't handle gracefully except for avoiding a crash. 1 year ago
Adam Treat 71b308e914 Add llama.cpp support for loading llama based models in the gui. We now support loading both gptj derived models and llama derived models. 1 year ago
Aaron Miller 00cb5fe2a5 Add thread count setting 1 year ago
Adam Treat f19abd6a18 Check for ###Prompt: or ###Response and stop generating and modify the default template a little bit. 1 year ago
Adam Treat 1c5dd6710d When regenerating erase the previous response and prompt from the context. 1 year ago
Adam Treat a06fd8a487 Provide a busy indicator if we're processing a long prompt and make the stop button work in the middle of processing a long prompt as well. 1 year ago
Adam Treat b088929df4 Fixes for linux and macosx. 1 year ago
Adam Treat f4ab48b0fa Comment out the list of chat features until it is ready. 1 year ago
Adam Treat e1677cb353 Working efficient chat context. 1 year ago
Adam Treat c763d4737d Prelim support for past context. 1 year ago
Adam Treat b0817a2582 Time how long it takes to process the prompt. 1 year ago
Adam Treat 16d2160d40 Don't display the endoftext token. 1 year ago
Adam Treat c430ed12c6 Don't repeat the prompt in the response. 1 year ago
Adam Treat ff2fdecce1 Initial commit. 1 year ago