Commit Graph

15 Commits (93f54742b9c34b0c89dc59098e0bed7af6decf93)

Author SHA1 Message Date
Adam Treat 4d26f5daeb Silence a warning now that we're forked. 1 year ago
Adam Treat bb78ee0025 Back out the prompt/response finding in gptj since it doesn't seem to help. Guard against reaching the end of the context window which we don't handle gracefully except for avoiding a crash. 1 year ago
Adam Treat 55084333a9 Add llama.cpp support for loading llama based models in the gui. We now support loading both gptj derived models and llama derived models. 1 year ago
Aaron Miller f1b87d0b56 Add thread count setting 1 year ago
Adam Treat 185dc2460e Check for ###Prompt: or ###Response and stop generating and modify the default template a little bit. 1 year ago
Adam Treat f8005cff45 When regenerating erase the previous response and prompt from the context. 1 year ago
Adam Treat c183702aa4 Provide a busy indicator if we're processing a long prompt and make the stop button work in the middle of processing a long prompt as well. 1 year ago
Adam Treat ae91bfa48a Fixes for linux and macosx. 1 year ago
Adam Treat 47d3fd1621 Comment out the list of chat features until it is ready. 1 year ago
Adam Treat b8f8a37d87 Working efficient chat context. 1 year ago
Adam Treat 6ce4089c4f Prelim support for past context. 1 year ago
Adam Treat 596592ce12 Time how long it takes to process the prompt. 1 year ago
Adam Treat bd5e279621 Don't display the endoftext token. 1 year ago
Adam Treat 02e13737f3 Don't repeat the prompt in the response. 1 year ago
Adam Treat ff2fdecce1 Initial commit. 1 year ago