sigoden
930d8c12ab
feat: new model claude:claude-3-haiku-20240307 (#353)
7 months ago
sigoden
669d6ffde2
chore: update readme
7 months ago
sigoden
601288029d
chore: update description and readme
7 months ago
tequ
fc04d56577
chore: update README.md (#350)
7 months ago
sigoden
aed243c3aa
feat: allow use of temporary role in a session (#348)
7 months ago
sigoden
8f14498969
fix: erratic behaviour when using temp role in a session (#347)
7 months ago
sigoden
c3677e3380
chore: release v0.14.0 (#341)
7 months ago
sigoden
2404a619e4
chore: update deps
7 months ago
sigoden
d5b10ea0c1
chore: fix typos
7 months ago
sigoden
f82f577b6d
feat: add nushell integration (#344)
7 months ago
sigoden
d6e12b1f56
chore: update readme
7 months ago
sigoden
f65c18886b
chore: update description
7 months ago
sigoden
32ce76e4d1
chore: update deps
7 months ago
sigoden
46cb320aa5
chore: spell check
7 months ago
sigoden
b82ae88a15
chore: update project description and README
7 months ago
sigoden
9e4f21546c
chore: update LICENSE-MIT
7 months ago
sigoden
20c78d6f15
feat: add new ernie models (#340)
7 months ago
sigoden
8e5d4e55b1
refactor: rename model's `max_tokens` to `max_input_tokens` (#339)
BREAKING CHANGE: rename model's `max_tokens` to `max_input_tokens`
7 months ago
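The rename in #339 is flagged as a breaking change, so existing configuration files need updating. A minimal sketch of the affected model entry, assuming a typical aichat `config.yaml` clients section (client type, endpoint, model name, and token count are all illustrative):

```yaml
clients:
  - type: localai
    api_base: http://localhost:8080/v1   # illustrative endpoint
    models:
      - name: llama2                     # illustrative model name
        max_input_tokens: 8192           # previously `max_tokens` (#339)
```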
sigoden
be4e5e569a
feat: support claude-3 (#336)
7 months ago
sigoden
3f693ea060
feat: compress session automatically (#333)
* feat: compress session automatically
* non-block
* update field description
* set compress_threshold
* update session::clear_messages
* able to override session compress_threshold
* enable compress_threshold by default
* make session compress_threshold optional
7 months ago
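Automatic compression (#333) is driven by a token threshold which, per the bullets above, is enabled by default and can be overridden per session. A hypothetical sketch of the global setting (key placement follows aichat's `config.yaml`; the value is illustrative):

```yaml
# config.yaml
compress_threshold: 2000   # compress session messages once tokens exceed this;
                           # a session may override it with its own value
```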
sigoden
9e15a3409e
feat: reduce gemini blocking of unsafe content with safetySettings (#334)
7 months ago
sigoden
e443905dc1
feat: add `.clear messages` to clear session messages (#332)
7 months ago
sigoden
7e32787dba
chore: improve code quality on ReplCommand/State
7 months ago
sigoden
8421f23b45
feat: allow overriding execute/code role (#331)
7 months ago
sigoden
b2f86f2899
refactor: auto_copy works when executing command (#328)
7 months ago
sigoden
3c16aff591
feat: add `-c/--code` to generate only code (#327)
7 months ago
sigoden
d275c33632
chore: move integration scripts
7 months ago
sigoden
6216b40f8c
chore: update model_request issue_template and readme
7 months ago
sigoden
bc3cbce2f4
refactor: enhanced shell support (#326)
7 months ago
sigoden
bbb13d8227
chore: improve openai api error handling
7 months ago
sigoden
7633940e03
chore: update order of openai models
7 months ago
sigoden
75fe0b9205
feat: support mistral (#324)
7 months ago
sigoden
c538533014
feat: support claude (#278)
7 months ago
sigoden
c73a0acbbc
chore: update readme
7 months ago
sigoden
05dfdd0b37
chore: update readme
7 months ago
sigoden
ff0ec15e06
feat: add shell integration (#323)
7 months ago
sigoden
aec1b707af
chore: release v0.13.0
7 months ago
sigoden
c773e8d2e1
refactor: improve saving messages (#322)
7 months ago
sigoden
a3fbf71c1c
chore: improve execute commands prompt
7 months ago
sigoden
21d1be5bed
refactor: improve prompt error handling (#319)
7 months ago
sigoden
16b7ac071f
chore: update deps and readme
7 months ago
sigoden
7638412128
feat: support `-e/--execute` to execute shell command (#318)
7 months ago
sigoden
6c0204e696
refactor: change header of messages saved to markdown (#317)
7 months ago
sigoden
373b34ef5c
feat: edit current prompt on $VISUAL/$EDITOR (#314)
8 months ago
sigoden
179a5f5749
refactor: update vertexai/gemini/ernie clients (#309)
8 months ago
sigoden
5e4210980d
feat: support vertexai (#308)
8 months ago
sigoden
3bf0c371e4
feat: update openai/qianwen/gemini models (#306)
8 months ago
Joseph Goulden
25b27c148a
fix: do not attempt to deserialize zero byte chunks in ollama stream (#303)
8 months ago
Kelvie Wong
176ff6f83e
feat: add `extra_fields` to models of localai/ollama clients (#298)
* Add an `extra_fields` config to localai models
Because there are so many local AIs out there with a bunch of custom
parameters, this allows users to send extra parameters to a local LLM
runner (e.g. `instruction_template: Alpaca`) so that Mixtral can take a
system prompt.
* support ollama
---------
Co-authored-by: sigoden <sigoden@gmail.com>
8 months ago
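The `extra_fields` map from #298 passes arbitrary extra parameters through to the local runner. A sketch assuming a standard aichat clients section (the `instruction_template: Alpaca` example comes from the commit message; the endpoint and model name are illustrative):

```yaml
clients:
  - type: ollama
    api_base: http://localhost:11434    # illustrative endpoint
    models:
      - name: mixtral                   # illustrative model name
        extra_fields:
          instruction_template: Alpaca  # forwarded as-is to the runner
```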
sigoden
a30c3cc4c1
feat: add openai.api_base config (#302)
8 months ago